Full Text Available Leishmaniasis is the second largest parasitic killer disease, caused by the protozoan parasite Leishmania and transmitted by the bite of sand flies. It is endemic in eastern India, with 165.4 million people at risk under the current drug regimen. Three forms of leishmaniasis exist, of which the cutaneous form, caused by Leishmania major, is the most common. Trypanothione Reductase (TryR), a flavoprotein oxidoreductase unique to the thiol redox system, is considered a potential target for chemotherapy against trypanosomatid infections. It is involved in the NADPH-dependent reduction of trypanothione disulphide to trypanothione. Similarly, Tryparedoxin Peroxidase (Txnpx) detoxifies peroxides, an event pivotal for the survival of Leishmania in two disparate biological environments. Fe-S clusters play a major role in regulating redox balance. To check the closeness of these proteins to their human homologs, we carried out molecular clock analysis followed by molecular modeling of the 3D structure of this protein, enabling us to design and test novel drug-like molecules. Molecular clock analysis suggests that the human homologs of TryR (i.e., Glutathione Reductase) and of Txnpx are highly diverged in the phylogenetic tree; thus, the parasite enzymes serve as good candidates for chemotherapy of leishmaniasis. Furthermore, we performed homology modeling of TryR using a template of the same protein from Leishmania infantum (PDB ID: 2JK6). This was done using Modeller 9.18 and the resultant models were validated. To inhibit this target, molecular docking was done with various screened inhibitors, among which Taxifolin was found to act as a common inhibitor of both TryR and Txnpx. We also constructed the protein-protein interaction network for the proteins involved in redox metabolism from various interaction databases, and the network was statistically analysed. Keywords: Trypanothione Reductase, Tryparedoxin Peroxidase, L. major, Homology modeling, Molecular clock analysis
Betania Barros Cota
Full Text Available The fungus Lentinus strigosus (Pegler, 1983) (Polyporaceae, basidiomycete) was selected in a screen for inhibitory activity on Trypanosoma cruzi trypanothione reductase (TR). The crude extract of L. strigosus was able to completely inhibit TR at 20 µg/ml. Two triquinane sesquiterpenoids (dihydrohypnophilin and hypnophilin), in addition to two panepoxydol derivatives (neopanepoxydol and panepoxydone), were isolated using a bioassay-guided fractionation protocol. Hypnophilin and panepoxydone displayed IC50 values of 0.8 and 38.9 µM in the TR assay, respectively, while the other two compounds were inactive. The activity of hypnophilin was confirmed in a secondary assay with the intracellular amastigote forms of T. cruzi, in which it presented an IC50 value of 2.5 µM. Quantitative flow cytometry experiments demonstrated that hypnophilin at 4 µM also reduced the proliferation of human peripheral blood mononuclear cells (PBMC) stimulated with phytohemagglutinin, without any apparent interference with the viability of lymphocytes and monocytes. As the host immune response plays a pivotal role in the adverse events triggered by antigen release during treatment with trypanocidal drugs, the ability of hypnophilin to kill the intracellular forms of T. cruzi while modulating human PBMC proliferation suggests that this terpenoid may be a promising prototype for the development of new chemotherapeutic agents for Chagas disease.
Zani Carlos L
Full Text Available The enzyme trypanothione reductase is a recognised drug target in trypanosomatids and has been used in the search for new compounds with potential activity against diseases such as leishmaniasis, Chagas disease and African trypanosomiasis. 8-Methoxy-naphtho[2,3-b]thiophen-4,9-quinone was selected in a screening of natural and synthetic compounds using an in vitro assay with the recombinant enzyme from Trypanosoma cruzi. Its mode of inhibition fits a non-competitive model with respect to both the substrate (trypanothione) and the co-factor (NADPH), with Ki values of 5 and 3.6 µM, respectively. When tested against human glutathione reductase, this compound did not display any significant inhibition at 100 µM, indicating good selectivity for the parasite enzyme.
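The non-competitive model reported above implies that the inhibitor lowers the apparent Vmax without changing Km, so the fractional inhibition is the same at every substrate concentration. A minimal sketch of this rate law (Vmax and Km are illustrative placeholders; the Ki of 5 µM with respect to trypanothione is the value from the abstract):

```python
def noncompetitive_rate(s, i, vmax=1.0, km=25.0, ki=5.0):
    """Pure non-competitive inhibition: v = Vmax*S / ((Km + S) * (1 + I/Ki)).

    s and i share units with km and ki (here µM). vmax and km are
    made-up illustrative values; ki = 5 µM is taken from the abstract.
    """
    return vmax * s / ((km + s) * (1.0 + i / ki))

# Hallmark of the non-competitive model: at I = Ki the rate halves
# at *every* substrate concentration, low or saturating.
for s in (1.0, 25.0, 500.0):
    v0 = noncompetitive_rate(s, 0.0)
    vi = noncompetitive_rate(s, 5.0)
    print(f"S = {s:6.1f} µM  ->  v_i / v_0 = {vi / v0:.2f}")
```

A competitive inhibitor, by contrast, would show a v_i/v_0 ratio that climbs back toward 1 as S increases; this substrate-independence is what distinguishes the model fitted in the study.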
Beltran-Hortelano, Ivan; Perez-Silanes, Silvia; Galiano, Silvia
It has been over a century since Carlos Chagas discovered Trypanosoma cruzi (T. cruzi) as the causative agent of Chagas disease (CD), a neglected tropical disease with several socioeconomic, epidemiological and human health repercussions. Currently, there are only two commercialized drugs to treat CD in the acute phase, nifurtimox and benznidazole, both with several adverse side effects. Thus, new orally available and safe drugs for this parasitic infection are urgently required. One strategy of great importance in new drug discovery programmes is the search for molecules able to interfere with enzymes involved in T. cruzi metabolism. This review focuses on two of the most promising targets for the therapy of CD: trypanothione reductase (TR) and the iron-containing superoxide dismutase (Fe-SOD), which protect the parasite against oxidative damage by reactive oxygen species. A brief comparison of the function, mechanism of action and active sites of T. cruzi TR and Fe-SOD with their analogous human enzymes, glutathione reductase (GR) and the corresponding SODs, is presented. The review also summarizes the recent development and structure-activity relationships of novel compounds reported for their ability to selectively inhibit these targets, aiming to define molecular bases in the search for new effective treatments of CD.
Uliassi, Elisa; Fiorani, Giulia; Krauth-Siegel, R Luise; Bergamini, Christian; Fato, Romana; Bianchini, Giulia; Carlos Menéndez, J; Molina, Maria Teresa; López-Montero, Eulogio; Falchi, Federico; Cavalli, Andrea; Gul, Sheraz; Kuzikov, Maria; Ellinger, Bernhard; Witt, Gesa; Moraes, Carolina B; Freitas-Junior, Lucio H; Borsari, Chiara; Costi, Maria Paola; Bolognesi, Maria Laura
Crassiflorone is a natural product with anti-mycobacterial and anti-gonorrhoeal properties, isolated from the stem bark of the African ebony tree Diospyros crassiflora. We noticed that its pentacyclic core possesses structural resemblance to the quinone-coumarin hybrid 3, which we reported to exhibit a dual-targeted inhibitory profile towards Trypanosoma brucei glyceraldehyde-3-phosphate dehydrogenase (TbGAPDH) and Trypanosoma cruzi trypanothione reductase (TcTR). Following this idea, we synthesized a small library of crassiflorone derivatives 15-23 and investigated their potential as anti-trypanosomatid agents. Compound 19 is the only compound of the series showing a balanced dual profile at 10 μM (% inhibition TbGAPDH = 64% and % inhibition TcTR = 65%). In phenotypic assays, the most active compounds were 18 and 21, which at 5 μM inhibited T. brucei bloodstream-form growth by 29% and 38%, respectively. Notably, none of the newly synthesized compounds at 10 μM affected the viability or the mitochondrial status of human A549 and 786-O cell lines, respectively. However, further optimization addressing metabolic liabilities, including solubility as well as cytochrome P450 (CYP1A2, CYP2C9, CYP2C19, and CYP2D6) inhibition, is required before this class of natural product-derived compounds can be progressed further.
Argüelles, Alonso J; Cordell, Geoffrey A; Maruenda, Helena
Trypanothione reductase (TryR) is a key enzyme in the metabolism of Trypanosoma cruzi, the parasite responsible for Chagas disease. The available repertoire of TryR inhibitors relies heavily on synthetic substrates of limited structural diversity, and less on plant-derived natural products. In this study, a molecular docking procedure using a Lamarckian Genetic Algorithm was implemented to examine the protein-ligand binding interactions of strong in vitro inhibitors for which no X-ray data is available. In addition, a small, skeletally diverse, set of natural alkaloids was assessed computationally against T. cruzi TryR in search of new scaffolds for lead development. The preferential binding mode (low number of clusters, high cluster population), together with the deduced binding interactions were used to discriminate among the virtual inhibitors. This study confirms the prior in vitro data and proposes quebrachamine, cephalotaxine, cryptolepine, (22S,25S)-tomatidine, (22R,25S)-solanidine, and (22R,25R)-solasodine as new alkaloid scaffold leads in the search for more potent and selective TryR inhibitors.
de Lucio, Héctor; Gamo, Ana María; Ruiz-Santaquiteria, Marta; de Castro, Sonia; Sánchez-Murcia, Pedro A; Toro, Miguel A; Gutiérrez, Kilian Jesús; Gago, Federico; Jiménez-Ruiz, Antonio; Camarasa, María-José; Velázquez, Sonsoles
The objective of the current study was to enhance the proteolytic stability of peptide-based inhibitors that target critical protein-protein interactions at the dimerization interface of Leishmania infantum trypanothione reductase (Li-TryR) using a backbone modification strategy. To achieve this goal we carried out the synthesis, proteolytic stability studies and biological evaluation of a small library of α/β3-peptide foldamers of different lengths (from 9-mers to 13-mers) and different α→β substitution patterns related to prototype linear α-peptides. We show that several 13-residue α/β3-peptide foldamers retain inhibitory potency against the enzyme (in both activity and dimerization assays) while being far less susceptible to proteolytic degradation than an analogous α-peptide. The strong dependence of the binding affinities for Li-TryR on the length of the α/β-peptides is supported by theoretical calculations on conformational ensembles of the resulting complexes. Conjugation of the most proteolytically stable α/β-peptide with oligoarginines results in a molecule with potent activity against L. infantum promastigotes and amastigotes.
Full Text Available A high-throughput screen for compounds that induce TRAIL-mediated apoptosis identified ML100 as an active chemical probe, which potentiated TRAIL activity in prostate carcinoma PPC-1 and melanoma MDA-MB-435 cells. Follow-up in silico modeling and profiling in cell-based assays allowed us to identify NSC130362, a pharmacophore analog of ML100 that induced 65-95% cytotoxicity in cancer cells and did not affect the viability of human primary hepatocytes. In agreement with activation of the apoptotic pathway, both ML100 and NSC130362 synergistically with TRAIL induced caspase-3/7 activity in MDA-MB-435 cells. Subsequent affinity chromatography and inhibition studies convincingly demonstrated that glutathione reductase (GSR), a key component of the oxidative stress response, is a target of NSC130362. In accordance with the role of GSR in the TRAIL pathway, GSR gene silencing potentiated TRAIL activity in MDA-MB-435 cells but not in human hepatocytes. Inhibition of GSR activity resulted in the induction of oxidative stress, as evidenced by an increase in intracellular reactive oxygen species (ROS) and peroxidation of the mitochondrial membrane after NSC130362 treatment in MDA-MB-435 cells but not in human hepatocytes. The antioxidant reduced glutathione (GSH) fully protected MDA-MB-435 cells from cell lysis induced by NSC130362 and TRAIL, further confirming the interplay between GSR and TRAIL. As a consequence of activation of oxidative stress, combined treatment with different oxidative stress inducers and NSC130362 promoted cell death in a variety of cancer cells, but not in hepatocytes, both in cell-based assays and in vivo in a mouse tumor xenograft model.
Hartmann, Ana Paula; de Carvalho, Marcelo Rodrigues; Bernardes, Lilian Sibelle Campos; Moraes, Milena Hoehr de; de Melo, Eduardo Borges; Lopes, Carla Duque; Steindel, Mario; da Silva, João Santana; Carvalho, Ivone
Two series of diaryl-tetrahydrofurans and -furans were synthesised and screened for anti-trypanosomal activity against trypomastigote and amastigote forms of Trypanosoma cruzi, the causative agent of Chagas disease. Based on evidence that modification of a natural product may result in a more effective drug than the natural product itself, and using the known neolignan inhibitors veraguensin 1 and grandisin 2 as templates to synthesise simpler analogues, remarkable anti-trypanosomal activity and selectivity were found for the 3,5-dimethoxylated diaryl-furan 5c and the 2,4-dimethoxylated diaryl-tetrahydrofuran 4e analogues, with EC50 values of 0.01 μM and 0.75 μM, respectively, the former being 260-fold more potent than veraguensin 1 and 150-fold better than benznidazole, the currently available drug for Chagas disease treatment. The ability of the most potent anti-trypanosomal compounds to penetrate LLC-MK2 cells infected with T. cruzi amastigotes was tested, which revealed the 4e and 5e analogues as the most effective, causing no damage to mammalian cells. In particular, the majority of the derivatives were non-toxic to mouse spleen cells. 2D-QSAR studies show that the rigid central core and the position of the dimethoxy-aryl substituents dramatically affect the anti-trypanosomal activity. The mode of action of the most active anti-trypanosomal derivatives was investigated by exploring the anti-oxidant functions of trypanothione reductase (TR). The diarylfuran series displayed the strongest inhibition, highlighting compounds 5d-e (IC50 19.2 and 17.7 μM) and 5f-g (IC50 8.9 and 7.4 μM), respectively, with potency similar to or two-fold higher than the reference inhibitor clomipramine (IC50 15.2 μM).
Federal Laboratory Consortium — Argonne's high throughput facility provides highly automated and parallel approaches to material and materials chemistry development. The facility allows scientists...
A simple, high-throughput method to detect Plasmodium falciparum single nucleotide polymorphisms in the dihydrofolate reductase, dihydropteroate synthase, and P. falciparum chloroquine resistance transporter genes using polymerase chain reaction- and enzyme-linked immunosorbent assay
Alifrangis, Michael; Enosse, Sonia; Pearce, Richard
…However, to be a practical tool in the surveillance of drug resistance, simpler methods for high-throughput haplotyping are warranted. Here we describe a quick and simple technique that detects dhfr, dhps, and Pfcrt SNPs using polymerase chain reaction (PCR)- and enzyme-linked immunosorbent assay (ELISA)… the SNPs of dhfr, dhps, and Pfcrt with high specificity. The SSOP-ELISA compared well with a standard PCR-restriction fragment length polymorphism procedure, and gave identical positive results in more than 90% of the P. falciparum slide-positive samples tested. The SSOP-ELISA of all dhfr, dhps, or Pfcrt…
Beernink, Peter T [Walnut Creek, CA; Coleman, Matthew A [Oakland, CA; Segelke, Brent W [San Ramon, CA
Methods, compositions, and kits for the cell-free production and analysis of proteins are provided. The invention allows for the production of proteins from prokaryotic or eukaryotic sequences, including human cDNAs, using PCR and IVT methods, and detection of the proteins through fluorescence or immunoblot techniques. This invention can be used to identify optimized PCR and IVT conditions, codon usages and mutations. The methods are readily automated and can be used for high-throughput analysis of protein expression levels, interactions, and functional states.
Kuiper, V.; Kampherbeek, B. J.; Wieland, M. J.; de Boer, G.; ten Berge, G. F.; Boers, J.; Jager, R.; van de Peut, T.; Peijster, J. J. M.; Slot, E.; Steenbrink, S. W. H. K.; Teepen, T. F.; van Veen, A. H. V.
Maskless electron beam lithography, or electron beam direct write, has been around for a long time in the semiconductor industry and was pioneered from the mid-1960s onwards. This technique has been used for mask writing applications as well as device engineering and in some cases chip manufacturing. However, because of its relatively low throughput compared to optical lithography, electron beam lithography has never been the mainstream lithography technology. To extend optical lithography, double patterning (as a bridging technology) and EUV lithography are currently being explored. Irrespective of the technical viability of both approaches, one thing seems clear: they will be expensive. MAPPER Lithography is developing a maskless lithography technology based on massively parallel electron-beam writing with high-speed optical data transport for switching the electron beams. In this way optical columns can be made with a throughput of 10-20 wafers per hour. By clustering several of these columns together, high throughputs can be realized in a small footprint. This enables a highly cost-competitive alternative to double patterning and EUV alternatives. In 2007 MAPPER achieved its Proof of Lithography milestone by exposing in its Demonstrator 45 nm half-pitch structures with 110 electron beams in parallel, where all the beams were individually switched on and off. In 2008 MAPPER took the next step in its development by building several tools. A new platform has been designed and built which contains a 300 mm wafer stage, a wafer handler and an electron beam column with 110 parallel electron beams. This manuscript describes the first patterning results with this 300 mm platform.
Full Text Available The comet assay is a sensitive and versatile method for assessing DNA damage in cells. In the traditional version of the assay, there are many manual steps involved and few samples can be treated in one experiment. High-throughput modifications have been developed during recent years, and they are reviewed and discussed here. These modifications include accelerated scoring of comets; other important elements that have been studied and adapted to high throughput are cultivation and manipulation of cells or tissues before and after exposure, and freezing of treated samples until comet analysis and scoring. High-throughput methods save time and money, but they are useful for other reasons as well: large-scale experiments may be performed which are otherwise not practicable (e.g., analysis of many organs from exposed animals, and human biomonitoring studies), and automation gives more uniform sample treatment and less dependence on operator performance. The high-throughput modifications now available vary largely in their versatility, capacity, complexity and costs. The bottleneck for further increases in throughput appears to be the scoring.
Morgan, Mark; Grimshaw, Andrew
While it is true that the modern computer is many orders of magnitude faster than that of yesteryear, this tremendous growth in CPU clock rates is now over. Unfortunately, however, the growth in demand for computational power has not abated; whereas researchers a decade ago could simply wait for computers to get faster, today the only solution to the growing need for more powerful computational resources lies in the exploitation of parallelism. Software parallelization falls generally into two broad categories: "true parallel" and high-throughput computing. This chapter focuses on the latter of these two types of parallelism. With high-throughput computing, users can run many copies of their software at the same time across many different computers. This technique for achieving parallelism is powerful in its ability to provide high degrees of parallelism, yet simple in its conceptual implementation. This chapter covers various patterns of high-throughput computing usage and the skills and techniques necessary to take full advantage of them. By utilizing numerous examples and sample codes and scripts, we hope to provide the reader not only with a deeper understanding of the principles behind high-throughput computing, but also with a set of tools and references that will prove invaluable as she explores software parallelism with her own software applications and research.
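The pattern described here, many independent copies of one program over different inputs, can be sketched on a single machine with a process pool; the prime-counting task and the input values below are placeholders for whatever per-task work a real campaign would farm out across a cluster:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """CPU-bound stand-in for one unit of work: count primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def run_sweep(limits, workers=4):
    """High-throughput style: one independent task per input, no task
    talks to any other. On a cluster each call would be a separate job;
    locally a process pool gives the same embarrassingly parallel shape.
    """
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(count_primes, limits))

if __name__ == "__main__":
    print(run_sweep([10, 100, 1000, 10000]))
```

Because the tasks share no state, throughput scales by simply adding workers (or machines), which is exactly the property that distinguishes this style from "true parallel" codes that must communicate mid-computation.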
Gesley, M.; Puri, R.
A high-throughput spectral image microscopy system is configured for rapid detection of rare cells in large populations. To overcome flow cytometry rates and the use of fluorophore tags, the system architecture integrates sample mechanical handling, signal processors, and optics in a non-confocal version of light absorption and scattering spectroscopic microscopy. Spectral images with native contrast do not require the use of exogenous stains to render cells with submicron resolution. Structure may be characterized without restriction to cell clusters of differentiation.
Michael I Miller
Full Text Available This paper describes neuroinformatics technologies at 1 mm anatomical scale based on high-throughput 3D functional and structural imaging technologies of the human brain. The core is an abstract pipeline for converting functional and structural imagery into high-dimensional neuroinformatic representations containing on the order of 10^3-10^4 discriminating dimensions. The pipeline is based on advanced image analysis coupled to digital knowledge representations in the form of dense atlases of the human brain at gross anatomical scale. We demonstrate the integration of these high-dimensional representations with machine learning methods, which have become the mainstay of other fields of science including genomics as well as social networks. Such high-throughput facilities have the potential to alter the way medical images are stored and utilized in radiological workflows. The neuroinformatics pipeline is used to examine cross-sectional and personalized analyses of neuropsychiatric illnesses in clinical applications as well as longitudinal studies. We demonstrate the use of high-throughput machine learning methods for supporting (i) cross-sectional image analysis to evaluate the health status of individual subjects with respect to the population data, and (ii) integration of image and non-image information for diagnosis and prognosis.
Presentation on High Throughput PBTK at the PBK Modelling in Risk Assessment meeting in Ispra, Italy
Ligterink, Wilco; Hilhorst, Henk W M
High-throughput analysis of seed germination for phenotyping large genetic populations or mutant collections is very labor intensive and would highly benefit from an automated setup. Although very often used, the total germination percentage after a nominated period of time is not very informative as it lacks information about start, rate, and uniformity of germination, which are highly indicative of such traits as dormancy, stress tolerance, and seed longevity. The calculation of cumulative germination curves requires information about germination percentage at various time points. We developed the GERMINATOR package: a simple, highly cost-efficient, and flexible procedure for high-throughput automatic scoring and evaluation of germination that can be implemented without the use of complex robotics. The GERMINATOR package contains three modules: (I) design of experimental setup with various options to replicate and randomize samples; (II) automatic scoring of germination based on the color contrast between the protruding radicle and seed coat on a single image; and (III) curve fitting of cumulative germination data and the extraction, recap, and visualization of the various germination parameters. GERMINATOR is a freely available package that allows the monitoring and analysis of several thousands of germination tests, several times a day by a single person.
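Module III above reduces a cumulative germination curve to summary parameters. A minimal sketch of that extraction step, using linear interpolation rather than GERMINATOR's actual curve-fitting routine, with made-up example data (the parameter names Gmax, t50 and u8416 follow common germination-analysis usage):

```python
def time_to_fraction(times, cum_pct, target_pct):
    """Interpolate the time at which cumulative germination reaches target_pct."""
    points = list(zip(times, cum_pct))
    for (t0, g0), (t1, g1) in zip(points, points[1:]):
        if g0 <= target_pct <= g1:
            if g1 == g0:          # flat segment: target already reached
                return t0
            return t0 + (target_pct - g0) * (t1 - t0) / (g1 - g0)
    raise ValueError("target percentage never reached")

def germination_parameters(times, cum_pct):
    """Maximum germination (Gmax), speed (t50) and uniformity (t84 - t16)."""
    return {
        "gmax": max(cum_pct),
        "t50": time_to_fraction(times, cum_pct, 50.0),
        "u8416": (time_to_fraction(times, cum_pct, 84.0)
                  - time_to_fraction(times, cum_pct, 16.0)),
    }

# Hypothetical daily scoring of one seed lot (percent germinated per day):
print(germination_parameters([0, 1, 2, 3, 4], [0, 10, 50, 90, 100]))
```

These three numbers capture exactly what a single end-point percentage misses: how complete, how fast, and how uniform germination was.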
Environmental chemicals can elicit endocrine disruption by altering steroid hormone biosynthesis and metabolism (steroidogenesis) causing adverse reproductive and developmental effects. Historically, a lack of assays resulted in few chemicals having been evaluated for effects on steroidogenesis. The steroidogenic pathway is a series of hydroxylation and dehydrogenation steps carried out by CYP450 and hydroxysteroid dehydrogenase enzymes, yet the only enzyme in the pathway for which a high-throughput screening (HTS) assay has been developed is aromatase (CYP19A1), responsible for the aromatization of androgens to estrogens. Recently, the ToxCast HTS program adapted the OECD validated H295R steroidogenesis assay using human adrenocortical carcinoma cells into a high-throughput model to quantitatively assess the concentration-dependent (0.003-100 µM) effects of chemicals on 10 steroid hormones including progestagens, androgens, estrogens and glucocorticoids. These results, in combination with two CYP19A1 inhibition assays, comprise a large dataset amenable to clustering approaches supporting the identification and characterization of putative mechanisms of action (pMOA) for steroidogenesis disruption. In total, 514 chemicals were tested in all CYP19A1 and steroidogenesis assays. 216 chemicals were identified as CYP19A1 inhibitors in at least one CYP19A1 assay. 208 of these chemicals also altered hormone levels in the H295R assay, suggesting 96% sensitivity in the
Lu, Guoxin [Iowa State Univ., Ames, IA (United States)
High-throughput screening (HTS) techniques have been applied to many research fields. Robotic microarray printing and automated microtiter-plate handling allow HTS to be performed in both heterogeneous and homogeneous formats, with minimal sample required for each assay element. In this dissertation, new HTS techniques for enzyme activity analysis were developed. First, patterns of immobilized enzyme on a nylon screen were detected by a multiplexed capillary system. The imaging resolution is limited by the outer diameter of the capillaries; to obtain finer images, capillaries with smaller outer diameters can be used to form the imaging probe. Application of capillary electrophoresis allows separation of the product from the substrate in the reaction mixture, so that the product does not have to have optical properties different from those of the substrate. UV absorption detection allows almost universal detection of organic molecules, so no modifications of either the substrate or the product molecules are necessary. This technique has the potential to be used in screening local distribution variations of specific biomolecules in a tissue or in screening multiple immobilized catalysts. Another high-throughput screening technique was developed by directly monitoring the light intensity of the immobilized-catalyst surface using a scientific charge-coupled device (CCD). Briefly, the surface of an enzyme microarray is focused onto a scientific CCD using an objective lens. By carefully choosing the detection wavelength, generation of product on an enzyme spot can be seen by the CCD, and analyzing the light intensity change over time on an enzyme spot gives information on the reaction rate. The same microarray can be used many times; thus, high-throughput kinetic studies of hundreds of catalytic reactions are made possible. Finally, we studied the fluorescence emission spectra of ADP and obtained the detection limits for ADP under three different…
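The rate estimate described for the CCD experiment, the intensity change over time on an enzyme spot, amounts to fitting a slope. A self-contained least-squares sketch (the intensity trace below is invented for illustration):

```python
def fit_rate(times, intensities):
    """Ordinary least-squares slope of intensity vs. time.

    slope = sum((t - t_mean) * (y - y_mean)) / sum((t - t_mean)^2)
    """
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(intensities) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, intensities))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

# Hypothetical trace for one spot: signal rising roughly linearly as
# product accumulates; the slope is the apparent initial rate.
times = [0.0, 1.0, 2.0, 3.0, 4.0]
intensities = [0.02, 0.13, 0.19, 0.31, 0.40]
print(f"apparent rate: {fit_rate(times, intensities):.3f} AU/s")
```

Run once per spot over a whole microarray image series, this turns a stack of CCD frames into one rate constant per catalyst, which is what makes the kinetic screen high-throughput.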
Li, Xianqiang; Jiang, Xin; Yaoi, Takuro
Transcription factors are a group of proteins that modulate the expression of genes involved in many biological processes, such as cell growth and differentiation. Alterations in transcription factor function are associated with many human diseases, and therefore these proteins are attractive potential drug targets. A key issue in the development of such therapeutics is the generation of effective tools that can be used for high throughput discovery of the critical transcription factors involved in human diseases, and the measurement of their activities in a variety of disease or compound-treated samples. Here, a number of innovative arrays and 96-well format assays for profiling and measuring the activities of transcription factors will be discussed.
Shukla, Abhinav A; Rameez, Shahid; Wolfe, Leslie S; Oien, Nathan
The ability to conduct multiple experiments in parallel significantly reduces the time that it takes to develop a manufacturing process for a biopharmaceutical. This is particularly significant before clinical entry, because process development and manufacturing are on the "critical path" for a drug candidate to enter clinical development. High-throughput process development (HTPD) methodologies can be similarly impactful during late-stage development, both for developing the final commercial process as well as for process characterization and scale-down validation activities that form a key component of the licensure filing package. This review examines the current state of the art for HTPD methodologies as they apply to cell culture, downstream purification, and analytical techniques. In addition, we provide a vision of how HTPD activities across all of these spaces can integrate to create a rapid process development engine that can accelerate biopharmaceutical drug development.
Waage, Johannes Eichler
The recent advent of high-throughput sequencing of nucleic acids (RNA and DNA) has vastly expanded research into the functional and structural biology of the genome of all living organisms (and even a few dead ones). With this enormous and exponential growth in biological data generation come equally large demands in data handling, analysis and interpretation, perhaps defining the modern challenge of the computational biologist of the post-genomic era. The first part of this thesis consists of a general introduction to the history, common terms and challenges of next generation sequencing, focusing on oft-encountered problems in data processing, such as quality assurance, mapping, normalization, visualization, and interpretation. Presented in the second part are scientific endeavors representing solutions to problems of two sub-genres of next generation sequencing. For the first flavor, RNA-sequencing…
Stokes, David L; Ubarretxena-Belandia, Iban; Gonen, Tamir; Engel, Andreas
Membrane proteins play a tremendously important role in cell physiology and serve as targets for an increasing number of drugs. Structural information is key to understanding their function and for developing new strategies for combating disease. However, the complex physical chemistry associated with membrane proteins has made them more difficult to study than their soluble cousins. Electron crystallography has historically been a successful method for solving membrane protein structures and has the advantage of providing a native lipid environment for these proteins. Specifically, when membrane proteins form two-dimensional arrays within a lipid bilayer, electron microscopy can be used to collect both images and diffraction patterns, and the corresponding data can be combined to produce a three-dimensional reconstruction, which under favorable conditions can extend to atomic resolution. As in X-ray crystallography, the quality of the structures is very much dependent on the order and size of the crystals. However, unlike X-ray crystallography, high-throughput methods for screening crystallization trials for electron crystallography are not in general use. In this chapter, we describe two alternative methods for high-throughput screening of membrane protein crystallization within the lipid bilayer. The first method relies on the conventional use of dialysis for removing detergent and thus reconstituting the bilayer; an array of dialysis wells in the standard 96-well format allows the use of a liquid-handling robot and greatly increases throughput. The second method relies on titration of cyclodextrin as a chelating agent for detergent; a specialized pipetting robot has been designed not only to add cyclodextrin in a systematic way, but also to use light scattering to monitor the reconstitution process. In addition, the use of liquid-handling robots for making negatively stained grids and methods for automatically imaging samples in the electron microscope are described.
Barrett, Tanya; Troup, Dennis B.; Wilhite, Stephen E.; Ledoux, Pierre; Rudnev, Dmitry; Evangelista, Carlos; Kim, Irene F.; Soboleva, Alexandra; Tomashevsky, Maxim; Marshall, Kimberly A.; Phillippy, Katherine H.; Sherman, Patti M.; Muertter, Rolf N.; Edgar, Ron
The Gene Expression Omnibus (GEO) at the National Center for Biotechnology Information (NCBI) is the largest public repository for high-throughput gene expression data. Additionally, GEO hosts other categories of high-throughput functional genomic data, including those that examine genome copy number variations, chromatin structure, methylation status and transcription factor binding. These data are generated by the research community using high-throughput technologies like microarrays and, m...
Sharma, Punita; Ando, D. Michael; Daub, Aaron; Kaye, Julia A.; Finkbeiner, Steven
Despite years of incremental progress in our understanding of diseases such as Alzheimer's disease (AD), Parkinson's disease (PD), Huntington's disease (HD), and amyotrophic lateral sclerosis (ALS), there are still no disease-modifying therapeutics. The discrepancy between the number of lead compounds and approved drugs may partially be a result of the methods used to generate the leads, and highlights the need for new technology to obtain more detailed and physiologically relevant information on cellular processes in normal and diseased states. Our high-throughput screening (HTS) system in a primary neuron model can help address this unmet need. HTS allows scientists to assay thousands of conditions in a short period of time, which can reveal completely new aspects of biology and identify potential therapeutics in the span of a few months, where conventional methods could take years or fail altogether. HTS in primary neurons combines the advantages of HTS with the biological relevance of intact, fully differentiated neurons, which can capture the critical cellular events or homeostatic states that make neurons uniquely susceptible to disease-associated proteins. We detail methodologies of our primary neuron HTS assay workflow from sample preparation to data reporting. We also discuss the adaptation of our HTS system into high-content screening (HCS), a type of HTS that uses multichannel fluorescence images to capture biological events in situ and is uniquely suited to the study of dynamical processes in living cells. PMID:22341232
Protein X-ray crystallography recently celebrated its 50th anniversary. The structures of myoglobin and hemoglobin determined by Kendrew and Perutz provided the first glimpses into the complex protein architecture and chemistry. Since then, the field of structural molecular biology has experienced extraordinary progress, and now over 53,000 protein structures have been deposited into the Protein Data Bank. In the past decade many advances in macromolecular crystallography have been driven by world-wide structural genomics efforts. This was made possible because of third-generation synchrotron sources, structure phasing approaches using anomalous signal, and cryo-crystallography. Complementary progress in molecular biology, proteomics, hardware and software for crystallographic data collection, structure determination and refinement, computer science, databases, robotics and automation improved and accelerated many processes. These advancements provide a robust foundation for structural molecular biology and assure a strong contribution to science in the future. In this report we focus mainly on reviewing structural genomics high-throughput X-ray crystallography technologies and their impact. PMID:19765976
Abstract Background The variations within an individual's HLA (Human Leukocyte Antigen) genes have been linked to many immunological events, e.g. susceptibility to disease, response to vaccines, and the success of blood, tissue, and organ transplants. Although the microarray format has the potential to achieve high-resolution typing, this has yet to be attained due to inefficiencies of current probe design strategies. Results We present a novel three-step approach to the design of high-throughput microarray assays for HLA typing. This approach first selects sequences containing the SNPs present in all alleles of the locus of interest. It then calculates the number of base changes necessary to convert a candidate probe sequence into the closest subsequence within the set of sequences likely to be present in the sample, including the remainder of the human genome, in order to identify those candidate probes which are "ultraspecific" for the allele of interest. Due to the high specificity of these sequences, it is possible that preliminary steps such as PCR amplification are no longer necessary. Lastly, the minimum number of these ultraspecific probes is selected such that the highest-resolution typing can be achieved for the minimal cost of production. As an example, an array was designed and in silico results were obtained for typing of the HLA-B locus. Conclusion The assay presented here provides a higher resolution than has previously been developed and includes more alleles than previously considered. Based upon the in silico and preliminary experimental results, we believe that the proposed approach can be readily applied to any highly polymorphic gene system.
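The "ultraspecific" criterion in the second step reduces to a nearest-neighbor distance computation over the background sequence set. A minimal sketch of that idea follows; the function names, the Hamming-distance simplification (the paper counts base changes more generally), and the three-change threshold are illustrative assumptions, not the assay's exact procedure:

```python
def min_hamming_to_background(probe, background):
    """Smallest Hamming distance between `probe` and any equal-length
    window of `background` (a stand-in for all off-target sequence)."""
    k = len(probe)
    best = k
    for i in range(len(background) - k + 1):
        window = background[i:i + k]
        best = min(best, sum(a != b for a, b in zip(probe, window)))
    return best

def ultraspecific_probes(candidates, background, min_changes=3):
    """Keep candidates needing at least `min_changes` base changes to
    match anything in the background -- the 'ultraspecific' idea."""
    return [p for p in candidates
            if min_hamming_to_background(p, background) >= min_changes]
```

A real implementation would index the human genome rather than scan it window by window, but the selection logic is the same.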
Pusey, Marc L.; Forsythe, Elizabeth; Achari, Aniruddha
We have shown that by covalently modifying a subpopulation, less than or equal to 1%, of a macromolecule with a fluorescent probe, the labeled material will add to a growing crystal as a microheterogeneous growth unit. Labeling procedures can be readily incorporated into the final stages of purification, and the presence of the probe at low concentrations does not affect the X-ray data quality or the crystallization behavior. The presence of the trace fluorescent label gives a number of advantages when used with high throughput crystallizations. The covalently attached probe will concentrate in the crystal relative to the solution, and under fluorescent illumination crystals show up as bright objects against a dark background. Non-protein structures, such as salt crystals, will not incorporate the probe and will not show up under fluorescent illumination. Brightly fluorescent crystals are readily found against less bright precipitated phases, which under white light illumination may obscure the crystals. Automated image analysis to find crystals should be greatly facilitated, without having to first define crystallization drop boundaries, since only the protein or protein-containing structures show up. Fluorescence intensity is a faster search parameter, whether visually or by automated methods, than looking for crystalline features. We are now testing the use of high fluorescence intensity regions, in the absence of clear crystalline features or "hits", as a means for determining potential lead conditions. A working hypothesis is that kinetics leading to non-structured phases may overwhelm and trap more slowly formed ordered assemblies, which subsequently show up as regions of brighter fluorescence intensity. Preliminary experiments with test proteins have resulted in the extraction of a number of crystallization conditions from screening outcomes based solely on the presence of bright fluorescent regions. Subsequent experiments will test this approach using a wider...
Wieland, M. J.; de Boer, G.; ten Berge, G. F.; Jager, R.; van de Peut, T.; Peijster, J. J. M.; Slot, E.; Steenbrink, S. W. H. K.; Teepen, T. F.; van Veen, A. H. V.; Kampherbeek, B. J.
Maskless electron beam lithography, or electron beam direct write, has been around for a long time in the semiconductor industry and was pioneered from the mid-1960s onwards. This technique has been used for mask writing applications as well as device engineering and, in some cases, chip manufacturing. However, because of its relatively low throughput compared to optical lithography, electron beam lithography has never been the mainstream lithography technology. To extend optical lithography, double patterning (as a bridging technology) and EUV lithography are currently being explored. Irrespective of the technical viability of both approaches, one thing seems clear: they will be expensive. MAPPER Lithography is developing a maskless lithography technology based on massively parallel electron-beam writing with high-speed optical data transport for switching the electron beams. In this way optical columns can be made with a throughput of 10-20 wafers per hour. By clustering several of these columns together, high throughputs can be realized in a small footprint. This enables a highly cost-competitive alternative to double patterning and EUV alternatives. In 2007 MAPPER reached its Proof of Lithography milestone by exposing, in its Demonstrator, 45 nm half-pitch structures with 110 electron beams in parallel, in which all beams were individually switched on and off. In 2008 MAPPER took the next step in its development by building several tools. The objective of building these tools is to allow semiconductor companies to verify tool performance in their own environment. To enable this, the tools will have a 300 mm wafer stage in addition to a 110-beam optics column. First exposures at 45 nm half-pitch resolution have been performed and analyzed. On the same wafer it is observed that all beams print, and based on analysis of 11 beams the CD for the different patterns is within 2.2 nm of target and the CD uniformity for the different patterns is better...
High throughput toxicokinetics (HTTK) is an approach that allows for rapid estimation of TK for hundreds of environmental chemicals. HTTK-based reverse dosimetry (i.e., reverse toxicokinetics or RTK) is used to convert high throughput in vitro toxicity screening (HTS) da...
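In its simplest linear form, the reverse-dosimetry conversion at the heart of RTK divides an in vitro bioactive concentration by the steady-state plasma concentration predicted for a unit dose. The sketch below shows the general idea only; the function name and the assumption of linear kinetics are ours, not details from this record:

```python
def administered_equivalent_dose(ac50_um, css_um_per_unit_dose):
    """Reverse dosimetry sketch: scale an in vitro bioactive
    concentration (e.g. an AC50, in uM) into the external dose
    (mg/kg/day) that would produce that concentration at steady state,
    assuming Css scales linearly with dose.

    ac50_um:              in vitro activity concentration, uM
    css_um_per_unit_dose: predicted steady-state plasma concentration
                          (uM) for a 1 mg/kg/day exposure
    """
    return ac50_um / css_um_per_unit_dose
```

For example, an AC50 of 10 uM and a predicted Css of 2 uM per mg/kg/day give an administered equivalent dose of 5 mg/kg/day, which can then be compared against exposure estimates.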
The EPA ToxCast effort has screened thousands of chemicals across hundreds of high-throughput in vitro screening assays. The project is now leveraging high-throughput transcriptomic (HTTr) technologies to substantially expand its coverage of biological pathways. The first HTTr sc...
National Aeronautics and Space Administration — Busek is developing a high throughput nominal 100-W Hall Effect Thruster. This device is well sized for spacecraft ranging in size from several tens of kilograms to...
National Aeronautics and Space Administration — Busek Co. Inc. proposes to develop a high throughput, nominal 100 W Hall Effect Thruster (HET). This HET will be sized for small spacecraft (< 180 kg), including...
Mao, Samuel S.
It usually takes more than 10 years for a new material to move from initial research to its first commercial application. Therefore, accelerating the pace of discovery of new materials is critical to tackling challenges in areas ranging from clean energy to national security. As discovery of new materials has not kept pace with the product design cycles in many sectors of industry, there is a pressing need to develop and utilize high throughput screening and discovery technologies for the growth and characterization of new materials. This article presents two distinctive types of high throughput thin film material growth approaches, along with a number of high throughput characterization techniques, established in the author's group. These approaches include a second-generation "discrete" combinatorial semiconductor discovery technology that enables the creation of arrays of individually separated thin film semiconductor materials of different compositions, and a "continuous" high throughput thin film material screening technology that enables the realization of ternary alloy libraries with continuously varying elemental ratios.
de Boer, Jan; van Blitterswijk, Clemens
This complete, yet concise, guide introduces you to the rapidly developing field of high throughput screening of biomaterials: materiomics. Bringing together the key concepts and methodologies used to determine biomaterial properties, you will understand the adaptation and application of materiomics.
Since the first success story was reported in the middle of the 1990s, combinatorial materials science has attracted more and more attention in the materials community. In the past two decades, a great amount of effort has been made to develop combinatorial high-throughput approaches for materials research. However, few high-throughput mechanical characterization methods and tools have been reported. To date, a number of micro-scale mechanical characterization tools have been developed, which provide a basis for combinatorial high-throughput mechanical characterization. Many existing micro-mechanical testing apparatuses can be suitably modified for high-throughput characterization. For example, automated scanning nanoindentation is used for measuring the hardness and elastic modulus of diffusion-multiple alloy samples, and cantilever beam arrays are used to characterize, in parallel, the thermo-mechanical behavior of thin films with wide composition gradients. The interpretation of micro-mechanical testing data from thin films and micro-scale samples is most critical and challenging, as the mechanical properties of their bulk counterparts cannot be intuitively extrapolated due to the well-known size and microstructure dependence. Nevertheless, high-throughput mechanical characterization data from combinatorial micro-scale samples still reflect the dependence of the mechanical properties on composition and microstructure, which facilitates the understanding of intrinsic materials behavior and the fast screening of bulk mechanical properties. After promising compositions and microstructures are pinned down, bulk samples can be prepared to measure the accurate properties and verify the combinatorial high-throughput characterization results. By developing combinatorial high-throughput mechanical characterization methods and tools, in combination with high-throughput synthesis, structural materials research would be promoted by...
Zhang, Ying; Mallapragada, Surya K; Narasimhan, Balaji
The dissolution behavior of polystyrene (PS) in biodiesel was studied by developing a novel high throughput approach based on Fourier-transform infrared (FTIR) microscopy. A multiwell device for high throughput dissolution testing was fabricated using a photolithographic rapid prototyping method. The dissolution of PS films in each well was tracked by following the characteristic IR band of PS and the effect of PS molecular weight and temperature on the dissolution rate was simultaneously investigated. The results were validated with conventional gravimetric methods. The high throughput method can be extended to evaluate the dissolution profiles of a large number of samples, or to simultaneously investigate the effect of variables such as polydispersity, crystallinity, and mixed solvents. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Morgan, R E; Westwood, N J
High throughput technologies continue to develop in response to the challenges set by the genome projects. This article discusses how the techniques of both high throughput screening (HTS) and synthesis can influence research in parasitology. Examples of the use of targeted and phenotype-based HTS using unbiased compound collections are provided. The important issue of identifying the protein target(s) of bioactive compounds is discussed from the synthetic chemist's perspective. This article concludes by reviewing recent examples of successful target identification studies in parasitology.
The workflow of a high throughput screening setup for the rapid identification of new and improved sensor materials is presented. The polyol method was applied to prepare nanoparticular metal oxides as base materials, which were functionalised by surface doping. Using multi-electrode substrates and high throughput impedance spectroscopy (HT-IS), a wide range of materials could be screened in a short time. Applying HT-IS in search of new selective gas sensing materials, a NO2-tolerant NO sensing material with reduced sensitivities towards other test gases was identified, based on iridium-doped zinc oxide. Analogous behaviour was observed for iridium-doped indium oxide.
This work demonstrates the detection method for a high throughput droplet-based agglutination assay system. Using simple hydrodynamic forces to mix and aggregate functionalized microbeads, we avoid the need for magnetic assistance or mixing structures. The concentration of our target molecules was estimated by agglutination strength, obtained through optical image analysis. Agglutination in droplets was performed at flow rates of 150 µl/min and occurred in under a minute, with the potential to perform high-throughput measurements. The lowest target concentration detected in droplet microfluidics was 0.17 nM, which is three orders of magnitude more sensitive than a conventional card-based agglutination assay.
The recent advent of high throughput sequencing of nucleic acids (RNA and DNA) has vastly expanded research into the functional and structural biology of the genome of all living organisms (and even a few dead ones). With this enormous and exponential growth in biological data generation come...
Geertsma, Eric R.; Poolman, Bert
We developed a generic method for high-throughput cloning in bacteria that are less amenable to conventional DNA manipulations. The method involves ligation-independent cloning in an intermediary Escherichia coli vector, which is rapidly converted via vector-backbone exchange (VBEx) into an
de Jong, R.N.; Daniëls, M.; Kaptein, R.; Folkers, G.E.
Structural and functional genomics initiatives significantly improved cloning methods over the past few years. Although recombinational cloning is highly efficient, its costs urged us to search for an alternative high throughput (HTP) cloning method. We implemented a modified Enzyme Free Cloning
Fiers, M.W.E.J.; Burgt, van der A.; Datema, E.; Groot, de J.C.W.; Ham, van R.C.H.J.
Background - Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses
De Masi, Federico; Chiarella, P.; Wilhelm, H.
Recent advances in proteomics research underscore the increasing need for high-affinity monoclonal antibodies, which are still generated with lengthy, low-throughput antibody production techniques. Here we present a semi-automated, high-throughput method of hybridoma generation and identification...
High-throughput screening (HTS) studies are providing a rich source of data that can be applied to profile thousands of chemical compounds for biological activity and potential toxicity. EPA’s ToxCast™ project, and the broader Tox21 consortium, in addition to projects worldwide,...
High-throughput screening (HTS) studies are providing a rich source of data that can be applied to chemical profiling to address sensitivity and specificity of molecular targets, biological pathways, cellular and developmental processes. EPA’s ToxCast project is testing 960 uniq...
Oude Elferink, Ronald
In a recent paper by Michiels et al. an important step was made towards genuine high throughput functional genomics. The authors produced an arrayed adenoviral library containing > 120000 cDNAs isolated from human placenta. This library can be used for arrayed transduction of cell lines in
Bell, Shannon M
Abstract Background High throughput methodologies such as microarrays, mass spectrometry and plate-based small molecule screens are increasingly used to facilitate discoveries from gene function to drug candidate identification. These large-scale experiments are typically carried out over the course of months and years, often without the controls needed to compare directly across the dataset. Few methods are available to facilitate comparisons of high throughput metabolic data generated in batches where explicit in-group controls for normalization are lacking. Results Here we describe MIPHENO (Mutant Identification by Probabilistic High throughput-Enabled Normalization), an approach for post-hoc normalization of quantitative first-pass screening data in the absence of explicit in-group controls. This approach includes a quality control step and facilitates cross-experiment comparisons that decrease the false non-discovery rates, while maintaining the high accuracy needed to limit false positives in first-pass screening. Results from simulation show an improvement in both accuracy and false non-discovery rate over a range of population parameters (p < 10⁻¹⁶) and a modest but significant (p < 10⁻¹⁶) improvement in area under the receiver operator characteristic curve of 0.955 for MIPHENO vs 0.923 for a group-based statistic (z-score). Analysis of the high throughput phenotypic data from the Arabidopsis Chloroplast 2010 Project (http://www.plastid.msu.edu/) showed a ~4-fold increase in the ability to detect previously described or expected phenotypes over the group-based statistic. Conclusions Results demonstrate MIPHENO offers substantial benefit in improving the ability to detect putative mutant phenotypes from post-hoc analysis of large data sets. Additionally, it facilitates data interpretation and permits cross-dataset comparison where group-based controls are missing. MIPHENO is applicable to a wide range of high throughput screenings, and the code is...
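The core problem MIPHENO addresses, making batches comparable without in-group controls, can be illustrated with a much-simplified stand-in: treat the bulk of each batch as its own reference (assuming most samples behave like wild type) and center each batch on its median before cross-batch comparison. This is only an illustration of post-hoc normalization, not MIPHENO's probabilistic algorithm:

```python
import statistics

def batch_normalize(batches):
    """Post-hoc normalization without explicit controls: center every
    batch on its own median, on the assumption that most wells in a
    first-pass screen are unremarkable, so values become comparable
    across experiments run months apart."""
    normalized = []
    for batch in batches:
        med = statistics.median(batch)
        normalized.append([x - med for x in batch])
    return normalized
```

After centering, two batches measured on drifting instruments, e.g. [1, 2, 3] and [11, 12, 13], map onto the same scale, so an outlier well can be scored against the pooled distribution rather than only within its own batch.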
Tanackovic, Vanja; Rydahl, Maja Gro; Pedersen, Henriette Lodberg
In this study we introduce the starch-recognising carbohydrate binding module family 20 (CBM20) from Aspergillus niger for screening biological variations in starch molecular structure using high throughput carbohydrate microarray technology. Defined linear, branched and phosphorylated maltooligosaccharides, pure starch samples including a variety of different structures with variations in the amylopectin branching pattern, amylose content and phosphate content, enzymatically modified starches, and glycogen were included. Using this technique, different important structures, including amylose content and branching degree, could be differentiated in a high throughput fashion. The screening method was validated using transgenic barley grain analysed during development and subjected to germination. Typically, extreme branching or linearity was detected less often than normal starch structures. The method offers...
Manipulation of gene expression on a genome-wide level is one of the most important systematic tools in the post-genome era. Such manipulations have largely been enabled by expression cloning approaches using sequence-verified cDNA libraries, large-scale RNA interference libraries (shRNA or siRNA), and zinc finger nuclease technologies. More recently, the CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) and CRISPR-associated (Cas9)-mediated gene editing technology has been described that holds great promise for future use of this technology in genomic manipulation. It was suggested that the CRISPR system has the potential to be used in high-throughput, large-scale loss-of-function screening. Here we discuss some of the challenges in engineering of CRISPR/Cas genomic libraries and some of the aspects that need to be addressed in order to use this technology on a high-throughput scale.
Willumsen, Niels J; Bech, Morten; Olesen, Søren-Peter
Proper function of ion channels is crucial for all living cells. Ion channel dysfunction may lead to a number of diseases, so-called channelopathies, and a number of common diseases, including epilepsy, arrhythmia, and type II diabetes, are primarily treated by drugs that modulate ion channels. A cornerstone in current drug discovery is high throughput screening assays, which allow examination of the activity of specific ion channels, though only to a limited extent. Conventional patch clamp remains the sole technique with the sufficiently high time resolution and sensitivity required for precise and direct characterization of ion channel properties. However, patch clamp is a slow, labor-intensive, and thus expensive, technique. New techniques combining the reliability and high information content of patch clamping with the virtues of the high throughput philosophy are emerging and predicted to make a number of ion...
Volety, Kalpana K; Huyberechts, Guido P J
This Research Article presents a strategy to identify the optimum compositions in metal alloys with certain desired properties in a high-throughput screening environment, using a multiobjective optimization approach. In addition to the identification of the optimum compositions in a primary screening, the strategy also allows pointing to regions in the compositional space where further exploration in a secondary screening could be carried out. The strategy for the primary screening is a combination of two multiobjective optimization approaches namely Pareto optimality and desirability functions. The experimental data used in the present study have been collected from over 200 different compositions belonging to four different alloy systems. The metal alloys (comprising Fe, Ti, Al, Nb, Hf, Zr) are synthesized and screened using high-throughput technologies. The advantages of such a kind of approach compared to the limitations of the traditional and comparatively simpler approaches like ranking and calculating figures of merit are discussed.
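The two multiobjective ingredients named in this record, Pareto optimality and desirability functions, can each be sketched in a few lines. This is a toy illustration assuming every objective is to be maximized and using a simple one-sided linear desirability; it is not the authors' implementation, and all names and thresholds are ours:

```python
def pareto_front(points):
    """Indices of non-dominated points when maximizing all objectives:
    a point is kept unless some other point is at least as good in
    every objective and strictly better in at least one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] >= p[k] for k in range(len(p))) and
            any(q[k] > p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

def desirability(value, lo, hi):
    """One-sided (larger-is-better) desirability: 0 below `lo`,
    1 above `hi`, linear in between. Per-property desirabilities are
    typically combined (e.g. by a geometric mean) into one score."""
    if value <= lo:
        return 0.0
    if value >= hi:
        return 1.0
    return (value - lo) / (hi - lo)
```

In a primary screen, the Pareto front picks out candidate compositions that no other composition beats on all properties at once, while desirability scores rank them and flag compositional regions worth a secondary screen.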
Fraikin, Jean-Luc; Teesalu, Tambet; McKenney, Christopher M; Ruoslahti, Erkki; Cleland, Andrew N
Synthetic nanoparticles and genetically modified viruses are used in a range of applications, but high-throughput analytical tools for the physical characterization of these objects are needed. Here we present a microfluidic analyser that detects individual nanoparticles and characterizes complex, unlabelled nanoparticle suspensions. We demonstrate the detection, concentration analysis and sizing of individual synthetic nanoparticles in a multicomponent mixture with sufficient throughput to analyse 500,000 particles per second. We also report the rapid size and titre analysis of unlabelled bacteriophage T7 in both salt solution and mouse blood plasma, using just ~1 × 10⁻⁶ l of analyte. Unexpectedly, in the native blood plasma we discover a large background of naturally occurring nanoparticles with a power-law size distribution. The high-throughput detection capability, scalable fabrication and simple electronics of this instrument make it well suited for diverse applications.
Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique
A high-throughput multiplex assay for the detection of genetically modified organisms (GMOs) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") from taxon endogenous reference genes, from GMO constructs, screening targets, construct-specific and event-specific targets, and finally from donor organisms. This assay avoids certain shortcomings of the multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.
Engmark, Mikael; De Masi, Federico; Laustsen, Andreas Hougaard
Insight into the epitopic recognition pattern for polyclonal antivenoms is a strong tool for accurate prediction of antivenom cross-reactivity and provides a basis for design of novel antivenoms. In this work, a high-throughput approach was applied to characterize linear epitopes in 966 individual toxins from pit vipers (Crotalidae) using the ICP Crotalidae antivenom. Due to an abundance of snake venom metalloproteinases and phospholipase A2s in the venoms used for production of the investigated antivenom, this study focuses on these toxin families.
Otis, Richard A.; Liu, Zi-Kui
One foundational component of the integrated computational materials engineering (ICME) and Materials Genome Initiative approaches is computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method, pioneered by Kaufman, has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, we present our recent efforts in developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput first-principles calculations and the CALPHAD method, along with their potential propagation to downstream ICME modeling and simulations.
Nierychlo, Marta; Larsen, Poul; Jørgensen, Mads Koustrup
16S rRNA gene amplicon sequencing has been developed over the past few years and is now ready to use for more comprehensive studies related to plant operation and optimization thanks to short analysis time, low cost, high throughput, and high taxonomic resolution. In this study we show how 16S rRNA gene amplicon sequencing can be used to reveal factors of importance for the operation of full-scale nutrient removal plants related to settling problems and floc properties. Using optimized DNA extraction protocols, indexed primers and our in-house Illumina platform, we prepared multiple samples ... be correlated to the presence of the species that are regarded as "strong" and "weak" floc formers. In conclusion, 16S rRNA gene amplicon sequencing provides a high throughput approach for rapid and cheap community profiling of activated sludge that, in combination with multivariate statistics, can be used...
Václavík, Jan; Melich, Radek; Pintr, Pavel; Pleštil, Jan
Affordable, long-wave infrared hyperspectral imaging calls for the use of an uncooled FPA with high-throughput optics. This paper describes the design of the optical part of a stationary hyperspectral imager in a spectral range of 7–14 µm with a field of view of 20°×10°. The imager employs a push-broom method implemented with a scanning mirror. High throughput and a demand for simplicity and rigidity led to a fully refractive design with highly aspheric surfaces and off-axis positioning of the detector array. The design was optimized to exploit the machinability of infrared materials by the SPDT method and a simple assembly.
The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.
Jones, Neil Christopher
High throughput data acquisition technology has inarguably transformed the landscape of the life sciences, in part by making possible---and necessary---the computational disciplines of bioinformatics and biomedical informatics. These fields focus primarily on developing tools for analyzing data and generating hypotheses about objects in nature, and it is in this context that we address three pressing problems in the fields of the computational life sciences which each require computing capaci...
Full Text Available Abstract Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.
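The core idea behind MUM-style alignment, finding maximal exact matches between queries and a reference, can be illustrated without a GPU. The sketch below is a naive seed-and-extend Python version; the function name and the simple dictionary seed index are illustrative, not MUMmerGPU's suffix-tree implementation:

```python
from collections import defaultdict

def maximal_exact_matches(reference, query, min_len=4):
    """Report (query_pos, ref_pos, length) exact matches of at least
    min_len that cannot be extended in either direction."""
    k = min_len
    index = defaultdict(list)  # seed index: k-mer -> positions in reference
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    mems = set()
    for q in range(len(query) - k + 1):
        for r in index[query[q:q + k]]:
            # skip seeds that sit inside a longer match found from the left
            if q > 0 and r > 0 and query[q - 1] == reference[r - 1]:
                continue
            length = k
            while (q + length < len(query) and r + length < len(reference)
                   and query[q + length] == reference[r + length]):
                length += 1
            mems.add((q, r, length))
    return sorted(mems)
```

A real aligner replaces the dictionary with a suffix tree (or array) so that seeding does not depend on a fixed k, which is what makes the GPU memory layout in MUMmerGPU interesting.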
Goecks, Jeremy; Eberhard, Carl; Too, Tomithy; Nekrutenko, Anton; Taylor, James
Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy.
Finally, the visualizations we have created using the framework are useful tools for high-throughput
Bush, Alex; Sollmann, Rahel; Wilting, Andreas
Understandably, given the fast pace of biodiversity loss, there is much interest in using Earth observation technology to track biodiversity, ecosystem functions and ecosystem services. However, because most biodiversity is invisible to Earth observation, indicators based on Earth observation could … be misleading and reduce the effectiveness of nature conservation and even unintentionally decrease conservation effort. We describe an approach that combines automated recording devices, high-throughput DNA sequencing and modern ecological modelling to extract much more of the information available in Earth …
Full Text Available Abstract Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage.
Cooper, Khershed P.
Interest in nano-scale manufacturing research and development is growing. The reason is to accelerate the translation of discoveries and inventions of nanoscience and nanotechnology into products that would benefit industry, economy and society. Ongoing research in nanomanufacturing is focused primarily on developing novel nanofabrication techniques for a variety of applications—materials, energy, electronics, photonics, biomedical, etc. Our goal is to foster the development of high-throughput methods of fabricating nano-enabled products. Large-area parallel processing and high-speed continuous processing are high-throughput means for mass production. An example of large-area processing is step-and-repeat nanoimprinting, by which nanostructures are reproduced again and again over a large area, such as a 12-inch wafer. Roll-to-roll processing is an example of continuous processing, by which it is possible to print and imprint multi-level nanostructures and nanodevices on a moving flexible substrate. The big pay-off is high-volume production and low unit cost. However, the anticipated cost benefits can only be realized if the increased production rate is accompanied by high yields of high quality products. To ensure product quality, we need to design and construct manufacturing systems such that the processes can be closely monitored and controlled. One approach is to bring cyber-physical systems (CPS) concepts to nanomanufacturing. CPS involves the control of a physical system such as manufacturing through modeling, computation, communication and control. Such a closely coupled system will involve in-situ metrology and closed-loop control of the physical processes guided by physics-based models and driven by appropriate instrumentation, sensing and actuation. This paper will discuss these ideas in the context of controlling high-throughput manufacturing at the nano-scale.
Kurokawa, Masaomi; Ying, Bei-Wen
Bacterial growth is a central concept in the development of modern microbial physiology, as well as in the investigation of cellular dynamics at the systems level. Recent studies have reported correlations between bacterial growth and genome-wide events, such as genome reduction and transcriptome reorganization. Correctly analyzing bacterial growth is crucial for understanding the growth-dependent coordination of gene functions and cellular components. Accordingly, the precise quantitative evaluation of bacterial growth in a high-throughput manner is required. Emerging technological developments offer new experimental tools that allow updates of the methods used for studying bacterial growth. The protocol introduced here employs a microplate reader with a highly optimized experimental procedure for the reproducible and precise evaluation of bacterial growth. This protocol was used to evaluate the growth of several previously described Escherichia coli strains. The main steps of the protocol are as follows: the preparation of a large number of cell stocks in small vials for repeated tests with reproducible results, the use of 96-well plates for high-throughput growth evaluation, and the manual calculation of two major parameters (i.e., maximal growth rate and population density) representing the growth dynamics. In comparison to the traditional colony-forming unit (CFU) assay, which counts the cells that are cultured in glass tubes over time on agar plates, the present method is more efficient and provides more detailed temporal records of growth changes, but has a stricter detection limit at low population densities. In summary, the described method is advantageous for the precise and reproducible high-throughput analysis of bacterial growth, which can be used to draw conceptual conclusions or to make theoretical observations.
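The "maximal growth rate" parameter mentioned above is conventionally the steepest slope of log-transformed optical density over time. A minimal sketch of that calculation, assuming evenly sampled OD readings; the window size and the demo data are illustrative, not the protocol's exact procedure:

```python
import math

def max_growth_rate(times, ods, window=3):
    """Estimate the maximal specific growth rate (per unit time) as the
    steepest least-squares slope of ln(OD) over a sliding window."""
    logs = [math.log(od) for od in ods]
    best = 0.0
    for i in range(len(times) - window + 1):
        t = times[i:i + window]
        y = logs[i:i + window]
        tm = sum(t) / window
        ym = sum(y) / window
        slope = (sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y))
                 / sum((ti - tm) ** 2 for ti in t))
        best = max(best, slope)
    return best

# demo: exponential growth at 0.7/h for 4 points, then saturation
times = [0, 1, 2, 3, 4, 5]
ods = [0.01 * math.exp(0.7 * t) for t in times[:4]] + [0.12, 0.13]
```

On the demo series the steepest window lies entirely in the exponential phase, so the estimate recovers the 0.7/h rate.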
Amin, A; Thomas, M; Bockelman, B; Letts, J; Martin, T; Pi, H; Sfiligoi, I; Wüerthwein, F; Levshina, T
Hadoop distributed file system (HDFS) is becoming more popular in recent years as a key building block of integrated grid storage solution in the field of scientific computing. Wide Area Network (WAN) data transfer is one of the important data operations for large high energy physics experiments to manage, share and process datasets of PetaBytes scale in a highly distributed grid computing environment. In this paper, we present the experience of high throughput WAN data transfer with HDFS-based Storage Element. Two protocols, GridFTP and fast data transfer (FDT), are used to characterize the network performance of WAN data transfer.
Studholme, David J; Glover, Rachel H; Boonham, Neil
The new sequencing technologies are already making a big impact in academic research on medically important microbes and may soon revolutionize diagnostics, epidemiology, and infection control. Plant pathology also stands to gain from exploiting these opportunities. This manuscript reviews some applications of these high-throughput sequencing methods that are relevant to phytopathology, with emphasis on the associated computational and bioinformatics challenges and their solutions. Second-generation sequencing technologies have recently been exploited in genomics of both prokaryotic and eukaryotic plant pathogens. They are also proving to be useful in diagnostics, especially with respect to viruses. Copyright © 2011 by Annual Reviews. All rights reserved.
Santosh Kumar Upadhyay
Full Text Available The clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated protein (Cas) system facilitates targeted genome editing in organisms. Despite high demand for this system, finding a reliable tool for the determination of specific target sites in large genomic data remained challenging. Here, we report SSFinder, a Python script to perform high-throughput detection of specific target sites in large nucleotide datasets. SSFinder is a user-friendly tool, compatible with Windows, Mac OS, and Linux operating systems, and freely available online.
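The task SSFinder addresses, locating candidate Cas9 target sites, can be illustrated with the canonical SpCas9 rule: a 20-nt protospacer immediately followed by an NGG PAM. The sketch below scans both strands with an overlap-tolerant regex; it shows the generic site definition, not SSFinder's own code:

```python
import re

def find_cas9_sites(seq):
    """Return (start, strand, protospacer, PAM) tuples for 20-nt
    protospacers followed by an NGG PAM, on both strands (SpCas9 rule).
    Start positions on the '-' strand refer to the reverse complement."""
    comp = str.maketrans("ACGT", "TGCA")
    hits = []
    for strand, s in (("+", seq), ("-", seq.translate(comp)[::-1])):
        # lookahead lets overlapping sites all be reported
        for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", s):
            hits.append((m.start(), strand, m.group(1), m.group(2)))
    return hits
```

Real tools additionally score off-target similarity genome-wide, which is where the computational difficulty lies.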
Thrash, Adam; Arick, Mark; Peterson, Daniel G
The quality of data generated by high-throughput DNA sequencing tools must be rapidly assessed in order to determine how useful the data may be in making biological discoveries; higher quality data leads to more confident results and conclusions. Due to the ever-increasing size of data sets and the importance of rapid quality assessment, tools that analyze sequencing data should quickly produce easily interpretable graphics. Quack addresses these issues by generating information-dense visualizations from FASTQ files at a speed far surpassing other publicly available quality assurance tools in a manner independent of sequencing technology. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
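A basic building block of such quality assessment is the per-position mean Phred score across all reads. A minimal sketch, assuming well-formed four-line FASTQ records with Phred+33 encoding (Quack's actual visualizations are far richer than this):

```python
def per_position_mean_quality(fastq_text, offset=33):
    """Mean Phred quality at each read position, from FASTQ text
    (4 lines per record; Phred+33 encoding assumed)."""
    lines = fastq_text.strip().splitlines()
    sums, counts = [], []
    for i in range(3, len(lines), 4):  # every 4th line holds qualities
        for pos, ch in enumerate(lines[i]):
            if pos == len(sums):  # grow arrays for longer reads
                sums.append(0)
                counts.append(0)
            sums[pos] += ord(ch) - offset
            counts[pos] += 1
    return [s / c for s, c in zip(sums, counts)]
```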
Charles University in Prague, Faculty of Pharmacy in Hradec Králové, Department of Analytical Chemistry. Candidate: Lýdia Mihalčíková. Supervisor: Warunya Boonjob, Ph.D. Consultant: Doc. PharmDr. Hana Sklenářová, Ph.D. Work title: High throughput method for determination of caffeine in coffee drinks. Caffeine is a xanthine alkaloid acting as a stimulant of the heart and central nervous system. Quantification of caffeine in coffee drinks is significant to show how much caffeine was in each cup whi...
Fedorov, V. B.
An analysis is made of the current state and problems, as well as prospects for the development, of optical logic elements and threshold light amplifiers for high-throughput computing. An analysis is made of the specific case of a variant of an optical processor capable of 10^13–10^14 arithmetic operations per second under conditions of pipelined processing of two-dimensional arrays of multidigit binary operands. The basic requirements which must be satisfied by the parameters and characteristics of optical logic elements in such a processor are identified.
Full Text Available This paper reviews high-throughput screening enzyme assays developed in our laboratory over the last ten years. These enzyme assays were initially developed for the purpose of discovering catalytic antibodies by screening cell culture supernatants, but have proved generally useful for testing enzyme activities. Examples include TLC-based screening using acridone-labeled substrates, fluorogenic assays based on the β-elimination of umbelliferone or nitrophenol, and indirect assays such as the back-titration method with adrenaline and the copper-calcein fluorescence assay for amino acids.
Huber, Wolfgang; Carey, Vincent J.; Gentleman, Robert; Anders, Simon; Carlson, Marc; Carvalho, Benilton S.; Bravo, Hector Corrada; Davis, Sean; Gatto, Laurent; Girke, Thomas; Gottardo, Raphael; Hahne, Florian; Hansen, Kasper D.; Irizarry, Rafael A.; Lawrence, Michael; Love, Michael I.; MacDonald, James; Obenchain, Valerie; Oleś, Andrzej K.; Pagès, Hervé; Reyes, Alejandro; Shannon, Paul; Smyth, Gordon K.; Tenenbaum, Dan; Waldron, Levi; Morgan, Martin
Bioconductor is an open-source, open-development software project for the analysis and comprehension of high-throughput data in genomics and molecular biology. The project aims to enable interdisciplinary research, collaboration and rapid development of scientific software. Based on the statistical programming language R, Bioconductor comprises 934 interoperable packages contributed by a large, diverse community of scientists. Packages cover a range of bioinformatic and statistical applications. They undergo formal initial review and continuous automated testing. We present an overview for prospective users and contributors. PMID:25633503
Barsdell, Ben; Price, Daniel; Cranmer, Miles; Garsden, Hugh; Dowell, Jayce
Bifrost is a stream processing framework that eases the development of high-throughput processing CPU/GPU pipelines. It is designed for digital signal processing (DSP) applications within radio astronomy. Bifrost uses a flexible ring buffer implementation that allows different signal processing blocks to be connected to form a pipeline. Each block may be assigned to a CPU core, and the ring buffers are used to transport data to and from blocks. Processing blocks may be run on either the CPU or GPU, and the ring buffer will take care of memory copies between the CPU and GPU spaces.
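The ring-buffer glue between processing blocks can be sketched in plain Python as a bounded, thread-safe buffer; this is a simplified, CPU-only analogue, whereas Bifrost's real implementation also manages copies between CPU and GPU memory spaces:

```python
import threading

class RingBuffer:
    """Fixed-capacity ring buffer handing data blocks from one
    processing stage to the next."""
    def __init__(self, nslots):
        self.slots = [None] * nslots
        self.head = 0   # next slot to write
        self.tail = 0   # next slot to read
        self.count = 0
        self.lock = threading.Condition()

    def put(self, block):
        with self.lock:
            while self.count == len(self.slots):  # writer waits when full
                self.lock.wait()
            self.slots[self.head] = block
            self.head = (self.head + 1) % len(self.slots)
            self.count += 1
            self.lock.notify_all()

    def get(self):
        with self.lock:
            while self.count == 0:  # reader waits when empty
                self.lock.wait()
            block = self.slots[self.tail]
            self.tail = (self.tail + 1) % len(self.slots)
            self.count -= 1
            self.lock.notify_all()
            return block
```

A producer thread calls put() while a consumer thread calls get(); both block rather than overrun the buffer, which is what lets each pipeline stage run on its own core.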
The creation of a high-throughput screening facility within an organization is a difficult task, requiring a substantial investment of time, money, and organizational effort. Major issues to consider include the selection of equipment, the establishment of data analysis methodologies, and the formation of a group having the necessary competencies. If done properly, it is possible to build a screening system in incremental steps, adding new pieces of equipment and data analysis modules as the need grows. Based upon our experience with the creation of a small screening service, we present some guidelines to consider in planning a screening facility.
Bulaevskaya, V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sales, A. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
The need for adaptive sampling arises in the context of high throughput data because the rates of data arrival are many orders of magnitude larger than the rates at which they can be analyzed. A very fast decision must therefore be made regarding the value of each incoming observation and its inclusion in the analysis. In this report we discuss one approach to adaptive sampling, based on the new data point’s similarity to the other data points being considered for inclusion. We present preliminary results for one real and one synthetic data set.
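The similarity-based inclusion rule described above can be sketched as a streaming filter that keeps an incoming point only if it is sufficiently dissimilar from everything already retained. The 1-D absolute-distance measure and threshold below are illustrative stand-ins for whatever similarity metric the analysis uses:

```python
def adaptive_sample(stream, threshold):
    """Novelty-based adaptive sampling: keep an incoming observation
    only if its distance to every point kept so far exceeds
    `threshold`; otherwise discard it as redundant."""
    kept = []
    for x in stream:
        if all(abs(x - y) > threshold for y in kept):
            kept.append(x)
    return kept
```

Note the per-point cost grows with the retained set; practical high-rate versions would bound it with spatial indexing or reservoir-style summaries.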
Todd, Douglas W; Philip, Rohit C; Niihori, Maki; Ringle, Ryan A; Coyle, Kelsey R; Zehri, Sobia F; Zabala, Leanne; Mudery, Jordan A; Francis, Ross H; Rodriguez, Jeffrey J; Jacob, Abraham
Zebrafish animal models lend themselves to behavioral assays that can facilitate rapid screening of ototoxic, otoprotective, and otoregenerative drugs. Structurally similar to human inner ear hair cells, the mechanosensory hair cells on their lateral line allow the zebrafish to sense water flow and orient head-to-current in a behavior called rheotaxis. This rheotaxis behavior deteriorates in a dose-dependent manner with increased exposure to the ototoxin cisplatin, thereby establishing itself as an excellent biomarker for anatomic damage to lateral line hair cells. Building on work by our group and others, we have built a new, fully automated high-throughput behavioral assay system that uses automated image analysis techniques to quantify rheotaxis behavior. This novel system consists of a custom-designed swimming apparatus and an imaging system built around network-controlled Raspberry Pi microcomputers capturing infrared video. Automated analysis techniques detect individual zebrafish, compute their orientation, and quantify the rheotaxis behavior of a zebrafish test population, producing a powerful, high-throughput behavioral assay. Using our fully automated biological assay to test a standardized ototoxic dose of cisplatin against varying doses of compounds that protect or regenerate hair cells may facilitate rapid translation of candidate drugs into preclinical mammalian models of hearing loss.
Annala, M J; Parker, B C; Zhang, W; Nykter, M
Fusion genes are hybrid genes that combine parts of two or more original genes. They can form as a result of chromosomal rearrangements or abnormal transcription, and have been shown to act as drivers of malignant transformation and progression in many human cancers. The biological significance of fusion genes together with their specificity to cancer cells has made them into excellent targets for molecular therapy. Fusion genes are also used as diagnostic and prognostic markers to confirm cancer diagnosis and monitor response to molecular therapies. High-throughput sequencing has enabled the systematic discovery of fusion genes in a wide variety of cancer types. In this review, we describe the history of fusion genes in cancer and the ways in which fusion genes form and affect cellular function. We also describe computational methodologies for detecting fusion genes from high-throughput sequencing experiments, and the most common sources of error that lead to false discovery of fusion genes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Hu, Junqiang; Gondarenko, Alexander A; Dang, Alex P; Bashour, Keenan T; O'Connor, Roddy S; Lee, Sunwoo; Liapis, Anastasia; Ghassemi, Saba; Milone, Michael C; Sheetz, Michael P; Dustin, Michael L; Kam, Lance C; Hone, James C
We herein demonstrate the first 96-well plate platform to screen effects of micro- and nanotopographies on cell growth and proliferation. Existing high-throughput platforms test a limited number of factors and are not fully compatible with multiple types of testing and assays. This platform is compatible with high-throughput liquid handling, high-resolution imaging, and all multiwell plate-based instrumentation. We use the platform to screen for topographies and drug-topography combinations that have short- and long-term effects on T cell activation and proliferation. We coated nanofabricated "trench-grid" surfaces with anti-CD3 and anti-CD28 antibodies to activate T cells and assayed for interleukin 2 (IL-2) cytokine production. IL-2 secretion was enhanced at 200 nm trench width and >2.3 μm grating pitch; however, the secretion was suppressed at 100 nm width. Secretion on the grid trenches was further amplified by the addition of blebbistatin to reduce contractility. The 200 nm grid pattern was found to triple the number of T cells in long-term expansion, a result with direct clinical applicability in adoptive immunotherapy.
Full Text Available The growing need for rapid and accurate approaches to large-scale assessment of phenotypic characters in plants becomes more and more obvious in studies looking into relationships between genotype and phenotype. This need is due to the advent of high-throughput methods for the analysis of genomes. Nowadays, any genetic experiment involves data on thousands and tens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, the ruler) are of little use on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which warrants a much more rapid data acquisition, higher accuracy of the assessment of phenotypic features, measurement of new parameters of these features and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integration of genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.
Havrilla, George J.; Miller, Thomasin C.
Micro-x-ray fluorescence (MXRF) is a useful characterization tool for high-throughput screening of combinatorial libraries. Due to the increasing threat of use of chemical warfare (CW) agents both in military actions and against civilians by terrorist extremists, there is a strong push to improve existing methods and develop means for the detection of a broad spectrum of CW agents in a minimal amount of time to increase national security. This paper describes a combinatorial high-throughput screening technique for CW receptor discovery to aid in sensor development. MXRF can screen materials for elemental composition at the mesoscale level (tens to hundreds of micrometers). The key aspect of this work is the use of commercial MXRF instrumentation coupled with the inherent heteroatom elements within the target molecules of the combinatorial reaction to provide rapid and specific identification of lead species. The method is demonstrated by screening an 11-mer oligopeptide library for selective binding of the degradation products of the nerve agent VX. The identified oligopeptides can be used as selective molecular receptors for sensor development. The MXRF screening method is nondestructive, requires minimal sample preparation or special tags for analysis, and the screening time depends on the desired sensitivity.
Park, S-J; Saito-Adachi, M; Komiyama, Y; Nakai, K
Remarkable advances in high-throughput sequencing technologies have fundamentally changed our understanding of the genetic and epigenetic molecular bases underlying human health and diseases. As these technologies continue to revolutionize molecular biology leading to fresh perspectives, it is imperative to thoroughly consider the enormous excitement surrounding the technologies by highlighting the characteristics of platforms and their global trends as well as potential benefits and limitations. To date, with a variety of platforms, the technologies provide an impressive range of applications, including sequencing of whole genomes and transcriptomes, identifying of genome modifications, and profiling of protein interactions. Because these applications produce a flood of data, simultaneous development of bioinformatics tools is required to efficiently deal with the big data and to comprehensively analyze them. This review covers the major achievements and performances of the high-throughput sequencing and further summarizes the characteristics of their applications along with introducing applicable bioinformatics tools. Moreover, a step-by-step procedure for a practical transcriptome analysis is described employing an analytical pipeline. Clinical perspectives with special consideration to human oral health and diseases are also covered. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Full Text Available A number of cellular proteins localize to discrete foci within cells, for example DNA repair proteins, microtubule organizing centers, P bodies or kinetochores. It is often possible to measure the fluorescence emission from tagged proteins within these foci as a surrogate for the concentration of that specific protein. We wished to develop tools that would allow quantitation of fluorescence foci intensities in high-throughput studies. As proof of principle we have examined the kinetochore, a large multi-subunit complex that is critical for the accurate segregation of chromosomes during cell division. Kinetochore perturbations lead to aneuploidy, which is a hallmark of cancer cells. Hence, understanding kinetochore homeostasis and regulation are important for a global understanding of cell division and genome integrity. The 16 budding yeast kinetochores colocalize within the nucleus to form a single focus. Here we have created a set of freely-available tools to allow high-throughput quantitation of kinetochore foci fluorescence. We use this ‘FociQuant’ tool to compare methods of kinetochore quantitation and we show proof of principle that FociQuant can be used to identify changes in kinetochore protein levels in a mutant that affects kinetochore function. This analysis can be applied to any protein that forms discrete foci in cells.
Loskyll, Jonas; Stoewe, Klaus; Maier, Wilhelm F.
We review the state of the art and explain the need for better SO2 oxidation catalysts for the production of sulfuric acid. A high-throughput technology has been developed for the study of potential catalysts in the oxidation of SO2 to SO3. High-throughput methods are reviewed and the problems encountered with their adaptation to the corrosive conditions of SO2 oxidation are described. We show that while emissivity-corrected infrared thermography (ecIRT) can be used for primary screening, it is prone to errors because of the large variations in the emissivity of the catalyst surface. UV-visible (UV-Vis) spectrometry was selected instead as a reliable analysis method of monitoring the SO2 conversion. Installing plain sugar absorbents at reactor outlets proved valuable for the detection and quantitative removal of SO3 from the product gas before the UV-Vis analysis. We also overview some elements used for prescreening and those remaining after the screening of the first catalyst generations.
Reichelt, Wieland N; Kaineder, Andreas; Brillmann, Markus; Neutsch, Lukas; Taschauer, Alexander; Lohninger, Hans; Herwig, Christoph
The expression of pharmaceutically relevant proteins in Escherichia coli frequently triggers inclusion body (IB) formation caused by protein aggregation. In the scientific literature, substantial effort has been devoted to the quantification of IB size. However, particle-based methods used up to this point to analyze the physical properties of representative numbers of IBs lack sensitivity and/or orthogonal verification. Using high pressure freezing and automated freeze substitution for transmission electron microscopy (TEM), the cytosolic inclusion body structure was preserved within the cells. TEM imaging in combination with manual grey scale image segmentation allowed the quantification of relative areas covered by the inclusion body within the cytosol. As a high-throughput method, nanoparticle tracking analysis (NTA) enables one to derive the diameter of inclusion bodies in cell homogenate based on a measurement of the Brownian motion. The NTA analysis of fixated (glutaraldehyde) and non-fixated IBs suggests that high pressure homogenization destroys the native physiological shape of IBs. Nevertheless, the ratio of particle counts of non-fixated and fixated samples could potentially serve as a factor for particle stickiness. In this contribution, we establish image segmentation of TEM pictures as an orthogonal method to size biological particles in the cytosol of cells. More importantly, NTA has been established as a particle-based, fast and high-throughput method (1000-3000 particles), thus constituting a much more accurate and representative analysis than currently available methods. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
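The NTA sizing step converts a diffusion coefficient, estimated from the tracked Brownian motion, into a hydrodynamic diameter via the Stokes-Einstein relation d = kT / (3πηD). A minimal sketch, where the function name and the default temperature and water-viscosity values are illustrative assumptions:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(diff_coeff, temp_k=298.15, viscosity=8.9e-4):
    """Stokes-Einstein: sphere diameter (m) from a diffusion
    coefficient (m^2/s) estimated from tracked Brownian motion.
    Default viscosity is water at ~25 degrees C (Pa*s)."""
    return K_B * temp_k / (3 * math.pi * viscosity * diff_coeff)
```

In an NTA instrument, D itself comes from the mean squared displacement of each tracked particle, so the size estimate is only as good as the tracking statistics.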
Vázquez, Citlali; Mejia-Tlachi, Marlen; González-Chávez, Zabdi; Silva, Aketzalli; Rodríguez-Zavala, José Salud; Moreno-Sánchez, Rafael; Saavedra, Emma
Buthionine sulfoximine (BSO) induces decreased glutathione (GSH) and trypanothione [T(SH)2] pools in trypanosomatids, presumably because only gamma-glutamylcysteine synthetase (γECS) is blocked. However, some BSO effects cannot be explained by exclusive γECS inhibition; therefore, its effect on the T(SH)2 metabolism pathway in Trypanosoma cruzi was re-examined. Parasites exposed to BSO did not synthesize T(SH)2 even when supplemented with cysteine or GSH, suggesting trypanothione synthetase (TryS) inhibition by BSO. Indeed, recombinant γECS and TryS, but not GSH synthetase, were inhibited by BSO, and kinetics and docking analyses on a TcTryS 3D model suggested BSO binding at the GSH site. Furthermore, parasites overexpressing γECS and TryS showed ~50% decreased activities after BSO treatment. These results indicated that BSO is also an inhibitor of TryS. © 2017 Federation of European Biochemical Societies.
JUAN DIEGO MAYA
Full Text Available Proteins rich in sulfhydryl groups, such as metallothionein, are present in several strains of the parasite Trypanosoma cruzi, the etiological agent of Chagas' disease. Metallothionein-like protein concentrations ranged from 5.1 to 13.2 pmol/mg protein depending on the parasite strain and growth phase. Nifurtimox and benznidazole, used in the treatment of Chagas' disease, decreased metallothionein activity by approximately 70%. T. cruzi metallothionein was induced by ZnCl2. Metallothionein from T. cruzi was partially purified and its monobromobimane derivative showed a molecular weight of approximately 10,000 Da by SDS-PAGE analysis. The concentration of trypanothione, the major glutathione conjugate in T. cruzi, ranged from 3.8 to 10.8 nmol/mg protein, depending on the culture phase. The addition of buthionine sulfoximine to the protozoal culture considerably reduced the concentration of trypanothione and had no effect upon the metallothionein concentration. The possible contribution of metallothionein-like proteins to drug resistance in T. cruzi is discussed.
Richendrfer, Holly; Créton, Robbert
We have created a novel high-throughput imaging system for the analysis of behavior in 7-day-old zebrafish larvae in multi-lane plates. This system measures spontaneous behaviors and the response to an aversive stimulus, which is shown to the larvae via a PowerPoint presentation. The recorded images are analyzed with an ImageJ macro, which automatically splits the color channels, subtracts the background, and applies a threshold to identify the position of individual larvae in the lanes. We can then import the coordinates into an Excel sheet to quantify swim speed, preference for the edge or side of the lane, resting behavior, thigmotaxis, distance between larvae, and avoidance behavior. Subtle changes in behavior are easily detected using our system, making it useful for behavioral analyses after exposure to environmental toxicants or pharmaceuticals.
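Once per-frame centroids are exported, the behavioral metrics reduce to simple arithmetic over the coordinate track. A hypothetical sketch of two of them; the edge-zone fraction and lane geometry are invented for illustration and are not taken from the paper:

```python
import math

def swim_metrics(track, frame_dt, lane_width):
    """track: list of (x, y) centroids per frame for one larva.
    Returns mean swim speed and the fraction of frames spent near the
    lane edge (a thigmotaxis proxy). Thresholds are illustrative only."""
    # Distance travelled between consecutive frames
    dists = [math.dist(a, b) for a, b in zip(track, track[1:])]
    speed = sum(dists) / (frame_dt * len(dists))
    edge_zone = 0.2 * lane_width  # outer 20% of the lane counts as "edge"
    near_edge = sum(1 for _, y in track
                    if y < edge_zone or y > lane_width - edge_zone)
    return speed, near_edge / len(track)

# Four frames of a toy track, 0.5 s apart, in a 10-unit-wide lane:
track = [(0, 1), (3, 1), (6, 1), (6, 9)]
speed, edge_frac = swim_metrics(track, frame_dt=0.5, lane_width=10)
```

The same per-frame coordinates also yield resting behavior (frames below a speed threshold) and inter-larva distance (pairwise `math.dist` between tracks).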
Geissmann, Quentin; Garcia Rodriguez, Luis; Beckwith, Esteban J; French, Alice S; Jamasb, Arian R; Gilestro, Giorgio F
Here, we present the use of ethoscopes, which are machines for high-throughput analysis of behavior in Drosophila and other animals. Ethoscopes provide a software and hardware solution that is reproducible and easily scalable. They perform, in real-time, tracking and profiling of behavior by using a supervised machine learning algorithm, are able to deliver behaviorally triggered stimuli to flies in a feedback-loop mode, and are highly customizable and open source. Ethoscopes can be built easily by using 3D printing technology and rely on Raspberry Pi microcomputers and Arduino boards to provide affordable and flexible hardware. All software and construction specifications are available at http://lab.gilest.ro/ethoscope.
Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh Kumar; Sarkar, Soumik
Advances in automated and high-throughput imaging technologies have resulted in a deluge of high-resolution images and sensor data of plants. However, extracting patterns and features from this large corpus of data requires the use of machine learning (ML) tools to enable data assimilation and feature identification for stress phenotyping. Four stages of the decision cycle in plant stress phenotyping and plant breeding activities where different ML approaches can be deployed are (i) identification, (ii) classification, (iii) quantification, and (iv) prediction (ICQP). We provide here a comprehensive overview and user-friendly taxonomy of ML tools to enable the plant community to correctly and easily apply the appropriate ML tools and best-practice guidelines for various biotic and abiotic stress traits. Copyright © 2015 Elsevier Ltd. All rights reserved.
The potential of distributed processing systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of distributed computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of high-throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker: Miron Livny received a B.Sc. degree in Physics and Mat...
Tsapatsaris, Nikolaos; Beesley, Angela M.; Weiher, Norbert; Tatton, Helen; Schroeder, Sven L. M.; Dent, Andy J.; Mosselmans, Frederick J. W.; Tromp, Moniek; Russu, Sergio; Evans, John; Harvey, Ian; Hayama, Shu
We outline and demonstrate the feasibility of high-throughput (HT) in situ XAFS for synchrotron radiation studies. An XAS data acquisition and control system for the analysis of dynamic materials libraries under control of temperature and gaseous environments has been developed. The system is compatible with the 96-well industry standard and coupled to multi-stream quadrupole mass spectrometry (QMS) analysis of reactor effluents. An automated analytical workflow generates data quickly compared to traditional individual spectrum acquisition and analyses them in quasi-real time using an HT data analysis tool based on IFEFFIT. The system was used for the automated characterization of a library of 91 catalyst precursors containing ternary combinations of Cu, Pt, and Au on γ-Al2O3, and for the in situ characterization of Au catalysts supported on Al2O3 and TiO2.
Scientific experiments are producing huge amounts of data, and they continue increasing the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centres has shifted from coping efficiently with petabyte-scale storage to delivering high-quality data-processing throughput. The dimensioning of the internal components in high-throughput computing (HTC) data centres is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...
Sapkota, Rumakanta; Nicolaisen, Mogens
Culture-independent studies using next generation sequencing have revolutionized microbial ecology; however, oomycete ecology in soils is severely lagging behind. The aim of this study was to improve and validate standard techniques for using high throughput sequencing as a tool for studying oomycete...... agricultural fields in Denmark, and 11 samples from carrot tissue with symptoms of Pythium infection. Sequence data from the Pythium and Phytophthora mock communities showed that our strategy successfully detected all included species. Taxonomic assignments of OTUs from 26 soil samples showed that 95...... the usefulness of the method not only in soil DNA but also in a plant DNA background. In conclusion, we demonstrate a successful approach for pyrosequencing of oomycete communities using ITS1 as the barcode sequence with well-known primers for oomycete DNA amplification....
Full Text Available Efficient parallel screening of combinatorial libraries is one of the most challenging aspects of the high-throughput (HT) heterogeneous catalysis workflow. Today, a number of methods have been used in HT catalyst studies, including various optical, mass-spectrometry, and gas-chromatography techniques. Of these, rapid-scanning Fourier-transform infrared (FTIR) imaging is one of the fastest and most versatile screening techniques. Here, the new design of the 16-channel HT reactor is presented and test results for its accuracy and reproducibility are shown. The performance of the system was evaluated through the oxidation of CO over commercial Pd/Al2O3 and cobalt oxide nanoparticles synthesized with different reducer-reductant molar ratios, surfactant types, metal and surfactant concentrations, synthesis temperatures, and ramp rates.
List, Markus; Elnegaard, Marlene Pedersen; Schmidt, Steffen
High-throughput screening (HTS) has become an indispensable tool for the pharmaceutical industry and for biomedical research. A high degree of automation allows for experiments in the range of a few hundred up to several hundred thousand to be performed in close succession. The basis...... to be serially diluted before they can be used as assay plates. This process, however, leads to an explosion in the number of plates and samples to be tracked. Here, we present SAVANAH, the first tool to effectively manage molecular screening libraries across dilution series. It conveniently links (connects......) sample information from the library to experimental results from the assay plates. All results can be exported to the R statistical environment or piped into HiTSeekR (http://hitseekr.compbio.sdu.dk) for comprehensive follow-up analyses. In summary, SAVANAH supports the HTS community in managing...
Marcellin, Esteban; Nielsen, Lars Keld
The emergence of inexpensive, base-perfect genome editing is revolutionising biology. Modern industrial biotechnology exploits the advances in genome editing in combination with automation, analytics and data integration to build high-throughput automated strain engineering pipelines also known...... as biofoundries. Biofoundries replace the slow and inconsistent artisanal processes used to build microbial cell factories with an automated design–build–test cycle, considerably reducing the time needed to deliver commercially viable strains. Testing and hence learning remains relatively shallow, but recent...... advances in analytical chemistry promise to increase the depth of characterization possible. Analytics combined with models of cellular physiology in automated systems biology pipelines should enable deeper learning and hence a steeper pitch of the learning cycle. This review explores the progress...
Selekman, Joshua A; Qiu, Jun; Tran, Kristy; Stevens, Jason; Rosso, Victor; Simmons, Eric; Xiao, Yi; Janey, Jacob
High-throughput (HT) techniques built upon laboratory automation technology and coupled to statistical experimental design and parallel experimentation have enabled the acceleration of chemical process development across multiple industries. HT technologies are often applied to interrogate wide, often multidimensional experimental spaces to inform the design and optimization of any number of unit operations that chemical engineers use in process development. In this review, we outline the evolution of HT technology and provide a comprehensive overview of how HT automation is used throughout different industries, with a particular focus on chemical and pharmaceutical process development. In addition, we highlight the common strategies of how HT automation is incorporated into routine development activities to maximize its impact in various academic and industrial settings.
The recent advent of high throughput sequencing of nucleic acids (RNA and DNA) has vastly expanded research into the functional and structural biology of the genome of all living organisms (and even a few dead ones). With this enormous and exponential growth in biological data generation come...... equally large demands in data handling, analysis and interpretation, perhaps defining the modern challenge of the computational biologist of the post-genomic era. The first part of this thesis consists of a general introduction to the history, common terms and challenges of next generation sequencing......, focusing on oft encountered problems in data processing, such as quality assurance, mapping, normalization, visualization, and interpretation. Presented in the second part are scientific endeavors representing solutions to problems of two sub-genres of next generation sequencing. For the first flavor, RNA-sequencing...
Mohanraj, Bhavana; Hou, Chieh; Meloni, Gregory R; Cosgrove, Brian D; Dodge, George R; Mauck, Robert L
Articular cartilage enables efficient and near-frictionless load transmission, but suffers from poor inherent healing capacity. As such, cartilage tissue engineering strategies have focused on mimicking both compositional and mechanical properties of native tissue in order to provide effective repair materials for the treatment of damaged or degenerated joint surfaces. However, given the large number of design parameters available (e.g. cell sources, scaffold designs, and growth factors), it is difficult to conduct combinatorial experiments of engineered cartilage. This is particularly exacerbated when mechanical properties are a primary outcome, given the long time required for testing of individual samples. High throughput screening is utilized widely in the pharmaceutical industry to rapidly and cost-effectively assess the effects of thousands of compounds for therapeutic discovery. Here we adapted this approach to develop a high throughput mechanical screening (HTMS) system capable of measuring the mechanical properties of up to 48 materials simultaneously. The HTMS device was validated by testing various biomaterials and engineered cartilage constructs and by comparing the HTMS results to those derived from conventional single sample compression tests. Further evaluation showed that the HTMS system was capable of distinguishing and identifying 'hits', or factors that influence the degree of tissue maturation. Future iterations of this device will focus on reducing data variability, increasing force sensitivity and range, as well as scaling-up to even larger (96-well) formats. This HTMS device provides a novel tool for cartilage tissue engineering, freeing experimental design from the limitations of mechanical testing throughput. © 2013 Published by Elsevier Ltd.
Full Text Available Abstract Background Phloem-feeding insects are among the most devastating pests worldwide. They not only cause damage by feeding from the phloem, thereby depleting the plant of photo-assimilates, but also by vectoring viruses. Until now, the main way to prevent such problems has been the frequent use of insecticides. Applying resistant varieties would be a more environmentally friendly and sustainable solution. For this, resistant sources need to be identified first. Up to now there were no methods suitable for high throughput phenotyping of plant germplasm to identify sources of resistance towards phloem-feeding insects. Results In this paper we present a high throughput screening system to identify plants with an increased resistance against aphids. Its versatility is demonstrated using an Arabidopsis thaliana activation tag mutant line collection. This system consists of the green peach aphid Myzus persicae (Sulzer) and the circulative virus Turnip yellows virus (TuYV). In an initial screening, with one plant representing one mutant line, 13 virus-free mutant lines were identified by ELISA. Using seeds produced from these lines, the putative candidates were re-evaluated and characterized, resulting in nine lines with increased resistance towards the aphid. Conclusions This M. persicae-TuYV screening system is an efficient, reliable and quick procedure to identify, among thousands of mutated lines, those resistant to aphids. In our study, nine mutant lines with increased resistance against the aphid were selected among 5160 mutant lines in just 5 months by one person. The system can be extended to other phloem-feeding insects and circulative viruses to identify insect resistant sources from several collections, including for example genebanks and artificially prepared mutant collections.
Mandracchia, B.; Bianco, V.; Wang, Z.; Paturzo, M.; Bramanti, A.; Pioggia, G.; Ferraro, P.
Here we introduce a compact holographic microscope embedded onboard a Lab-on-a-Chip (LoC) platform. A wavefront division interferometer is realized by writing a polymer grating onto the channel to extract a reference wave from the object wave impinging on the LoC. A portion of the beam reaches the samples flowing along the channel path, carrying their information content to the recording device, while one of the diffraction orders from the grating acts as an off-axis reference wave. Polymeric micro-lenses are delivered onto the chip by Pyro-ElectroHydroDynamic (Pyro-EHD) inkjet printing techniques. Thus, all the required optical components are embedded onboard a pocket device, and fast, non-iterative reconstruction algorithms can be used. We use our device in combination with a novel high-throughput technique, named Space-Time Digital Holography (STDH). STDH exploits the samples' motion inside microfluidic channels to obtain a synthetic hologram, mapped in a hybrid space-time domain, and with intrinsically useful features. Indeed, a single Linear Sensor Array (LSA) is sufficient to build up a synthetic representation of the entire experiment (i.e. the STDH) with unlimited Field of View (FoV) along the scanning direction, independently of the magnification factor. The throughput of the imaging system is dramatically increased as STDH provides unlimited FoV, refocusable imaging of samples inside the liquid volume with no need for hologram stitching. To test our embedded STDH microscopy module, we counted, imaged and tracked in 3D with high throughput red blood cells moving inside the channel volume under non-ideal flow conditions.
Yu, Sheng; Chakrabortty, Abhishek; Liao, Katherine P; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi
Phenotyping algorithms are capable of accurately identifying patients with specific phenotypes from within electronic medical records systems. However, developing phenotyping algorithms in a scalable way remains a challenge due to the extensive human resources required. This paper introduces a high-throughput unsupervised feature selection method, which improves the robustness and scalability of electronic medical record phenotyping without compromising its accuracy. The proposed Surrogate-Assisted Feature Extraction (SAFE) method selects candidate features from a pool of comprehensive medical concepts found in publicly available knowledge sources. The target phenotype's International Classification of Diseases, Ninth Revision and natural language processing counts, acting as noisy surrogates to the gold-standard labels, are used to create silver-standard labels. Candidate features highly predictive of the silver-standard labels are selected as the final features. Algorithms were trained to identify patients with coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis using various numbers of labels to compare the performance of features selected by SAFE, a previously published automated feature extraction for phenotyping procedure, and domain experts. The out-of-sample area under the receiver operating characteristic curve and F-score from SAFE algorithms were remarkably higher than those from the other two, especially at small label sizes. SAFE advances high-throughput phenotyping methods by automatically selecting a succinct set of informative features for algorithm training, which in turn reduces overfitting and the needed number of gold-standard labels. SAFE also potentially identifies important features missed by automated feature extraction for phenotyping or experts. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please
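The core of surrogate-assisted selection is ranking candidate features by how well they track a noisy surrogate label. A toy sketch of that idea, assuming simple Pearson correlation as the predictiveness score; the feature names and data below are invented, and SAFE's actual scoring is more sophisticated:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def safe_select(features, surrogate, k):
    """Rank candidate features by |correlation| with the noisy surrogate
    label (e.g. an ICD-9 code count per patient) and keep the top k."""
    ranked = sorted(features,
                    key=lambda f: -abs(pearson(features[f], surrogate)))
    return ranked[:k]

# Toy data: surrogate = ICD-9 counts per patient; feature "a" tracks it.
surrogate = [0, 1, 2, 4, 8]
features = {"a": [0, 1, 2, 5, 7], "noise": [3, 1, 4, 1, 5]}
top = safe_select(features, surrogate, k=1)  # -> ["a"]
```

Features surviving this screen would then be used to train the actual phenotyping classifier against a small set of gold-standard labels.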
Wu, Xiao-Lin; Beissinger, Timothy M; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J M; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel
High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin-Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized
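The gain described above comes from evaluating traits concurrently instead of sequentially. A minimal sketch of that pattern; the trait names, marker data and "model fit" are placeholders, and on a real cluster each call would be a separate batch job submitted to a scheduler such as HTCondor rather than a local thread:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_trait(trait, markers):
    """Stand-in for one genomic-prediction model fit (hypothetical).
    A real pipeline would run each trait as an independent cluster job."""
    return trait, sum(markers) / len(markers)  # dummy "predicted merit"

traits = ["milk_yield", "fertility", "stature"]
markers = [0, 1, 1, 0, 1]  # toy marker genotypes shared by all fits

# Instead of evaluating traits one after another, run them concurrently --
# the same idea HTC scales out across a distributed cluster.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(pool.map(lambda t: evaluate_trait(t, markers), traits))
```

With independent traits the total wall time approaches that of the slowest single fit, which is exactly the throughput argument the abstract makes.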
Cregan Perry B
Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs), as defined here, are single base sequence changes or short insertion/deletions between or within individuals of a given species. As a result of their abundance and the availability of high throughput analysis technologies, SNP markers have begun to replace other traditional markers such as restriction fragment length polymorphisms (RFLPs), amplified fragment length polymorphisms (AFLPs) and simple sequence repeats (SSRs or microsatellite markers) for fine mapping and association studies in several species. For SNP discovery from chromatogram data, several bioinformatics programs have to be combined to generate an analysis pipeline. Results have to be stored in a relational database to facilitate interrogation through queries or to generate data for further analyses such as determination of linkage disequilibrium and identification of common haplotypes. Although these tasks are routinely performed by several groups, an integrated open source SNP discovery pipeline that can be easily adapted by new groups interested in SNP marker development is currently unavailable. Results We developed SNP-PHAGE (SNP discovery Pipeline with additional features for identification of common haplotypes within a sequence tagged site (Haplotype Analysis) and GenBank (dbSNP) submissions). This tool was applied for analyzing sequence traces from diverse soybean genotypes to discover over 10,000 SNPs. This package was developed on a UNIX/Linux platform, written in Perl, and uses a MySQL database. Scripts to generate a user-friendly web interface are also provided, with common queries for preliminary data analysis. A machine learning tool developed by this group for increasing the efficiency of SNP discovery is integrated as a part of this package as an optional feature. The SNP-PHAGE package is being made available open source at http://bfgl.anri.barc.usda.gov/ML/snp-phage/. Conclusion SNP-PHAGE provides a bioinformatics
Giollo, Manuel; Minervini, Giovanni; Scalzotto, Marta; Leonardi, Emanuela; Ferrari, Carlo; Tosatto, Silvio C E
Over the last decade, we have witnessed an incredible growth in the amount of available genotype data due to high throughput sequencing (HTS) techniques. This information may be used to predict phenotypes of medical relevance, and pave the way towards personalized medicine. Blood phenotypes (e.g. ABO and Rh) are a purely genetic trait that has been extensively studied for decades, with currently over thirty known blood groups. Given the public availability of blood group data, it is of interest to predict these phenotypes from HTS data which may translate into more accurate blood typing in clinical practice. Here we propose BOOGIE, a fast predictor for the inference of blood groups from single nucleotide variant (SNV) databases. We focus on the prediction of thirty blood groups ranging from the well known ABO and Rh, to the less studied Junior or Diego. BOOGIE correctly predicted the blood group with 94% accuracy for the Personal Genome Project whole genome profiles where good quality SNV annotation was available. Additionally, our tool produces a high quality haplotype phase, which is of interest in the context of ethnicity-specific polymorphisms or traits. The versatility and simplicity of the analysis make it easily interpretable and allow easy extension of the protocol towards other phenotypes. BOOGIE can be downloaded from URL http://protein.bio.unipd.it/download/.
Mann, Sarah K; Czuba, Ewa; Selby, Laura I; Such, Georgina K; Johnston, Angus P R
The internalization of nanoparticles into cells is critical for effective nanoparticle-mediated drug delivery. To investigate the kinetics and mechanism of internalization of nanoparticles into cells, we have developed a DNA molecular sensor, termed the Specific Hybridization Internalization Probe (SHIP). Self-assembling polymeric 'pHlexi' nanoparticles were functionalized with a Fluorescent Internalization Probe (FIP) and the interactions with two different cell lines (3T3 and CEM cells) were studied. The kinetics of internalization were quantified, and chemical inhibitors of energy-dependent endocytosis (sodium azide), dynamin-dependent endocytosis (Dyngo-4a) and macropinocytosis (5-(N-ethyl-N-isopropyl) amiloride (EIPA)) were used to study the mechanism of internalization. Nanoparticle internalization kinetics were significantly faster in 3T3 cells than in CEM cells. We have shown that ~90% of the nanoparticles associated with 3T3 cells were internalized, compared to only 20% of the nanoparticles associated with CEM cells. Nanoparticle uptake was via a dynamin-dependent pathway, and the nanoparticles were trafficked to lysosomal compartments once internalized. SHIP is able to distinguish nanoparticles associated with the outer cell membrane from nanoparticles that are internalized. This study demonstrates that the assay can be used to probe the kinetics of nanoparticle internalization and the mechanisms by which the nanoparticles are taken up by cells. This information is fundamental for engineering more effective nanoparticle delivery systems. The SHIP assay is a simple and high-throughput technique that could have wide application in therapeutic delivery research.
Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John
Recently, machine vision has been gaining attention in food science as well as in the food industry for food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only for estimation and even prediction of food quality but also for detection of adulteration. Towards these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility and low-cost information extraction, and to speed up quality assessment, without human intervention. The outcome of image processing is then propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian mixture models) and a novel unsupervised scheme of spectral band selection for optimizing the segmentation process. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high throughput approach appropriate for massive data extraction from food samples.
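At the core of such segmentation is fitting a Gaussian mixture to pixel values with EM and assigning each pixel to the most responsible component. A toy one-dimensional, two-component sketch; the real method works on multispectral pixel vectors, and the intensities and initialization below are invented for illustration:

```python
import math

def gmm2_segment(pixels, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM and return a
    binary mask (True = component initialized at the brighter extreme).
    A toy stand-in for multispectral GMM segmentation; no guards for
    degenerate/empty components."""
    mu = [min(pixels), max(pixels)]   # init means at the extremes
    var = [0.1, 0.1]                  # small init variances for [0,1] data
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 1 for each pixel
        r = []
        for x in pixels:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in (0, 1)]
            r.append(p[1] / (p[0] + p[1]))
        # M-step: re-estimate weights, means and variances
        n1 = sum(r)
        n0 = len(pixels) - n1
        w = [n0 / len(pixels), n1 / len(pixels)]
        mu = [sum((1 - ri) * x for ri, x in zip(r, pixels)) / n0,
              sum(ri * x for ri, x in zip(r, pixels)) / n1]
        var = [max(1e-6, sum((1 - ri) * (x - mu[0]) ** 2
                             for ri, x in zip(r, pixels)) / n0),
               max(1e-6, sum(ri * (x - mu[1]) ** 2
                             for ri, x in zip(r, pixels)) / n1)]
    return [ri > 0.5 for ri in r]

# Dark background pixels vs bright foreground pixels (toy intensities):
mask = gmm2_segment([0.1, 0.2, 0.15, 0.8, 0.9, 0.85])
```

The band-selection scheme in the abstract would choose which spectral channels feed this kind of mixture fit before segmentation.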
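The core unsupervised segmentation step of such a method can be illustrated by fitting a Gaussian Mixture Model to per-pixel spectra and reading cluster labels back as a segmentation mask. The sketch below uses scikit-learn and omits the paper's band-selection scheme; all parameters are illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_multispectral(image, n_classes=2, random_state=0):
    """Cluster the pixels of an (H, W, B) multispectral image into n_classes
    regions by fitting a Gaussian Mixture Model to the per-pixel spectra."""
    h, w, b = image.shape
    pixels = image.reshape(-1, b)  # one B-dimensional spectrum per pixel
    gmm = GaussianMixture(n_components=n_classes, random_state=random_state)
    labels = gmm.fit_predict(pixels)
    return labels.reshape(h, w)

# toy example: two spectrally distinct halves of a 3-band image
rng = np.random.default_rng(0)
img = np.zeros((10, 10, 3))
img[:, 5:, :] = 1.0
mask = segment_multispectral(img + 0.01 * rng.normal(size=img.shape))
```

In the real pipeline the mask would feed the downstream quality-assessment analysis; band selection would choose which of the B channels enter the fit.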
Miller, Neil A; Kingsmore, Stephen F; Farmer, Andrew; Langley, Raymond J; Mudge, Joann; Crow, John A; Gonzalez, Alvaro J; Schilkey, Faye D; Kim, Ryan J; van Velkinburgh, Jennifer; May, Gregory D; Black, C Forrest; Myers, M Kathy; Utsey, John P; Frost, Nicholas S; Sugarbaker, David J; Bueno, Raphael; Gullans, Stephen R; Baxter, Susan M; Day, Steve W; Retzel, Ernest F
High-throughput DNA sequencing has enabled systems biology to begin to address areas in health, agricultural and basic biological research. Concomitant with the opportunities is an absolute necessity to manage significant volumes of high-dimensional and inter-related data and analysis. Alpheus is an analysis pipeline, database and visualization software for use with massively parallel DNA sequencing technologies that feature multi-gigabase throughput characterized by relatively short reads, such as Illumina-Solexa (sequencing-by-synthesis), Roche-454 (pyrosequencing) and Applied Biosystems' SOLiD (sequencing-by-ligation). Alpheus enables alignment to reference sequence(s), detection of variants and enumeration of sequence abundance, including expression levels in transcriptome sequence. Alpheus is able to detect several types of variants, including non-synonymous and synonymous single nucleotide polymorphisms (SNPs), insertions/deletions (indels), premature stop codons, and splice isoforms. Variant detection is aided by the ability to filter variant calls based on consistency, expected allele frequency, sequence quality, coverage, and variant type in order to minimize false positives while maximizing the identification of true positives. Alpheus also enables comparisons of genes with variants between cases and controls or bulk segregant pools. Sequence-based differential expression comparisons can be developed, with data export to SAS JMP Genomics for statistical analysis.
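The variant-filtering step described above is, at its core, a conjunction of per-call thresholds. A minimal sketch of that logic follows; the field names and default thresholds are hypothetical, not Alpheus's actual parameters:

```python
def passes_filters(call, min_coverage=10, min_quality=30.0, min_allele_freq=0.2):
    """Accept a variant call only if it clears read-depth, quality and
    allele-frequency thresholds: the kind of filtering used to suppress
    false positives while retaining true variants."""
    return (call["coverage"] >= min_coverage
            and call["quality"] >= min_quality
            and call["allele_freq"] >= min_allele_freq)

calls = [
    {"pos": 101, "coverage": 42, "quality": 55.0, "allele_freq": 0.48},  # solid het SNP
    {"pos": 202, "coverage": 3,  "quality": 60.0, "allele_freq": 1.00},  # too shallow
    {"pos": 303, "coverage": 88, "quality": 12.0, "allele_freq": 0.51},  # low quality
]
kept = [c["pos"] for c in calls if passes_filters(c)]
print(kept)  # [101]
```

Raising the thresholds trades sensitivity for specificity, which is the balance between false positives and true positives the abstract refers to.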
Giuseppina Li Pira
Mapping of antigenic peptide sequences from proteins of relevant pathogens recognized by T helper (Th) and by cytolytic T lymphocytes (CTL) is crucial for vaccine development. In fact, mapping of T-cell epitopes provides useful information for the design of peptide-based vaccines and of peptide libraries to monitor specific cellular immunity in protected individuals, patients and vaccinees. Nevertheless, epitope mapping is a challenging task, since large panels of overlapping peptides need to be tested with lymphocytes to identify the sequences that induce a T-cell response. Because numerous peptide panels from antigenic proteins are to be screened, the lymphocytes available from human subjects are a limiting factor. To overcome this limitation, high-throughput (HTP) approaches based on miniaturization and automation of T-cell assays are needed. Here we consider the most recent applications of the HTP approach to T-cell epitope mapping. The alternative or complementary use of in silico prediction and experimental epitope definition is discussed in the context of the recent literature. The currently used methods are described, with special reference to the possibility of applying the HTP concept to make epitope mapping an easier procedure in terms of time, workload, reagents, cells and overall cost.
AVA Solar has developed a very low-cost solar photovoltaic (PV) manufacturing process and has demonstrated the significant economic and commercial potential of this technology. This I & I Category 3 project provided significant assistance toward accomplishing these milestones. The original goals of this project were to design, construct and test a production prototype system, fabricate PV modules and test module performance. The module manufacturing costs in the original proposal were estimated at $2/Watt. The objectives of this project have been exceeded. An advanced processing line was designed, fabricated and installed. Using this automated, high-throughput system, high-efficiency devices and fully encapsulated modules were manufactured. AVA Solar has obtained two rounds of private equity funding, expanded to 50 people and initiated the development of a large-scale factory for 100+ megawatts of annual production. Modules will be manufactured at an industry-leading cost, which will enable AVA Solar's modules to produce power that is cost-competitive with traditional energy resources. With low manufacturing costs and the ability to scale manufacturing, AVA Solar has been contacted by some of the largest customers in the PV industry to negotiate long-term supply contracts. The market for PV has continued to grow at 40%+ per year for nearly a decade and is projected to reach $40-$60 billion by 2012. Currently, a crystalline silicon raw-material supply shortage is limiting growth and raising costs. Our process does not use silicon, eliminating these limitations.
Wagner, Stefanie; Lagane, Frédéric; Seguin-Orlando, Andaine; Schubert, Mikkel; Leroy, Thibault; Guichoux, Erwan; Chancerel, Emilie; Bech-Hebelstrup, Inger; Bernard, Vincent; Billard, Cyrille; Billaud, Yves; Bolliger, Matthias; Croutsch, Christophe; Čufar, Katarina; Eynaud, Frédérique; Heussner, Karl Uwe; Köninger, Joachim; Langenegger, Fabien; Leroy, Frédéric; Lima, Christine; Martinelli, Nicoletta; Momber, Garry; Billamboz, André; Nelle, Oliver; Palomo, Antoni; Piqué, Raquel; Ramstein, Marianne; Schweichel, Roswitha; Stäuble, Harald; Tegel, Willy; Terradas, Xavier; Verdin, Florence; Plomion, Christophe; Kremer, Antoine; Orlando, Ludovic
Reconstructing the colonization and demographic dynamics that gave rise to extant forests is essential for forecasting forest responses to environmental changes. Classical approaches to mapping how populations of trees changed through space and time rely largely on pollen distribution patterns, and only a limited number of studies have exploited DNA molecules preserved in archaeological and subfossil wood remains. Here, we advance such analyses by applying high-throughput DNA sequencing (HTS) to archaeological and subfossil wood for the first time, using a comprehensive sample of 167 European white oak waterlogged remains spanning a large temporal (from 550 to 9,800 years) and geographical range across Europe. The successful characterization of the endogenous DNA and exogenous microbial DNA of 140 (~83%) samples helped identify environmental conditions favouring long-term DNA preservation in wood remains, and began to unveil the first trends in the DNA decay process in wood material. Additionally, the maternally inherited chloroplast haplotypes of 21 samples from three periods of human-induced forest use (Neolithic, Bronze Age and Middle Ages) were found to be consistent with those of modern populations growing in the same geographic areas. Our work paves the way for further studies aiming to use ancient DNA preserved in wood to reconstruct the micro-evolutionary response of trees to climate change and human forest management. © 2018 John Wiley & Sons Ltd.
Sørensen, Lasse Maretty
High-throughput sequencing has the potential to answer many of the big questions in biology and medicine. It can be used to determine the ancestry of species, to chart complex ecosystems and to understand and diagnose disease. However, going from raw sequencing data to biological or medical insights […] By estimating the genotypes on a set of candidate variants obtained from both a standard mapping-based approach and de novo assemblies, we are able to find considerably more structural variation than previous studies […] for reconstructing transcript sequences from RNA sequencing data. The method is based on a novel sparse prior distribution over transcript abundances and is markedly more accurate than existing approaches. The second chapter describes a new method for calling genotypes from a fixed set of candidate variants […] The method queries the reads using a graph representation of the variants and thereby mitigates the reference bias that characterises standard genotyping methods. In the last chapter, we apply this method to call the genotypes of 50 deeply sequenced parent-offspring trios from the GenomeDenmark project […]
Röst, Hannes L; Rosenberger, George; Aebersold, Ruedi; Malmström, Lars
Targeted mass spectrometry comprises a set of powerful methods to obtain accurate and consistent protein quantification in complex samples. To fully exploit these techniques, a cross-platform and open-source software stack based on standardized data exchange formats is required. We present TAPIR, a fast and efficient Python visualization software for chromatograms and peaks identified in targeted proteomics experiments. The input formats are open, community-driven standardized data formats (mzML for raw data storage and TraML encoding the hierarchical relationships between transitions, peptides and proteins). TAPIR is scalable to proteome-wide targeted proteomics studies (as enabled by SWATH-MS), allowing researchers to visualize high-throughput datasets. The framework integrates well with existing automated analysis pipelines and can be extended beyond targeted proteomics to other types of analyses. TAPIR is available for all computing platforms under the 3-clause BSD license at https://github.com/msproteomicstools/msproteomicstools. © The Author 2015. Published by Oxford University Press. All rights reserved.
Yaari, Gur; Uduman, Mohamed; Kleinstein, Steven H
High-throughput immunoglobulin sequencing promises new insights into the somatic hypermutation and antigen-driven selection processes that underlie B-cell affinity maturation and adaptive immunity. The ability to estimate positive and negative selection from these sequence data has broad applications not only for understanding the immune response to pathogens, but is also critical to determining the role of somatic hypermutation in autoimmunity and B-cell cancers. Here, we develop a statistical framework for Bayesian estimation of Antigen-driven SELectIoN (BASELINe) based on the analysis of somatic mutation patterns. Our approach represents a fundamental advance over previous methods by shifting the problem from one of simply detecting selection to one of quantifying selection. Along with providing a more intuitive means to assess and visualize selection, our approach allows, for the first time, comparative analysis between groups of sequences derived from different germline V(D)J segments. Application of this approach to next-generation sequencing data demonstrates different selection pressures for memory cells of different isotypes. This framework can easily be adapted to analyze other types of DNA mutation patterns resulting from a mutator that displays hot/cold-spots, substitution preference or other intrinsic biases.
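This is not BASELINe itself, but the underlying intuition, detecting selection as a deviation of the observed replacement-mutation count from a no-selection expectation, can be sketched with a one-sided binomial tail. The expected replacement fraction under no selection is an assumed input here (in practice it is derived from the germline sequence and the hypermutation targeting model):

```python
from math import comb

def selection_pvalue(n_replacement, n_silent, p_replacement_expected):
    """Toy caricature of selection analysis: given observed replacement (R)
    and silent (S) mutation counts, plus the expected R fraction under no
    selection, return the binomial probability of observing at least this
    many replacements (a one-sided p-value). Positive selection shows up
    as an excess of replacement mutations over the expectation."""
    n = n_replacement + n_silent
    p = p_replacement_expected
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n_replacement, n + 1))

# 10 mutations, all replacements, where only half were expected to be:
print(selection_pvalue(10, 0, 0.5))  # 0.5**10 ≈ 0.00098
```

BASELINe goes well beyond this two-category test: it produces a full posterior over selection strength, which is what enables the quantitative, comparative analyses described in the abstract.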
Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E
Scientific experiments are producing huge amounts of data, and the size of their datasets and the total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte-scale storage to delivering high-quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.
Joseph D Steinmeyer
The complexity of neurons and neuronal circuits in brain tissue requires the genetic manipulation, labeling, and tracking of single cells. However, current methods for manipulating cells in brain tissue are limited to either bulk techniques, lacking single-cell accuracy, or manual methods that provide single-cell accuracy but at significantly lower throughput and repeatability. Here, we demonstrate high-throughput, efficient, reliable, and combinatorial delivery of multiple genetic vectors and reagents into targeted cells within the same tissue sample with single-cell accuracy. Our system automatically loads nanoliter-scale volumes of reagents into a micropipette from multiwell plates, targets and transfects single cells in brain tissues using a robust electroporation technique, and finally preps the micropipette by automated cleaning for repeating the transfection cycle. We demonstrate multi-colored labeling of adjacent cells, both in organotypic and acute slices, and transfection of plasmids encoding different protein isoforms into neurons within the same brain tissue for analysis of their effects on linear dendritic spine density. Our platform could also be used to rapidly deliver, both ex vivo and in vivo, a variety of genetic vectors, including optogenetic and cell-type specific agents, as well as fast-acting reagents such as labeling dyes, calcium sensors, and voltage sensors to manipulate and track neuronal circuit activity at single-cell resolution.
Johnson, Marjory J.; Townsend, Jeffrey N.
File-transfer rates for ftp are often reported to be relatively slow, compared to the raw bandwidth available in emerging gigabit networks. While a major bottleneck is disk I/O, protocol issues impact performance as well. Ftp was developed and optimized for use over the TCP/IP protocol stack of the Internet. However, TCP has been shown to run inefficiently over ATM. In an effort to maximize network throughput, data-transfer protocols can be developed to run over UDP or directly over IP, rather than over TCP. If error-free transmission is required, techniques for achieving reliable transmission can be included as part of the transfer protocol. However, selected image-processing applications can tolerate a low level of errors in images that are transmitted over a network. In this paper we report on experimental work to develop a high-throughput protocol for unreliable data transfer over ATM networks. We attempt to maximize throughput by keeping the communications pipe full, but still keep packet loss under five percent. We use the Bay Area Gigabit Network Testbed as our experimental platform.
Csordas, Andrew T; Jørgensen, Anna; Wang, Jinpeng; Gruber, Emily; Gong, Qiang; Bagley, Elizabeth R; Nakamoto, Margaret A; Eisenstein, Michael; Soh, H Tom
Sandwich assays are among the most powerful tools in molecular detection. These assays use "pairs" of affinity reagents so that the detection signal is generated only when both reagents bind simultaneously to different sites on the target molecule, enabling highly sensitive and specific measurements in complex samples. Thus, the capability to efficiently screen affinity reagent pairs at a high throughput is critical. In this work, we describe an experimental strategy for screening "aptamer pairs" at a throughput of 10⁶ aptamer pairs per hour, which is many orders of magnitude higher than the current state of the art. The key step in our process is the conversion of solution-phase aptamers into "aptamer particles" such that we can directly measure the simultaneous binding of multiple aptamers to a target protein based on fluorescence signals and sort individual particles harboring aptamer pairs via a fluorescence-activated cell-sorter instrument. As proof of principle, we successfully isolated a high-quality DNA aptamer pair for plasminogen activator inhibitor 1 (PAI-1). Within only two rounds of screening, we discovered DNA aptamer pairs with low-nanomolar sensitivity in dilute serum and excellent specificity with minimal off-target binding, even to closely related proteins such as PAI-2.
Schumacher, Jörn; The ATLAS collaboration; Vandelli, Wainer
HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well-specified number of systems. Unfortunately, traditional network communication APIs for HPC clusters like MPI or PGAS target exclusively the HPC community and are not well suited to DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs (and this has been done), but it requires non-negligible effort and expert knowledge. On the other hand, message services like 0MQ have gained popularity in the HEP community. Such APIs allow developers to build distributed applications with a high-level approach and provide good performance. Unfortunately, their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on…
Turner, G. B.; Decker, S. R.; Tucker, M. P.; Law, C.; Doeppke, C.; Sykes, R. W.; Davis, M. F.; Ziebell, A.
This was a poster displayed at the Symposium. Advances on previous high-throughput screening methods for biomass recalcitrance have resulted in improved conversion and replicate precision. Changes in plate-reactor metallurgy, improved preparation of control biomass, species-specific pretreatment conditions, and enzymatic hydrolysis parameters have reduced overall coefficients of variation to an average of 6% for sample replicates. These method changes have reduced plate-to-plate variation in control biomass recalcitrance and improved confidence in sugar-release differences between samples. With smaller errors, plant researchers can have a higher degree of assurance that more low-recalcitrance candidates can be identified. Significant changes in the plate reactor, control biomass preparation, pretreatment conditions and enzyme have markedly reduced sample and control replicate variability. Reactor plate metallurgy significantly impacts sugar release: aluminum leaching into the reaction during pretreatment degrades sugars and inhibits enzyme activity. Removal of starch and extractives significantly decreases control biomass variability. New enzyme formulations give more consistent and higher conversion levels, but required re-optimization for switchgrass. Pretreatment time and temperature (severity) should be adjusted to specific biomass types, i.e. woody vs. herbaceous. Desalting of enzyme preparations to remove low-molecular-weight stabilizers also improved conversion levels, likely due to the impact of water activity on enzyme structure and substrate interactions; it was not adopted here because of the need to continually desalt and validate precise enzyme concentration and activity.
The risk posed to human health by any of the thousands of untested anthropogenic chemicals in our environment is a function of both the potential hazard presented by the chemical and the possibility of being exposed. Without the capacity to make quantitative, albeit uncertain, forecasts of exposure, the putative risk of adverse health effects from a chemical cannot be evaluated. We used Bayesian methodology to infer ranges of exposure intakes that are consistent with biomarkers of chemical exposure identified in urine samples from the U.S. population by the National Health and Nutrition Examination Survey (NHANES). We perform linear regression on inferred exposure for demographic subsets of NHANES demarcated by age, gender, and weight, using high-throughput chemical descriptors gleaned from databases and chemical structure-based calculators. We find that five of these descriptors are capable of explaining roughly 50% of the variability across chemicals for all the demographic groups examined, including children aged 6-11. For the thousands of chemicals with no other source of information, this approach allows rapid and efficient prediction of average exposure intake of environmental chemicals. The methods described in this manuscript provide a highly improved methodology for high-throughput screening of human exposure to environmental chemicals. The manuscript includes a ranking of 7785 environmental chemicals with respect to potential human exposure, including most of the Tox21 in vitro…
Gregoire Le Provost
A large quantity of high-quality RNA is often required for the analysis of gene expression. However, RNA extraction from woody plant samples is generally complex and represents the main limitation to studying gene expression, particularly in refractory species like conifers. Standard RNA extraction protocols are available, but they are highly time-consuming and not suited to large-scale extraction. Here we present a high-throughput RNA extraction protocol. The protocol was adapted to a micro scale by modifying the classical cetyltrimethylammonium bromide (CTAB) protocol developed for pine in three respects: (i) the quantity of material used (100-200 mg of sample), (ii) disruption of samples in microtubes using a mechanical tissue disrupter, and (iii) the use of SSTE buffer. One hundred samples of woody plant tissues/organs can easily be processed in two working days. An average of 15 µg of high-quality RNA per sample was obtained. The extracted RNA is suitable for applications such as real-time reverse transcription polymerase chain reaction, cDNA library construction or synthesis of complex targets for microarray analysis.
Hemmerich, Johannes; Wiechert, Wolfgang; Oldiges, Marco
The calculation of growth rates provides a basic metric for biological fitness and is a standard task when using microbioreactors (MBRs) in microbial phenotyping. MBRs readily produce huge amounts of data from parallelized high-throughput cultivations with online monitoring of biomass formation at high temporal resolution. The resulting high-density data need to be processed efficiently to accelerate experimental throughput. A MATLAB code is presented that detects the exponential growth phase in multiple microbial cultivations in an iterative procedure based on several criteria, according to the model of exponential growth. Data sets were obtained with Corynebacterium glutamicum, showing a single exponential growth phase, and Escherichia coli, exhibiting diauxic growth with an exponential phase followed by retarded growth. The procedure reproducibly detects the correct biomass data subset for growth rate calculation. The procedure was applied to a data set derived from growth phenotyping of a library of genome-reduced C. glutamicum strains, and the results agree with previously reported results for which manual effort was needed to pre-process the data. Thus, the automated and standardized method enables a fair comparison of strain mutants for biological fitness evaluation. The code is easily parallelized and greatly facilitates experimental throughput in biological fitness testing from strain screenings conducted with MBR systems.
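The core of such an exponential-phase detector can be illustrated in a few lines (in Python rather than MATLAB, with illustrative criteria that are not the paper's own): on a log scale exponential growth is a straight line, so one can search the biomass curve for the longest contiguous window in which a log-linear fit holds, and take its slope as the specific growth rate µ.

```python
import numpy as np

def detect_exponential_phase(t, x, r2_min=0.999, min_points=10):
    """Scan all contiguous windows of a biomass time series and return the
    specific growth rate (slope of ln(x) vs t) for the longest window in
    which the log-linear fit satisfies R^2 >= r2_min, i.e. in which the
    model of exponential growth holds."""
    t = np.asarray(t, dtype=float)
    logx = np.log(np.asarray(x, dtype=float))
    best = None  # (window_length, slope, lo, hi)
    for lo in range(len(t) - min_points + 1):
        for hi in range(lo + min_points, len(t) + 1):
            ts, ys = t[lo:hi], logx[lo:hi]
            slope, icept = np.polyfit(ts, ys, 1)
            ss_res = np.sum((ys - (slope * ts + icept)) ** 2)
            ss_tot = np.sum((ys - ys.mean()) ** 2)
            if ss_tot == 0 or slope <= 0:  # flat window: not growth
                continue
            if 1 - ss_res / ss_tot >= r2_min and (best is None or hi - lo > best[0]):
                best = (hi - lo, slope, lo, hi)
    if best is None:
        raise ValueError("no exponential phase found")
    return best[1], (best[2], best[3])

# synthetic curve: lag until t = 4 h, exponential growth (mu = 0.4 1/h)
# until t = 14 h, then stationary phase
t = np.linspace(0, 20, 81)
x = np.where(t < 4, 0.1,
             np.where(t < 14, 0.1 * np.exp(0.4 * (t - 4)), 0.1 * np.exp(4.0)))
mu, window = detect_exponential_phase(t, x)
```

The exhaustive window scan is O(n²) and serves only to make the selection criterion explicit; the published procedure instead trims points iteratively, which scales better to high-density MBR data.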
Rademeyer, Paul; Carugo, Dario; Lee, Jeong Yu; Stride, Eleanor
Echogenic particles, such as microbubbles and volatile liquid micro/nanodroplets, have shown considerable potential in a variety of clinical diagnostic and therapeutic applications. The accurate prediction of their response to ultrasound excitation is, however, extremely challenging, and this has hindered the optimisation of techniques such as quantitative ultrasound imaging and targeted drug delivery. Existing characterisation techniques, such as ultra-high-speed microscopy, provide important insights but suffer from a number of limitations, most significantly the difficulty of obtaining large data sets suitable for statistical analysis and the need to physically constrain the particles, thereby altering their dynamics. Here a microfluidic system is presented that overcomes these challenges to enable the measurement of the response of single echogenic particles to ultrasound excitation. A co-axial flow-focusing device is used to direct a continuous stream of unconstrained particles through the combined focal region of an ultrasound transducer and a laser. Both the optical and acoustic scatter from individual particles are then simultaneously recorded. Calibration of the device and example results for different types of echogenic particles are presented, demonstrating a high throughput of up to 20 particles per second and the ability to resolve changes in particle radius down to 0.1 μm with an uncertainty of less than 3%.
Liu, Mochi; Shaevitz, Joshua; Leifer, Andrew
We present a high-throughput method to probe transformations from neural activity to behavior in Caenorhabditis elegans, to better understand how organisms change behavioral states. We optogenetically deliver white-noise stimuli to targeted sensory neurons or interneurons while simultaneously recording the movement of a population of worms. Using all the postural movement data collected, we computationally classify stereotyped behaviors in C. elegans by clustering based on the spectral properties of the instantaneous posture (Berman et al., 2014). Transitions between these behavioral clusters indicate discrete behavioral changes. To study the neural correlates dictating these transitions, we perform model-driven experiments and employ Linear-Nonlinear-Poisson cascades that take the white-noise stimulus as the input. The parameters of these models are fitted by reverse correlation from our measurements. The parameterized models of behavioral transitions predict the worm's response to novel stimuli and reveal the internal computations the animal makes before carrying out behavioral decisions. Preliminary results are shown that describe the neural-behavioral transformation between activity in mechanosensory neurons and reversal behavior.
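The reverse-correlation step of fitting such a cascade can be sketched as follows: with a Gaussian white-noise stimulus, averaging the stimulus history that precedes each behavioral transition (the transition-triggered average) yields an estimate proportional to the model's linear filter. The demonstration below uses a synthetic filter and a simple threshold nonlinearity; all values are illustrative, not from the study:

```python
import numpy as np

def transition_triggered_average(stimulus, events, filter_len):
    """Reverse correlation: average the stimulus snippets preceding each
    event. For Gaussian white noise this average is proportional to the
    linear filter of the Linear-Nonlinear cascade that generated events."""
    idx = np.flatnonzero(events)
    snippets = [stimulus[i - filter_len + 1:i + 1]
                for i in idx if i >= filter_len - 1]
    return np.mean(snippets, axis=0)

# toy demonstration with a known exponential filter
rng = np.random.default_rng(0)
stim = rng.normal(size=200_000)
true_filter = np.exp(-np.arange(20)[::-1] / 5.0)  # most recent sample weighted most
# drive[i] = sum_k true_filter[k] * stim[i - (19 - k)]
drive = np.convolve(stim, true_filter[::-1], mode="full")[:len(stim)]
events = drive > 4.0  # threshold nonlinearity triggers "transitions"
estimate = transition_triggered_average(stim, events, filter_len=20)
```

Up to a scale factor, `estimate` recovers `true_filter`; a full LNP fit would additionally estimate the static nonlinearity mapping filtered stimulus to transition rate.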
Elsliger, Marc André; Deacon, Ashley M; Godzik, Adam; Lesley, Scott A; Wooley, John; Wüthrich, Kurt; Wilson, Ian A
The Joint Center for Structural Genomics (JCSG) high-throughput structural biology pipeline has delivered more than 1000 structures to the community over the past ten years. The JCSG has made a significant contribution to the overall goal of the NIH Protein Structure Initiative (PSI) of expanding structural coverage of the protein universe, as well as making substantial inroads into structural coverage of an entire organism. Targets are processed through an extensive combination of bioinformatics and biophysical analyses to efficiently characterize and optimize each target prior to selection for structure determination. The pipeline uses parallel processing methods at almost every step in the process and can adapt to a wide range of protein targets, from bacterial to human. The construction, expansion and optimization of the JCSG gene-to-structure pipeline over the years have resulted in many technological and methodological advances and developments. The vast number of targets and the enormous amounts of associated data processed through the multiple stages of the experimental pipeline required the development of a variety of valuable resources that, wherever feasible, have been converted into free-access web-based tools and applications.
Hyun, Woo Jin
Printed electronics is an emerging field for manufacturing electronic devices with low cost and minimal material waste for a variety of applications including displays, distributed sensing, smart packaging, and energy management. Moreover, its compatibility with roll-to-roll production formats and flexible substrates is desirable for continuous, high-throughput production of flexible electronics. Despite this promise, however, roll-to-roll production of printed electronics is quite challenging because web movement hinders accurate ink registration and high-fidelity printing. In this talk, I will present a promising strategy for roll-to-roll production using a novel printing process that we term SCALE (Self-aligned Capillarity-Assisted Lithography for Electronics). By utilizing the capillarity of liquid inks on nano/micro-structured substrates, the SCALE process enables high-resolution, self-aligned patterning of electrically functional inks with greatly improved printing tolerance. I will show the fabrication of key building blocks (e.g. transistors, resistors, capacitors) for electronic circuits using the SCALE process on plastics.
Addressing the neural mechanisms underlying complex learned behaviors requires training animals in well-controlled tasks, an often time-consuming and labor-intensive process that can severely limit the feasibility of such studies. To overcome this constraint, we developed a fully computer-controlled, general-purpose system for high-throughput training of rodents. By standardizing and automating the implementation of predefined training protocols within the animal's home cage, our system dramatically reduces the effort involved in animal training while also removing human errors and biases from the process. We deployed this system to train rats in a variety of sensorimotor tasks, achieving learning rates comparable to existing, but more laborious, methods. By incrementally and systematically increasing the difficulty of the task over weeks of training, rats were able to master motor tasks that, in complexity and structure, resemble ones used in primate studies of motor sequence learning. By enabling fully automated training of rodents in a home-cage setting, this low-cost and modular system increases the utility of rodents for studying the neural underpinnings of a variety of complex behaviors.
Widespread existence of antimicrobial peptides (AMPs) with comprehensive biological activities has been reported in various animals, consistent with the important role of AMPs as the first line of the host defense system. However, no big-data-based analysis of AMPs from any fish species is available. In this study, we identified 507 AMP transcripts on the basis of our previously reported genomes and transcriptomes of two representative amphibious mudskippers, Boleophthalmus pectinirostris (BP) and Periophthalmus magnuspinnatus (PM). The former is predominantly aquatic, spending little time out of water, while the latter is primarily terrestrial, spending extended periods of time on land. Of these identified AMPs, 449 sequences are novel; 15 were reported previously in BP; 48 are identical between BP and PM; and 94 were validated by mass spectrometry. Moreover, most AMPs presented differential tissue transcription patterns in the two mudskippers. Interestingly, we discovered two AMPs, hemoglobin β1 and amylin, with strong inhibitory activity against Micrococcus luteus. In conclusion, our high-throughput screening strategy based on genomic and transcriptomic data opens an efficient pathway to discovering new antimicrobial peptides for the ongoing development of marine drugs.
Full Text Available BACKGROUND: Large efforts have recently been made to automate the sample preparation protocols for massively parallel sequencing in order to match the increasing instrument throughput. Still, the size selection through agarose gel electrophoresis separation is a labor-intensive bottleneck of these protocols. METHODOLOGY/PRINCIPAL FINDINGS: In this study a method for automatic library preparation and size selection on a liquid handling robot is presented. The method utilizes selective precipitation of certain sizes of DNA molecules on to paramagnetic beads for cleanup and selection after standard enzymatic reactions. CONCLUSIONS/SIGNIFICANCE: The method is used to generate libraries for de novo and re-sequencing on the Illumina HiSeq 2000 instrument with a throughput of 12 samples per instrument in approximately 4 hours. The resulting output data show quality scores and pass filter rates comparable to manually prepared samples. The sample size distribution can be adjusted for each application, and the method is suitable for all high-throughput DNA processing protocols seeking to control size intervals.
Barker, Gregory A; Calzada, Joseph; Herzer, Sibylle; Rieble, Siegfried
High throughput process development offers unique approaches to explore complex process design spaces with relatively low material consumption. Batch chromatography is one technique that can be used to screen chromatographic conditions in a 96-well plate. Typical batch chromatography workflows examine variations in buffer conditions or comparison of multiple resins in a given process, as opposed to the assessment of protein loading conditions in combination with other factors. A modification to the batch chromatography paradigm is described here where experimental planning, programming, and a staggered loading approach increase the multivariate space that can be explored with a liquid handling system. The iterative batch chromatography (IBC) approach is described, which treats every well in a 96-well plate as an individual experiment, wherein protein loading conditions can be varied alongside other factors such as wash and elution buffer conditions. As all of these factors are explored in the same experiment, the interactions between them are characterized and the number of follow-up confirmatory experiments is reduced. This in turn improves statistical power and throughput. Two examples of the IBC method are shown and the impact of the load conditions is assessed in combination with the other factors explored. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Paul Daniel Phillips
Full Text Available Due to low cost, speed, and unmatched ability to explore large numbers of compounds, high throughput virtual screening and molecular docking engines have become widely utilized by computational scientists. It is generally accepted that docking engines, such as AutoDock, produce reliable qualitative results for ligand-macromolecular receptor binding, and molecular docking results are commonly reported in literature in the absence of complementary wet lab experimental data. In this investigation, three variants of the sixteen amino acid peptide, α-conotoxin MII, were docked to a homology model of the α3β2-nicotinic acetylcholine receptor. DockoMatic version 2.0 was used to perform a virtual screen of each peptide ligand to the receptor for ten docking trials consisting of 100 AutoDock cycles per trial. The results were analyzed for both variation in the calculated binding energy obtained from AutoDock, and the orientation of bound peptide within the receptor. The results show that, while no clear correlation exists between consistent ligand binding pose and the calculated binding energy, AutoDock is able to determine a consistent positioning of bound peptide in the majority of trials when at least ten trials were evaluated.
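The trial-to-trial analysis described above, comparing calculated binding energies against the consistency of bound poses, can be sketched in a few lines. All numbers and the pose-cluster labels below are hypothetical stand-ins, not data from the study:

```python
from statistics import mean, stdev

def summarize_trials(energies, poses):
    """Summarize repeated docking trials.

    energies: best binding energy per trial (kcal/mol).
    poses: a pose-cluster label per trial (e.g. from RMSD clustering of
    the bound peptide orientation).
    Returns mean/stdev of energy, the most common pose, and the fraction
    of trials agreeing on it.
    """
    top_pose = max(set(poses), key=poses.count)
    consistency = poses.count(top_pose) / len(poses)
    return mean(energies), stdev(energies), top_pose, consistency

# Hypothetical results from ten trials of 100 docking cycles each.
energies = [-8.1, -7.9, -8.3, -6.5, -8.0, -8.2, -7.8, -8.1, -6.9, -8.0]
poses = ["A", "A", "A", "B", "A", "A", "A", "A", "C", "A"]
e_mean, e_sd, pose, frac = summarize_trials(energies, poses)
print(round(e_mean, 2), pose, frac)  # pose "A" recovered in 8 of 10 trials
```

A high pose consistency with a wide energy spread (or vice versa) is exactly the decoupling the abstract reports between binding pose and calculated energy.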
Sawetzki, Tobias; Eggleton, Charles D.; Desai, Sanjay A.; Marr, David W.M.
The mechanical properties of living cells are a label-free biophysical marker of cell viability and health; however, their use has been greatly limited by low measurement throughput. Although examining individual cells at high rates is now commonplace with fluorescence activated cell sorters, development of comparable techniques that nondestructively probe cell mechanics remains challenging. A fundamental hurdle is the signal response time. Where light scattering and fluorescence signatures are virtually instantaneous, the cell stress relaxation, typically occurring on the order of seconds, limits the potential speed of elastic property measurement. To overcome this intrinsic barrier to rapid analysis, we show here that cell viscoelastic properties measured at frequencies far higher than those associated with cell relaxation can be used as a means of identifying significant differences in cell phenotype. In these studies, we explore changes in erythrocyte mechanical properties caused by infection with Plasmodium falciparum and find that the elastic response alone fails to detect malaria at high frequencies. At timescales associated with rapid assays, however, we observe that the inelastic response shows significant changes and can be used as a reliable indicator of infection, establishing the dynamic viscoelasticity as a basis for nondestructive mechanical analogs of current high-throughput cell classification methods. PMID:24268140
Tickle, Ian; Sharff, Andrew; Vinkovic, Mladen; Yon, Jeff; Jhoti, Harren
Single crystal X-ray diffraction is the technique of choice for studying the interactions of small organic molecules with proteins by determining their three-dimensional structures; however the requirement for highly purified protein and lack of process automation have traditionally limited its use in this field. Despite these shortcomings, the use of crystal structures of therapeutically relevant drug targets in pharmaceutical research has increased significantly over the last decade. The application of structure-based drug design has resulted in several marketed drugs and is now an established discipline in most pharmaceutical companies. Furthermore, the recently published full genome sequences of Homo sapiens and a number of micro-organisms have provided a plethora of new potential drug targets that could be utilised in structure-based drug design programs. In order to take maximum advantage of this explosion of information, techniques have been developed to automate and speed up the various procedures required to obtain protein crystals of suitable quality, to collect and process the raw X-ray diffraction data into usable structural information, and to use three-dimensional protein structure as a basis for drug discovery and lead optimisation. This tutorial review covers the various technologies involved in the process pipeline for high-throughput protein crystallography as it is currently being applied to drug discovery. It is aimed at synthetic and computational chemists, as well as structural biologists, in both academia and industry, who are interested in structure-based drug design.
Building scientific confidence in the development and evaluation of read-across remains an ongoing challenge. Approaches include establishing systematic frameworks to identify sources of uncertainty and ways to address them. One source of uncertainty is related to characterizing biological similarity. Many research efforts are underway, such as structuring mechanistic data in adverse outcome pathways and investigating the utility of high throughput (HT)/high content (HC) screening data. A largely untapped resource for read-across to date is the biomedical literature. This information has the potential to support read-across by facilitating the identification of valid source analogues with similar biological and toxicological profiles as well as providing the mechanistic understanding for any prediction made. A key challenge in using biomedical literature is to convert and translate its unstructured form into a computable format that can be linked to chemical structure. We developed a novel text-mining strategy to represent literature information for read-across. Keywords were used to organize literature into toxicity signatures at the chemical level. These signatures were integrated with HT in vitro data and curated chemical structures. A rule-based algorithm assessed the strength of the literature relationship, providing a mechanism to rank and visualize the signature as literature ToxPIs (LitToxPIs). LitToxPIs were developed for over 6,000 chemicals for a varie
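The keyword-to-signature step described above can be illustrated with a minimal sketch. The keywords, abstracts, and plain count-based scoring below are invented for illustration; the published LitToxPI pipeline uses a more elaborate rule-based algorithm:

```python
def literature_signature(abstracts, keywords):
    """Toy keyword-based toxicity signature for one chemical: for each
    keyword, count how many of the chemical's abstracts mention it.
    The resulting vector can be ranked and visualized per chemical."""
    return {kw: sum(kw.lower() in a.lower() for a in abstracts)
            for kw in keywords}

# Hypothetical abstracts retrieved for a single chemical.
abstracts = [
    "Hepatotoxicity was observed after repeated dosing.",
    "No evidence of genotoxicity in the Ames assay.",
    "Elevated liver enzymes suggest hepatotoxicity.",
]
keywords = ["hepatotoxicity", "genotoxicity", "neurotoxicity"]
print(literature_signature(abstracts, keywords))
# {'hepatotoxicity': 2, 'genotoxicity': 1, 'neurotoxicity': 0}
```

Signatures of this shape are what can then be integrated with HT in vitro data and chemical structure for read-across.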
Spackman, Peter R.; Thomas, Sajesh P.; Jayatilaka, Dylan
Molecular shape is important in both crystallisation and supramolecular assembly, yet its role is not completely understood. We present a computationally efficient scheme to describe and classify the molecular shapes in crystals. The method involves rotation-invariant description of Hirshfeld surfaces in terms of spherical harmonic functions. Hirshfeld surfaces represent the boundaries of a molecule in the crystalline environment, and are widely used to visualise and interpret crystalline interactions. The spherical harmonic descriptions of molecular shapes are compared and classified by means of principal component analysis and cluster analysis. When applied to a series of metals, the method results in a clear classification based on their lattice type. When applied to around 300 crystal structures comprising series of substituted benzenes, naphthalenes and phenylbenzamides, it shows the capacity to classify structures based on chemical scaffolds, chemical isosterism, and conformational similarity. The computational efficiency of the method is demonstrated with an application to over 14 thousand crystal structures. High-throughput screening of molecular shapes and interaction surfaces in the Cambridge Structural Database (CSD) using this method has direct applications in drug discovery, supramolecular chemistry and materials design.
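The key property of such descriptors, invariance under rotation, can be shown with a small sketch. The per-degree power spectrum used below is one standard rotation invariant of a spherical-harmonic expansion; the coefficients are invented, and this is not necessarily the exact invariant the paper uses:

```python
from math import sqrt

def invariants(coeffs_by_l):
    """Rotation-invariant descriptor from spherical-harmonic coefficients.

    coeffs_by_l maps degree l to the list of complex coefficients c_{lm},
    m = -l..l. Rotating the surface mixes the c_{lm} within a degree but
    leaves the per-degree power sqrt(sum_m |c_{lm}|^2) unchanged, so two
    rotated copies of the same surface give the same descriptor vector.
    """
    return [sqrt(sum(abs(c) ** 2 for c in cs))
            for l, cs in sorted(coeffs_by_l.items())]

def distance(a, b):
    """Euclidean distance between two descriptor vectors (the quantity
    fed to PCA/cluster analysis)."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical coefficients for one surface and a rotated copy: the
# individual c_{1m} differ, but the degree-1 power is identical.
surface = {0: [2.0], 1: [0.3 + 0.4j, 0.0, 0.3 - 0.4j]}
rotated = {0: [2.0], 1: [0.5, 0.0, 0.5j]}
print(distance(invariants(surface), invariants(rotated)))  # ~0: same shape
```

Clustering these invariant vectors is what groups structures by scaffold or conformational similarity regardless of molecular orientation in the cell.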
Brown, James M; Horner, Neil R; Lawson, Thomas N; Fiegel, Tanja; Greenaway, Simon; Morgan, Hugh; Ring, Natalie; Santos, Luis; Sneddon, Duncan; Teboul, Lydia; Vibert, Jennifer; Yaikhom, Gagarine; Westerberg, Henrik; Mallon, Ann-Marie
High-throughput phenotyping is a cornerstone of numerous functional genomics projects. In recent years, imaging screens have become increasingly important in understanding gene-phenotype relationships in studies of cells, tissues and whole organisms. Three-dimensional (3D) imaging has risen to prominence in the field of developmental biology for its ability to capture whole embryo morphology and gene expression, as exemplified by the International Mouse Phenotyping Consortium (IMPC). Large volumes of image data are being acquired by multiple institutions around the world that encompass a range of modalities, proprietary software and metadata. To facilitate robust downstream analysis, images and metadata must be standardized to account for these differences. As an open scientific enterprise, making the data readily accessible is essential so that members of biomedical and clinical research communities can study the images for themselves without the need for highly specialized software or technical expertise. In this article, we present a platform of software tools that facilitate the upload, analysis and dissemination of 3D images for the IMPC. Over 750 reconstructions from 80 embryonic lethal and subviable lines have been captured to date, all of which are openly accessible at mousephenotype.org. Although designed for the IMPC, all software is available under an open-source licence for others to use and develop further. Ongoing developments aim to increase throughput and improve the analysis and dissemination of image data. Furthermore, we aim to ensure that images are searchable so that users can locate relevant images associated with genes, phenotypes or human diseases of interest. © The Author 2016. Published by Oxford University Press.
Full Text Available Abstract Background Aptamers are oligonucleotides displaying specific binding properties for a predetermined target. They are selected from libraries of randomly synthesized candidates through an in vitro selection process termed SELEX (Systematic Evolution of Ligands by EXponential enrichment), alternating selection and amplification steps. SELEX is followed by cloning and sequencing of the enriched pool of oligonucleotides to enable comparison of the selected sequences. The most represented candidates are then synthesized and their binding properties are individually evaluated, thus leading to the identification of aptamers. These post-selection steps are time consuming and introduce a bias at the expense of poorly amplified binders that might be of high affinity and are consequently underrepresented. A method that would circumvent these limitations would be highly valuable. Results We describe a novel homogeneous solution-based method for screening large populations of oligonucleotide candidates generated from SELEX. This approach, based on the AlphaScreen® technology, is carried out on the exclusive basis of the binding properties of the selected candidates without the need to perform a priori sequencing. It therefore enables the functional identification of high affinity aptamers. We validated the HAPIscreen (High throughput APtamer Identification screen) methodology using aptamers targeted to RNA hairpins, previously identified in our laboratory. We then screened pools of candidates issued from SELEX rounds in a 384-well microplate format and identified new RNA aptamers to pre-microRNAs. Conclusions HAPIscreen, an AlphaScreen®-based methodology for the identification of aptamers, is faster and less biased than current procedures based on sequence comparison of selected oligonucleotides and sampling of the binding properties of a few individuals. Moreover this methodology allows for the screening of a larger number of candidates. Used here for selecting anti
Lindsey, Benson E; Rivero, Luz; Calhoun, Chistopher S; Grotewold, Erich; Brkljacic, Jelena
Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and
Forsberg, Christina; Jansson, Linda; Ansell, Ricky; Hedman, Johannes
Tape-lifting has, since its introduction in the early 2000s, become a well-established sampling method in forensic DNA analysis. Sampling is quick and straightforward while the following DNA extraction is more challenging due to the "stickiness", rigidity and size of the tape. We have developed, validated and implemented a simple and efficient direct lysis DNA extraction protocol for adhesive tapes that requires limited manual labour. The method uses Chelex beads and is applied with SceneSafe FAST tape. This direct lysis protocol provided higher mean DNA yields than PrepFiler Express BTA on Automate Express, although the differences were not significant when using clothes worn in a controlled fashion as reference material (p=0.13 and p=0.34 for T-shirts and button-down shirts, respectively). Through in-house validation we show that the method is fit-for-purpose for application in casework, as it provides high DNA yields and amplifiability, as well as good reproducibility and DNA extract stability. After implementation in casework, the proportion of extracts with DNA concentrations above 0.01ng/μL increased from 71% to 76%. Apart from providing higher DNA yields compared with the previous method, the introduction of the developed direct lysis protocol also reduced the amount of manual labour by half and doubled the potential throughput for tapes at the laboratory. Generally, simplified manual protocols can serve as a cost-effective alternative to sophisticated automation solutions when the aim is to enable high-throughput DNA extraction of complex crime scene samples. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Ni, Jing [Iowa State Univ., Ames, IA (United States)
This work describes several research projects aimed towards developing new instruments and novel methods for high throughput chemical and biological analysis. Approaches are taken in two directions. The first direction takes advantage of well-established semiconductor fabrication techniques and applies them to miniaturize instruments that are workhorses in analytical laboratories. Specifically, the first part of this work focused on the development of micropumps and microvalves for controlled fluid delivery. The mechanism of these micropumps and microvalves relies on the electrochemically-induced surface tension change at a mercury/electrolyte interface. A miniaturized flow injection analysis device was integrated and flow injection analyses were demonstrated. In the second part of this work, microfluidic chips were also designed, fabricated, and tested. Separations of two fluorescent dyes were demonstrated in microfabricated channels, based on an open-tubular liquid chromatography (OT LC) or an electrochemically-modulated liquid chromatography (EMLC) format. A reduction in instrument size can potentially increase analysis speed, and allow exceedingly small amounts of sample to be analyzed under diverse separation conditions. The second direction explores the surface enhanced Raman spectroscopy (SERS) as a signal transduction method for immunoassay analysis. It takes advantage of the improved detection sensitivity as a result of surface enhancement on colloidal gold, the narrow width of Raman band, and the stability of Raman scattering signals to distinguish several different species simultaneously without exploiting spatially-separated addresses on a biochip. By labeling gold nanoparticles with different Raman reporters in conjunction with different detection antibodies, a simultaneous detection of a dual-analyte immunoassay was demonstrated. Using this scheme for quantitative analysis was also studied and preliminary dose-response curves from an immunoassay of a
High-throughput screening (HTS) experiments provide a valuable resource that reports biological activity of numerous chemical compounds relative to their molecular targets. Building computational models that accurately predict such activity status (active vs. inactive) in specific assays is a challenging task given the large volume of data and the frequently small proportion of active compounds relative to inactive ones. We developed a method, DRAMOTE, to predict the activity status of chemical compounds in HTS activity assays. For a class of HTS assays, our method achieves considerably better results than the current state-of-the-art solutions. We achieved this by modifying a minority oversampling technique. To demonstrate that DRAMOTE performs better than the other methods, we performed a comprehensive comparison analysis with several other methods and evaluated them on data from 11 PubChem assays through 1,350 experiments that involved approximately 500,000 interactions between chemicals and their target proteins. As an example of potential use, we applied DRAMOTE to develop robust models for predicting FDA-approved drugs that have a high probability of interacting with the thyroid stimulating hormone receptor (TSHR) in humans. Our findings are further partially and indirectly supported by 3D docking results and literature information. The results based on approximately 500,000 interactions suggest that DRAMOTE performed the best and that it can be used for developing robust virtual screening models. The datasets and implementation of all solutions are available as a MATLAB toolbox online at www.cbrc.kaust.edu.sa/dramote and can be found on Figshare.
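The class-imbalance problem the method addresses, few actives among many inactives, is typically handled by oversampling the minority class before model fitting. A minimal sketch of plain random oversampling follows; DRAMOTE itself uses a modified oversampling scheme, so this only shows the baseline idea, and the data are invented:

```python
import random

def oversample_minority(X, y, target_ratio=1.0, seed=0):
    """Naive random oversampling for an imbalanced activity dataset.

    X: feature vectors; y: 0/1 labels (1 = active, the minority class).
    Duplicates randomly chosen actives until the active:inactive ratio
    reaches target_ratio, so a downstream classifier is not dominated
    by the inactive class.
    """
    rng = random.Random(seed)
    minority = [i for i, label in enumerate(y) if label == 1]
    n_majority = sum(1 for label in y if label == 0)
    X, y = list(X), list(y)
    while sum(1 for label in y if label == 1) < target_ratio * n_majority:
        i = rng.choice(minority)
        X.append(X[i])
        y.append(1)
    return X, y

# Hypothetical assay: 1 active among 6 compounds.
X = [[0.1], [0.2], [0.9], [0.3], [0.4], [0.5]]
y = [0, 0, 1, 0, 0, 0]
X2, y2 = oversample_minority(X, y)
print(y2.count(1), y2.count(0))  # balanced: 5 actives vs 5 inactives
```

Smarter variants interpolate between minority neighbours rather than copying them verbatim, which is the direction modifications like DRAMOTE's take.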
Gabdank, Idan; Chan, Esther T; Davidson, Jean M; Hilton, Jason A; Davis, Carrie A; Baymuradov, Ulugbek K; Narayanan, Aditi; Onate, Kathrina C; Graham, Keenan; Miyasato, Stuart R; Dreszer, Timothy R; Strattan, J Seth; Jolanki, Otto; Tanaka, Forrest Y; Hitz, Benjamin C
Abstract Prevention of unintended duplication is one of the ongoing challenges many databases have to address. When working with high-throughput sequencing data, the complexity of that challenge increases with the complexity of the definition of a duplicate. In a computational data model, a data object represents a real entity like a reagent or a biosample. This representation is similar to how a card represents a book in a paper library catalog. Duplicated data objects not only waste storage, they can mislead users into assuming the model represents more than the single entity. Even if it is clear that two objects represent a single entity, data duplication opens the door to potential inconsistencies between the objects, since the content of the duplicated objects can be updated independently, allowing divergence of the metadata associated with the objects. This is analogous to a paper library catalog mistakenly containing two cards for a single copy of a book: if those cards simultaneously list two different individuals as the current borrower, it would be difficult to determine which of the two actually has the book. Unfortunately, in a large database with multiple submitters, unintended duplication is to be expected. In this article, we present three principal guidelines the Encyclopedia of DNA Elements (ENCODE) Portal follows in order to prevent unintended duplication of both actual files and data objects: definition of identifiable data objects (I), object uniqueness validation (II) and a de-duplication mechanism (III). In addition to explaining our modus operandi, we elaborate on the methods used for identification of sequencing data files. A comparison of the approach taken by the ENCODE Portal vs. other widely used biological data repositories is provided. Database URL: https://www.encodeproject.org/
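Guidelines (I) and (II), identifiable objects plus uniqueness validation, can be sketched as a toy catalog that rejects a second object with the same identity. Identifying a file by a content digest (md5 here, as one common choice) makes two uploads of the same bytes collide regardless of filename; this sketch is illustrative, not the ENCODE Portal's actual code:

```python
import hashlib

def file_identity(content: bytes) -> str:
    """Identify a sequencing data file by its content digest, so renamed
    copies of the same file map to the same identity."""
    return hashlib.md5(content).hexdigest()

class Catalog:
    """Toy object store enforcing uniqueness of identifiable objects:
    an insert with an already-registered identity is rejected instead
    of silently creating a duplicate card for the same book."""

    def __init__(self):
        self._by_key = {}

    def add(self, key, obj):
        if key in self._by_key:
            raise ValueError(f"duplicate object for identity {key!r}")
        self._by_key[key] = obj
        return obj

catalog = Catalog()
digest = file_identity(b"ACGT...reads...")
catalog.add(digest, {"name": "run1.fastq"})
try:
    catalog.add(digest, {"name": "run1_copy.fastq"})  # same bytes, new name
except ValueError as err:
    print("rejected:", err)
```

Guideline (III), de-duplication, is then the retroactive counterpart: merging objects that slipped past validation into a single record.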
Bosco, Filippo; Hwu, En-Te; Chen, Ching-Hsiu
Sensors are crucial in many daily operations including security, environmental control, human diagnostics and patient monitoring. Screening and online monitoring require reliable and high-throughput sensing. We report on the demonstration of a high-throughput label-free sensor platform utilizing ...
Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays
DE DeGroot, RS Thomas, and SO Simmons
National Center for Computational Toxicology, US EPA, Research Triangle Park, NC USA
The EPA's ToxCast program utilizes a wide variety of high-throughput s...
Cruz, Susana C.; Rothenberg, Gadi; Westerhuis, Johan A.; Smilde, Age K.
High-throughput experimentation and screening methods are changing work flows and creating new possibilities in biochemistry, organometallic chemistry, and catalysis. However, many high-throughput systems rely on off-line chromatography methods that shift the bottleneck to the analysis stage.
Harrison, Joe J; Turner, Raymond J; Ceri, Howard
Microbial biofilms exist all over the natural world, a distribution that is paralleled by metal cations and oxyanions. Despite this reality, very few studies have examined how biofilms withstand exposure to these toxic compounds. This article describes a batch culture technique for biofilm and planktonic cell metal susceptibility testing using the MBEC assay. This device is compatible with standard 96-well microtiter plate technology. As part of this method, a two-part, metal-specific neutralization protocol is summarized. This procedure minimizes residual biological toxicity arising from the carry-over of metals from challenge to recovery media. Neutralization consists of treating cultures with a chemical compound known to react with or to chelate the metal. Treated cultures are plated onto rich agar to allow metal complexes to diffuse into the recovery medium while bacteria remain on top to recover. Two difficulties associated with metal susceptibility testing were the focus of two applications of this technique. First, assays were calibrated to allow comparisons of the susceptibility of different organisms to metals. Second, the effects of exposure time and growth medium composition on the susceptibility of E. coli JM109 biofilms to metals were investigated. This high-throughput method generated 96 statistically equivalent biofilms in a single device and thus allowed for comparative and combinatorial experiments of media, microbial strains, exposure times and metals. By adjusting growth conditions, it was possible to examine biofilms of different microorganisms that had similar cell densities. In one example, Pseudomonas aeruginosa ATCC 27853 was up to 80 times more resistant to heavy metalloid oxyanions than Escherichia coli TG1. Further, biofilms were up to 133 times more tolerant to tellurite (TeO3(2-)) than corresponding planktonic cultures. Regardless of the growth medium, the tolerance of biofilm and planktonic cell E. coli JM109 to metals was time
The MELOX plant in the south of France, together with the La Hague reprocessing plant, is one of the two industrial facilities in charge of closing the nuclear fuel cycle in France. Started up in 1995, MELOX has since accumulated a solid know-how in recycling plutonium recovered from spent uranium fuel into MOX: a fuel blend composed of both uranium and plutonium oxides. Converting recovered Pu into a proliferation-resistant material that can readily be used to power a civil nuclear reactor, MOX fabrication offers a sustainable solution to safely take advantage of the plutonium's high energy content. Being the first large-capacity industrial facility dedicated to MOX fuel fabrication, MELOX distinguishes itself from the first generation of MOX plants with high capacity (around 200 tHM versus around 40 tHM) and several unique operational features designed to improve productivity, reliability and flexibility while maintaining high safety standards. Providing an exemplary reference for high throughput MOX fabrication with 1,000 tHM produced since start-up, the unique process and technologies implemented at MELOX are currently inspiring other MOX plant construction projects (in Japan with the J-MOX plant, in the US and in Russia as part of the weapon-grade plutonium inventory reduction). Spurred by the growing international demand, MELOX has embarked upon an ambitious production development and diversification plan. Starting from an annual level of 100 tons of heavy metal (tHM), MELOX's demonstrated production capacity is continuously increasing: MELOX is now aiming for a minimum of 140 tHM by the end of 2005, with the ultimate ambition of reaching the full capacity of the plant (around 200 tHM) in the near future. With regard to its activity, MELOX also remains deeply committed to sustainable development in a consolidated involvement within the AREVA group. The French minister of Industry, on August 26th 2005, acknowledged the benefits of MOX fuel production at MELOX: 'In
Full Text Available Schistosomiasis, caused by infection with the blood fluke Schistosoma, is responsible for greater than 200,000 human deaths per annum. Objective high-throughput screens for detecting novel anti-schistosomal targets will drive 'genome to drug' lead translational science at an unprecedented rate. Current methods for detecting schistosome viability rely on qualitative microscopic criteria, which require an understanding of parasite morphology, and most importantly, must be subjectively interpreted. These limitations, in the current state of the art, have significantly impeded progress into whole schistosome screening for next generation chemotherapies. We present here a microtiter plate-based method for reproducibly detecting schistosomula viability that takes advantage of the differential uptake of fluorophores (propidium iodide and fluorescein diacetate) by living organisms. We validate this high-throughput system for detecting schistosomula viability using auranofin (a known inhibitor of thioredoxin glutathione reductase), praziquantel and a range of small compounds with previously described (gambogic acid, sodium salinomycin, ethinyl estradiol, fluoxetidine hydrochloride, miconazole nitrate, chlorpromazine hydrochloride, amphotericin b, niclosamide) or suggested (bepridil, ciclopirox, rescinnamine, flucytosine, vinblastine and carbidopa) anti-schistosomal activities. This developed method is sensitive (200 schistosomula/well can be assayed), relevant to industrial (384-well microtiter plate compatibility) and academic (96-well microtiter plate compatibility) settings, translatable to functional genomics screens and drug assays, does not require a priori knowledge of schistosome biology and is quantitative. The wide-scale application of this fluorescence-based bioassay will greatly accelerate the objective identification of novel therapeutic lead targets/compounds to combat schistosomiasis. Adapting this bioassay for use with other parasitic worm species
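The dual-fluorophore logic, fluorescein diacetate (FDA) converted to fluorescein only by living organisms, propidium iodide (PI) entering only dead or dying ones, reduces naturally to a per-well viability index. The normalization below (blank-subtracted FDA over FDA+PI) and all signal values are assumptions for illustration, not the published assay's exact calculation:

```python
def live_fraction(fda, pi, fda_blank=0.0, pi_blank=0.0):
    """Toy viability index from a dual-fluorophore plate read: after
    blank subtraction, the fraction FDA/(FDA+PI) rises toward 1 for a
    fully viable well and toward 0 for a dead one."""
    f = max(fda - fda_blank, 0.0)
    p = max(pi - pi_blank, 0.0)
    return f / (f + p) if (f + p) > 0 else 0.0

# Hypothetical relative fluorescence units for two wells.
untreated = live_fraction(900.0, 110.0, fda_blank=50.0, pi_blank=60.0)
auranofin = live_fraction(140.0, 840.0, fda_blank=50.0, pi_blank=60.0)
print(round(untreated, 2), round(auranofin, 2))
```

Because the index is a plain per-well number, it is the kind of objective, quantitative readout that replaces subjective morphological scoring in a 96- or 384-well screen.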
Schwämmle, Veit; Vaudel, Marc
Cell signaling and functions heavily rely on post-translational modifications (PTMs) of proteins. Their high-throughput characterization is thus of utmost interest for multiple biological and medical investigations. In combination with efficient enrichment methods, peptide mass spectrometry analy...
National Aeronautics and Space Administration — In response to Topic S3.04 "Propulsion Systems," Busek Co. Inc. will develop a high throughput Hall effect thruster with a nominal peak power of 1-kW and wide...
Full Text Available these clones using both random clone picking and high throughput sequencing. We demonstrate that random clone picking does not necessarily identify highly enriched clones. We further showed that the clone displaying the CPLHARLPC peptide which was identified...
National Aeronautics and Space Administration — In response to Topic S3-04 "Propulsion Systems," Busek proposes to develop a high throughput Hall effect thruster with a nominal peak power of 1-kW and wide...
Full Text Available Schistosomiasis is a tropical disease associated with high morbidity and mortality, currently affecting over 200 million people worldwide. Praziquantel is the only drug used to treat the disease, and with its increased use the probability of developing drug resistance has grown significantly. The Schistosoma parasites can survive for up to decades in the human host due in part to a unique set of antioxidant enzymes that continuously degrade the reactive oxygen species produced by the host's innate immune response. Two principal components of this defense system have been recently identified in S. mansoni as thioredoxin/glutathione reductase (TGR) and peroxiredoxin (Prx), and as such these enzymes present attractive new targets for anti-schistosomiasis drug development. Inhibition of TGR/Prx activity was screened in a dual-enzyme format with reducing equivalents being transferred from NADPH to glutathione via a TGR-catalyzed reaction and then to hydrogen peroxide via a Prx-catalyzed step. A fully automated quantitative high-throughput screening (qHTS) experiment was performed against a collection of 71,028 compounds tested as 7- to 15-point concentration series at 5 microL reaction volume in 1536-well plate format. In order to generate a robust data set and to minimize the effect of compound autofluorescence, apparent reaction rates derived from a kinetic read were utilized instead of end-point measurements. Actives identified from the screen, along with previously untested analogues, were subjected to confirmatory experiments using the screening assay and subsequently against the individual targets in secondary assays. Several novel active series were identified which inhibited TGR at a range of potencies, with IC50 values ranging from micromolar to the assay response limit (approximately 25 nM). This is, to our knowledge, the first report of a large-scale HTS to identify lead compounds for a helminthic disease, and provides a paradigm that can be used to jump
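The rationale for using apparent rates from a kinetic read rather than end-point values is that a compound's constant autofluorescence shifts the signal's intercept but not its slope. A minimal sketch with invented numbers (the real assay's units and read schedule will differ):

```python
def apparent_rate(times, signals):
    """Apparent reaction rate from a kinetic read: the least-squares
    slope of signal vs. time. A constant fluorescence offset from an
    autofluorescent compound changes every end-point value but leaves
    the slope untouched."""
    n = len(times)
    mt = sum(times) / n
    ms = sum(signals) / n
    num = sum((t - mt) * (s - ms) for t, s in zip(times, signals))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

times = [0, 30, 60, 90, 120]           # seconds (hypothetical schedule)
clean = [100, 190, 280, 370, 460]      # RFU, non-fluorescent compound
autofluor = [s + 500 for s in clean]   # same kinetics + constant offset
print(apparent_rate(times, clean), apparent_rate(times, autofluor))
```

Both wells report the same rate, whereas a single end-point read at 120 s would differ by the full 500 RFU offset.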
Chen, Hui; Jiang, Wen
The oral microbiome is one of the most diverse habitats in the human body and is closely related to oral health and disease. As sequencing techniques have developed, high-throughput sequencing has become a popular approach for oral microbial analysis. Oral bacterial profiles have been studied to explore the relationship between microbial diversity and oral diseases such as caries and periodontal disease. This review describes the application of high-throughput sequencing for characterizati...
Ståhlman, Marcus; Ejsing, Christer S.; Tarasov, Kirill
the absolute quantification of hundreds of molecular glycerophospholipid species, glycerolipid species, sphingolipid species and sterol lipids. Future applications in clinical cohort studies demand detailed lipid molecule information and the application of high-throughput lipidomics platforms. In this review...... we describe a novel high-throughput shotgun lipidomic platform based on 96-well robot-assisted lipid extraction, automated sample infusion by microfluidic-based nanoelectrospray ionization, and quantitative multiple precursor ion scanning analysis on a quadrupole time-of-flight mass spectrometer...
Wang, Jia; Zhu, Lin-yun; Liu, Qing; Hentzer, Morten; Smith, Garrick Paul; Wang, Ming-wei
To discover antagonists of the orphan G-protein coupled receptor GPR139 through high-throughput screening of a collection of diverse small molecules. Calcium mobilization assays were used to identify initial hits and for subsequent confirmation studies. Five small molecule antagonists, representing 4 different scaffolds, were identified following high-throughput screening of 16 000 synthetic compounds. The findings provide important tools for further study of this orphan G-protein coupled receptor.
Kinetoplastids differ from other organisms in their ability to conjugate glutathione and spermidine to form trypanothione, which is involved in maintaining redox homeostasis and removing toxic metabolites. It is also involved in drug resistance, antioxidant mechanisms, and defense against cellular oxidants. Trypanothione synthetase (TryS) of the thiol metabolic pathway is the sole enzyme responsible for the biosynthesis of trypanothione in Leishmania donovani. In this study, the TryS gene of L. donovani (LdTryS) was cloned and expressed, and the fusion protein was purified by affinity column chromatography. The purified protein showed optimum enzymatic activity at pH 8.0-8.5. Alignment of the TryS amino acid sequences showed that all amino acids involved in catalysis and ligand binding in L. major are conserved in L. donovani. Subcellular localization using digitonin fractionation and immunoblot analysis showed that LdTryS is localized in the cytoplasm. Furthermore, RT-PCR coupled with immunoblot analysis showed that LdTryS is overexpressed (∼2.0-fold) in Amp B-resistant and stationary-phase promastigotes relative to the sensitive strain and logarithmic phase, respectively, which suggests its involvement in Amp B resistance. Also, H2O2 treatment up to 150 µM for 8 h leads to a 2-fold increase in LdTryS expression, probably to cope with the oxidative stress generated by H2O2. Therefore, this study demonstrates stage- and Amp B sensitivity-dependent expression of LdTryS in L. donovani and the involvement of TryS during oxidative stress to help parasite survival.
Knap, J; Spear, C E; Borodin, O; Leiter, K W
We describe the development of a large-scale high-throughput application for discovery in materials science. Our point of departure is a computational framework for distributed multi-scale computation. We augment the original framework with a specialized module whose role is to route evaluation requests needed by the high-throughput application to a collection of available computational resources. We evaluate the feasibility and performance of the resulting high-throughput computational framework by carrying out a high-throughput study of battery solvents. Our results indicate that distributed multi-scale computing, by virtue of its adaptive nature, is particularly well-suited for building high-throughput applications.
Elahian, Fatemeh; Reiisi, Somayeh; Shahidi, Arman; Mirzaei, Seyed Abbas
A genetically modified Pichia pastoris strain overexpressing a metal-resistant variant of cytochrome b5 reductase enzyme was developed for silver and selenium biosorption and for nanoparticle production. The maximum recombinant enzyme expression level was approximately 31 IU/ml in the intercellular fluid after 24 h of incubation, and the capacity of the recombinant biomass for the biosorption of silver and selenium in aqueous batch models was measured as 163.90 and 63.71 mg/g, respectively. The ions were reduced in the presence of the enzyme, leading to the formation of stable 70-180 nm metal nanoparticles. Various instrumental analyses confirmed the well-dispersed and crystalline nature of the spherical nanometals. The purified silver and selenium nanoparticles exhibited at least 10-fold less cytotoxicity toward HDF, EPG85-257, and T47D cells than silver nitrate and selenium dioxide. These results revealed that the engineered Pichia strain is an eco-friendly, rapid, high-throughput, and versatile reduction system for nanometal production. Copyright © 2016 Elsevier Inc. All rights reserved.
Cabrera-Bosquet, Llorenç; Crossa, José; von Zitzewitz, Jarislav; Serret, María Dolors; Araus, José Luis
Genomic selection (GS) and high-throughput phenotyping have recently been captivating the interest of the crop breeding community from both the public and private sectors world-wide. Both approaches promise to revolutionize the prediction of complex traits, including growth, yield and adaptation to stress. Whereas high-throughput phenotyping may help to improve understanding of crop physiology, the most powerful techniques for high-throughput field phenotyping are empirical rather than analytical, and in that respect comparable to genomic selection. Despite the fact that the two methodological approaches represent the extremes of what is understood as the breeding process (phenotype versus genome), they both consider the targeted traits (e.g. grain yield, growth, phenology, plant adaptation to stress) as a black box instead of dissecting them as a set of secondary traits (i.e. physiological) putatively related to the target trait. Both GS and high-throughput phenotyping have in common an empirical approach that enables breeders to use a genome profile or a phenotype without understanding the underlying biology. This short review discusses the main aspects of both approaches and focuses on the case of genomic selection of maize flowering traits and near-infrared spectroscopy (NIRS) and plant spectral reflectance as high-throughput field phenotyping methods for complex traits such as crop growth and yield. © 2012 Institute of Botany, Chinese Academy of Sciences.
Quintero, Catherine; Tran, Kristen; Szewczak, Alexander A
One high-throughput technology gaining widespread adoption in industry and academia is acoustic liquid dispensing, in which focused sound waves eject nanoliter-sized droplets from a solution into a recipient microplate. This technology allows for direct dispensing of small-molecule compounds or reagents dissolved in DMSO, while keeping a low final concentration of organic solvent in an assay. However, acoustic dispensing presents unique quality control (QC) challenges when measuring the accuracy and precision of small dispense volumes ranging from 2.5 to 100 nL. As part of an effort to develop a rapid and cost-effective QC method for acoustic dispensing of 100% DMSO, we implemented the first high-throughput photometric dual-dye-based QC protocol in the nanoliter volume range. This technical note validates the new photometric 100% DMSO QC method and highlights its cost-effectiveness when compared with conventional low-throughput fluorimetric QC methods. In addition, a potential software solution is described for the analysis, storage, and display of accumulated high-throughput QC data, called LabGauge. As the need for high-throughput QC grows, conventional low-throughput methods can no longer meet demand. Validated high-throughput techniques, such as the dual-dye photometric method, will need to be implemented.
Suram, Santosh K; Newhouse, Paul F; Gregoire, John M
High-throughput experimentation provides efficient mapping of composition-property relationships, and its implementation for the discovery of optical materials enables advancements in solar energy and other technologies. In a high-throughput pipeline, automated data processing algorithms are often required to match experimental throughput, and we present an automated Tauc analysis algorithm for estimating band gap energies from optical spectroscopy data. The algorithm mimics the judgment of an expert scientist, which is demonstrated through its application to a variety of high-throughput spectroscopy data, including the identification of indirect or direct band gaps in Fe2O3, Cu2V2O7, and BiVO4. The applicability of the algorithm to estimate a range of band gap energies for various materials is demonstrated by a comparison of direct-allowed band gaps estimated by expert scientists and by the automated algorithm for 60 optical spectra.
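The Tauc construction that underlies such an algorithm can be sketched in a few lines: plot (αhν)^n against photon energy hν (n = 2 for direct-allowed, 1/2 for indirect-allowed transitions), fit the linear absorption edge, and read the band gap off the x-intercept. The automated fit-region selection that the paper's algorithm performs is the hard part and is not reproduced here; this sketch assumes the caller supplies points already on the linear edge, and the spectrum below is synthetic.

```python
def tauc_band_gap(energies_eV, alphas, power=2.0):
    """Band gap from the x-intercept of a linear fit to (alpha*E)**power vs E.

    power=2.0 corresponds to direct-allowed transitions, power=0.5 to
    indirect-allowed ones. Fit-region selection is assumed done upstream.
    """
    y = [(a * e) ** power for a, e in zip(alphas, energies_eV)]
    n = len(energies_eV)
    mx = sum(energies_eV) / n
    my = sum(y) / n
    slope = (sum((e - mx) * (v - my) for e, v in zip(energies_eV, y))
             / sum((e - mx) ** 2 for e in energies_eV))
    intercept = my - slope * mx
    return -intercept / slope  # energy where the extrapolated edge hits zero

# Synthetic direct-gap spectrum with a 2.1 eV gap: (alpha*E)^2 = E - 2.1
energies = [2.2, 2.4, 2.6, 2.8, 3.0]
alphas = [(e - 2.1) ** 0.5 / e for e in energies]
print(round(tauc_band_gap(energies, alphas), 3))  # 2.1
```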
Background: Protein-protein interaction data used in the creation or prediction of molecular networks is usually obtained from large-scale or high-throughput experiments. This experimental data is liable to contain a large number of spurious interactions. Hence, there is a need to validate the interactions and filter out the incorrect data before using them in prediction studies. Results: In this study, we use a combination of 3 genomic features – structurally known interacting Pfam domains, Gene Ontology annotations and sequence homology – as a means to assign reliability to the protein-protein interactions in Saccharomyces cerevisiae determined by high-throughput experiments. Using Bayesian network approaches, we show that protein-protein interactions from high-throughput data supported by one or more genomic features have a higher likelihood ratio and hence are more likely to be real interactions. Our method has a high sensitivity (90%) and good specificity (63%). We show that 56% of the interactions from high-throughput experiments in Saccharomyces cerevisiae have high reliability. We use the method to estimate the number of true interactions in the high-throughput protein-protein interaction data sets in Caenorhabditis elegans, Drosophila melanogaster and Homo sapiens to be 27%, 18% and 68%, respectively. Our results are available for searching and downloading at http://helix.protein.osaka-u.ac.jp/htp/. Conclusion: A combination of genomic features that include sequence, structure and annotation information is a good predictor of true interactions in large and noisy high-throughput data sets. The method has a very high sensitivity and good specificity and can be used to assign a likelihood ratio, corresponding to the reliability, to each interaction.
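Under the independence assumption commonly used in such Bayesian scoring schemes, per-feature likelihood ratios multiply into one combined score per interaction. The feature names and LR values below are hypothetical placeholders; in the actual method they are estimated from gold-standard positive and negative interaction sets.

```python
# Hypothetical likelihood ratios P(feature | true) / P(feature | false);
# in practice these are estimated from gold-standard interaction sets.
FEATURE_LR = {
    "interacting_pfam_domains": 8.0,
    "shared_go_annotation": 4.5,
    "sequence_homology_support": 3.0,
}

def combined_likelihood_ratio(observed_features):
    """Multiply per-feature LRs (naive-Bayes independence assumption)."""
    lr = 1.0
    for feature in observed_features:
        lr *= FEATURE_LR.get(feature, 1.0)  # unobserved features contribute 1
    return lr

# An interaction supported by two genomic features scores well above one
# with no supporting evidence, matching the paper's qualitative finding.
print(combined_likelihood_ratio({"interacting_pfam_domains",
                                 "shared_go_annotation"}))  # 36.0
print(combined_likelihood_ratio(set()))                     # 1.0
```

A reliability call then reduces to thresholding this ratio against the prior odds of a true interaction.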
Schlegel, Daniel R; Crowner, Chris; Lehoullier, Frank; Elkin, Peter L
Secondary use of clinical data for research requires a method to process the data quickly so that researchers can extract cohorts. We present two advances in the High Throughput Phenotyping NLP system which support the aim of truly high-throughput processing of clinical data, inspired by a characterization of the linguistic properties of such data. Semantic indexing to store and generalize partially-processed results and the use of compositional expressions for ungrammatical text are discussed, along with a set of initial timing results for the system.
Sastry, Anand; Monk, Jonathan M.; Tegel, Hanna
and machine learning identifies protein properties that hinder the HPA high-throughput antibody production pipeline. We predict protein expression and solubility with accuracies of 70% and 80%, respectively, based on a subset of key properties (aromaticity, hydropathy and isoelectric point). We guide...... the selection of protein fragments based on these characteristics to optimize high-throughput experimentation. Availability and implementation: We present the machine learning workflow as a series of IPython notebooks hosted on GitHub (https://github.com/SBRG/Protein_ML). The workflow can be used as a template...
Hook, K Delaney; Chambers, John T; Hili, Ryan
We have developed a novel high-throughput screening platform for the discovery of small-molecule catalysts for bond-forming reactions. The method employs an in vitro selection for bond formation using amphiphilic DNA-encoded small molecules charged with reaction substrate, which enables selections to be conducted in a variety of organic or aqueous solvents. Using the amine-catalysed aldol reaction as a catalytic model and high-throughput DNA sequencing as a selection read-out, we demonstrate the 1200-fold enrichment of a known aldol catalyst from a library of 16.7 million uncompetitive library members.
Lu, Cheng; Shedge, Vikas
Small RNAs (smRNAs) play an essential role in virtually every aspect of growth and development, by regulating gene expression at the post-transcriptional and/or transcriptional level. New high-throughput sequencing technology allows for a comprehensive coverage of smRNAs in any given biological sample, and has been widely used for profiling smRNA populations in various developmental stages, tissue and cell types, or normal and disease states. In this article, we describe the method used in our laboratory to construct smRNA cDNA libraries for high-throughput sequencing.
Au, K.; Folkers, G.E.; Kaptein, R.
A collaborative project between two Structural Proteomics In Europe (SPINE) partner laboratories, York and Oxford, aimed at high-throughput (HTP) structure determination of proteins from Bacillus anthracis, the aetiological agent of anthrax and a biomedically important target, is described. Based
Raboy, Victor; Johnson, Amy; Bilyeu, Kristin
High-throughput/low-cost/low-tech methods for phytic acid determination that are sufficiently accurate and reproducible would be of value in plant genetics, crop breeding and in the food and feed industries. Variants of two candidate methods, those described by Vaintraub and Lapteva (Anal Biochem...
Black, Charikleia; Barker, John J; Hitchman, Richard B; Kwong, Hok Sau; Festenstein, Sam; Acton, Thomas B
We have developed a standardized and efficient workflow for high-throughput (HT) protein expression in E. coli and parallel purification which can be tailored to the downstream application of the target proteins. It includes a one-step purification for the purposes of functional assays and a two-step protocol for crystallographic studies, with the option of on-column tag removal.
Human ATAD5 is an excellent biomarker for identifying genotoxic compounds because ATAD5 protein levels increase post-transcriptionally following exposure to a variety of DNA damaging agents. Here we report a novel quantitative high-throughput ATAD5-luciferase assay that can moni...
Ovesná, J.; Slabý, O.; Toussaint, O.; Kodíček, M.; Maršík, Petr; Pouchová, V.; Vaněk, Tomáš
Roč. 99, E-S1 (2008), ES127-ES134 ISSN 0007-1145 R&D Projects: GA MŠk(CZ) 1P05OC054 Institutional research plan: CEZ:AV0Z50380511 Keywords : Nutrigenomics * Phytochemicals * High throughput platforms Subject RIV: GM - Food Processing Impact factor: 2.764, year: 2008
Alquezar-Planas, David E; Fordyce, Sarah Louise
Since the development of so-called "next generation" high-throughput sequencing in 2005, this technology has been applied to a variety of fields. Such applications include disease studies, evolutionary investigations, and ancient DNA. Each application requires a specialized protocol to ensure tha...
The US EPA’s ToxCast program is designed to assess chemical perturbations of molecular and cellular endpoints using a variety of high-throughput screening (HTS) assays. However, existing HTS assays have limited or no xenobiotic metabolism which could lead to a mischaracterization...
Shiga toxins 1 and 2 (Stx1 and Stx2) from Shiga toxin-producing E. coli (STEC) bacteria were simultaneously detected with a newly developed, high-throughput antibody microarray platform. The proteinaceous toxins were immobilized and sandwiched between biorecognition elements (monoclonal antibodies)...
Blume, H; Bourenkov, G P; Kosciesza, D; Bartunik, H D
The wiggler beamline BW6 at DORIS has been optimized for de-novo solution of protein structures on the basis of MAD phasing. Facilities for automatic data collection, rapid data transfer and storage, and online processing have been developed which provide adequate conditions for high-throughput applications, e.g., in structural genomics.
Switzar, L.; van Angeren, J.A; Pinkse, M; Kool, J.; Niessen, W.M.A.
A high-throughput sample preparation protocol based on the use of 96-well molecular weight cutoff (MWCO) filter plates was developed for shotgun proteomics of cell lysates. All sample preparation steps, including cell lysis, buffer exchange, protein denaturation, reduction, alkylation and
Strawberry (Fragaria L.) genotypes bear remarkable phenotypic similarity, even across ploidy levels. Additionally, breeding programs seek to introgress alleles from wild germplasm, so objective molecular description of genetic variation has great value. In this report, a high-throughput, robust prot...
Ladirat, S.E.; Schols, H.A.; Nauta, A.; Schoterman, M.H.C.; Keijser, B.J.F.; Montijn, R.C.; Gruppen, H.; Schuren, F.H.J.
Antibiotic treatments can lead to a disruption of the human microbiota. In this in-vitro study, the impact of antibiotics on adult intestinal microbiota was monitored in a new high-throughput approach: a fermentation screening-platform was coupled with a phylogenetic microarray analysis
US EPA’s ToxCast program is generating data in high-throughput screening (HTS) and high-content screening (HCS) assays for thousands of environmental chemicals, for use in developing predictive toxicity models. Currently the ToxCast screening program includes over 1800 unique c...
Leticia Mara Lima Angelini
A modified colorimetric high-throughput screen based on pH changes, combined with an amidase inhibitor, is capable of distinguishing between nitrilases and nitrile hydratases. This enzymatic screening is based on a binary response and is suitable for the first step of hierarchical screening projects.
Sadeghian Marnani, H.; Herfst, R.W.; Dool, T.C. van den; Crowcombe, W.E.; Winters, J.; Kramers, G.F.I.J.
Scanning probe microscopy (SPM) is a promising candidate for accurate assessment of metrology and defects on wafers and masks; however, it has traditionally been too slow for high-throughput applications, although recent developments have significantly pushed the speed of SPM [1,2]. In this paper we
Kool, J.; Lingeman, H.; Niessen, W.M.A.; Irth, H.
Over the years, many different high throughput screening technologies and subsequently follow-up methodologies have been developed. All of these can be categorized, for example according to measurement of analyte classes, assay mechanisms, readout principles, or screening of drug target classes.
Menke, Karl C.
Across the pharmaceutical industry, there are a variety of approaches to laboratory automation for high throughput screening. At Sphinx Pharmaceuticals, the principles of industrial engineering have been applied to systematically identify and develop those automated solutions that provide the greatest value to the scientists engaged in lead generation. PMID:18924701
Olivarius, Signe; Plessy, Charles; Carninci, Piero
We present a high-throughput method for investigating the transcriptional starting sites of genes of interest, which we named Deep-RACE (Deep–rapid amplification of cDNA ends). Taking advantage of the latest sequencing technology, it allows the parallel analysis of multiple genes and is free of t...
Moller, Isabel Eva; Sørensen, Iben; Bernal Giraldo, Adriana Jimena
We describe here a methodology that enables the occurrence of cell-wall glycans to be systematically mapped throughout plants in a semi-quantitative high-throughput fashion. The technique (comprehensive microarray polymer profiling, or CoMPP) integrates the sequential extraction of glycans from...
We incorporate inter-individual variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most which hav...
Investigation of the predicted GIs in pathogens may lead to identification of potential drug/vaccine candidates. [Shrivastava S, Reddy Ch V S K and Mande S S 2010 INDeGenIUS, a new method for high-throughput identification of specialized functional islands in completely sequenced organisms; J. Biosci. 35 351–364] DOI ...
Michelet, Lorraine; Delannoy, Sabine; Devillers, Elodie
was conducted on 7050 Ixodes ricinus nymphs collected from France, Denmark, and the Netherlands using a powerful new high-throughput approach. This advanced methodology permitted the simultaneous detection of 25 bacterial, and 12 parasitic species (including; Borrelia, Anaplasma, Ehrlichia, Rickettsia...
Motivation: The large and diverse high-throughput chemical screening efforts carried out by the US EPA ToxCast program require an efficient, transparent, and reproducible data pipeline. Summary: The tcpl R package and its associated MySQL database provide a generalized platform fo...
High-throughput screening (HTPS) assays to detect inhibitors of thyroperoxidase (TPO), the enzymatic catalyst for thyroid hormone (TH) synthesis, are not currently available. Herein we describe the development of a HTPS TPO inhibition assay. Rat thyroid microsomes and a fluores...
Kampmann, Marie-Louise; Buchard, Anders; Børsting, Claus
Here, we demonstrate that punches from buccal swab samples preserved on FTA cards can be used for high-throughput DNA sequencing, also known as massively parallel sequencing (MPS). We typed 44 reference samples with the HID-Ion AmpliSeq Identity Panel using washed 1.2 mm punches from FTA cards wi...
Pedersen, Marlene Lemvig; Block, Ines; List, Markus
Protein Array (RPPA)-based readout format integrated into robotic siRNA screening. This technique would allow post-screening high-throughput quantification of protein changes. Recently, breast cancer stem cells (BCSCs) have attracted much attention, as a tumor- and metastasis-driving subpopulation...
Zomer, A.L.; Burghout, P.J.; Bootsma, H.J.; Hermans, P.W.M.; Hijum, S.A.F.T. van
High-throughput analysis of genome-wide random transposon mutant libraries is a powerful tool for (conditional) essential gene discovery. Recently, several next-generation sequencing approaches, e.g. Tn-seq/INseq, HITS and TraDIS, have been developed that accurately map the site of transposon
D. Lee Taylor; Michael G. Booth; Jack W. McFarland; Ian C. Herriott; Niall J. Lennon; Chad Nusbaum; Thomas G. Marr
High-throughput sequencing methods are widely used in analyses of microbial diversity but are generally applied to small numbers of samples, which precludes characterization of patterns of microbial diversity across space and time. We have designed a primer-tagging approach that allows pooling and subsequent sorting of numerous samples, which is directed to...
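The pooling-and-sorting step that a primer-tagging scheme enables amounts to demultiplexing reads by a short 5' tag. A minimal sketch, with invented tag sequences and a fixed tag length (real protocols also handle tag-sequencing errors, which this sketch routes to an "unassigned" pile):

```python
def demultiplex(reads, tag_to_sample, tag_len=6):
    """Sort pooled reads into per-sample bins by their 5' primer tag."""
    bins = {sample: [] for sample in tag_to_sample.values()}
    unassigned = []
    for read in reads:
        tag, insert = read[:tag_len], read[tag_len:]
        if tag in tag_to_sample:
            bins[tag_to_sample[tag]].append(insert)  # store tag-stripped read
        else:
            unassigned.append(read)                  # unknown tag
    return bins, unassigned

# Made-up tags and reads:
tags = {"AACCGG": "soil_A", "TTGGCC": "soil_B"}
reads = ["AACCGGATGCATGC", "TTGGCCGGTTAACC", "NNNNNNACGT"]
bins, rest = demultiplex(reads, tags)
print(bins["soil_A"])  # ['ATGCATGC']
print(len(rest))       # 1
```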
United States Environmental Protection Agency researchers have developed a Stochastic Human Exposure and Dose Simulation High-Throughput (SHEDS-HT) model for use in prioritization of chemicals under the ExpoCast program. In this research, new methods were implemented in SHEDS-HT...
Moreira Teixeira, Liliana; Leijten, Jeroen Christianus Hermanus; Sobral, J.; Jin, R.; van Apeldoorn, Aart A.; Feijen, Jan; van Blitterswijk, Clemens; Dijkstra, Pieter J.; Karperien, Hermanus Bernardus Johannes
Cell-based cartilage repair strategies such as matrix-induced autologous chondrocyte implantation (MACI) could be improved by enhancing cell performance. We hypothesised that micro-aggregates of chondrocytes generated in high-throughput prior to implantation in a defect could stimulate cartilaginous
Many chemicals in commerce today have undergone limited or no safety testing. To reduce the number of untested chemicals and prioritize limited testing resources, several governmental programs are using high-throughput in vitro screens for assessing chemical effects across multip...
Tschiersch, Henning; Junker, Astrid; Meyer, Rhonda C; Altmann, Thomas
Automated plant phenotyping has been established as a powerful new tool in studying plant growth, development and response to various types of biotic or abiotic stressors. Respective facilities mainly apply non-invasive imaging based methods, which enable the continuous quantification of the dynamics of plant growth and physiology during developmental progression. However, especially for plants of larger size, integrative, automated and high-throughput measurements of complex physiological parameters such as photosystem II efficiency determined through kinetic chlorophyll fluorescence analysis remain a challenge. We present the technical installations and the establishment of experimental procedures that allow the integrated high-throughput imaging of all commonly determined PSII parameters for small and large plants using kinetic chlorophyll fluorescence imaging systems (FluorCam, PSI) integrated into automated phenotyping facilities (Scanalyzer, LemnaTec). Besides determination of the maximum PSII efficiency, we focused on implementation of high-throughput-amenable protocols recording PSII operating efficiency (ΦPSII). Using the presented setup, this parameter is shown to be reproducibly measured in differently sized plants despite the corresponding variation in distance between plants and light source that caused small differences in incident light intensity. Values of ΦPSII obtained with the automated chlorophyll fluorescence imaging setup correlated very well with conventionally determined data using a spot-measuring chlorophyll fluorometer. The established high-throughput operating protocols enable the screening of up to 1080 small and 184 large plants per hour, respectively. The application of the implemented high-throughput protocols is demonstrated in screening experiments performed with large Arabidopsis and maize populations assessing natural variation in PSII efficiency. The incorporation of imaging systems suitable for kinetic chlorophyll
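The two PSII parameters named in this abstract are simple ratios of chlorophyll fluorescence levels, computed per plant (or per pixel) from the imaging data. A sketch using the standard definitions, with made-up fluorescence counts: maximum PSII efficiency Fv/Fm = (Fm − F0)/Fm from dark-adapted levels, and operating efficiency ΦPSII = (Fm′ − F′)/Fm′ from light-adapted levels.

```python
def fv_fm(f0, fm):
    """Maximum PSII efficiency from dark-adapted minimal (F0) and
    maximal (Fm) chlorophyll fluorescence."""
    return (fm - f0) / fm

def phi_psii(f_steady, fm_prime):
    """PSII operating efficiency from light-adapted steady-state (F')
    and maximal (Fm') chlorophyll fluorescence."""
    return (fm_prime - f_steady) / fm_prime

# Made-up counts for a single plant region:
print(round(fv_fm(300.0, 1500.0), 2))     # 0.8
print(round(phi_psii(700.0, 1200.0), 3))  # 0.417
```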
Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz
High-throughput technologies generate considerable amounts of data, which often require bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease predisposing variants in the next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merge, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility, in terms of input file handling, provides long term potential functionality in high-throughput analysis pipelines, as the program is not limited by the currently existing applications and data formats. HTDP is available as the Open Source software (https://github.com/pmadanecki/htdp).
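The kind of criteria-driven filtering described here for character-delimited files can be sketched with the standard library alone. The column names and the rule format below are assumptions for illustration, not HTDP's actual interface:

```python
import csv
import io

def filter_rows(text, criteria, delimiter="\t"):
    """Keep rows whose named columns take one of the allowed values.

    `criteria` maps column name -> set of allowed values, mimicking an
    externally supplied criteria set applied to BED/GFF/VCF-like tables.
    """
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return [row for row in reader
            if all(row.get(col) in allowed
                   for col, allowed in criteria.items())]

# Made-up VCF-like table with a header row:
table = "chrom\tpos\tfilter\nchr1\t100\tPASS\nchr2\t200\tLowQual\n"
kept = filter_rows(table, {"filter": {"PASS"}})
print([row["chrom"] for row in kept])  # ['chr1']
```

The same function covers the "external criteria files" use case by parsing the rule set from a second delimited file into the `criteria` dictionary.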
Rajora Om P
Background: High-throughput DNA isolation from plants is a major bottleneck for most studies requiring large sample sizes. A variety of protocols have been developed for DNA isolation from plants. However, many species, including conifers, have high contents of secondary metabolites that interfere with the extraction process or the subsequent analysis steps. Here, we describe a procedure for high-throughput DNA isolation from conifers. Results: We have developed a high-throughput DNA extraction protocol for conifers using an automated liquid handler and modifying the Qiagen MagAttract Plant Kit protocol. The modifications involve changes to the buffer system and improvements to the protocol so that it almost doubles the number of samples processed per kit, which significantly reduces the overall costs. We describe two versions of the protocol: one for medium-throughput (MTP) and another for high-throughput (HTP) DNA isolation. The HTP version works from start to end in the industry-standard 96-well format, while the MTP version provides higher DNA yields per sample processed. We have successfully used the protocol for DNA extraction and genotyping of thousands of individuals of several spruce and a pine species. Conclusion: A high-throughput system for DNA extraction from conifer needles and seeds has been developed and validated. The quality of the isolated DNA was comparable with that obtained from two commonly used methods: the silica-spin column and the classic CTAB protocol. Our protocol provides a fully automatable and cost-effective solution for processing large numbers of conifer samples.
Li, Fenglei [Iowa State Univ., Ames, IA (United States)
The purposes of our research were: (1) To develop an economical, easy to use, automated, high throughput system for large scale protein crystallization screening. (2) To develop a new protein crystallization method with high screening efficiency, low protein consumption and complete compatibility with high throughput screening system. (3) To determine the structure of lactate dehydrogenase complexed with NADH by x-ray protein crystallography to study its inherent structural properties. Firstly, we demonstrated large scale protein crystallization screening can be performed in a high throughput manner with low cost, easy operation. The overall system integrates liquid dispensing, crystallization and detection and serves as a whole solution to protein crystallization screening. The system can dispense protein and multiple different precipitants in nanoliter scale and in parallel. A new detection scheme, native fluorescence, has been developed in this system to form a two-detector system with a visible light detector for detecting protein crystallization screening results. This detection scheme has capability of eliminating common false positives by distinguishing protein crystals from inorganic crystals in a high throughput and non-destructive manner. The entire system from liquid dispensing, crystallization to crystal detection is essentially parallel, high throughput and compatible with automation. The system was successfully demonstrated by lysozyme crystallization screening. Secondly, we developed a new crystallization method with high screening efficiency, low protein consumption and compatibility with automation and high throughput. In this crystallization method, a gas permeable membrane is employed to achieve the gentle evaporation required by protein crystallization. Protein consumption is significantly reduced to nanoliter scale for each condition and thus permits exploring more conditions in a phase diagram for given amount of protein. In addition
Woodruff, Kristina; Maerkl, Sebastian J.
Mammalian synthetic biology could be augmented through the development of high-throughput microfluidic systems that integrate cellular transfection, culturing, and imaging. We created a microfluidic chip that cultures cells and implements 280 independent transfections at up to 99% efficiency. The chip can perform co-transfections, in which the number of cells expressing each protein and the average protein expression level can be precisely tuned as a function of input DNA concentration and synthetic gene circuits can be optimized on chip. We co-transfected four plasmids to test a histidine kinase signaling pathway and mapped the dose dependence of this network on the level of one of its constituents. The chip is readily integrated with high-content imaging, enabling the evaluation of cellular behavior and protein expression dynamics over time. These features make the transfection chip applicable to high-throughput mammalian protein and synthetic biology studies. PMID:27030663
Lapington, J S; Miller, G M; Ashton, T J R; Jarron, P; Despeisse, M; Powolny, F; Howorth, J; Milnes, J
High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high-content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies: small pore microchanne...
Mercier, Kelly A.; Powers, Robert
High-throughput screening (HTS) using NMR spectroscopy has become a common component of the drug discovery effort and is widely used throughout the pharmaceutical industry. NMR provides additional information about the nature of small molecule-protein interactions compared to traditional HTS methods. In order to achieve comparable efficiency, small molecules are often screened as mixtures in NMR-based assays. Nevertheless, an analysis of the efficiency of mixtures and a corresponding determination of the optimum mixture size (OMS) that minimizes the amount of material and instrumentation time required for an NMR screen has been lacking. A model for calculating OMS based on the application of the hypergeometric distribution function to determine the probability of a 'hit' for various mixture sizes and hit rates is presented. An alternative method for the deconvolution of large screening mixtures is also discussed. These methods have been applied in a high-throughput NMR screening assay using a small, directed library
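The hypergeometric model described above can be sketched directly: the probability that a mixture of size m, drawn without replacement from a library of N compounds containing K true hits, contains at least one hit is 1 − C(N−K, m)/C(N, m). The library and hit-rate numbers below are illustrative assumptions, not values from the paper.

```python
from math import comb

def p_at_least_one_hit(n_library: int, n_hits: int, mixture_size: int) -> float:
    """Hypergeometric probability that a mixture drawn without replacement
    from a library of n_library compounds (n_hits of them true hits)
    contains at least one hit."""
    return 1.0 - comb(n_library - n_hits, mixture_size) / comb(n_library, mixture_size)

# Illustrative numbers (assumption): a 1% hit rate in a 10,000-compound library.
for m in (2, 5, 10, 20):
    print(m, round(p_at_least_one_hit(10_000, 100, m), 3))
```

Scanning mixture sizes this way shows the trade-off the abstract refers to: larger mixtures raise the per-sample hit probability (and deconvolution burden), and the optimum balances that against spectrometer time and material.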
Yoshimoto, Nobuo; Kida, Akiko; Jie, Xu; Kurokawa, Masaya; Iijima, Masumi; Niimi, Tomoaki; Maturana, Andrés D.; Nikaido, Itoshi; Ueda, Hiroki R.; Tatematsu, Kenji; Tanizawa, Katsuyuki; Kondo, Akihiko; Fujii, Ikuo; Kuroda, Shun'ichi
When selecting the most appropriate cells from the huge numbers in a cell library for practical use in regenerative medicine and the production of biopharmaceuticals, the cell heterogeneity often found in an isogenic cell population limits the refinement of clonal cell culture. Here, we demonstrated high-throughput screening of the most suitable cells in a cell library by an automated, undisruptive single-cell analysis and isolation system, followed by expansion of isolated single cells. This system enabled establishment of the most suitable cells, such as embryonic stem cells with the highest expression of the pluripotency marker Rex1 and hybridomas with the highest antibody secretion, which could not be achieved by conventional high-throughput cell screening systems (e.g., a fluorescence-activated cell sorter). This single cell-based breeding system may be a powerful tool to analyze stochastic fluctuations and delineate their molecular mechanisms. PMID:23378922
Hura, Greg L.; Menon, Angeli L.; Hammel, Michal; Rambo, Robert P.; Poole II, Farris L.; Tsutakawa, Susan E.; Jenney Jr, Francis E.; Classen, Scott; Frankel, Kenneth A.; Hopkins, Robert C.; Yang, Sungjae; Scott, Joseph W.; Dillard, Bret D.; Adams, Michael W. W.; Tainer, John A.
We present an efficient pipeline enabling high-throughput analysis of protein structure in solution with small angle X-ray scattering (SAXS). Our SAXS pipeline combines automated sample handling of microliter volumes, temperature and anaerobic control, rapid data collection and data analysis, and couples structural analysis with automated archiving. We subjected 50 representative proteins, mostly from Pyrococcus furiosus, to this pipeline and found that 30 were multimeric structures in solution. SAXS analysis allowed us to distinguish aggregated and unfolded proteins, define global structural parameters and oligomeric states for most samples, identify shapes and similar structures for 25 unknown structures, and determine envelopes for 41 proteins. We believe that high-throughput SAXS is an enabling technology that may change the way that structural genomics research is done.
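One of the "global structural parameters" a SAXS analysis pipeline routinely extracts is the radius of gyration, via a Guinier fit at low scattering angle. The sketch below is a minimal illustration with synthetic data; the function name and data are assumptions, not part of the published pipeline.

```python
import math

def guinier_rg(q, intens):
    """Estimate the radius of gyration Rg from the Guinier approximation
    ln I(q) = ln I0 - (Rg**2 / 3) * q**2, by least squares of ln I
    versus q^2 (valid only in the low-q region, roughly q*Rg < 1.3)."""
    x = [qi * qi for qi in q]
    y = [math.log(i) for i in intens]
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    slope = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / \
            sum((xi - xb) ** 2 for xi in x)
    return math.sqrt(-3.0 * slope)

# Synthetic Guinier-region data (assumption) for a particle with Rg = 20 A.
q = [0.01 + 0.005 * k for k in range(10)]
intens = [100.0 * math.exp(-(20.0 ** 2) * qi ** 2 / 3.0) for qi in q]
print(round(guinier_rg(q, intens), 2))  # → 20.0
```

In a real pipeline this fit is preceded by checks for aggregation (upward curvature at low q), which is how the authors distinguish aggregated and unfolded proteins from well-behaved samples.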
Pedersen, Marlene Lemvig; Block, Ines; List, Markus
High-throughput screening of genome-wide siRNA or compound libraries is currently applied for drug target and drug discovery. Commonly, these approaches deal with sample numbers ranging from 100,000 to several millions. Efforts to decrease costs and to increase the information gained include integration into automated robotic high-throughput screens, which allows subsequent protein quantification. In this integrated solution, samples are directly forwarded to automated cell lysate preparation and preparation of dilution series, including reformatting to a protein spotter-compatible format after the high-throughput screening. Tracking of huge sample numbers and data analysis from a high-content screen to RPPAs is accomplished via MIRACLE, a custom-made software suite developed by us. To this end, we demonstrate that the RPPAs generated in this manner deliver reliable protein readouts and that GAPDH and TFR levels can...
Arapan, S.; Nieves, P.; Cuesta-López, S.
We study the capability of a structure-predicting method based on a genetic/evolutionary algorithm for high-throughput exploration of magnetic materials. We use the USPEX and VASP codes to predict stable structures and generate low-energy metastable structures for a set of representative magnetic systems comprising intermetallic alloys, oxides, interstitial compounds, and systems containing rare-earth elements, for both ferromagnetic and antiferromagnetic ordering. We have modified the interface between the USPEX and VASP codes to improve the performance of structural optimization and to perform calculations in a high-throughput manner. We show that exploring the structure phase space with a structure-predicting technique reveals large sets of low-energy metastable structures, which not only improve currently existing databases but may also provide understanding and solutions to stabilize and synthesize magnetic materials suitable for permanent-magnet applications.
Yiu Wai Lai, Michael Krause, Alan Savan, Sigurd Thienhaus, Nektarios Koukourakis, Martin R Hofmann and Alfred Ludwig
Full Text Available A high-throughput characterization technique based on digital holography for mapping film thickness in thin-film materials libraries was developed. Digital holographic microscopy is used for fully automatic measurements of the thickness of patterned films with nanometer resolution. The method has several significant advantages over conventional stylus profilometry: it is contactless and fast, substrate bending is compensated, and the experimental setup is simple. Patterned films prepared by different combinatorial thin-film approaches were characterized to investigate and demonstrate this method. The results show that this technique is valuable for the quick, reliable and high-throughput determination of the film thickness distribution in combinatorial materials research. Importantly, it can also be applied to thin films that have been structured by shadow masking.
Full Text Available The selectivity of the adaptive immune response is based on the enormous diversity of T and B cell antigen-specific receptors. The immune repertoire, the collection of T and B cells with functional diversity in the circulatory system at any given time, is dynamic and reflects the essence of immune selectivity. In this article, we review recent advances in immune repertoire studies of infectious diseases achieved by traditional techniques and by high-throughput sequencing techniques. High-throughput sequencing enables the determination of the complementarity-determining regions of lymphocyte receptors with unprecedented efficiency and scale. This progress in methodology enhances the understanding of immunologic changes during pathogen challenge and also provides a basis for further development of novel diagnostic markers, immunotherapies and vaccines.
Bedenbaugh, John E; Kim, Sungtak; Sasmaz, Erdem; Lauterbach, Jochen
Portable power technologies for military applications necessitate the production of fuels similar to LPG from existing feedstocks. Catalytic cracking of military jet fuel to form a mixture of C₂-C₄ hydrocarbons was investigated using high-throughput experimentation. Cracking experiments were performed in a gas-phase, 16-sample high-throughput reactor. Zeolite ZSM-5 catalysts with low Si/Al ratios (≤25) demonstrated the highest production of C₂-C₄ hydrocarbons at moderate reaction temperatures (623-823 K). ZSM-5 catalysts were optimized for JP-8 cracking activity to LPG through varying reaction temperature and framework Si/Al ratio. The reducing atmosphere required during catalytic cracking resulted in coking of the catalyst and a commensurate decrease in conversion rate. Rare earth metal promoters for ZSM-5 catalysts were screened to reduce coking deactivation rates, while noble metal promoters reduced onset temperatures for coke burnoff regeneration.
Mei, Feng; Fancy, Stephen P J; Shen, Yun-An A; Niu, Jianqin; Zhao, Chao; Presley, Bryan; Miao, Edna; Lee, Seonok; Mayoral, Sonia R; Redmond, Stephanie A; Etxeberria, Ainhoa; Xiao, Lan; Franklin, Robin J M; Green, Ari; Hauser, Stephen L; Chan, Jonah R
Functional screening for compounds that promote remyelination represents a major hurdle in the development of rational therapeutics for multiple sclerosis. Screening for remyelination is problematic, as myelination requires the presence of axons. Standard methods do not resolve cell-autonomous effects and are not suited for high-throughput formats. Here we describe a binary indicant for myelination using micropillar arrays (BIMA). Engineered with conical dimensions, micropillars permit resolution of the extent and length of membrane wrapping from a single two-dimensional image. Confocal imaging acquired from the base to the tip of the pillars allows for detection of concentric wrapping observed as 'rings' of myelin. The platform is formatted in 96-well plates, amenable to semiautomated random acquisition and automated detection and quantification. Upon screening 1,000 bioactive molecules, we identified a cluster of antimuscarinic compounds that enhance oligodendrocyte differentiation and remyelination. Our findings demonstrate a new high-throughput screening platform for potential regenerative therapeutics in multiple sclerosis.
Jagannadh, Veerendra Kalyan; Bhat, Bindu Prabhath; Nirupa Julius, Lourdes Albina; Gorthi, Sai Siva
In this article, we present a novel approach to throughput enhancement in miniaturized microfluidic microscopy systems. Using the presented approach, we demonstrate an inexpensive yet high-throughput analytical instrument. Using the high-throughput analytical instrument, we have been able to achieve about 125,880 cells per minute (more than one hundred and twenty five thousand cells per minute), even while employing cost-effective low frame rate cameras (120 fps). The throughput achieved here is a notable progression in the field of diagnostics as it enables rapid quantitative testing and analysis. We demonstrate the applicability of the instrument to point-of-care diagnostics, by performing blood cell counting. We report a comparative analysis between the counts (in cells per μl) obtained from our instrument, with that of a commercially available hematology analyzer.
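The reported throughput follows from simple arithmetic on the camera frame rate and the number of analyzable cells captured per frame. The cells-per-frame figure below is back-calculated from the reported numbers, an assumption rather than a value stated in the article.

```python
def cells_per_minute(frame_rate_fps: float, cells_per_frame: float) -> float:
    """Throughput of a frame-based imaging flow system: each frame
    contributes `cells_per_frame` analyzable cells."""
    return frame_rate_fps * cells_per_frame * 60.0

# Back-calculation (assumption): the reported 125,880 cells/min at
# 120 fps corresponds to roughly 17.5 cells analyzed per frame.
print(round(cells_per_minute(120, 125_880 / (120 * 60))))  # → 125880
```

This makes the design point explicit: with a low-cost 120 fps camera, throughput gains must come from packing more resolvable cells into each field of view rather than from faster acquisition.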
Microdroplets offer unique compartments for accommodating a large number of chemical and biological reactions in tiny volume with precise control. A major concern in droplet-based microfluidics is the difficulty to address droplets individually and achieve high throughput at the same time. Here, we have combined an improved cartridge sampling technique with a microfluidic chip to perform droplet screenings and aggressive reaction with minimal (nanoliter-scale) reagent consumption. The droplet composition, distance, volume (nanoliter to subnanoliter scale), number, and sequence could be precisely and digitally programmed through the improved sampling technique, while sample evaporation and cross-contamination are effectively eliminated. Our combined device provides a simple model to utilize multiple droplets for various reactions with low reagent consumption and high throughput. © 2012 American Chemical Society.
Suram, Santosh K.; Newhouse, Paul F.; Zhou, Lan; Van Campen, Douglas G.; Mehta, Apurva; Gregoire, John M.
Combinatorial materials science strategies have accelerated materials development in a variety of fields, and we extend these strategies to enable structure-property mapping for light-absorber materials, particularly in high-order composition spaces. High-throughput optical spectroscopy and synchrotron X-ray diffraction are combined to identify the optical properties of Bi-V-Fe oxides, leading to the identification of Bi4V1.5Fe0.5O10.5 as a light absorber with a direct band gap near 2.7 eV. Here, the strategic combination of experimental and data-analysis techniques includes automated Tauc analysis to estimate band gap energies from the high-throughput spectroscopy data, providing an automated platform for identifying new optical materials.
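The core of an automated Tauc analysis for a direct-allowed transition is a linear fit of (αE)² versus photon energy E above the absorption edge, extrapolated to the E axis. The sketch below uses synthetic data built around the 2.7 eV gap mentioned above; the function, data, and choice of linear region are illustrative assumptions, not the authors' algorithm.

```python
def tauc_direct_gap(energies_eV, absorbance):
    """Estimate a direct band gap by least-squares fitting (alpha*E)**2
    versus photon energy E in an assumed linear region, and extrapolating
    to the E axis (Tauc method for direct-allowed transitions)."""
    y = [(a * e) ** 2 for e, a in zip(energies_eV, absorbance)]
    n = len(energies_eV)
    xb, yb = sum(energies_eV) / n, sum(y) / n
    slope = sum((x - xb) * (yi - yb) for x, yi in zip(energies_eV, y)) / \
            sum((x - xb) ** 2 for x in energies_eV)
    intercept = yb - slope * xb
    return -intercept / slope  # x-intercept = band gap estimate

# Synthetic linear-region data (assumption): (alpha*E)^2 = E - 2.7 above the gap.
E = [2.8, 2.9, 3.0, 3.1, 3.2]
alpha = [((e - 2.7) ** 0.5) / e for e in E]
print(round(tauc_direct_gap(E, alpha), 2))  # → 2.7
```

Automating this for thousands of library spectra mainly adds a step the sketch omits: selecting the linear region robustly for each spectrum before fitting.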
The pharmaceutical industry has come under increasing pressure due to regulatory restrictions on the marketing and pricing of drugs, competition, and the escalating costs of developing new drugs. These forces can be addressed by the identification of novel targets, reductions in the development time of new drugs, and increased productivity. Emphasis has been placed on identifying and validating new targets and on lead generation: the response from industry has been very evident in genomics and high throughput screening, where new technologies have been applied, usually coupled with a high degree of automation. The combination of numerous new potential biological targets and the ability to screen large numbers of compounds against many of these targets has generated the need for large diverse compound collections. To address this requirement, high-throughput chemistry has become an integral part of the drug discovery process. Copyright 2002 Wiley-Liss, Inc.
Lee, Tsung-Hua; Chang, Jo-Shu; Wang, Hsiang-Yu
Microalgae have emerged as one of the most promising feedstocks for biofuels and bio-based chemical production. However, due to the lack of effective tools enabling rapid and high-throughput analysis of the content of microalgae biomass, the efficiency of screening and identification of microalgae with desired functional components from the natural environment is usually quite low. Moreover, the real-time monitoring of the production of target components from microalgae is also difficult. Recently, research efforts focusing on overcoming this limitation have started. In this review, the recent development of high-throughput methods for analyzing microalgae cellular contents is summarized. The future prospects and impacts of these detection methods in microalgae-related processing and industries are also addressed. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Huang, Haiming; Sidhu, Sachdev S
Peptide recognition modules (PRMs) play critical roles in cellular processes, including differentiation, proliferation and cytoskeleton organization. PRMs normally bind to short linear motifs in protein ligands, and by so doing recruit proteins into signaling complexes. Based on the binding specificity profile of a PRM, one can predict putative natural interaction partners by searching genome databases. Candidate interaction partners can in turn provide clues to assemble potential in vivo protein complexes that the PRM may be involved with. Combinatorial peptide libraries have proven to be effective tools for profiling the binding specificities of PRMs. Herein, we describe high-throughput methods for the expression and purification of PRM proteins and the use of peptide-phage libraries for PRM specificity profiling. These high-throughput methods greatly expedite the study of PRM families on a genome-wide scale.
Liu, X; Painter, R E; Enesa, K; Holmes, D; Whyte, G; Garlisi, C G; Monsma, F J; Rehak, M; Craig, F F; Smith, C A
The prevalence of clinically-relevant bacterial strains resistant to current antibiotic therapies is increasing and has been recognized as a major health threat. For example, multidrug-resistant tuberculosis and methicillin-resistant Staphylococcus aureus are of global concern. Novel methodologies are needed to identify new targets or novel compounds unaffected by pre-existing resistance mechanisms. Recently, water-in-oil picodroplets have been used as an alternative to conventional high-throughput methods, especially for phenotypic screening. Here we demonstrate a novel microfluidic-based picodroplet platform which enables high-throughput assessment and isolation of antibiotic-resistant bacteria in a label-free manner. As a proof-of-concept, the system was used to isolate fusidic acid-resistant mutants and estimate the frequency of resistance among a population of Escherichia coli (strain HS151). This approach can be used for rapid screening of rare antibiotic-resistant mutants to help identify novel compound/target pairs.
Tani, Hidenori; Akimitsu, Nobuyoshi; Fujita, Osamu; Matsuda, Yasuyoshi; Miyata, Ryo; Tsuneda, Satoshi; Igarashi, Masayuki; Sekiguchi, Yuji; Noda, Naohiro
We have developed a novel high-throughput screening assay of hepatitis C virus (HCV) nonstructural protein 3 (NS3) helicase inhibitors using the fluorescence-quenching phenomenon via photoinduced electron transfer between fluorescent dyes and guanine bases. We prepared double-stranded DNA (dsDNA) with a 5'-fluorescent-dye (BODIPY FL)-labeled strand hybridized with a complementary strand, the 3'-end of which has guanine bases. When dsDNA is unwound by helicase, the dye emits fluorescence owing to its release from the guanine bases. Our results demonstrate that this assay is suitable for quantitative assay of HCV NS3 helicase activity and useful for high-throughput screening for inhibitors. Furthermore, we applied this assay to the screening for NS3 helicase inhibitors from cell extracts of microorganisms, and found several cell extracts containing potential inhibitors.
Pärnamaa, Tanel; Parts, Leopold
High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per cell localization classification accuracy of 91%, and per protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy. Copyright © 2017 Parnamaa and Parts.
Cao, Kevin; Liu, Yang; Tucker, Christopher; Baumann, Michael; Grit, Grote; Lakso, Steven
A new method was developed to measure the rheology of extrudable ceramic pastes using a Hamilton MicroLab Star liquid handler. The Hamilton instrument, normally used for high throughput liquid processing, was expanded to function as a low pressure capillary rheometer. Diluted ceramic pastes were forced through the modified pipettes, which produced pressure drop data that was converted to standard rheology data. A known ceramic paste containing cellulose ether was made and diluted to various concentrations in water. The most dilute paste samples were tested in the Hamilton instrument and the more typical, highly concentrated, ceramic paste were tested with a hydraulic ram extruder fitted with a capillary die and pressure measurement system. The rheology data from this study indicates that the dilute high throughput method using the Hamilton instrument correlates to, and can predict, the rheology of concentrated ceramic pastes normally used in ceramic extrusion production processes.
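The conversion from pressure-drop data to rheology in a capillary geometry rests on two standard relations: wall shear stress τ = ΔP·R/(2L) and apparent (Newtonian) wall shear rate γ̇ = 4Q/(πR³). The sketch below applies them with illustrative numbers; the values and function are assumptions, not data from the study.

```python
import math

def capillary_rheology(delta_p_pa: float, flow_m3_s: float,
                       radius_m: float, length_m: float):
    """Convert capillary pressure-drop data to apparent rheology:
    wall shear stress tau = dP*R/(2L), apparent wall shear rate
    gamma = 4Q/(pi*R^3), and their ratio, the apparent viscosity."""
    tau = delta_p_pa * radius_m / (2.0 * length_m)
    gamma = 4.0 * flow_m3_s / (math.pi * radius_m ** 3)
    return tau, gamma, tau / gamma

# Illustrative numbers (assumption): 50 kPa drop over a 30 mm long,
# 0.5 mm radius capillary at a flow rate of 1 mm^3/s.
tau, gamma, eta = capillary_rheology(50e3, 1e-9, 0.5e-3, 30e-3)
print(f"{tau:.1f} Pa, {gamma:.2f} 1/s, {eta:.1f} Pa*s")
```

For shear-thinning pastes a full treatment would add the Rabinowitsch correction to the apparent shear rate, which the simple ratio above omits.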
Wang, Qiang; Waterhouse, Nicklas; Feyijinmi, Olusegun; Dominguez, Matthew J.; Martinez, Lisa M.; Sharp, Zoey; Service, Rachel; Bothe, Jameson R.; Stollar, Elliott J.
The kinetics of folding and unfolding underlie protein stability and quantification of these rates provides important insights into the folding process. Here, we present a simple high throughput protein unfolding kinetic assay using a plate reader that is applicable to the studies of the majority of 2-state folding proteins. We validate the assay by measuring kinetic unfolding data for the SH3 (Src Homology 3) domain from Actin Binding Protein 1 (AbpSH3) and its stabilized mutants. The results of our approach are in excellent agreement with published values. We further combine our kinetic assay with a plate reader equilibrium assay, to obtain indirect estimates of folding rates and use these approaches to characterize an AbpSH3-peptide hybrid. Our high throughput protein unfolding kinetic assays allow accurate screening of libraries of mutants by providing both kinetic and equilibrium measurements and provide a means for in-depth ϕ-value analyses. PMID:26745729
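For a 2-state protein, a plate-reader unfolding trace is a single exponential, S(t) = baseline + A·exp(−k_u·t), so the unfolding rate can be pulled out by linear least squares on ln(S − baseline) versus t. The sketch below uses a synthetic noiseless trace; the rate, amplitude, and baseline are assumed values for illustration only.

```python
import math

def fit_unfolding_rate(times, signal, baseline):
    """Extract a single-exponential unfolding rate k_u from a kinetic
    trace S(t) = baseline + A*exp(-k_u*t) by linear least squares on
    ln(S - baseline) versus t (2-state model, known end-point baseline)."""
    y = [math.log(s - baseline) for s in signal]
    n = len(times)
    tb, yb = sum(times) / n, sum(y) / n
    slope = sum((t - tb) * (yi - yb) for t, yi in zip(times, y)) / \
            sum((t - tb) ** 2 for t in times)
    return -slope

# Synthetic trace (assumption): k_u = 0.05 s^-1, amplitude 1.0, baseline 0.2.
t = [0, 10, 20, 30, 40, 50]
S = [0.2 + 1.0 * math.exp(-0.05 * ti) for ti in t]
print(round(fit_unfolding_rate(t, S, 0.2), 4))  # → 0.05
```

With noisy plate-reader data a direct nonlinear fit of all three parameters is more robust, but the linearized form shows why only a handful of time points per well suffices for screening mutant libraries.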
Laura A. Smith Callahan
Full Text Available Combinatorial method/high throughput strategies, which have long been used in the pharmaceutical industry, have recently been applied to hydrogel optimization for tissue engineering applications. Although many combinatorial methods have been developed, few are suitable for use in tissue engineering hydrogel optimization. Currently, only three approaches (design of experiment, arrays and continuous gradients have been utilized. This review highlights recent work with each approach. The benefits and disadvantages of design of experiment, array and continuous gradient approaches depending on study objectives and the general advantages of using combinatorial methods for hydrogel optimization over traditional optimization strategies will be discussed. Fabrication considerations for combinatorial method/high throughput samples will additionally be addressed to provide an assessment of the current state of the field, and potential future contributions to expedited material optimization and design.
Musah, Rabi A.; Espinoza, Edgard O.; Cody, Robert B.; Lesiak, Ashton D.; Christensen, Earl D.; Moore, Hannah E.; Maleknia, Simin; Drijfhout, Falko P.
A high throughput method for species identification and classification through chemometric processing of direct analysis in real time (DART) mass spectrometry-derived fingerprint signatures has been developed. The method entails introduction of samples to the open air space between the DART ion source and the mass spectrometer inlet, with the entire observed mass spectral fingerprint subjected to unsupervised hierarchical clustering processing. A range of both polar and non-polar chemotypes a...
Fan, Beiyuan; Li, Xiufeng; Chen, Deyong; Peng, Hongshang; Wang, Junbo; Chen, Jian
This article reviews recent developments in microfluidic systems enabling high-throughput characterization of single-cell proteins. Four key perspectives of microfluidic platforms are included in this review: (1) microfluidic fluorescent flow cytometry; (2) droplet based microfluidic flow cytometry; (3) large-array micro wells (microengraving); and (4) large-array micro chambers (barcode microchips). We examine the advantages and limitations of each technique and discuss future research oppor...
Hasan, Raqibul; Taha, Tarek
General-purpose computing systems are used for a large variety of applications. Extensive support for flexibility in these systems limits their energy efficiency. Neural networks, including deep networks, are widely used for signal processing and pattern recognition applications. In this paper we propose a multicore architecture for deep neural network based processing. Memristor crossbars are utilized to provide low-power, high-throughput execution of neural networks. The system has both tr...
Woodruff, Kristina Pan
With the advent of high-throughput and genome-wide screening initiatives, there is a need for improved methods for cell-based assays. Current approaches require expensive equipment, rely on large-scale culturing formats not suited for small or rare sample types, or involve tedious manual handling. Microfluidic systems could provide a solution to these limitations, since these assays are accessible, miniaturized, and automated. When coupled with high-content analysis, microfluidics has the pot...
Jia, Baolei; Jeon, Che Ok
The ease of genetic manipulation, low cost, rapid growth and number of previous studies have made Escherichia coli one of the most widely used microorganism species for producing recombinant proteins. In this post-genomic era, challenges remain to rapidly express and purify large numbers of proteins for academic and commercial purposes in a high-throughput manner. In this review, we describe several state-of-the-art approaches that are suitable for the cloning, expression and purification, co...
Puskás, László G; Ménesi, Dalma; Fehér, Liliána Z; Kitajka, Klára
The applications of 'omics' (genomics, transcriptomics, proteomics and metabolomics) technologies in nutritional studies have opened new possibilities to understand the effects and action of different diets in both healthy and diseased states, and help to define personalized diets and to develop new drugs that revert or prevent negative dietary effects. Several single nucleotide polymorphisms have already been investigated for potential gene-diet interactions in the response to different lipid diets. It is also well known that, besides the known cellular effects of lipid nutrition, dietary lipids influence gene expression in a tissue-, concentration- and age-dependent manner. Protein expression and post-translational changes due to different diets have been reported as well. To understand the molecular basis of the effects and roles of dietary lipids, high-throughput functional genomic methods such as DNA or protein microarrays, high-throughput NMR and mass spectrometry are needed to assess the changes in a global way at the genome, transcriptome, proteome and metabolome levels. The present review focuses on different high-throughput technologies for assessing the effects of dietary fatty acids, including cholesterol and polyunsaturated fatty acids. Several genes were identified that exhibited altered expression in response to fish-oil treatment of human lung cancer cells, including the protein kinase C, natriuretic peptide receptor-A, PKNbeta, interleukin-1 receptor-associated kinase-1 (IRAK-1) and diacylglycerol kinase genes, using high-throughput quantitative real-time PCR. Other results obtained from cholesterol- and polyunsaturated fatty acid-fed animals using DNA and protein microarrays will also be discussed.
Podolská, Kateřina; Sedlák, David; Bartůněk, Petr; Svoboda, Petr
Vol. 19, No. 3 (2014), pp. 417-426. ISSN 1087-0571. R&D Projects: GA ČR GA13-29531S; GA MŠk(CZ) LC06077; GA MŠk LM2011022. Grant - others: EMBO(DE) 1483. Institutional support: RVO:68378050. Keywords: Dicer * siRNA * high-throughput screening. Subject RIV: EB - Genetics; Molecular Biology. Impact factor: 2.423, year: 2014
In this communication, we report a newly developed cell patterning methodology using a silicon-based stencil, which offers advantages such as easy handling, reusability, a hydrophilic surface and mature fabrication technologies. Cell arrays obtained by this method were used to investigate cell growth under a temperature gradient, demonstrating the possibility of studying cell behavior in a high-throughput assay. This journal is © The Royal Society of Chemistry 2011.
Emanuel, George; Moffitt, Jeffrey R; Zhuang, Xiaowei
We report a high-throughput screening method that allows diverse genotypes and corresponding phenotypes to be imaged in individual cells. We achieve genotyping by introducing barcoded genetic variants into cells as pooled libraries and reading the barcodes out using massively multiplexed fluorescence in situ hybridization. To demonstrate the power of image-based pooled screening, we identified brighter and more photostable variants of the fluorescent protein YFAST among 60,000 variants.
Bryant, William A; Sternberg, Michael JE; Pinney, John W
Background: With the continued proliferation of high-throughput biological experiments, there is a pressing need for tools to integrate the data produced in ways that yield biologically meaningful conclusions. Many microarray studies have analysed transcriptomic data from a pathway perspective, for instance by testing for KEGG pathway enrichment in sets of upregulated genes. However, the increasing availability of species-specific metabolic models provides the opportunity to analyse these da...
Schumacher, Jorn; Plessl, Christian; Vandelli, Wainer
HPC network technologies like InfiniBand, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well-specified number of systems. Unfortunately, traditional network communication APIs for HPC clusters like MPI or PGAS target the HPC community exclusively and are not well suited for DAQ appli...
Ernstsen, Christina L; Login, Frédéric H; Jensen, Helene H
Quantification of intracellular bacterial colonies is useful in strategies directed against bacterial attachment, subsequent cellular invasion and intracellular proliferation. An automated, high-throughput microscopy method was established to quantify the number and size of intracellular bacterial...... of cell nuclei were automatically quantified using a spot detection tool. The spot detection output was exported to Excel, where data analysis was performed. In this article, micrographs and spot detection data are made available to facilitate implementation of the method.
Van Nostrand, Joy D.; Liang, Yuting; He, Zhili; Li, Guanghe; Zhou, Jizhong
GeoChip is a comprehensive functional gene array that targets key functional genes involved in the geochemical cycling of N, C, and P, sulfate reduction, metal resistance and reduction, and contaminant degradation. Studies have shown the GeoChip to be a sensitive, specific, and high-throughput tool for microbial community analysis that has the power to link geochemical processes with microbial community structure. However, several challenges remain regarding the development and applications of microarrays for microbial community analysis.
William R. Kenealy; Thomas W. Jeffries
High-throughput screening requires simple assays that give reliable quantitative results. A microplate assay was developed for reducing sugar analysis that uses a 2,2'-bicinchoninic acid-based protein reagent. Endo-1,4-β-D-xylanase activity against oat spelt xylan was detected at activities of 0.002 to 0.011 IU ml−1. The assay is linear for sugar...
Lagus, Todd P.; Edd, Jon F.
Microfluidic encapsulation methods have been previously utilized to capture cells in picoliter-scale aqueous, monodisperse drops, providing confinement from a bulk fluid environment with applications in high throughput screening, cytometry, and mass spectrometry. We describe a method to not only encapsulate single cells, but to repeatedly capture a set number of cells (here we demonstrate one- and two-cell encapsulation) to study both isolation and the interactions between cells in groups of ...
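Controlled encapsulation schemes like the one described above are usually benchmarked against random (Poisson) cell loading, where the number of cells per drop follows a Poisson distribution. The sketch below shows that textbook baseline calculation; the mean-loading value is illustrative and not taken from the study.

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k cells in a drop for a mean loading of lam cells/drop."""
    return lam**k * exp(-lam) / factorial(k)

# Fraction of drops with exactly one cell at a (hypothetical) mean loading of 0.1
lam = 0.1
p1 = poisson_pmf(1, lam)       # ~0.0905: under random loading, <10% of drops are single-cell
p_empty = poisson_pmf(0, lam)  # ~0.905: most drops are empty
```

This is why deterministic or ordered encapsulation is attractive: random loading wastes most drops, whereas the method above repeatedly captures a set number of cells per drop.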
Dhayakaran, Rekha; Neethirajan, Suresh; Weng, Xuan
Background Antimicrobial resistance is a great concern in the medical community, as well as in the food industry. Soy peptides were tested against bacterial biofilms for their antimicrobial activity. A high-throughput drug screening assay was developed using microfluidic technology, Raman spectroscopy, and optical microscopy for rapid screening of antimicrobials and rapid identification of pathogens. Methods Synthesized PGTAVFK and IKAFKEATKVDKVVVLWTA soy peptides were tested against Pseudomonas aer...
Cribb, Jeremy; Osborne, Lukas D.; Hsiao, Joe Ping-Lin; Vicci, Leandra; Meshram, Alok; O'Brien, E. Tim; Spero, Richard Chasen; Taylor, Russell; Superfine, Richard
In the last decade, the emergence of high throughput screening has enabled the development of novel drug therapies and elucidated many complex cellular processes. Concurrently, the mechanobiology community has developed tools and methods to show that the dysregulation of biophysical properties and the biochemical mechanisms controlling those properties contribute significantly to many human diseases. Despite these advances, a complete understanding of the connection between biomechanics and disease will require advances in instrumentation that enable parallelized, high throughput assays capable of probing complex signaling pathways, studying biology in physiologically relevant conditions, and capturing specimen and mechanical heterogeneity. Traditional biophysical instruments are unable to meet this need. To address the challenge of large-scale, parallelized biophysical measurements, we have developed an automated array high-throughput microscope system that utilizes passive microbead diffusion to characterize mechanical properties of biomaterials. The instrument is capable of acquiring data on twelve channels simultaneously, where each channel in the system can independently drive two-channel fluorescence imaging at up to 50 frames per second. We employ this system to measure the concentration-dependent apparent viscosity of hyaluronan, an essential polymer found in connective tissue and whose expression has been implicated in cancer progression.
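Passive-microbead measurements of apparent viscosity typically rest on the Stokes-Einstein relation between a bead's diffusion coefficient and the viscosity of the surrounding medium. The sketch below shows that textbook calculation under simplifying assumptions (purely viscous medium, 2-D particle tracking); it is not the instrument's actual analysis pipeline.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def apparent_viscosity(msd_slope_m2_per_s: float, bead_radius_m: float,
                       temperature_k: float = 298.15) -> float:
    """Apparent viscosity (Pa*s) from the slope of a 2-D mean-squared-displacement
    curve (MSD = 4*D*t), via the Stokes-Einstein relation D = kT / (6*pi*eta*r)."""
    diffusion = msd_slope_m2_per_s / 4.0  # 2-D tracking: MSD = 4 D t
    return K_B * temperature_k / (6.0 * math.pi * bead_radius_m * diffusion)

# Example: a 0.5-um-radius bead with water-like diffusion gives eta ~ 1 mPa*s
eta = apparent_viscosity(msd_slope_m2_per_s=1.75e-12, bead_radius_m=0.5e-6)
```

For a viscoelastic biomaterial, the MSD is not linear in time and a full microrheology analysis (generalized Stokes-Einstein) is required; the linear form above is only the simplest limiting case.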
Amplicon-based sequencing strategies that include 16S rRNA and functional genes, alongside “meta-omics” analyses of communities of microorganisms, have allowed researchers to pose questions and find answers to “who” is present in the environment and “what” they are doing. Next-generation sequencing approaches that aid microbial ecology studies of agricultural systems are fast gaining popularity among agronomy, crop, soil, and environmental science researchers. Given the rapid development of these high-throughput sequencing techniques, researchers with no prior experience will desire information about the best practices that can be used before actually starting high-throughput amplicon-based sequence analyses. We have outlined items that need to be carefully considered in experimental design, sampling, basic bioinformatics, sequencing of mock communities and negative controls, acquisition of metadata, and in standardization of reaction conditions as per experimental requirements. Not all considerations mentioned here may pertain to a particular study. The overall goal is to inform researchers about considerations that must be taken into account when conducting high-throughput microbial DNA sequencing and sequences analysis.
Background The differentiation of naive T and B cells into memory lymphocytes is essential for immunity to pathogens. Therapeutic manipulation of this cellular differentiation program could improve vaccine efficacy and the in vitro expansion of memory cells. However, chemical screens to identify compounds that induce memory differentiation have been limited by the lack of (1) reporter-gene or functional assays that can distinguish naive and memory-phenotype T cells at high throughput and (2) a suitable cell line representative of naive T cells. Results Here, we describe a method for gene-expression based screening that allows primary naive and memory-phenotype lymphocytes to be discriminated based on complex gene signatures corresponding to these differentiation states. We used ligation-mediated amplification and a fluorescent, bead-based detection system to quantify simultaneously 55 transcripts representing naive and memory-phenotype signatures in purified populations of human T cells. The use of a multi-gene panel allowed better resolution than any constituent single gene. The method was precise, correlated well with Affymetrix microarray data, and could be easily scaled up for high throughput. Conclusion This method provides a generic solution for high-throughput differentiation screens in primary human T cells where no single-gene or functional assay is available. This screening platform will allow the identification of small molecules, genes or soluble factors that direct memory differentiation in naive human lymphocytes.
Najafov, Jamil; Najafov, Ayaz
Modern high-throughput screening methods allow researchers to generate large datasets that potentially contain important biological information. However, oftentimes, picking relevant hits from such screens and generating testable hypotheses requires training in bioinformatics and the skills to efficiently perform database mining. There are currently no tools available to the general public that allow users to cross-reference their screen datasets with published screen datasets. To this end, we developed CrossCheck, an online platform for high-throughput screen data analysis. CrossCheck is a centralized database that allows effortless comparison of the user-entered list of gene symbols with 16,231 published datasets. These datasets include published data from genome-wide RNAi and CRISPR screens, interactome proteomics and phosphoproteomics screens, cancer mutation databases, low-throughput studies of major cell signaling mediators, such as kinases, E3 ubiquitin ligases and phosphatases, and gene ontological information. Moreover, CrossCheck includes a novel database of predicted protein kinase substrates, which was developed using proteome-wide consensus motif searches. CrossCheck dramatically simplifies high-throughput screen data analysis and enables researchers to dig deep into the published literature and streamline data-driven hypothesis generation. CrossCheck is freely accessible as a web-based application at http://proteinguru.com/crosscheck.
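At its core, cross-referencing a user's hit list against published datasets is a set-intersection problem. The toy sketch below illustrates that idea only; the dataset names and gene symbols are invented and do not reflect CrossCheck's actual contents or API.

```python
def cross_reference(user_genes, datasets):
    """Return, per dataset, the sorted overlap with a user-supplied gene list.
    `datasets` maps dataset name -> set of gene symbols (hypothetical data)."""
    query = {g.upper() for g in user_genes}  # normalize case before comparing
    return {name: sorted(query & genes) for name, genes in datasets.items()}

# Hypothetical published screen hits, for illustration only
datasets = {
    "RNAi_screen_A": {"TP53", "MTOR", "RIPK1"},
    "CRISPR_screen_B": {"MTOR", "AKT1"},
}
hits = cross_reference(["mtor", "RIPK1", "BRCA1"], datasets)
# hits == {"RNAi_screen_A": ["MTOR", "RIPK1"], "CRISPR_screen_B": ["MTOR"]}
```

A real platform would additionally handle gene aliases and report enrichment statistics rather than raw overlaps.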
Schuster, Andre; Bruno, Kenneth S.; Collett, James R.; Baker, Scott E.; Seiboth, Bernhard; Kubicek, Christian P.; Schmoll, Monika
The ascomycete fungus, Trichoderma reesei (anamorph of Hypocrea jecorina), represents a biotechnological workhorse and is currently one of the most proficient cellulase producers. While strain improvement was traditionally accomplished by random mutagenesis, a detailed understanding of cellulase regulation can only be gained using recombinant technologies. RESULTS: Aiming at high efficiency and high throughput methods, we present here a construction kit for gene knock out in T. reesei. We provide a primer database for gene deletion using the pyr4, amdS and hph selection markers. For high throughput generation of gene knock outs, we constructed vectors using yeast mediated recombination and then transformed a T. reesei strain deficient in non-homologous end joining (NHEJ) by spore electroporation. This NHEJ-defect was subsequently removed by crossing of mutants with a sexually competent strain derived from the parental strain, QM9414. CONCLUSIONS: Using this strategy and the materials provided, high throughput gene deletion in T. reesei becomes feasible. Moreover, with the application of sexual development, the NHEJ-defect can be removed efficiently and without the need for additional selection markers. The same advantages apply for the construction of multiple mutants by crossing of strains with different gene deletions, which is now possible with considerably less hands-on time and minimal screening effort compared to a transformation approach. Consequently this toolkit can considerably boost research towards efficient exploitation of the resources of T. reesei for cellulase expression and hence second generation biofuel production.
Wang, Daqian; Ding, Lili; Zhang, Wei; Zhang, Enyao; Yu, Xinglong; Luo, Zhaofeng; Ou, Huichao
A new high-throughput surface plasmon resonance (SPR) biosensor based on differential interferometric imaging is reported. The two SPR interferograms of the sensing surface are imaged on two CCD cameras. The phase difference between the two interferograms is 180°. The refractive index related factor (RIRF) of the sensing surface is calculated from the two simultaneously acquired interferograms. The simulation results indicate that the RIRF exhibits a linear relationship with the refractive index of the sensing surface and is unaffected by the noise, drift and intensity distribution of the light source. The affinity and kinetic information can be extracted in real time from continuously acquired RIRF distributions. The results of refractometry experiments show that the dynamic detection range of the SPR differential interferometric imaging system can be over 0.015 refractive index units (RIU). The refractive index resolution is down to 0.45 RU (1 RU = 1 × 10−6 RIU). Imaging and protein microarray experiments demonstrate the ability of high-throughput detection. The aptamer experiments demonstrate that the SPR sensor based on differential interferometric imaging has a great capability to be implemented for high-throughput aptamer kinetic evaluation. These results suggest that this biosensor has the potential to be utilized in proteomics and drug discovery after further improvement.
Coolbaugh, M J; Shakalli Tang, M J; Wood, D W
High throughput methods for recombinant protein production using E. coli typically involve the use of affinity tags for simple purification of the protein of interest. One drawback of these techniques is the occasional need for tag removal before study, which can be hard to predict. In this work, we demonstrate two high throughput purification methods for untagged protein targets based on simple and cost-effective self-cleaving intein tags. Two model proteins, E. coli beta-galactosidase (βGal) and superfolder green fluorescent protein (sfGFP), were purified using self-cleaving versions of the conventional chitin-binding domain (CBD) affinity tag and the nonchromatographic elastin-like-polypeptide (ELP) precipitation tag in a 96-well filter plate format. Initial tests with shake flask cultures confirmed that the intein purification scheme could be scaled down, with >90% pure product generated in a single step using both methods. The scheme was then validated in a high throughput expression platform using 24-well plate cultures followed by purification in 96-well plates. For both tags and with both target proteins, the purified product was consistently obtained in a single-step, with low well-to-well and plate-to-plate variability. This simple method thus allows the reproducible production of highly pure untagged recombinant proteins in a convenient microtiter plate format. Copyright © 2016 Elsevier Inc. All rights reserved.
Yennawar, Neela H; Fecko, Julia A; Showalter, Scott A; Bevilacqua, Philip C
Many labs have conventional calorimeters where denaturation and binding experiments are set up and run one at a time. While these systems are highly informative for biopolymer folding and ligand interaction, they require considerable manual intervention for cleaning and setup. As such, the throughput for such setups is typically limited to a few runs a day. With a large number of experimental parameters to explore, including different buffers, macromolecule concentrations, temperatures, ligands, mutants, controls, replicates, and instrument tests, the need for high-throughput automated calorimeters is on the rise. Lower sample volume requirements and reduced user intervention time compared to the manual instruments have improved turnover of calorimetry experiments in a high-throughput format where 25 or more runs can be conducted per day. The cost and effort to maintain high-throughput equipment typically demand that these instruments be housed in a multiuser core facility. We describe here the steps taken to successfully start and run an automated biological calorimetry facility at Pennsylvania State University. Scientists from various departments at Penn State, including Chemistry, Biochemistry and Molecular Biology, Bioengineering, Biology, Food Science, and Chemical Engineering, are benefiting from this core facility. Samples studied include proteins, nucleic acids, sugars, lipids, synthetic polymers, small molecules, natural products, and virus capsids. This facility has led to higher throughput of data, which has been leveraged into grant support, attracted new faculty hires and has led to some exciting publications. © 2016 Elsevier Inc. All rights reserved.
Festa, Fernanda; Steel, Jason; Bian, Xiaofang; Labaer, Joshua
The study of protein function usually requires the use of a cloned version of the gene for protein expression and functional assays. This strategy is particularly important when the information available regarding function is limited. The functional characterization of the thousands of newly identified proteins revealed by genomics requires faster methods than traditional single-gene experiments, creating the need for fast, flexible and reliable cloning systems. These collections of open reading frame (ORF) clones can be coupled with high-throughput proteomics platforms, such as protein microarrays and cell-based assays, to answer biological questions. In this tutorial we provide the background for DNA cloning, discuss the major high-throughput cloning systems (Gateway® Technology, Flexi® Vector Systems, and Creator™ DNA Cloning System) and compare them side-by-side. We also report an example of a high-throughput cloning study and its application in functional proteomics. This Tutorial is part of the International Proteomics Tutorial Programme (IPTP12). Details can be found at http://www.proteomicstutorials.org. PMID:23457047
Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal
High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.
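Reducing an image to a "digital trait" can be illustrated with the simplest possible example: counting foreground pixels above a fixed brightness threshold. Real frameworks such as Image Harvest use far more robust segmentation, so this is only a conceptual sketch with made-up pixel values.

```python
def plant_pixel_area(gray_image, threshold=60):
    """Toy digital trait: count 'plant' (bright) pixels in a grayscale image,
    given as a list of rows of 0-255 intensity values. Real pipelines use
    proper segmentation; this only illustrates image -> numeric trait."""
    return sum(1 for row in gray_image for px in row if px > threshold)

# 3x4 toy "image": brighter pixels stand in for plant material
img = [
    [10, 80, 90, 12],
    [11, 85, 95, 10],
    [ 9, 12, 70, 11],
]
area = plant_pixel_area(img)  # 5 pixels exceed the threshold
```

Traits like this, computed per plant per imaging date, are the numeric inputs that downstream genome-wide association mapping consumes.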
Prashar, Ankush; Yildiz, Jane; McNicol, James W.; Bryan, Glenn J.; Jones, Hamlyn G.
The rapid development of genomic technology has made high throughput genotyping widely accessible but the associated high throughput phenotyping is now the major limiting factor in genetic analysis of traits. This paper evaluates the use of thermal imaging for the high throughput field phenotyping of Solanum tuberosum for differences in stomatal behaviour. A large multi-replicated trial of a potato mapping population was used to investigate the consistency in genotypic rankings across different trials and across measurements made at different times of day and on different days. The results confirmed a high degree of consistency between the genotypic rankings based on relative canopy temperature on different occasions. Genotype discrimination was enhanced both through normalising data by expressing genotype temperatures as differences from image means and through the enhanced replication obtained by using overlapping images. A Monte Carlo simulation approach was used to confirm the magnitude of genotypic differences that it is possible to discriminate. The results showed a clear negative association between canopy temperature and final tuber yield for this population, when grown under ample moisture supply. We have therefore established infrared thermography as an easy, rapid and non-destructive screening method for evaluating large population trials for genetic analysis. We also envisage this approach as having great potential for evaluating plant response to stress under field conditions. PMID:23762433
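The normalisation step described above, expressing each genotype's canopy temperature as a deviation from its image mean before comparing genotypes, can be sketched as follows. The readings are toy values; the actual trial used large multi-replicated designs with overlapping images.

```python
def normalise_by_image(readings):
    """Express each genotype's canopy temperature as a deviation from its
    image mean, then average the deviations per genotype across images.
    `readings` maps image id -> {genotype: canopy temperature (deg C)}."""
    deviations = {}
    for image, temps in readings.items():
        mean_t = sum(temps.values()) / len(temps)
        for genotype, t in temps.items():
            deviations.setdefault(genotype, []).append(t - mean_t)
    return {g: sum(d) / len(d) for g, d in deviations.items()}

# Toy example: genotype "g1" runs consistently cooler than its image mean,
# even though the two images were taken under different ambient conditions
readings = {
    "img1": {"g1": 24.0, "g2": 26.0},
    "img2": {"g1": 21.0, "g2": 23.0},
}
rank = normalise_by_image(readings)  # {"g1": -1.0, "g2": 1.0}
```

Subtracting the image mean removes the ambient drift between images, which is why relative rankings stay consistent across times of day.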
Mazis, A.; Hiller, J.; Morgan, P.; Awada, T.; Stoerger, V.
High throughput plant phenotyping is increasingly being used to assess morphological and biophysical traits of economically important crops in agriculture. In this study, the potential application of this technique in natural resources management, through the characterization of woody plant regeneration, establishment, growth, and responses to water and nutrient manipulations, was assessed. Two woody species were selected for this study, Quercus prinoides and Quercus bicolor. Seeds were collected from trees growing at the edge of their natural distribution in Nebraska and Missouri, USA. Seeds were germinated in the greenhouse and transferred to the Nebraska Innovation Campus Lemnatec3D High Throughput facility at the University of Nebraska-Lincoln. Seedlings subjected to water and N manipulations were imaged two or three times a week using four cameras (Visible, Fluorescence, Infrared and Hyperspectral) throughout the growing season. Traditional leaf- to plant-level ecophysiological measurements were concurrently acquired to assess the relationship between the two techniques. These include gas exchange (LI 6400 and LI 6800, LICOR Inc., Lincoln NE), chlorophyll content, optical characteristics (Ocean Optics USB200), water and osmotic potentials, leaf area and weight, and carbon isotope ratio. In the presentation, we highlight results on the potential use of high throughput plant phenotyping techniques to assess the morphology and physiology of woody species, including responses to water availability and nutrient manipulation, and its broader application under field conditions and in natural resources management. We also explore the different capabilities imaging provides for modeling plant physiological and morphological growth and how it can complement current techniques.
Geum, Dae-Myeong; Park, Min-Su; Lim, Ju Young; Yang, Hyun-Duk; Song, Jin Dong; Kim, Chang Zoo; Yoon, Euijoon; Kim, Sanghyeon; Choi, Won Jun
Si-based integrated circuits have been intensively developed over the past several decades through ultimate device scaling. However, the Si technology has reached the physical limitations of the scaling. These limitations have fuelled the search for alternative active materials (for transistors) and the introduction of optical interconnects (called “Si photonics”). A series of attempts to circumvent the Si technology limits are based on the use of III-V compound semiconductors due to their superior benefits, such as high electron mobility and direct bandgap. To use their physical properties on a Si platform, the formation of high-quality III-V films on Si (III-V/Si) is the basic technology; however, implementing this technology using a high-throughput process is not easy. Here, we report new concepts for an ultra-high-throughput heterogeneous integration of high-quality III-V films on Si using wafer bonding and the epitaxial lift-off (ELO) technique. We describe the ultra-fast ELO and also the re-use of the III-V donor wafer after III-V/Si formation. These approaches provide an ultra-high-throughput fabrication of III-V/Si substrates with a high-quality film, which leads to a dramatic cost reduction. As proof-of-concept devices, this paper demonstrates GaAs-based high electron mobility transistors (HEMTs), solar cells, and hetero-junction phototransistors on Si substrates.
With the introduction of cost-effective, rapid and superior-quality next generation sequencing (NGS) techniques, gene expression analysis has become viable for labs conducting small projects as well as large-scale gene expression analysis experiments. However, the available protocols for construction of RNA-Sequencing (RNA-Seq) libraries are expensive and/or difficult to scale for high-throughput applications. Also, most protocols require isolated total RNA as a starting point. We provide a cost-effective RNA-Seq library synthesis protocol that is fast, starts with tissue, and is high-throughput from tissue to synthesized library. We have also designed and report a set of 96 unique barcodes for library adapters that are amenable to high-throughput sequencing by a large combination of multiplexing strategies. Our developed protocol has more power to detect differentially expressed genes when compared to the standard Illumina protocol, probably owing to less technical variation amongst replicates. We also address the problem of gene-length biases affecting differential gene expression calls and demonstrate that such biases can be efficiently minimized during mRNA isolation for library preparation.
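The abstract does not state how its 96 barcodes were designed, but a common criterion for sequencing barcodes is a minimum pairwise Hamming distance, so that single base-call errors cannot convert one barcode into another. The sketch below greedily selects barcodes under that assumed criterion; the published 96-barcode set may well use different rules (balanced base composition, no homopolymers, etc.).

```python
from itertools import product

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def greedy_barcodes(length: int, min_dist: int, n_wanted: int):
    """Greedily pick DNA barcodes of the given length whose pairwise Hamming
    distance is at least min_dist (an assumed design criterion, not the
    published one). Scans candidates in lexicographic order."""
    chosen = []
    for candidate in ("".join(p) for p in product("ACGT", repeat=length)):
        if all(hamming(candidate, c) >= min_dist for c in chosen):
            chosen.append(candidate)
            if len(chosen) == n_wanted:
                break
    return chosen

# 8-nt codes at distance >= 3 tolerate any single sequencing error
codes = greedy_barcodes(length=8, min_dist=3, n_wanted=96)
```

A sphere-covering argument guarantees the greedy scan can supply well over 96 such 8-mers, since any maximal distance-3 code of length 8 over a 4-letter alphabet has at least 4^8/277 ≈ 237 words.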
Sykes, Robert; Yung, Matthew; Novaes, Evandro; Kirst, Matias; Peter, Gary; Davis, Mark
We describe a high-throughput method for estimating cell-wall chemistry traits using analytical pyrolysis. The instrument used to perform the high-throughput cell-wall chemistry analysis consists of a commercially available pyrolysis unit and autosampler coupled to a custom-built molecular beam mass spectrometer. The system is capable of analyzing approximately 42 biomass samples per hour. Lignin content and syringyl to guaiacol (S/G) ratios can be estimated directly from the spectra and differences in cell wall chemistry in large groups of samples can easily be identified using multivariate statistical data analysis methods. The utility of the system is demonstrated on a set of 800 greenhouse-grown poplar trees grown under two contrasting nitrogen treatments. High-throughput analytical pyrolysis was able to determine that the lignin content varied between 13 and 28% and the S/G ratio ranged from 0.5 to 1.5. There was more cell-wall chemistry variation in the plants grown under high nitrogen conditions than trees grown under nitrogen-deficiency conditions. Analytical pyrolysis allows the user to rapidly screen large numbers of samples at low cost, using very little sample material while producing reliable and reproducible results.
The identification and engineering of proteins having refined or novel characteristics is an important area of research in many scientific fields. Protein modelling has enabled the rational design of unique proteins, but high-throughput screening of large libraries is still required to identify proteins with potentially valuable properties. Here we report on the development and evaluation of a novel fluorescence-activated cell sorting based screening platform. Single bacterial cells, expressing a protein library to be screened, are electronically sorted and deposited onto plates containing solid nutrient growth media in a dense matrix format of between 44 and 195 colonies/cm2. We show that this matrix format is readily applicable to machine interrogation (<30 seconds per plate) and subsequent bioinformatic analysis (~60 seconds per plate), thus enabling the high-throughput screening of the protein library. We evaluate this platform and show that bacteria containing a bioluminescent protein can be spectrally analysed using an optical imager, and a rare clone (0.5% of the population) can successfully be identified, picked and further characterised. To further enhance this screening platform, we have developed a prototype electronic sort stream multiplexer that, when integrated into a commercial flow cytometric sorter, increases the rate of colony deposition by 89.2% to 24 colonies per second. We believe that the screening platform described here is potentially the foundation of a new generation of high-throughput screening technologies for proteins.
Baumann, Pascal; Hahn, Tobias; Hubbuch, Jürgen
Upstream processes are rather complex to design and the productivity of cells under suitable cultivation conditions is hard to predict. The method of choice for examining the design space is to execute high-throughput cultivation screenings in micro-scale format. Various predictive in silico models have been developed for many downstream processes, leading to a reduction of time and material costs. This paper presents a combined optimization approach based on high-throughput micro-scale cultivation experiments and chromatography modeling. The overall optimized system must not necessarily be the one with the highest product titers, but the one resulting in an overall superior process performance in up- and downstream. The methodology is presented in a case study for the Cherry-tagged enzyme Glutathione-S-Transferase from Escherichia coli SE1. The Cherry-Tag™ (Delphi Genetics, Belgium), which can be fused to any target protein, allows for direct product analytics by simple VIS absorption measurements. High-throughput cultivations were carried out in a 48-well format in a BioLector micro-scale cultivation system (m2p-Labs, Germany). The downstream process optimization for a set of randomly picked upstream conditions producing high yields was performed in silico using a chromatography modeling software developed in-house (ChromX). The suggested in silico-optimized operational modes for product capturing were validated subsequently. The overall best system was chosen based on a combination of excellent up- and downstream performance. © 2015 Wiley Periodicals, Inc.
Abbott, Sarah K; Jenner, Andrew M; Mitchell, Todd W; Brown, Simon H J; Halliday, Glenda M; Garner, Brett
We have developed a protocol suitable for high-throughput lipidomic analysis of human brain samples. The traditional Folch extraction (using chloroform and glass-glass homogenization) was compared to a high-throughput method combining methyl-tert-butyl ether (MTBE) extraction with mechanical homogenization utilizing ceramic beads. This high-throughput method significantly reduced sample handling time and increased efficiency compared to glass-glass homogenizing. Furthermore, replacing chloroform with MTBE is safer (less carcinogenic/toxic), with lipids dissolving in the upper phase, allowing for easier pipetting and the potential for automation (i.e., robotics). Both methods were applied to the analysis of human occipital cortex. Lipid species (including ceramides, sphingomyelins, choline glycerophospholipids, ethanolamine glycerophospholipids and phosphatidylserines) were analyzed via electrospray ionization mass spectrometry and sterol species were analyzed using gas chromatography mass spectrometry. No differences in lipid species composition were evident when the lipid extraction protocols were compared, indicating that MTBE extraction with mechanical bead homogenization provides an improved method for the lipidomic profiling of human brain tissue.
Sastry, Anand; Monk, Jonathan; Tegel, Hanna; Uhlen, Mathias; Palsson, Bernhard O; Rockberg, Johan; Brunk, Elizabeth
The Human Protein Atlas (HPA) enables the simultaneous characterization of thousands of proteins across various tissues to pinpoint their spatial location in the human body. This has been achieved through transcriptomics and high-throughput immunohistochemistry-based approaches, where over 40 000 unique human protein fragments have been expressed in E. coli. These datasets enable quantitative tracking of entire cellular proteomes and present new avenues for understanding molecular-level properties influencing expression and solubility. Combining computational biology and machine learning identifies protein properties that hinder the HPA high-throughput antibody production pipeline. We predict protein expression and solubility with accuracies of 70% and 80%, respectively, based on a subset of key properties (aromaticity, hydropathy and isoelectric point). We guide the selection of protein fragments based on these characteristics to optimize high-throughput experimentation. We present the machine learning workflow as a series of IPython notebooks hosted on GitHub (https://github.com/SBRG/Protein_ML). The workflow can be used as a template for analysis of further expression and solubility datasets. email@example.com or firstname.lastname@example.org. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: email@example.com
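The three key properties named above (aromaticity, hydropathy and isoelectric point) are standard sequence-derived features. A minimal plain-Python sketch of how they can be computed is shown below; the Kyte-Doolittle scale and the simplified pKa set are textbook values, not taken from the HPA workflow (Biopython's `Bio.SeqUtils.ProtParam` offers equivalent routines).

```python
# Kyte-Doolittle hydropathy scale (textbook values).
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

# Simplified pKa set for the charge calculation (illustrative; published scales vary).
PKA_POS = {"Nterm": 9.0, "K": 10.5, "R": 12.5, "H": 6.0}
PKA_NEG = {"Cterm": 2.0, "D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1}

def aromaticity(seq):
    """Fraction of aromatic residues (Phe, Trp, Tyr)."""
    return sum(seq.count(aa) for aa in "FWY") / len(seq)

def gravy(seq):
    """Grand average of hydropathy: mean Kyte-Doolittle score."""
    return sum(KD[aa] for aa in seq) / len(seq)

def net_charge(seq, ph):
    """Henderson-Hasselbalch net charge of the peptide at a given pH."""
    charge = 1.0 / (1.0 + 10 ** (ph - PKA_POS["Nterm"]))
    charge -= 1.0 / (1.0 + 10 ** (PKA_NEG["Cterm"] - ph))
    for aa, pka in PKA_POS.items():
        if aa != "Nterm":
            charge += seq.count(aa) / (1.0 + 10 ** (ph - pka))
    for aa, pka in PKA_NEG.items():
        if aa != "Cterm":
            charge -= seq.count(aa) / (1.0 + 10 ** (pka - ph))
    return charge

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    """Bisect for the pH at which the net charge crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

With these definitions a lysine-rich fragment bisects to a high pI while an aspartate-rich one falls well below neutrality, the kind of separation a classifier over such features can exploit.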
Liu, Zhen; Xu, Jian-hong
High throughput sequencing technology has dramatically improved the efficiency of DNA sequencing, and decreased the costs to a great extent. Meanwhile, this technology usually has advantages of better specificity, higher sensitivity and accuracy. Therefore, it has been applied to the research on genetic variations, transcriptomics and epigenomics. Recently, this technology has been widely employed in the studies of transposable elements and has achieved fruitful results. In this review, we summarize the application of high throughput sequencing technology in the fields of transposable elements, including the estimation of transposon content, preference of target sites and distribution, insertion polymorphism and population frequency, identification of rare copies, transposon horizontal transfers as well as transposon tagging. We also briefly introduce the major common sequencing strategies and algorithms, their advantages and disadvantages, and the corresponding solutions. Finally, we envision the developing trends of high throughput sequencing technology, especially the third generation sequencing technology, and its application in transposon studies in the future, hopefully providing a comprehensive understanding and reference for related scientific researchers.
Full Text Available Proteomics for biomarker validation needs high-throughput instrumentation to analyze huge sets of clinical samples quantitatively and reproducibly, in minimal time and without manual experimental error. Sample preparation, a vital step in proteomics, plays a major role in the identification and quantification of proteins from biological samples. Tryptic digestion, a major checkpoint in sample preparation for mass spectrometry-based proteomics, needs to be more accurate with rapid processing time. The present study focuses on establishing a high-throughput automated online system for proteolytic digestion and desalting of proteins from biological samples in a quantitative, qualitative and reproducible manner. The study compares online protein digestion and desalting of BSA with the conventional off-line (in-solution) method and was validated on a real sample for reproducibility. Proteins were identified using the SEQUEST database search engine and the data were quantified using IDEALQ software. The results show that the online system, capable of handling high-throughput samples in 96-well format, carries out protein digestion and peptide desalting efficiently in a reproducible and quantitative manner. Label-free quantification showed a clear increase of peptide quantities with increasing concentration, with greater linearity than the off-line method. Hence we suggest that inclusion of this online system in the proteomic pipeline will be effective in comparative proteomics, where quantification is crucial.
Full Text Available Abstract Background In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter
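The maximum-likelihood idea described above can be illustrated for the hyperbolic arcsine: treat the transformed data as Gaussian, include the Jacobian of the transformation in the likelihood, and pick the cofactor that maximizes it. A minimal sketch under those assumptions (the grid search and criterion are illustrative, not flowCore's actual implementation):

```python
import math

def arcsinh_loglik(data, b):
    """Gaussian log-likelihood of y = asinh(x / b), including the Jacobian
    term dy/dx = 1 / sqrt(x^2 + b^2) from the change of variables."""
    y = [math.asinh(x / b) for x in data]
    n = len(y)
    mu = sum(y) / n
    var = sum((v - mu) ** 2 for v in y) / n
    # Profiled Gaussian fit: maximizing over (mu, sigma) gives -n/2*(log(2*pi*var)+1).
    ll = -0.5 * n * (math.log(2.0 * math.pi * var) + 1.0)
    ll += sum(-0.5 * math.log(x * x + b * b) for x in data)  # log|dy/dx|
    return ll

def best_cofactor(data, grid):
    """Pick the arcsinh cofactor b maximizing the transformed-data likelihood."""
    return max(grid, key=lambda b: arcsinh_loglik(data, b))
```

For example, `best_cofactor(events, [0.5, 5, 50, 150])` selects the cofactor under which the transformed events look most Gaussian, which is the spirit of the parameter optimization the authors propose.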
Pinzon, NM; Aukema, KG; Gralnick, JA; Wackett, LP
A method for use in high-throughput screening of bacteria for the production of long-chain hydrocarbons and ketones by monitoring fluorescent light emission in the presence of Nile red is described. Nile red has previously been used to screen for polyhydroxybutyrate (PHB) and fatty acid esters, but this is the first report of screening for recombinant bacteria making hydrocarbons or ketones. The microtiter plate assay was evaluated using wild-type and recombinant strains of Shewanella oneidensis and Escherichia coli expressing the enzyme OleA, previously shown to initiate hydrocarbon biosynthesis. The strains expressing exogenous Stenotrophomonas maltophilia oleA, with increased levels of ketone production as determined by gas chromatography-mass spectrometry, were distinguished with Nile red fluorescence. Confocal microscopy images of S. oneidensis oleA-expressing strains stained with Nile red were consistent with a membrane localization of the ketones. This differed from Nile red staining of bacterial PHB or algal lipid droplets that showed intracellular inclusion bodies. These results demonstrated the applicability of Nile red in a high-throughput technique for the detection of bacterial hydrocarbons and ketones. IMPORTANCE In recent years, there has been renewed interest in advanced biofuel sources such as bacterial hydrocarbon production. Previous studies used solvent extraction of bacterial cultures followed by gas chromatography-mass spectrometry (GC-MS) to detect and quantify ketones and hydrocarbons (Beller HR, Goh EB, Keasling JD, Appl. Environ. Microbiol. 76: 1212-1223, 2010; Sukovich DJ, Seffernick JL, Richman JE, Gralnick JA, Wackett LP, Appl. Environ. Microbiol. 76: 3850-3862, 2010). While these analyses are powerful and accurate, their labor-intensive nature makes them intractable to high-throughput screening; therefore, methods for rapid identification of bacterial strains that are overproducing hydrocarbons are needed. The use of high-throughput
Klukas, Christian; Chen, Dijun; Pape, Jean-Michel
High-throughput phenotyping is emerging as an important technology to dissect phenotypic components in plants. Efficient image processing and feature extraction are prerequisites to quantify plant growth and performance based on phenotypic traits. Issues include data management, image analysis, and result visualization of large-scale phenotypic data sets. Here, we present Integrated Analysis Platform (IAP), an open-source framework for high-throughput plant phenotyping. IAP provides user-friendly interfaces, and its core functions are highly adaptable. Our system supports image data transfer from different acquisition environments and large-scale image analysis for different plant species based on real-time imaging data obtained from different spectra. Due to the huge amount of data to manage, we utilized a common data structure for efficient storage and organization of both input and result data. We implemented a block-based method for automated image processing to extract a representative list of plant phenotypic traits. We also provide tools for built-in data plotting and result export. For validation of IAP, we performed an example experiment that contains 33 maize (Zea mays 'Fernandez') plants, which were grown for 9 weeks in an automated greenhouse with nondestructive imaging. Subsequently, the image data were subjected to automated analysis with the maize pipeline implemented in our system. We found that the computed digital volume and number of leaves correlate with our manually measured data with high accuracy (up to 0.98 and 0.95, respectively). In summary, IAP provides a comprehensive set of functionalities for import/export, management, and automated analysis of high-throughput plant phenotyping data, and its analysis results are highly reliable. © 2014 American Society of Plant Biologists. All Rights Reserved.
Herington, Jennifer L; Swale, Daniel R; Brown, Naoko; Shelton, Elaine L; Choi, Hyehun; Williams, Charles H; Hong, Charles C; Paria, Bibhash C; Denton, Jerod S; Reese, Jeff
The uterine myometrium (UT-myo) is a therapeutic target for preterm labor, labor induction, and postpartum hemorrhage. Stimulation of intracellular Ca2+-release in UT-myo cells by oxytocin is a final pathway controlling myometrial contractions. The goal of this study was to develop a dual-addition assay for high-throughput screening of small-molecule compounds, which could regulate Ca2+-mobilization in UT-myo cells, and hence, myometrial contractions. Primary murine UT-myo cells in 384-well plates were loaded with a Ca2+-sensitive fluorescent probe, and then screened for inducers of Ca2+-mobilization and inhibitors of oxytocin-induced Ca2+-mobilization. The assay exhibited robust screening statistics (Z′ = 0.73), DMSO-tolerance, and was validated for high-throughput screening against 2,727 small molecules from the Spectrum, NIH Clinical I and II collections of well-annotated compounds. The screen revealed a hit-rate of 1.80% for agonist and 1.39% for antagonist compounds. Concentration-dependent responses of hit-compounds demonstrated an EC50 less than 10 μM for 21 hit-antagonist compounds, compared to only 7 hit-agonist compounds. Subsequent studies focused on hit-antagonist compounds. Based on the percent inhibition and functional annotation analyses, we selected 4 confirmed hit-antagonist compounds (benzbromarone, dipyridamole, fenoterol hydrobromide and nisoldipine) for further analysis. Using an ex vivo isometric contractility assay, each compound significantly inhibited uterine contractility, at different potencies (IC50). Overall, these results demonstrate for the first time that high-throughput small-molecule screening of myometrial Ca2+-mobilization is an ideal primary approach for discovering modulators of uterine contractility.
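The screening statistic quoted (Z′ = 0.73) is the standard Zhang-Chung-Oldenburg assay-quality measure computed from positive and negative control wells. A minimal sketch, with illustrative control values rather than data from the study:

```python
from statistics import mean, stdev

def z_prime(pos, neg):
    """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values above ~0.5 are conventionally taken to indicate an excellent
    separation between the positive- and negative-control distributions."""
    return 1.0 - 3.0 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

# Illustrative control wells (arbitrary fluorescence units, not study data):
oxytocin_wells = [100, 98, 102, 101]   # positive controls
vehicle_wells = [10, 12, 9, 11]        # negative controls
quality = z_prime(oxytocin_wells, vehicle_wells)
```

A tight pair of control distributions as in the example yields Z′ near 0.9; noisier or closer controls pull the value toward zero.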
Full Text Available Nuclear entry and exit of the NF-κB family of dimeric transcription factors plays an essential role in regulating cellular responses to inflammatory stress. The dynamics of this nuclear translocation can vary significantly within a cell population and may change dramatically, e.g. upon drug exposure. Furthermore, there is significant heterogeneity in individual cell response upon stress signaling. In order to systematically determine factors that define NF-κB translocation dynamics, high-throughput screens that enable the analysis of dynamic NF-κB responses in individual cells in real time are essential. Thus far, only downstream NF-κB signaling responses of whole cell populations at the transcriptional level have been measured in high-throughput mode. In this study, we developed a fully automated image analysis method to determine the time-course of NF-κB translocation in individual cells, suitable for high-throughput screenings in the context of compound screening and functional genomics. Two novel segmentation methods were used for defining the individual nuclear and cytoplasmic regions: watershed masked clustering (WMC) and best-fit ellipse of Voronoi cell (BEVC). The dynamic NF-κB oscillatory response at the single cell and population level was coupled to automated extraction of 26 analogue translocation parameters including number of peaks, time to reach each peak, and amplitude of each peak. Our automated image analysis method was validated through a series of statistical tests demonstrating that our algorithm quantifies NF-κB translocation dynamics efficiently and accurately. Both pharmacological inhibition of NF-κB and short interfering RNAs targeting the inhibitor of NF-κB, IκBα, demonstrated the ability of our method to identify compounds and genetic players that interfere with the nuclear transition of NF-κB.
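Several of the translocation parameters mentioned (number of peaks, time to each peak, peak amplitude) can be extracted from a per-cell nuclear/cytoplasmic intensity ratio trace with a simple local-maximum rule. The sketch below assumes such a trace is already available; it is not the WMC/BEVC segmentation described in the abstract:

```python
def translocation_peaks(ratio, times, min_height=1.0):
    """Local maxima of a per-cell nuclear/cytoplasmic intensity ratio trace.
    Returns (number of peaks, time to each peak, amplitude of each peak),
    three of the translocation parameters described in the text."""
    idx = [i for i in range(1, len(ratio) - 1)
           if ratio[i] > ratio[i - 1] and ratio[i] >= ratio[i + 1]
           and ratio[i] >= min_height]
    return len(idx), [times[i] for i in idx], [ratio[i] for i in idx]
```

Applied to an oscillatory trace such as `[0.5, 1.5, 0.8, 1.2, 0.6]` sampled at 10-minute intervals, this yields two peaks with their times and amplitudes, the per-cell quantities that are then aggregated across the population.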
Shannon M Clarke
Full Text Available Accurate pedigree information is critical to animal breeding systems to ensure the highest rate of genetic gain and management of inbreeding. The abundance of available genomic data, together with the development of high-throughput genotyping platforms, means that single nucleotide polymorphisms (SNPs) are now the DNA marker of choice for genomic selection studies. Furthermore, the superior qualities of SNPs compared to microsatellite markers allow for standardization between laboratories; a property that is crucial for developing an international set of markers for traceability studies. The objective of this study was to develop a high-throughput SNP assay for use in the New Zealand sheep industry that gives accurate pedigree assignment and will allow a reduction in breeder input over lambing. This required two phases of development: firstly, a method of extracting quality DNA from ear-punch tissue in a high-throughput, cost-efficient manner, and secondly, a SNP assay with the ability to assign paternity to progeny resulting from mob mating. A likelihood-based approach to infer paternity was used, where sires with the highest LOD score (log of the ratio of the likelihood given parentage to the likelihood given non-parentage) are assigned. An 84-SNP "parentage panel" was developed that assigned, on average, 99% of progeny to a sire in a problem where there were 3,000 progeny from 120 mob-mated sires that included numerous half-sib sires. In only 6% of those cases was there another sire with at least a 0.02 probability of paternity. Furthermore, dam information (either recorded, or obtained by genotyping possible dams) was absent, highlighting the SNP test's suitability for paternity testing. Utilization of this parentage SNP assay will allow implementation of progeny testing into large commercial farms, where the improved accuracy of sire assignment and genetic evaluations will increase genetic gain in the sheep industry.
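The LOD score described above compares the likelihood of the offspring genotypes given a candidate sire against that given a random sire drawn from the population. A minimal sketch for independent biallelic SNPs under Hardy-Weinberg equilibrium with unknown dams; the genotype coding and allele frequencies are illustrative, and real parentage software handles genotyping error rates as well:

```python
import math

def lod_score(sire, offspring, freqs):
    """LOD = log10 of P(offspring | candidate sire) / P(offspring | random sire),
    summed over independent biallelic SNPs. Genotypes are coded as counts of
    allele '1' (0, 1 or 2); freqs gives the population frequency of allele '1'
    per SNP; dam genotypes are assumed unknown."""
    lod = 0.0
    for s, o, p in zip(sire, offspring, freqs):
        t1 = s / 2.0                       # P(sire transmits allele '1')
        # Offspring genotype = sire allele + random maternal allele.
        given_sire = {0: (1.0 - t1) * (1.0 - p),
                      1: (1.0 - t1) * p + t1 * (1.0 - p),
                      2: t1 * p}[o]
        if given_sire == 0.0:
            return float("-inf")           # Mendelian exclusion
        random_sire = {0: (1.0 - p) ** 2,  # Hardy-Weinberg genotype frequencies
                       1: 2.0 * p * (1.0 - p),
                       2: p ** 2}[o]
        lod += math.log10(given_sire / random_sire)
    return lod
```

Each matching homozygous SNP at frequency 0.5 contributes log10(2) ≈ 0.3 to the score, so a panel of ~84 informative SNPs accumulates enough evidence to separate the true sire from half-sib competitors.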
Varshney, Gaurav K; Pei, Wuhong; LaFave, Matthew C; Idol, Jennifer; Xu, Lisha; Gallardo, Viviana; Carrington, Blake; Bishop, Kevin; Jones, MaryPat; Li, Mingyu; Harper, Ursula; Huang, Sunny C; Prakash, Anupam; Chen, Wenbiao; Sood, Raman; Ledin, Johan; Burgess, Shawn M
The use of CRISPR/Cas9 as a genome-editing tool in various model organisms has radically changed targeted mutagenesis. Here, we present a high-throughput targeted mutagenesis pipeline using CRISPR/Cas9 technology in zebrafish that will make possible both saturation mutagenesis of the genome and large-scale phenotyping efforts. We describe a cloning-free single-guide RNA (sgRNA) synthesis, coupled with streamlined mutant identification methods utilizing fluorescent PCR and multiplexed, high-throughput sequencing. We report germline transmission data from 162 loci targeting 83 genes in the zebrafish genome, in which we obtained a 99% success rate for generating mutations and an average germline transmission rate of 28%. We verified 678 unique alleles from 58 genes by high-throughput sequencing. We demonstrate that our method can be used for efficient multiplexed gene targeting. We also demonstrate that phenotyping can be done in the F1 generation by inbreeding two injected founder fish, significantly reducing animal husbandry and time. This study compares germline transmission data from CRISPR/Cas9 with those of TALENs and ZFNs and shows that CRISPR/Cas9 is sixfold more efficient than the other techniques. We show that the majority of published "rules" for efficient sgRNA design do not effectively predict germline transmission rates in zebrafish, with the exception of a GG or GA dinucleotide genomic match at the 5' end of the sgRNA. Finally, we show that predicted off-target mutagenesis is of low concern for in vivo genetic studies. © 2015 Varshney et al.; Published by Cold Spring Harbor Laboratory Press.
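The one design rule the study found predictive, a GG or GA genomic match at the sgRNA 5' end, reduces to a simple sequence check. A sketch with illustrative target sequences:

```python
def has_5prime_gg_ga(genomic_target):
    """True if the genomic sequence matching the sgRNA 5' end begins with
    GG or GA, the dinucleotide rule found predictive in zebrafish."""
    return genomic_target[:2].upper() in ("GG", "GA")

def filter_targets(targets):
    """Keep only candidate target sites satisfying the 5' dinucleotide rule."""
    return [t for t in targets if has_5prime_gg_ga(t)]
```

In a saturation-mutagenesis pipeline such a filter would be one of the first passes over candidate sites, before off-target scoring.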
Moreira Teixeira, L S; Leijten, J C H; Sobral, J; Jin, R; van Apeldoorn, A A; Feijen, J; van Blitterswijk, C; Dijkstra, P J; Karperien, M
Full Text Available Cell-based cartilage repair strategies such as matrix-induced autologous chondrocyte implantation (MACI) could be improved by enhancing cell performance. We hypothesised that micro-aggregates of chondrocytes generated in high-throughput prior to implantation in a defect could stimulate cartilaginous matrix deposition and remodelling. To address this issue, we designed a micro-mould to enable controlled high-throughput formation of micro-aggregates. Morphology, stability, gene expression profiles and chondrogenic potential of micro-aggregates of human and bovine chondrocytes were evaluated and compared to single-cells cultured in micro-wells and in 3D after encapsulation in Dextran-Tyramine (Dex-TA) hydrogels in vitro and in vivo. We successfully formed micro-aggregates of human and bovine chondrocytes with highly controlled size, stability and viability within 24 hours. Micro-aggregates of 100 cells presented a superior balance in Collagen type I and Collagen type II gene expression over single cells and micro-aggregates of 50 and 200 cells. Matrix metalloproteinases 1, 9 and 13 mRNA levels were decreased in micro-aggregates compared to single-cells. Histological and biochemical analysis demonstrated enhanced matrix deposition in constructs seeded with micro-aggregates cultured in vitro and in vivo, compared to single-cell seeded constructs. Whole genome microarray analysis and single gene expression profiles using human chondrocytes confirmed increased expression of cartilage-related genes when chondrocytes were cultured in micro-aggregates. In conclusion, we succeeded in controlled high-throughput formation of micro-aggregates of chondrocytes. Compared to single cell-seeded constructs, seeding of constructs with micro-aggregates greatly improved neo-cartilage formation. Therefore, micro-aggregation prior to chondrocyte implantation in current MACI procedures, may effectively accelerate hyaline cartilage formation.
Chang, Lingqian; Bertani, Paul; Gallego-Perez, Daniel; Yang, Zhaogang; Chen, Feng; Chiang, Chiling; Malkoc, Veysi; Kuang, Tairong; Gao, Keliang; Lee, L. James; Lu, Wu
Of great interest to modern medicine and biomedical research is the ability to inject individual target cells with the desired genes or drug molecules. Some advances in cell electroporation allow for high throughput, high cell viability, or excellent dosage control, yet no platform is available for the combination of all three. In an effort to solve this problem, here we show a "3D nano-channel electroporation (NEP) chip" on a silicon platform designed to meet these three criteria. This NEP chip can simultaneously deliver the desired molecules into 40 000 cells per cm2 on the top surface of the device. Each 650 nm pore aligns to a cell and can be used to deliver extremely small biological elements to very large plasmids (>10 kbp). When compared to conventional bulk electroporation (BEP), the NEP chip shows a 20 fold improvement in dosage control and uniformity, while still maintaining high cell viability (>90%) even in cells such as cardiac cells which are characteristically difficult to transfect. This high-throughput 3D NEP system provides an innovative and medically valuable platform with uniform and reliable cellular transfection, allowing for a steady supply of healthy, engineered cells.
Canela, Andrés; Vera, Elsa; Klatt, Peter; Blasco, María A
A major limitation of studies of the relevance of telomere length to cancer and age-related diseases in human populations and to the development of telomere-based therapies has been the lack of suitable high-throughput (HT) assays to measure telomere length. We have developed an automated HT quantitative telomere FISH platform, HT quantitative FISH (Q-FISH), which allows the quantification of telomere length as well as percentage of short telomeres in large human sample sets. We show here that this technique provides the accuracy and sensitivity to uncover associations between telomere length and human disease.
Full Text Available The tendency for mycobacteria to aggregate poses a challenge for their use in microplate based assays. Good dispersions have been difficult to achieve in high-throughput screening (HTS) assays used in the search for novel antibacterial drugs to treat tuberculosis and other related diseases. Here we describe a method using filtration to overcome the problem of variability resulting from aggregation of mycobacteria. This method consistently yielded higher reproducibility and lower variability than conventional methods, such as settling under gravity and vortexing.
Keywords: sulfur mustard, cutaneous injury, siRNA, high-throughput screening
Sandra L Spurgeon
Full Text Available We describe a high throughput gene expression platform based on microfluidic dynamic arrays. This system allows 2,304 simultaneous real time PCR gene expression measurements in a single chip, while requiring less pipetting than is required to set up a 96 well plate. We show that one can measure the expression of 45 different genes in 18 tissues with replicates in a single chip. The data have excellent concordance with conventional real time PCR and the microfluidic dynamic arrays show better reproducibility than commercial DNA microarrays.
Poulsen, Esben Guldahl; Nielsen, Sofie V.; Pietras, Elin J.
The ubiquitin-proteasome system is the major pathway for intracellular protein degradation in eukaryotic cells. Due to the large number of genes dedicated to the ubiquitin-proteasome system, mapping degradation pathways for short-lived proteins is a daunting task, in particular in mammalian cells that are not genetically tractable as, for instance, a yeast model system. Here, we describe a method relying on high-throughput cellular imaging of cells transfected with a targeted siRNA library to screen for components involved in degradation of a protein of interest. This method is a rapid and cost-effective tool.
Bugel, Sean M; Tanguay, Robert L; Planchart, Antonio
The evolutionary conservation of genomic, biochemical and developmental features between zebrafish and humans is gradually coming into focus with the end result that the zebrafish embryo model has emerged as a powerful tool for uncovering the effects of environmental exposures on a multitude of biological processes with direct relevance to human health. In this review, we highlight advances in automation, high-throughput (HT) screening, and analysis that leverage the power of the zebrafish embryo model for unparalleled advances in our understanding of how chemicals in our environment affect our health and wellbeing.
Raijada, Dhara; Cornett, Claus; Rantanen, Jukka
selected. Binary physical mixtures of drug and excipient were transferred to a 96-well plate, followed by addition of water to simulate an aqueous granulation environment. The plate was subjected to XRPD measurements, followed by drying and subsequent XRPD and HPLC measurements of the dried samples. Excipients … for chemical degradation. The proposed high-throughput platform can be used during early drug development to simulate typical processing-induced stress at small scale and to understand possible phase-transformation behaviour and the influence of excipients on this.
Theda, Christiane; Gibbons, Katy; Defor, Todd E; Donohue, Pamela K; Golden, W Christopher; Kline, Antonie D; Gulamali-Majid, Fizza; Panny, Susan R; Hubbard, Walter C; Jones, Richard O; Liu, Anita K; Moser, Ann B; Raymond, Gerald V
X-linked adrenoleukodystrophy (ALD) is characterized by adrenal insufficiency and neurologic involvement with onset at variable ages. Plasma very long chain fatty acids are elevated in ALD, even in asymptomatic patients. We demonstrated previously that liquid chromatography tandem mass spectrometry measuring C26:0 lysophosphatidylcholine reliably identifies affected males. We prospectively applied this method to 4689 newborn blood spot samples; no false positives were observed. We show that high-throughput neonatal screening for ALD is methodologically feasible. Copyright © 2013 Elsevier Inc. All rights reserved.
Kirkwood, Jobie; Wilson, Julie; O'keefe, Simon; Hargreaves, David
The crystallization of proteins is dependent on the careful control of numerous parameters, one of these being pH. The pH of crystallization is generally reported as that of the buffer; however, the true pH has been found to be as many as four pH units away. Measurement of pH with a meter is time-consuming and requires the reformatting of the crystallization solution. To overcome this, a high-throughput method for pH determination of buffered solutions has been developed with results comparab...
Boso, Gianluca; Tosi, Alberto, E-mail: firstname.lastname@example.org; Zappa, Franco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano (Italy); Mora, Alberto Dalla [Dipartimento di Fisica, Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano (Italy)
We present the design and characterization of a high-throughput gated photon counter able to count electrical pulses occurring within two well-defined and programmable detection windows. We extensively characterized and validated this instrument up to 100 Mcounts/s and with detection window width down to 70 ps. This instrument is suitable for many applications and proves to be a cost-effective and compact alternative to time-correlated single-photon counting equipment, thanks to its easy configurability, user-friendly interface, and fully adjustable settings via a Universal Serial Bus (USB) link to a remote computer.
Pedersen, Marlene Lemvig; Block, Ines; List, Markus
multiplexing readouts, but this has a natural limitation. High-content screening via image acquisition and analysis allows multiplexing of a few parameters, but entails substantial time consumption and complex logistics. We report on the integration of Reverse Phase Protein Array (RPPA)-based readouts...... into automated robotic high-throughput screens, which allows subsequent protein quantification. In this integrated solution, samples are directly forwarded to automated cell lysate preparation and preparation of dilution series, including reformatting to a protein spotter-compatible format after the high...
Kim, Eung-Sam; Ahn, Eun Hyun; Chung, Euiheon; Kim, Deok-Ho
Nanotechnology-based tools are beginning to emerge as promising platforms for quantitative high-throughput analysis of live cells and tissues. Despite unprecedented progress made over the last decade, a challenge still lies in integrating emerging nanotechnology-based tools into macroscopic biomedical apparatuses for practical purposes in biomedical sciences. In this review, we discuss the recent advances and limitations in the analysis and control of mechanical, biochemical, fluidic, and optical interactions in the interface areas of nanotechnology-based materials and living cells in both in vitro and in vivo settings. PMID:24258011
Zhang, S. H.; Zhang, R. F.
The elastic properties of crystalline materials are fundamental and important, as they relate to other mechanical properties, various thermodynamic quantities, and some critical physical properties. However, a complete set of experimentally determined elastic properties is available for only a small subset of known materials, and an automated scheme for deriving elastic properties that is adapted to high-throughput computation is in great demand. In this paper, we present the AELAS code, an automated program for calculating second-order elastic constants of both two-dimensional and three-dimensional single-crystal materials with any symmetry, designed mainly for high-throughput first-principles computation. Derivations of other general elastic properties, such as the Young's, bulk and shear moduli as well as the Poisson's ratio of polycrystalline materials, the Pugh ratio, the Cauchy pressure, elastic anisotropy and the elastic stability criterion, are also implemented in this code. The implementation of the code has been critically validated by extensive evaluations and tests on a broad class of two-dimensional and three-dimensional materials, proving its efficiency and capability for high-throughput screening of specific materials with targeted mechanical properties. Program Files doi: http://dx.doi.org/10.17632/f8fwg4j9tw.1 Licensing provisions: BSD 3-Clause Programming language: Fortran Nature of problem: To automate the calculation of second-order elastic constants and the derivation of other elastic properties for two-dimensional and three-dimensional materials with any symmetry via high-throughput first-principles computation. Solution method: The space-group number is first determined with the SPGLIB code, and the structure is then redefined as a unit cell in the IEEE format. Second, based on the determined space-group number, a set of distortion modes is automatically specified and the distorted structure files are generated
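The energy-strain fitting at the heart of such a code can be sketched as follows. This is a minimal illustration under simplified assumptions (a single distortion mode, a plain quadratic fit, and eV/Å³ units), not the AELAS implementation itself:

```python
import numpy as np

def fit_elastic_constant(strains, energies, volume):
    """Fit E(eps) ~ E0 + 0.5 * C * V * eps^2 and return C.

    strains:  dimensionless strain amplitudes of one distortion mode
    energies: total energies (eV) from first-principles runs
    volume:   equilibrium cell volume (A^3)
    Returns C in eV/A^3 (multiply by 160.2176 to convert to GPa).
    """
    # Quadratic fit: coeffs = [a, b, c] for a*eps^2 + b*eps + c,
    # so a = 0.5 * C * V and hence C = 2a / V.
    coeffs = np.polyfit(strains, energies, 2)
    return 2.0 * coeffs[0] / volume
```

For a real material this fit would be repeated for every symmetry-specific distortion mode, and the resulting linear system solved for the independent elastic constants Cij.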
Lundberg, Martin; Thorsen, Stine Buch; Assarsson, Erika
A high-throughput protein biomarker discovery tool has been developed based on multiplexed proximity ligation assays (PLA) in a homogeneous format, in the sense that no washing steps are required. The platform consists of four 24-plex panels profiling 74 putative biomarkers with sub-pM sensitivity, each consuming...... sequences are united by DNA ligation upon simultaneous target binding, forming a PCR amplicon. Multiplex PLA thereby converts multiple target analytes into real-time PCR amplicons that are individually quantified using microfluidic high-capacity qPCR in nanoliter volumes. The assay shows excellent...
Conery, Annie L; Larkins-Ford, Jonah; Ausubel, Frederick M; Kirienko, Natalia V
In recent history, the nematode Caenorhabditis elegans has provided a compelling platform for the discovery of novel antimicrobial drugs. In this protocol, we present an automated, high-throughput C. elegans pathogenesis assay, which can be used to screen for anti-infective compounds that prevent nematodes from dying due to Pseudomonas aeruginosa. New antibiotics identified from such screens would be promising candidates for treatment of human infections, and also can be used as probe compounds to identify novel targets in microbial pathogenesis or host immunity. Copyright © 2014 John Wiley & Sons, Inc.
Kim, Marlene T; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
Publicly available bioassay data often contain errors. Curating massive bioassay data, especially high-throughput screening (HTS) data, for Quantitative Structure-Activity Relationship (QSAR) modeling requires the assistance of automated data curation tools. Using automated data curation tools is beneficial to users, especially those without prior computer skills, because many platforms have been developed and optimized based on standardized requirements. As a result, users do not need to extensively configure the curation tool prior to the application procedure. In this chapter, a freely available automatic tool to curate and prepare HTS data for QSAR modeling purposes is described.
We propose statistical methods for comparing phenomics data generated by the Biolog Phenotype Microarray (PM) platform for high-throughput phenotyping. Instead of the routinely used visual inspection of data with no sound inferential basis, we develop two approaches. The first approach is based on quantifying the distance between mean or median curves from two treatments and then applying a permutation test; we also consider a permutation test applied to areas under mean curves. The second approach employs functional principal component analysis. Properties of the proposed methods are investigated on both simulated data and data sets from the PM platform.
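The first approach can be sketched as follows; the function names and the choice of an L2 distance between mean curves are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def curve_distance(a, b):
    """L2 distance between the mean curves of two replicate groups,
    each a 2-D array (replicates x time points) on a common grid."""
    return float(np.sqrt(np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2)))

def permutation_test(group1, group2, n_perm=2000, seed=0):
    """Permutation p-value for the distance between mean curves.

    Replicates are pooled and repeatedly relabeled; the p-value is the
    fraction of relabelings with a distance at least as large as observed.
    """
    rng = np.random.default_rng(seed)
    observed = curve_distance(group1, group2)
    pooled = np.vstack([group1, group2])
    n1 = group1.shape[0]
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(pooled.shape[0])
        if curve_distance(pooled[idx[:n1]], pooled[idx[n1:]]) >= observed:
            count += 1
    # +1 correction keeps the p-value strictly positive
    return (count + 1) / (n_perm + 1)
```

The same scheme applies to median curves or to areas under mean curves by swapping out `curve_distance`.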
Kang, Aram; Meadows, Corey W.; Canu, Nicolas
ATP requirements and isopentenyl diphosphate (IPP) toxicity pose immediate challenges for engineering bacterial strains to overproduce commodities utilizing IPP as an intermediate. To overcome these limitations, we developed an "IPP-bypass" isopentenol pathway using the promiscuous activity...... the endogenous non-mevalonate pathway, we developed a high-throughput screening platform that correlated promiscuous PMD activity toward MVAP with cellular growth. Successful identification of mutants that altered PMD activity demonstrated the sensitivity and specificity of the screening platform. Strains......
The use of high-throughput screening allowed for the optimization of reaction conditions for the palladium-catalyzed asymmetric decarboxylative alkylation reaction of enolate-stabilized enol carbonates. Changing to a non-polar reaction solvent and to an electron-deficient PHOX derivative as ligand from our standard reaction conditions improved the enantioselectivity for the alkylation of a ketal-protected,1,3-diketone-derived enol carbonate from 28% ee to 84% ee. Similar improvements in enantioselectivity were seen for a β-keto-ester derived- and an α-phenyl cyclohexanone-derived enol carbonate.
Adams, Jonathan D; Ebbesen, Christian L.; Barnkob, Rune
We report a temperature-controlled microfluidic acoustophoresis device capable of separating particles and transferring blood cells from undiluted whole human blood at a volume throughput greater than 1 L h⁻¹. The device is fabricated from glass substrates and polymer sheets in microscope......-slide format using low-cost, rapid-prototyping techniques. This high-throughput acoustophoresis chip (HTAC) utilizes a temperature-stabilized, standing ultrasonic wave, which imposes differential acoustic radiation forces that can separate particles according to size, density and compressibility. The device...
Zarkevich, N. A.; Johnson, D. D.; Pecharsky, V. K.
The high-throughput search paradigm adopted by the newly established caloric materials consortium, CaloriCool®, with the goal of substantially accelerating the discovery and design of novel caloric materials, is briefly discussed. We begin by describing material selection criteria based on known properties, which are then followed by fast heuristic estimates and ab initio calculations, all of which have been implemented in a set of automated computational tools and measurements. We also demonstrate how theoretical and computational methods serve as a guide for experimental efforts by considering a representative example from the field of magnetocaloric materials.
Korenková, Vlasta; Scott, J.; Novosadová, Vendula; Jindřichová, Marie; Langerová, Lucie; Švec, David; Šídová, Monika; Sjoback, R.
Vol. 16, No. 5 (2015) ISSN 1471-2199 R&D Projects: GA ČR(CZ) GAP304/12/1585; GA ČR(CZ) GA15-08239S; GA ČR GA13-02154S; GA MŠk(CZ) ED1.1.00/02.0109 Institutional support: RVO:86652036 Keywords: High-throughput qPCR * Gene expression * Exponential pre-amplification Subject RIV: EB - Genetics; Molecular Biology Impact factor: 2.500, year: 2015
Kim, Sung-Hou [Moraga, CA; Kim, Rosalind [Moraga, CA; Jancarik, Jamila [Walnut Creek, CA
An optimum solubility screen in which a panel of buffers and many additives are provided in order to obtain the most homogeneous and monodisperse protein condition for protein crystallization. The present methods are useful for proteins that aggregate and cannot be concentrated prior to setting up crystallization screens. A high-throughput approach using the hanging-drop vapor-diffusion method and a panel of twenty-four buffers is further provided. Using the present methods, 14 poorly behaving proteins were screened; 11 of these showed highly improved dynamic light scattering results, allowing the proteins to be concentrated, and 9 were crystallized.
List, Markus; Schmidt, Steffen; Christiansen, Helle
High-throughput screening (HTS) is an indispensable tool for drug (target) discovery that currently lacks user-friendly software tools for the robust identification of putative hits from HTS experiments and for the interpretation of these findings in the context of systems biology. We developed H...... novel hypotheses for follow-up experiments: (i) a genome-wide RNAi screen to uncover modulators of TNFα, (ii) a combined siRNA and miRNA mimics screen on vorinostat resistance and (iii) a small compound screen on KRAS synthetic lethality. HiTSeekR is publicly available at http...
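Hit calling in such screens is often based on robust z-scores; the sketch below shows one common variant (median/MAD scaling) and is a generic illustration, not HiTSeekR's actual method:

```python
import numpy as np

def robust_z_scores(plate_values):
    """Robust z-score per well: (x - median) / (1.4826 * MAD).

    Less sensitive to outlier wells than the plain z-score, which matters
    when strong hits would otherwise inflate the standard deviation.
    """
    x = np.asarray(plate_values, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return (x - med) / (1.4826 * mad)

def call_hits(plate_values, threshold=3.0):
    """Indices of wells whose |robust z| meets or exceeds the threshold."""
    z = robust_z_scores(plate_values)
    return [i for i, zi in enumerate(z) if abs(zi) >= threshold]
```

In practice the scores would be computed per plate to absorb plate-to-plate effects before hits are pooled.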
Vergauwen, Lucia; Nørgaard Schmidt, Stine; Stinckens, Evelyn
High throughput testing according to the Fish Embryo Acute Toxicity (FET) test (OECD Testing Guideline 236) is usually conducted in well plates. In the case of hydrophobic test substances, sorptive and evaporative losses often result in declining and poorly controlled exposure conditions. Therefore......, our objective was to improve exposure conditions in FET tests by evaluating a passive dosing format using silicone O-rings in standard 24-well polystyrene plates. We exposed zebrafish embryos to a series of phenanthrene concentrations until 120 h post fertilization (hpf), and obtained a linear...
U.S. Environmental Protection Agency — Under the ExpoCast program, United States Environmental Protection Agency (EPA) researchers have developed a high-throughput (HT) framework for estimating aggregate...
Ajay Athavale (Monsanto) presents "High Throughput Plasmid Sequencing with Illumina and CLC Bio" at the 7th Annual Sequencing, Finishing, Analysis in the Future (SFAF) Meeting held in June, 2012 in Santa Fe, NM.
Veltman, J.; Schoenmakers, E.F.P.M.; Eussen, B.H.; Janssen, I.M.; Merkx, G.F.M.; Cleef, B. van; Ravenswaaij-Arts, C.M.A. van; Brunner, H.G.; Smeets, D.F.C.M.; Geurts van Kessel, A.H.M.
Telomeric chromosome rearrangements may cause mental retardation, congenital anomalies, and miscarriages. Automated detection of subtle deletions or duplications involving telomeres is essential for high-throughput diagnosis, but impossible when conventional cytogenetic methods are used. Array-based
Scott Jordan on "Advances in high-throughput speed, low-latency communication for embedded instrumentation" at the 2012 Sequencing, Finishing, Analysis in the Future Meeting held June 5-7, 2012 in Santa Fe, New Mexico.
U.S. Environmental Protection Agency — httk: High-Throughput Toxicokinetics Functions and data tables for simulation and statistical analysis of chemical toxicokinetics ("TK") using data obtained from...
State of the Art High-Throughput Approaches to Genotoxicity: Flow Micronucleus, Ames II, GreenScreen and Comet (Presented by Dr. Marilyn J. Aardema, Chief Scientific Advisor, Toxicology, Dr. Leon Stankowski, et al.) (6/28/2012)
Ryvkin, Paul; Leung, Yuk Yee; Ungar, Lyle H.; Gregory, Brian D.; Wang, Li-San
Recent advances in high-throughput sequencing allow researchers to examine the transcriptome in more detail than ever before. Using a method known as high-throughput small RNA-sequencing, we can now profile the expression of small regulatory RNAs such as microRNAs and small interfering RNAs (siRNAs) with a great deal of sensitivity. However, there are many other types of small RNAs (
Burdick, Jared; Alonas, Eric; Huang, H-C; Rege, Kaushal; Wang, Joseph
A cost-effective, high-throughput method for generating gold nanowires and/or nanorods based on a multisegment template electrodeposition approach is described. Using this method, multiple nanowires/nanorods can be generated from a single pore of alumina template membranes by alternately depositing segments of desirable (e.g., gold) and non-desirable metals (e.g., silver), followed by dissolution of the template and the non-desirable metal. Critical cost analysis indicates substantial savings in material requirements, processing times, and processing costs compared to the commonly used single-segment method. In addition to solid gold nanowires/nanorods, high yields of porous gold nanowires/nanorods are obtained by depositing alternate segments of gold-silver alloy and silver from the same gold-silver plating solution followed by selective dissolution of the silver from both segments. It is anticipated that this high-throughput method for synthesizing solid and porous gold nanowires and nanorods will accelerate their use in sensing, electronic, and biomedical applications.
Sheppod, Timothy; Satterfield, Brent; Hukari, Kyle W.; West, Jason A. A.; Hux, Gary A.
The advancement of DNA cloning has significantly augmented the potential threat of a focused bioweapon assault, such as a terrorist attack. With current DNA cloning techniques, toxin genes from the most dangerous (but environmentally labile) bacterial or viral organisms can now be selected and inserted into a robust organism to produce an infinite number of deadly chimeric bioweapons. In order to neutralize such a threat, accurate detection of the expressed toxin genes, rather than classification based on the strain or genealogical descent of these organisms, is critical. The development of a high-throughput microarray approach will enable the detection of unknown chimeric bioweapons. We have developed a unique microfluidic approach to capture and concentrate these threat genes (mRNAs) up to a 30-fold concentration. These captured oligonucleotides can then be used to synthesize in situ oligonucleotide copies (cDNA probes) of the captured genes. An integrated microfluidic architecture will enable us to control flows of reagents, perform clean-up steps, and finally elute nanoliter volumes of synthesized oligonucleotide probes. The integrated approach has enabled a process by which chimeric or conventional bioweapons can rapidly be identified based on their toxic function, rather than being restricted to information that may not identify the critical nature of the threat.
Boyd, Windy A.; McBride, Sandra J.; Rice, Julie R.; Snyder, Daniel W.; Freedman, Jonathan H.
The National Research Council has outlined the need for non-mammalian toxicological models to test the potential health effects of a large number of chemicals while also reducing the use of traditional animal models. The nematode Caenorhabditis elegans is an attractive alternative model because of its well-characterized and evolutionarily conserved biology, low cost, and suitability for high-throughput screening. A high-throughput method is described for quantifying the reproductive capacity of C. elegans exposed to chemicals for 48 h from the last larval stage (L4) to adulthood using a COPAS Biosort. Initially, the effects of exposure conditions that could influence reproduction were defined. Concentrations of DMSO vehicle ≤ 1% did not affect reproduction. Previous studies indicated that C. elegans may be influenced by exposure to low-pH conditions. At pHs greater than 4.5, C. elegans reproduction was not affected; however, below this pH there was a significant decrease in the number of offspring. Cadmium chloride was chosen as a model toxicant to verify that automated measurements were comparable to those of traditional observational studies. EC50 values for cadmium from automated measurements (176-192 μM) were comparable to those previously reported for a 72-h exposure using manual counting (151 μM). The toxicity of seven test toxicants on C. elegans reproduction was highly correlated with rodent lethality, suggesting that this assay may be useful in predicting the potential toxicity of chemicals in other organisms.
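EC50 values such as those reported above are typically obtained by fitting a sigmoidal concentration-response curve. A minimal sketch, assuming a three-parameter Hill model (not the authors' analysis pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ec50, slope):
    """Three-parameter Hill curve: response falls from `top` toward 0."""
    return top / (1.0 + (conc / ec50) ** slope)

def fit_ec50(concentrations, responses):
    """Return the fitted EC50 from concentration-response data."""
    # Crude but serviceable starting guesses for the optimizer
    p0 = [max(responses), float(np.median(concentrations)), 1.0]
    params, _ = curve_fit(hill, concentrations, responses, p0=p0, maxfev=10000)
    return params[1]
```

Here "response" would be the Biosort offspring count per worm, normalized to vehicle controls; real data would also warrant confidence intervals on the fit.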
Yang, Kewei; Delaney, Joseph T; Schubert, Ulrich S; Fahr, Alfred
A new strategy for fast, convenient high-throughput screening of liposomal formulations was developed, based on automation of the so-called ethanol-injection method. The strategy was illustrated by the preparation and screening of a liposomal formulation library of a potent second-generation photosensitizer, temoporfin. Numerous liposomal formulations were efficiently prepared using a pipetting robot, followed by automated size characterization using a dynamic light scattering plate reader. The incorporation efficiency of temoporfin and the zeta potential were also determined in selected cases. To optimize the formulation, different parameters were investigated, including lipid type, lipid concentration in the injected ethanol, the ratio of ethanol to aqueous solution, the ratio of drug to lipid, and the addition of functional phospholipid. Step by step, small liposomes were prepared with high incorporation efficiency. Finally, an optimized formulation was obtained for each lipid under the following conditions: 36.4 mg·mL(-1) lipid, 13.1 mg·mL(-1) mPEG(2000)-DSPE, and a 1:4 ethanol:buffer ratio. These liposomes were unilamellar spheres with a diameter of approximately 50 nm and were stable for over 20 weeks. The results show this approach to be promising for fast high-throughput screening of liposomal formulations.
Lucena Severino A
Abstract Background The description of new hydrolytic enzymes is an important step in the development of techniques which use lignocellulosic materials as a starting point for fuel production. Sugarcane bagasse, which is subjected to pre-treatment, hydrolysis and fermentation for the production of ethanol in several test refineries, is the most promising source of raw material for the production of second-generation renewable fuels in Brazil. One problem when screening for hydrolytic activities is that activity against commercial substrates, such as carboxymethylcellulose, does not always correspond to activity against the natural lignocellulosic material. In addition, the macroscopic characteristics of the raw material, such as insolubility and heterogeneity, hinder its use in high-throughput screens. Results In this paper, we present the preparation of a colloidal suspension of particles obtained from sugarcane bagasse, with minimal chemical change to the lignocellulosic material, and demonstrate its use in high-throughput assays of hydrolases using Brazilian termites as the screened organisms. Conclusions Important differences between the use of the natural substrate and commercial cellulase substrates, such as carboxymethylcellulose or crystalline cellulose, were observed. This suggests that wood-feeding termites, in contrast to litter-feeding termites, might not be the best source of enzymes that degrade sugarcane biomass.
Liu, Chong; Wang, Lei; Li, Jingmin; Ding, Xiping; Chunyu, Li; Xu, Zheng; Wang, Qi
A multilayer polydimethylsiloxane microdevice for cell-based high-throughput drug screening is described in this paper. The microdevice was based on a modularization method and integrated a drug/medium concentration gradient generator (CGG), pneumatic microvalves and a cell culture microchamber array. The CGG was able to generate five steps of linear concentrations with the same outlet flow rate. The medium/drug flowed through the CGG and then vertically into the pear-shaped cell culture microchambers. This vertical perfusion mode was used to reduce the impact on cell physiology of the shear stress induced by fluid flow in the microchambers. Pear-shaped microchambers with two arrays of micropillars at each outlet were adopted in this microdevice, which were beneficial to cell distribution. Apoptosis experiments in which the cisplatin (DDP)-resistant cell line A549/DDP was treated with the chemotherapeutic cisplatin were performed successfully on this platform. The results showed that this novel microdevice could not only provide well-defined and stable conditions for cell culture, but was also useful for cell-based high-throughput drug screening with lower reagent and time consumption.
Background Mosquito transgenesis offers new promise for the genetic control of vector-borne infectious diseases such as malaria and dengue fever. Genetic control strategies require the release of large numbers of male mosquitoes into field populations, whether they are based on the use of sterile males (sterile insect technique, SIT) or on introducing genetic traits conferring refractoriness to disease transmission (population replacement). However, the current absence of high-throughput techniques for sorting different mosquito populations impairs the application of these control measures. Methods A method was developed to generate large mosquito populations of the desired sex and genotype. This method combines flow cytometry and the use of Anopheles gambiae transgenic lines that differentially express fluorescent markers in males and females. Results Fluorescence-assisted sorting allowed single-step isolation of homozygous transgenic mosquitoes from a mixed population. This method was also used to select wild-type males only, with high efficiency and accuracy, a highly desirable tool for genetic control strategies where the release of transgenic individuals may be problematic. Importantly, sorted males showed normal mating ability compared to their unsorted brothers. Conclusions The developed method will greatly facilitate both laboratory studies of mosquito vectorial capacity requiring high-throughput approaches and future field interventions in the fight against infectious disease vectors. PMID:22929810
Bae, Hyeonhu; Park, Minwoo; Jang, Byungryul; Kang, Yura; Park, Jinwoo; Lee, Hosik; Chung, Haegeun; Chung, Chihye; Hong, Suklyun; Kwon, Yongkyung; Yakobson, Boris I.; Lee, Hoonkyung
Nanostructured materials, such as zeolites and metal-organic frameworks, have been considered for capturing CO2. However, their application has been limited largely because they exhibit poor selectivity for flue gases and low capture capacity under low pressures. We perform a high-throughput screening for selective CO2 capture from flue gases by using first-principles thermodynamics. We find that elements with empty d orbitals selectively attract CO2 from gaseous mixtures under low CO2 pressures (~10⁻³ bar) at 300 K and release it at ~450 K. CO2 binding to these elements involves hybridization of the metal d orbitals with the CO2 π orbitals, and CO2-transition metal complexes have been observed in experiments. This result allows us to perform high-throughput screening to discover novel promising CO2 capture materials with empty d orbitals (e.g., Sc- or V-porphyrin-like graphene) and predict their capture performance under various conditions. Moreover, these findings provide physical insights into selective CO2 capture and open a new path toward exploring CO2 capture materials.
Kretz, Colin A; Tomberg, Kärt; Van Esbroeck, Alexander; Yee, Andrew; Ginsburg, David
We have combined random 6 amino acid substrate phage display with high throughput sequencing to comprehensively define the active site specificity of the serine protease thrombin and the metalloprotease ADAMTS13. The substrate motif for thrombin was determined by >6,700 cleaved peptides, and was highly concordant with previous studies. In contrast, ADAMTS13 cleaved only 96 peptides (out of >10⁷ sequences), with no apparent consensus motif. However, when the hexapeptide library was substituted into the P3-P3' interval of VWF73, an exosite-engaging substrate of ADAMTS13, 1670 unique peptides were cleaved. ADAMTS13 exhibited a general preference for aliphatic amino acids throughout the P3-P3' interval, except at P2 where Arg was tolerated. The cleaved peptides assembled into a motif dominated by P3 Leu, and bulky aliphatic residues at P1 and P1'. Overall, the P3-P2' amino acid sequence of von Willebrand Factor appears optimally evolved for ADAMTS13 recognition. These data confirm the critical role of exosite engagement for substrates to gain access to the active site of ADAMTS13, and define the substrate recognition motif for ADAMTS13. Combining substrate phage display with high throughput sequencing is a powerful approach for comprehensively defining the active site specificity of proteases.
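Deriving a motif from a set of cleaved peptides amounts to tabulating per-position residue frequencies; the sketch below is a simplified illustration of that step, not the authors' software:

```python
from collections import Counter

def position_frequencies(peptides):
    """Per-position amino-acid frequency table from equal-length peptides.

    Returns a list (one entry per position) of {residue: fraction} dicts.
    """
    length = len(peptides[0])
    assert all(len(p) == length for p in peptides)
    table = []
    for i in range(length):
        counts = Counter(p[i] for p in peptides)
        total = sum(counts.values())
        table.append({aa: n / total for aa, n in counts.items()})
    return table

def consensus(peptides):
    """Most frequent residue at each position (a crude motif summary)."""
    return "".join(max(freqs, key=freqs.get)
                   for freqs in position_frequencies(peptides))
```

A real analysis would weight by read counts and compare against the library's background composition rather than taking raw fractions.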
Chhabra, S.R.; Butland, G.; Elias, D.; Chandonia, J.-M.; Fok, V.; Juba, T.; Gorur, A.; Allen, S.; Leung, C.-M.; Keller, K.; Reveco, S.; Zane, G.; Semkiw, E.; Prathapam, R.; Gold, B.; Singer, M.; Ouellet, M.; Sazakal, E.; Jorgens, D.; Price, M.; Witkowska, E.; Beller, H.; Hazen, T.C.; Biggin, M.; Auer, M.; Wall, J.; Keasling, J.
The ability to conduct advanced functional genomic studies of the thousands of sequenced bacteria has been hampered by the lack of available tools for making high-throughput chromosomal manipulations in a systematic manner that can be applied across diverse species. In this work, we highlight the use of synthetic biological tools to assemble custom suicide vectors with reusable and interchangeable DNA “parts” to facilitate chromosomal modification at designated loci. These constructs enable an array of downstream applications including gene replacement and creation of gene fusions with affinity purification or localization tags. We employed this approach to engineer chromosomal modifications in a bacterium that has previously proven difficult to manipulate genetically, Desulfovibrio vulgaris Hildenborough, to generate a library of over 700 strains. Furthermore, we demonstrate how these modifications can be used for examining metabolic pathways, protein-protein interactions, and protein localization. The ubiquity of suicide constructs in gene replacement throughout biology suggests that this approach can be applied to engineer a broad range of species for a diverse array of systems biological applications and is amenable to high-throughput implementation.
Kushwaha, Garima; Srivastava, Gyan Prakash; Xu, Dong
Highly specific and efficient primer and probe design has been a major hurdle in many high-throughput techniques. Successful implementation of any PCR or probe hybridization technique depends on the quality of primers and probes used in terms of their specificity and cross-hybridization. Here we describe PRIMEGENSw3, a set of web-based utilities for high-throughput primer and probe design. These utilities allow users to select genomic regions and to design primer/probe for selected regions in an interactive, user-friendly, and automatic fashion. The system runs the PRIMEGENS algorithm in the back-end on the high-performance server with the stored genomic database or user-provided custom database for cross-hybridization check. Cross-hybridization is checked not only using BLAST but also by checking mismatch positions and energy calculation of potential hybridization hits. The results can be visualized online and also can be downloaded. The average success rate of primer design using PRIMEGENSw3 is ~90 %. The web server also supports primer design for methylated sequences, which is used in epigenetic studies. Stand-alone version of the software is also available for download at the website.
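The mismatch-position check mentioned above can be illustrated with a small sketch; the thresholds and the 3'-end protection window are hypothetical parameters for illustration, not PRIMEGENS defaults:

```python
def mismatch_positions(primer, site):
    """Positions where the primer and a candidate hybridization site differ."""
    assert len(primer) == len(site)
    return [i for i, (a, b) in enumerate(zip(primer, site)) if a != b]

def likely_cross_hyb(primer, site, max_mismatches=2, protect_3prime=3):
    """Heuristic: a site is a cross-hybridization risk if it has few total
    mismatches and none in the 3'-terminal bases, since a 3'-end mismatch
    tends to block polymerase extension."""
    mm = mismatch_positions(primer, site)
    no_3prime_mismatch = all(p < len(primer) - protect_3prime for p in mm)
    return len(mm) <= max_mismatches and no_3prime_mismatch
```

A production tool would combine such positional filters with BLAST hits and hybridization energy calculations, as the abstract describes.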
Sukumar, Nagamani; Das, Sourav
High-throughput in silico methods have offered the tantalizing potential to drastically accelerate the drug discovery process. Yet despite significant efforts expended by academia, national labs and industry over the years, many of these methods have not lived up to their initial promise of reducing the time and costs associated with the drug discovery enterprise, a process that can typically take over a decade and cost hundreds of millions of dollars from conception to final approval and marketing of a drug. Nevertheless, structure-based modeling has become a mainstay of computational biology and medicinal chemistry, helping to leverage our knowledge of the biological target and the chemistry of protein-ligand interactions. While ligand-based methods utilize the chemistry of molecules that are known to bind to the biological target, structure-based drug design methods rely on knowledge of the three-dimensional structure of the target, as obtained through crystallographic, spectroscopic or bioinformatics techniques. Here we review recent developments in the methodology and applications of structure-based and ligand-based methods and target-based chemogenomics in Virtual High Throughput Screening (VHTS), highlighting some case studies of recent applications, as well as current research in further development of these methods. The limitations of these approaches will also be discussed, to give the reader an indication of what might be expected in years to come.
Taggart, David J.; Camerlengo, Terry L.; Harrison, Jason K.; Sherrer, Shanen M.; Kshetry, Ajay K.; Taylor, John-Stephen; Huang, Kun; Suo, Zucai
Cellular genomes are constantly damaged by endogenous and exogenous agents that covalently and structurally modify DNA to produce DNA lesions. Although most lesions are mended by various DNA repair pathways in vivo, a significant number of damage sites persist during genomic replication. Our understanding of the mutagenic outcomes derived from these unrepaired DNA lesions has been hindered by the low throughput of existing sequencing methods. Therefore, we have developed a cost-effective high-throughput short oligonucleotide sequencing assay that uses next-generation DNA sequencing technology for the assessment of the mutagenic profiles of translesion DNA synthesis catalyzed by any error-prone DNA polymerase. The vast amount of sequencing data produced were aligned and quantified by using our novel software. As an example, the high-throughput short oligonucleotide sequencing assay was used to analyze the types and frequencies of mutations upstream, downstream and at a site-specifically placed cis–syn thymidine–thymidine dimer generated individually by three lesion-bypass human Y-family DNA polymerases. PMID:23470999
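Tallying mutation types and frequencies from aligned short reads, as the assay does around a lesion site, can be sketched as follows. This is a toy illustration with invented sequences, not the authors' alignment software:

```python
from collections import Counter

def mutation_spectrum(reference, reads):
    """For equal-length reads aligned to a reference, tally the observed
    bases at every position where a read disagrees with the reference."""
    spectrum = {}
    for read in reads:
        for i, (ref_base, obs) in enumerate(zip(reference, read)):
            if obs != ref_base:
                spectrum.setdefault(i, Counter())[obs] += 1
    return spectrum

ref = "GATTACA"
reads = ["GATTACA", "GATAACA", "GATAACA", "GCTTACA"]
spectrum = mutation_spectrum(ref, reads)
# T->A at position 3 in 2 of 4 reads; A->C at position 1 in 1 of 4 reads
```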
Adissu, Hibret A; Estabel, Jeanne; Sunter, David; Tuck, Elizabeth; Hooks, Yvette; Carragher, Damian M; Clarke, Kay; Karp, Natasha A; Newbigging, Susan; Jones, Nora; Morikawa, Lily; White, Jacqueline K; McKerlie, Colin
The Mouse Genetics Project (MGP) at the Wellcome Trust Sanger Institute aims to generate and phenotype over 800 genetically modified mouse lines over the next 5 years to gain a better understanding of mammalian gene function and provide an invaluable resource to the scientific community for follow-up studies. Phenotyping includes the generation of a standardized biobank of paraffin-embedded tissues for each mouse line, but histopathology is not routinely performed. In collaboration with the Pathology Core of the Centre for Modeling Human Disease (CMHD) we report the utility of histopathology in a high-throughput primary phenotyping screen. Histopathology was assessed in an unbiased selection of 50 mouse lines with (n=30) or without (n=20) clinical phenotypes detected by the standard MGP primary phenotyping screen. Our findings revealed that histopathology added correlating morphological data in 19 of 30 lines (63.3%) in which the primary screen detected a phenotype. In addition, seven of the 50 lines (14%) presented significant histopathology findings that were not associated with or predicted by the standard primary screen. Three of these seven lines had no clinical phenotype detected by the standard primary screen. Incidental and strain-associated background lesions were present in all mutant lines with good concordance to wild-type controls. These findings demonstrate the complementary and unique contribution of histopathology to high-throughput primary phenotyping of mutant mice.
Hirayama, Yusuke; Iguchi, Ryo; Miao, Xue-Fei; Hono, Kazuhiro; Uchida, Ken-ichi
We demonstrate a high-throughput direct measurement method for the magnetocaloric effect (MCE) by means of a lock-in thermography (LIT) technique. This method enables systematic measurements of the magnetic-field and operation-frequency dependences of the temperature change induced by the MCE. This is accomplished in a shorter time compared to conventional adiabatic temperature measurement methods. The direct measurement based on LIT is free from any possible miscalculations and errors arising from indirect measurements using thermodynamic relations. Importantly, the LIT technique makes simultaneous MCE measurements of multiple materials possible without increasing the measurement time, realizing high-throughput investigations of the MCE. By applying this method to Gd, we obtain the MCE-induced temperature change of 1.84 ± 0.11 K under a modulation field of 1.0 T and modulation frequency of 0.5 Hz at a temperature of 300.5 ± 0.5 K, offering evidence that the LIT method gives quantitative results.
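The lock-in detection at the heart of this method can be illustrated with a minimal software demodulator. This sketch is not the authors' instrument code; the sampling rate and signal values are invented, chosen only to mirror the 0.5 Hz modulation and ~1.84 K amplitude quoted above.

```python
import math

def lockin_amplitude(signal, fs, f_mod):
    """Software lock-in: demodulate the sampled signal against quadrature
    references at f_mod and average, recovering the amplitude of the
    component oscillating at the modulation frequency."""
    n = len(signal)
    x = sum(s * math.cos(2 * math.pi * f_mod * i / fs) for i, s in enumerate(signal)) / n
    y = sum(s * math.sin(2 * math.pi * f_mod * i / fs) for i, s in enumerate(signal)) / n
    return 2 * math.hypot(x, y)

# A 0.5 Hz temperature oscillation of 1.84 K amplitude riding on a 300.5 K
# baseline, sampled at 100 Hz over four full modulation periods:
fs, f_mod = 100.0, 0.5
sig = [300.5 + 1.84 * math.cos(2 * math.pi * f_mod * i / fs)
       for i in range(int(fs / f_mod) * 4)]
amplitude = lockin_amplitude(sig, fs, f_mod)  # ≈ 1.84
```

Averaging over whole modulation periods rejects the large static offset, which is why lock-in detection can pull a small MCE temperature change out of a slowly drifting background.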
Sinclair, Ian; Stearns, Rick; Pringle, Steven; Wingfield, Jonathan; Datwani, Sammy; Hall, Eric; Ghislain, Luke; Majlof, Lars; Bachman, Martin
High-throughput, direct measurement of substrate-to-product conversion by label-free detection, without the need for engineered substrates or secondary assays, could be considered the "holy grail" of drug discovery screening. Mass spectrometry (MS) has the potential to be part of this ultimate screening solution, but is constrained by the limitations of existing MS sample introduction modes that cannot meet the throughput requirements of high-throughput screening (HTS). Here we report data from a prototype system (Echo-MS) that uses acoustic droplet ejection (ADE) to transfer femtoliter-scale droplets in a rapid, precise, and accurate fashion directly into the MS. The acoustic source can load samples into the MS from a microtiter plate at a rate of up to three samples per second. The resulting MS signal displays a very sharp attack profile and ions are detected within 50 ms of activation of the acoustic transducer. Additionally, we show that the system is capable of generating multiply charged ion species from simple peptides and large proteins. The combination of high speed and low sample volume has significant potential within not only drug discovery, but also other areas of the industry. © 2015 Society for Laboratory Automation and Screening.
Metal-organic frameworks (MOFs) have emerged as great alternatives to traditional nanoporous materials for CO2 separation applications. MOFs are porous materials that are formed by self-assembly of transition metals and organic ligands. The most important advantage of MOFs over well-known porous materials is the possibility to generate multiple materials with varying structural properties and chemical functionalities by changing the combination of metal centers and organic linkers during synthesis. This leads to a large diversity of materials with various pore sizes and shapes that can be efficiently used for CO2 separations. Since the number of synthesized MOFs has already reached several thousand, experimental investigation of each MOF at the lab scale is not practical. High-throughput computational screening of MOFs is a great opportunity to identify the best materials for CO2 separation and to gain molecular-level insights into structure-performance relationships. This type of knowledge can be used to design new materials with the desired structural features that can lead to extraordinarily high CO2 selectivities. In this mini-review, we focus on developments in high-throughput molecular simulations of MOFs for CO2 separations. After reviewing the current studies on this topic, we discuss the opportunities and challenges in the field and address potential future developments.
Lai, Queenie T K; Lee, Kelvin C M; Tang, Anson H L; Wong, Kenneth K Y; So, Hayden K H; Tsia, Kevin K
Time-stretch imaging has been regarded as an attractive technique for high-throughput imaging flow cytometry primarily owing to its real-time, continuous ultrafast operation. Nevertheless, two key challenges remain: (1) sufficiently high time-stretch image resolution and contrast is needed for visualizing sub-cellular complexity of single cells, and (2) the ability to unravel the heterogeneity and complexity of the highly diverse population of cells - a central problem of single-cell analysis in life sciences - is required. We here demonstrate an optofluidic time-stretch imaging flow cytometer that enables these two features, in the context of high-throughput multi-class (up to 14 classes) phytoplankton screening and classification. Based on the comprehensive feature extraction and selection procedures, we show that the intracellular texture/morphology, which is revealed by high-resolution time-stretch imaging, plays a critical role in improving the accuracy of phytoplankton classification, as high as 94.7%, based on a multi-class support vector machine (SVM). We also demonstrate that high-resolution time-stretch images, which allow exploitation of various feature domains, e.g. Fourier space, enable further sub-population identification - paving the way toward deeper learning and classification based on large-scale single-cell images. Beyond biomedical diagnostics, this work is anticipated to find immediate applications in marine and biofuel research.
Guo, Feng; Lapsley, Michael Ian; Nawaz, Ahmad Ahsan; Zhao, Yanhui; Lin, Sz-Chin Steven; Chen, Yuchao; Yang, Shikuan; Zhao, Xing-Zhong; Huang, Tony Jun
Analysis of chemical or biomolecular contents in a tiny amount of specimen presents a significant challenge in many biochemical studies and diagnostic applications. In this work, we present a single-layer, optofluidic device for real-time, high-throughput, quantitative analysis of droplet contents. Our device integrates an optical fiber-based, on-chip detection unit with a droplet-based microfluidic unit. It can quantitatively analyze the contents of individual droplets in real-time. It also achieves a detection throughput of 2000 droplets per second, a detection limit of 20 nM, and an excellent reproducibility in its detection results. In a proof-of-concept study, we demonstrate that our device can be used to perform detection of DNA and its mutations by monitoring the fluorescent signal changes of the target DNA/molecular beacon complex in single droplets. Our approach can be immediately extended to a real-time, high-throughput detection of other biomolecules (such as proteins and viruses) in droplets. With its advantages in throughput, functionality, cost, size, and reliability, the droplet-based optofluidic device presented here can be a valuable tool for many medical diagnostic applications.
Nattkemper, T W; Ritter, H J; Schubert, W
A neural cell detection system (NCDS) for the automatic quantitation of fluorescent lymphocytes in tissue sections is presented in this paper. The system acquires visual knowledge from a set of training cell-image patches selected by a user. The trained system evaluates an image in 2 min, calculating the number, positions, and phenotypes of the fluorescent cells. For validation, the NCDS learning performance was tested by cross-validation on digitized images of tissue sections obtained from inherently different types of tissue: diagnostic tissue sections across the human tonsil and across an inflammatory lymphocyte infiltrate of the human skeletal muscle. The NCDS detection results were compared with detection results from biomedical experts and were visually evaluated by our most experienced biomedical expert. Although the micrographs were noisy and the fluorescent cells varied in shape and size, the NCDS detected a minimum of 95% of the cells. In contrast, the cellular counts based on visual cell recognition by the experts were inconsistent and largely unreproducible for approximately 80% of the lymphocytes present in a visual field. The data indicate that the NCDS is rapid and delivers highly reproducible results and, therefore, enables high-throughput topological screening of lymphocytes in many types of tissue, e.g., as obtained by routine diagnostic biopsy procedures. High-throughput screening with the NCDS provides the platform for the quantitative analysis of the interrelationship between tissue environment, cellular phenotype, and cellular topology.
Maddux, Nathaniel R.; Rosen, Ilan T.; Hu, Lei; Olsen, Christopher M.; Volkin, David B.; Middaugh, C. Russell
The Empirical Phase Diagram (EPD) technique is a vector-based multidimensional analysis method for summarizing large data sets from a variety of biophysical techniques. It can be used to provide comprehensive preformulation characterization of a macromolecule’s higher-order structural integrity and conformational stability. In its most common mode, it represents a type of stimulus-response diagram using environmental variables such as temperature, pH, and ionic strength as the stimulus, with alterations in macromolecular structure being the response. Until now EPD analysis has not been available in a high throughput mode because of the large number of experimental techniques and environmental stressor/stabilizer variables typically employed. A new instrument has been developed that combines circular dichroism, UV-absorbance, fluorescence spectroscopy and light scattering in a single unit with a 6-position temperature controlled cuvette turret. Using this multifunctional instrument and a new software system we have generated EPDs for four model proteins. Results confirm the reproducibility of the apparent phase boundaries and protein behavior within the boundaries. This new approach permits two EPDs to be generated per day using only 0.5 mg of protein per EPD. Thus, the new methodology generates reproducible EPDs in high-throughput mode, and represents the next step in making such determinations more routine. PMID:22447621
Fabrigar, Danica Joy; Hubbart, Christina; Miles, Alistair; Rockett, Kirk
Recent developments in genotyping technologies coupled with the growing desire to characterize genome variation in Anopheles populations open the opportunity to develop more effective genotyping strategies for high-throughput screening. A major bottleneck of this goal is nucleic acid extraction. Here, we examined the feasibility of using intact portions of a mosquito's leg as sources of template DNA for whole-genome amplification (WGA) by primer-extension preamplification. We used the Agena Biosciences MassARRAY® platform (formerly Sequenom) to genotype 78 SNPs for 265 WGA leg samples. We performed nucleic acid extraction on 36 mosquito carcasses, compared the genotype call concordance with their corresponding legs, and observed full concordance. Using three legs instead of one improved genotyping success rates (96% vs. 89%, respectively), although this difference was not significant. We provide a proof of concept that WGA reactions can be performed directly on mosquito legs, thereby eliminating the need to extract nucleic acid. This approach is straightforward and sensitive and allows both species determination and genotyping of Anopheles mosquitoes to be performed in a high-throughput manner. Our protocol also leaves the mosquito body intact, allowing other experimental analyses to be undertaken on the same sample. Based on our findings, this method would also be suitable for use with other insect species. © 2015 John Wiley & Sons Ltd.
Changes in protein glycosylation are related to different diseases and have potential as diagnostic and prognostic disease biomarkers. Transferrin (Tf) glycosylation changes are a common marker for congenital disorders of glycosylation. However, the biological interindividual variability of Tf N-glycosylation and the genes involved in glycosylation regulation are not known. Therefore, a high-throughput Tf isolation method and large-scale glycosylation studies are needed in order to address these questions. Due to their unique chromatographic properties, the use of chromatographic monoliths enables a very fast analysis cycle, thus significantly increasing sample preparation throughput. Here, we describe the characterization of novel immunoaffinity-based monolithic columns in a 96-well plate format for specific high-throughput purification of human Tf from blood plasma. We optimized the isolation and glycan preparation procedure for subsequent ultra-performance liquid chromatography (UPLC) analysis of Tf N-glycosylation and increased the sensitivity approximately threefold compared with the initial experimental conditions, with very good reproducibility.
Rahbari, Raheleh; Badge, Richard M
With the advent of new generations of high-throughput sequencing technologies, the catalog of human genome variants created by retrotransposon activity is expanding rapidly. However, despite these advances in describing L1 diversity, and the fact that L1 must retrotranspose in the germline, or prior to germline partitioning, to be evolutionarily successful, direct assessment of de novo L1 retrotransposition in the germline or early embryogenesis has not been achieved for endogenous L1 elements. A direct study of de novo L1 retrotransposition into susceptible loci within sperm DNA (Freeman et al., Hum Mutat 32(8):978-988, 2011) suggested that the rate of L1 retrotransposition in the germline is much lower than previously estimated. Here, we adapted the ATLAS L1 display technique (Badge et al., Am J Hum Genet 72(4):823-838, 2003) to investigate de novo L1 retrotransposition in human genomes. In this chapter, we describe how we combined a high-coverage ATLAS variant with high-throughput sequencing, achieving 11-25× sequence depth per single amplicon, to study L1 retrotransposition in whole genome amplified (WGA) DNAs.
Virdee, Jasmeet K; Saro, Gabriella; Fouillet, Antoine; Findlay, Jeremy; Ferreira, Filipa; Eversden, Sarah; O'Neill, Michael J; Wolak, Joanna; Ursu, Daniel
Loss of synapses or alteration of synaptic activity is associated with the cognitive impairment observed in a number of psychiatric and neurological disorders, such as schizophrenia and Alzheimer's disease. Therefore, the successful development of in vitro methods that can investigate synaptic function in a high-throughput format could be highly impactful for neuroscience drug discovery. We present here the development, characterisation and validation of a novel high-throughput in vitro model for assessing neuronal function and synaptic transmission in primary rodent neurons. The novelty of our approach resides in the combination of electrical field stimulation (EFS) with data acquisition in spatially separated areas of an interconnected neuronal network. We integrated our methodology with state-of-the-art drug discovery instrumentation (FLIPR Tetra) and used selective tool compounds to perform a systematic pharmacological validation of the model. We investigated pharmacological modulators targeting pre- and post-synaptic receptors (AMPA, NMDA, GABA-A and mGluR2/3 receptors, and Nav and Cav voltage-gated ion channels) and demonstrated the ability of our model to discriminate and measure synaptic transmission in cultured neuronal networks. Application of the model described here as an unbiased phenotypic screening approach will help with our long-term goals of discovering novel therapeutic strategies for treating neurological disorders.
High-throughput sequencing (HTS) yields tens of thousands to millions of sequences that require a large amount of pre-processing work to clean various artifacts. Such cleaning cannot be performed manually. Existing programs are not suitable for immunoglobulin (Ig) genes, which are variable and often highly mutated. This paper describes Ig-HTS-Cleaner (Ig High Throughput Sequencing Cleaner), a program containing a simple cleaning procedure that successfully deals with pre-processing of Ig sequences derived from HTS, and Ig-Indel-Identifier (Ig Insertion - Deletion Identifier), a program for identifying legitimate and artifact insertions and/or deletions (indels). Our programs were designed for analyzing Ig gene sequences obtained by 454 sequencing, but they are applicable to all types of sequences and sequencing platforms. Ig-HTS-Cleaner and Ig-Indel-Identifier have been implemented in Java and saved as executable JAR files, supported on Linux and MS Windows. No special requirements are needed in order to run the programs, except for correctly constructing the input files as explained in the text. The programs' performance has been tested and validated on real and simulated data sets.
Label-free and real-time detection technologies can dramatically reduce the time and cost of pharmaceutical testing and development. However, to reach their full promise, these technologies need to be adaptable to high-throughput automation. To demonstrate the potential of single-walled carbon nanotube field-effect transistors (SWCNT-FETs) for high-throughput peptide-based assays, we have designed circuits arranged in an 8 × 12 (96-well) format that are accessible to standard multichannel pipettors. We performed epitope mapping of two HIV-1 gp160 antibodies using an overlapping gp160 15-mer peptide library coated onto non-functionalized SWCNTs. The 15-mer peptides did not require a linker to adhere to the non-functionalized SWCNTs, and binding data were obtained in real time for all 96 circuits. Despite some sequence differences between the HIV strains used to generate these antibodies and the overlapping peptide library, our results using these antibodies are in good agreement with known data, indicating that peptides immobilized onto SWCNTs are accessible and that linear epitope mapping can be performed in minutes using SWCNT-FETs.
Kim, Seunggyu; Lee, Seokhun; Jeon, Jessie S.
To determine the most effective antimicrobial treatment for an infectious pathogen, high-throughput antibiotic susceptibility testing (AST) is critically required. However, conventional AST requires at least 16 hours to reach the minimum observable population. We therefore developed a microfluidic system that allows maintenance of a linear antibiotic concentration gradient and measurement of local bacterial density. Based on the Stokes-Einstein equation, the flow rate in the microchannel was optimized so that linearization was achieved within 10 minutes, taking into account the diffusion coefficient of each antibiotic in the agar gel. As a result, the minimum inhibitory concentration (MIC) of each antibiotic against P. aeruginosa could be determined just 6 hours after applying the linear antibiotic concentration gradient. In conclusion, our system demonstrates the efficacy of a high-throughput AST platform through comparison of MICs with the Clinical and Laboratory Standards Institute (CLSI) reference ranges for each antibiotic. This work was supported by the Climate Change Research Hub (Grant No. N11170060) of the KAIST and by the Brain Korea 21 Plus project.
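The Stokes-Einstein estimate underpinning the flow-rate optimization can be sketched numerically. The molecular radius, viscosity and gel length below are illustrative assumptions, not values from the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_d(temp_k, eta_pa_s, radius_m):
    """Diffusion coefficient (m^2/s) of a sphere of hydrodynamic radius
    radius_m in a medium of viscosity eta_pa_s at temperature temp_k."""
    return K_B * temp_k / (6 * math.pi * eta_pa_s * radius_m)

def diffusion_time(length_m, d):
    """Characteristic one-dimensional diffusion time, t ~ L^2 / (2D)."""
    return length_m ** 2 / (2 * d)

# Illustrative: a ~0.5 nm antibiotic molecule in a water-like agar gel
# (~1 mPa*s) at 37 C, diffusing across a 100 um slab.
d = stokes_einstein_d(310.15, 1.0e-3, 0.5e-9)   # ~4.5e-10 m^2/s
t = diffusion_time(100e-6, d)                   # ~11 s
```

Estimates like this motivate tuning the channel flow rate so the concentration profile linearizes in minutes rather than hours.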
Majeed, Hassaan; Kandel, Mikhail E.; Bhaduri, Basanta; Han, Kevin; Luo, Zelun; Tangella, Krishnarao; Popescu, Gabriel
While automated blood cell counters have made great progress in detecting abnormalities in blood, the lack of specificity for a particular disease, limited information on single-cell morphology and the intrinsic uncertainty due to high throughput in these instruments often necessitate detailed inspection in the form of a peripheral blood smear. Such tests are relatively time-consuming and frequently rely on medical professionals tally-counting specific cell types. These assays rely on the contrast generated by chemical stains, with the signal intensity strongly related to staining and preparation techniques, frustrating machine learning algorithms that require consistent quantities to denote the features in question. Instead, we opt to use quantitative phase imaging, understanding that the resulting image is due entirely to the structure (intrinsic contrast) rather than the complex interplay of stain and sample. We present here our first steps to automate peripheral blood smear scanning, in particular a method to generate the quantitative phase image of an entire blood smear at high throughput using white light diffraction phase microscopy (wDPM), a single-shot and common-path interferometric imaging technique.
Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver
In the analysis of homologous sequences, computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA), where millions of sequences are already publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. In this study, we present the SILVA Incremental Aligner (SINA) used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high-throughput performance demands. SINA was evaluated in comparison with the commonly used high-throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3%, 97.6% and 96.1% accuracy. A larger benchmark MSA comprising 38,772 sequences could be reproduced with 98.9% and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences, respectively. SINA achieved higher accuracy than PyNAST and mothur in all performed benchmarks. Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, the user manual and a tutorial. SINA is made available under a personal-use license.
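The k-mer search stage can be illustrated with a toy prefilter that ranks reference sequences by shared k-mers before the more expensive alignment step. This sketches the general idea only, not SINA's implementation; k and the sequences are invented:

```python
from collections import defaultdict

def kmer_index(references, k):
    """Map each k-mer to the set of reference ids that contain it."""
    index = defaultdict(set)
    for rid, seq in references.items():
        for i in range(len(seq) - k + 1):
            index[seq[i:i + k]].add(rid)
    return index

def rank_references(query, index, k):
    """Rank reference ids by the number of query k-mers they share."""
    hits = defaultdict(int)
    for i in range(len(query) - k + 1):
        for rid in index.get(query[i:i + k], ()):
            hits[rid] += 1
    return sorted(hits, key=hits.get, reverse=True)

refs = {"seq_a": "ACGTACGTAA", "seq_b": "TTTTGGGGCC"}
ranking = rank_references("ACGTACGT", kmer_index(refs, 4), 4)  # → ["seq_a"]
```

Only the top-ranked references then need to be aligned against in full, which is what makes k-mer prefiltering pay off at high throughput.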
Tian, Geng; Tang, Fangrong; Yang, Chunhua; Zhang, Wenfeng; Bergquist, Jonas; Wang, Bin; Mi, Jia; Zhang, Jiandi
The lack of an affordable method for high-throughput immunoblot analysis in daily use remains a major challenge for scientists worldwide. Here we propose Quantitative Dot Blot (QDB) analysis to meet this demand. With a defined linear range, QDB analysis fundamentally transforms the traditional immunoblot method into a true quantitative assay. Its convenience in analyzing large numbers of samples also enables bench scientists to examine protein expression levels across multiple parameters. In addition, the small amount of sample lysate needed for analysis means significant savings in research resources and effort. This method was evaluated at both the cellular and tissue levels, with unexpected observations that would otherwise be hard to achieve using conventional immunoblot methods like Western blot analysis. Using the QDB technique, we were able to observe a significant age-dependent alteration in CAPG protein expression level in TRAMP mice. We believe that the adoption of QDB analysis will have an immediate impact on biological and biomedical research by providing much needed high-throughput information at the protein level in this "Big Data" era.
Adiponectin, the adipose-derived hormone, plays an important role in the suppression of metabolic disorders that can result in type 2 diabetes, obesity, and atherosclerosis. It has been shown that up-regulation of adiponectin or the adiponectin receptor has a number of therapeutic benefits. Given that it is hard to convert the full-size adiponectin protein into a viable drug, adiponectin receptor agonists could be designed or identified using high-throughput screening. Here, we report on the development of a two-step screening process to identify adiponectin agonists. In the first step, we developed a high-throughput screening assay based on fluorescence polarization to identify adiponectin ligands. The fluorescence polarization assay reported here could be adapted to screening against larger small-molecule compound libraries. A natural product library containing 10,000 compounds was screened and 9 hits were selected for validation. These compounds were taken into second-step in vitro tests to confirm their agonistic activity. The most active adiponectin receptor 1 agonists are matairesinol, arctiin, (−)-arctigenin and gramine. The most active adiponectin receptor 2 agonists are parthenolide, taxifoliol, deoxyschizandrin, and syringin. These compounds may be useful drug candidates for hypoadiponectin-related diseases.
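The fluorescence polarization readout behind such a screen is computed from the parallel and perpendicular emission intensities. A minimal sketch, with invented intensities and an assumed instrument G-factor of 1:

```python
def polarization(i_parallel, i_perpendicular, g=1.0):
    """Fluorescence polarization P = (I_par - G*I_perp) / (I_par + G*I_perp);
    commonly reported in millipolarization units as 1000 * P."""
    return (i_parallel - g * i_perpendicular) / (i_parallel + g * i_perpendicular)

# A small fluorescent tracer tumbles quickly when free (low P) and slowly
# when receptor-bound (high P); a competing ligand displaces the tracer and
# lowers the signal. The intensities below are illustrative only.
free_mp = 1000 * polarization(110.0, 90.0)    # ≈ 100 mP
bound_mp = 1000 * polarization(160.0, 40.0)   # ≈ 600 mP
```

The bound-minus-free window (here ~500 mP) is what makes the assay robust enough for plate-based high-throughput screening.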
Yus, Eva; Yang, Jae-Seong; Sogues, Adrià; Serrano, Luis
Quantitative analysis of the sequence determinants of transcription and translation regulation is relevant for systems and synthetic biology. To identify these determinants, researchers have developed different methods of screening random libraries using fluorescent reporters or antibiotic resistance genes. Here, we have implemented a generic approach called ELM-seq (expression level monitoring by DNA methylation) that overcomes the technical limitations of such classic reporters. ELM-seq uses DamID (Escherichia coli DNA adenine methylase as a reporter coupled with methylation-sensitive restriction enzyme digestion and high-throughput sequencing) to enable in vivo quantitative analyses of upstream regulatory sequences. Using the genome-reduced bacterium Mycoplasma pneumoniae, we show that ELM-seq has a large dynamic range and causes minimal toxicity. We use ELM-seq to determine key sequences (known and putatively novel) of promoter and untranslated regions that influence transcription and translation efficiency. Applying ELM-seq to other organisms will help us to further understand gene expression and guide synthetic biology.
Pang, Hei-Leung; Kwok, Nga-Yan; Chan, Pak-Ho; Yeung, Chi-Hung; Lo, Waihung; Wong, Kwok-Yin
The use of the conventional 5-day biochemical oxygen demand (BOD5) method in BOD determination is greatly hampered by its time-consuming sampling procedure and its technical difficulty in the handling of a large pool of wastewater samples. Thus, it is highly desirable to develop a fast and high-throughput biosensor for BOD measurements. This paper describes the construction of a microplate-based biosensor consisting of an organically modified silica (ORMOSIL) oxygen-sensing film for high-throughput determination of BOD in wastewater. The ORMOSIL oxygen-sensing film was prepared by reacting tetramethoxysilane with dimethyldimethoxysilane in the presence of the oxygen-sensitive dye tris(4,7-diphenyl-1,10-phenanthroline)ruthenium(II) chloride. The silica composite formed a homogeneous, crack-free oxygen-sensing film on polystyrene microtiter plates with high stability, and the embedded ruthenium dye interacted with the dissolved oxygen in wastewater according to the Stern-Volmer relation. The bacterium Stenotrophomonas maltophilia was loaded into the ORMOSIL/PVA composite (deposited on top of the oxygen-sensing film) and used to metabolize the organic compounds in wastewater. This BOD biosensor was found to be able to determine the BOD values of wastewater samples within 20 min by monitoring the dissolved oxygen concentrations. Moreover, the BOD values determined by the BOD biosensor were in good agreement with those obtained by the conventional BOD5 method.
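The Stern-Volmer relation mentioned above links quenching of the ruthenium dye's fluorescence to the dissolved oxygen concentration: F0/F = 1 + Ksv·[O2]. As a minimal illustrative sketch (the function name and numbers are hypothetical, not from the paper), the oxygen readout can be obtained by inverting a measured intensity ratio:

```python
def oxygen_from_fluorescence(f0: float, f: float, ksv: float) -> float:
    """Invert the Stern-Volmer relation F0/F = 1 + Ksv*[O2].

    f0  -- fluorescence intensity with no quencher (zero dissolved oxygen)
    f   -- measured fluorescence intensity
    ksv -- Stern-Volmer quenching constant (per concentration unit)
    Returns the dissolved oxygen concentration in the units implied by ksv.
    """
    if f0 <= 0 or f <= 0 or ksv <= 0:
        raise ValueError("intensities and Ksv must be positive")
    return (f0 / f - 1.0) / ksv

# If quenching halves the intensity, F0/F = 2, so [O2] = 1 / Ksv.
```

In the biosensor, bacterial metabolism of organics depletes oxygen, so a rising fluorescence signal over time maps directly to BOD via this relation.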
Lesley Joan Collins
ncRNAs are key genes in many human diseases including cancer and viral infection, and provide critical functions in pathogenic organisms such as fungi, bacteria, viruses and protists. Until now, the identification and characterization of ncRNAs associated with disease has been slow or inaccurate, requiring many years of testing to understand complicated RNA and protein gene relationships. High-throughput sequencing now offers the opportunity to characterize miRNAs, siRNAs, snoRNAs and long ncRNAs on a genomic scale, making it faster and easier to clarify how these ncRNAs contribute to the disease state. However, this technology is still relatively new, and ncRNA discovery is not a high-priority application for streamlined bioinformatics. Here we summarize background concepts and practical approaches for ncRNA analysis using high-throughput sequencing, and how it relates to understanding human disease. As a case study, we focus on the parasitic protists Giardia lamblia and Trichomonas vaginalis, where large evolutionary distance has meant difficulties in comparing ncRNAs with those from model eukaryotes. A combination of biological, computational and sequencing approaches has enabled easier classification of ncRNA classes such as snoRNAs, but has also aided the identification of novel classes. It is hoped that a higher level of understanding of ncRNA expression and interaction may aid in the development of less harsh treatments for protist-based diseases.
Kebschull, Justus M.; Zador, Anthony M.
PCR permits the exponential and sequence-specific amplification of DNA, even from minute starting quantities. PCR is a fundamental step in preparing DNA samples for high-throughput sequencing. However, there are errors associated with PCR-mediated amplification. Here we examine the effects of four important sources of error—bias, stochasticity, template switches and polymerase errors—on sequence representation in low-input next-generation sequencing libraries. We designed a pool of diverse PCR amplicons with a defined structure, and then used Illumina sequencing to search for signatures of each process. We further developed quantitative models for each process, and compared predictions of these models to our experimental data. We find that PCR stochasticity is the major force skewing sequence representation after amplification of a pool of unique DNA amplicons. Polymerase errors become very common in later cycles of PCR but have little impact on the overall sequence distribution as they are confined to small copy numbers. PCR template switches are rare and confined to low copy numbers. Our results provide a theoretical basis for removing distortions from high-throughput sequencing data. In addition, our findings on PCR stochasticity will have particular relevance to quantification of results from single cell sequencing, in which sequences are represented by only one or a few molecules. PMID:26187991
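The PCR stochasticity identified above as the dominant source of skew can be illustrated with a simple Galton-Watson branching process, in which each template copy is duplicated with probability equal to the per-cycle efficiency. The sketch below is illustrative only (it is not the authors' model code; names and parameters are assumptions):

```python
import random

def simulate_pcr(n_amplicons, cycles, efficiency, seed=0):
    """Galton-Watson sketch of PCR amplification.

    Each of n_amplicons starts as a single molecule; in every cycle,
    each existing copy is duplicated with probability `efficiency`.
    Returns the final copy number of each amplicon.
    """
    rng = random.Random(seed)
    counts = [1] * n_amplicons
    for _ in range(cycles):
        counts = [c + sum(rng.random() < efficiency for _ in range(c))
                  for c in counts]
    return counts

# With efficiency < 1, luck in the early cycles compounds: initially
# identical molecules diverge in copy number, skewing the sequence
# representation of the final library.
```

Running this with, say, 50 amplicons at 60% efficiency for 12 cycles produces a broad spread of copy numbers even though every amplicon started from exactly one molecule, which is the qualitative effect the abstract describes for low-input libraries.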
Lam, Kathy N; Hall, Michael W; Engel, Katja; Vey, Gregory; Cheng, Jiujun; Neufeld, Josh D; Charles, Trevor C
High-throughput sequencing methods have been instrumental in the growing field of metagenomics, with technological improvements enabling greater throughput at decreased costs. Nonetheless, the economy of high-throughput sequencing cannot be fully leveraged in the subdiscipline of functional metagenomics. In this area of research, environmental DNA is typically cloned to generate large-insert libraries from which individual clones are isolated, based on specific activities of interest. Sequence data are required for complete characterization of such clones, but the sequencing of a large set of clones requires individual barcode-based sample preparation; this can become costly, as the cost of clone barcoding scales linearly with the number of clones processed, and thus sequencing a large number of metagenomic clones often remains cost-prohibitive. We investigated a hybrid Sanger/Illumina pooled sequencing strategy that omits barcoding altogether, and we evaluated this strategy by comparing the pooled sequencing results to reference sequence data obtained from traditional barcode-based sequencing of the same set of clones. Using identity and coverage metrics in our evaluation, we show that pooled sequencing can generate high-quality sequence data, without producing problematic chimeras. Though caveats of a pooled strategy exist and further optimization of the method is required to improve recovery of complete clone sequences and to avoid circumstances that generate unrecoverable clone sequences, our results demonstrate that pooled sequencing represents an effective and low-cost alternative for sequencing large sets of metagenomic clones.
Andrew Paul Hutchins
Genomic datasets and the tools to analyze them have proliferated at an astonishing rate. However, such tools are often poorly integrated with each other: each program typically produces its own custom output in a variety of non-standard file formats. Here we present glbase, a framework that uses a flexible set of descriptors that can quickly parse non-binary data files. glbase includes many functions to intersect two lists of data, including operations on genomic interval data and support for efficient random access to huge genomic data files. Many glbase functions can produce graphical outputs, including scatter plots, heatmaps, boxplots and other common analytical displays of high-throughput data such as RNA-seq, ChIP-seq and microarray expression data. glbase is designed to rapidly bring biological data into a Python-based analytical environment to facilitate analysis and data processing. In summary, glbase is a flexible and multifunctional toolkit that allows the combination and analysis of high-throughput data (especially next-generation sequencing and genome-wide data), and which has been instrumental in the analysis of complex data sets. glbase is freely available at http://bitbucket.org/oaxiom/glbase/.
An, Qun-Xing; Li, Cui-Ying; Xu, Li-Juan; Zhang, Xian-Qing; Bai, Yan-Jun; Shao, Zhong-Jun; Zhang, Wei
Comprehensive and accurate detection of human platelet antigens (HPAs) plays a significant role in diagnosis and prevention of the platelet (PLT) alloimmune syndromes and ensuring clinical safety of patients undergoing PLT transfusion. The majority of the available methods are incapable of performing high-throughput simultaneous detection of HPA-1 to -16, and the accuracy of many methods needs to be further enhanced. We have developed a new HPA-genotyping method for simultaneous detection of HPA-1 to -16 based on suspension array technology. A total of 216 samples from Chinese Han donors in Xi'an were genotyped using the developed method, and all the samples again were genotyped using polymerase chain reaction (PCR) sequence-based typing (PCR-SBT), which is considered the gold standard. All 216 samples were successfully genotyped for HPA-1 to -16 using both our method and PCR-SBT. Results showed that the genotype and allele frequencies obtained using our method were fully consistent with those obtained using PCR-SBT. Our method provides accurate, high-throughput, and simultaneous genotyping of HPA-1 to -16 and will serve as the foundation for large-scale clinical genotyping of HPAs and for the establishment of an HPA-typed PLT donor registry. © 2013 American Association of Blood Banks.
Wan, Ying-Wooi; Allen, Genevera I; Baker, Yulia; Yang, Eunho; Ravikumar, Pradeep; Anderson, Matthew; Liu, Zhandong
Technological advances in medicine have led to a rapid proliferation of high-throughput "omics" data. Tools to mine these data and discover disrupted disease networks are needed, as they hold the key to understanding complicated interactions between genes, mutations and aberrations, and epigenetic markers. We developed an R software package, XMRF, that can be used to fit Markov networks to various types of high-throughput genomics data. Encoding the models and estimation techniques of the recently proposed exponential family Markov Random Fields (Yang et al., 2012), our software can be used to learn genetic networks from RNA-sequencing data (counts, via Poisson graphical models), mutation and copy number variation data (categorical, via Ising models), and methylation data (continuous, via Gaussian graphical models). XMRF is the only tool that allows network structure learning using the native distribution of the data instead of the standard Gaussian. Moreover, the parallelization feature of the implemented algorithms computes large-scale biological networks efficiently. XMRF is available from CRAN and Github ( https://github.com/zhandong/XMRF ).
Lee, Jonathan; Gulzar, Naveed; Scott, Jamie K.; Li, Paul C. H.
Immunoassays have become a standard tool for secretome analysis in clinical and research settings. In this field there is a need for a high-throughput method that uses low sample volumes. Microfluidics and nanofluidics have been developed for this purpose. Our lab has developed a nanofluidic bioarray (NBA) chip, the goal being a high-throughput system that assays low sample volumes against multiple probes. A combination of horizontal and vertical channels is used to create an array of antigens on the surface of the NBA chip in one dimension, which is probed by flowing antibodies from biological fluids in the other dimension. We have tested the NBA chip by immobilizing streptavidin and then biotinylated peptide to detect the presence of a mouse monoclonal antibody (MAb) that is specific for the peptide. Bound antibody is detected by an AlexaFluor 647-labeled goat (anti-mouse IgG) polyclonal antibody. Using the NBA chip, we have successfully detected peptide binding by small-volume (0.5 μl) samples containing 50 attomoles (100 pM) MAb.
Recently, several structural genomics centers have been established and a remarkable number of three-dimensional structures of soluble proteins have been solved. For membrane proteins, the number of structures solved has been significantly trailing that for their soluble counterparts, not least because over-expression and purification of membrane proteins is a much more arduous process. By using high-throughput technologies, a large number of membrane protein targets can be screened simultaneously and a greater number of expression and purification conditions can be employed, leading to a higher probability of successfully determining the structure of membrane proteins. This unit describes the cloning, expression and screening of membrane proteins using high-throughput methodologies developed in our laboratory. Basic Protocol 1 deals with the cloning of inserts into expression vectors by ligation-independent cloning. Basic Protocol 2 describes the expression and purification of the target proteins on a miniscale. Lastly, for the targets that express at the miniscale, Basic Protocols 3 and 4 outline the methods employed for the expression and purification of targets at the midi-scale, as well as a procedure for detergent screening and identification of detergent(s) in which the target protein is stable. PMID:24510647
Hou, Sichao; Huo, Ruiqing; Su, Ming
Commonly used thermal analysis tools such as calorimeters and thermal conductivity meters are separate instruments limited by low throughput, where only one sample is examined at a time. This work reports an infrared-based optical calorimetry method, together with its theoretical foundation, that provides an integrated solution for characterizing the thermal properties of materials with high throughput. By taking time-domain temperature information from spatially distributed samples, this method allows a single device (an infrared camera) to determine the thermal properties of both phase-change systems (melting temperature and latent heat of fusion) and non-phase-change systems (thermal conductivity and heat capacity). This method further allows these thermal properties of multiple samples to be determined rapidly, remotely, and simultaneously. In this proof-of-concept experiment, the thermal properties of a panel of 16 samples, including melting temperatures, latent heats of fusion, heat capacities, and thermal conductivities, were determined in 2 min with high accuracy. Given the high thermal, spatial, and temporal resolutions of the advanced infrared camera, this method has the potential to revolutionize the thermal characterization of materials by providing an integrated solution with high throughput, high sensitivity, and short analysis time.
Hong, Hyundae; Benac, Jasenka; Riggsbee, Daniel; Koutsky, Keith
High-throughput (HT) phenotyping of crops is essential to increase yield in environments deteriorated by climate change. The controlled environment of a greenhouse offers an ideal platform to study genotype-to-phenotype linkages for crop screening. Advanced imaging technologies are used to study plants' responses to resource limitations such as water and nutrient deficiency. Advanced imaging technologies coupled with automation make HT phenotyping in the greenhouse not only feasible, but practical. Monsanto has a state-of-the-art automated greenhouse (AGH) facility. Handling of the soil, pots, water and nutrients is completely automated. Images of the plants are acquired by multiple hyperspectral and broadband cameras. The hyperspectral cameras cover wavelengths from visible light through short-wave infrared (SWIR). In-house developed software analyzes the images to measure plant morphological and biochemical properties. We measure phenotypic metrics like plant area, height, and width as well as biomass. Hyperspectral imaging allows us to measure biochemical metrics such as chlorophyll, anthocyanin, and foliar water content. The last 4 years of AGH operations on crops like corn, soybean, and cotton have demonstrated successful application of imaging and analysis technologies for high-throughput plant phenotyping. Using HT phenotyping, scientists have been showing strong correlations to environmental conditions, such as water and nutrient deficits, as well as the ability to tease apart distinct differences in the genetic backgrounds of crops.
Microgel is a kind of biocompatible polymeric material which has been widely used as a micro-carrier in materials synthesis, drug delivery and cell biology applications. However, high-throughput generation of individual microgels for on-site analysis in a microdevice still remains a challenge. Here, we present a simple and stable droplet microfluidic system to realize high-throughput generation and trapping of individual agarose microgels, based on the synergetic effect of surface tension and hydrodynamic forces in microchannels, and use it for 3-D cell culture in real time. The established system was mainly composed of droplet generators with a flow-focusing T-junction and a series of arrayed individual trap structures. The whole process, including independent agarose microgel formation, immobilization in the trapping array and in situ gelation via cooling, could be carried out entirely on the integrated microdevice. The performance of this system was demonstrated by successfully encapsulating and culturing adenoid cystic carcinoma (ACCM) cells in the gelated agarose microgels. This established approach is simple and easy to operate, and can not only generate micro-carriers with different components in parallel, but also monitor cell behavior in a 3D matrix in real time. It can also be extended for applications in the areas of material synthesis and tissue engineering. © 2013 Springer-Verlag Berlin Heidelberg.
Current cell-based assays for determining the functional properties of high-density lipoproteins (HDL) have limitations. We report here the development of a new, robust fluorometric cell-free biochemical assay that measures HDL lipid peroxidation (HDLox) based on the oxidation of the fluorochrome Amplex Red. HDLox correlated with previously validated cell-based (r = 0.47, p<0.001) and cell-free assays (r = 0.46, p<0.001). HDLox distinguished dysfunctional HDL in established animal models of atherosclerosis and Human Immunodeficiency Virus (HIV) patients. Using an immunoaffinity method for capturing HDL, we demonstrate the utility of this novel assay for measuring HDLox in a high-throughput format. Furthermore, HDLox correlated significantly with measures of cardiovascular disease, including carotid intima media thickness (r = 0.35, p<0.01) and subendocardial viability ratio (r = -0.21, p = 0.05), and with physiological parameters such as metabolic and anthropometric parameters (p<0.05). In conclusion, we report the development of a new fluorometric method that offers a reproducible and rapid means for determining HDL function/quality that is suitable for high-throughput implementation.
Urasaki, Yasuyo; Fiscus, Ronald R; Le, Thuc T
We describe an alternative approach to classifying fatty liver by profiling protein post-translational modifications (PTMs) with high-throughput capillary isoelectric focusing (cIEF) immunoassays. Four strains of mice were studied, with fatty livers induced by different causes, such as ageing, genetic mutation, acute drug usage, and high-fat diet. Nutrient-sensitive PTMs of a panel of 12 liver metabolic and signalling proteins were simultaneously evaluated with cIEF immunoassays, using nanograms of total cellular protein per assay. Changes to liver protein acetylation, phosphorylation, and O-linked N-acetylglucosamine glycosylation were quantified and compared between normal and diseased states. Fatty liver tissues could be distinguished from one another by distinctive protein PTM profiles. Fatty liver is currently classified by morphological assessment of lipid droplets, without identifying the underlying molecular causes. In contrast, high-throughput profiling of protein PTMs has the potential to provide molecular classification of fatty liver. Copyright © 2016 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.
Oldenburg Delene J
Background: The amount of DNA in the chloroplasts of some plant species has been shown recently to decline dramatically during leaf development. A high-throughput method of DNA detection in chloroplasts is now needed in order to facilitate further investigation of this process using large numbers of tissue samples. Results: The DNA-binding fluorophores 4',6-diamidino-2-phenylindole (DAPI), SYBR Green I (SG), SYTO 42, and SYTO 45 were assessed for their utility in flow cytometric analysis of DNA in Arabidopsis chloroplasts. Fluorescence microscopy and real-time quantitative PCR (qPCR) were used to validate flow cytometry data. We found neither DAPI nor SYTO 45 suitable for flow cytometric analysis of chloroplast DNA (cpDNA) content, but did find changes in cpDNA content during development by flow cytometry using SG and SYTO 42. The latter dye provided more sensitive detection, and the results were similar to those from the fluorescence microscopic analysis. Differences in SYTO 42 fluorescence were found to correlate with differences in cpDNA content as determined by qPCR using three primer sets widely spaced across the chloroplast genome, suggesting that the whole genome undergoes copy number reduction during development, rather than selective reduction/degradation of subgenomic regions. Conclusion: Flow cytometric analysis of chloroplasts stained with SYTO 42 is a high-throughput method suitable for determining changes in cpDNA content during development and for sorting chloroplasts on the basis of DNA content.
Secretion of extracellular vesicles is a general cellular activity that spans the range from simple unicellular organisms (e.g. archaea, Gram-positive and Gram-negative bacteria) to complex multicellular ones, suggesting that this extracellular vesicle-mediated communication is evolutionarily conserved. Extracellular vesicles are spherical bilayered proteolipids with a mean diameter of 20–1,000 nm, which are known to contain various bioactive molecules including proteins, lipids, and nucleic acids. Here, we present EVpedia, which is an integrated database of high-throughput datasets from prokaryotic and eukaryotic extracellular vesicles. EVpedia provides high-throughput datasets of vesicular components (proteins, mRNAs, miRNAs, and lipids) present on prokaryotic, non-mammalian eukaryotic, and mammalian extracellular vesicles. In addition, EVpedia also provides an array of tools, such as the search and browse of vesicular components, Gene Ontology enrichment analysis, network analysis of vesicular proteins and mRNAs, and a comparison of vesicular datasets by ortholog identification. Moreover, publications on extracellular vesicle studies are listed in the database. This free web-based database of EVpedia (http://evpedia.info) might serve as a fundamental repository to stimulate the advancement of extracellular vesicle studies and to elucidate the novel functions of these complex extracellular organelles.
Smith, Eric R.; Begley, Darren W.; Anderson, Vanessa; Raymond, Amy C.; Haffner, Taryn E.; Robinson, John I.; Edwards, Thomas E.; Duncan, Natalie; Gerdts, Cory J.; Mixon, Mark B.; Nollert, Peter; Staker, Bart L.; Stewart, Lance J.
The Protein Maker instrument addresses a critical bottleneck in structural genomics by allowing automated purification and buffer testing of multiple protein targets in parallel with a single instrument. Here, the use of this instrument to (i) purify multiple influenza-virus proteins in parallel for crystallization trials and (ii) identify optimal lysis-buffer conditions prior to large-scale protein purification is described. The Protein Maker is an automated purification system developed by Emerald BioSystems for high-throughput parallel purification of proteins and antibodies. This instrument allows multiple load, wash and elution buffers to be used in parallel along independent lines for up to 24 individual samples. To demonstrate its utility, its use in the purification of five recombinant PB2 C-terminal domains from various subtypes of the influenza A virus is described. Three of these constructs crystallized and one diffracted X-rays to sufficient resolution for structure determination and deposition in the Protein Data Bank. Methods for screening lysis buffers for a cytochrome P450 from a pathogenic fungus prior to upscaling expression and purification are also described. The Protein Maker has become a valuable asset within the Seattle Structural Genomics Center for Infectious Disease (SSGCID) and hence is a potentially valuable tool for a variety of high-throughput protein-purification applications.
Brod, Fábio Cristiano Angonesi; van Dijk, Jeroen P; Voorhuijzen, Marleen M; Dinon, Andréia Zilio; Guimarães, Luis Henrique S; Scholtens, Ingrid M J; Arisi, Ana Carolina Maisonnave; Kok, Esther J
The ever-increasing production of genetically modified crops generates a demand for high-throughput DNA-based methods for the enforcement of genetically modified organism (GMO) labelling requirements. The application of standard real-time PCR will become increasingly costly with the growth of the number of GMOs that are potentially present in an individual sample. The present work presents the results of an innovative approach to the analysis of genetically modified crops by DNA-based methods: the use of a microfluidic dynamic array as a high-throughput multi-detection system. In order to evaluate the system, six test samples with an increasing degree of complexity were prepared, preamplified and subsequently analysed in the Fluidigm system. Twenty-eight assays targeting different DNA elements, GM events and species-specific reference genes were used in the experiment. The large majority of the assays tested gave the expected results. The power of low-level detection was assessed, and elements present at concentrations as low as 0.06% were successfully detected. The approach proposed in this work establishes the Fluidigm system as a suitable and promising platform for GMO multi-detection.
Liu, Chong; Wang, Lei; Xu, Zheng; Li, Jingmin; Ding, Xiping; Wang, Qi; Chunyu, Li
A multilayer polydimethylsiloxane microdevice for cell-based high-throughput drug screening is described in this paper. This microdevice was based on a modularization method and integrated a drug/medium concentration gradient generator (CGG), pneumatic microvalves and a cell culture microchamber array. The CGG was able to generate five steps of linear concentrations with the same outlet flow rate. The medium/drug flowed through the CGG and then vertically into the pear-shaped cell culture microchambers. This vertical perfusion mode was used to reduce the impact on cell physiology of the shear stress induced by fluid flow in the microchambers. Pear-shaped microchambers with two arrays of micropillars at each outlet were adopted in this microdevice, which was beneficial to cell distribution. Apoptosis experiments in which the chemotherapeutic cisplatin (DDP) was applied to the cisplatin-resistant cell line A549/DDP were performed successfully on this platform. The results showed that this novel microdevice could not only provide well-defined and stable conditions for cell culture, but was also useful for cell-based high-throughput drug screening with lower reagent and time consumption.
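As a back-of-the-envelope illustration of the CGG output described above (not the authors' design calculations; the function and values are assumed for illustration), five outlets spanning a linear gradient from pure medium to the stock drug concentration would carry:

```python
def cgg_outlet_concentrations(c_stock, n_outlets=5):
    """Linear concentration steps for a gradient generator whose outlets
    span 0 (pure medium) to c_stock (undiluted drug), all at equal flow rate.
    """
    return [c_stock * i / (n_outlets - 1) for i in range(n_outlets)]

# A 100 uM drug stock would give outlets at 0, 25, 50, 75 and 100 uM,
# i.e. five evenly spaced doses delivered to the microchamber array
# in a single run.
```

Equal outlet flow rates matter here because each concentration step perfuses its microchambers under identical hydrodynamic conditions, so dose is the only variable across the array.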
Jung, Sang-Kyu; Qu, Xiaolei; Aleman-Meza, Boanerges; Wang, Tianxiao; Riepe, Celeste; Liu, Zheng; Li, Qilin; Zhong, Weiwei
The booming nanotech industry has raised public concerns about the environmental health and safety impact of engineered nanomaterials (ENMs). High-throughput assays are needed to obtain toxicity data for the rapidly increasing number of ENMs. Here we present a suite of high-throughput methods to study nanotoxicity in intact animals using Caenorhabditis elegans as a model. At the population level, our system measures food consumption of thousands of animals to evaluate population fitness. At the organism level, our automated system analyzes hundreds of individual animals for body length, locomotion speed, and lifespan. To demonstrate the utility of our system, we applied this technology to test the toxicity of 20 nanomaterials under four concentrations. Only fullerene nanoparticles (nC60), fullerol, TiO2, and CeO2 showed little or no toxicity. Various degrees of toxicity were detected from different forms of carbon nanotubes, graphene, carbon black, Ag, and fumed SiO2 nanoparticles. Aminofullerene and UV irradiated nC60 also showed small but significant toxicity. We further investigated the effects of nanomaterial size, shape, surface chemistry, and exposure conditions on toxicity. Our data are publicly available at the open-access nanotoxicity database www.QuantWorm.org/nano. PMID:25611253
Farias-Hesson, Eveline; Erikson, Jonathan; Atkins, Alexander; Shen, Peidong; Davis, Ronald W; Scharfe, Curt; Pourmand, Nader
Next-generation sequencing platforms are powerful technologies, providing gigabases of genetic information in a single run. An important prerequisite for high-throughput DNA sequencing is the development of robust and cost-effective preprocessing protocols for DNA sample library construction. Here we report the development of a semi-automated sample preparation protocol to produce adaptor-ligated fragment libraries. Using a liquid-handling robot in conjunction with carboxy-terminated magnetic beads, we labeled each library sample using a unique 6 bp DNA barcode, which allowed multiplex sample processing and sequencing of 32 libraries in a single run using Applied Biosystems' SOLiD sequencer. We applied our semi-automated pipeline to targeted medical resequencing of nuclear candidate genes in individuals affected by mitochondrial disorders. This novel method is capable of preparing as many as 32 DNA libraries in 2.01 days (8-hour workday) for emulsion PCR/high-throughput DNA sequencing, increasing sample preparation throughput 8-fold.
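Multiplexing 32 libraries with 6 bp barcodes works because barcodes can be chosen far apart in Hamming distance, so a sequencing error within the barcode does not silently reassign a read to the wrong sample. The greedy selection below is only a sketch of that idea (the paper does not describe its barcode design algorithm; all names here are illustrative):

```python
from itertools import product

def hamming(a, b):
    """Number of mismatched positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def pick_barcodes(n, length=6, min_dist=3):
    """Greedily pick n DNA barcodes that are pairwise >= min_dist apart,
    so a single sequencing error still leaves the barcode unambiguous."""
    chosen = []
    for cand in product("ACGT", repeat=length):
        cand = "".join(cand)
        if all(hamming(cand, kept) >= min_dist for kept in chosen):
            chosen.append(cand)
            if len(chosen) == n:
                break
    return chosen
```

With 4^6 = 4096 possible 6-mers and a minimum pairwise distance of 3, far more than 32 mutually robust barcodes are available, which is why a short 6 bp tag suffices for this level of multiplexing.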
Background: Pathogen diagnostic assays based on polymerase chain reaction (PCR) technology provide high sensitivity and specificity. However, the design of these diagnostic assays is computationally intensive, requiring high-throughput methods to identify unique PCR signatures in the presence of an ever-increasing availability of sequenced genomes. Results: We present the Tool for PCR Signature Identification (TOPSI), a high-performance computing pipeline for the design of PCR-based pathogen diagnostic assays. The TOPSI pipeline efficiently designs PCR signatures common to multiple bacterial genomes by obtaining the shared regions through pairwise alignments between the input genomes. TOPSI successfully designed PCR signatures common to 18 Staphylococcus aureus genomes in less than 14 hours using 98 cores on a high-performance computing system. Conclusions: TOPSI is a computationally efficient, fully integrated tool for high-throughput design of PCR signatures common to multiple bacterial genomes. TOPSI is freely available for download at http://www.bhsai.org/downloads/topsi.tar.gz.
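TOPSI locates candidate signature regions through pairwise alignments between input genomes. A crude stand-in for that step, shown here only as a sketch and not as the TOPSI algorithm, is to intersect exact k-mer sets across all genomes; any k-mer shared by every genome marks a conserved region from which signatures could be drawn:

```python
def shared_kmers(genomes, k=8):
    """Return the set of k-mers present in every genome: a rough proxy
    for the conserved regions identified by full pairwise alignment."""
    common = None
    for g in genomes:
        kmers = {g[i:i + k] for i in range(len(g) - k + 1)}
        common = kmers if common is None else common & kmers
    return common

# In a real pipeline the conserved regions would then be screened
# against primer/probe criteria (length, melting temperature, and
# uniqueness with respect to background genomes).
```

Unlike alignment, exact k-mer intersection misses conserved regions containing any variant position, which is one reason a production tool like TOPSI uses alignments instead.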
Butler, Mark C.; Itotia, Patrick N.
Purpose. High-throughput techniques are needed to identify and optimize novel photodynamic therapy (PDT) agents with greater efficacy and lower toxicity. Novel agents with the capacity to completely ablate pathologic angiogenesis could be of substantial utility in diseases such as wet age-related macular degeneration (AMD). Methods. An instrument and approach were developed, based on light-emitting diode (LED) technology, for high-throughput screening (HTS) of libraries of potential chemical and biological photosensitizing agents. Ninety-six-well LED arrays were generated at multiple wavelengths and under rigorous intensity control. Cell toxicity was measured in 96-well culture arrays with the nuclear dye SYTOX Green (Invitrogen-Molecular Probes, Eugene, OR). Results. Rapid screening of photoactivatable chemicals or biological molecules has been realized in 96-well arrays of cultured human cells. This instrument can be used to identify new PDT agents that exert cell toxicity on presentation of light of the appropriate energy. The system is further demonstrated through determination of the dose dependence of model compounds having or lacking cellular phototoxicity. Killer Red (KR), a genetically encoded red fluorescent protein expressed from transfected plasmids, is examined as a potential cellular photosensitizing agent and offers unique opportunities as a cell-type-specific phototoxic protein. Conclusions. This instrument has the capacity to screen large chemical or biological libraries for rapid identification and optimization of potential novel phototoxic lead candidates. KR and its derivatives have unique potential in ocular gene therapy for pathologic angiogenesis or tumors. PMID:19834043
Matthew D MacManes
Full Text Available The widespread and rapid adoption of high-throughput sequencing technologies has afforded researchers the opportunity to gain a deep understanding of genome level processes that underlie evolutionary change, and perhaps more importantly, the links between genotype and phenotype. In particular, researchers interested in functional biology and adaptation have used these technologies to sequence mRNA transcriptomes of specific tissues, which in turn are often compared to other tissues, or other individuals with different phenotypes. While these techniques are extremely powerful, careful attention to data quality is required. In particular, because high-throughput sequencing is more error-prone than traditional Sanger sequencing, quality trimming of sequence reads should be an important step in all data processing pipelines. While several software packages for quality trimming exist, no general guidelines for the specifics of trimming have been developed. Here, using empirically derived sequence data, I provide general recommendations regarding the optimal strength of trimming, specifically in mRNA-Seq studies. Although very aggressive quality trimming is common, this study suggests that a more gentle trimming, specifically of those nucleotides whose Phred score < 2 or < 5, is optimal for most studies across a wide variety of metrics.
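The gentle-trimming recommendation above (removing only bases with Phred score below 2 or 5) can be illustrated with a minimal sketch of 3'-end quality trimming; the function name and read data are hypothetical, not taken from any published trimming package:

```python
def trim_3prime(seq, quals, threshold=2):
    """Trim low-quality bases from the 3' end of a read.

    Bases are removed from the right while their Phred score falls
    below `threshold`: the gentle Q2/Q5 trimming discussed above.
    """
    end = len(seq)
    while end > 0 and quals[end - 1] < threshold:
        end -= 1
    return seq[:end], quals[:end]

# Hypothetical read with a noisy 3' tail
seq = "ACGTACGTTT"
quals = [30, 31, 28, 29, 27, 30, 25, 1, 0, 1]

trimmed_seq, trimmed_quals = trim_3prime(seq, quals, threshold=2)
# trimmed_seq == "ACGTACG"
```

A stronger (more aggressive) threshold such as 20 would also remove genuinely informative bases, which is the cost the study quantifies.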
Full Text Available Ganciclovir and valganciclovir are antiviral agents used for the treatment of cytomegalovirus retinitis. The conventional method for administering ganciclovir in cytomegalovirus retinitis patients is repeated intravitreal injections. In order to obviate the possible detrimental effects of repeated intraocular injections, to improve compliance and to eliminate systemic side-effects, we investigated tuning the release of the ganciclovir pro-drug valganciclovir from thin films of poly(lactic-co-glycolic acid) (PLGA), polycaprolactone (PCL), or mixtures of both, as a step towards prototyping periocular valganciclovir implants. To investigate the drug release, we established and evaluated a high-throughput fluorescence-based quantification screening assay for the detection of valganciclovir. Our protocol allows quantifying as little as 20 ng of valganciclovir in 96-well polypropylene plates and a 50× faster analysis compared to traditional HPLC measurements. This improvement can hence be extrapolated to other polyester matrix thin film formulations using a high-throughput approach. The acidic microenvironment within the polyester matrix was found to protect valganciclovir from degradation, with resultant increases in the half-life of the drug in the periocular implant to 100 days. Linear release profiles were obtained using the pure polyester polymers for the 10-day and 60-day formulations; however, gross phase separation of the polymer, the valganciclovir, or both in PCL/acid-terminated PLGA mixtures prevented tuning within these timeframes.
Manna, F; Gallet, R; Martin, G; Lenormand, T
The development of high-throughput fitness measurement methods provides unprecedented power to test evolutionary theories. However, with this come new challenges regarding data quality and data analysis. We illustrate this by reanalysing the fitness distribution in several environments of yeast mutants (homo- and heterozygous) from the yeast deletion project. Although the database was originally created to study the functional properties of genes, evolutionary biologists have taken advantage of it to study evolutionary questions, such as the dominance for fitness of mutations. We uncover several problems in this data set that strongly affect these questions and that have remained unnoticed despite the numerous studies based on it. High-throughput methodologies are necessarily challenging, both experimentally and for data analysis: our point is not to criticize these approaches, but to pinpoint the challenges and to propose several improvements that may help avoid common shortcomings. Further, in light of these findings, we question the conclusions regarding theories of dominance that have been made using this data set. We show that the data on deletions of small effect are not sufficiently reliable to be informative on this question. On the other hand, deletions of large effect exhibit no correlation between homo- and heterozygous fitness effects, a pattern that sheds new light on the h-s correlation issue, with several consequences for the debate over the different theories of dominance. © 2012 The Authors. Journal of Evolutionary Biology © 2012 European Society For Evolutionary Biology.
Ortega-Rivas, Antonio; Padrón, José M; Valladares, Basilio; Elsheikha, Hany M
Despite significant public health impact, there is no specific antiprotozoal therapy for prevention and treatment of Acanthamoeba castellanii infection. There is a need for new and efficient anti-Acanthamoeba drugs that are less toxic and can reduce treatment duration and frequency of administration. In this context, a new, rapid and sensitive assay is required for high-throughput activity testing and screening of new therapeutic compounds. A colorimetric assay based on sulforhodamine B (SRB) staining has been developed for anti-Acanthamoeba drug susceptibility testing and adapted to a 96-well microtiter plate format. Under these conditions, chlorhexidine was tested to validate the assay using two clinical strains of A. castellanii (Neff strain, T4 genotype [IC50 = 4.68 ± 0.6 μM] and T3 genotype [IC50 = 5.69 ± 0.9 μM]). These results were in good agreement with those obtained by the conventional Alamar Blue assay, OCR cytotoxicity assay and manual cell counting method. Our new assay offers an inexpensive and reliable method, which complements current assays by enhancing high-throughput anti-Acanthamoeba drug screening capabilities. Copyright © 2016 Elsevier B.V. All rights reserved.
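As an illustration of how an IC50 such as those above can be derived from dose-response readings, here is a minimal linear-interpolation sketch; the concentrations and viability values are invented, and real assays typically fit a sigmoidal (four-parameter logistic) model instead:

```python
def ic50_interpolate(concs, viability):
    """Estimate IC50 by linear interpolation between the two
    concentrations that bracket 50% viability.

    `concs` must be sorted ascending; `viability` is the surviving
    fraction relative to an untreated control (1.0 = no effect).
    """
    pairs = list(zip(concs, viability))
    for (c_lo, v_lo), (c_hi, v_hi) in zip(pairs, pairs[1:]):
        if v_lo >= 0.5 >= v_hi:
            # Position of the 50% crossing within the bracketing interval
            frac = (v_lo - 0.5) / (v_lo - v_hi)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("50% viability is not bracketed by the data")

# Invented dose-response readings (concentrations in μM)
concs = [1.0, 2.0, 4.0, 8.0, 16.0]
viability = [0.95, 0.80, 0.55, 0.30, 0.10]

ic50 = ic50_interpolate(concs, viability)  # falls between 4 and 8 μM
```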
Richmond, Gregory S.; Khine, Htet; Zhou, Tina T.; Ryan, Daniel E.; Brand, Tony; McBride, Mary T.; Killeen, Kevin
Multiplexed detection assays that analyze a modest number of nucleic acid targets over large sample sets are emerging as the preferred testing approach in such applications as routine pathogen typing, outbreak monitoring, and diagnostics. However, very few DNA testing platforms have proven to offer a solution for mid-plexed analysis that is high-throughput, sensitive, and with a low cost per test. In this work, an enhanced genotyping method based on MassCode technology was devised and integrated as part of a high-throughput mid-plexing analytical system that facilitates robust qualitative differential detection of DNA targets. Samples are first analyzed using MassCode PCR (MC-PCR) performed with an array of primer sets encoded with unique mass tags. Lambda exonuclease and an array of MassCode probes are then contacted with MC-PCR products for further interrogation and target sequences are specifically identified. Primer and probe hybridizations occur in homogeneous solution, a clear advantage over micro- or nanoparticle suspension arrays. The two cognate tags coupled to resultant MassCode hybrids are detected in an automated process using a benchtop single quadrupole mass spectrometer. The prospective value of using MassCode probe arrays for multiplexed bioanalysis was demonstrated after developing a 14plex proof of concept assay designed to subtype a select panel of Salmonella enterica serogroups and serovars. This MassCode system is very flexible and test panels can be customized to include more, fewer, or different markers. PMID:21544191
Full Text Available To facilitate high-throughput proteomic analyses we have developed a modified FASP protocol which improves the rate at which protein samples can be processed prior to mass spectrometry. Adapting the original FASP protocol to a 96-well format necessitates extended spin times for buffer exchange due to the low centrifugation speeds tolerated by these devices. However, by using 96-well plates with a more robust polyethersulfone molecular weight cutoff membrane, instead of the cellulose membranes typically used in these devices, we could use isopropanol as a wetting agent, decreasing the spin times required for buffer exchange from an hour to 30 minutes. In a typical workflow used in our laboratory, this equates to a reduction of 3 hours per plate, providing processing times similar to FASP for up to 96 samples per plate. To test whether our modified protocol produced results similar to FASP and other FASP-like protocols, we compared its performance to the original FASP and the more recently described eFASP and MStern-blot. We show that all FASP-like methods, including our modified protocol, display similar performance in terms of proteins identified and reproducibility. Our results show that our modified FASP protocol is an efficient method for the high-throughput processing of protein samples for mass spectral analysis.
Julie D. Thompson
Full Text Available The recent availability of the complete genome sequences of a large number of model organisms, together with the immense amount of data being produced by the new high-throughput technologies, means that we can now begin comparative analyses to understand the mechanisms involved in the evolution of the genome and their consequences in the study of biological systems. Phylogenetic approaches provide a unique conceptual framework for performing comparative analyses of all this data, for propagating information between different systems and for predicting or inferring new knowledge. As a result, phylogeny-based inference systems are now playing an increasingly important role in most areas of high throughput genomics, including studies of promoters (phylogenetic footprinting), interactomes (based on the presence and degree of conservation of interacting proteins), and in comparisons of transcriptomes or proteomes (phylogenetic proximity and co-regulation/co-expression). Here we review the recent developments aimed at making automatic, reliable phylogeny-based inference feasible in large-scale projects. We also discuss how evolutionary concepts and phylogeny-based inference strategies are now being exploited in order to understand the evolution and function of biological systems. Such advances will be fundamental for the success of the emerging disciplines of systems biology and synthetic biology, and will have wide-reaching effects in applied fields such as biotechnology, medicine and pharmacology.
Reid, Alex; Evans, Fiona; Mulholland, Vincent; Cole, Yvonne; Pickup, Jon
Potato cyst nematode (PCN) is a damaging soilborne pest of potatoes which can cause major crop losses. In 2010, a new European Union directive (2007/33/EC) on the control of PCN came into force. Under the new directive, seed potatoes can only be planted on land which has been found to be free from PCN infestation following an official soil test. A major consequence of the new directive was the introduction of a new harmonized soil sampling rate resulting in a threefold increase in the number of samples requiring testing. To manage this increase with the same staffing resources, we have replaced the traditional diagnostic methods. A system has been developed for the processing of soil samples, extraction of DNA from float material, and detection of PCN by high-throughput real-time PCR. Approximately 17,000 samples are analyzed each year using this method. This chapter describes the high-throughput processes for the production of float material from soil samples, DNA extraction from the entire float, and subsequent detection and identification of PCN within these samples.
Rouiller, Yolande; Périlleux, Arnaud; Collet, Natacha; Jordan, Martin; Stettler, Matthieu; Broly, Hervé
An innovative high-throughput medium development method based on media blending was successfully used to improve the performance of a Chinese hamster ovary fed-batch medium in shaking 96-deepwell plates. Starting from a proprietary chemically-defined medium, 16 formulations testing 43 of 47 components at 3 different levels were designed. Media blending was performed following a custom-made mixture design of experiments considering binary blends, resulting in 376 different blends that were tested during both cell expansion and fed-batch production phases in one single experiment. Three approaches were chosen to provide the best output of the large amount of data obtained. A simple ranking of conditions was first used as a quick approach to select new formulations with promising features. Then, prediction of the best mixes was done to maximize both growth and titer using the Design Expert software. Finally, a multivariate analysis enabled identification of individual potential critical components for further optimization. Applying this high-throughput method on a fed-batch, rather than on a simple batch, process opens new perspectives for medium and feed development that enables identification of an optimized process in a short time frame.
Cranmer, Miles; Barsdell, Benjamin R.; Price, Danny C.; Garsden, Hugh; Taylor, Gregory B.; Dowell, Jayce; Schinzel, Frank; Costa, Timothy; Greenhill, Lincoln J.
Large radio interferometers have data rates that render long-term storage of raw correlator data infeasible, thus motivating development of real-time processing software. For high-throughput applications, processing pipelines are challenging to design and implement. Motivated by science efforts with the Long Wavelength Array, we have developed Bifrost, a novel Python/C++ framework that eases the development of high-throughput data analysis software by packaging algorithms as black box processes in a directed graph. This strategy to modularize code allows astronomers to create parallelism without code adjustment. Bifrost uses CPU/GPU 'circular memory' data buffers that enable ready introduction of arbitrary functions into the processing path for 'streams' of data, and allow pipelines to automatically reconfigure in response to astrophysical transient detection or input of new observing settings. We have deployed and tested Bifrost at the latest Long Wavelength Array station, in Sevilleta National Wildlife Refuge, NM, where it handles throughput exceeding 10 Gbps per CPU core.
Younger, David; Berger, Stephanie; Baker, David; Klavins, Eric
High-throughput methods for screening protein-protein interactions enable the rapid characterization of engineered binding proteins and interaction networks. While existing approaches are powerful, none allow quantitative library-on-library characterization of protein interactions in a modifiable extracellular environment. Here, we show that sexual agglutination of Saccharomyces cerevisiae can be reprogrammed to link interaction strength with mating efficiency using synthetic agglutination (SynAg). Validation of SynAg with 89 previously characterized interactions shows a log-linear relationship between mating efficiency and protein binding strength for interactions with Kd values ranging from below 500 pM to above 300 μM. Using induced chromosomal translocation to pair barcodes representing binding proteins, thousands of distinct interactions can be screened in a single pot. We demonstrate the ability to characterize protein interaction networks in a modifiable environment by introducing a soluble peptide that selectively disrupts a subset of interactions in a representative network by up to 800-fold. SynAg enables the high-throughput, quantitative characterization of protein-protein interaction networks in a fully defined extracellular environment at a library-on-library scale.
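The reported log-linear relationship between mating efficiency and binding strength can be sketched as an ordinary least-squares fit of efficiency against log10(Kd); the panel of Kd/efficiency values below is hypothetical and chosen to be exactly log-linear for illustration:

```python
import math

def fit_loglinear(kds, efficiencies):
    """Least-squares fit of mating efficiency against log10(Kd),
    mirroring the log-linear relationship reported for SynAg.
    Returns (slope, intercept)."""
    xs = [math.log10(k) for k in kds]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(efficiencies) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, efficiencies))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical binder panel: Kd in molar, efficiency in arbitrary units
kds = [5e-10, 5e-8, 5e-6, 5e-4]   # 500 pM .. 500 μM
eff = [0.80, 0.60, 0.40, 0.20]

slope, intercept = fit_loglinear(kds, eff)  # efficiency drops 0.1 per decade of Kd
```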
Tong, Ziqiu; Rajeev, Gayathri; Guo, Keying; Ivask, Angela; McCormick, Scott; Lombi, Enzo; Priest, Craig; Voelcker, Nicolas H
With the advances in nanotechnology, particles with various sizes, shapes, surface chemistries and compositions can be easily produced. Nano- and microparticles have been extensively explored in many industrial and clinical applications. Ensuring that the particles themselves do not have any toxic effects on the biological system is of paramount importance. This paper describes a proof-of-concept method in which a microfluidic system is used in conjunction with a cell microarray technique, aiming to streamline the analysis of particle-cell interactions in a high-throughput manner. Polymeric microparticles with different particle surface functionalities were first used to investigate the efficiency of particle-cell adhesion under dynamic flow. Silver nanoparticles (AgNPs, 10 nm in diameter) perfused at different concentrations (0 to 20 μg/ml) in parallel streams over the cells in the microchannel exhibited higher toxicity compared to the static culture in the 96-well plate format. This microfluidic system can be easily scaled up to accommodate a larger number of microchannels for high-throughput analysis of the potential toxicity of a wide range of particles in a single experiment.
Shin, Jong; Phelan, Paul J.; Chhum, Panharith; Bashkenova, Nazym; Yim, Sung; Parker, Robert [Department of Developmental, Molecular and Chemical Biology, Tufts University School of Medicine, Boston, MA 02111 (United States); Gagnon, David [Institut de Recherches Cliniques de Montreal (IRCM), 110 Pine Avenue West, Montreal, Quebec, Canada H2W 1R7 (Canada); Department of Biochemistry and Molecular Medicine, Université de Montréal, Montréal, Quebec (Canada); Gjoerup, Ole [Molecular Oncology Research Institute, Tufts Medical Center, Boston, MA 02111 (United States); Archambault, Jacques [Institut de Recherches Cliniques de Montreal (IRCM), 110 Pine Avenue West, Montreal, Quebec, Canada H2W 1R7 (Canada); Department of Biochemistry and Molecular Medicine, Université de Montréal, Montréal, Quebec (Canada); Bullock, Peter A., E-mail: Peter.Bullock@tufts.edu [Department of Developmental, Molecular and Chemical Biology, Tufts University School of Medicine, Boston, MA 02111 (United States)
Progressive Multifocal Leukoencephalopathy (PML) is caused by lytic replication of JC virus (JCV) in specific cells of the central nervous system. Like other polyomaviruses, JCV encodes a large T-antigen helicase needed for replication of the viral DNA. Here, we report the development of a luciferase-based, quantitative and high-throughput assay of JCV DNA replication in C33A cells, which, unlike the glial cell lines Hs 683 and U87, accumulate high levels of nuclear T-ag needed for robust replication. Using this assay, we investigated the requirement for different domains of T-ag, and for specific sequences within and flanking the viral origin, in JCV DNA replication. Beyond providing validation of the assay, these studies revealed an important stimulatory role of the transcription factor NF1 in JCV DNA replication. Finally, we show that the assay can be used for inhibitor testing, highlighting its value for the identification of antiviral drugs targeting JCV DNA replication. - Highlights: • Development of a high-throughput screening assay for JCV DNA replication using C33A cells. • Evidence that T-ag fails to accumulate in the nuclei of established glioma cell lines. • Evidence that NF-1 directly promotes JCV DNA replication in C33A cells. • Proof-of-concept that the HTS assay can be used to identify pharmacological inhibitors of JCV DNA replication.
Full Text Available This paper mainly focuses on a pass-logic-based design that yields a low energy-per-instruction (EPI), high-throughput COordinate Rotation DIgital Computer (CORDIC) cell for robotic exploration applications. The basic components of the CORDIC cell, namely the register, multiplexer and proposed adder, are designed using pass transistor logic (PTL). The proposed adder is implemented in a bit-parallel iterative CORDIC circuit, designed using the DSCH2 VLSI CAD tool, and its layout is generated with the Microwind 3 VLSI CAD tool. The propagation delay, area and power dissipation are calculated from the simulated results for the proposed adder-based CORDIC cell. The EPI, throughput and effect of temperature are calculated from the generated layout, and the output parameters of the layout are analysed using the BSIM4 advanced analyzer. The simulated results of the proposed adder-based CORDIC circuit are compared with other adder-based CORDIC circuits. From this analysis, it was found that the proposed adder-based CORDIC circuit dissipates less power, responds faster, and achieves lower EPI and higher throughput.
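For reference, the iterative shift-and-add rotation that a CORDIC cell implements can be sketched in software; this is a floating-point model for clarity, whereas the hardware above operates on fixed-point registers with hardwired shifts:

```python
import math

def cordic_sin_cos(angle, iterations=24):
    """Minimal rotation-mode CORDIC sketch: computes (cos, sin) of
    `angle` (radians, |angle| < pi/2) using only the add/shift-style
    micro-rotations that the hardware cell iterates."""
    # Precomputed arctan(2^-i) table and the accumulated gain correction
    arctans = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0       # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * arctans[i]
    return x * k, y * k

c, s = cordic_sin_cos(math.pi / 6)  # close to (cos 30°, sin 30°)
```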
Full Text Available For many tiller crops, the plant architecture (PA), including the plant fresh weight, plant height, number of tillers, tiller angle and stem diameter, significantly affects the grain yield. In this study, we propose a method based on volumetric reconstruction for high-throughput three-dimensional (3D) wheat PA studies. The proposed methodology involves plant volumetric reconstruction from multiple images, plant model processing and phenotypic parameter estimation and analysis. This study was performed on 80 Triticum aestivum plants, and the results were analyzed. Comparing the automated measurements with manual measurements, the mean absolute percentage error (MAPE) in the plant height and the plant fresh weight was 2.71% (1.08 cm, with an average plant height of 40.07 cm) and 10.06% (1.41 g, with an average plant fresh weight of 14.06 g), respectively. The root mean square error (RMSE) was 1.37 cm and 1.79 g for the plant height and plant fresh weight, respectively. The correlation coefficients were 0.95 and 0.96 for the plant height and plant fresh weight, respectively. Additionally, the proposed methodology, including plant reconstruction, model processing and trait extraction, required only approximately 20 s on average per plant using parallel computing on a graphics processing unit (GPU), demonstrating that the methodology would be valuable for a high-throughput phenotyping platform.
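The two error metrics used above are simple to state; a small sketch with invented height measurements shows how MAPE and RMSE are computed from paired manual and automated values:

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, as used to compare automated
    plant-height/fresh-weight estimates against manual measurements."""
    return 100.0 * sum(
        abs(a - p) / a for a, p in zip(actual, predicted)
    ) / len(actual)

def rmse(actual, predicted):
    """Root mean square error over the same paired measurements."""
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )

# Toy plant-height data (cm): manual vs automated estimates
manual =    [40.0, 38.0, 42.0, 41.0]
automated = [41.0, 37.0, 43.0, 40.0]

print(round(mape(manual, automated), 2))   # prints 2.49 (percent)
print(round(rmse(manual, automated), 2))   # prints 1.0 (cm)
```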
Full Text Available Abstract Background High-throughput measurement technologies produce data sets that have the potential to elucidate the biological impact of disease, drug treatment, and environmental agents on humans. The scientific community faces an ongoing challenge in the analysis of these rich data sources to more accurately characterize biological processes that have been perturbed at the mechanistic level. Here, a new approach is built on previous methodologies in which high-throughput data was interpreted using prior biological knowledge of cause and effect relationships. These relationships are structured into network models that describe specific biological processes, such as inflammatory signaling or cell cycle progression. This enables quantitative assessment of network perturbation in response to a given stimulus. Results Four complementary methods were devised to quantify treatment-induced activity changes in processes described by network models. In addition, companion statistics were developed to qualify significance and specificity of the results. This approach is called Network Perturbation Amplitude (NPA) scoring because the amplitudes of treatment-induced perturbations are computed for biological network models. The NPA methods were tested on two transcriptomic data sets: normal human bronchial epithelial (NHBE) cells treated with the pro-inflammatory signaling mediator TNFα, and HCT116 colon cancer cells treated with the CDK cell cycle inhibitor R547. Each data set was scored against network models representing different aspects of inflammatory signaling and cell cycle progression, and these scores were compared with independent measures of pathway activity in NHBE cells to verify the approach. The NPA scoring method successfully quantified the amplitude of TNFα-induced perturbation for each network model when compared against NF-κB nuclear localization and cell number. In addition, the degree and specificity to which CDK
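As a toy illustration of the general idea of scoring treatment-induced perturbation against a cause-and-effect network model (this is not the published NPA formula; the node names, directions, and fold changes are invented), one can aggregate measured log2 fold changes along each node's expected direction of change:

```python
import math

def perturbation_amplitude(network, fold_changes):
    """Toy network-scoring illustration: each node in the model carries
    an expected direction of change (+1 or -1) under the stimulus, and
    the score averages measured log2 fold changes along those directions."""
    total = 0.0
    for node, direction in network.items():
        fc = fold_changes.get(node, 1.0)   # unmeasured nodes: no change
        total += direction * math.log2(fc)
    return total / len(network)

# Hypothetical inflammatory-signaling model: node -> expected direction
network = {"NFKB1": +1, "IL6": +1, "TNFAIP3": +1, "CDKN1A": -1}
# Hypothetical measured fold changes after a TNFα-like stimulus
fold_changes = {"NFKB1": 4.0, "IL6": 8.0, "TNFAIP3": 2.0, "CDKN1A": 0.5}

score = perturbation_amplitude(network, fold_changes)
```

A score near zero would indicate that the measured changes do not align with the model's cause-and-effect structure; larger positive values indicate coherent perturbation of the modeled process.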
Full Text Available Abstract Background Mutational inactivation of plant genes is an essential tool in gene function studies. Plants with inactivated or deleted genes may also be exploited for crop improvement if such mutations/deletions produce a desirable agronomical and/or quality phenotype. However, the use of mutational gene inactivation/deletion has been impeded in polyploid plant species by genetic redundancy, as polyploids contain multiple copies of the same genes (homoeologous genes) encoded by each of the ancestral genomes. Similar to many other crop plants, bread wheat (Triticum aestivum L.) is polyploid; specifically, allohexaploid possessing three progenitor genomes designated as 'A', 'B', and 'D'. Recently modified TILLING protocols have been developed specifically for mutation detection in wheat. Whilst extremely powerful in detecting single nucleotide changes and small deletions, these methods are not suitable for detecting whole gene deletions. Therefore, high-throughput methods for screening of candidate homoeologous gene deletions are needed for application to wheat populations generated by the use of certain mutagenic agents (e.g. heavy ion irradiation) that frequently generate whole-gene deletions. Results To facilitate the screening for specific homoeologous gene deletions in hexaploid wheat, we have developed a TaqMan qPCR-based method that allows high-throughput detection of deletions in homoeologous copies of any gene of interest, provided that sufficient polymorphism (as little as a single nucleotide difference) exists amongst homoeologues for specific probe design. We used this method to identify deletions of individual TaPFT1 homoeologues, a wheat orthologue of the disease susceptibility and flowering regulatory gene PFT1 in Arabidopsis. This method was applied to wheat nullisomic-tetrasomic lines as well as other chromosomal deletion lines to locate the TaPFT1 gene to the long arm of chromosome 5. By screening of individual DNA samples from
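A qPCR-based deletion screen of this kind typically compares the target homoeologue's signal to a single-copy reference assay; here is a minimal sketch using the standard 2^-ΔΔCt relative-quantification formula (the Ct values and the 0.3 cutoff are illustrative assumptions, not parameters from this study):

```python
def relative_copy_number(ct_target, ct_ref, ct_target_wt, ct_ref_wt):
    """Relative copy number by the standard 2^-ddCt method: the
    homoeologue-specific TaqMan probe is compared to a single-copy
    reference assay and normalized to a wild-type control sample."""
    ddct = (ct_target - ct_ref) - (ct_target_wt - ct_ref_wt)
    return 2.0 ** (-ddct)

def is_deleted(copy_number, threshold=0.3):
    """Call a homoeologue deletion when the normalized copy number
    collapses toward zero (the cutoff here is an illustrative choice)."""
    return copy_number < threshold

# Wild-type control: target and reference amplify in step
wt = relative_copy_number(24.0, 24.0, 24.0, 24.0)       # 1.0
# Candidate deletion line: the target probe comes up ~6 cycles late
mutant = relative_copy_number(30.0, 24.0, 24.0, 24.0)   # 2^-6 = 0.015625
```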
Full Text Available Drosophila melanogaster can be used to identify genes with novel functional roles in neuronal plasticity induced by repeated consumption of addictive drugs. Behavioral sensitization is a relatively simple behavioral output of plastic changes that occur in the brain after repeated exposures to drugs of abuse. The development of screening procedures for genes that control behavioral sensitization has stalled due to a lack of high-throughput behavioral tests that can be used in genetically tractable organisms, such as Drosophila. We have developed a new behavioral test, FlyBong, which combines delivery of volatilized cocaine (vCOC) to individually housed flies with objective quantification of their locomotor activity. There are two main advantages of FlyBong: it is high-throughput and it allows for comparisons of locomotor activity of individual flies before and after single or multiple exposures. At the population level, exposure to vCOC leads to a transient and concentration-dependent increase in locomotor activity, representing sensitivity to an acute dose. A second exposure leads to a further increase in locomotion, representing locomotor sensitization. We validate FlyBong by showing that locomotor sensitization at either the population or individual level is absent in the mutants for circadian genes period (per), Clock (Clk), and cycle (cyc). The locomotor sensitization that is present in timeless (tim) and pigment dispersing factor (pdf) mutant flies is in large part not cocaine specific, but derived from increased sensitivity to warm air. Circadian genes are not only an integral part of the neural mechanism that is required for development of locomotor sensitization, but in addition, they modulate the intensity of locomotor sensitization as a function of the time of day. Motor-activating effects of cocaine are sexually dimorphic and require a functional dopaminergic transporter. FlyBong is a new and improved method for inducing and measuring locomotor
Chin Chun Ooi
Full Text Available Single-cell characterization techniques, such as mRNA-seq, have been applied to a diverse range of applications in cancer biology, yielding great insight into mechanisms leading to therapy resistance and tumor clonality. While single-cell techniques can yield a wealth of information, a common bottleneck is the lack of throughput, with many current processing methods being limited to the analysis of small volumes of single cell suspensions with cell densities on the order of 10^7 per mL. In this work, we present a high-throughput full-length mRNA-seq protocol incorporating a magnetic sifter and magnetic nanoparticle-antibody conjugates for rare cell enrichment, and Smart-seq2 chemistry for sequencing. We evaluate the efficiency and quality of this protocol with a simulated circulating tumor cell system, whereby non-small-cell lung cancer cell lines (NCI-H1650 and NCI-H1975) are spiked into whole blood, before being enriched for single-cell mRNA-seq by EpCAM-functionalized magnetic nanoparticles and the magnetic sifter. We obtain high-efficiency (>90%) capture and release of these simulated rare cells via the magnetic sifter, with reproducible transcriptome data. In addition, while mRNA-seq data is typically only used for gene expression analysis of transcriptomic data, we demonstrate the use of full-length mRNA-seq chemistries like Smart-seq2 to facilitate variant analysis of expressed genes. This enables the use of mRNA-seq data for differentiating cells in a heterogeneous population by both their phenotypic and variant profile. In a simulated heterogeneous mixture of circulating tumor cells in whole blood, we utilize this high-throughput protocol to differentiate these heterogeneous cells by both their phenotype (lung cancer versus white blood cells) and mutational profile (H1650 versus H1975 cells) in a single sequencing run. This high-throughput method can help facilitate single-cell analysis of rare cell populations, such as circulating tumor
Traumatic joint injuries initiate acute degenerative changes in articular cartilage that can lead to progressive loss of load-bearing function. As a result, patients often develop post-traumatic osteoarthritis (PTOA), a condition for which there currently exist no biologic interventions. To address this need, tissue engineering aims to mimic the structure and function of healthy, native counterparts. These constructs can be used to not only replace degenerated tissue, but also build in vitro, pre-clinical models of disease. Towards this latter goal, this thesis focuses on the design of a high-throughput system to screen new therapeutics in a micro-engineered model of PTOA, and the development of a mechanically-responsive drug delivery system to augment tissue-engineered approaches for cartilage repair. High-throughput screening is a powerful tool for drug discovery that can be adapted to include 3D tissue constructs. To facilitate this process for cartilage repair, we built a high-throughput mechanical injury platform to create an engineered cartilage model of PTOA. Compressive injury of functionally mature constructs increased cell death and proteoglycan loss, two hallmarks of injury observed in vivo. Comparison of this response to that of native cartilage explants, and evaluation of putative therapeutics, validated this model for subsequent use in small molecule screens. A primary screen of 118 compounds identified a number of 'hits' and relevant pathways that may modulate pathologic signaling post-injury. To complement this process of therapeutic discovery, a stimuli-responsive delivery system was designed that used mechanical inputs as the 'trigger' mechanism for controlled release. The failure thresholds of these mechanically-activated microcapsules (MAMCs) were influenced by physical properties and composition, as well as matrix mechanical properties in 3D environments. TGF-beta released from the system upon mechano-activation stimulated stem cell
Abstract Background With the advances in DNA sequencer-based technologies, it has become possible to automate several steps of the genotyping process, leading to increased throughput. To efficiently handle the large amounts of genotypic data generated and help with quality control, there is a strong need for a software system that can help with the tracking of samples and the capture and management of data at different steps of the process. Such systems, while serving to manage the workflow precisely, also encourage good laboratory practice by standardizing protocols and recording and annotating data from every step of the workflow. Results A laboratory information management system (LIMS) has been designed and implemented at the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) that meets the requirements of a moderately high-throughput molecular genotyping facility. The application is designed as modules and is simple to learn and use. The application leads the user through each step of the process, from starting an experiment to the storing of output data from the genotype detection step with auto-binning of alleles, thus ensuring that every DNA sample is handled in an identical manner and all the necessary data are captured. The application keeps track of DNA samples and generated data. Data entry into the system is through the use of forms for file uploads. The LIMS provides functions to trace back to the electrophoresis gel files or sample source for any genotypic data and for repeating experiments. The LIMS is presently being used for the capture of high-throughput SSR (simple-sequence repeat) genotyping data from the legume (chickpea, groundnut and pigeonpea) and cereal (sorghum and millets) crops of importance in the semi-arid tropics. Conclusion A laboratory information management system is available that has been found useful in the management of microsatellite genotype data in a moderately high-throughput genotyping
Abstract Background High-throughput screening is used by the pharmaceutical industry for identifying lead compounds that interact with targets of pharmacological interest. Because of the key role that aberrant regulation of protein phosphorylation plays in diseases such as cancer, diabetes and hypertension, kinases have become one of the main drug targets. With the exception of antibody-based assays, methods to screen for specific kinase activity are generally restricted to the use of small synthetic peptides as substrates. However, the use of natural protein substrates has the advantage that potential inhibitors can be detected that affect enzyme activity by binding to a site other than the catalytic site. We have previously reported a non-radioactive and non-antibody-based fluorescence quench assay for detection of phosphorylation or dephosphorylation using synthetic peptide substrates. The aim of this work is to develop an assay for detection of phosphorylation of chemically unmodified proteins based on this polymer superquenching platform. Results Using a modified QTL Lightspeed™ assay, phosphorylation of native protein was quantified by the interaction of the phosphorylated proteins with metal-ion-coordinating groups co-located with fluorescent polymer deposited onto microspheres. The binding of phospho-protein inhibits a dye-labeled "tracer" peptide from associating with the phosphate-binding sites present on the fluorescent microspheres. The resulting inhibition of quench generates a "turn on" assay, in which the signal correlates with the phosphorylation of the substrate. The assay was tested on three different proteins: Myelin Basic Protein (MBP), Histone H1 and phosphorylated heat- and acid-stable protein (PHAS-1). Phosphorylation of the proteins was detected by Protein Kinase Cα (PKCα) and by the Interleukin-1 Receptor-associated Kinase 4 (IRAK4). Enzyme inhibition yielded IC50 values that were comparable to those obtained using
Sun, Shangpeng; Li, Changying; Paterson, Andrew H; Jiang, Yu; Xu, Rui; Robertson, Jon S; Snider, John L; Chee, Peng W
Plant breeding programs and a wide range of plant science applications would greatly benefit from the development of in-field high-throughput phenotyping technologies. In this study, a terrestrial LiDAR-based high-throughput phenotyping system was developed. A 2D LiDAR was applied to scan plants from overhead in the field, and an RTK-GPS was used to provide spatial coordinates. Precise 3D models of scanned plants were reconstructed based on the LiDAR and RTK-GPS data. The ground plane of the 3D model was separated by the RANSAC algorithm, and a Euclidean clustering algorithm was applied to remove noise generated by weeds. After that, clean 3D surface models of cotton plants were obtained, from which three plot-level morphologic traits, including canopy height, projected canopy area, and plant volume, were derived. Canopy heights ranging from the 85th percentile to the maximum were computed based on the histogram of the z coordinate of all measured points; projected canopy area was derived by projecting all points onto a ground plane; and a trapezoidal-rule-based algorithm was proposed to estimate plant volume. Results of validation experiments showed good agreement between LiDAR measurements and manual measurements for maximum canopy height, projected canopy area, and plant volume, with R² values of 0.97, 0.97, and 0.98, respectively. The developed system was used to scan the whole field repeatedly over the period from 43 to 109 days after planting. Growth trends and growth rate curves for all three derived morphologic traits were established over the monitoring period for each cultivar. Overall, the four different cultivars showed similar growth trends and growth rate patterns. Each cultivar continued to grow until ~88 days after planting, and from then on varied little. However, the actual values were cultivar specific. Correlation analysis between morphologic traits and final yield was conducted over the monitoring period. When considering each cultivar individually
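The trait extraction the abstract describes, percentile-based canopy height, grid-projected canopy area, and trapezoidal-rule volume, can be sketched as below. This is an illustrative reconstruction, not the authors' implementation; the grid cell size, metre units, and the assumption that the ground plane sits at z = 0 are all choices made for the sketch.

```python
import numpy as np

def canopy_traits(points, cell=0.05):
    """Plot-level traits from a cleaned 3D point cloud (N x 3 array of
    x, y, z in metres, ground plane assumed at z = 0). Illustrative only."""
    z = points[:, 2]
    # Canopy heights: 85th percentile up to the maximum of the z values
    heights = {f"p{q}": float(np.percentile(z, q)) for q in (85, 90, 95, 100)}
    # Projected canopy area: count occupied grid cells after projecting
    # every point onto the ground plane
    cells = {tuple(c) for c in np.floor(points[:, :2] / cell).astype(int)}
    area = len(cells) * cell ** 2
    # Plant volume by the trapezoidal rule: integrate the projected area
    # of thin horizontal slices over height
    edges = np.arange(0.0, float(z.max()) + cell, cell)
    slice_areas = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = points[(z >= lo) & (z < hi)]
        occ = {tuple(c) for c in np.floor(sl[:, :2] / cell).astype(int)}
        slice_areas.append(len(occ) * cell ** 2)
    sa = np.asarray(slice_areas)
    volume = float(((sa[:-1] + sa[1:]) / 2.0).sum() * cell)
    return heights, area, volume
```

Real pipelines would first run RANSAC ground-plane removal and weed filtering, as the abstract notes; the function above assumes that cleaning has already happened.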
Abstract Background Microsatellite (simple sequence repeat, SSR) and single nucleotide polymorphism (SNP) markers are two types of important genetic markers useful in genetic mapping and genotyping. Often, large-scale genomic research projects require high-throughput computer-assisted primer design. Numerous such web-based or stand-alone programs for PCR primer design are available but vary in quality and functionality. In particular, most programs lack batch primer design capability. Such a high-throughput software tool for designing SSR flanking primers and SNP genotyping primers is increasingly in demand. Results A new web primer design program, BatchPrimer3, is developed based on Primer3. BatchPrimer3 adopted the Primer3 core program as the major primer design engine to choose the best primer pairs. A new score-based primer picking module is incorporated into BatchPrimer3 and used to pick position-restricted primers. BatchPrimer3 v1.0 implements several types of primer design, including generic primers, SSR primers together with SSR detection, and SNP genotyping primers (including single-base extension primers, allele-specific primers, and tetra-primers for tetra-primer ARMS PCR), as well as DNA sequencing primers. DNA sequences in FASTA format can be batch read into the program. The basic information of input sequences, as a reference for parameter setting of primer design, can be obtained by pre-analysis of sequences. The input sequences can be pre-processed and masked to exclude and/or include specific regions, or to set targets for different primer design purposes as in Primer3Web and Primer3Plus. A tab-delimited or Excel-formatted primer output also greatly facilitates the subsequent primer-ordering process. Thousands of primers, including wheat conserved intron-flanking primers, wheat genome-specific SNP genotyping primers, and Brachypodium SSR flanking primers in several genome projects have been designed using the program and validated
The characterization of mechanical properties in a combinatorial and high-throughput workflow has been a bottleneck that reduced the speed of the materials development process. High-throughput characterization of the mechanical properties was applied in this research in order to reduce the amount of sample handling and to accelerate the output. A puncture tester was designed and built to evaluate the toughness of materials using an innovative template design coupled with automation. The test is in the form of a circular free-film indentation. A single template contains 12 samples which are tested in a rapid serial approach. Next, the operational principles of a novel parallel dynamic mechanical-thermal analysis instrument were analyzed in detail for potential sources of errors. The test uses a model of a circular bilayer fixed-edge plate deformation. A total of 96 samples can be analyzed simultaneously which provides a tremendous increase in efficiency compared with a conventional dynamic test. The modulus values determined by the system had considerable variation. The errors were observed and improvements to the system were made. A finite element analysis was used to analyze the accuracy given by the closed-form solution with respect to testing geometries, such as thicknesses of the samples. A good control of the thickness of the sample was proven to be crucial to the accuracy and precision of the output. Then, the attempt to correlate the high-throughput experiments and conventional coating testing methods was made. Automated nanoindentation in dynamic mode was found to provide information on the near-surface modulus and could potentially correlate with the pendulum hardness test using the loss tangent component. Lastly, surface characterization of stratified siloxane-polyurethane coatings was carried out with X-ray photoelectron spectroscopy, Rutherford backscattering spectroscopy, transmission electron microscopy, and nanoindentation. The siloxane component
Fanzio, Paola; Cagliani, Alberto; Peterffy, Kristof G.
The patterning of conductive polymers is a major challenge in the implementation of these materials in several research and industrial applications, spanning from photovoltaics to biosensors. Within this context, we have developed a reliable technique to pattern a thin layer of the conductive polymer poly(3,4-ethylenedioxythiophene) (PEDOT) by means of a low-cost and high-throughput soft embossing process. We were able to reproduce a functional conductive pattern with a minimum dimension of 1 μm and to fabricate electrically decoupled electrodes. Moreover, the conductivity of the PEDOT films has been characterized, finding that a post-processing treatment with ethylene glycol allows an increase in conductivity and a decrease in water solubility of the PEDOT film. Finally, cyclic voltammetry demonstrates that the post-treatment also ensures the electrochemical activity of the film.
Cognitive Radio Vehicular Ad-hoc Networks (CR-VANETs) exploit cognitive radios to allow vehicles to access unused channels in their radio environment. Thus, CR-VANETs not only suffer from the traditional CR problems, especially spectrum sensing, but also face new challenges due to the highly dynamic nature of VANETs. In this paper, we present a low-delay and high-throughput radio environment assessment scheme for CR-VANETs that can be easily incorporated with the IEEE 802.11p standard developed for VANETs. Simulation results show that the proposed scheme significantly reduces the time to get the radio environment map and increases CR-VANET throughput.
High-throughput microbial electrolysis cells (MECs) were used to perform treatability studies on many different refinery wastewater samples, all having appreciably different characteristics, which resulted in large differences in current generation. A de-oiled refinery wastewater sample from one site (DOW1) produced the best results, with 2.1 ± 0.2 A/m² (maximum current density), 79% chemical oxygen demand removal, and 82% headspace biological oxygen demand removal. These results were similar to those obtained using domestic wastewater. Two other de-oiled refinery wastewater samples also showed good performance, with a de-oiled oily sewer sample producing less current. A stabilization lagoon sample and a stripped sour wastewater sample failed to produce appreciable current. Electricity production, organics removal, and startup time were improved when the anode was first acclimated to domestic wastewater. These results show mini-MECs are an effective method for evaluating the treatability of different wastewaters.
Pardo, Isabel; Camarero, Susana
In this chapter we describe several high-throughput screening assays for the evaluation of mutant libraries for the directed evolution of fungal laccases in the yeast Saccharomyces cerevisiae. The assays are based on the direct oxidation of three syringyl-type phenols derived from lignin (sinapic acid, acetosyringone, and syringaldehyde), an artificial laccase mediator (violuric acid), and three organic synthetic dyes (Methyl Orange, Evans Blue, and Remazol Brilliant Blue). While the assays with the natural phenols can be used for laccases with low redox potential, the rest are exclusive to high-redox-potential laccases. In fact, the violuric acid assay is devised as a method to ascertain that the high redox potential of the laccase is not lost during directed evolution.
Petrak, B.; Peiris, M.; Muller, A.
We describe a simple and inexpensive optical ring interferometer for use in high-resolution spectral analysis and filtering. It consists of a solid cuboid, reflection-coated on two opposite sides, in which constructive interference occurs for waves in a rhombic trajectory. Due to its monolithic design, the interferometer's resonance frequencies are insensitive to environmental disturbances over time. Additional advantages are its simplicity of alignment, high throughput, and feedback-free operation. If desired, it can be stabilized with a secondary laser without disturbance of the primary signal. We illustrate the use of the interferometer for the measurement of the spectral Mollow triplet from a quantum dot and characterize its long-term stability for filtering applications.
Tran, Thi-Nguyen-Ny; Signoli, Michel; Fozzati, Luigi; Aboudharam, Gérard; Raoult, Didier; Drancourt, Michel
Background Historical records suggest that multiple burial sites from the 14th–16th centuries in Venice, Italy, were used during the Black Death and subsequent plague epidemics. Methodology/Principal Findings High throughput, multiplexed real-time PCR detected DNA of seven highly transmissible pathogens in 173 dental pulp specimens collected from 46 graves. Bartonella quintana DNA was identified in five (2.9%) samples, including three from the 16th century and two from the 15th century, and Yersinia pestis DNA was detected in three (1.7%) samples, including two from the 14th century and one from the 16th century. Partial glpD gene sequencing indicated that the detected Y. pestis was the Orientalis biotype. Conclusions These data document for the first time successive plague epidemics in the medieval European city where quarantine was first instituted in the 14th century.
Akbari, Samin; Pirbodaghi, Tohid; Kamm, Roger D; Hammond, Paula T
Biocompatible microparticles are valuable tools in biomedical research for applications such as drug delivery, cell transplantation therapy, and analytical assays. However, their translation into clinical research and the pharmaceutical industry has been slow due to the lack of techniques that can produce microparticles with controlled physicochemical properties at high throughput. We introduce a robust microfluidic platform for the production of relatively homogeneous microdroplets at a generation frequency of up to 3.1 MHz, which is about three orders of magnitude higher than the production rate of a conventional microfluidic drop maker. We demonstrated the successful implementation of our device for production of biocompatible microparticles with various crosslinking mechanisms and cell microencapsulation with high cell viability.
Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
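The performance-per-watt figure of merit named above is simply sustained throughput divided by mean power draw, i.e. work done per joule. The sketch below illustrates how platforms would be ranked on it; the platform names echo those in the abstract, but the numbers are entirely hypothetical and are not measurements reported in the paper.

```python
def perf_per_watt(events_per_second, mean_power_watts):
    """Events processed per joule of energy: the performance-per-watt
    figure of merit used to compare architectures."""
    return events_per_second / mean_power_watts

# Hypothetical throughput/power numbers for illustration only;
# these are NOT measurements from the paper.
platforms = {
    "x86 server": perf_per_watt(1200.0, 400.0),  # 3.0 events/J
    "ARMv8 SoC": perf_per_watt(150.0, 30.0),     # 5.0 events/J
    "Xeon Phi": perf_per_watt(900.0, 250.0),     # 3.6 events/J
}
most_efficient = max(platforms, key=platforms.get)
```

The point the metric makes is visible even in toy numbers: a low-power SoC can win on events per joule while losing badly on absolute throughput.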
To increase production efficiency in the manufacture of infrared focal plane components, test techniques were refined to enhance testing throughput and accuracy. The result is an integrated package of high performance hardware and software tools which performs well in high throughput production environments. The test system is also very versatile. It has been used for readout (multiplexer) device characterization, room temperature automated wafer probing, and focal plane array (FPA) testing. Tests have been performed using electrical and radiometric optical stimulus. An integrated, convenient software package was developed and is used to acquire, reduce, analyze, display, and archive test data. The test software supports fully automated operation for the production environment, as well as menu-driven operation for R&D, characterization and setup purposes. Trade-offs between handling techniques in cryogenic production testing were investigated. "Batch processing" is preferred over "continuous flow", primarily due to considerations of contamination of the cryogenic environment.
Knight, R.; Hamady, M.; Liu, Z.; Lozupone, C.
High-throughput sequencing techniques such as 454 are straining the limits of tools traditionally used to build trees, choose OTUs, and perform other essential sequencing tasks. We have developed a workflow for phylogenetic analysis of large-scale sequence data sets that combines existing tools, such as the Arb phylogeny package and the NAST multiple sequence alignment tool, with new methods for choosing and clustering OTUs and for performing phylogenetic community analysis with UniFrac. This talk discusses the cyberinfrastructure we are developing to support the human microbiome project, and the application of these workflows to analyze very large data sets that contrast the gut microbiota with a range of physical environments. These tools will ultimately help to define core and peripheral microbiomes in a range of environments, and will allow us to understand the physical and biotic factors that contribute most to differences in microbial diversity.
This paper presents a new multi-output and high-throughput pseudorandom number generator. The scheme uses a homogenized Logistic chaotic sequence as the parameter of a unified hyperchaotic system, so the unified hyperchaotic system can transfer between different chaotic systems and its output becomes more complex as the homogenized Logistic output changes. By processing the four outputs of the unified hyperchaotic system, the output is extended to 26 channels. In addition, the generated pseudorandom sequences have all passed the NIST SP800-22 standard test and the DIEHARD test. The system is designed in Verilog HDL and experimentally verified on a Xilinx Spartan-6 FPGA, with a maximum throughput of 16.91 Gbit/s for the native chaotic output and 13.49 Gbit/s for the resulting pseudorandom number generators.
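The general idea, one chaotic sequence modulating the parameter of another chaotic map whose state is then harvested for output bits, can be sketched in software. This toy sketch is not the paper's unified-hyperchaos design (which runs four coupled outputs in FPGA hardware); the Logistic maps, seeds, and parameter band below are illustrative assumptions.

```python
def chaotic_prng_bytes(n, seed_driver=0.123456, seed_driven=0.654321):
    """Toy pseudorandom byte generator: a Logistic-map 'driver' sequence
    modulates the parameter of a second Logistic map, whose state is
    harvested for output bytes. Illustrative sketch only; NOT the
    paper's unified-hyperchaos design."""
    x, y, out = seed_driver, seed_driven, bytearray()
    for _ in range(n):
        x = 3.9999 * x * (1.0 - x)           # driving chaotic sequence
        r = 3.99 + 0.0099 * x                # keep driven map chaotic (r < 4)
        y = r * y * (1.0 - y)                # driven map, parameter varies per step
        out.append(int(y * 2 ** 24) & 0xFF)  # keep only low-order bits
    return bytes(out)
```

Discarding high-order bits and harvesting only the low byte is a common (if simplistic) whitening step; a real design would be validated against NIST SP800-22 and DIEHARD as the paper's generator was.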
Sapkota, Rumakanta; Nicolaisen, Mogens
Many of the plant diseases caused by oomycetes, such as cavity spot and damping off, involve a complex of several species, emphasizing the need for a community approach when studying these organisms. Despite the economic importance of plant pathogens such as Phytophthora and Pythium, we have limited understanding of the diversity of oomycetes in symptomatic plant tissue as well as in root zones. The aim of this study was to improve and validate techniques for using high-throughput sequencing as a tool for studying oomycete communities. Primer sets ITS4, ITS6 and ITS7 that have been used … communities, DNA extracted from carrot tissue samples with symptoms of Pythium infection, and soil samples collected from agricultural fields. Sequence data from Pythium and Phytophthora mock communities showed that our strategy successfully detected all included species. Taxonomic assignments of operational …
Çakir, Seda; Bauters, Erwin; Rivero, Guadalupe; Parasote, Tom; Paul, Johan; Du Prez, Filip E
The synthesis of microcapsules via in situ polymerization is a labor-intensive and time-consuming process, where many composition and process factors affect the microcapsule formation and its morphology. Herein, we report a novel combinatorial technique for the preparation of melamine-formaldehyde microcapsules, using a custom-made and automated high-throughput platform (HTP). After performing validation experiments for ensuring the accuracy and reproducibility of the novel platform, a design of experiment study was performed. The influence of different encapsulation parameters was investigated, such as the effect of the surfactant, surfactant type, surfactant concentration and core/shell ratio. As a result, this HTP-platform is suitable to be used for the synthesis of different types of microcapsules in an automated and controlled way, allowing the screening of different reaction parameters in a shorter time compared to the manual synthetic techniques.
Jolliet, O.; Fantke, Peter; Huang, L.
Models are becoming increasingly available to model near-field fate and exposure, but not all are suited for high throughput. This presentation evaluates the available models for modeling exposure to chemicals in cosmetics, cleaning products, food contact and building materials. It assesses … in indoor air (Little et al., 2012; Liu et al., 2013), but they do not account well for SVOC sorption into indoor surfaces and absorption into human skin (Huang et al., 2017). Thus a more comprehensive simplified solution is needed for SVOCs. For personal care products, a mass balance model that accounts for skin permeation and volatilization as competing processes, and that requires a limited number of readily available physicochemical properties, would be suitable for LCA and HTS purposes. Thus, the multi-pathway exposure model for chemicals in cosmetics developed by Ernstoff et al. constitutes a suitable …
Jia, Baolei; Jeon, Che Ok
The ease of genetic manipulation, low cost, rapid growth and number of previous studies have made Escherichia coli one of the most widely used microorganism species for producing recombinant proteins. In this post-genomic era, challenges remain to rapidly express and purify large numbers of proteins for academic and commercial purposes in a high-throughput manner. In this review, we describe several state-of-the-art approaches that are suitable for the cloning, expression and purification, conducted in parallel, of numerous molecules, and we discuss recent progress related to soluble protein expression, mRNA folding, fusion tags, post-translational modification and production of membrane proteins. Moreover, we address the ongoing efforts to overcome various challenges faced in protein expression in E. coli, which could lead to an improvement of the current system from trial and error to a predictable and rational design.
Keiti Oliveira Alessio
Reference materials used in quality control laboratories need to maintain homogeneity and stability, which can be affected by temperature, humidity and microbial activity. This research aimed to investigate a rapid, cheap and high-throughput procedure using microwave radiation for the inhibition of microbial growth in a peat reference sample, in order to increase its stability. Different exposure times (60, 120 and 180 seconds) of the material to radiation were evaluated, and microbiological tests were conducted to assess the percentage reduction in CFUs for fungi and bacteria. The results showed 100% and >90% reduction in growth for bacteria and fungi, respectively, over 90 days of monitoring the material, using 120 to 180 seconds of microwave radiation exposure. The results demonstrate that this method is economical and efficient at stabilizing peat reference materials. DOI: http://dx.doi.org/10.17807/orbital.v9i5.942
In this manuscript, we describe the identification of highly pathogenic bacteria using an assay coupling biothreat group-specific PCR with electrospray ionization mass spectrometry (PCR/ESI-MS) run on an Ibis PLEX-ID high-throughput platform. The biothreat cluster assay identifies most of the potential bioterrorism-relevant microorganisms, including Bacillus anthracis, Francisella tularensis, Yersinia pestis, Burkholderia mallei and pseudomallei, Brucella species, and Coxiella burnetii. DNA from 45 different reference materials with different formulations and different concentrations was chosen and sent to a service screening laboratory that uses the PCR/ESI-MS platform to provide a microbial identification service. The standard reference materials were produced from a repository built up in the framework of the EU-funded project "Establishment of Quality Assurances for Detection of Highly Pathogenic Bacteria of Potential Bioterrorism Risk" (EQADeBa). All samples were correctly identified at least to the genus level.
Duellmann, D.; Surdy, K.; Menichetti, L.; Toebbicke, R.
The statistical analysis of infrastructure metrics comes with several specific challenges, including the fairly large volume of unstructured metrics from a large set of independent data sources. Hadoop and Spark provide an ideal environment in particular for the first steps of skimming rapidly through hundreds of TB of low relevance data to find and extract the much smaller data volume that is relevant for statistical analysis and modelling. This presentation will describe the new Hadoop service at CERN and the use of several of its components for high throughput data aggregation and ad-hoc pattern searches. We will describe the hardware setup used, the service structure with a small set of decoupled clusters and the first experience with co-hosting different applications and performing software upgrades. We will further detail the common infrastructure used for data extraction and preparation from continuous monitoring and database input sources.
Treindl, Fridolin; Ruprecht, Benjamin; Beiter, Yvonne; Schultz, Silke; Döttinger, Anette; Staebler, Annette; Joos, Thomas O.; Kling, Simon; Poetz, Oliver; Fehm, Tanja; Neubauer, Hans; Kuster, Bernhard; Templin, Markus F.
Dissecting cellular signalling requires the analysis of large number of proteins. The DigiWest approach we describe here transfers the western blot to a bead-based microarray platform. By combining gel-based protein separation with immobilization on microspheres, hundreds of replicas of the initial blot are created, thus enabling the comprehensive analysis of limited material, such as cells collected by laser capture microdissection, and extending traditional western blotting to reach proteomic scales. The combination of molecular weight resolution, sensitivity and signal linearity on an automated platform enables the rapid quantification of hundreds of specific proteins and protein modifications in complex samples. This high-throughput western blot approach allowed us to identify and characterize alterations in cellular signal transduction that occur during the development of resistance to the kinase inhibitor Lapatinib, revealing major changes in the activation state of Ephrin-mediated signalling and a central role for p53-controlled processes.
Klenke, Robert H.; Sleeman, W. C., IV; Motter, Mark A.
There are numerous autopilot systems commercially available for small unmanned aerial vehicles (UAVs). However, they all share several key disadvantages for conducting aerodynamic research, chief amongst which is the fact that most utilize older, slower, 8- or 16-bit microcontroller technologies. This paper describes the development and testing of a flight control system (FCS) for small UAVs based on a modern, high-throughput, embedded processor. In addition, this FCS platform contains user-configurable hardware resources in the form of a Field Programmable Gate Array (FPGA) that can be used to implement custom, application-specific hardware. This hardware can be used to off-load routine tasks, such as sensor data collection, from the FCS processor, thereby further increasing the computational throughput of the system.
Ronquist, Scott; Meixner, Walter; Rajapakse, Indika; Snyder, John
The human genome is dynamic in structure, complicating researchers' attempts at fully understanding it. Time-series fluorescence in situ hybridization (FISH) imaging has increased our ability to observe genome structure, but due to cell-type and experimental variability this data is often noisy and difficult to analyze. Furthermore, computational analysis techniques are needed for homolog discrimination and canonical framework detection in the case of time-series images. In this paper we introduce novel ideas for nucleus imaging analysis, present findings extracted using dynamic genome imaging, and propose an objective algorithm for high-throughput, time-series FISH imaging. While a canonical framework could not be detected beyond statistical significance in the analyzed dataset, a mathematical framework for detection has been outlined, with extension to 3D image analysis.
Liu, Rong; Hassan, Taimur; Rallo, Robert; Cohen, Yoram
The increasing utilization of high-throughput screening (HTS) in toxicity studies of engineered nanomaterials (ENMs) requires tools for rapid and reliable processing and analysis of large HTS datasets. To meet this need, a web-based platform for HTS data analysis tools (HDAT) was developed that provides statistical methods suitable for ENM toxicity data. As a publicly available computational nanoinformatics infrastructure, HDAT provides different plate normalization methods, various HTS summarization statistics, self-organizing map (SOM)-based clustering analysis, and visualization of raw and processed data using both heat maps and SOMs. HDAT has been successfully used in a number of HTS studies of ENM toxicity, thereby enabling analysis of toxicity mechanisms and development of structure–activity relationships for ENM toxicity. The online approach afforded by HDAT should encourage standardization of and future advances in HTS, as well as facilitate convenient inter-laboratory comparisons of HTS datasets.
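Plate normalization, one of the preprocessing steps listed above, is commonly done either as a z-score over all wells of a plate or as percent-of-control scaling between the means of negative and positive control wells. The sketch below shows both in generic form; it does not reproduce HDAT's actual methods, and the function names are illustrative.

```python
import statistics

def plate_zscore(values):
    """Z-score normalization of raw well readings from one plate:
    centre on the plate mean, scale by the plate standard deviation."""
    mu = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def percent_of_control(values, positive_controls, negative_controls):
    """Scale each well between the plate's negative-control mean (0%)
    and positive-control mean (100%)."""
    lo = statistics.fmean(negative_controls)
    hi = statistics.fmean(positive_controls)
    return [100.0 * (v - lo) / (hi - lo) for v in values]
```

Normalizing per plate, rather than across a whole screen, compensates for plate-to-plate drift before any clustering or SOM analysis is applied.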
Allentoft, Morten Erik; Schuster, Stephan C.; Holdaway, Richard N.
Genetic variation in microsatellites is rarely examined in the field of ancient DNA (aDNA) due to the low quantity of nuclear DNA in the fossil record, together with the lack of characterized nuclear markers in extinct species. 454 sequencing platforms provide a new high-throughput technology capable of generating up to 1 gigabase per run as short (200-400 bp) read lengths. 454 data were generated from the fossil bone of an extinct New Zealand moa (Aves: Dinornithiformes). We identified numerous short tandem repeat (STR) motifs, and here present the successful isolation and characterization of one polymorphic microsatellite (Moa_MS2). Primers designed to flank this locus amplified all three moa species tested here. The presented method proved to be a fast and efficient way of identifying microsatellite markers in ancient DNA templates and, depending on biomolecule preservation, has…
Bornhop, Darryl J. (Inventor); Dotson, Stephen (Inventor); Bachmann, Brian O. (Inventor)
A polarimetry technique for measuring optical activity that is particularly suited for high throughput screening employs a chip or substrate (22) having one or more microfluidic channels (26) formed therein. A polarized laser beam (14) is directed onto optically active samples that are disposed in the channels. The incident laser beam interacts with the optically active molecules in the sample, which slightly alter the polarization of the laser beam as it passes multiple times through the sample. Interference fringe patterns (28) are generated by the interaction of the laser beam with the sample and the channel walls. A photodetector (34) is positioned to receive the interference fringe patterns and generate an output signal that is input to a computer or other analyzer (38) for analyzing the signal and determining the rotation of plane polarized light by optically active material in the channel from polarization rotation calculations.
Chen, Ing-Chien; Chiu, Yi-Kai; Yu, Chung-Ming; Lee, Cheng-Chung; Tung, Chao-Ping; Tsou, Yueh-Liang; Huang, Yi-Jen; Lin, Chia-Lung; Chen, Hong-Sen; Wang, Andrew H-J; Yang, An-Suei
Pandemic and epidemic outbreaks of influenza A virus (IAV) infection pose severe challenges to human society. Passive immunotherapy with recombinant neutralizing antibodies can potentially mitigate the threats of IAV infection. With a high throughput neutralizing antibody discovery platform, we produced artificial anti-hemagglutinin (HA) IAV-neutralizing IgGs from phage-displayed synthetic scFv libraries without necessitating prior memory of antibody-antigen interactions or relying on affinity maturation essential for in vivo immune systems to generate highly specific neutralizing antibodies. At least two thirds of the epitope groups of the artificial anti-HA antibodies resemble those of natural protective anti-HA antibodies, providing alternatives to neutralizing antibodies from natural antibody repertoires. With continuing advancement in designing and constructing synthetic scFv libraries, this technological platform is useful in mitigating not only the threats of IAV pandemics but also those from other newly emerging viral infections.
Duarte, José M; Barbier, Içvara; Schaerli, Yolanda
Synthetic biologists increasingly rely on directed evolution to optimize engineered biological systems. Applying an appropriate screening or selection method for identifying the potentially rare library members with the desired properties is a crucial step for success in these experiments. Special challenges include substantial cell-to-cell variability and the requirement to check multiple states (e.g., being ON or OFF depending on the input). Here, we present a high-throughput screening method that addresses these challenges. First, we encapsulate single bacteria into microfluidic agarose gel beads. After incubation, they harbor monoclonal bacterial microcolonies (e.g., expressing a synthetic construct) and can be sorted according to their fluorescence by fluorescence-activated cell sorting (FACS). We determine enrichment rates and demonstrate that we can measure the average fluorescent signals of microcolonies containing phenotypically heterogeneous cells, obviating the problem of cell-to-cell variability. Finally, we apply this method to sort a pBAD promoter library at ON and OFF states.
Oellers, Tobias; König, Dennis; Kostka, Aleksander; Xie, Shenqie; Brugger, Jürgen; Ludwig, Alfred
The scaling behavior of Ti-Ni-Cu shape-memory thin-film micro- and nanowires of different geometry is investigated with respect to its influence on the martensitic transformation properties. Two processes for the high-throughput fabrication of Ti-Ni-Cu micro- to nanoscale thin-film wire libraries and the subsequent investigation of the transformation properties are reported. The libraries are fabricated with compositional and geometrical (wire width) variations to investigate the influence of these parameters on the transformation properties. Interesting behaviors were observed: phase transformation temperatures change in the range from 1 to 72 °C (austenite finish, A_f), from 13 to 66 °C (martensite start, M_s), and the thermal hysteresis from −3.5 to 20 K. It is shown that a vanishing hysteresis can be achieved for special combinations of sample geometry and composition.
Schückel, Julia; Kracun, Stjepan Kresimir; Willats, William George Tycho
Carbohydrate-active enzymes (CAZymes) have multiple roles in vivo and are widely used for industrial processing in the biofuel, textile, detergent, paper and food industries. A deeper understanding of CAZymes is important from both fundamental biology and industrial standpoints. Vast numbers of CAZymes exist in nature (especially in microorganisms) and hundreds of thousands have been cataloged and described in the carbohydrate-active enzyme database (CAZy). However, the rate of discovery of putative enzymes has outstripped our ability to biochemically characterize their activities. One reason for this is that advances in genome and transcriptome sequencing, together with associated bioinformatics tools, allow for rapid identification of candidate CAZymes, but technology for determining an enzyme's biochemical characteristics has advanced more slowly. To address this technology gap, a novel high-throughput assay…
Kebschull, Justus M; Garcia da Silva, Pedro; Reid, Ashlan P; Peikon, Ian D; Albeanu, Dinu F; Zador, Anthony M
Neurons transmit information to distant brain regions via long-range axonal projections. In the mouse, area-to-area connections have only been systematically mapped using bulk labeling techniques, which obscure the diverse projections of intermingled single neurons. Here we describe MAPseq (Multiplexed Analysis of Projections by Sequencing), a technique that can map the projections of thousands or even millions of single neurons by labeling large sets of neurons with random RNA sequences ("barcodes"). Axons are filled with barcode mRNA, each putative projection area is dissected, and the barcode mRNA is extracted and sequenced. Applying MAPseq to the locus coeruleus (LC), we find that individual LC neurons have preferred cortical targets. By recasting neuroanatomy, which is traditionally viewed as a problem of microscopy, as a problem of sequencing, MAPseq harnesses advances in sequencing technology to permit high-throughput interrogation of brain circuits. Copyright © 2016 Elsevier Inc. All rights reserved.
Sjostrom, Staffan L.; Bai, Yunpeng; Huang, Mingtao
A high-throughput method for single-cell screening by microfluidic droplet sorting is applied to a whole-genome mutated yeast cell library, yielding improved production hosts of secreted industrial enzymes. The sorting method is validated by enriching a yeast strain 14-fold based on its α-amylase production, close to the theoretical maximum enrichment. Furthermore, a 10^5-member yeast cell library is screened, yielding a clone with a more than 2-fold increase in α-amylase production. The increase in enzyme production results from an improvement of the cellular functions of the production host, in contrast to previous droplet-based directed evolution that has focused on improving enzyme protein structure. In the workflow presented, enzyme-producing single cells are encapsulated in 20 pL droplets with a fluorogenic reporter substrate. The coupling of a desired phenotype (secreted enzyme concentration…
Romanowsky, Mark B; Abate, Adam R; Rotem, Assaf; Holtze, Christian; Weitz, David A
Double emulsions are useful templates for microcapsules and complex particles, but no method yet exists for making double emulsions with both high uniformity and high throughput. We present a parallel numbering-up design for microfluidic double emulsion devices, which combines the excellent control of microfluidics with throughput suitable for mass production. We demonstrate the design with devices incorporating up to 15 dropmaker units in a two-dimensional or three-dimensional array, producing single-core double emulsion drops at rates over 1 kg day⁻¹ and with diameter variation of less than 6%. This design provides a route to integrating hundreds of dropmakers or more in a single chip, facilitating industrial-scale production rates of many tons per year.
David, Fabrice P A; Delafontaine, Julien; Carat, Solenne; Ross, Frederick J; Lefebvre, Gregory; Jarosz, Yohan; Sinclair, Lucas; Noordermeer, Daan; Rougemont, Jacques; Leleu, Marion
The HTSstation analysis portal is a suite of simple web forms coupled to modular analysis pipelines for various applications of High-Throughput Sequencing including ChIP-seq, RNA-seq, 4C-seq and re-sequencing. HTSstation offers biologists the possibility to rapidly investigate their HTS data using an intuitive web application with heuristically pre-defined parameters. A number of open-source software components have been implemented and can be used to build, configure and run HTS analysis pipelines reactively. Besides, our programming framework empowers developers with the possibility to design their own workflows and integrate additional third-party software. The HTSstation web application is accessible at http://htsstation.epfl.ch.
Classically, thermal noise has been the dominant impairment in satellite communications due to the long distances between the satellite and the user terminal (UT). Lately, LDPC (low-density parity-check) codes allow the noise threshold to be set very close to the Shannon limit for the memoryless satellite channel, thus solving the noise problem that turbo codes were not able to solve. However, the high target rates of the next-generation 5G terrestrial wireless system are now pushing the required spectral efficiency in satellite communications, shifting the SatCom paradigm towards an interference-limited one. This paper revisits the 5G scene and the role of next-generation satellite communications, with a special focus on high throughput satellites (HTS) together with the future accompanying MIMO interference mitigation techniques.
Su, Ran; Xiong, Sijing; Zink, Daniele; Loo, Lit-Hsin
The kidney is a major target for xenobiotics, which include drugs, industrial chemicals, environmental toxicants and other compounds. Accurate methods for screening large numbers of potentially nephrotoxic xenobiotics with diverse chemical structures are currently not available. Here, we describe an approach for nephrotoxicity prediction that combines high-throughput imaging of cultured human renal proximal tubular cells (PTCs), quantitative phenotypic profiling, and machine learning methods. We automatically quantified 129 image-based phenotypic features, and identified chromatin and cytoskeletal features that can predict the human in vivo PTC toxicity of 44 reference compounds with ~82 % (primary PTCs) or 89 % (immortalized PTCs) test balanced accuracies. Surprisingly, our results also revealed that a DNA damage response is commonly induced by different PTC toxicants that have diverse chemical structures and injury mechanisms. Together, our results show that human nephrotoxicity can be predicted with high efficiency and accuracy by combining cell-based and computational methods that are suitable for automation.
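The "test balanced accuracies" reported above are the mean of sensitivity and specificity, which guards against inflated scores when toxic and non-toxic compounds are unevenly represented. A small illustrative computation (the label vectors here are made up, not the paper's data):

```python
def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy: mean of sensitivity (recall on the positive,
    i.e. toxic, class) and specificity (recall on the negative class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return 0.5 * (sensitivity + specificity)
```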
Yu, J S; Ongarello, S; Fiedler, R; Chen, X W; Toffolo, G; Cobelli, C; Trajanoski, Z
High-throughput, high-resolution mass spectrometry instruments are increasingly used for disease classification and therapeutic guidance. However, the analysis of the immense amounts of data they produce poses considerable challenges. We have therefore developed a novel method for dimensionality reduction and tested it on a published ovarian high-resolution SELDI-TOF dataset. Our four-step strategy for data preprocessing is based on: (1) binning, (2) the Kolmogorov-Smirnov test, (3) restriction of the coefficient of variation and (4) wavelet analysis. Subsequently, support vector machines were used for classification. The developed method achieves an average sensitivity of 97.38% (sd = 0.0125) and an average specificity of 93.30% (sd = 0.0174) in 1000 independent k-fold cross-validations, where k = 2, ..., 10. The software is available for academic and non-commercial institutions.
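The paper's exact parameters are not given in this abstract; the sketch below illustrates only steps (1) and (3) of the four-step strategy on a toy spectra matrix (step (2) would compare class distributions per bin, e.g. with `scipy.stats.ks_2samp`, and step (4) would apply a wavelet transform). The function names, bin size and CV threshold are assumptions:

```python
import numpy as np

def bin_spectra(spectra, bin_size):
    """Step 1 -- binning: average adjacent m/z channels to reduce
    dimensionality. `spectra` has shape (n_samples, n_channels)."""
    n = spectra.shape[1] - spectra.shape[1] % bin_size  # drop remainder
    return spectra[:, :n].reshape(spectra.shape[0], -1, bin_size).mean(axis=2)

def restrict_by_cv(spectra, max_cv):
    """Step 3 -- keep only bins whose coefficient of variation
    (std/mean across samples) stays below `max_cv`."""
    cv = spectra.std(axis=0) / spectra.mean(axis=0)
    keep = cv < max_cv
    return spectra[:, keep], keep
```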
Castro-Osma, José A; Comerford, James W; Heyn, Richard H; North, Michael; Tangstad, Elisabeth
High throughput methodologies screened 81 different metal salts and metal salt combinations as catalysts for the carboxylation of propylene glycol to propylene carbonate, as compared to a 5 mol% Zn(OAc)2/p-chlorobenzene sulfonic acid benchmark catalyst. The reactions were run with added acetonitrile (MeCN) as a chemical water trap. Two new catalysts were thereby discovered, zinc trifluoromethanesulfonate (Zn(OTf)2) and zinc p-toluenesulfonate. The optimal reaction parameters for the former catalyst were screened. Zn(OTf)2 gave an overall propylene carbonate yield of greater than 50% in 24 h, twice as large as the previous best literature yield with MeCN as a water trap, with 69% selectivity and 75% conversion of propylene glycol at 145 °C and 50 bar CO2 pressure.
Vinogradov, Alexander A; Gates, Zachary P; Zhang, Chi; Quartararo, Anthony J; Halloran, Kathryn H; Pentelute, Bradley L
A methodology to achieve high-throughput de novo sequencing of synthetic peptide mixtures is reported. The approach leverages shotgun nanoliquid chromatography coupled with tandem mass spectrometry-based de novo sequencing of library mixtures (up to 2000 peptides) as well as automated data analysis protocols to filter away incorrect assignments, noise, and synthetic side-products. For increasing the confidence in the sequencing results, mass spectrometry-friendly library designs were developed that enabled unambiguous decoding of up to 600 peptide sequences per hour while maintaining greater than 85% sequence identification rates in most cases. The reliability of the reported decoding strategy was additionally confirmed by matching fragmentation spectra for select authentic peptides identified from library sequencing samples. The methods reported here are directly applicable to screening techniques that yield mixtures of active compounds, including particle sorting of one-bead one-compound libraries and affinity enrichment of synthetic library mixtures performed in solution.
Bai, Yang; Ji, Shufan; Jiang, Qinghua; Wang, Yadong
The emergence of next-generation high-throughput RNA sequencing (RNA-Seq) provides tremendous opportunities for researchers to analyze alternative splicing on a genome-wide scale. However, accurate identification of alternative splicing events from RNA-Seq data has remained an unresolved challenge in next-generation sequencing (NGS) studies. Identifying exon skipping (ES) events is an essential part of genome-wide alternative splicing event identification. In this paper, we propose a novel method, ESFinder, a random forest classifier that identifies ES events from RNA-Seq data. ESFinder evaluates candidate predictive features thoroughly and selects those most relevant for ES event identification. Experimental results on real human skeletal muscle and brain RNA-Seq data show that ESFinder can effectively predict ES events with high accuracy. The code for ESFinder is available at http://mlg.hit.edu.cn/ybai/ES/ESFinder.html.
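ESFinder's actual features and training data are described in the paper, not here; purely to illustrate the random-forest formulation, the following sketch trains a classifier on synthetic junction-read features (all feature names, value ranges and the scikit-learn dependency are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per candidate exon: junction reads supporting
# inclusion, junction reads supporting skipping, and normalized exon depth.
rng = np.random.default_rng(0)
n = 200
inclusion = np.r_[rng.uniform(20, 60, n), rng.uniform(0, 5, n)]
skipping  = np.r_[rng.uniform(0, 5, n),  rng.uniform(20, 60, n)]
depth     = np.r_[rng.uniform(0.8, 1.2, n), rng.uniform(0.0, 0.3, n)]
X = np.c_[inclusion, skipping, depth]
y = np.r_[np.zeros(n), np.ones(n)]  # 1 = exon-skipping event

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

In the real method the labels would come from curated splicing annotations rather than simulated clusters.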
Motivation: A review of the available single nucleotide polymorphism (SNP) calling procedures for Illumina high-throughput sequencing (HTS) platform data reveals that most rely mainly on base-calling and mapping qualities as sources of error when calling SNPs. Thus, errors not involved in base-calling or alignment, such as those in genomic sample preparation, are not accounted for. Results: A novel method of consensus and SNP calling, Genotype Model Selection (GeMS), is given which accounts for the errors that occur during the preparation of the genomic sample. Simulations and real data analyses indicate that GeMS has the best performance balance of sensitivity and positive predictive value among the tested SNP callers. © The Author 2012. Published by Oxford University Press. All rights reserved.
Knudsen, Peter Boldsen
-up to economically viable industrial processes. Accurate quantitative assessment of cellular performance is required for the evaluation of the overall suitability of a microorganism as an industrial cell factory, ensuring that not only product, but also process parameters are optimised. With the increasing number of strains generated through genetic engineering programmes, the traditionally applied methods for strain characterisation, which are typically labour-intensive and time-consuming, have become somewhat limited due to throughput capacity. Unfortunately, most high throughput methods only provide low levels of information compared to larger scale cultivations, explaining why these systems have not been broadly implemented. The overall aim of the thesis was, therefore, to shift this paradigm towards higher throughput systems for assessment of cellular performance with a higher level of information. This was pursued…
Gamba, Cristina; Hanghøj, Kristian Ebbesen; Gaunitz, Charleen
The DNA molecules that can be extracted from archaeological and palaeontological remains are often degraded and massively contaminated with environmental microbial material. This reduces the efficacy of shotgun approaches for sequencing ancient genomes, despite the decreasing costs of high-throughput sequencing (HTS). Improving the recovery of endogenous molecules from the DNA extraction and purification steps could, thus, help advance the characterization of ancient genomes. Here, we apply the three most commonly used DNA extraction methods to five ancient bone samples spanning a ~30 thousand year temporal range and originating from a diversity of environments, from South America to Alaska. We show that methods based on the purification of DNA fragments using silica columns are more advantageous than in-solution methods and increase not only the total amount of DNA molecules…
In the face of advancing technology in combinatorial synthesis and high-throughput screening, the drug discovery process continues to evolve. Drug metabolism and pharmacokinetics (DMPK) studies play a key role in lead identification and optimization. This fast-paced development process has imposed an enormous burden on the analytical chemist to design faster and more sensitive assay techniques to aid drug discovery and development. Various strategies aimed at increasing throughput and reducing sample numbers in discovery DMPK have been developed for both in vitro and in vivo experiments. However, quantity and speed, often associated with technology development, do not always guarantee quality; a clear strategic focus in the spirit of a 'fit for purpose' approach is required to implement systems that generate high-quality data and drive research in new directions.
Hombrink, Pleun; Hadrup, Sine R; Bakker, Arne
-cell infusion difficult. This study represents the first attempt at genome-wide prediction of MiHA, coupled to the isolation of T-cell populations that react with these antigens. In this unbiased high-throughput MiHA screen, both the possibilities and pitfalls of this approach were investigated. First, 973 polymorphic peptides expressed by hematopoietic stem cells were predicted and screened for HLA-A2 binding. Subsequently, a set of 333 high-affinity HLA-A2 ligands was identified, and post-transplantation samples from allo-SCT patients were screened for T-cell reactivity by a combination of pMHC-tetramer-based enrichment and multi-color flow cytometry. Using this approach, 71 peptide-reactive T-cell populations were generated. The isolation of a T-cell line specifically recognizing target cells expressing the MAP4K1(IMA) antigen demonstrates that identification of MiHA through this approach is in principle…
Barankov, Roman; Mertz, Jerome
Imaging through a single optical fibre offers attractive possibilities in many applications such as micro-endoscopy or remote sensing. However, the direct transmission of an image through an optical fibre is difficult because spatial information is scrambled upon propagation. We demonstrate an image transmission strategy where spatial information is first converted to spectral information. Our strategy is based on a principle of spread-spectrum encoding, borrowed from wireless communications, wherein object pixels are converted into distinct spectral codes that span the full bandwidth of the object spectrum. Image recovery is performed by numerical inversion of the detected spectrum at the fibre output. We provide a simple demonstration of spread-spectrum encoding using Fabry-Perot etalons. Our technique enables the two-dimensional imaging of self-luminous (that is, incoherent) objects with high throughput in principle independent of pixel number. Moreover, it is insensitive to fibre bending, contains no moving parts and opens the possibility of extreme miniaturization.
Trifonov, Vladimir; Pasqualucci, Laura; Dalla-Favera, Riccardo; Rabadan, Raul
Recent developments in extracting and processing biological and clinical data are allowing quantitative approaches to studying living systems. High-throughput sequencing (HTS), expression profiles, proteomics, and electronic health records (EHR) are some examples of such technologies. Extracting meaningful information from those technologies requires careful analysis of the large volumes of data they produce. In this note, we present a set of fractal-like distributions that commonly appear in the analysis of such data. The first set of examples is drawn from a HTS experiment. Here, the distributions appear as part of the evaluation of the error rate of the sequencing and the identification of tumorigenic genomic alterations. The other examples are obtained from risk factor evaluation and analysis of relative disease prevalence and co-morbidity as these appear in EHR. The distributions are also relevant to identification of subclonal populations in tumors and the study of quasi-species and intrahost diversity of viral populations.
Moy, Terence I; Conery, Annie L; Larkins-Ford, Jonah; Wu, Gang; Mazitschek, Ralph; Casadei, Gabriele; Lewis, Kim; Carpenter, Anne E; Ausubel, Frederick M
The nematode Caenorhabditis elegans is a unique whole animal model system for identifying small molecules with in vivo anti-infective properties. C. elegans can be infected with a broad range of human pathogens, including Enterococcus faecalis, an important human nosocomial pathogen. Here, we describe an automated, high-throughput screen of 37,200 compounds and natural product extracts for those that enhance survival of C. elegans infected with E. faecalis. Using a robot to dispense live, infected animals into 384-well plates and automated microscopy and image analysis, we identified 28 compounds and extracts not previously reported to have antimicrobial properties, including six structural classes that cure infected C. elegans animals but do not affect the growth of the pathogen in vitro, thus acting by a mechanism of action distinct from antibiotics currently in clinical use.
Tanger, Paul; Klassen, Stephen; Mojica, Julius P; Lovell, John T; Moyers, Brook T; Baraoidan, Marietta; Naredo, Maria Elizabeth B; McNally, Kenneth L; Poland, Jesse; Bush, Daniel R; Leung, Hei; Leach, Jan E; McKay, John K
To ensure food security in the face of population growth, decreasing water and land for agriculture, and increasing climate variability, crop yields must increase faster than the current rates. Increased yields will require implementing novel approaches in genetic discovery and breeding. Here we demonstrate the potential of field-based high throughput phenotyping (HTP) on a large recombinant population of rice to identify genetic variation underlying important traits. We find that detecting quantitative trait loci (QTL) with HTP phenotyping is as accurate and effective as traditional labor-intensive measures of flowering time, height, biomass, grain yield, and harvest index. Genetic mapping in this population, derived from a cross of a modern cultivar (IR64) with a landrace (Aswina), identified four alleles with negative effect on grain yield that are fixed in IR64, demonstrating the potential for HTP of large populations as a strategy for the second green revolution.
Zeng, Wei; Fisher, Alison L; Musson, Donald G; Wang, Amy Qiu
A novel method was developed and assessed to extend the lifetime of extraction columns in high-throughput liquid chromatography (HTLC) for bioanalysis of human plasma samples. In this method, a 15% acetic acid solution and 90% THF were used, respectively, as mobile phases to clean proteins from human plasma samples and residual lipids from the extraction and analytical columns. The 15% acetic acid solution weakens the interactions between proteins and the stationary phase of the extraction column and increases protein solubility in the mobile phase. The 90% THF mobile phase prevents the accumulation of lipids and thus reduces potential damage to the columns. Using this novel method, the extraction-column lifetime has been extended to about 2000 direct plasma injections; this is the first time that high-concentration acetic acid and THF have been used in HTLC for on-line cleanup and extraction-column lifetime extension.
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
Fabian J. Theis
Metabolomics is a relatively new high-throughput technology that aims to measure all endogenous metabolites within a biological sample in an unbiased fashion. The resulting metabolic profiles may be regarded as functional signatures of the physiological state, and have been shown to comprise effects of genetic regulation as well as environmental factors. This potential to connect genotypic to phenotypic information promises new insights and biomarkers for different research fields, including biomedical and pharmaceutical research. In the statistical analysis of metabolomics data, many techniques from other omics fields can be reused. Recently, however, a number of tools specific to metabolomics data have been developed as well. The focus of this mini-review will be on recent advancements in the analysis of metabolomics data, especially by utilizing Gaussian graphical models and independent component analysis.
Al-Tamimi, Nadia Ali
High-throughput phenotyping produces multiple measurements over time, which require new methods of analyses that are flexible in their quantification of plant growth and transpiration, yet are computationally economic. Here we develop such analyses and apply this to a rice population genotyped with a 700k SNP high-density array. Two rice diversity panels, indica and aus, containing a total of 553 genotypes, are phenotyped in waterlogged conditions. Using cubic smoothing splines to estimate plant growth and transpiration, we identify four time intervals that characterize the early responses of rice to salinity. Relative growth rate, transpiration rate and transpiration use efficiency (TUE) are analysed using a new association model that takes into account the interaction between treatment (control and salt) and genetic marker. This model allows the identification of previously undetected loci affecting TUE on chromosome 11, providing insights into the early responses of rice to salinity, in particular into the effects of salinity on plant growth and transpiration.
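The association model above is beyond a short sketch, but the spline step can be illustrated: fitting a cubic smoothing spline to log biomass and differentiating it yields the relative growth rate. A minimal sketch with SciPy, assuming synthetic exponential-growth data (the function name and the zero-smoothing setting are illustrative choices, not the authors' parameters):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def relative_growth_rate(days, biomass, smoothing=None):
    """Fit a cubic smoothing spline to log-biomass and return its
    derivative, i.e. the relative growth rate d(log M)/dt."""
    spline = UnivariateSpline(days, np.log(biomass), k=3, s=smoothing)
    return spline.derivative()

days = np.arange(0.0, 10.0, 0.5)
biomass = np.exp(0.3 * days)  # synthetic exponential growth, RGR = 0.3/day
rgr = relative_growth_rate(days, biomass, smoothing=0.0)
```

On real phenotyping data the smoothing parameter would be tuned to suppress measurement noise rather than set to zero.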
Abalde-Cela, Sara; Gould, Anna; Liu, Xin; Kazamia, Elena; Smith, Alison G; Abell, Chris
Ethanol production by microorganisms is an important renewable energy source. Most processes involve fermentation of sugars from plant feedstock, but there is increasing interest in direct ethanol production by photosynthetic organisms. To facilitate this, a high-throughput screening technique for the detection of ethanol is required. Here, a method for the quantitative detection of ethanol in a microdroplet-based platform is described that can be used for screening cyanobacterial strains to identify those with the highest ethanol productivity levels. The detection of ethanol by enzymatic assay was optimized both in bulk and in microdroplets. In parallel, the encapsulation of engineered ethanol-producing cyanobacteria in microdroplets and their growth dynamics in microdroplet reservoirs were demonstrated. The combination of modular microdroplet operations including droplet generation for cyanobacteria encapsulation, droplet re-injection and pico-injection, and laser-induced fluorescence, were used to create this new platform to screen genetically engineered strains of cyanobacteria with different levels of ethanol production.
Hu, Binjie; Zhao, Fuju; Wang, Shiwen; Olszewski, Michal A; Bian, Haipeng; Wu, Yong; Kong, Mimi; Xu, Lingli; Miao, Yingxin; Fang, Yi; Yang, Changqing; Zhao, Hu; Zhang, Yanmei
We established a high-throughput multiplex genetic detection system (HMGS) for identification of Helicobacter pylori with concomitant analysis of virulence and drug resistance. A total of 132 confirmed H. pylori cultures from gastric biopsies were screened by 20-gene-site HMGS, sequencing and E-test. HMGS was highly sensitive and specific for H. pylori identification. The concordance rate between HMGS and sequencing averaged 94.5% (virulence genes) and 97.3% (resistance genes). Observed resistance rates to four mainstream antibiotics were high, except for amoxicillin. A significant association between virulence genotype and risk of specific gastrointestinal diseases was found for five genes. Metronidazole resistance was significantly higher in peptic ulcer patients. HMGS is an effective method for H. pylori identification and analysis of virulence and drug resistance.
Moriconi, Chiara; Palmieri, Valentina; Di Santo, Riccardo; Tornillo, Giusy; Papi, Massimiliano; Pilkington, Geoff; De Spirito, Marco; Gumbleton, Mark
Time-series image capture of in vitro 3D spheroidal cancer models embedded within an extracellular matrix affords examination of spheroid growth and cancer cell invasion. However, a customizable, comprehensive and open source solution for the quantitative analysis of such spheroid images is lacking. Here, the authors describe INSIDIA (INvasion SpheroID ImageJ Analysis), an open-source macro implemented as a customizable software algorithm running on the FIJI platform, that enables high-throughput high-content quantitative analysis of spheroid images (both bright-field gray and fluorescent images) with the output of a range of parameters defining the spheroid "tumor" core and its invasive characteristics. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Tran, Thi-Nguyen-Ny; Signoli, Michel; Fozzati, Luigi; Aboudharam, Gérard; Raoult, Didier; Drancourt, Michel
Historical records suggest that multiple burial sites from the 14th-16th centuries in Venice, Italy, were used during the Black Death and subsequent plague epidemics. High throughput, multiplexed real-time PCR detected DNA of seven highly transmissible pathogens in 173 dental pulp specimens collected from 46 graves. Bartonella quintana DNA was identified in five (2.9%) samples, including three from the 16th century and two from the 15th century, and Yersinia pestis DNA was detected in three (1.7%) samples, including two from the 14th century and one from the 16th century. Partial glpD gene sequencing indicated that the detected Y. pestis was the Orientalis biotype. These data document for the first time successive plague epidemics in the medieval European city where quarantine was first instituted in the 14th century.
Linnemann, Carsten; Heemskerk, Bianca; Kvistborg, Pia
The transfer of T cell receptor (TCR) genes into patient T cells is a promising approach for the treatment of both viral infections and cancer. Although efficient methods exist to identify antibodies for the treatment of these diseases, comparable strategies to identify TCRs have been lacking. We have developed a high-throughput DNA-based strategy to identify TCR sequences by the capture and sequencing of genomic DNA fragments encoding the TCR genes. We establish the value of this approach by assembling a large library of cancer germline tumor antigen-reactive TCRs. Furthermore, by exploiting ... of antigen specificities, which may be the first step toward the development of autologous TCR gene therapy to target patient-specific neoantigens in human cancer.
Littlefair, Joanne E; Clare, Elizabeth L
Society faces the complex challenge of supporting biodiversity and ecosystem functioning, while ensuring food security by providing safe traceable food through an ever-more-complex global food chain. The increase in human mobility brings the added threat of pests, parasites, and invaders that further complicate our agro-industrial efforts. DNA barcoding technologies allow researchers to identify both individual species, and, when combined with universal primers and high-throughput sequencing techniques, the diversity within mixed samples (metabarcoding). These tools are already being employed to detect market substitutions, trace pests through the forensic evaluation of trace "environmental DNA", and to track parasitic infections in livestock. The potential of DNA barcoding to contribute to increased security of the food chain is clear, but challenges remain in regulation and the need for validation of experimental analysis. Here, we present an overview of the current uses and challenges of applied DNA barcoding in agriculture, from agro-ecosystems within farmland to the kitchen table.
Bohr, Adam; Boetker, Johan; Wang, Yingya
3D printing allows a rapid and inexpensive manufacturing of custom made and prototype devices. Micromixers are used for rapid and controlled production of nanoparticles intended for therapeutic delivery. In this study, we demonstrate the fabrication of micromixers using computational design and 3D printing, which enable a continuous and industrial scale production of nanocomplexes formed by electrostatic complexation, using the polymers poly(diallyldimethylammonium chloride) and poly(sodium 4-styrenesulfonate). Several parameters including polymer concentration, flow rate, and flow ratio were ... via bulk mixing. Moreover, each micromixer could process more than 2 liters per hour with unaffected performance and the setup could easily be scaled-up by aligning several micromixers in parallel. This demonstrates that 3D printing can be used to prepare disposable high-throughput micromixers ...
Background High-throughput omics technologies have enabled the measurement of many genes or metabolites simultaneously. The resulting high dimensional experimental data poses significant challenges to transcriptomics and metabolomics data analysis methods, which may lead to spurious instead of biologically relevant results. One strategy to improve the results is the incorporation of prior biological knowledge in the analysis. This strategy is used to reduce the solution space and/or to focus the analysis on biological meaningful regions. In this article, we review a selection of these methods used in transcriptomics and metabolomics. We combine the reviewed methods in three groups based on the underlying mathematical model: exploratory methods, supervised methods and estimation of the covariance matrix. We discuss which prior knowledge has been used, how it is incorporated and how it modifies the mathematical properties of the underlying methods. PMID:25033193
Ghaderinezhad, Fariba; Amin, Reza; Temirel, Mikail; Yenilmez, Bekir; Wentworth, Adam; Tasoglu, Savas
Paper-based micro analytical devices offer significant advantages compared to the conventional microfluidic chips including cost-effectiveness, ease of fabrication, and ease of use while preserving critical features including strong capillary action and biological compatibility. In this work, we demonstrate an inexpensive, rapid method for high-throughput fabrication of paper-based microfluidics by patterning hydrophobic barriers using a desktop pen plotter integrated with a custom-made, low-cost paper feeder. We tested various types of commercial permanent markers and compared their water-resistant capabilities for creating hydrophobic barriers. Additionally, we studied the performance of markers with different types of paper, plotting speeds, and pattern dimensions. To verify the effectiveness of the presented fabrication method, colorimetric analysis was performed on the results of a glucose assay.
Nonlinear regression is often used to evaluate the toxicity of a chemical or a drug by fitting data from a dose-response study. Toxicologists and pharmacologists may draw a conclusion about whether a chemical is toxic by testing the significance of the estimated parameters. However, sometimes the null hypothesis cannot be rejected even though the fit is quite good. One possible reason for such cases is that the estimated standard errors of the parameter estimates are extremely large. In this paper, we propose robust ridge regression estimation procedures for nonlinear models to solve this problem. The asymptotic properties of the proposed estimators are investigated; in particular, their mean squared errors are derived. The performances of the proposed estimators are compared with several standard estimators using simulation studies. The proposed methodology is also illustrated using high throughput screening assay data obtained from the National Toxicology Program. Copyright © 2014 John Wiley & Sons, Ltd.
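The core idea, adding an L2 penalty to a nonlinear least-squares fit to stabilize otherwise ill-conditioned parameter estimates, can be sketched on a synthetic Hill-model dose-response (the model, penalty weight, and data below are illustrative assumptions, not the paper's robust estimators or the NTP assay data):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic dose-response data from a Hill model (invented values)
def hill(dose, top, ec50, slope):
    return top / (1.0 + (ec50 / np.maximum(dose, 1e-12)) ** slope)

rng = np.random.default_rng(0)
dose = np.logspace(-2, 2, 9)
y = hill(dose, top=100.0, ec50=1.0, slope=1.2) + rng.normal(0, 3, dose.size)

# Ridge-penalized nonlinear least squares: the L2 term shrinks the parameter
# estimates and tames the very large standard errors that plain nonlinear
# regression can produce on nearly flat likelihood surfaces.
def ridge_loss(theta, lam):
    resid = y - hill(dose, *theta)
    return np.sum(resid ** 2) + lam * np.sum(theta ** 2)

fit = minimize(ridge_loss, x0=[80.0, 0.5, 1.0], args=(0.01,), method="Nelder-Mead")
top_hat, ec50_hat, slope_hat = fit.x
```

With the penalty weight set to zero this reduces to ordinary nonlinear least squares, which is a convenient way to compare stabilized and unpenalized fits on the same data.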
Ganis, Gerardo; Bagnasco, Stefano
The thesis explores various practical approaches to making existing high-throughput computing applications common in High Energy Physics work on cloud-provided resources, as well as opening the possibility of running new applications. The work is divided into two parts: first, we describe the work done at the computing facility hosted by INFN Torino to entirely convert former Grid resources into cloud ones, eventually running Grid use cases on top along with many others in a more flexible way. Integration and conversion problems are duly described. The second part covers the development of solutions for automating the orchestration of cloud workers based on the load of a batch queue, and the development of HEP applications based on ROOT's PROOF that can adapt at runtime to a changing number of workers.
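The queue-driven orchestration idea can be reduced to a toy scaling rule (a hypothetical simplification for illustration, not the controller actually developed in the thesis):

```python
import math

def desired_workers(queued_jobs, running_jobs, jobs_per_worker=4,
                    min_workers=1, max_workers=50):
    """Toy elasticity rule: size the cloud worker pool to the batch-queue load.

    All parameters are invented defaults; a real controller would also
    consider startup latency, spot pricing, and hysteresis.
    """
    load = queued_jobs + running_jobs
    want = math.ceil(load / jobs_per_worker)
    # Clamp to the allowed pool size
    return max(min_workers, min(max_workers, want))
```

Calling it with a growing backlog shows the pool expanding until it hits the configured ceiling, the basic behavior an elastic batch system needs.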
In this paper we describe the design, implementation and performance of Trans4SCIF, a user-level socket-like transport library for the Intel Xeon Phi coprocessor. The Trans4SCIF library is primarily intended for high-throughput applications. It uses RDMA transfers over the native SCIF support, in a way that is transparent to the application, which has the illusion of using conventional stream sockets. We also discuss the integration of Trans4SCIF with the ZeroMQ messaging library, used extensively by several applications running at CERN. We show that this can lead to a substantial increase, of up to 3x, in application throughput compared to the default TCP/IP transport option.
Yao, Xu-Ri; Lan, Ruo-Ming; Liu, Xue-Feng; Zhu, Ge; Zheng, Fu; Yu, Wen-Kai; Zhai, Guang-Jie
Thermal imaging is an essential tool in a wide variety of research areas. In this work we demonstrate high-throughput double-wavelength temperature distribution imaging using a modified single-pixel camera without the requirement of a beam splitter (BS). A digital micro-mirror device (DMD) is utilized to display binary masks and split the incident radiation, which eliminates the necessity of a BS. Because the spatial resolution is dictated by the DMD, this thermal imaging system has the advantage of perfect spatial registration between the two images, which limits the need for pixel registration and fine adjustments. Two bucket detectors, which measure the total light intensity reflected from the DMD, are employed in this system and yield an improvement in the detection efficiency of the narrow-band radiation. A compressive imaging algorithm is utilized to achieve under-sampling recovery. A proof-of-principle experiment is presented to demonstrate the feasibility of this design.
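The physics behind double-wavelength thermometry can be illustrated with textbook ratio pyrometry under the Wien approximation (a generic relation; the wavelengths and the inversion below are illustrative assumptions, not taken from this paper):

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, T):
    # Wien approximation to Planck's law (up to a constant emissivity/optics factor)
    return lam ** -5 * np.exp(-C2 / (lam * T))

def temperature_from_ratio(ratio, lam1, lam2):
    # Invert ln(I1/I2) = 5*ln(lam2/lam1) - (C2/T)*(1/lam1 - 1/lam2) for T
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * np.log(lam2 / lam1) - np.log(ratio))

# Round-trip check at two hypothetical narrow-band wavelengths
lam1, lam2 = 0.8e-6, 0.9e-6
T_true = 1000.0
ratio = wien_intensity(lam1, T_true) / wien_intensity(lam2, T_true)
T_est = temperature_from_ratio(ratio, lam1, lam2)
```

Because the intensity ratio cancels common factors (emissivity, collection efficiency), only the two band-integrated signals are needed per pixel, which is what the two bucket detectors provide.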
Goecke, Nicole Bakkegård; Krog, Jesper Schak; Hjulsager, Charlotte Kristiane
... test and subsequent subtyping is performed by real-time RT-PCR (RT-qPCR), but several assays are needed to cover the wide range of circulating subtypes, which is expensive, resource- and time-demanding. To mitigate these restrictions, the high-throughput qPCR platform BioMark (Fluidigm) has been explored ... for the internal genes of IAV were validated and optimised to run under identical reaction conditions and assembled on a dynamic array chip (Fluidigm). Results: The sensitivity and specificity of the chip were assessed by testing cell culture isolates and field samples with known subtypes (based on sequencing ... services with reduced price and turnover time, which will facilitate the choice of vaccines and thereby lead to a reduction in antibiotic use.
Cruz, F. J.; Hjort, K.
Inertial focusing is a phenomenon in which particles migrate across streamlines in microchannels and focus at well-defined, size-dependent equilibrium points of the cross section. It can be exploited for focusing, separation and concentration of particles at high throughput and high efficiency. As particles decrease in size, smaller channels and higher pressures are needed; hence, new designs are required to decrease the pressure drop. In this work a novel design was adapted to focus and separate 1 µm from 3 µm spherical polystyrene particles. Spherical polystyrene particles of 0.5 µm were also separated, although in a band instead of a single line. The ability to separate, concentrate and focus bacteria, its simplicity of use and high throughput make this technology a candidate for daily routines in laboratories and hospitals.
Ivanov, Delyan P; Grabowska, Anna M; Garnett, Martin C
Mainstream adoption of physiologically relevant three-dimensional models has been slow over the last 50 years due to long, manual protocols with poor reproducibility, high price, and closed commercial platforms. This chapter describes high-throughput, low-cost, open methods for spheroid viability assessment which use readily available reagents and open-source software to analyze spheroid volume, metabolism, and enzymatic activity. We provide two ImageJ macros for automated spheroid size determination, for both single images and image stacks. We also share an Excel template spreadsheet allowing users to rapidly process spheroid size data, analyze plate uniformity (such as edge effects and systematic seeding errors), detect outliers, and calculate dose-response. The methods would be useful to researchers in preclinical and translational research planning to move away from simplistic monolayer studies and explore 3D spheroid screens for drug safety and efficacy without substantial investment in money or time.
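The spreadsheet-style post-processing can be sketched with two hypothetical helpers: a sphere-equivalent volume estimated from a projected 2D area, and a robust median/MAD outlier flag for plate QC (neither is the authors' actual macro or template code):

```python
import numpy as np

def spheroid_volume(projected_area_um2):
    """Sphere-equivalent volume (um^3) from a spheroid's projected area,
    assuming the spheroid is approximately spherical."""
    r = np.sqrt(projected_area_um2 / np.pi)
    return (4.0 / 3.0) * np.pi * r ** 3

def flag_outliers(volumes, z=3.0):
    """Flag wells whose spheroid volume deviates from the plate median
    by more than z robust standard deviations (median/MAD rule)."""
    v = np.asarray(volumes, dtype=float)
    med = np.median(v)
    mad = np.median(np.abs(v - med)) * 1.4826  # scale MAD to ~sigma
    if mad == 0:
        return np.zeros(v.size, dtype=bool)
    return np.abs(v - med) > z * mad
```

A unit-area disc (area = pi) maps to radius 1 and volume 4pi/3, and a well ten-fold larger than its plate-mates gets flagged while normal seeding variation does not.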
Zingue, Dezemon; Bouam, Amar; Militello, Muriel; Drancourt, Michel
Mycobacterium ulcerans is a close derivative of Mycobacterium marinum and the agent of Buruli ulcer in some tropical countries. Epidemiological and environmental studies have pointed towards stagnant water ecosystems as potential sources of M. ulcerans, yet the ultimate reservoirs remain elusive. We hypothesized that carbon substrate determination may help elucidate the spectrum of potential reservoirs. In a first step, the high-throughput phenotype microarray Biolog was used to profile carbon substrates in one M. marinum and five M. ulcerans strains. A total of 131/190 (69%) carbon substrates were metabolized by at least one M. ulcerans strain, including 28/190 (15%) carbon substrates metabolized by all five M. ulcerans strains, of which 21 substrates were also metabolized by M. marinum. In a second step, the 131 carbon substrates were investigated, through a bibliographical search, for their known environmental sources including plants, fruits and vegetables, bacteria, algae, fungi, nematodes, mollusks, mammals, insects and the inanimate environment. This analysis yielded significant associations of M. ulcerans with bacteria (p = 0.000), fungi (p = 0.001), algae (p = 0.003) and mollusks (p = 0.007). In a third step, the Medline database was cross-searched for bacteria, fungi, mollusks and algae as potential sources of carbon substrates metabolized by all tested M. ulcerans strains; it indicated that 57% of M. ulcerans substrates were associated with bacteria, 18% with algae, 11% with mollusks and 7% with fungi. This first report of high-throughput carbon substrate utilization by M. ulcerans should help in designing media to isolate and grow this pathogen. Furthermore, the presented data suggest that potential M. ulcerans environmental reservoirs might be related to micro-habitats where bacteria, fungi, algae and mollusks are abundant. This should be followed by targeted investigations in Buruli ulcer endemic regions.
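The reported associations are the kind of result a 2x2 contingency test on a substrate-by-source table produces. A sketch with Fisher's exact test and invented counts (the abstract reports only the resulting p-values, so the table below is purely illustrative):

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (counts are made up for illustration):
# rows: substrate metabolized by M. ulcerans / not metabolized
# cols: substrate documented in bacteria / not documented
table = [[75, 56],
         [10, 49]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
```

With counts this skewed the association is strongly significant; the study would have run such a test per source category (bacteria, fungi, algae, mollusks, ...).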
Taipa, M Angela
In the demanding field of proteomics, there is an urgent need for affinity-catcher molecules to implement effective and high throughput methods for analysing the human proteome or parts of it. Antibodies have an essential role in this endeavour, and selection, isolation and characterisation of specific antibodies represent a key issue to meet success. Alternatively, it is expected that new, well-characterised affinity reagents generated in rapid and cost-effective manners will also be used to facilitate the deciphering of the function, location and interactions of the high number of encoded protein products. Combinatorial approaches combined with high throughput screening (HTS) technologies have become essential for the generation and identification of robust affinity reagents from biological combinatorial libraries and the lead discovery of active/mimic molecules in large chemical libraries. Phage and yeast display provide the means for engineering a multitude of antibody-like molecules against any desired antigen. The construction of peptide libraries is commonly used for the identification and characterisation of ligand-receptor specific interactions, and the search for novel ligands for protein purification. Further improvement of chemical and biological resistance of affinity ligands encouraged the "intelligent" design and synthesis of chemical libraries of low-molecular-weight bio-inspired mimic compounds. No matter what the ligand source, selection and characterisation of leads is a most relevant task. Immunological assays, in microtiter plates, biosensors or microarrays, are a biological tool of inestimable value for the iterative screening of combinatorial ligand libraries for tailored specificities, and improved affinities. Particularly, enzyme-linked immunosorbent assays are frequently the method of choice in a large number of screening strategies, for both biological and chemical libraries.
Sampaio, Pedro N; Sales, Kevin C; Rosa, Filipa O; Lopes, Marta B; Calado, Cecília R C
To increase the knowledge of the recombinant cyprosin production process in Saccharomyces cerevisiae cultures, it is relevant to implement efficient bioprocess monitoring techniques. The present work focuses on the implementation of a mid-infrared (MIR) spectroscopy-based tool for monitoring the recombinant culture in a rapid, economic, and high-throughput (using a microplate system) mode. Multivariate data analysis on the MIR spectra of culture samples was conducted. Principal component analysis (PCA) enabled capturing the general metabolic status of the yeast cells, as replicated samples appear grouped together in the score plot and groups of culture samples according to the main growth phase can be clearly distinguished. The PCA-loading vectors also revealed spectral regions, and the corresponding chemical functional groups and biomolecules, that mostly contributed to the cell biomolecular fingerprint associated with the culture growth phase. These data were corroborated by the analysis of the samples' second-derivative spectra. Partial least squares (PLS) regression models built on the MIR spectra showed high predictive ability for estimating the critical bioprocess variables: biomass (R² = 0.99, RMSEP 2.8%); cyprosin activity (R² = 0.98, RMSEP 3.9%); glucose (R² = 0.93, RMSECV 7.2%); galactose (R² = 0.97, RMSEP 4.6%); ethanol (R² = 0.97, RMSEP 5.3%); and acetate (R² = 0.95, RMSEP 7.0%). In conclusion, high-throughput MIR spectroscopy and multivariate data analysis were effective in identifying the main growth phases and specific cyprosin production phases along the yeast culture, as well as in quantifying the critical variables of the process. This knowledge will promote future process optimization and control of the recombinant cyprosin bioprocess according to the Quality by Design framework.
Park, Daniel Sang-Won; Chen, Pin-Chuan; You, Byoung Hee; Kim, Namwon; Park, Taehyun; Lee, Tae Yoon; Soper, Steven A; Nikitopoulos, Dimitris E; Murphy, Michael C; Datta, Proyag; Desta, Yohannes
A high-throughput, multi-well (96) polymerase chain reaction (PCR) platform, based on a continuous flow (CF) mode of operation, was developed. Each CFPCR device was confined to a footprint of 8 × 8 mm², matching the footprint of a well on a standard micro-titer plate. While several CFPCR devices have been demonstrated, this is the first example of a high-throughput multi-well continuous flow thermal reactor configuration. Verification of the feasibility of the multi-well CFPCR device was carried out at each stage of development, from manufacturing to demonstrating sample amplification. The multi-well CFPCR devices were fabricated by micro-replication in polymers, in this case polycarbonate to accommodate the peak temperatures during thermal cycling, using double-sided hot embossing. One side of the substrate contained the thermal reactors and the opposite side was patterned with structures to enhance thermal isolation of the closely packed constant-temperature zones. A 99 bp target from a λ-DNA template was successfully amplified in a prototype multi-well CFPCR device with a total reaction time as low as ~5 min at a flow velocity of 3 mm s⁻¹ (15.3 s cycle⁻¹), with a relatively low amplification efficiency compared to a bench-top thermal cycler for a 20-cycle device; reducing the flow velocity to 1 mm s⁻¹ (46.2 s cycle⁻¹) gave a seven-fold improvement in amplification efficiency. Amplification efficiencies increased at all flow velocities for 25-cycle devices with the same configuration.
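The quoted operating points are internally consistent, as a quick back-of-the-envelope check shows: flow velocity times cycle time gives the channel length traversed per thermal cycle (about 46 mm in both regimes), and 20 cycles at 15.3 s per cycle is the quoted ~5 min run:

```python
def cycle_length_mm(velocity_mm_s, cycle_time_s):
    # Channel length a sample plug traverses per thermal cycle
    # in a continuous-flow PCR device
    return velocity_mm_s * cycle_time_s

def total_time_min(cycle_time_s, n_cycles=20):
    # Total residence time for a fixed-cycle-count serpentine channel
    return cycle_time_s * n_cycles / 60.0
```

Both operating points imply nearly the same per-cycle channel length, as expected for a fixed serpentine geometry where slowing the flow only stretches the residence time.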
Ernstsen, Christina L; Login, Frédéric H; Jensen, Helene H; Nørregaard, Rikke; Møller-Jensen, Jakob; Nejsum, Lene N
To target bacterial pathogens that invade and proliferate inside host cells, it is necessary to design intervention strategies directed against bacterial attachment, cellular invasion and intracellular proliferation. We present an automated microscopy-based, fast, high-throughput method for analyzing size and number of intracellular bacterial colonies in infected tissue culture cells. Cells are seeded in 48-well plates and infected with a GFP-expressing bacterial pathogen. Following gentamicin treatment to remove extracellular pathogens, cells are fixed and cell nuclei stained. This is followed by automated microscopy and subsequent semi-automated spot detection to determine the number of intracellular bacterial colonies, their size distribution, and the average number per host cell. Multiple 48-well plates can be processed sequentially and the procedure can be completed in one working day. As a model we quantified intracellular bacterial colonies formed by uropathogenic Escherichia coli (UPEC) during infection of human kidney cells (HKC-8). Urinary tract infections caused by UPEC are among the most common bacterial infectious diseases in humans. UPEC can colonize tissues of the urinary tract and is responsible for acute, chronic, and recurrent infections. In the bladder, UPEC can form intracellular quiescent reservoirs, thought to be responsible for recurrent infections. In the kidney, UPEC can colonize renal epithelial cells and pass to the blood stream, either via epithelial cell disruption or transcellular passage, to cause sepsis. Intracellular colonies are known to be clonal, originating from single invading UPEC. In our experimental setup, we found UPEC CFT073 intracellular bacterial colonies to be heterogeneous in size and present in nearly one third of the HKC-8 cells. This high-throughput experimental format substantially reduces experimental time and enables fast screening of the intracellular bacterial load and cellular distribution of multiple
Tseng, Chao-Neng; Chang, Yung-Ting; Chiu, Hui-Tzu; Chou, Yii-Cheng; Huang, Hurng-Wern; Cheng, Chien-Chung; Liao, Ming-Hui; Chang, Hsueh-Wei
Most species of penguins are sexually monomorphic, and it is therefore difficult to identify their sexes visually for monitoring population stability in terms of sex-ratio analysis. In this study, we evaluated the suitability of melting curve analysis (MCA) for high-throughput gender identification of penguins. A preliminary test indicated that the Griffiths's P2/P8 primers were not suitable for MCA. Based on sequence alignment of the Chromo-Helicase-DNA binding protein (CHD)-W and CHD-Z genes from four species of penguins (Pygoscelis papua, Aptenodytes patagonicus, Spheniscus magellanicus, and Eudyptes chrysocome), we redesigned forward primers for the CHD-W/CHD-Z-common region (PGU-ZW2) and the CHD-W-specific region (PGU-W2) to be used in combination with the reverse Griffiths's P2 primer. When tested with P. papua samples, PCR using the P2/PGU-ZW2 and P2/PGU-W2 primer sets generated two amplicons of 148 and 356 bp, respectively, which were easily resolved in 1.5% agarose gels. MCA indicated that the melting temperature (Tm) values for the P2/PGU-ZW2 and P2/PGU-W2 amplicons of P. papua samples were 79.75°C-80.5°C and 81.0°C-81.5°C, respectively. Females displayed both ZW-common and W-specific Tm peaks, whereas males were positive only for the ZW-common peak. Taken together, our redesigned primers coupled with MCA allow precise high-throughput gender identification for P. papua, and potentially for other penguin species such as A. patagonicus, S. magellanicus, and E. chrysocome as well.
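The peak-calling logic implied by the reported Tm windows can be sketched as a small classifier (the windows are the P. papua values from the text; the decision rule itself is a simplified assumption, not the authors' analysis code):

```python
def classify_penguin_sex(tm_peaks,
                         zw_range=(79.75, 80.5),   # ZW-common amplicon Tm window
                         w_range=(81.0, 81.5)):    # W-specific amplicon Tm window
    """Classify sex from the list of melting-temperature peaks of a sample."""
    def in_range(tm, bounds):
        lo, hi = bounds
        return lo <= tm <= hi

    has_zw = any(in_range(t, zw_range) for t in tm_peaks)
    has_w = any(in_range(t, w_range) for t in tm_peaks)

    if has_zw and has_w:
        return "female"        # both ZW-common and W-specific peaks
    if has_zw:
        return "male"          # ZW-common peak only
    return "undetermined"      # no valid amplification
```

A sample with peaks at 80.0°C and 81.2°C is called female, a single 80.1°C peak male, and an empty melt profile is left undetermined rather than guessed.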
Karouia, Fathi; Peyvan, Kianoosh; Pohorille, Andrew
Space biotechnology is a nascent field aimed at applying tools of modern biology to advance our goals in space exploration. These advances rely on our ability to exploit in situ high-throughput techniques for amplifying and sequencing DNA and for measuring levels of RNA transcripts, proteins and metabolites in a cell. These techniques, collectively known as "omics" techniques, have already revolutionized terrestrial biology. A number of on-going efforts are aimed at developing instruments to carry out "omics" research in space, in particular on board the International Space Station and small satellites. For space applications these instruments require substantial and creative reengineering that includes automation, miniaturization and ensuring that the device is resistant to conditions in space and works independently of the direction of the gravity vector. Different paths taken to meet these requirements for different "omics" instruments are the subjects of this review. The advantages and disadvantages of these instruments and technological solutions and their level of readiness for deployment in space are discussed. Considering that the effects of space environments on terrestrial organisms appear to be global, it is argued that high-throughput instruments are essential to advance (1) biomedical and physiological studies to control and reduce space-related stressors on living systems, (2) application of biology to life support and in situ resource utilization, (3) planetary protection, and (4) basic research about the limits on life in space. It is also argued that carrying out measurements in situ provides considerable advantages over the traditional space biology paradigm that relies on post-flight data analysis. Published by Elsevier Inc.
Ivanic, Joseph; Yu, Xueping; Wallqvist, Anders; Reifman, Jaques
Experimental protein-protein interaction (PPI) networks are increasingly being exploited in diverse ways for biological discovery. Accordingly, it is vital to discern their underlying natures by identifying and classifying the various types of deterministic (specific) and probabilistic (nonspecific) interactions detected. To this end, we have analyzed PPI networks determined using a range of high-throughput experimental techniques with the aim of systematically quantifying any biases that arise from the varying cellular abundances of the proteins. We confirm that PPI networks determined using affinity purification methods for yeast and Escherichia coli incorporate a correlation between protein degree, or number of interactions, and cellular abundance. The observed correlations are small but statistically significant and occur in both unprocessed (raw) and processed (high-confidence) data sets. In contrast, the yeast two-hybrid system yields networks that contain no such relationship. While previously commented based on mRNA abundance, our more extensive analysis based on protein abundance confirms a systematic difference between PPI networks determined from the two technologies. We additionally demonstrate that the centrality-lethality rule, which implies that higher-degree proteins are more likely to be essential, may be misleading, as protein abundance measurements identify essential proteins to be more prevalent than nonessential proteins. In fact, we generally find that when there is a degree/abundance correlation, the degree distributions of nonessential and essential proteins are also disparate. Conversely, when there is no degree/abundance correlation, the degree distributions of nonessential and essential proteins are not different. However, we show that essentiality manifests itself as a biological property in all of the yeast PPI networks investigated here via enrichments of interactions between essential proteins. These findings provide valuable insights
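The degree/abundance analysis can be sketched numerically: with synthetic abundances and two toy networks, one whose expected degree weakly tracks abundance (affinity-purification-like) and one independent of it (yeast-two-hybrid-like), a Spearman rank correlation exposes the small-but-significant bias the text describes (all values below are invented):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

# Synthetic proteome: abundances spanning several orders of magnitude
abundance = rng.lognormal(mean=5, sigma=2, size=2000)

# AP-like network: expected degree rises weakly with log-abundance
lam = np.clip(2.0 + 0.5 * np.log(abundance), 0.1, None)
degree_ap = rng.poisson(lam)

# Y2H-like network: degree drawn independently of abundance
degree_y2h = rng.poisson(5.0, size=2000)

rho_ap, p_ap = spearmanr(abundance, degree_ap)     # modest positive, significant
rho_y2h, p_y2h = spearmanr(abundance, degree_y2h)  # near zero
```

The rank correlation is deliberately used instead of Pearson's r because abundances are heavy-tailed; the same choice is standard in real PPI bias analyses.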
Sadeghian, H.; Koster, N. B.; van den Dool, T. C.
The main driver for the semiconductor and Bio-MEMS industries is decreasing feature size, moving from the current state of the art at 22 nm towards the 10 nm node. Consequently, smaller defects and particles become problematic due to their size and number, making their inspection and characterization very challenging. Existing industrial metrology and inspection methods cannot fulfil the requirements for these smaller features. Scanning probe microscopy (SPM) has the distinct advantage of being able to discern the atomic structure of the substrate. It can image not only the 3D topography but also a variety of material, mechanical and chemical properties. SPM has therefore been suggested as one of the technologies that can fulfil future requirements in terms of resolution and accuracy, while being capable of resolving 3D features. However, the throughput of current state-of-the-art SPMs is extremely low compared to high-volume manufacturing requirements. This paper presents the development of an architecture for a fully automated high-throughput SPM, which can meet the requirements of future process metrology and inspection for 450 mm wafers. The targeted specifications of the concept are (1) inspecting more than 20 sites per wafer, (2) each site with dimensions of about 10 × 10 μm² (scalable to 100 × 100 μm²), and (3) a throughput of more than 7 wafers per hour, or 70 wafers per hour with a coarse/fine scanning approach. The progress of the high-throughput SPM development is discussed and the baseline design of the critical sub-modules and the research issues are presented.
Call, Douglas F.
There is great interest in studying exoelectrogenic microorganisms, but existing methods can require expensive electrochemical equipment and specialized reactors. We developed a simple system for conducting high-throughput bioelectrochemical research using multiple inexpensive microbial electrolysis cells (MECs) built with commercially available materials and operated using a single power source. MECs were small crimp-top serum bottles (5 mL) with a graphite plate anode (92 m²/m³) and a cathode of stainless steel (SS) mesh (86 m²/m³), graphite plate, SS wire, or platinum wire. The highest volumetric current density (240 A/m³, applied potential of 0.7 V) was obtained using a SS mesh cathode and a wastewater inoculum (acetate electron donor). Parallel-operated MECs (single power source) showed no difference in performance compared to non-parallel-operated MECs, which can allow for high-throughput reactor operation (>1000 reactors) using a single power supply. The utility of this method for cultivating exoelectrogenic microorganisms was demonstrated through comparison of buffer effects on pure (Geobacter sulfurreducens and Geobacter metallireducens) and mixed cultures. Mixed cultures produced current densities equal to or higher than pure cultures in the different media, and current densities for all cultures were higher using a 50 mM phosphate buffer than a 30 mM bicarbonate buffer. Only the mixed culture was capable of sustained current generation with a 200 mM phosphate buffer. These results demonstrate the usefulness of this inexpensive method for conducting in-depth examinations of pure and mixed exoelectrogenic cultures. © 2011 Elsevier B.V.
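Volumetric current density is simply the measured current normalized by reactor liquid volume. A minimal sketch, with an illustrative current value chosen to reproduce the reported peak:

```python
# Volumetric current density (A/m^3) for a 5 mL serum-bottle MEC.
# The 1.2 mA current below is illustrative; the abstract reports a
# peak of 240 A/m^3 at 0.7 V applied with an SS mesh cathode.
reactor_volume_m3 = 5e-6       # 5 mL liquid volume
measured_current_a = 1.2e-3    # hypothetical measured current, 1.2 mA

volumetric_density = measured_current_a / reactor_volume_m3  # A/m^3
```

Normalizing by volume (rather than electrode area) makes reactors of different geometries directly comparable, which suits a screening setup like this one.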
Pinto, Nicolas; Doukhan, David; DiCarlo, James J; Cox, David D
While many models of biological object recognition share a common set of "broad-stroke" properties, the performance of any one model depends strongly on the choice of parameters in a particular instantiation of that model--e.g., the number of units per layer, the size of pooling kernels, exponents in normalization operations, etc. Since the number of such parameters (explicit or implicit) is typically large and the computational cost of evaluating one particular parameter set is high, the space of possible model instantiations goes largely unexplored. Thus, when a model fails to approach the abilities of biological visual systems, we are left uncertain whether this failure is because we are missing a fundamental idea or because the correct "parts" have not been tuned correctly, assembled at sufficient scale, or provided with enough training. Here, we present a high-throughput approach to the exploration of such parameter sets, leveraging recent advances in stream processing hardware (high-end NVIDIA graphics cards and the PlayStation 3's IBM Cell Processor). In analogy to high-throughput screening approaches in molecular biology and genetics, we explored thousands of potential network architectures and parameter instantiations, screening those that show promising object recognition performance for further analysis. We show that this approach can yield significant, reproducible gains in performance across an array of basic object recognition tasks, consistently outperforming a variety of state-of-the-art purpose-built vision systems from the literature. As the scale of available computational power continues to expand, we argue that this approach has the potential to greatly accelerate progress in both artificial vision and our understanding of the computational underpinning of biological vision.
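The screening strategy described can be sketched as random sampling from a parameter space, scoring each instantiation, and keeping the top performers. The parameter names and the scoring function below are hypothetical stand-ins (a real screen would train and benchmark each model on GPU/Cell hardware):

```python
import random

# Hypothetical parameter space for a layered vision model.
param_space = {
    "units_per_layer": [64, 128, 256, 512],
    "pool_kernel": [3, 5, 7, 9],
    "norm_exponent": [1.0, 1.5, 2.0],
}

def sample_params(rng):
    # Draw one random model instantiation from the space.
    return {k: rng.choice(v) for k, v in param_space.items()}

def evaluate(params):
    # Stand-in for "train model, measure object-recognition accuracy".
    return (params["units_per_layer"] / 512
            + 1.0 / params["pool_kernel"]
            - abs(params["norm_exponent"] - 2.0))

rng = random.Random(0)
candidates = [sample_params(rng) for _ in range(200)]
scored = sorted(candidates, key=evaluate, reverse=True)
top_models = scored[:10]  # retain the best performers for further analysis
```

The full space here has only 4 × 4 × 3 = 48 points; in the paper's setting the (explicit and implicit) space is vastly larger, which is exactly why high-throughput screening rather than exhaustive search is needed.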
Allmendinger, Andrea; Dieu, Le-Ha; Fischer, Stefan; Mueller, Robert; Mahler, Hanns-Christian; Huwyler, Jörg
Viscosity characterization of protein formulations is of utmost importance for the development of subcutaneously administered formulations. However, viscosity determinations are time-consuming and require large sample volumes, in the range of hundreds of microliters to a few milliliters, depending on the method used. In this article, an automated, high-throughput method is described to determine the dynamic viscosity of Newtonian fluids using standard capillary electrophoresis (CE) equipment. CE is an analytical method routinely used for the separation and characterization of proteins. In our set-up, the capillary is filled with the test sample, and a constant pressure is applied. A small aliquot of riboflavin is subsequently loaded into the capillary and used as a dye to monitor movement of the protein sample. The migration time of the riboflavin peak moving through the filled capillary is converted to viscosity by applying the Hagen-Poiseuille law. The instrument is operated without applying an electrical field. Repeatability, robustness, linearity, and reproducibility were demonstrated for different capillary lots and instruments, as well as for different capillary lengths and diameters. Accuracy was verified by comparing the viscosity data obtained by CE instrumentation with those obtained by cone/plate rheometry. The suitability of the method for protein formulations was demonstrated, and its limitations are discussed. Typical viscosities in the range of 5-40 mPa·s were reliably measured with this method. Advantages of the CE instrumentation-based method include short measurement times (1-15 min) and small sample volumes (a few microliters) for a capillary with a diameter of 50 μm and a length of 20.5 cm, as well as the potential to be suitable for high-throughput measurements. Copyright © 2014 Elsevier B.V. All rights reserved.
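The time-to-viscosity conversion follows from the Hagen-Poiseuille relation for pressure-driven laminar flow in a capillary: the mean velocity is v = ΔP·d²/(32·η·L), so if the dye traverses the effective length L in time t, then η = ΔP·d²·t/(32·L²). A minimal sketch, assuming for simplicity that the detection point sits at the full capillary length (all numeric values illustrative, not the paper's operating conditions):

```python
# Dynamic viscosity from dye migration time via Hagen-Poiseuille:
#   eta = dP * d^2 * t / (32 * L^2)
dP = 5000.0   # applied pressure, Pa (illustrative)
d = 50e-6     # capillary inner diameter, m (50 um)
L = 0.205     # capillary length, m (20.5 cm)
t = 600.0     # riboflavin migration time, s (illustrative)

eta = dP * d**2 * t / (32 * L**2)  # dynamic viscosity, Pa*s
eta_mpas = eta * 1000              # convert to mPa*s
```

With these numbers the result lands in the 5-40 mPa·s range the method covers; in a real instrument the detection window is shorter than the total capillary length, so the formula would use both lengths.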
Batista, Michel; Marchini, Fabricio K; Celedon, Paola A F; Fragoso, Stenio P; Probst, Christian M; Preti, Henrique; Ozaki, Luiz S; Buck, Gregory A; Goldenberg, Samuel; Krieger, Marco A
The three trypanosomatids pathogenic to humans, Trypanosoma cruzi, Trypanosoma brucei and Leishmania major, are the etiological agents of Chagas disease, African sleeping sickness and cutaneous leishmaniasis, respectively. The complete sequencing of these trypanosomatid genomes represented a breakthrough in the understanding of these organisms. Genome sequencing is a step towards solving the parasite biology puzzle, as a high percentage of genes encode proteins without functional annotation. Also, technical limitations in protein expression in heterologous systems reinforce the evident need for the development of a high-throughput reverse genetics platform. Ideally, such a platform would allow efficient cloning and compatibility with various approaches. Thus, we aimed to construct a highly efficient cloning platform compatible with plasmid vectors that are suitable for various approaches. We constructed a platform with a flexible structure allowing the exchange of various elements, such as promoters, fusion tags, intergenic regions or resistance markers. This platform is based on Gateway® technology, to ensure a fast and efficient cloning system. We obtained plasmid vectors carrying genes for fluorescent proteins (green, cyan or yellow), and sequences for the c-myc epitope, and tandem affinity purification or polyhistidine tags. The vectors were verified by successful subcellular localization of two previously characterized proteins (TcRab7 and PAR 2) and a putative centrin. For the tandem affinity purification tag, the purification of two protein complexes (ribosome and proteasome) was performed. We constructed plasmids with an efficient cloning system that are suitable for use across various applications, such as protein localization and co-localization, protein partner identification and protein expression. This platform also allows vector customization, as the vectors were constructed to enable easy exchange of their elements.
While recently developed short-read sequencing technologies may dramatically reduce the sequencing cost and eventually achieve the $1000 goal for re-sequencing, their limitations prevent the de novo sequencing of eukaryotic genomes with the standard shotgun sequencing protocol. We present SHRAP (SHort Read Assembly Protocol), a sequencing protocol and assembly methodology that utilizes high-throughput short-read technologies. We describe a variation on hierarchical sequencing with two crucial differences: (1) we select a clone library from the genome randomly rather than as a tiling path and (2) we sample clones from the genome at high coverage and reads from the clones at low coverage. We assume that 200 bp read lengths with a 1% error rate and inexpensive random fragment cloning on whole mammalian genomes are feasible. Our assembly methodology is based on first ordering the clones and subsequently performing read assembly in three stages: (1) local assemblies of regions significantly smaller than a clone size, (2) clone-sized assemblies of the results of stage 1, and (3) chromosome-sized assemblies. By aggressively localizing the assembly problem during the first stage, our method succeeds in assembling short, unpaired reads sampled from repetitive genomes. We tested our assembler using simulated reads from D. melanogaster and human chromosomes 1, 11, and 21, and produced assemblies with large sets of contiguous sequence and a misassembly rate comparable to other draft assemblies. Tested on D. melanogaster and the entire human genome, our clone-ordering method produces accurate maps, thereby localizing fragment assembly and enabling the parallelization of the subsequent steps of our pipeline. Thus, we have demonstrated that truly inexpensive de novo sequencing of mammalian genomes will soon be possible with high-throughput, short-read technologies using our methodology.
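The high-clone-coverage/low-read-coverage trade-off can be illustrated with the classic Lander-Waterman estimate: at coverage c, the expected fraction of a target left uncovered is e^(-c). The coverage values below are illustrative, not SHRAP's actual parameters:

```python
import math

def uncovered_fraction(coverage):
    # Lander-Waterman: expected uncovered fraction at coverage c is e^(-c).
    return math.exp(-coverage)

# High clone coverage ensures nearly every genomic position lies in
# some clone, localizing the assembly problem...
clone_coverage = 10.0
genome_gap_fraction = uncovered_fraction(clone_coverage)      # ~4.5e-5

# ...while each individual clone is read-sampled only lightly,
# leaving gaps that overlapping clones compensate for.
read_coverage_per_clone = 2.0
clone_gap_fraction = uncovered_fraction(read_coverage_per_clone)  # ~0.135
```

This is why the protocol tolerates sparse reads per clone: a region missed in one clone is very likely covered by reads from an overlapping clone.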
Trout-Yakel Keri M
Background: A large, multi-province outbreak of listeriosis associated with ready-to-eat meat products contaminated with Listeria monocytogenes serotype 1/2a occurred in Canada in 2008. Subtyping of outbreak-associated isolates using pulsed-field gel electrophoresis (PFGE) revealed two similar but distinct AscI PFGE patterns. High-throughput pyrosequencing of two L. monocytogenes isolates was used to rapidly provide the genome sequence of the primary outbreak strain and to investigate the extent of genetic diversity associated with a change of a single restriction enzyme fragment during PFGE. Results: The chromosomes were collinear, but differences included 28 single nucleotide polymorphisms (SNPs) and three indels, including a 33 kbp prophage that accounted for the observed difference in AscI PFGE patterns. The distribution of these traits was assessed within further clinical, environmental and food isolates associated with the outbreak, and this comparison indicated that three distinct, but highly related, strains may have been involved in this nationwide outbreak. Notably, these two isolates were found to harbor a 50 kbp putative mobile genomic island encoding translocation and efflux functions that has not been observed in other Listeria genomes. Conclusions: High-throughput genome sequencing provided a more detailed real-time assessment of genetic traits characteristic of the outbreak strains than could be achieved with routine subtyping methods. This study confirms that the latest generation of DNA sequencing technologies can be applied during high-priority public health events, and laboratories need to prepare for this inevitability and assess how to properly analyze and interpret whole genome sequences in the context of molecular epidemiology.
Diaz, P.I.; Dupuy, A.K.; Abusleme, L.; Reese, B.; Obergfell, C.; Choquette, L.; Dongari-Bagtzoglou, A.; Peterson, D.E.; Terzi, E.; Strausbaugh, L.D.
Summary: High-throughput sequencing of 16S ribosomal RNA gene amplicons is a cost-effective method for characterization of oral bacterial communities. However, before undertaking large-scale studies, it is necessary to understand the technique-associated limitations and intrinsic variability of the oral ecosystem. In this work we evaluated bias in species representation using an in vitro-assembled mock community of oral bacteria. We then characterized the bacterial communities in saliva and buccal mucosa of five healthy subjects to investigate the power of high-throughput sequencing in revealing their diversity and biogeography patterns. Mock community analysis showed primer and DNA isolation biases and an overestimation of diversity that was reduced after eliminating singleton operational taxonomic units (OTUs). Sequencing of salivary and mucosal communities found a total of 455 OTUs (0.3% dissimilarity) with only 78 of these present in all subjects. We demonstrate that this variability was partly the result of incomplete richness coverage even at great sequencing depths, and so comparing communities by their structure was more effective than comparisons based solely on membership. With respect to oral biogeography, we found that inter-subject variability in community structure was lower than site differences between salivary and mucosal communities within subjects. These differences were evident at very low sequencing depths and were mostly caused by the abundance of Streptococcus mitis and Gemella haemolysans in mucosa. In summary, we present an experimental and data analysis framework that will facilitate design and interpretation of pyrosequencing-based studies. Despite challenges associated with this technique, we demonstrate its power for evaluation of oral diversity and biogeography patterns. PMID:22520388
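The singleton-filtering step that reduced the diversity overestimate can be sketched as dropping every OTU observed exactly once across the data set. The abundance table below is a toy example:

```python
# Toy OTU abundance table: OTU id -> total read count across samples.
otu_counts = {
    "OTU_1": 1520, "OTU_2": 430, "OTU_3": 1,
    "OTU_4": 87,   "OTU_5": 1,   "OTU_6": 12,
}

# Remove singleton OTUs (count == 1), which are frequently sequencing
# artifacts and inflate richness estimates.
filtered = {otu: n for otu, n in otu_counts.items() if n > 1}

richness_before = len(otu_counts)   # 6 OTUs
richness_after = len(filtered)      # 4 OTUs
```

In practice the threshold (singletons only, or also doubletons) is a study design choice, since aggressive filtering also discards genuinely rare taxa.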
The increasing prevalence of N. gonorrhoeae strains exhibiting decreased susceptibility to third-generation cephalosporins and the recent isolation of two distinct strains with high-level resistance to cefixime or ceftriaxone herald the possible demise of β-lactam antibiotics as effective treatments for gonorrhea. To identify new compounds that inhibit penicillin-binding proteins (PBPs), which are proven targets for β-lactam antibiotics, we developed a high-throughput assay that uses fluorescence polarization (FP) to distinguish the fluorescent penicillin, Bocillin-FL, in free or PBP-bound form. This assay was used to screen a 50,000-compound library for potential inhibitors of N. gonorrhoeae PBP 2, and 32 compounds were identified that exhibited >50% inhibition of Bocillin-FL binding to PBP 2. These included a cephalosporin, which provided validation of the assay. After elimination of compounds that failed to exhibit concentration-dependent inhibition, the antimicrobial activity of the remaining 24 was tested. Of these, 7 showed antimicrobial activity against susceptible and penicillin- or cephalosporin-resistant strains of N. gonorrhoeae. In molecular docking simulations using the crystal structure of PBP 2, two of these inhibitors docked into the active site of the enzyme, and each mediates interactions with the active-site serine nucleophile. This study demonstrates the validity of an FP-based assay for finding novel inhibitors of PBPs and paves the way for more comprehensive high-throughput screening against highly resistant strains of N. gonorrhoeae. It also provides a set of lead compounds for optimization of anti-gonococcal agents.
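The >50% inhibition hit criterion can be expressed as a percent-inhibition calculation from the FP readings, normalized between the bound-probe and free-probe controls. The mP values below are illustrative, not the paper's data:

```python
# Percent inhibition of Bocillin-FL binding from fluorescence
# polarization (FP) readings, in millipolarization (mP) units.
fp_bound = 180.0  # control: probe fully PBP 2-bound (no inhibitor)
fp_free = 60.0    # control: free probe (no protein)

def percent_inhibition(fp_sample):
    # 0% = fully bound probe, 100% = fully displaced (free) probe.
    return 100.0 * (fp_bound - fp_sample) / (fp_bound - fp_free)

# Screening: flag wells exceeding the 50% inhibition hit threshold.
well_readings = [170.0, 110.0, 75.0]
hits = [fp for fp in well_readings if percent_inhibition(fp) > 50.0]
```

A lower FP reading means more probe has been displaced from the PBP, i.e. stronger inhibition of Bocillin-FL binding.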
Jakob D Wikstrom
The pancreatic beta cell is unique in responding to nutrients with increased fuel oxidation. Recent studies have demonstrated that oxygen consumption rate (OCR) may be a valuable predictor of islet quality and long-term nutrient responsiveness. To date, high-throughput and user-friendly assays for islet respiration have been lacking. The aim of this study was to develop such an assay and to examine the bioenergetic efficiency of rodent and human islets. The XF24 respirometer platform was adapted to islets by the development of a 24-well plate specifically designed to confine islets. The islet plate generated data with low inter-well variability and enabled stable measurement of oxygen consumption for hours. The F1F0 ATP synthase blocker oligomycin was used to assess uncoupling, while rotenone together with myxothiazol/antimycin was used to measure the level of non-mitochondrial respiration. The use of oligomycin in islets was validated by reversing its effect in the presence of the uncoupler FCCP. Respiratory leak averaged 59% and 49% of basal OCR in islets from C57BL/6J and FVB/N mice, respectively. In comparison, the respiratory leak of INS-1 cells and C2C12 myotubes was measured at 38% and 23%, respectively. Islets from a cohort of human donors showed a respiratory leak of 38%, significantly lower than that of mouse islets. The assay for islet respiration presented here provides a novel tool that can be used to study islet mitochondrial function in a relatively high-throughput manner. The data obtained in this study show that rodent islets are less bioenergetically efficient than human islets as well as INS-1 cells.
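The leak estimate follows from the inhibitor sequence described: respiration remaining after oligomycin (ATP synthase blocked), corrected for the non-mitochondrial portion measured after rotenone + myxothiazol/antimycin. A minimal sketch with illustrative OCR values, one common way such a leak fraction is computed:

```python
# Respiratory (proton) leak as a fraction of mitochondrial basal OCR.
# OCR values (arbitrary units, e.g. pmol O2/min) are illustrative.
ocr_basal = 100.0      # baseline islet respiration
ocr_oligomycin = 62.0  # after oligomycin blocks F1F0 ATP synthase
ocr_nonmito = 10.0     # after rotenone + myxothiazol/antimycin

mito_basal = ocr_basal - ocr_nonmito                 # mitochondrial portion
leak = (ocr_oligomycin - ocr_nonmito) / mito_basal   # leak fraction, ~0.58
```

The complement (1 - leak) is the fraction of mitochondrial respiration coupled to ATP synthesis, i.e. the bioenergetic efficiency the study compares across rodent islets, human islets, and cell lines.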
Xu, Like; Ouyang, Weiying; Qian, Yanyun; Su, Chao; Su, Jianqiang; Chen, Hong
Antibiotic resistance genes (ARGs) are present in surface water and often cannot be completely eliminated by drinking water treatment plants (DWTPs). Improper elimination of ARG-harboring microorganisms can contaminate the water supply and lead to animal and human disease. Therefore, it is of utmost importance to determine the most effective ways by which DWTPs can eliminate ARGs. Here, we tested water samples from two DWTPs and their distribution systems and detected the presence of 285 ARGs, 8 transposases, and intI-1 by utilizing high-throughput qPCR. The prevalence of ARGs differed between the two DWTPs, one of which employed conventional water treatments while the other had advanced treatment processes. The relative abundance of ARGs increased significantly after treatment with biological activated carbon (BAC), raising the number of detected ARGs from 76 to 150. Furthermore, the final chlorination step enhanced the relative abundance of ARGs in the finished water generated from both DWTPs. The total enrichment of ARGs varied from 6.4- to 109.2-fold in tap water compared to finished water, among which beta-lactam resistance genes displayed the highest enrichment. Six transposase genes were detected in tap water samples, with the transposase gene TnpA-04 showing the greatest enrichment (up to 124.9-fold). We observed significant positive correlations between ARGs and mobile genetic elements (MGEs) throughout the distribution systems, indicating that transposases and intI-1 may contribute to antibiotic resistance in drinking water. To our knowledge, this is the first study to investigate the diversity and abundance of ARGs in drinking water treatment systems utilizing high-throughput qPCR techniques in China. Copyright © 2016 Elsevier Ltd. All rights reserved.
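Fold enrichment of a gene between two water samples is typically derived from qPCR Ct values with the 2^-ΔΔCt method, normalizing each ARG to a reference gene such as 16S rRNA. The Ct values below are illustrative, not the study's measurements:

```python
# Fold enrichment of one ARG in tap water relative to finished water,
# via the 2^-ddCt method with 16S rRNA as the reference gene.
ct_arg_tap, ct_16s_tap = 24.0, 12.0
ct_arg_finished, ct_16s_finished = 28.5, 12.5

dct_tap = ct_arg_tap - ct_16s_tap                  # delta-Ct, tap water
dct_finished = ct_arg_finished - ct_16s_finished   # delta-Ct, finished water

# Lower delta-Ct means relatively more target; each Ct unit is one
# doubling, hence the power of 2.
fold_enrichment = 2 ** (dct_finished - dct_tap)    # 16-fold here
```

Repeating this per gene across the 285-assay panel yields the per-ARG enrichment profile from which ranges like 6.4- to 109.2-fold are summarized.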
Thompson, Scott; Messick, Troy; Schultz, David C.; Reichman, Melvin; Lieberman, Paul M.
Latent infection with Epstein-Barr Virus (EBV) is a carcinogenic cofactor in several lymphoid and epithelial cell malignancies. At present, there are no small molecule inhibitors that specifically target EBV latent infection or latency-associated oncoproteins. EBNA1 is an EBV-encoded sequence-specific DNA-binding protein that is consistently expressed in EBV-associated tumors and required for stable maintenance of the viral genome in proliferating cells. EBNA1 is also thought to provide cell survival function in latently infected cells. In this work we describe the development of a biochemical high-throughput screening (HTS) method using a homogenous fluorescence polarization (FP) assay monitoring EBNA1 binding to its cognate DNA binding site. An FP-based counterscreen was developed using another EBV-encoded DNA binding protein, Zta, and its cognate DNA binding site. We demonstrate that EBNA1 binding to a fluorescent labeled DNA probe provides a robust assay with a Z-factor consistently greater than 0.6. A pilot screen of a small molecule library of ~14,000 compounds identified 3 structurally related molecules that selectively inhibit EBNA1, but not Zta. All three compounds had activity in a cell-based assay specific for the disruption of EBNA1 transcription repression function. One of the compounds was effective in reducing EBV genome copy number in Raji Burkitt lymphoma cells. These experiments provide a proof-of-concept that small molecule inhibitors of EBNA1 can be identified by biochemical high-throughput screening of compound libraries. Further screening in conjunction with medicinal chemistry optimization may provide a selective inhibitor of EBNA1 and EBV latent infection. PMID:20930215
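The Z-factor used to judge assay robustness is computed from positive- and negative-control wells as Z = 1 - 3(σp + σn)/|μp - μn|. A sketch with illustrative mP readings (not the paper's data); values above ~0.5 are conventionally considered excellent for screening:

```python
import statistics

def z_factor(pos, neg):
    # Z = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|
    return 1 - 3 * (statistics.stdev(pos) + statistics.stdev(neg)) / abs(
        statistics.fmean(pos) - statistics.fmean(neg))

# Illustrative FP readings (mP) from control wells:
pos_wells = [178.0, 182.0, 180.0, 179.0, 181.0]  # EBNA1 + labeled DNA (bound)
neg_wells = [61.0, 59.0, 60.0, 62.0, 58.0]       # free labeled DNA probe

z = z_factor(pos_wells, neg_wells)
```

Because Z penalizes both control variability and a narrow signal window, a consistently high value (the paper reports >0.6) indicates the assay can reliably separate hits from noise at library scale.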
Agrobacterium-mediated transformation of plants with T-DNA is used both to introduce transgenes and for mutagenesis. Conventional approaches used to identify the genomic location and the structure of the inserted T-DNA are laborious, and high-throughput methods using next-generation sequencing are being developed to address these problems. Here, we present a cost-effective approach that uses sequence capture targeted to the T-DNA borders to select genomic DNA fragments containing T-DNA-genome junctions, followed by Illumina sequencing to determine the location and junction structure of T-DNA insertions. Multiple probes can be mixed so that transgenic lines transformed with different T-DNA types can be processed simultaneously, using a simple, index-based pooling approach. We also developed a simple bioinformatic tool to find sequence read pairs that span the junction between the genome and T-DNA or any foreign DNA. We analyzed 29 transgenic lines of Arabidopsis thaliana, each containing inserts from 4 different T-DNA vectors. We determined the location of T-DNA insertions in 22 lines, 4 of which carried multiple insertion sites. Additionally, our analysis uncovered a high frequency of unconventional and complex T-DNA insertions, highlighting the need for high-throughput methods for T-DNA localization and structural characterization. Transgene insertion events have to be fully characterized prior to use as commercial products. Our method greatly facilitates the first step of this characterization of transgenic plants by providing an efficient screen for the selection of promising lines.
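The core idea of junction detection can be sketched as flagging reads whose sequence is part genome, part T-DNA border: the read either ends inside the start of the border or begins inside its end. This is a simplified substring heuristic, not the authors' tool, and the border motif below is a made-up example:

```python
# Hypothetical T-DNA border motif (toy sequence, not a real border).
TDNA_BORDER = "GGCAGGATATATTGTGGTGTAAAC"

def spans_junction(read, border, min_overlap=8):
    # A junction read carries genomic sequence plus a partial border:
    # it ends with a prefix of the border, or starts with a suffix of it,
    # with at least min_overlap matching bases but not the read's full length.
    for k in range(min_overlap, min(len(read), len(border)) + 1):
        if read.endswith(border[:k]) and len(read) > k:
            return True
        if read.startswith(border[-k:]) and len(read) > k:
            return True
    return False

reads = [
    "TTACGATCGATTGGCAGGATATAT",  # genome prefix + border start: junction
    "TTTTACGAACGATCGATCGATCGA",  # genomic only: no junction
    "GGCAGGATATATTGTGGTGTAAAC",  # entirely within the border: no junction
]
junction_reads = [r for r in reads if spans_junction(r, TDNA_BORDER)]
```

A production pipeline would instead use alignment (mapping one mate to the genome and the other, or a soft-clipped portion, to the T-DNA), which handles mismatches and reports the genomic insertion coordinate.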