WorldWideScience

Sample records for computational proteomics high-throughput

  1. Automated image alignment for 2D gel electrophoresis in a high-throughput proteomics pipeline.

    Science.gov (United States)

    Dowsey, Andrew W; Dunn, Michael J; Yang, Guang-Zhong

    2008-04-01

    The quest for high-throughput proteomics has revealed a number of challenges in recent years. Whilst substantial improvements in automated protein separation with liquid chromatography and mass spectrometry (LC/MS), aka 'shotgun' proteomics, have been achieved, large-scale open initiatives such as the Human Proteome Organization (HUPO) Brain Proteome Project have shown that maximal proteome coverage is only possible when LC/MS is complemented by 2D gel electrophoresis (2-DE) studies. Moreover, both separation methods require automated alignment and differential analysis to relieve the bioinformatics bottleneck and so make high-throughput protein biomarker discovery a reality. The purpose of this article is to describe a fully automatic image alignment framework for the integration of 2-DE into a high-throughput differential expression proteomics pipeline. The proposed method is based on robust automated image normalization (RAIN) to circumvent the drawbacks of traditional approaches. These use symbolic representation at the very early stages of the analysis, which introduces persistent errors due to inaccuracies in modelling and alignment. In RAIN, a third-order volume-invariant B-spline model is incorporated into a multi-resolution schema to correct for geometric and expression inhomogeneity at multiple scales. The normalized images can then be compared directly in the image domain for quantitative differential analysis. Through evaluation against an existing state-of-the-art method on real and synthetically warped 2D gels, the proposed analysis framework demonstrates substantial improvements in matching accuracy and differential sensitivity. High-throughput analysis is established through an accelerated GPGPU (general purpose computation on graphics cards) implementation. Supplementary material, software and images used in the validation are available at http://www.proteomegrid.org/rain/.

  2. Efficient visualization of high-throughput targeted proteomics experiments: TAPIR.

    Science.gov (United States)

    Röst, Hannes L; Rosenberger, George; Aebersold, Ruedi; Malmström, Lars

    2015-07-15

    Targeted mass spectrometry comprises a set of powerful methods to obtain accurate and consistent protein quantification in complex samples. To fully exploit these techniques, a cross-platform and open-source software stack based on standardized data exchange formats is required. We present TAPIR, a fast and efficient Python visualization software for chromatograms and peaks identified in targeted proteomics experiments. The input formats are open, community-driven standardized data formats (mzML for raw data storage and TraML encoding the hierarchical relationships between transitions, peptides and proteins). TAPIR is scalable to proteome-wide targeted proteomics studies (as enabled by SWATH-MS), allowing researchers to visualize high-throughput datasets. The framework integrates well with existing automated analysis pipelines and can be extended beyond targeted proteomics to other types of analyses. TAPIR is available for all computing platforms under the 3-clause BSD license at https://github.com/msproteomicstools/msproteomicstools. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. High-Throughput Cloning and Expression Library Creation for Functional Proteomics

    Science.gov (United States)

    Festa, Fernanda; Steel, Jason; Bian, Xiaofang; Labaer, Joshua

    2013-01-01

The study of protein function usually requires the use of a cloned version of the gene for protein expression and functional assays. This strategy is particularly important when the information available regarding function is limited. The functional characterization of the thousands of newly identified proteins revealed by genomics requires faster methods than traditional single-gene experiments, creating the need for fast, flexible and reliable cloning systems. Collections of open reading frame (ORF) clones can be coupled with high-throughput proteomics platforms, such as protein microarrays and cell-based assays, to answer biological questions. In this tutorial we provide the background for DNA cloning, discuss the major high-throughput cloning systems (Gateway® Technology, Flexi® Vector Systems, and Creator™ DNA Cloning System) and compare them side-by-side. We also report an example of a high-throughput cloning study and its application in functional proteomics. This Tutorial is part of the International Proteomics Tutorial Programme (IPTP12). Details can be found at http://www.proteomicstutorials.org. PMID:23457047

  4. An Automated High Throughput Proteolysis and Desalting Platform for Quantitative Proteomic Analysis

    Directory of Open Access Journals (Sweden)

    Albert-Baskar Arul

    2013-06-01

Proteomics for biomarker validation needs high-throughput instrumentation to analyze large sets of clinical samples quantitatively and reproducibly, in minimal time and without manual experimental errors. Sample preparation, a vital step in proteomics, plays a major role in the identification and quantification of proteins from biological samples. Tryptic digestion, a major checkpoint in sample preparation for mass spectrometry-based proteomics, needs to be accurate and rapid. The present study establishes a high-throughput automated online system for proteolytic digestion and desalting of proteins from biological samples, quantitatively and qualitatively, in a reproducible manner. Online protein digestion and desalting of BSA was compared with the conventional off-line (in-solution) method and validated on a real sample for reproducibility. Proteins were identified using the SEQUEST database search engine and the data were quantified using IDEALQ software. The study shows that the online system, which handles samples in a 96-well format, carries out protein digestion and peptide desalting efficiently in a reproducible and quantitative manner. Label-free quantification showed a clear increase in peptide quantities with increasing concentration, with better linearity than the off-line method. We therefore suggest that including this online system in a proteomic pipeline will be effective for protein quantification in comparative proteomics, where quantification is crucial.

  5. Machine learning in computational biology to accelerate high-throughput protein expression.

    Science.gov (United States)

    Sastry, Anand; Monk, Jonathan; Tegel, Hanna; Uhlen, Mathias; Palsson, Bernhard O; Rockberg, Johan; Brunk, Elizabeth

    2017-08-15

    The Human Protein Atlas (HPA) enables the simultaneous characterization of thousands of proteins across various tissues to pinpoint their spatial location in the human body. This has been achieved through transcriptomics and high-throughput immunohistochemistry-based approaches, where over 40 000 unique human protein fragments have been expressed in E. coli. These datasets enable quantitative tracking of entire cellular proteomes and present new avenues for understanding molecular-level properties influencing expression and solubility. Combining computational biology and machine learning identifies protein properties that hinder the HPA high-throughput antibody production pipeline. We predict protein expression and solubility with accuracies of 70% and 80%, respectively, based on a subset of key properties (aromaticity, hydropathy and isoelectric point). We guide the selection of protein fragments based on these characteristics to optimize high-throughput experimentation. We present the machine learning workflow as a series of IPython notebooks hosted on GitHub (https://github.com/SBRG/Protein_ML). The workflow can be used as a template for analysis of further expression and solubility datasets. ebrunk@ucsd.edu or johanr@biotech.kth.se. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
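The three sequence properties named above (aromaticity, hydropathy and isoelectric point) can be computed directly from an amino-acid sequence. A minimal pure-Python sketch, not the authors' HPA pipeline; the Kyte-Doolittle hydropathy scale and the pKa values are standard textbook constants, and the bisection pI is a simplified Henderson-Hasselbalch model:

```python
# Sequence-derived descriptors used as features for expression/solubility
# prediction: aromaticity, hydropathy (GRAVY) and isoelectric point.

KYTE_DOOLITTLE = {  # hydropathy scale (Kyte & Doolittle, 1982)
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

# Side-chain and terminal pKa values (one common textbook set).
PKA_POS = {"K": 10.5, "R": 12.5, "H": 6.0}            # positive below pKa
PKA_NEG = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1}   # negative above pKa
PKA_NTERM, PKA_CTERM = 9.0, 3.1

def aromaticity(seq):
    """Fraction of aromatic residues (Phe, Trp, Tyr)."""
    return sum(seq.count(aa) for aa in "FWY") / len(seq)

def gravy(seq):
    """Grand average of hydropathy: mean Kyte-Doolittle value."""
    return sum(KYTE_DOOLITTLE[aa] for aa in seq) / len(seq)

def net_charge(seq, ph):
    """Henderson-Hasselbalch net charge of the peptide at a given pH."""
    pos = sum(seq.count(aa) / (1 + 10 ** (ph - pka))
              for aa, pka in PKA_POS.items())
    pos += 1 / (1 + 10 ** (ph - PKA_NTERM))            # free N-terminus
    neg = sum(seq.count(aa) / (1 + 10 ** (pka - ph))
              for aa, pka in PKA_NEG.items())
    neg += 1 / (1 + 10 ** (PKA_CTERM - ph))            # free C-terminus
    return pos - neg

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    """pH at which net charge crosses zero, found by bisection
    (net charge decreases monotonically with pH)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

In a workflow like the one described, these three numbers per protein fragment would form (part of) the feature vector fed to a classifier predicting expression or solubility.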

  6. Systems biology definition of the core proteome of metabolism and expression is consistent with high-throughput data.

    Science.gov (United States)

    Yang, Laurence; Tan, Justin; O'Brien, Edward J; Monk, Jonathan M; Kim, Donghyuk; Li, Howard J; Charusanti, Pep; Ebrahim, Ali; Lloyd, Colton J; Yurkovich, James T; Du, Bin; Dräger, Andreas; Thomas, Alex; Sun, Yuekai; Saunders, Michael A; Palsson, Bernhard O

    2015-08-25

    Finding the minimal set of gene functions needed to sustain life is of both fundamental and practical importance. Minimal gene lists have been proposed by using comparative genomics-based core proteome definitions. A definition of a core proteome that is supported by empirical data, is understood at the systems-level, and provides a basis for computing essential cell functions is lacking. Here, we use a systems biology-based genome-scale model of metabolism and expression to define a functional core proteome consisting of 356 gene products, accounting for 44% of the Escherichia coli proteome by mass based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma genitalium). Based on transcriptomics data across environmental and genetic backgrounds, the systems biology core proteome is significantly enriched in nondifferentially expressed genes and depleted in differentially expressed genes. Compared with the noncore, core gene expression levels are also similar across genetic backgrounds (two times higher Spearman rank correlation) and exhibit significantly more complex transcriptional and posttranscriptional regulatory features (40% more transcription start sites per gene, 22% longer 5'UTR). Thus, genome-scale systems biology approaches rigorously identify a functional core proteome needed to support growth. This framework, validated by using high-throughput datasets, facilitates a mechanistic understanding of systems-level core proteome function through in silico models; it de facto defines a paleome.
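The Spearman rank correlation used above to compare core gene expression across genetic backgrounds is simple to make concrete. A small sketch with toy, illustrative numbers (not the study's data), assuming no tied values for simplicity; Spearman's rho is just the Pearson correlation of the ranks:

```python
def ranks(values):
    """1-based rank of each value (assumes no ties, for simplicity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy data: expression of four genes in two genetic backgrounds.
# A perfectly monotone relationship gives rho = 1.0 even though the
# absolute expression levels differ between the backgrounds.
wild_type = [10.0, 55.0, 31.0, 7.0]
mutant = [12.0, 60.0, 30.0, 9.0]
rho = spearman_rho(wild_type, mutant)
```

Because it operates on ranks rather than raw intensities, rho is robust to the systematic scale shifts typical of transcriptomics data from different conditions, which is why it suits cross-background comparisons like the one described.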

  7. A high-throughput sample preparation method for cellular proteomics using 96-well filter plates.

    Science.gov (United States)

    Switzar, Linda; van Angeren, Jordy; Pinkse, Martijn; Kool, Jeroen; Niessen, Wilfried M A

    2013-10-01

A high-throughput sample preparation protocol based on the use of 96-well molecular weight cutoff (MWCO) filter plates was developed for shotgun proteomics of cell lysates. All sample preparation steps, including cell lysis, buffer exchange, protein denaturation, reduction, alkylation and proteolytic digestion are performed in a 96-well plate format, making the platform extremely well suited for processing large numbers of samples and directly compatible with functional assays for cellular proteomics. In addition, the usage of a single plate for all sample preparation steps following cell lysis reduces potential sample losses and allows for automation. The MWCO filter also enables sample concentration, thereby increasing the overall sensitivity, and implementation of washing steps involving organic solvents, for example, to remove cell membrane constituents. The optimized protocol allowed for higher throughput with improved sensitivity in terms of the number of identified cellular proteins when compared to an established protocol employing gel-filtration columns. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.

    Science.gov (United States)

    Trudgian, David C; Mirzaei, Hamid

    2012-12-07

    We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net.

  9. Detection of dysregulated protein-association networks by high-throughput proteomics predicts cancer vulnerabilities.

    Science.gov (United States)

    Lapek, John D; Greninger, Patricia; Morris, Robert; Amzallag, Arnaud; Pruteanu-Malinici, Iulian; Benes, Cyril H; Haas, Wilhelm

    2017-10-01

The formation of protein complexes and the co-regulation of the cellular concentrations of proteins are essential mechanisms for cellular signaling and for maintaining homeostasis. Here we use isobaric-labeling multiplexed proteomics to analyze protein co-regulation and show that this allows the identification of protein-protein associations with high accuracy. We apply this 'interactome mapping by high-throughput quantitative proteome analysis' (IMAHP) method to a panel of 41 breast cancer cell lines and show that deviations of the observed protein co-regulations in specific cell lines from the consensus network affect cellular fitness. Furthermore, these aberrant interactions serve as biomarkers that predict the drug sensitivity of cell lines in screens across 195 drugs. We expect that IMAHP can be broadly used to gain insight into how changing landscapes of protein-protein associations affect the phenotype of biological systems.

  10. High throughput and accurate serum proteome profiling by integrated sample preparation technology and single-run data independent mass spectrometry analysis.

    Science.gov (United States)

    Lin, Lin; Zheng, Jiaxin; Yu, Quan; Chen, Wendong; Xing, Jinchun; Chen, Chenxi; Tian, Ruijun

    2018-03-01

Mass spectrometry (MS)-based serum proteome analysis is extremely challenging due to the serum proteome's high complexity and wide dynamic range of protein abundances. A high-throughput, accurate serum proteomic profiling approach capable of analyzing large cohorts is urgently needed for biomarker discovery. Herein, we report a streamlined workflow for fast and accurate proteomic profiling from 1 μL of blood serum. The workflow combines an integrated technique for highly sensitive and reproducible sample preparation with a new data-independent acquisition (DIA)-based MS method. Compared with the standard data-dependent acquisition (DDA) approach, the optimized DIA method doubled the number of detected peptides and proteins with better reproducibility. Without protein immunodepletion or prefractionation, the single-run DIA analysis enables quantitative profiling of over 300 proteins within a 50 min gradient time. The quantified proteins span more than five orders of magnitude in abundance and include over 50 FDA-approved disease markers. The workflow allowed us to analyze 20 serum samples per day, with about 358 protein groups identified per sample. A proof-of-concept study on renal cell carcinoma (RCC) serum samples confirmed the feasibility of the workflow for large-scale serum proteomic profiling and disease-related biomarker discovery. Blood serum or plasma is the predominant specimen for clinical proteomic studies, yet its analysis is extremely challenging because of its high complexity. Many past efforts in serum proteomics have aimed at maximizing protein identifications, whereas few have been concerned with throughput and reproducibility. Here, we establish a rapid, robust and highly reproducible DIA-based workflow for streamlined serum proteomic profiling from 1 μL of serum. The workflow needs no protein depletion or pre-fractionation, while still detecting disease-relevant proteins accurately, and is promising for clinical application.
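The stated throughput is easy to sanity-check: 20 runs per day at a 50 min gradient accounts for 1000 min, leaving about 22 min per run for sample loading and column re-equilibration. A back-of-the-envelope check (the overhead figure is derived arithmetic, not a number from the paper):

```python
# Sanity-check the stated single-column LC-MS throughput.
MIN_PER_DAY = 24 * 60          # 1440 min in a day
gradient_min = 50              # DIA gradient time per serum sample
samples_per_day = 20           # throughput reported in the study

gradient_total = samples_per_day * gradient_min                    # 1000 min
overhead_per_run = (MIN_PER_DAY - gradient_total) / samples_per_day
print(f"{gradient_total} min of gradient time per day; "
      f"~{overhead_per_run:.0f} min/run left for loading and re-equilibration")
```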

  11. An Automated High Performance Capillary Liquid Chromatography Fourier Transform Ion Cyclotron Resonance Mass Spectrometer for High-Throughput Proteomics

    International Nuclear Information System (INIS)

    Belov, Mikhail E.; Anderson, Gordon A.; Wingerd, Mark A.; Udseth, Harold R.; Tang, Keqi; Prior, David C.; Swanson, Kenneth R.; Buschbach, Michael A.; Strittmatter, Eric F.; Moore, Ronald J.; Smith, Richard D.

    2004-01-01

We report on a fully automated 9.4 tesla Fourier transform ion cyclotron resonance (FTICR) mass spectrometer coupled to reversed-phase chromatography for high-throughput proteomic studies. Modifications made to the front end of a commercial FTICR instrument--a dual-ESI-emitter ion source; a dual-channel electrodynamic ion funnel; and collisional-cooling, selection and accumulation quadrupoles--significantly improved the sensitivity, dynamic range and mass measurement accuracy of the mass spectrometer. A high-pressure capillary liquid chromatography (LC) system was incorporated with an autosampler that enabled 24 h/day operation. A novel method for accumulating ions in the ICR cell was also developed. Unattended operation of the instrument revealed the exceptional reproducibility (1-5% deviation in elution times for peptides from a bacterial proteome), repeatability (10-20% deviation in detected abundances for peptides from the same aliquot analyzed a few weeks apart) and robustness (high-throughput operation for 5 months without downtime) of the LC/FTICR system. When combined with modulated-ion-energy gated trapping, internal calibration of FTICR mass spectra, in conjunction with high-resolution capillary LC separations, decreased the dispersion of mass measurement errors for peptide identifications to < 5 ppm over a dynamic range of 10³ for each spectrum.

  12. High Performance Computing Modernization Program Kerberos Throughput Test Report

    Science.gov (United States)

    2017-10-26

High Performance Computing Modernization Program Kerberos Throughput Test Report. Daniel G. Gdula et al. Naval Research Laboratory, Washington, DC 20375-5320. Report no. NRL/MR/5524--17-9751. (Report documentation page; form fields omitted.)

  13. High-throughput sperm differential proteomics suggests that epigenetic alterations contribute to failed assisted reproduction.

    Science.gov (United States)

    Azpiazu, Rubén; Amaral, Alexandra; Castillo, Judit; Estanyol, Josep Maria; Guimerà, Marta; Ballescà, Josep Lluís; Balasch, Juan; Oliva, Rafael

    2014-06-01

Are there quantitative alterations in the proteome of normozoospermic sperm samples that are able to complete IVF but whose female partner does not achieve pregnancy? Normozoospermic sperm samples with different IVF outcomes (pregnancy versus no pregnancy) differed in the levels of at least 66 proteins. Previous analyses of the proteome of sperm samples with distinct fertilization capacity using low-throughput proteomic techniques detected only a few differential proteins; current high-throughput mass spectrometry approaches allow the identification and quantification of a substantially higher number of proteins. This was a case-control study including 31 men with normozoospermic sperm and their partners, who underwent IVF with successful fertilization, recruited between 2007 and 2008. Normozoospermic sperm samples from 15 men whose female partners did not achieve pregnancy after IVF (no pregnancy) and from 16 men from couples that did achieve pregnancy after IVF (pregnancy) were included in this study. For the differential proteomic experiments, 10 no pregnancy samples and 10 pregnancy samples were separately pooled and subsequently used for tandem mass tag (TMT) protein labelling, sodium dodecyl sulphate-polyacrylamide gel electrophoresis, liquid chromatography tandem mass spectrometry (LC-MS/MS) identification and peak-intensity relative protein quantification. Bioinformatic analyses were performed using the UniProt Knowledgebase, DAVID and Reactome. Individual samples (n = 5 no pregnancy; n = 6 pregnancy) and aliquots from the above TMT pools were used for western blotting. Using TMT labelling and LC-MS/MS, we detected 31 proteins present at lower abundance (no pregnancy/pregnancy ratio 1.5) in the no pregnancy group. Bioinformatic analyses showed that the proteins with differing abundance are involved in chromatin assembly and lipoprotein metabolism. Funded in part by Economía y Competitividad (FEDER BFU 2009-07118 and PI13/00699).

  14. Computational tools for high-throughput discovery in biology

    OpenAIRE

    Jones, Neil Christopher

    2007-01-01

High-throughput data acquisition technology has inarguably transformed the landscape of the life sciences, in part by making possible---and necessary---the computational disciplines of bioinformatics and biomedical informatics. These fields focus primarily on developing tools for analyzing data and generating hypotheses about objects in nature, and it is in this context that we address three pressing problems in the computational life sciences, each of which requires computing capacity...

  15. Solid-Phase Extraction Strategies to Surmount Body Fluid Sample Complexity in High-Throughput Mass Spectrometry-Based Proteomics

    Science.gov (United States)

    Bladergroen, Marco R.; van der Burgt, Yuri E. M.

    2015-01-01

    For large-scale and standardized applications in mass spectrometry- (MS-) based proteomics automation of each step is essential. Here we present high-throughput sample preparation solutions for balancing the speed of current MS-acquisitions and the time needed for analytical workup of body fluids. The discussed workflows reduce body fluid sample complexity and apply for both bottom-up proteomics experiments and top-down protein characterization approaches. Various sample preparation methods that involve solid-phase extraction (SPE) including affinity enrichment strategies have been automated. Obtained peptide and protein fractions can be mass analyzed by direct infusion into an electrospray ionization (ESI) source or by means of matrix-assisted laser desorption ionization (MALDI) without further need of time-consuming liquid chromatography (LC) separations. PMID:25692071

  16. A Primer on High-Throughput Computing for Genomic Selection

    Directory of Open Access Journals (Sweden)

    Xiao-Lin eWu

    2011-02-01

High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. Compared with the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit (GPU) provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin-Madison, which can be leveraged for genomic selection in terms of central processing unit (CPU) capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of...
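The progression the authors describe, from sequential trait evaluation to batching independent jobs across execution slots, can be sketched with Python's standard library. This is a toy stand-in for real HTC middleware such as HTCondor: the trait names are placeholders, and `time.sleep` stands in for an actual per-trait model fit:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def evaluate_trait(trait):
    """Placeholder for one genomic-prediction model fit (e.g. a whole-genome
    regression for a single trait). The sleep stands in for real compute."""
    time.sleep(0.1)
    return trait, f"model for {trait} trained"

traits = ["milk yield", "fat pct", "protein pct", "fertility"]

# Sequential evaluation: total wall time is the sum of per-trait times.
t0 = time.perf_counter()
serial = [evaluate_trait(t) for t in traits]
serial_s = time.perf_counter() - t0

# Batched evaluation: independent trait jobs run concurrently, so
# throughput scales with the number of execution slots.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(traits)) as pool:
    batched = list(pool.map(evaluate_trait, traits))
batch_s = time.perf_counter() - t0

print(f"serial {serial_s:.2f}s vs batched {batch_s:.2f}s")
```

Threads suffice here only because the placeholder work releases the interpreter lock; real CPU-bound model fits would use processes or, as in the paper, separate cluster nodes managed by batch-scheduling middleware.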

  17. Dimensioning storage and computing clusters for efficient High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2012-01-01

Scientific experiments are producing huge amounts of data, and they continue increasing the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte-scale storage to delivering quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centres is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking, to prevent bottlenecks, overloads and undesired slowness that lead to loss of CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...

  18. Dimensioning storage and computing clusters for efficient high throughput computing

    International Nuclear Information System (INIS)

    Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E

    2012-01-01

Scientific experiments are producing huge amounts of data, and the size of their datasets and the total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte-scale storage to delivering quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking, to prevent bottlenecks, overloads and undesired slowness that lead to loss of CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.

  19. A primer on high-throughput computing for genomic selection.

    Science.gov (United States)

    Wu, Xiao-Lin; Beissinger, Timothy M; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J M; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. Compared with the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin-Madison, which can be leveraged for genomic selection in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized...

  20. A high-throughput, multi-channel photon-counting detector with picosecond timing

    CERN Document Server

    Lapington, J S; Miller, G M; Ashton, T J R; Jarron, P; Despeisse, M; Powolny, F; Howorth, J; Milnes, J

    2009-01-01

High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high-content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies: small-pore microchanne...

  1. High-throughput open source computational methods for genetics and genomics

    NARCIS (Netherlands)

    Prins, J.C.P.

    2015-01-01

    Biology is increasingly data driven by virtue of the development of high-throughput technologies, such as DNA and RNA sequencing. Computational biology and bioinformatics are scientific disciplines that cross-over between the disciplines of biology, informatics and statistics; which is clearly

  2. The Principles and Practice of Distributed High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The potential of distributed processing systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of distributed computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of high-throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker: Miron Livny received a B.Sc. degree in Physics and Mat...

  3. High throughput proteomic analysis of the secretome in an explant model of articular cartilage inflammation

    Science.gov (United States)

    Clutterbuck, Abigail L.; Smith, Julia R.; Allaway, David; Harris, Pat; Liddell, Susan; Mobasheri, Ali

    2011-01-01

    This study employed a targeted high-throughput proteomic approach to identify the major proteins present in the secretome of articular cartilage. Explants from equine metacarpophalangeal joints were incubated alone or with interleukin-1beta (IL-1β, 10 ng/ml), with or without carprofen, a non-steroidal anti-inflammatory drug, for six days. After tryptic digestion of culture medium supernatants, resulting peptides were separated by HPLC and detected in a Bruker amaZon ion trap instrument. The five most abundant peptides in each MS scan were fragmented and the fragmentation patterns compared to mammalian entries in the Swiss-Prot database, using the Mascot search engine. Tryptic peptides originating from aggrecan core protein, cartilage oligomeric matrix protein (COMP), fibronectin, fibromodulin, thrombospondin-1 (TSP-1), clusterin (CLU), cartilage intermediate layer protein-1 (CILP-1), chondroadherin (CHAD) and matrix metalloproteinases MMP-1 and MMP-3 were detected. Quantitative western blotting confirmed the presence of CILP-1, CLU, MMP-1, MMP-3 and TSP-1. Treatment with IL-1β increased MMP-1, MMP-3 and TSP-1 and decreased the CLU precursor but did not affect CILP-1 and CLU levels. Many of the proteins identified have well-established extracellular matrix functions and are involved in early repair/stress responses in cartilage. This high throughput approach may be used to study the changes that occur in the early stages of osteoarthritis. PMID:21354348

  4. Definition, modeling and simulation of a grid computing system for high throughput computing

    CERN Document Server

    Caron, E; Tsaregorodtsev, A Yu

    2006-01-01

    In this paper, we study and compare grid and global computing systems and outline the benefits of a hybrid system called DIRAC. To evaluate DIRAC scheduling for high-throughput computing, a new model is presented and a simulator was developed for many clusters of heterogeneous nodes belonging to a local network. These clusters are assumed to be connected to each other through a global network, and each cluster is managed via a local scheduler shared by many users. We validate our simulator by comparing the experimental and analytical results of an M/M/4 queuing system. Next, we compare with a real batch system and obtain an average error of 10.5% for the response time and 12% for the makespan. We conclude that the simulator is realistic and describes well the behaviour of a large-scale system. Thus we can study the scheduling of our system, DIRAC, in a high-throughput context. We justify our decentralized, adaptive and opportunistic approach in comparison to a centralize...
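The analytical baseline used in such M/M/4 validations comes from the Erlang C formula, which a small first-come-first-served simulation should reproduce. A minimal sketch with illustrative arrival and service rates (not those of the paper):

```python
import heapq
import math
import random

def mmc_response_time_analytic(lam, mu, c):
    """Mean response time of an M/M/c queue via the Erlang C formula."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c
    assert rho < 1, "queue must be stable"
    erlang_sum = sum(a**k / math.factorial(k) for k in range(c))
    pw_num = a**c / math.factorial(c) / (1 - rho)
    p_wait = pw_num / (erlang_sum + pw_num)   # probability an arrival waits
    wq = p_wait / (c * mu - lam)              # mean waiting time in queue
    return wq + 1 / mu                        # waiting + service time

def mmc_response_time_sim(lam, mu, c, n=200_000, seed=1):
    """Simulate n FCFS arrivals; track each server's next-free time in a heap."""
    rng = random.Random(seed)
    free = [0.0] * c
    heapq.heapify(free)
    t, total = 0.0, 0.0
    for _ in range(n):
        t += rng.expovariate(lam)             # Poisson arrivals
        start = max(t, heapq.heappop(free))   # earliest available server
        service = rng.expovariate(mu)
        heapq.heappush(free, start + service)
        total += start + service - t          # this customer's response time
    return total / n

analytic = mmc_response_time_analytic(lam=3.0, mu=1.0, c=4)   # ≈ 1.509
simulated = mmc_response_time_sim(lam=3.0, mu=1.0, c=4)
assert abs(simulated - analytic) / analytic < 0.05
```

With λ = 3, μ = 1 and c = 4 servers, the utilisation is 75% and both estimates of the mean response time should agree to within a few percent, which is the kind of check the paper's validation performs before trusting the simulator on larger systems.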

  5. High throughput production of mouse monoclonal antibodies using antigen microarrays

    DEFF Research Database (Denmark)

    De Masi, Federico; Chiarella, P.; Wilhelm, H.

    2005-01-01

    Recent advances in proteomics research underscore the increasing need for high-affinity monoclonal antibodies, which are still generated with lengthy, low-throughput antibody production techniques. Here we present a semi-automated, high-throughput method of hybridoma generation and identification. … Monoclonal antibodies were raised to different targets in single batch runs of 6-10 wk using multiplexed immunisations, automated fusion and cell culture, and a novel antigen-coated microarray screening assay. In a large-scale experiment, where eight mice were immunized with ten antigens each, we generated…

  6. COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING

    Directory of Open Access Journals (Sweden)

    Afonnikov D.

    2012-08-01

    The growing need for rapid and accurate approaches to large-scale assessment of phenotypic characters in plants is increasingly obvious in studies of the relationships between genotype and phenotype. This need is due to the advent of high-throughput methods for the analysis of genomes. Nowadays, any genetic experiment involves data on thousands or even tens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, the ruler) are of little use on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which enables much more rapid data acquisition, higher accuracy in the assessment of phenotypic features, measurement of new parameters of these features, and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data-processing time. In this work, we present the WheatPGE information system designed to solve the problem of integrating genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.

  7. Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.

  8. Systems biology definition of the core proteome of metabolism and expression is consistent with high-throughput data

    DEFF Research Database (Denmark)

    Yang, Laurence; Tan, Justin; O'Brien, Edward J.

    2015-01-01

    Finding the minimal set of gene functions needed to sustain life is of both fundamental and practical importance. Minimal gene lists have been proposed using comparative genomics-based core proteome definitions. A definition of a core proteome that is supported by empirical data, is understood at the systems level, and provides a basis for computing essential cell functions has been lacking. Here, we use a systems biology-based genome-scale model of metabolism and expression to define a functional core proteome consisting of 356 gene products, accounting for 44% of the Escherichia coli proteome by mass based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma…

  9. High-Throughput Computing on High-Performance Platforms: A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Oleynik, D [University of Texas at Arlington]; Panitkin, S [Brookhaven National Laboratory (BNL)]; Turilli, Matteo [Rutgers University]; Angius, Alessio [Rutgers University]; Oral, H Sarp [ORNL]; De, K [University of Texas at Arlington]; Klimentov, A [Brookhaven National Laboratory (BNL)]; Wells, Jack C. [ORNL]; Jha, S [Rutgers University]

    2017-10-01

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan, a DOE leadership facility, in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next-generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons on how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

  10. AELAS: Automatic ELAStic property derivations via high-throughput first-principles computation

    Science.gov (United States)

    Zhang, S. H.; Zhang, R. F.

    2017-11-01

    The elastic properties of crystalline materials are fundamental and important, as they relate to other mechanical properties, various thermodynamic quantities, and some critical physical properties. However, a complete set of experimentally determined elastic properties is available for only a small subset of known materials, and an automatic scheme for deriving elastic properties that is adapted to high-throughput computation is much in demand. In this paper, we present the AELAS code, an automated program for calculating second-order elastic constants of both two-dimensional and three-dimensional single-crystal materials with any symmetry, designed mainly for high-throughput first-principles computation. Derivations of other general elastic properties, such as the Young's, bulk and shear moduli and the Poisson's ratio of polycrystalline materials, the Pugh ratio, the Cauchy pressure, elastic anisotropy, and the elastic stability criterion, are also implemented in this code. The implementation has been critically validated by extensive evaluations and tests on a broad class of two-dimensional and three-dimensional materials, proving its efficiency and capability for high-throughput screening of materials with targeted mechanical properties. Program Files doi:http://dx.doi.org/10.17632/f8fwg4j9tw.1 Licensing provisions: BSD 3-Clause Programming language: Fortran Nature of problem: To automate the calculation of second-order elastic constants and the derivation of other elastic properties for two-dimensional and three-dimensional materials with any symmetry via high-throughput first-principles computation. Solution method: The space-group number is first determined with the SPGLIB code [1] and the structure is then redefined to a unit cell in IEEE format [2]. Secondly, based on the determined space-group number, a set of distortion modes is automatically specified and the distorted structure files are generated
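For a cubic crystal, the polycrystal derivations listed in this record reduce, under the Voigt average, to closed-form expressions in the three independent elastic constants. A minimal sketch (the input constants roughly match aluminium and are illustrative, not AELAS output):

```python
def cubic_voigt_properties(c11, c12, c44):
    """Voigt-average polycrystal moduli from cubic elastic constants (GPa)."""
    bulk = (c11 + 2 * c12) / 3                # bulk modulus B
    shear = (c11 - c12 + 3 * c44) / 5         # shear modulus G (Voigt)
    young = 9 * bulk * shear / (3 * bulk + shear)
    poisson = (3 * bulk - 2 * shear) / (2 * (3 * bulk + shear))
    pugh = bulk / shear                       # B/G > 1.75 suggests ductility
    # Born mechanical stability criteria for a cubic crystal
    stable = c11 - c12 > 0 and c11 + 2 * c12 > 0 and c44 > 0
    return {"B": bulk, "G": shear, "E": young, "nu": poisson,
            "B/G": pugh, "stable": stable}

props = cubic_voigt_properties(c11=107.0, c12=61.0, c44=28.0)  # ~fcc Al
assert props["stable"]
assert 65 < props["E"] < 75    # Young's modulus comes out near 70 GPa
```

Codes like AELAS generalise this step to arbitrary symmetry (more independent constants, Voigt-Reuss-Hill averaging), but the bookkeeping per material is of this form, which is what makes the derivation amenable to high-throughput automation.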

  11. Quantitative description on structure-property relationships of Li-ion battery materials for high-throughput computations

    Science.gov (United States)

    Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun

    2017-12-01

    Li-ion batteries are a key technology for addressing the global challenges of clean renewable energy and environmental pollution. Their contemporary applications, in portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and to optimize the performance of currently known materials. Most cathode materials screened by previous high-throughput calculations cannot meet the requirements of practical applications because only the capacity, voltage and volume change of the bulk were considered. It is important to include more structure-property relationships, such as point defects, surface and interface effects, doping and metal mixing, and nanosize effects, in high-throughput calculations. In this review, we establish quantitative descriptions of structure-property relationships in Li-ion battery materials in terms of intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure-property relationships, a possible high-throughput computational screening flow path is proposed to obtain high-performance battery materials.

  12. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  13. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  14. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad; Knight, Robert

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG). (paper)

  15. A high-throughput, multi-channel photon-counting detector with picosecond timing

    Science.gov (United States)

    Lapington, J. S.; Fraser, G. W.; Miller, G. M.; Ashton, T. J. R.; Jarron, P.; Despeisse, M.; Powolny, F.; Howorth, J.; Milnes, J.

    2009-06-01

    High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies: small-pore microchannel plate devices with very high time resolution, and high-speed multi-channel ASIC electronics developed for the LHC at CERN, provides the necessary building blocks for a high-throughput detector system with up to 1024 parallel counting channels and 20 ps time resolution. We describe the detector and electronic design, discuss the current status of the HiContent project and present the results from a 64-channel prototype system. In the absence of an operational detector, we present measurements of the electronics performance using a pulse generator to simulate detector events. Event timing results from the NINO high-speed front-end ASIC captured using a fast digital oscilloscope are compared with data taken with the proposed electronic configuration which uses the multi-channel HPTDC timing ASIC.

  16. A high-throughput, multi-channel photon-counting detector with picosecond timing

    International Nuclear Information System (INIS)

    Lapington, J.S.; Fraser, G.W.; Miller, G.M.; Ashton, T.J.R.; Jarron, P.; Despeisse, M.; Powolny, F.; Howorth, J.; Milnes, J.

    2009-01-01

    High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies: small-pore microchannel plate devices with very high time resolution, and high-speed multi-channel ASIC electronics developed for the LHC at CERN, provides the necessary building blocks for a high-throughput detector system with up to 1024 parallel counting channels and 20 ps time resolution. We describe the detector and electronic design, discuss the current status of the HiContent project and present the results from a 64-channel prototype system. In the absence of an operational detector, we present measurements of the electronics performance using a pulse generator to simulate detector events. Event timing results from the NINO high-speed front-end ASIC captured using a fast digital oscilloscope are compared with data taken with the proposed electronic configuration which uses the multi-channel HPTDC timing ASIC.

  17. CrossCheck: an open-source web tool for high-throughput screen data analysis.

    Science.gov (United States)

    Najafov, Jamil; Najafov, Ayaz

    2017-07-19

    Modern high-throughput screening methods allow researchers to generate large datasets that potentially contain important biological information. However, picking relevant hits from such screens and generating testable hypotheses often requires training in bioinformatics and the skills to perform database mining efficiently. There are currently no tools available to the general public that allow users to cross-reference their screen datasets with published screen datasets. To this end, we developed CrossCheck, an online platform for high-throughput screen data analysis. CrossCheck is a centralized database that allows effortless comparison of a user-entered list of gene symbols with 16,231 published datasets. These datasets include published data from genome-wide RNAi and CRISPR screens, interactome proteomics and phosphoproteomics screens, cancer mutation databases, low-throughput studies of major cell signaling mediators, such as kinases, E3 ubiquitin ligases and phosphatases, and gene ontological information. Moreover, CrossCheck includes a novel database of predicted protein kinase substrates, which was developed using proteome-wide consensus motif searches. CrossCheck dramatically simplifies high-throughput screen data analysis and enables researchers to dig deep into the published literature and streamline data-driven hypothesis generation. CrossCheck is freely accessible as a web-based application at http://proteinguru.com/crosscheck.
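At its core, this kind of cross-referencing is set intersection between a user's gene list and each published dataset. A toy sketch with invented gene symbols and dataset names (not CrossCheck's actual schema or data):

```python
published = {  # hypothetical dataset name -> hit gene symbols
    "RNAi_screen_A": {"TP53", "AKT1", "MTOR"},
    "CRISPR_screen_B": {"AKT1", "RPS6KB1"},
    "phosphoproteome_C": {"MTOR", "EIF4EBP1"},
}

def cross_check(user_genes, datasets):
    """Return, per published dataset, its overlap with the user's gene list."""
    user = set(user_genes)
    hits = {}
    for name, genes in datasets.items():
        overlap = user & genes
        if overlap:                      # report only datasets with shared hits
            hits[name] = sorted(overlap)
    return hits

result = cross_check(["AKT1", "MTOR", "GAPDH"], published)
assert result == {"RNAi_screen_A": ["AKT1", "MTOR"],
                  "CRISPR_screen_B": ["AKT1"],
                  "phosphoproteome_C": ["MTOR"]}
```

The engineering value of a platform like CrossCheck lies in curating the thousands of published datasets behind this lookup; the per-query computation itself stays this simple.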

  18. Digital imaging of root traits (DIRT): a high-throughput computing and collaboration platform for field-based root phenomics.

    Science.gov (United States)

    Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander

    2015-01-01

    Plant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual and low-throughput. Here, we present an open-source phenomics platform, "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons", enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field-based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC). DIRT is a high-volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots.

  19. A ground-up approach to High Throughput Cloud Computing in High-Energy Physics

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00245123; Ganis, Gerardo; Bagnasco, Stefano

    The thesis explores various practical approaches to making existing high-throughput computing applications common in High Energy Physics work on cloud-provided resources, as well as opening the possibility of running new applications. The work is divided into two parts: first, we describe the work done at the computing facility hosted by INFN Torino to entirely convert former Grid resources into cloud ones, eventually running Grid use cases on top along with many others in a more flexible way. Integration and conversion problems are duly described. The second part covers the development of solutions for automating the orchestration of cloud workers based on the load of a batch queue, and the development of HEP applications based on ROOT's PROOF that can adapt at runtime to a changing number of workers.

  20. High-throughput computational search for strengthening precipitates in alloys

    International Nuclear Information System (INIS)

    Kirklin, S.; Saal, James E.; Hegde, Vinay I.; Wolverton, C.

    2016-01-01

    The search for high-strength alloys and precipitation-hardened systems has largely proceeded through Edisonian trial-and-error experimentation. Here, we present a novel strategy using high-throughput computational approaches to search for promising precipitate/alloy systems. We perform density functional theory (DFT) calculations of an extremely large space of ∼200,000 potential compounds in search of effective strengthening precipitates for a variety of different alloy matrices, e.g., Fe, Al, Mg, Ni, Co, and Ti. Our search strategy involves screening phases that are likely to produce coherent precipitates (based on small lattice mismatch) and are composed of relatively common alloying elements. When combined with the Open Quantum Materials Database (OQMD), we can computationally screen for precipitates that either have a stable two-phase equilibrium with the host matrix or are likely to precipitate as metastable phases. Our search recovers (for the structure types considered) nearly all currently known high-strength precipitates in a variety of fcc, bcc, and hcp matrices, giving us confidence in the strategy. In addition, we predict a number of new, currently unknown precipitate systems that should be explored experimentally as promising high-strength alloy chemistries.
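The coherency screen described in this record, keeping only candidate phases whose lattice parameter nearly matches the host matrix, amounts to a simple filter over a large table of computed structures. The phases and lattice parameters below are invented for illustration and are not OQMD data:

```python
MATRIX_A = 4.05  # host lattice parameter in angstroms (roughly fcc Al)

candidates = [  # (phase name, lattice parameter in angstroms) -- hypothetical
    ("Al3Sc", 4.10),
    ("Al3Zr", 4.09),
    ("PhaseX", 4.60),
    ("PhaseY", 3.20),
]

def coherent(phase_a, matrix_a, max_mismatch=0.05):
    """Keep phases with small lattice mismatch (likely coherent precipitates)."""
    return abs(phase_a - matrix_a) / matrix_a <= max_mismatch

screened = [name for name, a in candidates if coherent(a, MATRIX_A)]
assert screened == ["Al3Sc", "Al3Zr"]  # large-mismatch phases filtered out
```

In the actual workflow this filter would run over the ∼200,000 DFT-computed compounds, with additional criteria (common alloying elements, stable or metastable two-phase equilibrium with the host) applied in sequence to narrow the candidate list.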

  1. LC-MS/MS-based proteome profiling in Daphnia pulex and Daphnia longicephala: the Daphnia pulex genome database as a key for high throughput proteomics in Daphnia

    Directory of Open Access Journals (Sweden)

    Mayr Tobias

    2009-04-01

    Background: Daphniids, commonly known as water fleas, serve as important model systems for ecology, evolution and the environmental sciences. The sequencing and annotation of the Daphnia pulex genome both open future avenues of research on this model organism. As proteomics is not only essential to our understanding of cell function but also a powerful validation tool for predicted genes in genome annotation projects, a first proteomic dataset is presented in this article. Results: A comprehensive set of 701,274 peptide tandem mass spectra, derived from Daphnia pulex, was generated, which led to the identification of 531 proteins. To measure the impact of the Daphnia pulex filtered-models database on mass-spectrometry-based Daphnia protein identification, this result was compared with results obtained with the Swiss-Prot and Drosophila melanogaster databases. To further validate the utility of the Daphnia pulex database for research on other Daphnia species, an additional 407,778 peptide tandem mass spectra, obtained from Daphnia longicephala, were generated and evaluated, leading to the identification of 317 proteins. Conclusion: Peptides identified in our approach provide the first experimental evidence for the translation of a broad variety of predicted coding regions within the Daphnia genome. Furthermore, we demonstrate that identification of Daphnia longicephala proteins using the Daphnia pulex protein database is feasible, though with a slightly reduced identification rate. The data provided in this article clearly demonstrate that the Daphnia genome database is the key to mass-spectrometry-based high-throughput proteomics in Daphnia.

  2. Reducing the cost of semi-automated in-gel tryptic digestion and GeLC sample preparation for high-throughput proteomics.

    Science.gov (United States)

    Ruelcke, Jayde E; Loo, Dorothy; Hill, Michelle M

    2016-10-21

    Peptide generation by trypsin digestion is typically the first step in mass spectrometry-based proteomics experiments, including 'bottom-up' discovery and targeted proteomics using multiple reaction monitoring. Manual tryptic digest and the subsequent clean-up steps can add variability even before the sample reaches the analytical platform. While specialized filter plates and tips have been designed for automated sample processing, the specialty reagents required may not be accessible or feasible due to their high cost. Here, we report a lower-cost semi-automated protocol for in-gel digestion and GeLC using standard 96-well microplates. Further cost savings were realized by re-using reagent tips with optimized sample ordering. To evaluate the methodology, we compared a simple mixture of 7 proteins and a complex cell-lysate sample. The results across three replicates showed that our semi-automated protocol had performance equal to or better than a manual in-gel digestion with respect to replicate variability and level of contamination. In this paper, we also provide the Agilent Bravo method file, which can be adapted to other liquid handlers. The simplicity, reproducibility, and cost-effectiveness of our semi-automated protocol make it ideal for routine in-gel and GeLC sample preparations, as well as high throughput processing of large clinical sample cohorts. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. A Proteomic Workflow Using High-Throughput De Novo Sequencing Towards Complementation of Genome Information for Improved Comparative Crop Science.

    Science.gov (United States)

    Turetschek, Reinhard; Lyon, David; Desalegn, Getinet; Kaul, Hans-Peter; Wienkoop, Stefanie

    2016-01-01

    The proteomic study of non-model organisms, such as many crop plants, is challenging due to the lack of comprehensive genome information. Changing environmental conditions require the study and selection of adapted cultivars. Mutations inherent to cultivars hamper protein identification and thus considerably complicate qualitative and quantitative comparison in large-scale systems biology approaches. With this workflow, cultivar-specific mutations are detected from high-throughput comparative MS analyses by extracting sequence polymorphisms with de novo sequencing. Stringent criteria are suggested to filter for confident mutations. Subsequently, these polymorphisms complement the initially used database, which is then ready to use with any preferred database search algorithm. In our example, we thereby identified 26 specific mutations in two cultivars of Pisum sativum and achieved an increased number (17%) of peptide spectrum matches.

  4. Morphology control in polymer blend fibers—a high throughput computing approach

    Science.gov (United States)

    Sesha Sarath Pokuri, Balaji; Ganapathysubramanian, Baskar

    2016-08-01

    Fibers made from polymer blends have conventionally enjoyed wide use, particularly in textiles. This wide applicability is primarily aided by the ease of manufacturing such fibers. More recently, the ability to tailor the internal morphology of polymer blend fibers by carefully designing processing conditions has enabled such fibers to be used in technologically relevant applications. Some examples include anisotropic insulating properties for heat and anisotropic wicking of moisture, coaxial morphologies for optical applications, as well as fibers with high internal surface area for filtration and catalysis applications. However, identifying the appropriate processing conditions from the large space of possibilities using conventional trial-and-error approaches is a tedious and resource-intensive process. Here, we illustrate a high throughput computational approach to rapidly explore and characterize how processing conditions (specifically blend ratio and evaporation rates) affect the internal morphology of polymer blends during solvent based fabrication. We focus on a PS:PMMA system and identify two distinct classes of morphologies formed due to variations in the processing conditions. We subsequently map the processing conditions to the morphology class, thus constructing a ‘phase diagram’ that enables rapid identification of processing parameters for a specific morphology class. We finally demonstrate the potential for time-dependent processing conditions to obtain desired morphological features. This opens up the possibility of rational stage-wise design of processing pathways for tailored fiber morphology using high throughput computing.

  5. ExSTA: External Standard Addition Method for Accurate High-Throughput Quantitation in Targeted Proteomics Experiments.

    Science.gov (United States)

    Mohammed, Yassene; Pan, Jingxi; Zhang, Suping; Han, Jun; Borchers, Christoph H

    2018-03-01

    Targeted proteomics using MRM with stable-isotope-labeled internal-standard (SIS) peptides is the current method of choice for protein quantitation in complex biological matrices. Better quantitation can be achieved with the internal standard-addition method, where successive increments of synthesized natural form (NAT) of the endogenous analyte are added to each sample, a response curve is generated, and the endogenous concentration is determined at the x-intercept. Internal NAT-addition, however, requires multiple analyses of each sample, resulting in increased sample consumption and analysis time. To compare the following three methods, an MRM assay for 34 high-to-moderate abundance human plasma proteins is used: classical internal SIS-addition, internal NAT-addition, and external NAT-addition-generated in buffer using NAT and SIS peptides. Using endogenous-free chicken plasma, the accuracy is also evaluated. The internal NAT-addition outperforms the other two in precision and accuracy. However, the curves derived by internal vs. external NAT-addition differ by only ≈3.8% in slope, providing comparable accuracies and precision with good CV values. While the internal NAT-addition method may be "ideal", this new external NAT-addition can be used to determine the concentration of high-to-moderate abundance endogenous plasma proteins, providing a robust and cost-effective alternative for clinical analyses or other high-throughput applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
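
    The x-intercept determination described above reduces to a straight-line fit of the NAT-addition response curve; the endogenous level is the magnitude of the x-intercept. A hedged illustration with made-up numbers, not data from the study:

```python
import numpy as np

def endogenous_concentration(added, response):
    """Fit a line to the standard-addition response curve.
    For a positive slope, the endogenous concentration is the magnitude
    of the x-intercept, i.e. intercept / slope."""
    slope, intercept = np.polyfit(added, response, 1)
    return intercept / slope

# Hypothetical NAT increments (fmol) and measured peak-area ratios.
added = [0.0, 10.0, 20.0, 40.0]
response = [5.0, 15.0, 25.0, 45.0]   # linear: response = 1.0*added + 5.0
print(endogenous_concentration(added, response))  # ~5.0 fmol endogenous
```

    In the external variant, the curve is generated once in buffer and applied across samples, which is what removes the per-sample replicate burden of internal NAT-addition.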

  6. Elucidation of the compatible interaction between banana and Meloidogyne incognita via high-throughput proteome profiling.

    Directory of Open Access Journals (Sweden)

    Aisyafaznim Al-Idrus

    Full Text Available With a diverse host range, Meloidogyne incognita (root-knot nematode) is listed as one of the most economically important obligate parasites in agriculture. This nematode species establishes permanent feeding sites in plant root systems soon after infestation. A compatible host-nematode interaction triggers a cascade of disruptions to morphological and physiological processes in the host, leading to pathogenesis. Such disruption is reflected by altered gene expression in affected cells, detectable using molecular approaches. We employed a high-throughput proteomics approach to elucidate the events involved in a compatible banana-M. incognita interaction. This study serves as the first crucial step in developing natural banana resistance for a biologically based nematode management programme. We successfully profiled 114 Grand Naine root proteins involved in the interaction with M. incognita at the 30th and 60th day after inoculation (dai). The abundance of proteins involved in fundamental biological processes, cellular component organisation and stress responses was significantly altered in inoculated root samples. In addition, proteins in pathways associated with defence and giant cell maintenance in plants, such as phenylpropanoid biosynthesis, glycolysis and the citrate cycle, were also implicated in the infestation.

  7. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    Science.gov (United States)

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
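
    The map/reduce decomposition that such a search engine builds on can be caricatured in a few lines: the map phase pairs each spectrum with database candidates within a precursor-mass tolerance, and the reduce phase keeps the best-scoring match per spectrum. This toy sketch uses invented peptides, integer masses and a placeholder score in place of the actual K-score algorithm:

```python
# Toy "database": peptides with invented integer precursor masses.
peptides = {"PEPTIDE": 800, "PROTEIN": 780, "SCIENCE": 820}
# Toy spectra: (spectrum_id, observed_precursor_mass).
spectra = [("s1", 801), ("s2", 779)]

def map_phase(spectrum, tolerance=2):
    """Emit (spectrum_id, candidate) for every peptide within tolerance."""
    sid, mass = spectrum
    for pep, pmass in peptides.items():
        if abs(pmass - mass) <= tolerance:
            # A real engine would score fragment ions here (e.g. K-score);
            # this placeholder just rewards closer precursor masses.
            score = tolerance - abs(pmass - mass)
            yield sid, (pep, score)

def reduce_phase(pairs):
    """Shuffle on spectrum_id and keep the best-scoring candidate each."""
    best = {}
    for sid, (pep, score) in pairs:
        if sid not in best or score > best[sid][1]:
            best[sid] = (pep, score)
    return best

pairs = [kv for sp in spectra for kv in map_phase(sp)]
print(reduce_phase(pairs))
```

    In Hadoop, the map and reduce phases run on many nodes, which is what lets throughput scale with the number of processors in the cluster.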

  8. Bringing high-performance computing to the biologist's workbench: approaches, applications, and challenges

    International Nuclear Information System (INIS)

    Oehmen, C S; Cannon, W R

    2008-01-01

    Data-intensive and high-performance computing are poised to significantly impact the future of biological research, which is increasingly driven by the prevalence of high-throughput experimental methodologies for genome sequencing, transcriptomics, proteomics, and other areas. Large centers such as NIH's National Center for Biotechnology Information, The Institute for Genomic Research, and the DOE's Joint Genome Institute have made extensive use of multiprocessor architectures to deal with some of the challenges of processing, storing and curating exponentially growing genomic and proteomic datasets, thus enabling users to rapidly access a growing public data source, as well as use analysis tools transparently on high-performance computing resources. Applying this computational power to single-investigator analysis, however, often relies on users to provide their own computational resources, forcing them to endure the learning curve of porting, building, and running software on multiprocessor architectures. Solving the next generation of large-scale biology challenges using multiprocessor machines, from small clusters to emerging petascale machines, can most practically be realized if this learning curve can be minimized through a combination of workflow management, data management and resource allocation, as well as intuitive interfaces and compatibility with existing common data formats.

  9. Leveraging the Power of High Performance Computing for Next Generation Sequencing Data Analysis: Tricks and Twists from a High Throughput Exome Workflow

    Science.gov (United States)

    Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter

    2015-01-01

    Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438

  10. High-throughput screening of cellulase F mutants from multiplexed plasmid sets using an automated plate assay on a functional proteomic robotic workcell

    Directory of Open Access Journals (Sweden)

    Qureshi Nasib

    2006-05-01

    Full Text Available Abstract Background The field of plasmid-based functional proteomics requires the rapid assay of proteins expressed from plasmid libraries. Automation is essential since large sets of mutant open reading frames are being cloned for evaluation. To date no integrated automated platform is available to carry out the entire process including production of plasmid libraries, expression of cloned genes, and functional testing of expressed proteins. Results We used a functional proteomic assay in a multiplexed setting on an integrated plasmid-based robotic workcell for high-throughput screening of mutants of cellulase F, an endoglucanase from the anaerobic fungus Orpinomyces PC-2. This allowed us to identify plasmids containing optimized clones expressing mutants with improved activity at lower pH. A plasmid library of mutagenized clones of the celF gene with targeted variations in the last four codons was constructed by site-directed PCR mutagenesis and transformed into Escherichia coli. A robotic picker integrated into the workcell was used to inoculate medium in a 96-well deep well plate, combining the transformants into a multiplexed set in each well, and the plate was incubated on the workcell. Plasmids were prepared from the multiplexed culture on the liquid handler component of the workcell and used for in vitro transcription/translation. The multiplexed expressed recombinant proteins were screened for improved activity and stability in an azo-carboxymethylcellulose plate assay. The multiplexed wells containing mutants with improved activity were identified and linked back to the corresponding multiplexed cultures stored in glycerol. Spread plates were prepared from the glycerol stocks and the workcell was used to pick single colonies from the spread plates, prepare plasmid, produce recombinant protein, and assay for activity. The screening assay and subsequent deconvolution of the multiplexed wells resulted in identification of improved Cel

  11. [New-generation high-throughput technologies based 'omics' research strategy in human disease].

    Science.gov (United States)

    Yang, Xu; Jiao, Rui; Yang, Lin; Wu, Li-Ping; Li, Ying-Rui; Wang, Jun

    2011-08-01

    In recent years, new-generation high-throughput technologies, including next-generation sequencing and mass spectrometry, have been widely applied to solving biological problems, especially in the field of human disease. This data-driven, large-scale and industrialized research model enables the omnidirectional and multi-level study of human diseases from the perspectives of genomics, transcriptomics and proteomics, among others. In this paper, the latest developments in the high-throughput technologies applied to DNA, RNA, epigenomics, metagenomics and proteomics, together with some applications in translational medicine, are reviewed. At the genomics level, exome sequencing has been a hot spot of recent research; however, the advantage of whole-genome resequencing for detecting large structural variants across the whole genome is coming to stand out as sequencing costs drop, which also makes personalized genome-based medicine feasible. At the transcriptomics level, for example, small RNA sequencing can be used to detect known and predict unknown miRNAs. These small RNAs could not only serve as biomarkers for disease diagnosis and prognosis, but also show potential for disease treatment. At the proteomics level, for example, targeted proteomics can be used to detect possible disease-related proteins or peptides, which can serve as useful indices for clinical staging and typing. Furthermore, the application and development of trans-omics studies in disease research are briefly introduced. By applying bioinformatics technologies to integrate multi-omics data, the mechanism, diagnosis and therapy of disease are likely to be systematically explained and realized, providing powerful tools for disease diagnosis and therapies.

  12. High-throughput proteomics : optical approaches.

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, George S.

    2008-09-01

    Realistic cell models could greatly accelerate our ability to engineer biochemical pathways and the production of valuable organic products, which would be of great use in the development of biofuels, pharmaceuticals, and the crops for the next green revolution. However, this level of engineering will require a great deal more knowledge about the mechanisms of life than is currently available. In particular, we need to understand the interactome (which proteins interact) as it is situated in the three dimensional geometry of the cell (i.e., a situated interactome), and the regulation/dynamics of these interactions. Methods for optical proteomics have become available that allow the monitoring and even disruption/control of interacting proteins in living cells. Here, a range of these methods is reviewed with respect to their role in elucidating the interactome and the relevant spatial localizations. Development of these technologies and their integration into the core competencies of research organizations can position whole institutions and teams of researchers to lead in both the fundamental science and the engineering applications of cellular biology. That leadership could be particularly important with respect to problems of national urgency centered around security, biofuels, and healthcare.

  13. High-throughput fractionation of human plasma for fast enrichment of low- and high-abundance proteins.

    Science.gov (United States)

    Breen, Lucas; Cao, Lulu; Eom, Kirsten; Srajer Gajdosik, Martina; Camara, Lila; Giacometti, Jasminka; Dupuy, Damian E; Josic, Djuro

    2012-05-01

    Fast, cost-effective and reproducible isolation of IgM from plasma is invaluable to the study of IgM and subsequent understanding of the human immune system. Additionally, vast amounts of information regarding human physiology and disease can be derived from analysis of the low abundance proteome of the plasma. In this study, methods were optimized for both the high-throughput isolation of IgM from human plasma, and the high-throughput isolation and fractionation of low abundance plasma proteins. To optimize the chromatographic isolation of IgM from human plasma, many variables were examined including chromatography resin, mobile phases, and order of chromatographic separations. Purification of IgM was achieved most successfully through isolation of immunoglobulin from human plasma using Protein A chromatography with a specific resin followed by subsequent fractionation using QA strong anion exchange chromatography. Through these optimization experiments, an additional method was established to prepare plasma for analysis of low abundance proteins. This method involved chromatographic depletion of high-abundance plasma proteins and reduction of plasma proteome complexity through further chromatographic fractionation. Purification of IgM was achieved with high purity as confirmed by SDS-PAGE and IgM-specific immunoblot. Isolation and fractionation of low abundance protein was also performed successfully, as confirmed by SDS-PAGE and mass spectrometry analysis followed by label-free quantitative spectral analysis. The level of purity of the isolated IgM allows for further IgM-specific analysis of plasma samples. The developed fractionation scheme can be used for high throughput screening of human plasma in order to identify low and high abundance proteins as potential prognostic and diagnostic disease biomarkers.

  14. Semen proteomics and male infertility.

    Science.gov (United States)

    Jodar, Meritxell; Soler-Ventura, Ada; Oliva, Rafael

    2017-06-06

    Semen is a complex body fluid containing an admixture of spermatozoa suspended in secretions from the testes and epididymis which are mixed at the time of ejaculation with secretions from other accessory sex glands such as the prostate and seminal vesicles. High-throughput technologies have revealed that, contrary to the idea that sperm cells are simply a silent delivery vehicle of the male genome to the oocyte, the sperm cells in fact provide both a specific epigenetically marked DNA together with a complex population of proteins and RNAs crucial for embryogenesis. Similarly, -omic technologies have also enlightened that seminal fluid seems to play a much greater role than simply being a medium to carry the spermatozoa through the female reproductive tract. In the present review, we briefly overview the sperm cell biology, consider the key issues in sperm and seminal fluid sample preparation for high-throughput proteomic studies, describe the current state of the sperm and seminal fluid proteomes generated by high-throughput proteomic technologies and provide new insights into the potential communication between sperm and seminal fluid. In addition, comparative proteomic studies open a window to explore the potential pathogenic mechanisms of infertility and the discovery of potential biomarkers with clinical significance. The review updates the numerous proteomics studies performed on semen, including spermatozoa and seminal fluid. In addition, an integrative analysis of the testes, sperm and seminal fluid proteomes is also included providing insights into the molecular mechanisms that regulate the generation, maturation and transit of spermatozoa. Furthermore, the compilation of several differential proteomic studies focused on male infertility reveals potential pathways disturbed in specific subtypes of male infertility and points out towards future research directions in the field. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. High-Throughput Proteomics Using High Efficiency Multiple-Capillary Liquid Chromatography With On-Line High-Performance ESI FTICR Mass Spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Yufeng (Battelle, Pacific NW Lab); Tolic, Nikola (Battelle, Pacific NW Lab); Zhao, Rui (Assoc. Western University); Pasa Tolic, Ljiljana (Battelle, Pacific NW Lab); Li, Lingjun (Univ. of Illinois, Urbana-Champaign); Berger, Scott J. (Assoc. Western University); Harkewicz, Richard (Battelle, Pacific NW Lab); Anderson, Gordon A. (Battelle, Pacific NW Lab); Belov, Mikhail E. (Battelle, Pacific NW Lab); Smith, Richard D. (Battelle, Pacific NW Lab)

    2000-12-01

    We report on the design and application of a high-efficiency multiple-capillary liquid chromatography (LC) system for high-throughput proteome analysis. The multiple-capillary LC system was operated at a pressure of 10,000 psi using commercial LC pumps to deliver the mobile phase and newly developed passive feedback valves to switch the mobile phase flow and introduce samples. The multiple-capillary LC system was composed of several serially connected dual-capillary column devices. The dual-capillary column approach was designed to eliminate the time delay for regeneration (or equilibration) of the capillary column after its use under mobile phase gradient conditions (i.e. one capillary column was used for separation while the other was washed using mobile phase A). The serially connected dual-capillary columns and ESI sources were operated independently, and could be used for either 'backup' operation or with other mass spectrometer(s). This high-efficiency multiple-capillary LC system uses switching valves for all operations and is highly amenable to automation. The separation efficiency of the dual-capillary column device, optimal capillary dimensions (column length and packed particle size), suitable mobile phases for electrospray, and capillary regeneration were investigated. A high magnetic field (11.5 tesla) Fourier transform ion cyclotron resonance (FTICR) mass spectrometer was coupled on-line with this high-efficiency multiple-capillary LC system through an electrospray ionization source. The capillary LC provided a peak capacity of ~600, and the 2-D capillary LC-FTICR provided a combined resolving power of > 6 x 10^7 for polypeptide isotopic distributions. For yeast cellular tryptic digests, > 100,000 polypeptides were typically detected, and ~1,000 proteins can be characterized in a single run.

  16. High Throughput Computing Impact on Meta Genomics (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    Energy Technology Data Exchange (ETDEWEB)

    Gore, Brooklin

    2011-10-12

    This presentation includes a brief background on High Throughput Computing, correlating gene transcription factors, optical mapping, genotype to phenotype mapping via QTL analysis, and current work on next gen sequencing.

  17. High-Throughput Quantitative Proteomic Analysis of Dengue Virus Type 2 Infected A549 Cells

    Science.gov (United States)

    Chiu, Han-Chen; Hannemann, Holger; Heesom, Kate J.; Matthews, David A.; Davidson, Andrew D.

    2014-01-01

    Disease caused by dengue virus is a global health concern with up to 390 million individuals infected annually worldwide. There are no vaccines or antiviral compounds available to either prevent or treat dengue disease which may be fatal. To increase our understanding of the interaction of dengue virus with the host cell, we analyzed changes in the proteome of human A549 cells in response to dengue virus type 2 infection using stable isotope labelling in cell culture (SILAC) in combination with high-throughput mass spectrometry (MS). Mock and infected A549 cells were fractionated into nuclear and cytoplasmic extracts before analysis to identify proteins that redistribute between cellular compartments during infection and reduce the complexity of the analysis. We identified and quantified 3098 and 2115 proteins in the cytoplasmic and nuclear fractions respectively. Proteins that showed a significant alteration in amount during infection were examined using gene enrichment, pathway and network analysis tools. The analyses revealed that dengue virus infection modulated the amounts of proteins involved in the interferon and unfolded protein responses, lipid metabolism and the cell cycle. The SILAC-MS results were validated for a select number of proteins over a time course of infection by Western blotting and immunofluorescence microscopy. Our study demonstrates for the first time the power of SILAC-MS for identifying and quantifying novel changes in cellular protein amounts in response to dengue virus infection. PMID:24671231

  18. High-throughput quantitative proteomic analysis of dengue virus type 2 infected A549 cells.

    Directory of Open Access Journals (Sweden)

    Han-Chen Chiu

    Full Text Available Disease caused by dengue virus is a global health concern with up to 390 million individuals infected annually worldwide. There are no vaccines or antiviral compounds available to either prevent or treat dengue disease, which may be fatal. To increase our understanding of the interaction of dengue virus with the host cell, we analyzed changes in the proteome of human A549 cells in response to dengue virus type 2 infection using stable isotope labelling in cell culture (SILAC) in combination with high-throughput mass spectrometry (MS). Mock and infected A549 cells were fractionated into nuclear and cytoplasmic extracts before analysis to identify proteins that redistribute between cellular compartments during infection and reduce the complexity of the analysis. We identified and quantified 3098 and 2115 proteins in the cytoplasmic and nuclear fractions respectively. Proteins that showed a significant alteration in amount during infection were examined using gene enrichment, pathway and network analysis tools. The analyses revealed that dengue virus infection modulated the amounts of proteins involved in the interferon and unfolded protein responses, lipid metabolism and the cell cycle. The SILAC-MS results were validated for a select number of proteins over a time course of infection by Western blotting and immunofluorescence microscopy. Our study demonstrates for the first time the power of SILAC-MS for identifying and quantifying novel changes in cellular protein amounts in response to dengue virus infection.
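
    Per protein, SILAC quantitation of this kind reduces to a ratio of the two isotope-channel intensities, usually reported on a log2 scale. A minimal sketch; the channel assignment (light = infected, heavy = mock) and the intensities are illustrative assumptions, not values from the study:

```python
import math

def silac_log2_ratio(light, heavy):
    """log2 fold change between the two SILAC channel intensities."""
    return math.log2(light / heavy)

# Hypothetical summed peptide intensities for one protein.
print(round(silac_log2_ratio(4.0e5, 1.0e5), 2))  # -> 2.0 (4-fold up)
```

    Proteins with a log2 ratio far from zero across replicates are the ones fed into the enrichment and pathway analyses described above.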

  19. High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME

    Science.gov (United States)

    Otis, Richard A.; Liu, Zi-Kui

    2017-05-01

    One foundational component of integrated computational materials engineering (ICME) and the Materials Genome Initiative is computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, our recent efforts are presented in terms of developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput first-principles calculations and the CALPHAD method, along with their potential propagation to downstream ICME modeling and simulations.
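
    One common pattern in this kind of uncertainty quantification is Monte Carlo propagation of model-parameter uncertainty into a computed thermodynamic quantity. A toy sketch for a regular-solution Gibbs energy of mixing; the interaction parameter, its assumed uncertainty, and the temperature are invented for illustration, and the tools described in the article are far more sophisticated:

```python
import math
import random

R = 8.314  # gas constant, J/(mol K)

def gibbs_mix(x, T, L):
    """Regular-solution Gibbs energy of mixing (J/mol) at composition x."""
    return R * T * (x * math.log(x) + (1 - x) * math.log(1 - x)) + L * x * (1 - x)

# Propagate an assumed uncertainty in the interaction parameter L
# (mean -10 kJ/mol, std 1 kJ/mol) into G_mix at x = 0.5, T = 1000 K.
random.seed(0)
samples = [gibbs_mix(0.5, 1000.0, random.gauss(-10e3, 1e3)) for _ in range(5000)]
mean = sum(samples) / len(samples)
std = (sum((g - mean) ** 2 for g in samples) / (len(samples) - 1)) ** 0.5
print(round(mean), round(std))  # roughly -8263 +/- 250 J/mol
```

    The spread of the sampled Gibbs energies is what gets propagated downstream, e.g. into computed phase boundaries.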

  20. Multiplex High-Throughput Targeted Proteomic Assay To Identify Induced Pluripotent Stem Cells.

    Science.gov (United States)

    Baud, Anna; Wessely, Frank; Mazzacuva, Francesca; McCormick, James; Camuzeaux, Stephane; Heywood, Wendy E; Little, Daniel; Vowles, Jane; Tuefferd, Marianne; Mosaku, Olukunbi; Lako, Majlinda; Armstrong, Lyle; Webber, Caleb; Cader, M Zameel; Peeters, Pieter; Gissen, Paul; Cowley, Sally A; Mills, Kevin

    2017-02-21

    Induced pluripotent stem cells have great potential as a human model system in regenerative medicine, disease modeling, and drug screening. However, their use in medical research is hampered by laborious reprogramming procedures that yield low numbers of induced pluripotent stem cells. For further applications in research, only the best, competent clones should be used. The standard assays for pluripotency are based on genomic approaches, which take up to 1 week to perform and incur significant cost. Therefore, there is a need for a rapid and cost-effective assay able to distinguish between pluripotent and nonpluripotent cells. Here, we describe a novel multiplexed, high-throughput, and sensitive peptide-based multiple reaction monitoring mass spectrometry assay, allowing for the identification and absolute quantitation of multiple core transcription factors and pluripotency markers. This assay provides simple, high-throughput classification of cells as either pluripotent or nonpluripotent in a 7-min analysis, while being more cost-effective than conventional genomic tests.
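
    MRM-based absolute quantitation of the kind described above typically computes the endogenous amount from the endogenous-to-standard peak-area ratio and the known spiked amount of the labeled standard. A minimal sketch with hypothetical numbers:

```python
def absolute_amount(light_area, heavy_area, spiked_fmol):
    """Endogenous amount from the endogenous (light) / labeled standard (heavy)
    peak-area ratio times the known spiked amount of the standard."""
    return (light_area / heavy_area) * spiked_fmol

# Hypothetical MRM peak areas and a 50 fmol spike of labeled peptide.
print(absolute_amount(2.4e5, 1.2e5, 50.0))  # -> 100.0 fmol endogenous
```

    Running one such calculation per marker peptide in a multiplexed run is what turns a single short acquisition into an absolute pluripotency-marker panel.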

  1. Serial isoelectric focusing as an effective and economic way to obtain maximal resolution and high-throughput in 2D-based comparative proteomics of scarce samples: proof-of-principle.

    Science.gov (United States)

    Farhoud, Murtada H; Wessels, Hans J C T; Wevers, Ron A; van Engelen, Baziel G; van den Heuvel, Lambert P; Smeitink, Jan A

    2005-01-01

    In 2D-based comparative proteomics of scarce samples, such as limited patient material, established methods for prefractionation and subsequent use of different narrow range IPG strips to increase overall resolution are difficult to apply. Also, a high number of samples, a prerequisite for drawing meaningful conclusions when pathological and control samples are considered, will increase the associated amount of work almost exponentially. Here, we introduce a novel, effective, and economic method designed to obtain maximum 2D resolution while maintaining the high throughput necessary to perform large-scale comparative proteomics studies. The method is based on connecting different IPG strips serially head-to-tail so that a complete line of different IPG strips with sequential pH regions can be focused in the same experiment. We show that when 3 IPG strips (covering together the pH range of 3-11) are connected head-to-tail an optimal resolution is achieved along the whole pH range. Sample consumption, time required, and associated costs are reduced by almost 70%, and the workload is reduced significantly.

  2. Computational and statistical methods for high-throughput analysis of post-translational modifications of proteins

    DEFF Research Database (Denmark)

    Schwämmle, Veit; Braga, Thiago Verano; Roepstorff, Peter

    2015-01-01

    The investigation of post-translational modifications (PTMs) represents one of the main research focuses for the study of protein function and cell signaling. Mass spectrometry instrumentation with increasing sensitivity, improved protocols for PTM enrichment, and recently established pipelines for high-throughput experiments allow large-scale identification and quantification of several PTM types. This review addresses the concurrently emerging challenges for the computational analysis of the resulting data and presents PTM-centered approaches for spectra identification and statistical analysis.

  3. Unparalleled sample treatment throughput for proteomics workflows relying on ultrasonic energy.

    Science.gov (United States)

    Jorge, Susana; Araújo, J E; Pimentel-Santos, F M; Branco, Jaime C; Santos, Hugo M; Lodeiro, Carlos; Capelo, J L

    2018-02-01

    We report on the new microplate horn ultrasonic device as a powerful tool to speed up proteomics workflows with unparalleled throughput. Ninety-six complex proteomes were digested simultaneously in 4 min. Variables such as ultrasonication time, ultrasonication amplitude, and protein-to-enzyme ratio were optimized. The "classic" method relying on overnight protein digestion (12 h) and the sonoreactor-based method were also employed for comparative purposes. We found the protein digestion efficiency to be homogeneously distributed over the entire microplate horn surface using the following conditions: 4 min sonication time and 25% amplitude. Using this approach, patients with lymphoma and myeloma were classified using principal component analysis and a 2D gel-mass spectrometry based approach. Furthermore, we demonstrate its excellent performance by using MALDI mass spectrometry based profiling as a fast way to classify patients with rheumatoid arthritis, systemic lupus erythematosus, and ankylosing spondylitis. Finally, the speed and simplicity of the method were demonstrated by clustering 90 individuals: patients with knee osteoarthritis (30), patients with a prosthesis (30, control group), and healthy individuals (30) with no history of joint disease. Overall, the new approach allows profiling a disease in just one week while complying with the minimalism rules outlined by Halls. Copyright © 2017 Elsevier B.V. All rights reserved.
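    Principal component analysis of the kind used here to separate patient groups projects each profile onto its directions of maximal variance. A minimal two-feature sketch in plain Python (the data points are invented for illustration; real 2D gel or MALDI profiles would have thousands of features):

```python
import math

def pca_1d(points):
    """Project 2-D feature vectors onto their first principal component."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Sample covariance matrix entries [[a, b], [b, c]]
    a = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    c = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    theta = 0.5 * math.atan2(2 * b, a - c)  # angle of the leading eigenvector
    ux, uy = math.cos(theta), math.sin(theta)
    return [(p[0] - mx) * ux + (p[1] - my) * uy for p in points]

# Two hypothetical patient groups that separate along PC1
group_a = [(1.0, 1.1), (1.2, 0.9), (0.9, 1.0)]
group_b = [(3.0, 3.2), (3.1, 2.9), (2.9, 3.0)]
scores = pca_1d(group_a + group_b)
print(max(scores[:3]) < min(scores[3:]))  # → True: the groups separate on PC1
```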

  4. Machine learning in computational biology to accelerate high-throughput protein expression

    DEFF Research Database (Denmark)

    Sastry, Anand; Monk, Jonathan M.; Tegel, Hanna

    2017-01-01

    and machine learning identifies protein properties that hinder the HPA high-throughput antibody production pipeline. We predict protein expression and solubility with accuracies of 70% and 80%, respectively, based on a subset of key properties (aromaticity, hydropathy and isoelectric point). We guide...... the selection of protein fragments based on these characteristics to optimize high-throughput experimentation. Availability and implementation: We present the machine learning workflow as a series of IPython notebooks hosted on GitHub (https://github.com/SBRG/Protein_ML). The workflow can be used as a template...
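    At least two of the key properties named above, hydropathy and aromaticity, can be computed directly from a protein sequence. A hedged sketch using the standard Kyte-Doolittle hydropathy scale; this is a generic illustration, not the feature code of the published workflow:

```python
# Kyte-Doolittle hydropathy values per amino acid
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def gravy(seq):
    """Grand average of hydropathy: mean Kyte-Doolittle value over the sequence."""
    return sum(KD[aa] for aa in seq) / len(seq)

def aromaticity(seq):
    """Fraction of aromatic residues (Phe, Trp, Tyr)."""
    return sum(seq.count(aa) for aa in "FWY") / len(seq)

seq = "MFLWYAAK"  # toy fragment
print(round(gravy(seq), 2), aromaticity(seq))  # → 0.75 0.375
```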

  5. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Science.gov (United States)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture achieves up to 48% higher throughput in comparison with prior work. The proposed architecture reaches a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8 K × 4 K video format at 132 fps.
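    For context, SAO edge-offset mode classifies each reconstructed sample by comparing it with its two neighbors along a chosen direction. A sketch of the standard edge-index computation (the category remapping HEVC applies afterwards is omitted here):

```python
def sign(x):
    """Three-valued sign: -1, 0, or 1."""
    return (x > 0) - (x < 0)

def sao_edge_index(left, cur, right):
    """HEVC SAO edge index: 2 + sign(c - a) + sign(c - b).
    Low values (0/1) indicate local minima / concave edges, high values
    (3/4) convex edges / local maxima; 2 means no offset is applied."""
    return 2 + sign(cur - left) + sign(cur - right)

print(sao_edge_index(10, 5, 10))   # local minimum → 0
print(sao_edge_index(10, 10, 10))  # flat region → 2
print(sao_edge_index(5, 10, 5))    # local maximum → 4
```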

  6. A high-throughput surface plasmon resonance biosensor based on differential interferometric imaging

    International Nuclear Information System (INIS)

    Wang, Daqian; Ding, Lili; Zhang, Wei; Zhang, Enyao; Yu, Xinglong; Luo, Zhaofeng; Ou, Huichao

    2012-01-01

    A new high-throughput surface plasmon resonance (SPR) biosensor based on differential interferometric imaging is reported. The two SPR interferograms of the sensing surface are imaged on two CCD cameras. The phase difference between the two interferograms is 180°. The refractive index related factor (RIRF) of the sensing surface is calculated from the two simultaneously acquired interferograms. The simulation results indicate that the RIRF exhibits a linear relationship with the refractive index of the sensing surface and is unaffected by the noise, drift and intensity distribution of the light source. Affinity and kinetic information can be extracted in real time from continuously acquired RIRF distributions. The results of refractometry experiments show that the dynamic detection range of the SPR differential interferometric imaging system can be over 0.015 refractive index units (RIU), with a refractive index resolution down to 0.45 RU (1 RU = 1 × 10⁻⁶ RIU). Imaging and protein microarray experiments demonstrate the capability for high-throughput detection, and aptamer experiments demonstrate that the SPR sensor based on differential interferometric imaging is well suited to high-throughput aptamer kinetic evaluation. These results suggest that, with further improvement, this biosensor has the potential to be utilized in proteomics and drug discovery. (paper)

  7. Streptococcus mutans Protein Synthesis during Mixed-Species Biofilm Development by High-Throughput Quantitative Proteomics

    Science.gov (United States)

    Klein, Marlise I.; Xiao, Jin; Lu, Bingwen; Delahunty, Claire M.; Yates, John R.; Koo, Hyun

    2012-01-01

    Biofilms formed on tooth surfaces are comprised of mixed microbiota enmeshed in an extracellular matrix. Oral biofilms are constantly exposed to environmental changes, which influence the microbial composition, matrix formation and expression of virulence. Streptococcus mutans and sucrose are key modulators associated with the evolution of virulent-cariogenic biofilms. In this study, we used a high-throughput quantitative proteomics approach to examine how S. mutans produces relevant proteins that facilitate its establishment and optimal survival during mixed-species biofilm development induced by sucrose. Biofilms of S. mutans, alone or mixed with Actinomyces naeslundii and Streptococcus oralis, were initially formed onto saliva-coated hydroxyapatite surfaces under carbohydrate-limiting conditions. Sucrose (1%, w/v) was then introduced to cause environmental changes and to induce biofilm accumulation. A multidimensional protein identification technology (MudPIT) approach detected up to 60% of proteins encoded by S. mutans within biofilms. Specific proteins associated with exopolysaccharide matrix assembly, metabolic and stress adaptation processes were highly abundant as the biofilms transitioned from earlier to later developmental stages following sucrose introduction. Our results indicate that S. mutans within a mixed-species biofilm community increases the expression of specific genes associated with glucan synthesis and remodeling (gtfBC, dexA) and glucan-binding (gbpB) during this transition (P < 0.05). Moreover, S. mutans up-regulates specific adaptation mechanisms to cope with acidic environments (F1F0-ATPase system, fatty acid biosynthesis, branched-chain amino acid metabolism), and molecular chaperones (GroEL). Interestingly, the protein levels and gene expression are in general augmented when S. mutans forms mixed-species biofilms (vs. single-species biofilms) demonstrating fundamental differences in the matrix assembly, survival and biofilm maintenance in the presence of other

  8. Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays (SOT)

    Science.gov (United States)

    Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays DE DeGroot, RS Thomas, and SO SimmonsNational Center for Computational Toxicology, US EPA, Research Triangle Park, NC USAThe EPA’s ToxCast program utilizes a wide variety of high-throughput s...

  9. Design and initial characterization of the SC-200 proteomics standard mixture.

    Science.gov (United States)

    Bauman, Andrew; Higdon, Roger; Rapson, Sean; Louie, Brenton; Hogan, Jason; Stacy, Robin; Napuli, Alberto; Guo, Wenjin; van Voorhis, Wesley; Roach, Jared; Lu, Vincent; Landorf, Elizabeth; Stewart, Elizabeth; Kolker, Natali; Collart, Frank; Myler, Peter; van Belle, Gerald; Kolker, Eugene

    2011-01-01

    High-throughput (HTP) proteomics studies generate large amounts of data. Interpretation of these data requires effective approaches to distinguish noise from biological signal, particularly as instrument and computational capacity increase and studies become more complex. Resolving this issue requires validated and reproducible methods and models, which in turn require complex experimental and computational standards. The absence of appropriate standards and data sets for validating experimental and computational workflows hinders the development of HTP proteomics methods. Most protein standards are simple mixtures of proteins or peptides, or undercharacterized reference standards in which the identities and concentrations of the constituent proteins are unknown. The Seattle Children's 200 (SC-200) proposed proteomics standard mixture is the next step toward developing realistic, fully characterized HTP proteomics standards. The SC-200 exhibits a unique modular design to extend its functionality, and consists of 200 proteins of known identities and molar concentrations from 6 microbial genomes, distributed into 10 molar concentration tiers spanning a 1,000-fold range. We describe the SC-200's design, potential uses, and initial characterization. We identified 84% of SC-200 proteins with an LTQ-Orbitrap and 65% with an LTQ-Velos (false discovery rate = 1% for both). There were obvious trends in success rate, sequence coverage, and spectral counts with protein concentration; however, protein identification, sequence coverage, and spectral counts vary greatly within concentration levels.
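    A false discovery rate like the 1% quoted above is typically estimated by the target-decoy approach: spectra are searched against real and reversed sequences, and the score cutoff is chosen so that the decoy fraction among accepted matches stays below the desired rate. An illustrative sketch (not the authors' actual scoring code; the toy data use a loose 40% rate so the example stays tiny):

```python
def fdr_threshold(scores, labels, fdr=0.01):
    """Return the lowest score cutoff at which #decoys / #targets <= fdr.

    scores: PSM scores (higher is better); labels: 'target' or 'decoy'.
    Returns None if no cutoff reaches the requested FDR."""
    order = sorted(zip(scores, labels), key=lambda p: -p[0])
    targets = decoys = 0
    best = None
    for score, label in order:
        if label == 'target':
            targets += 1
        else:
            decoys += 1
        if targets and decoys / targets <= fdr:
            best = score  # accepting everything down to this score is OK
    return best

psm_scores = [9, 8, 7, 6, 5]
psm_labels = ['target', 'target', 'decoy', 'target', 'decoy']
print(fdr_threshold(psm_scores, psm_labels, fdr=0.4))  # → 6
```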

  10. ATAQS: A computational software tool for high throughput transition optimization and validation for selected reaction monitoring mass spectrometry

    Directory of Open Access Journals (Sweden)

    Ramos Hector

    2011-03-01

    Full Text Available Abstract Background Since its inception, proteomics has essentially operated in a discovery mode with the goal of identifying and quantifying the maximal number of proteins in a sample. Increasingly, proteomic measurements are also supporting hypothesis-driven studies, in which a predetermined set of proteins is consistently detected and quantified in multiple samples. Selected reaction monitoring (SRM) is a targeted mass spectrometric technique that supports the detection and quantification of specific proteins in complex samples at high sensitivity and reproducibility. Here, we describe ATAQS, an integrated software platform that supports all stages of targeted, SRM-based proteomics experiments including target selection, transition optimization and post-acquisition data analysis. This software will significantly facilitate the use of targeted proteomic techniques and contribute to the generation of highly sensitive, reproducible and complete datasets that are particularly critical for the discovery and validation of targets in hypothesis-driven studies in systems biology. Result We introduce a new open source software pipeline, ATAQS (Automated and Targeted Analysis with Quantitative SRM), which consists of a number of modules that collectively support the SRM assay development workflow for targeted proteomic experiments (project management; generation of protein, peptide and transition lists; and validation of peptide detection by SRM). ATAQS provides a flexible pipeline for end-users by allowing the workflow to start or end at any point of the pipeline, and for computational biologists, by enabling the easy extension of Java algorithm classes for their own algorithm plug-ins or connection via an external web site. This integrated system supports all steps in an SRM-based experiment and provides a user-friendly GUI that can be run by any operating system that allows the installation of the Mozilla Firefox web browser. Conclusions Targeted
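    Transition optimization, one of the ATAQS stages, amounts to keeping the few most intense, best-behaved fragment ions per peptide. A toy sketch of the intensity-ranking step (the library contents and function name are invented for illustration; real transition selection also considers interference and retention time):

```python
def select_transitions(fragments, n=3):
    """fragments: {peptide: [(ion_name, intensity), ...]} from a spectral
    library; keep the n most intense fragment ions per peptide."""
    return {pep: [ion for ion, _ in sorted(ions, key=lambda x: -x[1])[:n]]
            for pep, ions in fragments.items()}

library = {"ELVISK": [("y3", 120.0), ("y4", 800.0), ("b2", 45.0), ("y5", 300.0)]}
print(select_transitions(library, n=2))  # → {'ELVISK': ['y4', 'y5']}
```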

  11. High-throughput measurement of rice tillers using a conveyor equipped with x-ray computed tomography

    Science.gov (United States)

    Yang, Wanneng; Xu, Xiaochun; Duan, Lingfeng; Luo, Qingming; Chen, Shangbin; Zeng, Shaoqun; Liu, Qian

    2011-02-01

    Tillering is one of the most important agronomic traits because the number of shoots per plant determines panicle number, a key component of grain yield. The conventional method of counting tillers is still manual. Under conditions of mass measurement, accuracy and efficiency gradually degrade as experienced staff fatigue. Thus, manual measurement, including counting and recording, is not only time consuming but also lacks objectivity. To automate this process, we developed a high-throughput facility, dubbed the high-throughput system for measuring automatically rice tillers (H-SMART), for measuring rice tillers based on a conventional x-ray computed tomography (CT) system and an industrial conveyor. Each pot-grown rice plant was delivered into the CT system for scanning via the conveyor equipment. A filtered back-projection algorithm was used to reconstruct the transverse section image of the rice culms. The number of tillers was then automatically extracted by image segmentation. To evaluate the accuracy of this system, three batches of rice at different growth stages (tillering, heading, or filling) were tested, yielding mean absolute errors of 0.22, 0.36, and 0.36, respectively. Subsequently, the complete machine was used under industry conditions to estimate its efficiency, which was 4320 pots per continuous 24 h workday. Thus, the H-SMART can determine the number of tillers of pot-grown rice plants, providing three advantages over the manual method: absence of human disturbance, automation, and high throughput. This facility expands the application of agricultural photonics in plant phenomics.
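    Once the cross-section image is segmented, counting tillers reduces to counting connected foreground regions. A plain-Python sketch of that step (illustrative; the abstract does not describe H-SMART's segmentation at this level of detail):

```python
from collections import deque

def count_tillers(mask):
    """Count 4-connected foreground blobs in a binary cross-section image."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # found a new culm cross-section
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                    # flood-fill the whole blob
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

section = [[1, 1, 0, 0],
           [1, 0, 0, 1],
           [0, 0, 0, 1],
           [1, 0, 0, 0]]
print(count_tillers(section))  # → 3
```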

  12. Extraction of drainage networks from large terrain datasets using high throughput computing

    Science.gov (United States)

    Gong, Jianya; Xie, Jibo

    2009-02-01

    Advanced digital photogrammetry and remote sensing technology produce large terrain datasets (LTD). How to process and use these LTD has become a major challenge for GIS users. Extracting drainage networks, which are fundamental to hydrological applications, from LTD is one of the typical tasks of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond the GB size. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage network extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of regular 1-dimensional (strip-wise) or 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. An HTC environment is employed to test the proposed methods with real datasets.
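    Drainage extraction from a DEM typically begins with a flow-direction pass such as the D8 method, which routes each cell to its steepest-descent neighbor. A minimal sketch of that step (illustrative only; the paper's decomposition operates on far larger grids):

```python
def d8_directions(dem):
    """Steepest-descent (D8) flow direction for each interior cell of a DEM.
    Returns {(row, col): (drow, dcol)} pointing to the neighbor with the
    steepest downhill slope, or None for pits (no downhill neighbor)."""
    dirs = {}
    for r in range(1, len(dem) - 1):
        for c in range(1, len(dem[0]) - 1):
            best, steepest = None, 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == dc == 0:
                        continue
                    dist = (dr * dr + dc * dc) ** 0.5  # diagonal neighbors are farther
                    slope = (dem[r][c] - dem[r + dr][c + dc]) / dist
                    if slope > steepest:
                        best, steepest = (dr, dc), slope
            dirs[(r, c)] = best
    return dirs

dem = [[9, 9, 9],
       [9, 5, 9],
       [9, 1, 9]]
print(d8_directions(dem))  # → {(1, 1): (1, 0)}: the center drains to the low cell below
```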

  13. High-throughput theoretical design of lithium battery materials

    International Nuclear Information System (INIS)

    Ling Shi-Gang; Gao Jian; Xiao Rui-Juan; Chen Li-Quan

    2016-01-01

    The rapid evolution of high-throughput theoretical design schemes to discover new lithium battery materials is reviewed, including high-capacity cathodes, low-strain cathodes, anodes, solid state electrolytes, and electrolyte additives. With the development of efficient theoretical methods and inexpensive computers, high-throughput theoretical calculations have played an increasingly important role in the discovery of new materials. With the help of automatic simulation flows, many types of materials can be screened, optimized and designed from a structural database according to specific search criteria. In advanced cell technology, new materials for next-generation lithium batteries are of great significance for achieving improved performance; representative criteria are higher energy density, better safety, and faster charge/discharge speed. (topical review)

  14. High-throughput GPU-based LDPC decoding

    Science.gov (United States)

    Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin

    2010-08-01

    Low-density parity-check (LDPC) code is a linear block code known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, WI-FI and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271× speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.

  15. The impact of computer science in molecular medicine: enabling high-throughput research.

    Science.gov (United States)

    de la Iglesia, Diana; García-Remesal, Miguel; de la Calle, Guillermo; Kulikowski, Casimir; Sanz, Ferran; Maojo, Víctor

    2013-01-01

    The Human Genome Project and the explosion of high-throughput data have transformed the areas of molecular and personalized medicine, which are producing a wide range of studies and experimental results and providing new insights for developing medical applications. Research in many interdisciplinary fields is resulting in data repositories and computational tools that support a wide diversity of tasks: genome sequencing, genome-wide association studies, analysis of genotype-phenotype interactions, drug toxicity and side effects assessment, prediction of protein interactions and diseases, development of computational models, biomarker discovery, and many others. The authors of the present paper have developed several inventories covering tools, initiatives and studies in different computational fields related to molecular medicine: medical informatics, bioinformatics, clinical informatics and nanoinformatics. With these inventories, created by mining the scientific literature, we have carried out several reviews of these fields, providing researchers with a useful framework to locate, discover, search and integrate resources. In this paper we present an analysis of the state of the art in computational resources for molecular medicine, based on the results compiled in our inventories as well as results extracted from a systematic review of the literature and other scientific media, considering the impact of the related publications and the available data and software resources. It aims to provide information that can be useful to support ongoing research and work to improve diagnostics and therapeutics based on molecular-level insights.

  16. High-throughput bioinformatics with the Cyrille2 pipeline system

    Directory of Open Access Journals (Sweden)

    de Groot Joost CW

    2008-02-01

    Full Text Available Abstract Background Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: 1) a web-based, graphical user interface (GUI) that enables a pipeline operator to manage the system; 2) the Scheduler, which forms the functional core of the system and which tracks what data enter the system and determines what jobs must be scheduled for execution; and 3) the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high-throughput, flexible bioinformatics pipelines.
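    The Scheduler's core job, deciding which interdependent pipeline steps are ready to run, is a topological ordering over the dependency graph. A hedged sketch (the job names and dict-based representation are invented for illustration, not Cyrille2's actual data model):

```python
from collections import defaultdict, deque

def schedule(jobs):
    """Topologically order pipeline jobs.
    jobs: {job: [jobs it depends on]}; raises ValueError on cyclic workflows."""
    indegree = {job: len(deps) for job, deps in jobs.items()}
    dependents = defaultdict(list)
    for job, deps in jobs.items():
        for dep in deps:
            dependents[dep].append(job)
    ready = deque(job for job, n in indegree.items() if n == 0)
    order = []
    while ready:
        job = ready.popleft()
        order.append(job)
        for nxt in dependents[job]:       # a finished job may unblock others
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(jobs):
        raise ValueError("workflow contains a cycle")
    return order

pipeline = {"fetch": [], "clean": ["fetch"], "align": ["clean"],
            "annotate": ["clean"], "report": ["align", "annotate"]}
print(schedule(pipeline))  # → ['fetch', 'clean', 'align', 'annotate', 'report']
```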

  17. Systems biology of bacterial nitrogen fixation: High-throughput technology and its integrative description with constraint-based modeling

    Directory of Open Access Journals (Sweden)

    Resendis-Antonio Osbaldo

    2011-07-01

    Full Text Available Abstract Background Bacterial nitrogen fixation is the biological process by which atmospheric nitrogen is taken up by bacteroids located in plant root nodules and converted into ammonium through the enzymatic activity of nitrogenase. In practice, this biological process serves as a natural form of fertilization and its optimization has significant implications in sustainable agricultural programs. Currently, the advent of high-throughput technology supplies valuable data that contribute to understanding the metabolic activity during bacterial nitrogen fixation. This undertaking is not trivial, and the development of computational methods useful in accomplishing an integrative, descriptive and predictive framework is a crucial issue in decoding the principles that regulate the metabolic activity of this biological process. Results In this work we present a systems biology description of the metabolic activity in bacterial nitrogen fixation. This was accomplished by an integrative analysis involving high-throughput data and constraint-based modeling to characterize the metabolic activity in Rhizobium etli bacteroids located at the root nodules of Phaseolus vulgaris (bean plant). Proteome and transcriptome technologies led us to identify 415 proteins and 689 up-regulated genes that orchestrate this biological process. Taking into account these data, we: 1) extended the metabolic reconstruction reported for R. etli; 2) simulated the metabolic activity during symbiotic nitrogen fixation; and 3) evaluated the in silico results in terms of bacterial phenotype. Notably, constraint-based modeling simulated nitrogen fixation activity in such a way that 76.83% of the enzymes and 69.48% of the genes were experimentally justified. Finally, to further assess the predictive scope of the computational model, gene deletion analysis was carried out on nine metabolic enzymes. Our model concluded that an altered metabolic activity on these enzymes induced

  18. A high-throughput pipeline for the design of real-time PCR signatures

    Directory of Open Access Journals (Sweden)

    Reifman Jaques

    2010-06-01

    Full Text Available Abstract Background Pathogen diagnostic assays based on polymerase chain reaction (PCR) technology provide high sensitivity and specificity. However, the design of these diagnostic assays is computationally intensive, requiring high-throughput methods to identify unique PCR signatures given the ever-increasing availability of sequenced genomes. Results We present the Tool for PCR Signature Identification (TOPSI), a high-performance computing pipeline for the design of PCR-based pathogen diagnostic assays. The TOPSI pipeline efficiently designs PCR signatures common to multiple bacterial genomes by obtaining the shared regions through pairwise alignments between the input genomes. TOPSI successfully designed PCR signatures common to 18 Staphylococcus aureus genomes in less than 14 hours using 98 cores on a high-performance computing system. Conclusions TOPSI is a computationally efficient, fully integrated tool for high-throughput design of PCR signatures common to multiple bacterial genomes. TOPSI is freely available for download at http://www.bhsai.org/downloads/topsi.tar.gz.
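    The shared-region idea can be illustrated at toy scale: a region common to all genomes must at minimum contribute k-mers present in every genome. A simplified stand-in for TOPSI's pairwise-alignment step (the sequences are invented; real signature design also checks primer properties and uniqueness against background genomes):

```python
def shared_kmers(genomes, k=18):
    """Return the set of k-mers present in every input genome — candidate
    material for signatures common to all targets."""
    def kmers(seq):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}
    common = kmers(genomes[0])
    for genome in genomes[1:]:
        common &= kmers(genome)     # intersect with each additional genome
    return common

g1 = "ACGTACGTTTGCAAGGCTA"
g2 = "TTACGTACGTTTGCAAGGC"
print(sorted(shared_kmers([g1, g2], k=8))[0])  # → ACGTACGT
```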

  19. Computational biology for ageing

    Science.gov (United States)

    Wieser, Daniela; Papatheodorou, Irene; Ziehm, Matthias; Thornton, Janet M.

    2011-01-01

    High-throughput genomic and proteomic technologies have generated a wealth of publicly available data on ageing. Easy access to these data, and their computational analysis, is of great importance in order to pinpoint the causes and effects of ageing. Here, we provide a description of the existing databases and computational tools on ageing that are available for researchers. We also describe the computational approaches to data interpretation in the field of ageing including gene expression, comparative and pathway analyses, and highlight the challenges for future developments. We review recent biological insights gained from applying bioinformatics methods to analyse and interpret ageing data in different organisms, tissues and conditions. PMID:21115530

  20. The amino acid's backup bone - storage solutions for proteomics facilities.

    Science.gov (United States)

    Meckel, Hagen; Stephan, Christian; Bunse, Christian; Krafzik, Michael; Reher, Christopher; Kohl, Michael; Meyer, Helmut Erich; Eisenacher, Martin

    2014-01-01

    Proteomics methods, especially high-throughput mass spectrometry analysis have been continually developed and improved over the years. The analysis of complex biological samples produces large volumes of raw data. Data storage and recovery management pose substantial challenges to biomedical or proteomic facilities regarding backup and archiving concepts as well as hardware requirements. In this article we describe differences between the terms backup and archive with regard to manual and automatic approaches. We also introduce different storage concepts and technologies from transportable media to professional solutions such as redundant array of independent disks (RAID) systems, network attached storages (NAS) and storage area network (SAN). Moreover, we present a software solution, which we developed for the purpose of long-term preservation of large mass spectrometry raw data files on an object storage device (OSD) archiving system. Finally, advantages, disadvantages, and experiences from routine operations of the presented concepts and technologies are evaluated and discussed. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013. Published by Elsevier B.V.
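    Whatever storage tier is chosen, long-term archiving of raw files needs integrity verification after restores and media migrations. A minimal checksum-manifest sketch (illustrative; the OSD archiving system described in the article is considerably more elaborate):

```python
import hashlib
import os

def build_manifest(root):
    """Map each file under `root` (relative path) to its SHA-256 digest,
    so archived raw files can be verified after restore or migration."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                # Read in 1 MiB chunks so multi-GB raw files fit in memory
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            manifest[os.path.relpath(path, root)] = digest.hexdigest()
    return manifest

def verify(root, manifest):
    """Return the files whose current digest no longer matches the manifest."""
    current = build_manifest(root)
    return [path for path, digest in manifest.items()
            if current.get(path) != digest]
```

In use, the manifest would be written alongside the archive at ingest time and `verify` run after every restore.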

  1. High-Throughput Quantification of Nanoparticle Degradation Using Computational Microscopy and Its Application to Drug Delivery Nanocapsules

    KAUST Repository

    Ray, Aniruddha

    2017-04-25

    Design and synthesis of degradable nanoparticles are very important in drug delivery and biosensing fields. Although accurate assessment of nanoparticle degradation rate would improve the characterization and optimization of drug delivery vehicles, current methods rely on estimating the size of the particles at discrete points over time using, for example, electron microscopy or dynamic light scattering (DLS), among other techniques, all of which have drawbacks and practical limitations. There is a significant need for a high-throughput and cost-effective technology to accurately monitor nanoparticle degradation as a function of time and using small amounts of sample. To address this need, here we present two different computational imaging-based methods for monitoring and quantification of nanoparticle degradation. The first method is suitable for discrete testing, where a computational holographic microscope is designed to track the size changes of protease-sensitive protein-core nanoparticles following degradation, by periodically sampling a subset of particles mixed with proteases. In the second method, a sandwich structure was utilized to observe, in real-time, the change in the properties of liquid nanolenses that were self-assembled around degrading nanoparticles, permitting continuous monitoring and quantification of the degradation process. These cost-effective holographic imaging based techniques enable high-throughput monitoring of the degradation of any type of nanoparticle, using an extremely small amount of sample volume that is at least 3 orders of magnitude smaller than what is required by, for example, DLS-based techniques.
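    Once particle sizes are tracked over time, a degradation rate can be estimated by fitting an exponential decay to the size measurements. A hedged sketch using a log-linear least-squares fit on synthetic data (the papers' actual size-vs-time models may differ):

```python
import math

def decay_rate(times, diameters):
    """Fit d(t) = d0 * exp(-k * t) by least squares on log(d); return (d0, k)."""
    n = len(times)
    logs = [math.log(d) for d in diameters]
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs)) /
             sum((t - t_mean) ** 2 for t in times))
    return math.exp(l_mean - slope * t_mean), -slope

# Synthetic measurements of a shrinking particle diameter (nm) over hours,
# generated from roughly 100 * exp(-0.2 * t)
times = [0, 1, 2, 3, 4]
diameters = [100.0, 81.9, 67.0, 54.9, 44.9]
d0, k = decay_rate(times, diameters)
print(round(d0), round(k, 2))  # → 100 0.2
```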

  2. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    Science.gov (United States)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; sipos, Gergely; Ferrari, Tiziana

    2016-04-01

    EGI (www.egi.eu) is a publicly funded e-infrastructure put together to give scientists access to more than 530,000 logical CPUs, 200 PB of disk capacity and 300 PB of tape storage to drive research and innovation in Europe. The infrastructure provides both high-throughput computing and cloud compute/storage capabilities. Resources are provided by about 350 resource centres distributed across 56 countries in Europe, the Asia-Pacific region, Canada and Latin America. EUDAT (www.eudat.eu) is a collaborative pan-European infrastructure providing research data services, training and consultancy for researchers, research communities, research infrastructures and data centres. EUDAT's vision is to enable European researchers and practitioners from any research discipline to preserve, find, access, and process data in a trusted environment, as part of a Collaborative Data Infrastructure (CDI) conceived as a network of collaborating, cooperating centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe's largest scientific data centres. In the context of their flagship projects, EGI-Engage and EUDAT2020, EGI and EUDAT started a collaboration in March 2015 to harmonise the two infrastructures, including technical interoperability, authentication, authorisation and identity management, policy and operations. The main objective of this work is to provide end-users with seamless access to an integrated infrastructure offering both EGI and EUDAT services, thereby pairing data and high-throughput computing resources together. To define the roadmap of this collaboration, EGI and EUDAT selected a set of relevant user communities, already collaborating with both infrastructures, to bring requirements and help assign the right priorities to each of them. In this way, the activity has been driven by the end users from the beginning. The identified user communities are

  3. Low cost, scalable proteomics data analysis using Amazon's cloud computing services and open source search algorithms.

    Science.gov (United States)

    Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N

    2009-06-01

    One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site ( http://proteomics.mcw.edu/vipdac ).
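
    The pricing model described above means that per-run cost scales linearly with cluster size and wall-clock time. As a back-of-the-envelope sketch (the hourly rate below is illustrative, not a quoted Amazon Web Services price):

```python
def per_run_cost(n_instances, hours, hourly_rate_usd):
    """Estimated cost of one analysis run on a virtual cluster.

    On-demand cloud pricing is billed per instance-hour, so the cost of a
    run scales linearly with both cluster size and wall-clock time.
    """
    return n_instances * hours * hourly_rate_usd

# e.g. an 8-node cluster running for 2 hours at an illustrative $0.10/hour
print(round(per_run_cost(8, 2, 0.10), 2))  # → 1.6
```

    This linearity is what makes the model attractive for bursty workloads: a lab pays only for the hours a search actually runs, with no idle hardware between experiments.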

  4. High Throughput Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Argonne's high throughput facility provides highly automated and parallel approaches to material and materials chemistry development. The facility allows scientists...

  5. Nanoscale Synaptic Membrane Mimetic Allows Unbiased High Throughput Screen That Targets Binding Sites for Alzheimer's-Associated Aβ Oligomers

    OpenAIRE

    Wilcox, Kyle C.; Marunde, Matthew R.; Das, Aditi; Velasco, Pauline T.; Kuhns, Benjamin D.; Marty, Michael T.; Jiang, Haoming; Luan, Chi-Hao; Sligar, Stephen G.; Klein, William L.

    2015-01-01

    Despite their value as sources of therapeutic drug targets, membrane proteomes are largely inaccessible to high-throughput screening (HTS) tools designed for soluble proteins. An important example comprises the membrane proteins that bind amyloid β oligomers (AβOs). AβOs are neurotoxic ligands thought to instigate the synapse damage that leads to Alzheimer's dementia. At present, the identities of initial AβO binding sites are highly uncertain, largely because of extensive protein-protein int...

  6. Spatial Mapping of Protein Abundances in the Mouse Brain by Voxelation Integrated with High-Throughput Liquid Chromatography-Mass Spectrometry

    International Nuclear Information System (INIS)

    Petyuk, Vladislav A.; Qian, Weijun; Chin, Mark H.; Wang, Haixing H.; Livesay, Eric A.; Monroe, Matthew E.; Adkins, Joshua N.; Jaitly, Navdeep; Anderson, David J.; Camp, David G.; Smith, Desmond J.; Smith, Richard D.

    2007-01-01

    Temporally and spatially resolved mapping of protein abundance patterns within the mammalian brain is of significant interest for understanding brain function and the molecular etiologies of neurodegenerative diseases; however, such imaging efforts have been greatly challenged by the complexity of the proteome, the throughput and sensitivity of applied analytical methodologies, and accurate quantitation of protein abundances across the brain. Here, we describe a methodology for comprehensive spatial proteome mapping that addresses these challenges by employing voxelation integrated with automated microscale sample processing, a high-throughput LC system coupled with a high-resolution Fourier transform ion cyclotron resonance mass spectrometer, and a "universal" stable-isotope-labeled reference sample approach for robust quantitation. We applied this methodology as a proof-of-concept trial for the analysis of protein distribution within a single coronal slice of a C57BL/6J mouse brain. For relative quantitation of protein abundances across the slice, an 18O-isotopically labeled reference sample, derived from a whole control coronal slice from another mouse, was spiked into each voxel sample, and stable isotopic intensity ratios were used to obtain measures of relative protein abundances. In total, we generated maps of protein abundance patterns for 1,028 proteins. The significant agreement of the protein distributions with previously reported data supports the validity of this methodology, which opens new opportunities for studying the spatial brain proteome and its dynamics during the course of disease progression and other important biological and associated health aspects in a discovery-driven fashion.
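
    The quantitation scheme above reduces, per voxel and per protein, to an intensity ratio against the common spiked-in reference; because the same reference is added to every voxel, the ratios are comparable across the whole slice. A minimal sketch with made-up intensities:

```python
def relative_abundance(sample_intensity, reference_intensity):
    """Relative abundance of a protein in one voxel, as the ratio of the
    unlabeled (16O) sample intensity to the 18O-labeled reference
    intensity. The same reference is spiked into every voxel, so these
    ratios can be compared across the entire slice.
    """
    if reference_intensity <= 0:
        raise ValueError("reference intensity must be positive")
    return sample_intensity / reference_intensity

# two voxels measured against the same spiked-in reference (invented values)
ratios = [relative_abundance(s, r) for s, r in [(2.0e6, 1.0e6), (5.0e5, 1.0e6)]]
print(ratios)  # → [2.0, 0.5]
```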

  7. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Background: The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results: This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high-end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion: MUMmerGPU is a low-cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.
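
    The core operation MUMmerGPU parallelizes is exact matching of many queries against one reference. A serial Python sketch of that operation, for orientation only (the real tool uses a suffix tree traversed in parallel on the GPU, not a linear scan):

```python
def align_queries(reference, queries):
    """Exact-match each query against a single reference sequence,
    reporting all start positions. This linear scan is a correctness
    sketch; MUMmerGPU answers the same question with a suffix tree
    so that thousands of queries can be processed in parallel.
    """
    hits = {}
    for q in queries:
        positions, start = [], reference.find(q)
        while start != -1:
            positions.append(start)
            start = reference.find(q, start + 1)
        hits[q] = positions
    return hits

print(align_queries("ACGTACGT", ["ACGT", "CGTA", "TTTT"]))
# → {'ACGT': [0, 4], 'CGTA': [1], 'TTTT': []}
```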

  8. C4 photosynthetic machinery: insights from maize chloroplast proteomics

    Directory of Open Access Journals (Sweden)

    Qi eZhao

    2013-04-01

    C4 plants exhibit much higher CO2 assimilation rates than C3 plants. The specialized differentiation of mesophyll cell (M) and bundle sheath cell (BS) type chloroplasts is unique to C4 plants and improves photosynthesis efficiency. Maize (Zea mays) is an important crop and model with C4 photosynthetic machinery. Current high-throughput quantitative proteomics approaches (e.g., 2-DE, iTRAQ, and shotgun proteomics) have been employed to investigate maize chloroplast structure and function. These proteomic studies have provided valuable information on C4 chloroplast protein components, photosynthesis, and other metabolic mechanisms underlying chloroplast biogenesis, stromal and membrane differentiation, as well as responses to salinity, high/low temperature, and light stress. This review presents an overview of proteomics advances in maize chloroplast biology.

  9. Proteomics of Plant Pathogenic Fungi

    Directory of Open Access Journals (Sweden)

    Raquel González-Fernández

    2010-01-01

    Plant pathogenic fungi cause important yield losses in crops. In order to develop efficient and environmentally friendly crop protection strategies, molecular studies of the fungal biological cycle, virulence factors, and interaction with the host are necessary. For that reason, several approaches have been applied, drawing on classical genetics, cell biology, and biochemistry as well as on modern, holistic, high-throughput omics techniques. This work briefly overviews the tools available for studying plant pathogenic fungi and focuses mainly on MS-based proteomics analysis, based on original papers published up to December 2009. At the methodological level, the different steps of a proteomic workflow experiment are discussed. Separate sections are devoted to fungal descriptive proteomics (intracellular, subcellular, extracellular), differential expression proteomics, and interactomics. From the work published we can conclude that proteomics, in combination with other techniques, constitutes a powerful tool for providing important information about pathogenicity and virulence factors, thus opening up new possibilities for crop disease diagnosis and crop protection.

  10. Cardiovascular proteomics in the era of big data: experimental and computational advances.

    Science.gov (United States)

    Lam, Maggie P Y; Lau, Edward; Ng, Dominic C M; Wang, Ding; Ping, Peipei

    2016-01-01

    Proteomics plays an increasingly important role in our quest to understand cardiovascular biology. Fueled by analytical and computational advances in the past decade, proteomics applications can now go beyond merely inventorying protein species, and address sophisticated questions on cardiac physiology. The advent of massive mass spectrometry datasets has in turn led to increasing intersection between proteomics and big data science. Here we review new frontiers in technological developments and their applications to cardiovascular medicine. The impact of big data science on cardiovascular proteomics investigations and translation to medicine is highlighted.

  11. Hydroponic isotope labeling of entire plants and high-performance mass spectrometry for quantitative plant proteomics.

    Science.gov (United States)

    Bindschedler, Laurence V; Mills, Davinia J S; Cramer, Rainer

    2012-01-01

    Hydroponic isotope labeling of entire plants (HILEP) combines hydroponic plant cultivation and metabolic labeling with stable isotopes, using (15)N-containing inorganic salts to label whole, mature plants. Employing (15)N salts as the sole nitrogen source for HILEP leads to the production of healthy-looking plants whose proteins are labeled to nearly 100% with (15)N. Therefore, HILEP is suitable for quantitative plant proteomic analysis, where plants are grown in either (14)N- or (15)N-hydroponic media and pooled when the biological samples are collected for relative proteome quantitation. The pooled (14)N-/(15)N-protein extracts can be fractionated in any suitable way and digested with a protease for shotgun proteomics, typically using reversed-phase liquid chromatography nanoelectrospray ionization tandem mass spectrometry (RPLC-nESI-MS/MS). Best results were obtained with a hybrid ion trap/FT-MS mass spectrometer, combining high mass accuracy and sensitivity for the MS data acquisition with fast, high-throughput MS/MS data acquisition, increasing the number of proteins identified and quantified and improving protein quantitation. Peak processing and picking from raw MS data files, protein identification, and quantitation were performed in a highly automated way using integrated MS data analysis software with minimal manual intervention, thus easing the analytical workflow. In this methodology paper, we describe how to grow Arabidopsis plants hydroponically for isotope labeling using (15)N salts and how to quantitate the resulting proteomes using a convenient workflow that does not require extensive bioinformatics skills.
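
    Pairing the (14)N and (15)N forms of a peptide in the spectra rests on a simple mass relationship: each nitrogen atom in the peptide gains roughly one neutron's worth of mass when fully labeled. A sketch of that calculation (the peptide mass and nitrogen count below are hypothetical):

```python
# Mass gained per nitrogen when 14N is replaced by 15N (approx., in Da):
# 15.0001089 (15N) minus 14.0030740 (14N)
MASS_SHIFT_15N = 0.997035

def labeled_mass(monoisotopic_mass, n_nitrogens):
    """Expected monoisotopic mass of a fully 15N-labeled peptide.

    Each nitrogen contributes one fixed mass increment, which is how the
    14N and 15N partner peaks of the same peptide are paired before their
    intensity ratio is taken for quantitation.
    """
    return monoisotopic_mass + n_nitrogens * MASS_SHIFT_15N

# a hypothetical peptide of 1000 Da containing 10 nitrogen atoms
shifted = labeled_mass(1000.0, 10)  # ≈ 1009.97 Da
```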

  12. High Throughput WAN Data Transfer with Hadoop-based Storage

    Science.gov (United States)

    Amin, A.; Bockelman, B.; Letts, J.; Levshina, T.; Martin, T.; Pi, H.; Sfiligoi, I.; Thomas, M.; Wüerthwein, F.

    2011-12-01

    The Hadoop distributed file system (HDFS) has become increasingly popular in recent years as a key building block of integrated grid storage solutions in the field of scientific computing. Wide Area Network (WAN) data transfer is one of the most important data operations for large high energy physics experiments, which must manage, share and process petabyte-scale datasets in a highly distributed grid computing environment. In this paper, we present our experience with high-throughput WAN data transfer using an HDFS-based Storage Element. Two protocols, GridFTP and fast data transfer (FDT), are used to characterize the network performance of WAN data transfer.

  13. High Throughput WAN Data Transfer with Hadoop-based Storage

    International Nuclear Information System (INIS)

    Amin, A; Thomas, M; Bockelman, B; Letts, J; Martin, T; Pi, H; Sfiligoi, I; Wüerthwein, F; Levshina, T

    2011-01-01

    The Hadoop distributed file system (HDFS) has become increasingly popular in recent years as a key building block of integrated grid storage solutions in the field of scientific computing. Wide Area Network (WAN) data transfer is one of the most important data operations for large high energy physics experiments, which must manage, share and process petabyte-scale datasets in a highly distributed grid computing environment. In this paper, we present our experience with high-throughput WAN data transfer using an HDFS-based Storage Element. Two protocols, GridFTP and fast data transfer (FDT), are used to characterize the network performance of WAN data transfer.
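
    To give a sense of the scales involved in WAN transfers of this kind, an idealized transfer-time estimate follows from dataset size and sustained link throughput alone; the figures below are illustrative and ignore protocol overhead, retries, and storage-side bottlenecks:

```python
def transfer_time_hours(dataset_tb, link_gbit_s):
    """Idealized wall-clock time to move a dataset over a WAN link at a
    sustained throughput (decimal TB and Gbit), ignoring protocol
    overhead, retries, and storage-side bottlenecks.
    """
    bits = dataset_tb * 1e12 * 8          # dataset size in bits
    seconds = bits / (link_gbit_s * 1e9)  # time at the sustained rate
    return seconds / 3600

# e.g. a 100 TB dataset over a sustained 10 Gbit/s link
hours = transfer_time_hours(100, 10)  # ≈ 22.2 hours
```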

  14. Network-based analysis of proteomic profiles

    KAUST Repository

    Wong, Limsoon

    2016-01-26

    Mass spectrometry (MS)-based proteomics is a widely used and powerful tool for profiling system-wide protein expression changes. It can be applied for various purposes, e.g. biomarker discovery in diseases and the study of drug responses. Although RNA-based high-throughput methods have been useful in providing glimpses into the underlying molecular processes, the evidence they provide is indirect. Furthermore, RNA levels and the corresponding protein levels are known to correlate poorly. On the other hand, MS-based proteomics tends to have consistency issues (poor reproducibility and inter-sample agreement) and coverage issues (inability to detect the entire proteome) that need to be urgently addressed. In this talk, I will discuss how these issues can be addressed by proteomic profile analysis techniques that use biological networks (especially protein complexes) as the biological context. In particular, I will describe several techniques that we have been developing for network-based analysis of proteomics profiles. And I will present evidence that these techniques are useful in identifying proteomics-profile analysis results that are more consistent, more reproducible, and more biologically coherent, and that these techniques allow expansion of the detected proteome to uncover and/or discover novel proteins.

  15. High-throughput continuous cryopump

    International Nuclear Information System (INIS)

    Foster, C.A.

    1986-01-01

    A cryopump with a unique method of regeneration which allows continuous operation at high throughput has been constructed and tested. Deuterium was pumped continuously at a throughput of 30 Torr·L/s at a speed of 2000 L/s and a compression ratio of 200. Argon was pumped at a throughput of 60 Torr·L/s at a speed of 1275 L/s. To produce continuous operation of the pump, a method of regeneration that does not thermally cycle the pump is employed. A small chamber (the "snail") passes over the pumping surface and removes the frost from it, either by mechanical action with a scraper or by local heating. The material removed is topologically in a secondary vacuum system with low conductance into the primary vacuum; thus, the exhaust can be pumped at pressures up to an effective compression ratio determined by the ratio of the pumping speed to the leakage conductance of the snail. The pump, which is all-metal-sealed and dry and which regenerates every 60 s, would be an ideal system for pumping tritium. Potential fusion applications are for pump limiters, for repeating pneumatic pellet injection lines, and for the centrifuge pellet injector spin tank, all of which will require pumping tritium at high throughput. Industrial applications requiring ultraclean pumping of corrosive gases at high throughput, such as the reactive ion etch semiconductor process, may also be feasible.
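
    The figures quoted above are linked by the steady-state pump relation Q = S · P (throughput equals pumping speed times inlet pressure). A quick consistency check:

```python
def inlet_pressure(throughput_torr_l_s, speed_l_s):
    """Steady-state inlet pressure from the pump relation Q = S * P,
    rearranged to P = Q / S (Torr, given Torr·L/s and L/s).
    """
    return throughput_torr_l_s / speed_l_s

# the deuterium figures quoted above: 30 Torr·L/s pumped at 2000 L/s
print(inlet_pressure(30, 2000))  # → 0.015
```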

  16. Computational and statistical methods for high-throughput mass spectrometry-based PTM analysis

    DEFF Research Database (Denmark)

    Schwämmle, Veit; Vaudel, Marc

    2017-01-01

    Cell signaling and functions heavily rely on post-translational modifications (PTMs) of proteins. Their high-throughput characterization is thus of utmost interest for multiple biological and medical investigations. In combination with efficient enrichment methods, peptide mass spectrometry analy...

  17. Open source libraries and frameworks for mass spectrometry based proteomics: a developer's perspective.

    Science.gov (United States)

    Perez-Riverol, Yasset; Wang, Rui; Hermjakob, Henning; Müller, Markus; Vesada, Vladimir; Vizcaíno, Juan Antonio

    2014-01-01

    Data processing, management and visualization are central and critical components of a state-of-the-art high-throughput mass spectrometry (MS)-based proteomics experiment, and are often some of the most time-consuming steps, especially for labs without much bioinformatics support. The growing interest in the field of proteomics has triggered an increase in the development of new software libraries, including freely available and open-source software. From database search analysis to post-processing of the identification results, even though the objectives of these libraries and packages can vary significantly, they usually share a number of features. Common use cases include the handling of protein and peptide sequences, the parsing of output files from various proteomics search engines, and the visualization of MS-related information (including mass spectra and chromatograms). In this review, we provide an overview of the existing software libraries and open-source frameworks, and we also give information on some of the freely available applications that make use of them. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013 Elsevier B.V. All rights reserved.
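
    One of the shared use cases mentioned above, parsing a search-engine report and filtering identifications by score, can be sketched in a few lines. The tab-separated format, column names, and score values below are invented for illustration and do not correspond to any particular engine's output:

```python
import csv
import io

# a tiny, made-up tab-separated identification report
report = (
    "peptide\tprotein\tscore\n"
    "LSEGEWQLVLHVWAK\tMYG_HUMAN\t45.2\n"
    "VEADIAGHGQEVLIR\tMYG_HUMAN\t38.7\n"
)

def parse_psms(text, score_cutoff=0.0):
    """Read peptide-spectrum matches from a tab-separated report and keep
    those at or above a score cutoff: the kind of routine post-processing
    step that proteomics support libraries standardize across engines.
    """
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    return [row for row in reader if float(row["score"]) >= score_cutoff]

print(len(parse_psms(report, score_cutoff=40.0)))  # → 1
```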

  18. Open source libraries and frameworks for mass spectrometry based proteomics: A developer's perspective☆

    Science.gov (United States)

    Perez-Riverol, Yasset; Wang, Rui; Hermjakob, Henning; Müller, Markus; Vesada, Vladimir; Vizcaíno, Juan Antonio

    2014-01-01

    Data processing, management and visualization are central and critical components of a state-of-the-art high-throughput mass spectrometry (MS)-based proteomics experiment, and are often some of the most time-consuming steps, especially for labs without much bioinformatics support. The growing interest in the field of proteomics has triggered an increase in the development of new software libraries, including freely available and open-source software. From database search analysis to post-processing of the identification results, even though the objectives of these libraries and packages can vary significantly, they usually share a number of features. Common use cases include the handling of protein and peptide sequences, the parsing of output files from various proteomics search engines, and the visualization of MS-related information (including mass spectra and chromatograms). In this review, we provide an overview of the existing software libraries and open-source frameworks, and we also give information on some of the freely available applications that make use of them. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23467006

  19. Fusarium graminearum and Its Interactions with Cereal Heads: Studies in the Proteomics Era

    DEFF Research Database (Denmark)

    Yang, Fen; Jacobsen, Susanne; Jørgensen, Hans J L

    2013-01-01

    of humans and animals. In recent years, high-throughput proteomics, aiming at identifying a broad spectrum of proteins with a potential role in the pathogenicity and host resistance, has become a very useful tool in plant-fungus interaction research. In this review, we describe the progress in proteomics...... applications toward a better understanding of pathogenesis, virulence, and host defense mechanisms. The contribution of proteomics to the development of crop protection strategies against this pathogen is also discussed briefly....

  20. Computational toxicology as implemented by the U.S. EPA: providing high throughput decision support tools for screening and assessing chemical exposure, hazard and risk.

    Science.gov (United States)

    Kavlock, Robert; Dix, David

    2010-02-01

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce and to contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the Toxicity of Chemicals (U.S. EPA, 2009a). Key intramural projects of the CTRP include digitizing legacy toxicity testing information into the toxicity reference database (ToxRefDB), predicting toxicity (ToxCast) and exposure (ExpoCast), and creating virtual liver (v-Liver) and virtual embryo (v-Embryo) systems models. U.S. EPA-funded STAR centers are also providing bioinformatics, computational toxicology data and models, and developmental toxicity data and models. The models and underlying data are being made publicly available.

  1. Lessons we learned from high-throughput and top-down systems biology analyses about glioma stem cells.

    Science.gov (United States)

    Mock, Andreas; Chiblak, Sara; Herold-Mende, Christel

    2014-01-01

    A growing body of evidence suggests that glioma stem cells (GSCs) account for tumor initiation, therapy resistance, and the subsequent regrowth of gliomas. Thus, continuous efforts have been undertaken to further characterize this subpopulation of less differentiated tumor cells. Although we are able to enrich GSCs, we still lack a comprehensive understanding of GSC phenotypes and behavior. The advent of high-throughput technologies raised hopes that incorporating these newly developed platforms would help to tackle such questions. Since then, a number of comparative genome-, transcriptome- and proteome-wide studies on GSCs have been conducted, giving new insights into GSC biology. However, lessons had to be learned in designing high-throughput experiments, and some of the resulting conclusions fell short of expectations because they were drawn from only a few GSC lines or from a single molecular level rather than an integrative poly-omics approach. Despite these shortcomings, our knowledge of GSC biology has markedly expanded, with a number of survival-associated biomarkers as well as glioma-relevant signaling pathways and therapeutic targets having been identified. In this article we review recent findings obtained by comparative high-throughput analyses of GSCs. We further summarize fundamental concepts of systems biology as well as its applications in glioma stem cell research.

  2. Statistics for proteomics : A review of tools for analyzing experimental data

    NARCIS (Netherlands)

    Urfer, Wolfgang; Grzegorczyk, Marco; Jung, Klaus

    Most proteomics experiments make use of 'high-throughput' technologies such as 2-DE, MS or protein arrays to measure simultaneously the expression levels of thousands of proteins. Such experiments yield large, high-dimensional data sets which usually reflect not only the biological but also

  3. High-throughput computational methods and software for quantitative trait locus (QTL) mapping

    NARCIS (Netherlands)

    Arends, Danny

    2014-01-01

    In recent years, many new technologies, such as tiling arrays and high-throughput DNA sequencing, have come to play an important role in the field of systems genetics. For researchers, it is extremely important to understand that these methods are going to

  4. High-throughput characterization methods for lithium batteries

    Directory of Open Access Journals (Sweden)

    Yingchun Lyu

    2017-09-01

    The development of high-performance lithium-ion batteries requires the discovery of new materials and the optimization of key components. In contrast to the traditional one-by-one method, high-throughput methods can synthesize and characterize a large number of compositionally varying samples, accelerating the discovery, development and optimization of materials. Owing to rapid progress in thin-film and automatic control technologies, thousands of compounds with different compositions can now be synthesized rapidly, even in a single experiment. However, the lack of rapid or combinatorial characterization technologies to match high-throughput synthesis methods limits the application of high-throughput technology. Here, we review a series of representative high-throughput characterization methods used for lithium batteries, including high-throughput structural and electrochemical characterization methods and rapid measurement technologies based on synchrotron light sources.

  5. Automation, parallelism, and robotics for proteomics.

    Science.gov (United States)

    Alterovitz, Gil; Liu, Jonathan; Chow, Jijun; Ramoni, Marco F

    2006-07-01

    The speed of the human genome project (Lander, E. S., Linton, L. M., Birren, B., Nusbaum, C. et al., Nature 2001, 409, 860-921) was made possible, in part, by developments in automation of sequencing technologies. Before these technologies, sequencing was a laborious, expensive, and personnel-intensive task. Similarly, automation and robotics are changing the field of proteomics today. Proteomics is defined as the effort to understand and characterize proteins in the categories of structure, function and interaction (Englbrecht, C. C., Facius, A., Comb. Chem. High Throughput Screen. 2005, 8, 705-715). As such, this field nicely lends itself to automation technologies since these methods often require large economies of scale in order to achieve cost and time-saving benefits. This article describes some of the technologies and methods being applied in proteomics in order to facilitate automation within the field as well as in linking proteomics-based information with other related research areas.

  6. compomics-utilities: an open-source Java library for computational proteomics.

    Science.gov (United States)

    Barsnes, Harald; Vaudel, Marc; Colaert, Niklaas; Helsens, Kenny; Sickmann, Albert; Berven, Frode S; Martens, Lennart

    2011-03-08

    The growing interest in the field of proteomics has increased the demand for software tools and applications that process and analyze the resulting data. And even though the purpose of these tools can vary significantly, they usually share a basic set of features, including the handling of protein and peptide sequences, the visualization of (and interaction with) spectra and chromatograms, and the parsing of results from various proteomics search engines. Developers typically spend considerable time and effort implementing these support structures, which detracts from working on the novel aspects of their tool. In order to simplify the development of proteomics tools, we have implemented an open-source support library for computational proteomics, called compomics-utilities. The library contains a broad set of features required for reading, parsing, and analyzing proteomics data. compomics-utilities is already used by a long list of existing software, ensuring library stability and continued support and development. As a user-friendly, well-documented and open-source library, compomics-utilities greatly simplifies the implementation of the basic features needed in most proteomics tools. Implemented in 100% Java, compomics-utilities is fully portable across platforms and architectures. Our library thus allows the developers to focus on the novel aspects of their tools, rather than on the basic functions, which can contribute substantially to faster development, and better tools for proteomics.

  7. compomics-utilities: an open-source Java library for computational proteomics

    Directory of Open Access Journals (Sweden)

    Helsens Kenny

    2011-03-01

    Background: The growing interest in the field of proteomics has increased the demand for software tools and applications that process and analyze the resulting data. And even though the purpose of these tools can vary significantly, they usually share a basic set of features, including the handling of protein and peptide sequences, the visualization of (and interaction with) spectra and chromatograms, and the parsing of results from various proteomics search engines. Developers typically spend considerable time and effort implementing these support structures, which detracts from working on the novel aspects of their tool. Results: In order to simplify the development of proteomics tools, we have implemented an open-source support library for computational proteomics, called compomics-utilities. The library contains a broad set of features required for reading, parsing, and analyzing proteomics data. compomics-utilities is already used by a long list of existing software, ensuring library stability and continued support and development. Conclusions: As a user-friendly, well-documented and open-source library, compomics-utilities greatly simplifies the implementation of the basic features needed in most proteomics tools. Implemented in 100% Java, compomics-utilities is fully portable across platforms and architectures. Our library thus allows developers to focus on the novel aspects of their tools, rather than on the basic functions, which can contribute substantially to faster development and better tools for proteomics.

  8. The performance of a new Geant4 Bertini intra-nuclear cascade model in high throughput computing (HTC) cluster architecture

    Energy Technology Data Exchange (ETDEWEB)

    Heikkinen, Aatos; Hektor, Andi; Karimaki, Veikko; Linden, Tomas [Helsinki Univ., Institute of Physics (Finland)

    2003-07-01

    We study the performance of a new Bertini intra-nuclear cascade model, implemented in the general detector simulation toolkit Geant4, on a High Throughput Computing (HTC) cluster architecture. A 60-node Pentium III openMosix cluster is used, with the Mosix kernel performing automatic process load-balancing across several CPUs. The Mosix cluster consists of several computer classes equipped with Windows NT workstations that automatically boot daily and become nodes of the Mosix cluster. The models included in our study are a Bertini intra-nuclear cascade model with excitons, consisting of a pre-equilibrium model, a nucleus explosion model, a fission model and an evaporation model. The speed and accuracy obtained for these models are presented. (authors)

  9. High throughput sequencing and proteomics to identify immunogenic proteins of a new pathogen: the dirty genome approach.

    Science.gov (United States)

    Greub, Gilbert; Kebbi-Beghdadi, Carole; Bertelli, Claire; Collyn, François; Riederer, Beat M; Yersin, Camille; Croxatto, Antony; Raoult, Didier

    2009-12-23

    With the availability of new generation sequencing technologies, bacterial genome projects have undergone a major boost. Still, chromosome completion requires costly and time-consuming gap closure, especially when the genome contains highly repetitive elements. However, incomplete genome data may be sufficiently informative to derive the pursued information. For emerging pathogens, i.e. newly identified pathogens, the lack of release of genome data during the gap-closure stage is clearly medically counterproductive. We thus investigated the feasibility of a dirty genome approach, i.e. the release of unfinished genome sequences to develop serological diagnostic tools. We showed that almost the whole genome sequence of the emerging pathogen Parachlamydia acanthamoebae was retrieved even with relatively short reads from Genome Sequencer 20 and Solexa. The bacterial proteome was analyzed to select immunogenic proteins, which were then expressed and used to elaborate the first steps of an ELISA. This work constitutes the proof of principle for a dirty genome approach, i.e. the use of unfinished genome sequences of pathogenic bacteria, coupled with proteomics, to rapidly identify new immunogenic proteins useful for developing specific diagnostic tests in the future, such as ELISA, immunohistochemistry and direct antigen detection. Although applied here to an emerging pathogen, this combined dirty genome sequencing/proteomics approach may be used for any pathogen for which better diagnostics are needed. These genome sequences may also be very useful for developing DNA-based diagnostic tests. All these diagnostic tools will allow further evaluation of the pathogenic potential of this obligate intracellular bacterium.

  10. Computational Omics Pre-Awardees | Office of Cancer Clinical Proteomics Research

    Science.gov (United States)

    The National Cancer Institute's Clinical Proteomic Tumor Analysis Consortium (CPTAC) is pleased to announce the pre-awardees of the Computational Omics solicitation. Working with NVIDIA Foundation's Compute the Cure initiative and Leidos Biomedical Research Inc., the NCI, through this solicitation, seeks to leverage computational efforts to provide tools for the mining and interpretation of large-scale publicly available ‘omics’ datasets.

  11. The high throughput biomedicine unit at the institute for molecular medicine Finland: high throughput screening meets precision medicine.

    Science.gov (United States)

    Pietiainen, Vilja; Saarela, Jani; von Schantz, Carina; Turunen, Laura; Ostling, Paivi; Wennerberg, Krister

    2014-05-01

    The High Throughput Biomedicine (HTB) unit at the Institute for Molecular Medicine Finland FIMM was established in 2010 to serve as a national and international academic screening unit providing access to state-of-the-art instrumentation for chemical and RNAi-based high throughput screening. The initial focus of the unit was multiwell plate-based chemical screening and high-content microarray-based siRNA screening. However, over the first four years of operation, the unit has moved to a more flexible service platform where both chemical and siRNA screening are performed at different scales, primarily in multiwell plate-based assays with a wide range of readout possibilities, with a focus on ultraminiaturization to allow affordable screening for academic users. In addition to high throughput screening, the equipment of the unit is also used to support miniaturized, multiplexed and high throughput applications for other types of research, such as genomics, sequencing and biobanking operations. Importantly, in line with the translational research goals at FIMM, an increasing part of the operations at the HTB unit is focused on high throughput systems biology platforms for functional profiling of patient cells in personalized and precision medicine projects.

  12. Differential membrane proteomics using 18O-labeling to identify biomarkers for cholangiocarcinoma

    DEFF Research Database (Denmark)

    Kristiansen, Troels Zakarias; Harsha, H C; Grønborg, Mads

    2008-01-01

    Quantitative proteomic methodologies allow profiling of hundreds to thousands of proteins in a high-throughput fashion. This approach is increasingly applied to cancer biomarker discovery to identify proteins that are differentially regulated in cancers. Fractionation of protein samples based...

  13. Proteomic Analysis of Metabolic Responses to Biofuels and Chemicals in Photosynthetic Cyanobacteria.

    Science.gov (United States)

    Sun, T; Chen, L; Zhang, W

    2017-01-01

    Recent progress in various "omics" technologies has enabled quantitative measurements of biological molecules in a high-throughput manner. Among them, high-throughput proteomics is a rapidly advancing field that offers a new means to quantify metabolic changes at the protein level, which has significantly facilitated our understanding of cellular processes such as protein synthesis, posttranslational modifications, and degradation in response to environmental perturbations. Cyanobacteria are autotrophic prokaryotes that can perform oxygenic photosynthesis and have recently attracted significant attention as one promising alternative to traditional biomass-based "microbial cell factories" for producing green fuels and chemicals. However, early studies have shown that low tolerance to toxic biofuels and chemicals represents one major hurdle to further improving the productivity of cyanobacterial production systems. To address this issue, the metabolic responses of cyanobacterial cells to toxic end-products, and their regulation, need to be defined. In this chapter, we discuss recent progress in interpreting cyanobacterial responses to biofuels and chemicals using high-throughput proteomics approaches, aiming to provide insights and guidelines on how to enhance the tolerance and productivity of biofuels or chemicals in renewable cyanobacteria systems in the future. © 2017 Elsevier Inc. All rights reserved.

  14. High throughput sequencing and proteomics to identify immunogenic proteins of a new pathogen: the dirty genome approach.

    Directory of Open Access Journals (Sweden)

    Gilbert Greub

    Full Text Available BACKGROUND: With the availability of new generation sequencing technologies, bacterial genome projects have undergone a major boost. Still, chromosome completion requires costly and time-consuming gap closure, especially when the genome contains highly repetitive elements. However, incomplete genome data may be sufficiently informative to derive the pursued information. For emerging pathogens, i.e. newly identified pathogens, the lack of release of genome data during the gap-closure stage is clearly medically counterproductive. METHODS/PRINCIPAL FINDINGS: We thus investigated the feasibility of a dirty genome approach, i.e. the release of unfinished genome sequences to develop serological diagnostic tools. We showed that almost the whole genome sequence of the emerging pathogen Parachlamydia acanthamoebae was retrieved even with relatively short reads from Genome Sequencer 20 and Solexa. The bacterial proteome was analyzed to select immunogenic proteins, which were then expressed and used to elaborate the first steps of an ELISA. CONCLUSIONS/SIGNIFICANCE: This work constitutes the proof of principle for a dirty genome approach, i.e. the use of unfinished genome sequences of pathogenic bacteria, coupled with proteomics, to rapidly identify new immunogenic proteins useful for developing specific diagnostic tests in the future, such as ELISA, immunohistochemistry and direct antigen detection. Although applied here to an emerging pathogen, this combined dirty genome sequencing/proteomics approach may be used for any pathogen for which better diagnostics are needed. These genome sequences may also be very useful for developing DNA-based diagnostic tests. All these diagnostic tools will allow further evaluation of the pathogenic potential of this obligate intracellular bacterium.

  15. Towards high throughput and spatiotemporal proteomics : analytical workflows and quantitative label-free mass spectrometry

    NARCIS (Netherlands)

    Mostovenko, Ekaterina

    2013-01-01

    A large part of modern biology is dedicated to the functional annotation and interpretation of genetic information and its influence on the subject’s phenotype. Proteomics describes the state of the system from the perspective of expression, structure, localization, interaction and function of the

  16. High throughput sample processing and automated scoring

    Directory of Open Access Journals (Sweden)

    Gunnar eBrunborg

    2014-10-01

    Full Text Available The comet assay is a sensitive and versatile method for assessing DNA damage in cells. In the traditional version of the assay, there are many manual steps involved and few samples can be treated in one experiment. High throughput modifications have been developed during recent years, and they are reviewed and discussed here. These modifications include accelerated scoring of comets; other important elements that have been studied and adapted to high throughput are cultivation and manipulation of cells or tissues before and after exposure, and freezing of treated samples until comet analysis and scoring. High throughput methods save time and money, but they are useful also for other reasons: large-scale experiments may be performed which are otherwise not practicable (e.g., analysis of many organs from exposed animals, and human biomonitoring studies), and automation gives more uniform sample treatment and less dependence on operator performance. The high throughput modifications now available vary largely in their versatility, capacity, complexity and costs. The bottleneck for further increases in throughput appears to be the scoring.

  17. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, and experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and provide easy reproducibility by making the datasets and computational methods easily available.

  18. Proteomics - new analytical approaches

    International Nuclear Information System (INIS)

    Hancock, W.S.

    2001-01-01

    Full text: Recent developments in the sequencing of the human genome have indicated that the number of coding gene sequences may be as few as 30,000. It is clear, however, that the complexity of the human species depends on the much greater diversity of the corresponding protein complement. Estimates of the diversity (discrete protein species) of the human proteome range from 200,000 to 300,000 at the lower end to 2,000,000 to 3,000,000 at the high end. In addition, proteomics (the study of the protein complement to the genome) has been subdivided into two main approaches. Global proteomics refers to a high throughput examination of the full protein set present in a cell under a given environmental condition. Focused proteomics refers to a more detailed study of a restricted set of proteins that are related to a specified biochemical pathway or subcellular structure. While many of the advances in proteomics will be based on the sequencing of the human genome, de novo characterization of protein microheterogeneity (glycosylation, phosphorylation and sulfation, as well as the incorporation of lipid components) will be required in disease studies. To characterize these modifications it is necessary to digest the protein mixture with an enzyme to produce the corresponding mixture of peptides. In a process analogous to sequencing of the genome, shotgun sequencing of the proteome is based on the characterization of the key fragments produced by such a digest. Thus, a glycopeptide, and hence a specific glycosylation motif, will be identified by a unique mass and then a diagnostic MS/MS spectrum. Mass spectrometry will be the preferred detector in these applications because of the unparalleled information content provided by one or more dimensions of mass measurement. In addition, highly efficient separation processes are an absolute requirement for advanced proteomic studies. For example, a combination of the orthogonal approaches HPLC and HPCE can be very powerful.
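
    The digest-then-identify workflow described above begins with an in-silico digest of each protein and the masses of the resulting peptides. A minimal Python sketch (function names are illustrative; the residue masses are the standard monoisotopic values):

    ```python
    import re

    # Monoisotopic residue masses in Daltons; one water mass is added per
    # peptide for the terminal H and OH groups.
    RESIDUE_MASS = {
        "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
        "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
        "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
        "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
        "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
    }
    WATER = 18.010565

    def tryptic_digest(protein):
        """Cleave after K or R unless the next residue is P (classic trypsin rule)."""
        return [p for p in re.split(r"(?<=[KR])(?!P)", protein) if p]

    def peptide_mass(peptide):
        """Monoisotopic mass of an unmodified peptide."""
        return WATER + sum(RESIDUE_MASS[aa] for aa in peptide)
    ```

    Each peptide mass computed this way is what gets compared against the measured masses; modified peptides such as glycopeptides shift away from these values by the mass of the modification.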

  19. REDItools: high-throughput RNA editing detection made easy.

    Science.gov (United States)

    Picardi, Ernesto; Pesole, Graziano

    2013-07-15

    The reliable detection of RNA editing sites from massive sequencing data remains challenging and, although several methodologies have been proposed, no computational tools have been released to date. Here, we introduce REDItools, a suite of Python scripts to perform high-throughput investigation of RNA editing using next-generation sequencing data. REDItools are written in the Python programming language and freely available at http://code.google.com/p/reditools/. Contact: ernesto.picardi@uniba.it or graziano.pesole@uniba.it. Supplementary data are available at Bioinformatics online.
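
    The kind of filtering such tools perform can be illustrated with a toy detector: given per-position base counts from aligned RNA reads, flag reference-A sites with adequate coverage and a sufficient fraction of G reads. REDItools itself applies many more quality, strand and multi-sample filters; everything below is illustrative.

    ```python
    def editing_candidates(sites, min_coverage=10, min_freq=0.1):
        """Flag putative A-to-I editing sites (read out as A->G mismatches).

        `sites` maps position -> (reference base, dict of RNA-read base counts).
        A site qualifies when the reference is A, coverage is adequate and the
        G frequency passes the threshold -- a crude stand-in for the coverage,
        quality and substitution filters a real tool applies.
        """
        hits = []
        for pos, (ref, counts) in sorted(sites.items()):
            coverage = sum(counts.values())
            if ref == "A" and coverage >= min_coverage:
                g_freq = counts.get("G", 0) / coverage
                if g_freq >= min_freq:
                    hits.append((pos, round(g_freq, 3)))
        return hits

    example = {
        100: ("A", {"A": 40, "G": 8}),   # 16.7% G reads: reported
        200: ("A", {"A": 50, "G": 1}),   # below the frequency threshold
        300: ("C", {"C": 30, "T": 5}),   # reference base is not A
    }
    ```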

  20. Genomes to Proteomes

    Energy Technology Data Exchange (ETDEWEB)

    Panisko, Ellen A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Grigoriev, Igor [USDOE Joint Genome Inst., Walnut Creek, CA (United States); Daly, Don S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Webb-Robertson, Bobbie-Jo [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Baker, Scott E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2009-03-01

    Biologists are awash with genomic sequence data. In large part, this is due to the rapid acceleration in the generation of DNA sequence that occurred as public and private research institutes raced to sequence the human genome. In parallel with the large human genome effort, mostly smaller genomes of other important model organisms were sequenced. Projects following on these initial efforts have made use of technological advances and the DNA sequencing infrastructure that was built for the human and other organism genome projects. As a result, the genome sequences of many organisms are available in high quality draft form. While in many ways this is good news, there are limitations to the biological insights that can be gleaned from DNA sequences alone; genome sequences offer only a bird's eye view of the biological processes endemic to an organism or community. Fortunately, the genome sequences now being produced at such a high rate can serve as the foundation for other global experimental platforms such as proteomics. Proteomic methods offer a snapshot of the proteins present at a point in time for a given biological sample. Current global proteomics methods combine enzymatic digestion, separations, mass spectrometry and database searching for peptide identification. One key aspect of proteomics is the prediction of peptide sequences from mass spectrometry data. Global proteomic analysis uses computational matching of experimental mass spectra with predicted spectra based on databases of gene models that are often generated computationally. Thus, the quality of gene models predicted from a genome sequence is crucial in the generation of high quality peptide identifications. Once peptides are identified they can be assigned to their parent protein. Proteins identified as expressed in a given experiment are most useful when compared to other expressed proteins in a larger biological context or biochemical pathway. In this chapter we will discuss the automatic
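
    The computational matching step described above can be caricatured as shared-peak counting: score each candidate peptide's predicted fragment masses against the observed spectrum and keep the best scorer. Real search engines use far more sophisticated probabilistic scoring; the names and numbers below are purely illustrative.

    ```python
    def match_score(observed, predicted, tol=0.5):
        """Shared-peak count: number of predicted fragment m/z values that
        match some observed peak within `tol` Da."""
        return sum(any(abs(p - o) <= tol for o in observed) for p in predicted)

    def best_match(observed, candidates):
        """Pick the candidate peptide whose predicted spectrum shares the
        most peaks with the observed spectrum."""
        return max(candidates, key=lambda pep: match_score(observed, candidates[pep]))

    # Hypothetical candidate peptides with predicted fragment m/z lists,
    # as would be generated from gene-model-derived protein sequences.
    candidates = {
        "PEPTIDEA": [100.0, 200.0, 300.0],
        "PEPTIDEB": [110.0, 210.0, 310.0],
    }
    ```

    This also shows why gene-model quality matters: a peptide missing from the predicted database can never win the comparison, no matter how good its spectrum is.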

  1. High Throughput Neuro-Imaging Informatics

    Directory of Open Access Journals (Sweden)

    Michael I Miller

    2013-12-01

    Full Text Available This paper describes neuroinformatics technologies at 1 mm anatomical scale based on high throughput 3D functional and structural imaging technologies of the human brain. The core is an abstract pipeline for converting functional and structural imagery into high-dimensional neuroinformatic representations containing O(1E3-1E4) discriminating dimensions. The pipeline is based on advanced image analysis coupled to digital knowledge representations in the form of dense atlases of the human brain at gross anatomical scale. We demonstrate the integration of these high-dimensional representations with machine learning methods, which have become the mainstay of other fields of science, including genomics as well as social networks. Such high throughput facilities have the potential to alter the way medical images are stored and utilized in radiological workflows. The neuroinformatics pipeline is used to examine cross-sectional and personalized analyses of neuropsychiatric illnesses in clinical applications as well as longitudinal studies. We demonstrate the use of high throughput machine learning methods for supporting (i) cross-sectional image analysis to evaluate the health status of individual subjects with respect to the population data, and (ii) integration of image and non-image information for diagnosis and prognosis.

  2. PatternLab for proteomics: a tool for differential shotgun proteomics

    Directory of Open Access Journals (Sweden)

    Yates John R

    2008-07-01

    Full Text Available Abstract Background A goal of proteomics is to distinguish between states of a biological system by identifying protein expression differences. Liu et al. demonstrated a method to perform semi-relative protein quantitation in shotgun proteomics data by correlating the number of tandem mass spectra obtained for each protein, or "spectral count", with its abundance in a mixture; however, two issues have remained open: how to normalize spectral counting data and how to efficiently pinpoint differences between profiles. Moreover, Chen et al. recently showed how to increase the number of identified proteins in shotgun proteomics by analyzing samples with different MS-compatible detergents during proteolytic digestion. The latter introduced new challenges from the data analysis perspective, since replicate readings are not acquired. Results To address the open issues above, we present a program termed PatternLab for proteomics. This program implements existing strategies and adds two new methods to pinpoint differences in protein profiles. The first method, ACFold, addresses experiments with fewer than three replicates from each state or with assays acquired by different protocols as described by Chen et al. ACFold uses a combined criterion based on expression fold changes, the AC test, and the false-discovery rate, and can supply a "bird's-eye view" of differentially expressed proteins. The other method addresses experimental designs having multiple readings from each state and is referred to as nSVM (natural support vector machine) because of its roots in evolutionary computing and in statistical learning theory. Our observations suggest that nSVM's niche comprises projects that select a minimum set of proteins for classification purposes; for example, the development of an early detection kit for a given pathology. We demonstrate the effectiveness of each method on experimental data and confront them with existing strategies.
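
    One widely used way to normalize spectral counts is length-scaled normalization (NSAF), sketched below together with a simple fold-change comparison; this is shown for illustration only and is not necessarily the exact scheme PatternLab implements.

    ```python
    import math

    def nsaf(spectral_counts, lengths):
        """Normalized Spectral Abundance Factor: each protein's spectral count
        is divided by its length, then rescaled so the values sum to one
        across the run, making runs of different depth comparable."""
        saf = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
        total = sum(saf.values())
        return {p: v / total for p, v in saf.items()}

    def log2_fold_change(profile_a, profile_b, pseudo=1e-6):
        """Per-protein log2 ratio between two normalized profiles; the small
        pseudo-value keeps proteins absent from one state finite."""
        return {p: math.log2((profile_a[p] + pseudo) / (profile_b.get(p, 0.0) + pseudo))
                for p in profile_a}
    ```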

  3. High throughput diffractive multi-beam femtosecond laser processing using a spatial light modulator

    Energy Technology Data Exchange (ETDEWEB)

    Kuang Zheng [Laser Group, Department of Engineering, University of Liverpool Brownlow Street, Liverpool L69 3GQ (United Kingdom)], E-mail: z.kuang@liv.ac.uk; Perrie, Walter [Laser Group, Department of Engineering, University of Liverpool Brownlow Street, Liverpool L69 3GQ (United Kingdom); Leach, Jonathan [Department of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Sharp, Martin; Edwardson, Stuart P. [Laser Group, Department of Engineering, University of Liverpool Brownlow Street, Liverpool L69 3GQ (United Kingdom); Padgett, Miles [Department of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Dearden, Geoff; Watkins, Ken G. [Laser Group, Department of Engineering, University of Liverpool Brownlow Street, Liverpool L69 3GQ (United Kingdom)

    2008-12-30

    High throughput femtosecond laser processing is demonstrated by creating multiple beams using a spatial light modulator (SLM). The diffractive multi-beam patterns are modulated in real time by computer generated holograms (CGHs), which can be calculated by appropriate algorithms. An interactive LabVIEW program is adopted to generate the relevant CGHs. Optical efficiency at this stage is shown to be approximately 50% into first-order beams, and real time processing has been carried out at a 50 Hz refresh rate. Results obtained demonstrate high-precision surface micro-structuring on silicon and Ti6Al4V with a throughput gain of more than one order of magnitude.
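
    One standard way to compute such a multi-beam CGH is the "gratings and lenses" superposition: add one linear phase ramp (a blazed grating) per target spot and keep only the argument of the complex sum. A pure-Python sketch, with the caveat that the paper's LabVIEW implementation and its hologram algorithms may differ:

    ```python
    import cmath
    import math

    def multibeam_phase(nx, ny, spot_freqs):
        """Phase-only hologram steering light into several first-order spots:
        superpose one linear phase ramp per target spot and keep the argument
        of the complex sum. `spot_freqs` holds (fx, fy) spatial frequencies
        in cycles per pixel."""
        phase = []
        for y in range(ny):
            row = []
            for x in range(nx):
                field = sum(cmath.exp(2j * math.pi * (fx * x + fy * y))
                            for fx, fy in spot_freqs)
                row.append(cmath.phase(field))  # value in (-pi, pi] for the SLM
            phase.append(row)
        return phase
    ```

    Displaying the resulting phase map on the SLM diffracts the single input beam into one first-order spot per entry in `spot_freqs`; refreshing the map reconfigures the spot pattern without moving any optics.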

  4. NMR in the SPINE Structural Proteomics project.

    Science.gov (United States)

    Ab, E; Atkinson, A R; Banci, L; Bertini, I; Ciofi-Baffoni, S; Brunner, K; Diercks, T; Dötsch, V; Engelke, F; Folkers, G E; Griesinger, C; Gronwald, W; Günther, U; Habeck, M; de Jong, R N; Kalbitzer, H R; Kieffer, B; Leeflang, B R; Loss, S; Luchinat, C; Marquardsen, T; Moskau, D; Neidig, K P; Nilges, M; Piccioli, M; Pierattelli, R; Rieping, W; Schippmann, T; Schwalbe, H; Travé, G; Trenner, J; Wöhnert, J; Zweckstetter, M; Kaptein, R

    2006-10-01

    This paper describes the developments, role and contributions of the NMR spectroscopy groups in the Structural Proteomics In Europe (SPINE) consortium. Focusing on the development of high-throughput (HTP) pipelines for NMR structure determinations of proteins, all aspects from sample preparation, data acquisition, data processing, data analysis to structure determination have been improved with respect to sensitivity, automation, speed, robustness and validation. Specific highlights are protonless (13)C-direct detection methods and inferential structure determinations (ISD). In addition to technological improvements, these methods have been applied to deliver over 60 NMR structures of proteins, among which are five that failed to crystallize. The inclusion of NMR spectroscopy in structural proteomics pipelines improves the success rate for protein structure determinations.

  5. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Science.gov (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917
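
    Digital traits of the kind mentioned above are typically computed from a segmented (binary) plant image. A toy sketch of two such traits on a tiny mask; the trait names and representation are illustrative, not Image Harvest's actual API:

    ```python
    def digital_traits(mask):
        """Two simple architecture traits from a binary plant mask (rows of
        0/1 pixels): projected shoot area (foreground pixel count) and plant
        height (number of image rows spanned by foreground pixels)."""
        area = sum(sum(row) for row in mask)
        occupied = [i for i, row in enumerate(mask) if any(row)]
        height = occupied[-1] - occupied[0] + 1 if occupied else 0
        return {"area": area, "height": height}
    ```

    Computed per image and per plant line, such traits become the quantitative phenotypes that genome-wide association mapping then correlates with genotype.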

  6. Controlling high-throughput manufacturing at the nano-scale

    Science.gov (United States)

    Cooper, Khershed P.

    2013-09-01

    Interest in nano-scale manufacturing research and development is growing. The reason is to accelerate the translation of discoveries and inventions of nanoscience and nanotechnology into products that would benefit industry, the economy and society. Ongoing research in nanomanufacturing is focused primarily on developing novel nanofabrication techniques for a variety of applications—materials, energy, electronics, photonics, biomedical, etc. Our goal is to foster the development of high-throughput methods of fabricating nano-enabled products. Large-area parallel processing and high-speed continuous processing are high-throughput means for mass production. An example of large-area processing is step-and-repeat nanoimprinting, by which nanostructures are reproduced again and again over a large area, such as a 12 in wafer. Roll-to-roll processing is an example of continuous processing, by which it is possible to print and imprint multi-level nanostructures and nanodevices on a moving flexible substrate. The big pay-off is high-volume production and low unit cost. However, the anticipated cost benefits can only be realized if the increased production rate is accompanied by high yields of high-quality products. To ensure product quality, we need to design and construct manufacturing systems such that the processes can be closely monitored and controlled. One approach is to bring cyber-physical systems (CPS) concepts to nanomanufacturing. CPS involves the control of a physical system, such as manufacturing, through modeling, computation, communication and control. Such a closely coupled system will involve in-situ metrology and closed-loop control of the physical processes, guided by physics-based models and driven by appropriate instrumentation, sensing and actuation. This paper will discuss these ideas in the context of controlling high-throughput manufacturing at the nano-scale.
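
    The closed-loop monitor-model-actuate idea can be reduced to a few lines: each cycle, a sensed process value is compared with the setpoint and a proportional correction is applied. This is purely illustrative of the CPS concept; a real controller would be driven by a physics-based process model and richer sensing, and all names and numbers here are made up.

    ```python
    def run_closed_loop(setpoint, initial, gain, steps):
        """Minimal proportional control loop: sense the process value,
        compare with the setpoint, apply a correction, repeat."""
        value, trace = initial, []
        for _ in range(steps):
            error = setpoint - value   # in-situ metrology
            value += gain * error      # actuation
            trace.append(value)
        return trace

    # Illustrative run: the process value converges toward the setpoint.
    trace = run_closed_loop(setpoint=100.0, initial=80.0, gain=0.5, steps=20)
    ```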

  7. Operational evaluation of high-throughput community-based mass prophylaxis using Just-in-time training.

    Science.gov (United States)

    Spitzer, James D; Hupert, Nathaniel; Duckart, Jonathan; Xiong, Wei

    2007-01-01

    Community-based mass prophylaxis is a core public health operational competency, but staffing needs may overwhelm the local trained health workforce. Just-in-time (JIT) training of emergency staff and computer modeling of workforce requirements represent two complementary approaches to address this logistical problem. Multnomah County, Oregon, conducted a high-throughput point of dispensing (POD) exercise to test JIT training and computer modeling to validate POD staffing estimates. The POD had 84% non-health-care worker staff and processed 500 patients per hour. Post-exercise modeling replicated observed staff utilization levels and queue formation, including development and amelioration of a large medical evaluation queue caused by lengthy processing times and understaffing in the first half-hour of the exercise. The exercise confirmed the feasibility of using JIT training for high-throughput antibiotic dispensing clinics staffed largely by nonmedical professionals. Patient processing times varied over the course of the exercise, with important implications for both staff reallocation and future POD modeling efforts. Overall underutilization of staff revealed the opportunity for greater efficiencies and even higher future throughputs.
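
    The queue dynamics observed in the exercise (build-up under early understaffing, then drainage once staff are reallocated) can be reproduced with a minimal discrete-time model; the function and all numbers below are illustrative, not the study's actual model.

    ```python
    def simulate_pod(arrivals_per_min, staff_per_min, patients_per_staff_min=1.0):
        """Discrete-time sketch of a dispensing station: each minute new
        patients arrive, up to the staffed capacity are served, and the
        remainder queue."""
        queue, trace = 0.0, []
        for arrivals, staff in zip(arrivals_per_min, staff_per_min):
            capacity = staff * patients_per_staff_min
            queue = max(0.0, queue + arrivals - capacity)
            trace.append(queue)
        return trace

    # 10 arrivals/min for an hour; understaffed (8) for the first 30 min,
    # then reinforced (14): the queue grows to 60 and later drains to zero.
    trace = simulate_pod([10] * 60, [8] * 30 + [14] * 30)
    ```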

  8. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    Science.gov (United States)

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes—neural connectivity maps of the brain—using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems—reads to parallel disk arrays and writes to solid-state storage—to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992
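
    A common way to build the kind of spatial index such a cluster partitions data by is a Morton (Z-order) key, which interleaves coordinate bits so that nearby voxels get nearby keys; partitioning by key ranges then keeps spatially local data on the same node. The sketch below illustrates the technique only; the project's actual key scheme may differ.

    ```python
    def morton3(x, y, z, bits=10):
        """Interleave the low `bits` bits of integer coordinates (x, y, z)
        into a single Morton (Z-order) key."""
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (3 * i)        # x bits go to positions 0, 3, 6, ...
            key |= ((y >> i) & 1) << (3 * i + 1)    # y bits go to positions 1, 4, 7, ...
            key |= ((z >> i) & 1) << (3 * i + 2)    # z bits go to positions 2, 5, 8, ...
        return key
    ```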

  10. A bead-based western for high-throughput cellular signal transduction analyses

    Science.gov (United States)

    Treindl, Fridolin; Ruprecht, Benjamin; Beiter, Yvonne; Schultz, Silke; Döttinger, Anette; Staebler, Annette; Joos, Thomas O.; Kling, Simon; Poetz, Oliver; Fehm, Tanja; Neubauer, Hans; Kuster, Bernhard; Templin, Markus F.

    2016-01-01

    Dissecting cellular signalling requires the analysis of large number of proteins. The DigiWest approach we describe here transfers the western blot to a bead-based microarray platform. By combining gel-based protein separation with immobilization on microspheres, hundreds of replicas of the initial blot are created, thus enabling the comprehensive analysis of limited material, such as cells collected by laser capture microdissection, and extending traditional western blotting to reach proteomic scales. The combination of molecular weight resolution, sensitivity and signal linearity on an automated platform enables the rapid quantification of hundreds of specific proteins and protein modifications in complex samples. This high-throughput western blot approach allowed us to identify and characterize alterations in cellular signal transduction that occur during the development of resistance to the kinase inhibitor Lapatinib, revealing major changes in the activation state of Ephrin-mediated signalling and a central role for p53-controlled processes. PMID:27659302

  11. Identification of redox-sensitive cysteines in the arabidopsis proteome using OxiTRAQ, a quantitative redox proteomics method

    KAUST Repository

    Liu, Pei; Zhang, Huoming; Wang, Hai; Xia, Yiji

    2014-01-01

    Here we report a high-throughput quantitative proteomic approach termed OxiTRAQ for identifying proteins whose thiols undergo reversible oxidative modifications in Arabidopsis cells subjected to oxidative stress. In this approach, a biotinylated thiol-reactive reagent is used for differential

  12. Nanoscale Synaptic Membrane Mimetic Allows Unbiased High Throughput Screen That Targets Binding Sites for Alzheimer's-Associated Aβ Oligomers.

    Directory of Open Access Journals (Sweden)

    Kyle C Wilcox

    Full Text Available Despite their value as sources of therapeutic drug targets, membrane proteomes are largely inaccessible to high-throughput screening (HTS) tools designed for soluble proteins. An important example comprises the membrane proteins that bind amyloid β oligomers (AβOs). AβOs are neurotoxic ligands thought to instigate the synapse damage that leads to Alzheimer's dementia. At present, the identities of initial AβO binding sites are highly uncertain, largely because of extensive protein-protein interactions that occur following attachment of AβOs to surface membranes. Here, we show that AβO binding sites can be obtained in a state suitable for unbiased HTS by encapsulating the solubilized synaptic membrane proteome into nanoscale lipid bilayers (Nanodiscs). This method gives a soluble membrane protein library (SMPL), a collection of individualized synaptic proteins in a soluble state. Proteins within SMPL Nanodiscs showed enzymatic and ligand binding activity consistent with conformational integrity. AβOs were found to bind SMPL Nanodiscs with high affinity and specificity, with binding dependent on intact synaptic membrane proteins, and selective for the higher molecular weight oligomers known to accumulate at synapses. Combining SMPL Nanodiscs with a mix-incubate-read chemiluminescence assay provided a solution-based HTS platform to discover antagonists of AβO binding. Screening a library of 2700 drug-like compounds and natural products yielded one compound that potently reduced AβO binding to SMPL Nanodiscs, synaptosomes, and synapses in nerve cell cultures. Although not a therapeutic candidate, this small molecule inhibitor of synaptic AβO binding will provide a useful experimental antagonist for future mechanistic studies of AβOs in Alzheimer's model systems. Overall, results provide proof of concept for using SMPLs in high throughput screening for AβO binding antagonists, and illustrate in general how a SMPL Nanodisc system can

  13. Nanoscale Synaptic Membrane Mimetic Allows Unbiased High Throughput Screen That Targets Binding Sites for Alzheimer's-Associated Aβ Oligomers.

    Science.gov (United States)

    Wilcox, Kyle C; Marunde, Matthew R; Das, Aditi; Velasco, Pauline T; Kuhns, Benjamin D; Marty, Michael T; Jiang, Haoming; Luan, Chi-Hao; Sligar, Stephen G; Klein, William L

    2015-01-01

    Despite their value as sources of therapeutic drug targets, membrane proteomes are largely inaccessible to high-throughput screening (HTS) tools designed for soluble proteins. An important example comprises the membrane proteins that bind amyloid β oligomers (AβOs). AβOs are neurotoxic ligands thought to instigate the synapse damage that leads to Alzheimer's dementia. At present, the identities of initial AβO binding sites are highly uncertain, largely because of extensive protein-protein interactions that occur following attachment of AβOs to surface membranes. Here, we show that AβO binding sites can be obtained in a state suitable for unbiased HTS by encapsulating the solubilized synaptic membrane proteome into nanoscale lipid bilayers (Nanodiscs). This method gives a soluble membrane protein library (SMPL)--a collection of individualized synaptic proteins in a soluble state. Proteins within SMPL Nanodiscs showed enzymatic and ligand binding activity consistent with conformational integrity. AβOs were found to bind SMPL Nanodiscs with high affinity and specificity, with binding dependent on intact synaptic membrane proteins, and selective for the higher molecular weight oligomers known to accumulate at synapses. Combining SMPL Nanodiscs with a mix-incubate-read chemiluminescence assay provided a solution-based HTS platform to discover antagonists of AβO binding. Screening a library of 2700 drug-like compounds and natural products yielded one compound that potently reduced AβO binding to SMPL Nanodiscs, synaptosomes, and synapses in nerve cell cultures. Although not a therapeutic candidate, this small molecule inhibitor of synaptic AβO binding will provide a useful experimental antagonist for future mechanistic studies of AβOs in Alzheimer's model systems. Overall, results provide proof of concept for using SMPLs in high throughput screening for AβO binding antagonists, and illustrate in general how a SMPL Nanodisc system can facilitate drug discovery
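Plate-based screens like the mix-incubate-read assay above are routinely qualified with the Z'-factor, which measures how well positive and negative controls separate. The control values below are hypothetical, purely to show the calculation; this is a generic HTS statistic, not code from the study:

```python
from statistics import mean, stdev

def z_prime(pos, neg):
    """Z'-factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.

    Values above ~0.5 are conventionally taken to indicate an assay
    with enough dynamic range and low enough noise for HTS.
    """
    return 1.0 - 3.0 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

# Hypothetical luminescence counts for binding controls
pos_ctrl = [980, 1010, 995, 1005]   # uninhibited AbO binding
neg_ctrl = [110, 95, 105, 90]       # fully blocked binding
zp = z_prime(pos_ctrl, neg_ctrl)
```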

  14. Differential proteomics reveals the hallmarks of seed development in common bean (Phaseolus vulgaris L.)

    NARCIS (Netherlands)

    Parreira, J R; Bouraada, J; Fitzpatrick, M A; Silvestre, S; Bernardes da Silva, A; Marques da Silva, J; Almeida, A M; Fevereiro, P; Altelaar, A F M; Araújo, S S

    2016-01-01

    Common bean (Phaseolus vulgaris L.) is one of the most consumed staple foods worldwide. Little is known about the molecular mechanisms controlling seed development. This study aims to comprehensively describe proteome dynamics during seed development of common bean. A high-throughput gel-free

  15. Biomarker discovery and applications for foods and beverages: proteomics to nanoproteomics.

    Science.gov (United States)

    Agrawal, Ganesh Kumar; Timperio, Anna Maria; Zolla, Lello; Bansal, Vipul; Shukla, Ravi; Rakwal, Randeep

    2013-11-20

    Foods and beverages have been at the heart of our society for centuries, sustaining humankind: health, life, and the pleasures that go with it. The more we grow and develop as a civilization, the more we feel the need to know about the food we eat and the beverages we drink. Moreover, with an ever-increasing demand for food due to the growing human population, food security remains a major concern. Food safety is another growing concern, as consumers prefer varied foods and beverages that are traded not only nationally but also globally. Science and technology in the 21st century are at a new high, especially in the field of biological sciences. The availability of genome sequences and associated high-throughput, sensitive technologies means that foods are being analyzed at various levels. In particular, high-throughput omics approaches are being applied to develop suitable biomarkers for foods and beverages and to address questions of quality, technology, authenticity, and safety. Proteomics is one of the technologies increasingly being utilized to profile expressed proteins in different foods and beverages, and the acquired knowledge and protein information have now been translated to address the safety of foods and beverages. Very recently, the power of proteomic technology has been integrated with another highly sensitive and miniaturized technology, nanotechnology, yielding the new term nanoproteomics. Nanoproteomics offers real-time multiplexed analysis in a miniaturized assay, with low sample consumption and high sensitivity. To name a few, nanomaterials such as quantum dots, gold nanoparticles, carbon nanotubes, and nanowires have demonstrated the potential to overcome the challenges of sensitivity faced by proteomics in biomarker detection, discovery, and application. In this review, we discuss the importance of biomarker discovery and applications for foods and beverages, the contribution of proteomic technology in

  16. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins, which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, the focus is on studies for the detection of differentially regulated proteins. Furthermore, issues of sample-size planning, statistical analysis of expression levels, and methods for data preprocessing are covered.
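Because thousands of proteins are tested on few samples, the multiple-testing step in such differential analyses is critical; the Benjamini-Hochberg step-up procedure is a standard way to control the false discovery rate. The sketch below, with made-up p-values, illustrates the procedure; it is not code from the chapter:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return the indices of hypotheses rejected at FDR level alpha.

    Step-up rule: sort p-values ascending, find the largest rank k with
    p_(k) <= alpha * k / m, and reject the k smallest p-values.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])

# Hypothetical per-protein p-values from a two-group 2-DE comparison
pvals = [0.001, 0.009, 0.04, 0.20, 0.45, 0.80]
significant = benjamini_hochberg(pvals)  # indices of differential proteins
```

Note that 0.04 is below 0.05 but is not rejected: at rank 3 of 6 the adjusted cutoff is 0.05 * 3/6 = 0.025, which is the point of FDR control versus per-test thresholds.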

  17. High-throughput search for caloric materials: the CaloriCool approach

    Science.gov (United States)

    Zarkevich, N. A.; Johnson, D. D.; Pecharsky, V. K.

    2018-01-01

    The high-throughput search paradigm adopted by the newly established caloric materials consortium, CaloriCool®, with the goal of substantially accelerating the discovery and design of novel caloric materials, is briefly discussed. We begin by describing material selection criteria based on known properties, which are followed by fast heuristic estimates and ab initio calculations, all of which have been implemented in a set of automated computational tools and measurements. We also demonstrate how theoretical and computational methods serve as a guide for experimental efforts by considering a representative example from the field of magnetocaloric materials.

  18. High Throughput Transcriptomics @ USEPA (Toxicology ...

    Science.gov (United States)

    The ideal chemical testing approach will provide complete coverage of all relevant toxicological responses. It should be sensitive and specific. It should identify the mechanism/mode-of-action (with dose-dependence). It should identify responses relevant to the species of interest. Responses should ideally be translated into tissue-, organ-, and organism-level effects. It must be economical and scalable. Using a high-throughput transcriptomics platform within the US EPA provides broader coverage of biological activity space and toxicological MOAs and helps fill the toxicological data gap. Slide presentation at the 2016 ToxForum on using High Throughput Transcriptomics at US EPA for broader coverage of biological activity space and toxicological MOAs.

  19. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  20. Application of ToxCast High-Throughput Screening and ...

    Science.gov (United States)

    Slide presentation at the SETAC annual meeting on High-Throughput Screening and Modeling Approaches to Identify Steroidogenesis Disruptors.

  1. A High-Throughput Computational Framework for Identifying Significant Copy Number Aberrations from Array Comparative Genomic Hybridisation Data

    Directory of Open Access Journals (Sweden)

    Ian Roberts

    2012-01-01

    Full Text Available Reliable identification of copy number aberrations (CNAs) from comparative genomic hybridization data would be improved by the availability of a generalised method for processing large datasets. To this end, we developed swatCGH, a data analysis framework and region detection heuristic for computational grids. swatCGH analyses sequentially displaced (sliding) windows of neighbouring probes and applies adaptive thresholds of varying stringency to identify the 10% of each chromosome that contains the most frequently occurring CNAs. We used the method to analyse a published dataset, comparing data preprocessed using four different DNA segmentation algorithms, and two methods for prioritising the detected CNAs. The consolidated list of the most commonly detected aberrations confirmed the value of swatCGH as a simplified high-throughput method for identifying biologically significant CNA regions of interest.
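The sliding-window scheme can be sketched as follows. Scoring windows by mean absolute log2 ratio and keeping a fixed top fraction is an illustrative simplification of swatCGH's adaptive, stringency-varying thresholds; function and parameter names are assumptions:

```python
def top_windows(log2_ratios, win=5, step=1, top_frac=0.10):
    """Score sliding windows of neighbouring probes by mean |log2 ratio|
    and return the start indices of the top `top_frac` fraction."""
    scores = []
    for start in range(0, len(log2_ratios) - win + 1, step):
        window = log2_ratios[start:start + win]
        scores.append((abs(sum(window) / win), start))
    scores.sort(reverse=True)
    n_keep = max(1, int(top_frac * len(scores)))
    return sorted(start for _, start in scores[:n_keep])

# Simulated chromosome: flat log2 ratios with an amplification at probes 10-16
log2 = [0.0] * 30
for i in range(10, 17):
    log2[i] = 1.0
hits = top_windows(log2)  # start indices of the highest-scoring windows
```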

  2. High-throughput proteomics detection of novel splice isoforms in human platelets.

    LENUS (Irish Health Repository)

    Power, Karen A

    2009-01-01

    Alternative splicing (AS) is an intrinsic regulatory mechanism of all metazoans. Recent findings suggest that 100% of multiexonic human genes give rise to splice isoforms. AS can be specific to tissue type, environment or developmentally regulated. Splice variants have also been implicated in various diseases including cancer. Detection of these variants will enhance our understanding of the complexity of the human genome and provide disease-specific and prognostic biomarkers. We adopted a proteomics approach to identify exon skip events - the most common form of AS. We constructed a database harboring the peptide sequences derived from all hypothetical exon skip junctions in the human genome. Searching tandem mass spectrometry (MS/MS) data against the database allows the detection of exon skip events, directly at the protein level. Here we describe the application of this approach to human platelets, including the mRNA-based verification of novel splice isoforms of ITGA2, NPEPPS and FH. This methodology is applicable to all new or existing MS/MS datasets.
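The junction-database idea can be sketched as: for every internal exon that might be skipped, record the sequence spanning the novel upstream-to-downstream join. The toy exons, flank length, and function name below are hypothetical; the actual database was built genome-wide and stored the derived peptide sequences:

```python
def exon_skip_junctions(exons, flank=10):
    """For each possible single-exon skip in a transcript, return the
    novel junction sequence: `flank` bases either side of the new join
    between the flanking exons."""
    junctions = {}
    for skipped in range(1, len(exons) - 1):  # first/last exons cannot be skipped
        upstream = exons[skipped - 1]
        downstream = exons[skipped + 1]
        junctions[skipped] = upstream[-flank:] + downstream[:flank]
    return junctions

# Toy transcript with four exons
exons = ["ATGGCT", "GTTCAA", "TTGGCC", "TAAGGA"]
juncs = exon_skip_junctions(exons, flank=4)
```

Peptides spanning such junctions are absent from the canonical proteome, so an MS/MS match to one is direct protein-level evidence of the skip event.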

  3. A high throughput DNA extraction method with high yield and quality

    Directory of Open Access Journals (Sweden)

    Xin Zhanguo

    2012-07-01

    Full Text Available Background: Preparation of large quantities of high-quality genomic DNA from a large number of plant samples is a major bottleneck for most genetic and genomic analyses, such as genetic mapping, TILLING (Targeting Induced Local Lesions IN Genomes), and next-generation sequencing directly from sheared genomic DNA. A variety of DNA preparation methods and commercial kits are available. However, they are either low-throughput, low-yield, or costly. Here, we describe a method for high-throughput genomic DNA isolation from sorghum [Sorghum bicolor (L.) Moench] leaves and dry seeds with high yield, high quality, and affordable cost. Results: We developed a high-throughput DNA isolation method by combining a high-yield CTAB extraction with an improved cleanup procedure based on the MagAttract kit. The method yielded large quantities of high-quality DNA from both lyophilized sorghum leaves and dry seeds. The DNA yield was improved nearly 30-fold with 4 times less consumption of MagAttract beads. The method can also be used in other plant species, including cotton leaves and pine needles. Conclusion: A high-throughput system for DNA extraction from sorghum leaves and seeds was developed and validated. The main advantages of the method are low cost, high yield, high quality, and high throughput. One person can process two 96-well plates in a working day at a magnetic-bead cost of $0.10 per sample, plus other consumables that other methods also require.

  4. ms_lims, a simple yet powerful open source laboratory information management system for MS-driven proteomics.

    Science.gov (United States)

    Helsens, Kenny; Colaert, Niklaas; Barsnes, Harald; Muth, Thilo; Flikka, Kristian; Staes, An; Timmerman, Evy; Wortelkamp, Steffi; Sickmann, Albert; Vandekerckhove, Joël; Gevaert, Kris; Martens, Lennart

    2010-03-01

    MS-based proteomics produces large amounts of mass spectra that require processing, identification and possibly quantification before interpretation can be undertaken. High-throughput studies require automation of these various steps, and management of the data in association with the results obtained. We here present ms_lims (http://genesis.UGent.be/ms_lims), a freely available, open-source system based on a central database to automate data management and processing in MS-driven proteomics analyses.

  5. Recent 5-year Findings and Technological Advances in the Proteomic Study of HIV-associated Disorders

    OpenAIRE

    Zhang, Lijun; Jia, Xiaofang; Jin, Jun-O; Lu, Hongzhou; Tan, Zhimi

    2017-01-01

    Human immunodeficiency virus-1 (HIV-1) mainly relies on host factors to complete its life cycle. Hence, it is very important to identify HIV-regulated host proteins. Proteomics is an excellent technique for this purpose because of its high throughput and sensitivity. In this review, we summarized current technological advances in proteomics, including general isobaric tags for relative and absolute quantitation (iTRAQ) and stable isotope labeling by amino acids in cell culture (SILAC), as wel...

  6. High Throughput Plasma Water Treatment

    Science.gov (United States)

    Mujovic, Selman; Foster, John

    2016-10-01

    The troublesome emergence of new classes of micro-pollutants, such as pharmaceuticals and endocrine disruptors, poses challenges for conventional water treatment systems. In an effort to address these contaminants and to support water reuse in drought-stricken regions, new technologies must be introduced. The interaction of water with plasma rapidly mineralizes organics by inducing advanced oxidation, in addition to other chemical, physical and radiative processes. The primary barrier to the implementation of plasma-based water treatment is scale-up of the process volume. In this work, we investigate a potentially scalable, high-throughput plasma water reactor that utilizes a packed-bed, dielectric-barrier-like geometry to maximize the plasma-water interface. Here, the water serves as the dielectric medium. High-speed imaging and emission spectroscopy are used to characterize the reactor discharges. Changes in methylene blue concentration and basic water parameters are mapped as a function of plasma treatment time. Experimental results are compared to electrostatic and plasma chemistry computations, which provide insight into the reactor's operation so that efficiency can be assessed. Supported by NSF (CBET 1336375).

  7. High throughput on-chip analysis of high-energy charged particle tracks using lensfree imaging

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Wei; Shabbir, Faizan; Gong, Chao; Gulec, Cagatay; Pigeon, Jeremy; Shaw, Jessica; Greenbaum, Alon; Tochitsky, Sergei; Joshi, Chandrashekhar [Electrical Engineering Department, University of California, Los Angeles, California 90095 (United States); Ozcan, Aydogan, E-mail: ozcan@ucla.edu [Electrical Engineering Department, University of California, Los Angeles, California 90095 (United States); Bioengineering Department, University of California, Los Angeles, California 90095 (United States); California NanoSystems Institute (CNSI), University of California, Los Angeles, California 90095 (United States)

    2015-04-13

    We demonstrate a high-throughput charged particle analysis platform, which is based on lensfree on-chip microscopy for rapid ion track analysis using allyl diglycol carbonate, i.e., CR-39 plastic polymer as the sensing medium. By adopting a wide-area opto-electronic image sensor together with a source-shifting based pixel super-resolution technique, a large CR-39 sample volume (i.e., 4 cm × 4 cm × 0.1 cm) can be imaged in less than 1 min using a compact lensfree on-chip microscope, which detects partially coherent in-line holograms of the ion tracks recorded within the CR-39 detector. After the image capture, using highly parallelized reconstruction and ion track analysis algorithms running on graphics processing units, we reconstruct and analyze the entire volume of a CR-39 detector within ∼1.5 min. This significant reduction in the entire imaging and ion track analysis time not only increases our throughput but also allows us to perform time-resolved analysis of the etching process to monitor and optimize the growth of ion tracks during etching. This computational lensfree imaging platform can provide a much higher throughput and more cost-effective alternative to traditional lens-based scanning optical microscopes for ion track analysis using CR-39 and other passive high energy particle detectors.
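Source-shifting pixel super-resolution combines many low-resolution holograms captured at slightly shifted illumination positions into a single finer-grid image. The minimal shift-and-add sketch below (integer shifts on the fine grid, simple averaging, no regularization) illustrates only the basic idea, not the group's actual reconstruction algorithms:

```python
def shift_and_add(frames, shifts, factor):
    """Place sub-pixel-shifted low-resolution frames onto a grid `factor`
    times finer and average overlapping contributions."""
    h = len(frames[0]) * factor
    w = len(frames[0][0]) * factor
    acc = [[0.0] * w for _ in range(h)]   # accumulated intensity
    cnt = [[0] * w for _ in range(h)]     # samples landing on each fine pixel
    for frame, (dy, dx) in zip(frames, shifts):
        for y, row in enumerate(frame):
            for x, val in enumerate(row):
                hy, hx = y * factor + dy, x * factor + dx
                acc[hy][hx] += val
                cnt[hy][hx] += 1
    return [[a / c if c else 0.0 for a, c in zip(ar, cr)]
            for ar, cr in zip(acc, cnt)]

# Two 2x2 frames, the second illuminated with a half-pixel horizontal shift
frames = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
hr = shift_and_add(frames, [(0, 0), (0, 1)], factor=2)
```

Each low-resolution sample lands on a distinct fine-grid pixel, so the combined image resolves structure the sensor's native pixel pitch would alias.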

  8. Subnuclear foci quantification using high-throughput 3D image cytometry

    Science.gov (United States)

    Wadduwage, Dushan N.; Parrish, Marcus; Choi, Heejin; Engelward, Bevin P.; Matsudaira, Paul; So, Peter T. C.

    2015-07-01

    Ionising radiation causes various types of DNA damage, including double strand breaks (DSBs). DSBs are often recognized by the DNA repair protein ATM, which forms gamma-H2AX foci at the sites of the DSBs that can be visualized using immunohistochemistry. However, most such experiments are low-throughput in terms of imaging and image analysis techniques. Most studies still use manual counting or classification, and are hence limited to counting a low number of foci per cell (5 foci per nucleus), as the quantification process is extremely labour intensive. Therefore we have developed a high-throughput instrumentation and computational pipeline specialized for gamma-H2AX foci quantification. A population of cells with highly clustered foci inside nuclei was imaged, in 3D with submicron resolution, using an in-house developed high-throughput image cytometer. Imaging speeds as high as 800 cells/second in 3D were achieved by using HiLo wide-field depth-resolved imaging and a remote z-scanning technique. The number of foci per cell nucleus was then quantified using a 3D extended maxima transform based algorithm. Our results suggest that, while most other 2D imaging and manual quantification studies can count only up to about 5 foci per nucleus, our method is capable of counting more than 100. Moreover, we show that 3D analysis is significantly superior to 2D techniques.
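The foci-counting step can be approximated by intensity thresholding followed by 3-D connected-component labeling. The sketch below uses a simple 6-connectivity flood fill on a toy voxel grid as a stand-in for the paper's extended maxima transform (which additionally separates touching foci by local contrast):

```python
from collections import deque

def count_foci(volume, threshold):
    """Count 3-D connected components (6-connectivity) of voxels whose
    intensity exceeds `threshold`."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    seen = set()
    foci = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if volume[z][y][x] <= threshold or (z, y, x) in seen:
                    continue
                foci += 1                      # new component found
                queue = deque([(z, y, x)])
                seen.add((z, y, x))
                while queue:                   # flood-fill the component
                    cz, cy, cx = queue.popleft()
                    for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                                       (0,-1,0), (0,0,1), (0,0,-1)):
                        z2, y2, x2 = cz + dz, cy + dy, cx + dx
                        if (0 <= z2 < nz and 0 <= y2 < ny and 0 <= x2 < nx
                                and (z2, y2, x2) not in seen
                                and volume[z2][y2][x2] > threshold):
                            seen.add((z2, y2, x2))
                            queue.append((z2, y2, x2))
    return foci

# Toy 4x4x4 nucleus with two separated bright clusters
vol = [[[0] * 4 for _ in range(4)] for _ in range(4)]
vol[0][0][0] = vol[0][0][1] = 9   # focus 1 (two touching voxels)
vol[3][3][3] = 9                  # focus 2
n = count_foci(vol, threshold=5)
```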

  9. A modified FASP protocol for high-throughput preparation of protein samples for mass spectrometry.

    Directory of Open Access Journals (Sweden)

    Jeremy Potriquet

    Full Text Available To facilitate high-throughput proteomic analyses, we have developed a modified FASP protocol which improves the rate at which protein samples can be processed prior to mass spectrometry. Adapting the original FASP protocol to a 96-well format necessitates extended spin times for buffer exchange due to the low centrifugation speeds tolerated by these devices. However, by using 96-well plates with a more robust polyethersulfone molecular-weight-cutoff membrane, instead of the cellulose membranes typically used in these devices, we could use isopropanol as a wetting agent, decreasing the spin time required for buffer exchange from an hour to 30 minutes. In a typical workflow used in our laboratory, this equates to a reduction of 3 hours per plate, providing processing times similar to FASP for up to 96 samples per plate. To test whether our modified protocol produced results similar to FASP and other FASP-like protocols, we compared its performance to the original FASP and the more recently described eFASP and MStern-blot. We show that all FASP-like methods, including our modified protocol, display similar performance in terms of proteins identified and reproducibility. Our results show that our modified FASP protocol is an efficient method for the high-throughput processing of protein samples for mass spectral analysis.

  10. Nanoscale Synaptic Membrane Mimetic Allows Unbiased High Throughput Screen That Targets Binding Sites for Alzheimer’s-Associated Aβ Oligomers

    Science.gov (United States)

    Wilcox, Kyle C.; Marunde, Matthew R.; Das, Aditi; Velasco, Pauline T.; Kuhns, Benjamin D.; Marty, Michael T.; Jiang, Haoming; Luan, Chi-Hao; Sligar, Stephen G.; Klein, William L.

    2015-01-01

    Despite their value as sources of therapeutic drug targets, membrane proteomes are largely inaccessible to high-throughput screening (HTS) tools designed for soluble proteins. An important example comprises the membrane proteins that bind amyloid β oligomers (AβOs). AβOs are neurotoxic ligands thought to instigate the synapse damage that leads to Alzheimer’s dementia. At present, the identities of initial AβO binding sites are highly uncertain, largely because of extensive protein-protein interactions that occur following attachment of AβOs to surface membranes. Here, we show that AβO binding sites can be obtained in a state suitable for unbiased HTS by encapsulating the solubilized synaptic membrane proteome into nanoscale lipid bilayers (Nanodiscs). This method gives a soluble membrane protein library (SMPL)—a collection of individualized synaptic proteins in a soluble state. Proteins within SMPL Nanodiscs showed enzymatic and ligand binding activity consistent with conformational integrity. AβOs were found to bind SMPL Nanodiscs with high affinity and specificity, with binding dependent on intact synaptic membrane proteins, and selective for the higher molecular weight oligomers known to accumulate at synapses. Combining SMPL Nanodiscs with a mix-incubate-read chemiluminescence assay provided a solution-based HTS platform to discover antagonists of AβO binding. Screening a library of 2700 drug-like compounds and natural products yielded one compound that potently reduced AβO binding to SMPL Nanodiscs, synaptosomes, and synapses in nerve cell cultures. Although not a therapeutic candidate, this small molecule inhibitor of synaptic AβO binding will provide a useful experimental antagonist for future mechanistic studies of AβOs in Alzheimer’s model systems. Overall, results provide proof of concept for using SMPLs in high throughput screening for AβO binding antagonists, and illustrate in general how a SMPL Nanodisc system can facilitate drug

  11. High throughput techniques to reveal the molecular physiology and evolution of digestion in spiders.

    Science.gov (United States)

    Fuzita, Felipe J; Pinkse, Martijn W H; Patane, José S L; Verhaert, Peter D E M; Lopes, Adriana R

    2016-09-07

    Spiders are known for their predatory efficiency and for their high capacity for digesting relatively large prey. They do this by combining extracorporeal and intracellular digestion. Whereas many high-throughput ("-omics") techniques focus on biomolecules in spider venom, so far this approach has not been applied to investigate the protein composition of spider midgut diverticula (MD) and digestive fluid (DF). We here report on our investigations of both the MD and DF of the spider Nephilingis (Nephilengys) cruentata through the use of next-generation sequencing and shotgun proteomics. This shows that the DF is composed of a variety of hydrolases, including peptidases, carbohydrases, lipases and nucleases, as well as toxins and regulatory proteins. We detect 25 astacins in the DF. Phylogenetic analysis of the corresponding transcript(s) in Arachnida suggests that astacins have acquired an unprecedented role in extracorporeal digestion in Araneae, with different orthologs used by each family. The results of a comparative study of spiders in distinct physiological conditions allow us to propose some digestion mechanisms in this interesting animal taxon. Together, the high-throughput data demonstrate that the DF is a secretion originating from the MD. We identified enzymes involved in the extracellular and intracellular phases of digestion. In addition, the data reveal a large gene duplication event in the evolution of the digestive process in Araneae, mainly of astacin genes. We were also able to identify proteins expressed and translated in the digestive system that until now had been associated exclusively with venom glands.

  12. High-throughput peptide mass fingerprinting and protein macroarray analysis using chemical printing strategies

    International Nuclear Information System (INIS)

    Sloane, A.J.; Duff, J.L.; Hopwood, F.G.; Wilson, N.L.; Smith, P.E.; Hill, C.J.; Packer, N.H.; Williams, K.L.; Gooley, A.A.; Cole, R.A.; Cooley, P.W.; Wallace, D.B.

    2001-01-01

    We describe a 'chemical printer' that uses piezoelectric pulsing for rapid and accurate microdispensing of picolitre volumes of fluid for proteomic analysis of 'protein macroarrays'. Unlike positive transfer and pin transfer systems, our printer dispenses fluid in a non-contact process that ensures that the fluid source cannot be contaminated by substrate during a printing event. We demonstrate automated delivery of enzyme and matrix solutions for on-membrane protein digestion and subsequent peptide mass fingerprinting (pmf) analysis directly from the membrane surface using matrix-assisted laser-desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry (MS). This approach bypasses the more commonly used multi-step procedures, thereby permitting a more rapid procedure for protein identification. We also highlight the advantage of printing different chemistries onto an individual protein spot for multiple microscale analyses. This ability is particularly useful when detailed characterisation of rare and valuable samples is required. Using a combination of PNGase F and trypsin, we have mapped sites of N-glycosylation using on-membrane digestion strategies. We also demonstrate the ability to print multiple serum samples in a micro-ELISA format and rapidly screen a protein macroarray of human blood plasma for pathogen-derived antigens. We anticipate that the 'chemical printer' will be a major component of proteomic platforms for high-throughput protein identification and characterisation with widespread applications in biomedical and diagnostic discovery

  13. Proteomics technique opens new frontiers in mobilome research.

    Science.gov (United States)

    Davidson, Andrew D; Matthews, David A; Maringer, Kevin

    2017-01-01

    A large proportion of the genome of most eukaryotic organisms consists of highly repetitive mobile genetic elements. The sum of these elements is called the "mobilome," which in eukaryotes is made up mostly of transposons. Transposable elements contribute to disease, evolution, and normal physiology by mediating genetic rearrangement, and through the "domestication" of transposon proteins for cellular functions. Although 'omics studies of mobilome genomes and transcriptomes are common, technical challenges have hampered high-throughput global proteomics analyses of transposons. In a recent paper, we overcame these technical hurdles using a technique called "proteomics informed by transcriptomics" (PIT), and thus published the first unbiased global mobilome-derived proteome for any organism (using cell lines derived from the mosquito Aedes aegypti). In this commentary, we describe our methods in more detail, and summarise our major findings. We also use new genome sequencing data to show that, in many cases, the specific genomic element expressing a given protein can be identified using PIT. This proteomic technique therefore represents an important technological advance that will open new avenues of research into the role that proteins derived from transposons and other repetitive and sequence-diverse genetic elements, such as endogenous retroviruses, play in health and disease.
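The core idea behind PIT is to build the protein search database from the sample's own assembled transcripts rather than a reference proteome. A minimal sketch of that database-building step (three-frame translation into candidate ORFs) is shown below; this is an illustration of the general concept, not the authors' published pipeline, and the example transcript is made up.

```python
# Sketch of the database-building idea behind "proteomics informed
# by transcriptomics" (PIT): translate assembled transcripts in all
# reading frames and keep ORF-like stretches long enough to yield
# searchable peptides. Illustrative only, not the published pipeline.

BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def translate(seq, frame):
    """Translate one reading frame of a transcript ('*' marks stops)."""
    return "".join(CODON_TABLE.get(seq[p:p + 3].upper(), "X")
                   for p in range(frame, len(seq) - 2, 3))

def candidate_orfs(transcript, min_len=7):
    """Collect stop-free stretches from the three forward frames."""
    orfs = []
    for frame in range(3):
        for chunk in translate(transcript, frame).split("*"):
            if len(chunk) >= min_len:
                orfs.append(chunk)
    return orfs

# Hypothetical mini-transcript: encodes MAAAAAA followed by a stop.
print(candidate_orfs("ATGGCTGCTGCTGCTGCTGCTTAA"))
```

Searching MS/MS spectra against such transcript-derived sequences is what lets repetitive, poorly annotated elements like transposons be detected at the protein level.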

  14. Skeletal muscle proteomics: current approaches, technical challenges and emerging techniques

    LENUS (Irish Health Repository)

    Ohlendieck, Kay

    2011-02-01

    Background: Skeletal muscle fibres represent one of the most abundant cell types in mammals. Their highly specialised contractile and metabolic functions depend on a large number of membrane-associated proteins with very high molecular masses, proteins with extensive posttranslational modifications, and components that exist in highly complex supramolecular structures. This makes it extremely difficult to perform conventional biochemical studies of potential changes in protein clusters during physiological adaptations or pathological processes. Results: Skeletal muscle proteomics attempts to establish the global identification and biochemical characterisation of all members of the muscle-associated protein complement. A considerable number of proteomic studies have employed large-scale separation techniques, such as high-resolution two-dimensional gel electrophoresis or liquid chromatography, and combined them with mass spectrometry as the method of choice for high-throughput protein identification. Muscle proteomics has been applied to the comprehensive biochemical profiling of developing, maturing and aging muscle, as well as the analysis of contractile tissues undergoing physiological adaptations seen in disuse atrophy, physical exercise and chronic muscle transformation. Biomedical investigations into proteome-wide alterations in skeletal muscle tissues have also been used to establish novel biomarker signatures of neuromuscular disorders. Importantly, mass spectrometric studies have confirmed the enormous complexity of posttranslational modifications in skeletal muscle proteins. Conclusions: This review critically examines the scientific impact of modern muscle proteomics and discusses its successful application for a better understanding of muscle biology, but also outlines its technical limitations and emerging techniques to establish new biomarker candidates.

  15. Serum proteome profiling in canine idiopathic dilated cardiomyopathy using TMT-based quantitative proteomics approach.

    Science.gov (United States)

    Bilić, Petra; Guillemin, Nicolas; Kovačević, Alan; Beer Ljubić, Blanka; Jović, Ines; Galan, Asier; Eckersall, Peter David; Burchmore, Richard; Mrljak, Vladimir

    2018-05-15

    Idiopathic dilated cardiomyopathy (iDCM) is a primary myocardial disorder with an unknown aetiology, characterized by reduced contractility and ventricular dilation of the left or both ventricles. Naturally occurring canine iDCM was used herein to identify the serum proteomic signature of the disease compared to the healthy state, providing an insight into underlying mechanisms and revealing proteins with biomarker potential. To achieve this, we used a high-throughput, label-based quantitative LC-MS/MS proteomics approach and bioinformatics analysis of the in silico-inferred interactome protein network created from the initial list of differential proteins. To complement the proteomic analysis, serum biochemical parameters and levels of known biomarkers of cardiac function were measured. Several proteins with biomarker potential were identified, such as inter-alpha-trypsin inhibitor heavy chain H4, microfibril-associated glycoprotein 4 and apolipoprotein A-IV, which were validated using an independent method (Western blotting) and showed high specificity and sensitivity according to receiver operating characteristic curve analysis. Bioinformatics analysis revealed the involvement of different pathways in iDCM, such as complement cascade activation, lipoprotein particle dynamics, elastic fibre formation, GPCR signalling and the respiratory electron transport chain. Idiopathic dilated cardiomyopathy is a severe primary myocardial disease of unknown cause, affecting both humans and dogs. This study contributes to canine heart disease research by means of state-of-the-art proteomic and bioinformatic analyses, following a similar approach to human iDCM research. Importantly, we used serum as a non-invasive and easily accessible biological source of information and contributed to the scarce data on biofluid proteome research on this topic. Bioinformatics analysis revealed biological pathways modulated in canine iDCM with potential for further targeted research.
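The biomarker validation described above hinges on receiver operating characteristic (ROC) analysis. A minimal sketch of the rank-based ROC AUC computation is given below; the marker values are hypothetical stand-ins, not the study's data.

```python
# Sketch of ROC analysis for judging a biomarker candidate: the AUC
# equals the probability that a randomly chosen diseased sample
# scores above a randomly chosen healthy one (ties count half),
# i.e. a rescaled Mann-Whitney U statistic. Data below are made up.

def roc_auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical serum marker levels (arbitrary units):
idcm = [8.1, 7.4, 9.0, 6.0, 8.6]     # dogs with iDCM
healthy = [5.2, 6.1, 4.8, 6.5, 5.9]  # healthy controls

print(round(roc_auc(idcm, healthy), 2))
```

An AUC near 1.0 corresponds to the "high specificity and sensitivity" reported for the validated candidates; an uninformative marker gives about 0.5.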

  16. SeqAPASS to evaluate conservation of high-throughput screening targets across non-mammalian species

    Science.gov (United States)

    Cell-based high-throughput screening (HTS) and computational technologies are being applied as tools for toxicity testing in the 21st century. The U.S. Environmental Protection Agency (EPA) embraced these technologies and created the ToxCast Program in 2007, which has served as a...

  17. Recent 5-year Findings and Technological Advances in the Proteomic Study of HIV-associated Disorders.

    Science.gov (United States)

    Zhang, Lijun; Jia, Xiaofang; Jin, Jun-O; Lu, Hongzhou; Tan, Zhimi

    2017-04-01

    Human immunodeficiency virus-1 (HIV-1) mainly relies on host factors to complete its life cycle. Hence, it is very important to identify HIV-regulated host proteins. Proteomics is an excellent technique for this purpose because of its high throughput and sensitivity. In this review, we summarized current technological advances in proteomics, including isobaric tags for relative and absolute quantitation (iTRAQ) and stable isotope labeling by amino acids in cell culture (SILAC), as well as subcellular proteomics and the investigation of posttranslational modifications. Furthermore, we reviewed the applications of proteomics in the discovery of HIV-related diseases and HIV infection mechanisms. Proteins identified by proteomic studies might offer new avenues for the diagnosis and treatment of HIV infection and the related diseases. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  18. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    Science.gov (United States)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications and big data have created huge demand for data-processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-based GPU architectures. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Several case studies are used to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API-based programming.
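The data-level parallelism point can be illustrated outside of OpenMP or CUDA with a toy stand-in: the same element-wise workload written as a scalar interpreter loop versus a vectorized (SIMD-friendly) NumPy kernel. This is a sketch of the general effect, not a reproduction of the paper's case studies.

```python
import time
import numpy as np

# Toy illustration of data-level parallelism: an element-wise
# "saxpy" (a*x + y) as a scalar Python loop vs. a vectorized NumPy
# kernel that dispatches to compiled, SIMD-capable machine code.

def saxpy_scalar(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

def saxpy_vector(a, x, y):
    return a * x + y  # one vectorized operation over the whole array

n = 1_000_000
x = np.random.rand(n)
y = np.random.rand(n)

t0 = time.perf_counter()
r1 = saxpy_scalar(2.0, x, y)
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
r2 = saxpy_vector(2.0, x, y)
t_vec = time.perf_counter() - t0

print(f"loop: {n / t_loop / 1e6:.1f} Melem/s, "
      f"vector: {n / t_vec / 1e6:.1f} Melem/s")
```

The vectorized form is typically one to two orders of magnitude faster on commodity CPUs; GPUs push the same idea further by running thousands of such lanes concurrently.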

  19. High Throughput PBTK: Open-Source Data and Tools for ...

    Science.gov (United States)

    Presentation on High Throughput PBTK at the PBK Modelling in Risk Assessment meeting in Ispra, Italy.

  20. Manual evaluation of tissue microarrays in a high-throughput research project: The contribution of Indian surgical pathology to the Human Protein Atlas (HPA) project.

    Science.gov (United States)

    Navani, Sanjay

    2016-04-01

    The Human Protein Atlas (HPA) program (www.proteinatlas.org) is an international program that has been set up to allow for a systematic exploration of the human proteome using antibody-based proteomics. This is accomplished by combining high-throughput generation of affinity-purified (mono-specific) antibodies with protein profiling in a multitude of tissues/cell types assembled in tissue microarrays. Twenty-six surgical pathologists have, over a seven-and-a-half-year period, annotated and curated approximately sixteen million tissue images derived from immunostaining of normal and cancer tissues by approximately 23 000 antibodies. Web-based annotation software that allows for a basic and rapid evaluation of immunoreactivity in tissues has been utilized. Intensity, fraction of immunoreactive cells and subcellular localization were recorded for each given cell population. A text comment summarizing the characteristics of each antibody was added. The methods used and the challenges encountered in this exercise, the largest effort ever by a single group of surgical pathologists, are discussed. Manual annotation of digital images is an important tool that may be successfully utilized in high-throughput research projects. This is the first time an Indian private pathology laboratory has been associated with cutting-edge international research, providing a classic example of collaboration between developed and emerging nations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Clinical proteomics-driven precision medicine for targeted cancer therapy: current overview and future perspectives.

    Science.gov (United States)

    Zhou, Li; Wang, Kui; Li, Qifu; Nice, Edouard C; Zhang, Haiyuan; Huang, Canhua

    2016-01-01

    Cancer is a common disease that is a leading cause of death worldwide. Currently, early detection and novel therapeutic strategies are urgently needed for more effective management of cancer. Importantly, protein profiling using clinical proteomic strategies, with spectacular sensitivity and precision, offers excellent promise for the identification of potential biomarkers that would direct the development of targeted therapeutic anticancer drugs for precision medicine. In particular, clinical sample sources, including tumor tissues and body fluids (blood, feces, urine and saliva), have been widely investigated using modern high-throughput mass spectrometry-based proteomic approaches combined with bioinformatic analysis, to pursue the possibilities of precision medicine for targeted cancer therapy. Discussed in this review are the current advantages and limitations of clinical proteomics, the available strategies of clinical proteomics for the management of precision medicine, as well as the challenges and future perspectives of clinical proteomics-driven precision medicine for targeted cancer therapy.

  2. 3D material cytometry (3DMaC): a very high-replicate, high-throughput analytical method using microfabricated, shape-specific, cell-material niches.

    Science.gov (United States)

    Parratt, Kirsten; Jeong, Jenny; Qiu, Peng; Roy, Krishnendu

    2017-08-08

    Studying cell behavior within 3D material niches is key to understanding cell biology in health and diseases, and developing biomaterials for regenerative medicine applications. Current approaches to studying these cell-material niches have low throughput and can only analyze a few replicates per experiment, resulting in reduced measurement assurance and analytical power. Here, we report 3D material cytometry (3DMaC), a novel high-throughput method based on microfabricated, shape-specific 3D cell-material niches and imaging cytometry. 3DMaC achieves rapid and highly multiplexed analyses of very high replicate numbers ("n" of 10^4-10^6) of 3D biomaterial constructs. 3DMaC overcomes current limitations of low "n", low-throughput, and "noisy" assays, to provide rapid and simultaneous analyses of potentially hundreds of parameters in 3D biomaterial cultures. The method is demonstrated here for a set of 85 000 events containing twelve distinct cell-biomaterial micro-niches along with robust, customized computational methods for high-throughput analytics with potentially unprecedented statistical power.
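The statistical payoff of raising "n" from a handful of replicates to 10^4-10^6 can be made concrete: the standard error of a mean readout shrinks as 1/sqrt(n). The sketch below uses arbitrary illustrative numbers, not values from the 3DMaC study.

```python
import math

# Why very high replicate numbers matter: the standard error of a
# mean assay readout scales as sd / sqrt(n), so 10**4-10**6 niches
# resolve far smaller effects than ~10 replicates. The per-niche
# noise level below is an arbitrary assumption.

def standard_error(sd, n):
    return sd / math.sqrt(n)

sd = 0.30  # assumed per-niche measurement noise (arbitrary units)
for n in (10, 10**4, 10**6):
    print(f"n = {n:>7}: SE = {standard_error(sd, n):.5f}")
```

Going from n = 10 to n = 10^6 tightens the standard error by a factor of sqrt(10^5), roughly 316-fold.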

  3. Principles of proteome allocation are revealed using proteomic data and genome-scale models

    DEFF Research Database (Denmark)

    Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.

    2016-01-01

    Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now covers over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the "generalist" (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions ... of these sectors for the general stress response sigma factor sigma(S). Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally ...)
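The notion of a proteome-sector allocation constraint can be illustrated with a toy optimization: growth requires flux through two pathways whose enzymes compete for a fixed proteome mass fraction. The grid search below is a stand-in for the linear program solved in genome-scale ME models, and every number in it is hypothetical.

```python
import numpy as np

# Toy proteome-allocation model: two pathways share a fixed proteome
# budget; each pathway's flux is proportional to its enzyme mass
# fraction, and growth is limited by the slower pathway. A grid
# search replaces the ME model's linear program. Numbers are made up.

budget = 0.40               # proteome mass fraction available to metabolism
k_a, k_b = 5.0, 12.0        # assumed flux per unit enzyme fraction

phi = np.linspace(0.0, budget, 401)        # allocation to pathway A
growth = np.minimum(k_a * phi,             # pathway A limited by its enzymes
                    k_b * (budget - phi))  # pathway B gets the remainder

best = phi[np.argmax(growth)]
print(f"optimal pathway-A allocation: {best:.3f}, growth proxy: {growth.max():.2f}")
```

The optimum sits where the two pathway capacities balance (k_a * phi = k_b * (budget - phi)), which is exactly the kind of trade-off that sector constraints impose on the computed proteome.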

  4. GROMACS 4.5: A high-throughput and highly parallel open source molecular simulation toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Pronk, Sander [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Pall, Szilard [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Schulz, Roland [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Larsson, Per [Univ. of Virginia, Charlottesville, VA (United States); Bjelkmar, Par [Science for Life Lab., Stockholm (Sweden); Stockholm Univ., Stockholm (Sweden); Apostolov, Rossen [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Shirts, Michael R. [Univ. of Virginia, Charlottesville, VA (United States); Smith, Jeremy C. [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kasson, Peter M. [Univ. of Virginia, Charlottesville, VA (United States); van der Spoel, David [Science for Life Lab., Stockholm (Sweden); Uppsala Univ., Uppsala (Sweden); Hess, Berk [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Lindahl, Erik [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Stockholm Univ., Stockholm (Sweden)

    2013-02-13

    Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on a massive scale in clusters, web servers, distributed computing or cloud resources. Here we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations.

  5. Application of high-throughput DNA sequencing in phytopathology.

    Science.gov (United States)

    Studholme, David J; Glover, Rachel H; Boonham, Neil

    2011-01-01

    The new sequencing technologies are already making a big impact in academic research on medically important microbes and may soon revolutionize diagnostics, epidemiology, and infection control. Plant pathology also stands to gain from exploiting these opportunities. This manuscript reviews some applications of these high-throughput sequencing methods that are relevant to phytopathology, with emphasis on the associated computational and bioinformatics challenges and their solutions. Second-generation sequencing technologies have recently been exploited in genomics of both prokaryotic and eukaryotic plant pathogens. They are also proving to be useful in diagnostics, especially with respect to viruses. Copyright © 2011 by Annual Reviews. All rights reserved.

  6. The beauty of being (label)-free: sample preparation methods for SWATH-MS and next-generation targeted proteomics

    Science.gov (United States)

    Campbell, Kate; Deery, Michael J.; Lilley, Kathryn S.; Ralser, Markus

    2014-01-01

    The combination of qualitative analysis with label-free quantification has greatly facilitated the throughput and flexibility of novel proteomic techniques. However, such methods rely heavily on robust and reproducible sample preparation procedures. Here, we benchmark a selection of in-gel, on-filter, and in-solution digestion workflows for their application in label-free proteomics. Each procedure was associated with differing advantages and disadvantages. The in-gel methods interrogated were cost-effective, but were limited in throughput and digest efficiency. Filter-aided sample preparations facilitated reasonable processing times and yielded a balanced representation of membrane proteins, but led to high signal variation in quantification experiments. Two in-solution digest protocols, however, gave optimal performance for label-free proteomics. A protocol based on the detergent RapiGest led to the highest number of detected proteins at second-best signal stability, while a protocol based on acetonitrile digestion, RapidACN, scored best in throughput and signal stability but came second in protein identification. In addition, we compared label-free data-dependent (DDA) and data-independent (SWATH) acquisition on a TripleTOF 5600 instrument. While largely similar in protein detection, SWATH outperformed DDA in quantification, reducing signal variation and markedly increasing the number of precisely quantified peptides. PMID:24741437
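"Signal variation" and "precisely quantified peptides" in comparisons like the one above are commonly judged by the coefficient of variation (CV) of each peptide's intensity across replicate injections. The sketch below shows that calculation on made-up intensities; the CV cutoff and data are illustrative assumptions, not the paper's.

```python
import statistics

# Sketch of a replicate-precision assessment: compute each peptide's
# coefficient of variation (CV = sd / mean) across replicate
# injections and count peptides below a precision cutoff. Peptide
# labels and intensities are invented for illustration.

def cv(values):
    return statistics.stdev(values) / statistics.mean(values)

replicate_intensities = {           # peptide -> intensity per replicate
    "LVNELTEFAK": [1.00e6, 1.05e6, 0.98e6],
    "YLYEIAR":    [2.1e5, 3.4e5, 1.6e5],
    "AEFVEVTK":   [5.2e5, 5.0e5, 5.3e5],
}

CUTOFF = 0.20  # assumed "precise quantification" threshold (20% CV)
precise = sorted(p for p, vals in replicate_intensities.items()
                 if cv(vals) < CUTOFF)
print(precise)
```

Under this kind of metric, an acquisition mode that lowers per-peptide CVs (as SWATH did relative to DDA) directly increases the count of precisely quantified peptides.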

  7. MStern Blotting-High Throughput Polyvinylidene Fluoride (PVDF) Membrane-Based Proteomic Sample Preparation for 96-Well Plates.

    Science.gov (United States)

    Berger, Sebastian T; Ahmed, Saima; Muntel, Jan; Cuevas Polo, Nerea; Bachur, Richard; Kentsis, Alex; Steen, Judith; Steen, Hanno

    2015-10-01

    We describe a 96-well plate compatible membrane-based proteomic sample processing method, which enables the complete processing of 96 samples (or multiples thereof) within a single workday. This method uses a large-pore hydrophobic PVDF membrane that efficiently adsorbs proteins, resulting in fast liquid transfer through the membrane and significantly reduced sample processing times. Low liquid transfer speeds have prevented the useful 96-well plate implementation of FASP as a widely used membrane-based proteomic sample processing method. We validated our approach on whole-cell lysate, and on urine and cerebrospinal fluid as clinically relevant body fluids. Without compromising peptide and protein identification, our method uses a vacuum manifold and circumvents the need for digest desalting, making our processing method compatible with standard liquid handling robots. In summary, our new method maintains the strengths of FASP and simultaneously overcomes one of its major limitations without compromising protein identification and quantification. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  8. High throughput screening method for assessing heterogeneity of microorganisms

    NARCIS (Netherlands)

    Ingham, C.J.; Sprenkels, A.J.; van Hylckama Vlieg, J.E.T.; Bomer, Johan G.; de Vos, W.M.; van den Berg, Albert

    2006-01-01

    The invention relates to the field of microbiology. Provided is a method which is particularly powerful for High Throughput Screening (HTS) purposes. More specifically, a high-throughput method for determining heterogeneity or interactions of microorganisms is provided.

  9. A high-throughput readout architecture based on PCI-Express Gen3 and DirectGMA technology

    International Nuclear Information System (INIS)

    Rota, L.; Vogelgesang, M.; Perez, L.E. Ardila; Caselle, M.; Chilingaryan, S.; Dritschler, T.; Zilio, N.; Kopmann, A.; Balzer, M.; Weber, M.

    2016-01-01

    Modern physics experiments produce multi-GB/s data rates. Fast data links and high-performance computing stages are required for continuous data acquisition and processing. Because of their intrinsic parallelism and computational power, GPUs emerged as an ideal solution to process this data in high-performance computing applications. In this paper we present a high-throughput platform based on direct FPGA-GPU communication. The architecture consists of a Direct Memory Access (DMA) engine compatible with the Xilinx PCI-Express core, a Linux driver for register access, and high-level software to manage direct memory transfers using AMD's DirectGMA technology. Measurements with a Gen3 x8 link show a throughput of 6.4 GB/s for transfers to GPU memory and 6.6 GB/s to system memory. We also assess the possibility of using the architecture in low-latency systems: preliminary measurements show a round-trip latency as low as 1 μs for data transfers to system memory, while the additional latency introduced by OpenCL scheduling is the current limitation for GPU-based systems. Our implementation is suitable for real-time DAQ system applications ranging from photon science and medical imaging to High Energy Physics (HEP) systems.
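The reported 6.4 GB/s can be put in context with a back-of-envelope check against the raw PCI-Express Gen3 x8 link rate (8 GT/s per lane with 128b/130b line coding). The sketch deliberately ignores TLP/DLLP protocol overhead, so the resulting "link utilisation" is only a rough upper-bound comparison.

```python
# Back-of-envelope check of the reported DMA throughput against the
# raw PCIe Gen3 x8 link rate. Gen3 runs at 8 GT/s per lane with
# 128b/130b encoding; packet-level protocol overhead is ignored here.

lanes = 8
transfers_per_s = 8e9          # Gen3: 8 gigatransfers/s per lane
encoding = 128 / 130           # 128b/130b line-coding efficiency

raw_bytes_per_s = lanes * transfers_per_s * encoding / 8  # bits -> bytes
measured = 6.4e9               # GB/s to GPU memory, from the paper

print(f"raw link bandwidth: {raw_bytes_per_s / 1e9:.2f} GB/s")
print(f"approx. link utilisation: {measured / raw_bytes_per_s:.0%}")
```

This puts the measured GPU-memory throughput at roughly 80% of the raw line rate, which is in the range one expects once real transaction-layer overheads are paid.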

  10. Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data.

    Science.gov (United States)

    Ching, Travers; Zhu, Xun; Garmire, Lana X

    2018-04-01

    Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high-throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox proportional hazards regression (with LASSO, ridge, and minimax concave penalty), Random Survival Forests and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer nodes provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high-throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet.
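The core idea of such a model is to replace the linear predictor of Cox regression with a neural network whose output is the per-patient risk score, trained by minimizing the negative Cox partial log-likelihood. Below is a minimal numpy sketch of that objective (Breslow-style handling of risk sets); the network weights and patient data are random placeholders, not the published Cox-nnet model.

```python
import numpy as np

# Minimal sketch of the objective behind a neural-network Cox model:
# a hidden layer maps expression features to a scalar risk score
# theta_i, and training minimizes the negative Cox partial
# log-likelihood. All weights and data are random placeholders.

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))              # 6 patients, 4 expression features
time = np.array([5.0, 8.0, 3.0, 9.0, 4.0, 7.0])
event = np.array([1, 0, 1, 1, 0, 1])     # 1 = death observed, 0 = censored

W1 = rng.normal(scale=0.1, size=(4, 3))  # hidden layer weights (tanh units)
w2 = rng.normal(scale=0.1, size=3)       # linear output -> risk score

def risk_scores(X):
    return np.tanh(X @ W1) @ w2

def neg_partial_loglik(theta, time, event):
    """Each observed event i contributes theta_i minus the
    log-sum-exp of theta over its risk set (subjects still at risk)."""
    loss = 0.0
    for i in np.where(event == 1)[0]:
        at_risk = time >= time[i]
        loss -= theta[i] - np.log(np.exp(theta[at_risk]).sum())
    return loss

print(round(neg_partial_loglik(risk_scores(X), time, event), 3))
```

Gradient descent on this loss with respect to W1 and w2 (by autodiff in practice) is what fits the model; regularizers such as ridge penalties are added on top in the published work.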

  11. Proteome stability analysis of snap frozen, RNAlater preserved, and formalin-fixed paraffin-embedded human colon mucosal biopsies

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Kastaniegaard, Kenneth; Padurariu, Simona

    2016-01-01

    Large repositories of well-characterized RNAlater-preserved samples and formalin-fixed, paraffin-embedded samples have been generated worldwide. However, the impact of the preservation methods on the proteome remains poorly described. Therefore, we analyzed the impact on the proteome of preserving ... high-throughput gel-free quantitative proteomics. The MS proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PRIDE: PXD002029.

  12. Proteomic Profiling of Mitochondrial Enzymes during Skeletal Muscle Aging

    Directory of Open Access Journals (Sweden)

    Lisa Staunton

    2011-01-01

    Mitochondria are of central importance for energy generation in skeletal muscles. Expression changes or functional alterations in mitochondrial enzymes play a key role during myogenesis, fibre maturation, and various neuromuscular pathologies, as well as natural fibre aging. Mass spectrometry-based proteomics suggests itself as a convenient large-scale and high-throughput approach to catalogue the mitochondrial protein complement and determine global changes during health and disease. This paper gives a brief overview of the relatively new field of mitochondrial proteomics and discusses the findings from recent proteomic surveys of mitochondrial elements in aged skeletal muscles. Changes in the abundance, biochemical activity, subcellular localization, and/or posttranslational modifications in key mitochondrial enzymes might be useful as novel biomarkers of aging. In the long term, this may advance diagnostic procedures, improve the monitoring of disease progression, help in the testing of side effects due to new drug regimes, and enhance our molecular understanding of age-related muscle degeneration.

  13. Injection of fully-defined signal mixtures: a novel high-throughput tool to study neuronal encoding and computations.

    Directory of Open Access Journals (Sweden)

    Vladimir Ilin

    Understanding how neurons transform fluctuations of membrane potential, reflecting input activity, into spike responses, which communicate the ultimate results of single-neuron computation, is one of the central challenges for cellular and computational neuroscience. To study this transformation under controlled conditions, previous work has used a signal-immersed-in-noise paradigm, in which neurons are injected with a current consisting of fluctuating noise that mimics ongoing synaptic activity and a systematic signal whose transmission is studied. One limitation of this established paradigm is that it is designed to examine the encoding of only one signal under a specific, repeated condition. As a result, characterizing how encoding depends on neuronal properties, signal parameters, and the interaction of multiple inputs is cumbersome. Here we introduce a novel fully-defined signal mixture paradigm, which allows us to overcome these problems. In this paradigm, the current for injection is synthesized as a sum of artificial postsynaptic currents (PSCs) resulting from the activity of a large population of model presynaptic neurons. PSCs from any presynaptic neuron(s) can now be considered as "signal", while the sum of all other inputs is considered as "noise". This allows us to study the encoding of a large number of different signals in a single experiment, thus dramatically increasing the throughput of data acquisition. Using this novel paradigm, we characterize the detection of excitatory and inhibitory PSCs from neuronal spike responses over a wide range of amplitudes and firing rates. We show that for moderately sized neuronal populations the detectability of individual inputs is higher for excitatory than for inhibitory inputs during the 2-5 ms following PSC onset, but becomes comparable after 7-8 ms. This transient imbalance of sensitivity in favor of excitation may enhance propagation of balanced signals through neuronal networks.
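The synthesis step described above, summing artificial PSCs driven by model presynaptic spike trains, can be sketched in a few lines. The exponential PSC kernel, Poisson firing statistics, and every parameter below are illustrative assumptions, not the study's actual stimulus construction.

```python
import numpy as np

# Sketch of a fully-defined signal mixture: the injected current is
# the sum of artificial postsynaptic currents (PSCs) triggered by
# Poisson spike trains of model presynaptic neurons. Kernel shape
# and all parameters are illustrative assumptions.

rng = np.random.default_rng(1)
dt = 0.1e-3                     # 0.1 ms time step
t = np.arange(0.0, 1.0, dt)     # 1 s of injected current

n_pre = 200                                 # model presynaptic population
rates = rng.uniform(2.0, 10.0, n_pre)       # firing rates (Hz)
amps = rng.normal(20e-12, 5e-12, n_pre)     # PSC amplitudes (A)
tau = 3e-3                                  # PSC decay time constant (s)

kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)  # exponential PSC shape

current = np.zeros_like(t)
per_neuron = []
for rate, amp in zip(rates, amps):
    spikes = rng.random(t.size) < rate * dt          # Poisson spike train
    psc = amp * np.convolve(spikes, kernel)[: t.size]
    per_neuron.append(psc)
    current += psc

# Any one neuron's PSC train can be treated as "signal",
# the summed remainder as "noise":
signal = per_neuron[0]
noise = current - signal
```

Because every PSC in the mixture is known exactly, spike-triggered analysis can then ask, for each presynaptic neuron separately, how well its inputs are detectable from the postsynaptic spike train.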

  14. A Customizable Flow Injection System for Automated, High Throughput, and Time Sensitive Ion Mobility Spectrometry and Mass Spectrometry Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Orton, Daniel J. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Tfaily, Malak M. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Moore, Ronald J. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; LaMarche, Brian L. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Zheng, Xueyun [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Fillmore, Thomas L. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Chu, Rosalie K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Weitz, Karl K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Monroe, Matthew E. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Kelly, Ryan T. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Smith, Richard D. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Baker, Erin S. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States

    2017-12-13

    To better understand disease conditions and environmental perturbations, multi-omic studies (i.e. proteomic, lipidomic, metabolomic, etc. analyses) are vastly increasing in popularity. In a multi-omic study, a single sample is typically extracted in multiple ways and numerous analyses are performed using different instruments. Thus, one sample becomes many analyses, making high throughput and reproducible evaluations a necessity. One way to address the numerous samples and varying instrumental conditions is to utilize a flow injection analysis (FIA) system for rapid sample injection. While some FIA systems have been created to address these challenges, many have limitations such as high consumable costs, low pressure capabilities, limited pressure monitoring and fixed flow rates. To address these limitations, we created an automated, customizable FIA system capable of operating at diverse flow rates (~50 nL/min to 500 µL/min) to accommodate low- and high-flow instrument sources. This system can also operate at varying analytical throughputs from 24 to 1200 samples per day to enable different MS analysis approaches. Applications ranging from native protein analyses to molecular library construction were performed using the FIA system. The results from these studies showed a highly robust platform, providing consistent performance over many days without carryover as long as washing buffers specific to each molecular analysis were utilized.
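The quoted throughput range maps directly to an injection-to-injection cycle time. The conversion below assumes idealized back-to-back runs with no dead time, which real autosampler overheads would lengthen.

```python
# Quick conversion behind the quoted throughput range: samples per
# day maps to an injection-to-injection cycle time, assuming
# back-to-back runs with no dead time (an idealisation).

def cycle_minutes(samples_per_day):
    return 24 * 60 / samples_per_day

for spd in (24, 1200):
    print(f"{spd:4d} samples/day -> {cycle_minutes(spd):g} min per sample")
```

So the system's 24-1200 samples/day range corresponds to cycle times from one hour down to just over a minute per injection.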

  15. High Throughput Determinations of Critical Dosing Parameters (IVIVE workshop)

    Science.gov (United States)

    High throughput toxicokinetics (HTTK) is an approach that allows for rapid estimations of TK for hundreds of environmental chemicals. HTTK-based reverse dosimetry (i.e., reverse toxicokinetics or RTK) is used to convert high throughput in vitro toxicity screening (HTS) da...

  16. MASCP Gator: An overview of the Arabidopsis proteomic aggregation portal

    Directory of Open Access Journals (Sweden)

    Gregory W Mann

    2013-10-01

    A key challenge in the area of bioinformatics in the coming decades is the ability to manage the wealth of information being generated by the variety of high-throughput methodologies currently undertaken in laboratories across the world. While these approaches have made available large volumes of data to the research community, less attention has been given to the problem of how to present the data intuitively to enable greater biological insights. Recently, an attempt was made to tackle this problem in the area of Arabidopsis proteomics. The model plant has been the target of countless proteomics surveys, producing an exhaustive array of data and online repositories. The MASCP Gator is an aggregation portal for proteomic data currently being produced by the community and unites a large collection of specialized resources into a single portal (http://gator.masc-proteomics.org/). Here we describe the latest additions, upgrades and features of this resource, further expanding its role into protein modifications and genome sequence variations.

  17. A high-quality catalog of the Drosophila melanogaster proteome

    DEFF Research Database (Denmark)

    Brunner, Erich; Ahrens, Christian H.; Mohanty, Sonaly

    2007-01-01

    % of the predicted Drosophila melanogaster proteome by detecting 9,124 proteins from 498,000 redundant and 72,281 distinct peptide identifications. This unprecedented high proteome coverage for a complex eukaryote was achieved by combining sample diversity, multidimensional biochemical fractionation and analysis...

  18. Predicting co-complexed protein pairs using genomic and proteomic data integration

    Directory of Open Access Journals (Sweden)

    King Oliver D

    2004-04-01

    Abstract Background Identifying all protein-protein interactions in an organism is a major objective of proteomics. A related goal is to know which protein pairs are present in the same protein complex. High-throughput methods such as yeast two-hybrid (Y2H) and affinity purification coupled with mass spectrometry (APMS) have been used to detect interacting proteins on a genomic scale. However, both Y2H and APMS methods have substantial false-positive rates. Aside from high-throughput interaction screens, other gene- or protein-pair characteristics may also be informative of physical interaction. Therefore it is desirable to integrate multiple datasets and utilize their different predictive values for more accurate prediction of co-complexed relationships. Results Using a supervised machine learning approach (a probabilistic decision tree), we integrated high-throughput protein interaction datasets and other gene- and protein-pair characteristics to predict co-complexed pairs (CCPs) of proteins. Our predictions proved more sensitive and specific than predictions based on Y2H or APMS methods alone or in combination. Among the top predictions not annotated as CCPs in our reference set (obtained from the MIPS complex catalogue), a significant fraction was found to physically interact according to a separate database (YPD, Yeast Proteome Database), and the remaining predictions may potentially represent unknown CCPs. Conclusions We demonstrated that the probabilistic decision tree approach can be successfully used to predict co-complexed protein (CCP) pairs from other characteristics. Our top-scoring CCP predictions provide testable hypotheses for experimental validation.
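
As a minimal sketch of the integration idea, one can estimate P(CCP | evidence) from labeled training counts over binary evidence features (a Y2H hit, an APMS hit, a co-expression flag). The features, labels, and smoothing below are invented for illustration; this is a much simpler stand-in than the paper's probabilistic decision tree learner:

```python
# Toy probabilistic integration of noisy interaction evidence:
# estimate P(co-complexed | evidence) from smoothed training counts.
from collections import defaultdict

def train(pairs):
    """pairs: iterable of ((y2h, apms, coexpr), is_ccp) tuples."""
    counts = defaultdict(lambda: [0, 0])  # evidence -> [negatives, positives]
    for evidence, is_ccp in pairs:
        counts[evidence][int(is_ccp)] += 1
    return counts

def p_ccp(counts, evidence):
    """Laplace-smoothed estimate; unseen evidence falls back to 0.5."""
    neg, pos = counts[evidence]
    return (pos + 1) / (pos + neg + 2)

training = [  # made-up labeled protein pairs
    ((1, 1, 1), True), ((1, 1, 1), True), ((1, 1, 0), True),
    ((1, 0, 0), False), ((0, 1, 0), False), ((0, 0, 0), False),
    ((0, 0, 0), False), ((1, 0, 1), True),
]
model = train(training)
print(p_ccp(model, (1, 1, 1)))  # both screens agree + co-expression -> high
print(p_ccp(model, (0, 0, 0)))  # no supporting evidence -> low
```

The key point mirrored from the abstract: combined evidence is scored jointly, so agreement between two noisy screens yields a higher posterior than either screen alone.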

  19. Applications of ambient mass spectrometry in high-throughput screening.

    Science.gov (United States)

    Li, Li-Ping; Feng, Bao-Sheng; Yang, Jian-Wang; Chang, Cui-Lan; Bai, Yu; Liu, Hu-Wei

    2013-06-07

    The development of rapid screening and identification techniques is of great importance for drug discovery, doping control, forensic identification, food safety and quality control. Ambient mass spectrometry (AMS) allows rapid and direct analysis of various samples in open air with little sample preparation. Recently, its applications in high-throughput screening have progressed rapidly. During the past decade, various ambient ionization techniques have been developed and applied in high-throughput screening. This review discusses typical applications of AMS, including DESI (desorption electrospray ionization), DART (direct analysis in real time), EESI (extractive electrospray ionization), etc., in high-throughput screening (HTS).

  20. Quantitative, high-resolution proteomics for data-driven systems biology

    DEFF Research Database (Denmark)

    Cox, J.; Mann, M.

    2011-01-01

    Systems biology requires comprehensive data at all molecular levels. Mass spectrometry (MS)-based proteomics has emerged as a powerful and universal method for the global measurement of proteins. In the most widespread format, it uses liquid chromatography (LC) coupled to high-resolution tandem...... primary structure of proteins including posttranslational modifications, to localize proteins to organelles, and to determine protein interactions. Here, we describe the principles of analysis and the areas of biology where proteomics can make unique contributions. The large-scale nature of proteomics...... data and its high accuracy pose special opportunities as well as challenges in systems biology that have been largely untapped so far....

  1. Urinary Proteomics Pilot Study for Biomarker Discovery and Diagnosis in Heart Failure with Reduced Ejection Fraction

    DEFF Research Database (Denmark)

    Rossing, Kasper; Bosselmann, Helle Skovmand; Gustafsson, Finn

    2016-01-01

    Background Biomarker discovery and new insights into the pathophysiology of heart failure with reduced ejection fraction (HFrEF) may emerge from recent advances in high-throughput urinary proteomics. This could lead to improved diagnosis, risk stratification and management of HFrEF. Methods and Results Urine samples were analyzed by on-line capillary electrophoresis coupled to electrospray ionization micro time-of-flight mass spectrometry (CE-MS) to generate individual urinary proteome profiles. In an initial biomarker discovery cohort, analysis of urinary proteome profiles from 33 HFr... ...6%) in individuals with diastolic left ventricular dysfunction (N = 176). The HFrEF-related peptide biomarkers mainly included fragments of fibrillar type I and III collagen but also, e.g., of fibrinogen beta and alpha-1-antitrypsin. Conclusion CE-MS based urine proteome analysis served as a sensitive tool...

  2. GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit.

    Science.gov (United States)

    Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R; Smith, Jeremy C; Kasson, Peter M; van der Spoel, David; Hess, Berk; Lindahl, Erik

    2013-04-01

    Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on a massive scale in clusters, web servers, distributed computing or cloud resources. Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. GROMACS is open source and free software available from http://www.gromacs.org. Supplementary data are available at Bioinformatics online.

  3. High-throughput screening (HTS) and modeling of the retinoid ...

    Science.gov (United States)

    Presentation at the Retinoids Review 2nd workshop in Brussels, Belgium on the application of high-throughput screening and modeling to the retinoid system.

  4. A Fully Automated High-Throughput Zebrafish Behavioral Ototoxicity Assay.

    Science.gov (United States)

    Todd, Douglas W; Philip, Rohit C; Niihori, Maki; Ringle, Ryan A; Coyle, Kelsey R; Zehri, Sobia F; Zabala, Leanne; Mudery, Jordan A; Francis, Ross H; Rodriguez, Jeffrey J; Jacob, Abraham

    2017-08-01

    Zebrafish animal models lend themselves to behavioral assays that can facilitate rapid screening of ototoxic, otoprotective, and otoregenerative drugs. Structurally similar to human inner ear hair cells, the mechanosensory hair cells on their lateral line allow the zebrafish to sense water flow and orient head-to-current in a behavior called rheotaxis. This rheotaxis behavior deteriorates in a dose-dependent manner with increased exposure to the ototoxin cisplatin, thereby establishing itself as an excellent biomarker for anatomic damage to lateral line hair cells. Building on work by our group and others, we have built a new, fully automated high-throughput behavioral assay system that uses automated image analysis techniques to quantify rheotaxis behavior. This novel system consists of a custom-designed swimming apparatus and an imaging system built from network-controlled Raspberry Pi microcomputers capturing infrared video. Automated analysis techniques detect individual zebrafish, compute their orientation, and quantify the rheotaxis behavior of a zebrafish test population, producing a powerful, high-throughput behavioral assay. Using our fully automated behavioral assay to test a standardized ototoxic dose of cisplatin against varying doses of compounds that protect or regenerate hair cells may facilitate rapid translation of candidate drugs into preclinical mammalian models of hearing loss.
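
The quantification step described above (detect fish, compute orientation, score the population) can be sketched in a few lines. The 45-degree tolerance window and the example headings are illustrative assumptions, not values from the paper:

```python
# Sketch of rheotaxis scoring: given detected body orientations in degrees
# (0 = facing directly into the current), compute the fraction of the
# population oriented head-to-current within a tolerance window.

def rheotaxis_index(orientations_deg, tolerance_deg=45.0):
    """Fraction of fish whose heading is within tolerance of upstream."""
    def angular_dist(a):
        # Wrap any angle to an absolute deviation in [0, 180] degrees.
        return abs((a + 180.0) % 360.0 - 180.0)
    oriented = sum(1 for a in orientations_deg
                   if angular_dist(a) <= tolerance_deg)
    return oriented / len(orientations_deg)

healthy = [5, -12, 30, 350, 8, -40]     # mostly facing upstream
impaired = [5, 90, 170, 200, -120, 60]  # disoriented, e.g. after ototoxin
print(rheotaxis_index(healthy))   # all within 45 degrees -> 1.0
print(rheotaxis_index(impaired))
```

Tracking this index across cisplatin doses would reproduce the dose-dependent deterioration the abstract describes.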

  5. Fusarium graminearum and Its Interactions with Cereal Heads: Studies in the Proteomics Era

    Science.gov (United States)

    Yang, Fen; Jacobsen, Susanne; Jørgensen, Hans J. L.; Collinge, David B.; Svensson, Birte; Finnie, Christine

    2013-01-01

    The ascomycete fungal pathogen Fusarium graminearum (teleomorph stage: Gibberella zeae) is the causal agent of Fusarium head blight in wheat and barley. This disease leads to significant losses of crop yield, and especially quality through the contamination by diverse fungal mycotoxins, which constitute a significant threat to the health of humans and animals. In recent years, high-throughput proteomics, aiming at identifying a broad spectrum of proteins with a potential role in the pathogenicity and host resistance, has become a very useful tool in plant-fungus interaction research. In this review, we describe the progress in proteomics applications toward a better understanding of F. graminearum pathogenesis, virulence, and host defense mechanisms. The contribution of proteomics to the development of crop protection strategies against this pathogen is also discussed briefly. PMID:23450732

  6. Fusarium graminearum and its interactions with cereal heads: studies in the proteomics era

    Directory of Open Access Journals (Sweden)

    Fen eYang

    2013-02-01

    The ascomycete fungal pathogen Fusarium graminearum is the causal agent of Fusarium head blight (FHB) in wheat and barley. This disease leads to significant losses of crop yield, and especially quality, through contamination by diverse fungal mycotoxins, which constitute a significant threat to the health of humans and animals. In recent years, high-throughput proteomics, aiming at identifying a broad spectrum of proteins with a potential role in pathogenicity and host resistance, has become a very useful tool in plant-fungus interaction research. In this review, we describe the progress in proteomics applications towards a better understanding of Fusarium graminearum pathogenesis, virulence and host defence mechanisms. The contribution of proteomics to the development of crop protection strategies against this pathogen is also discussed briefly.

  7. Detecting differential protein expression in large-scale population proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle challenges unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values contain information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model shows robust performance on both simulated data and proteomics data from a large clinical study. Because variation in patient sample quality and drift in instrument performance are unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.
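
The abundance-dependent missingness described above (low-abundance peptides drop out more often) is the reason naive treatments are biased. As a point of contrast with the SALPS statistical model, here is the common simple alternative, left-censored imputation, shown purely for illustration:

```python
# Left-censored imputation: missing peptide intensities are assumed to be
# below the detection limit, so each None is replaced with a fraction of
# the smallest observed intensity in that experiment. A crude but common
# baseline; not the SALPS model from the abstract.

def impute_left_censored(intensities, fraction=0.5):
    """Replace None (missing) with fraction * min(observed intensity)."""
    observed = [x for x in intensities if x is not None]
    floor = fraction * min(observed)
    return [floor if x is None else x for x in intensities]

run = [1.8e6, None, 3.2e5, 7.4e4, None, 9.1e5]
print(impute_left_censored(run))  # each None -> 0.5 * 7.4e4 = 3.7e4
```

Treating every missing value as a single low constant ignores the uncertainty that a likelihood-based model can represent, which is the gap SALPS-style methods aim to close.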

  8. The Scottish Structural Proteomics Facility: targets, methods and outputs

    DEFF Research Database (Denmark)

    Oke, Muse; Carter, Lester G; Johnson, Kenneth A

    2010-01-01

    The Scottish Structural Proteomics Facility was funded to develop a laboratory scale approach to high throughput structure determination. The effort was successful in that over 40 structures were determined. These structures and the methods harnessed to obtain them are reported here. This report...... reflects on the value of automation but also on the continued requirement for a high degree of scientific and technical expertise. The efficiency of the process poses challenges to the current paradigm of structural analysis and publication. In the 5 year period we published ten peer-reviewed papers...

  9. Mass Spectrometry for Translational Proteomics: Progress and Clinical Implications

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Erin Shammel; Liu, Tao; Petyuk, Vladislav A.; Burnum-Johnson, Kristin E.; Ibrahim, Yehia M.; Anderson, Gordon A.; Smith, Richard D.

    2012-08-31

    Mass spectrometry (MS)-based proteomics measurements have become increasingly utilized in a wide range of biological and biomedical applications, and have significantly enhanced the understanding of the complex and dynamic nature of the proteome and its connections to biology and diseases. While some MS techniques, such as those for targeted analysis, are increasingly applied with great success, others, such as global quantitative analysis (e.g., for biomarker discovery), are more challenging and continue to be developed and refined to provide the desired throughput, sensitivity and/or specificity. New MS capabilities and proteomics-based pipelines and strategies also continue to advance clinical proteomics applications such as protein biomarker discovery and validation. Herein, we provide a brief review summarizing the current state of MS-based proteomics with respect to its advantages and present limitations, while highlighting its potential in future clinical applications.

  10. Plant Abiotic Stress Proteomics: The Major Factors Determining Alterations in Cellular Proteome

    Science.gov (United States)

    Kosová, Klára; Vítámvás, Pavel; Urban, Milan O.; Prášil, Ilja T.; Renaut, Jenny

    2018-01-01

    HIGHLIGHTS: Major environmental and genetic factors determining stress-related protein abundance are discussed. Major aspects of protein biological function, including protein isoforms and PTMs, cellular localization and protein interactions, are discussed. Functional diversity of protein isoforms and PTMs is discussed. Abiotic stresses reveal profound impacts on plant proteomes, including alterations in protein relative abundance, cellular localization, post-transcriptional and post-translational modifications (PTMs), protein interactions with other protein partners, and, finally, protein biological functions. The main aim of the present review is to discuss the major factors determining stress-related protein accumulation and their final biological functions. The dynamics of the stress response, including stress acclimation to altered ambient conditions and recovery after the stress treatment, are discussed. The results of proteomic studies aimed at a comparison of stress responses in plant genotypes differing in stress adaptability reveal constitutively enhanced levels of several stress-related proteins (protective proteins, chaperones, ROS scavenging- and detoxification-related enzymes) in the tolerant genotypes with respect to the susceptible ones. Tolerant genotypes can efficiently adjust energy metabolism to enhanced needs during stress acclimation. Stress tolerance vs. stress susceptibility are relative terms which can reflect different stress-coping strategies depending on the given stress treatment. The role of differential protein isoforms and PTMs with respect to their biological functions in different physiological constraints (cellular compartments and interacting partners) is discussed. The importance of protein functional studies following high-throughput proteome analyses is presented in a broader context of plant biology. In summary, the manuscript tries to provide an overview of the major factors which have to be considered when interpreting data from proteomic...

  11. Maize-Pathogen Interactions: An Ongoing Combat from a Proteomics Perspective

    Directory of Open Access Journals (Sweden)

    Olga Pechanova

    2015-11-01

    Maize (Zea mays L.) is a host to numerous pathogenic species that cause serious diseases of its ear and foliage, negatively affecting the yield and quality of the maize crop. A considerable amount of research has been carried out to elucidate mechanisms of maize-pathogen interactions, with a major goal of identifying defense-associated proteins. In this review, we summarize interactions of maize with its agriculturally important pathogens that were assessed at the proteome level. Employing differential analyses, such as the comparison of pathogen-resistant and susceptible maize varieties, as well as changes in maize proteomes after pathogen challenge, numerous proteins were identified as possible candidates in maize resistance. We describe findings of various research groups that used mainly mass spectrometry-based, high-throughput proteomic tools to investigate maize interactions with the fungal pathogens Aspergillus flavus, Fusarium spp., and Curvularia lunata, and the viral agents Rice Black-streaked Dwarf Virus and Sugarcane Mosaic Virus.

  12. Maize-Pathogen Interactions: An Ongoing Combat from a Proteomics Perspective.

    Science.gov (United States)

    Pechanova, Olga; Pechan, Tibor

    2015-11-30

    Maize (Zea mays L.) is a host to numerous pathogenic species that cause serious diseases of its ear and foliage, negatively affecting the yield and quality of the maize crop. A considerable amount of research has been carried out to elucidate mechanisms of maize-pathogen interactions, with a major goal of identifying defense-associated proteins. In this review, we summarize interactions of maize with its agriculturally important pathogens that were assessed at the proteome level. Employing differential analyses, such as the comparison of pathogen-resistant and susceptible maize varieties, as well as changes in maize proteomes after pathogen challenge, numerous proteins were identified as possible candidates in maize resistance. We describe findings of various research groups that used mainly mass spectrometry-based, high-throughput proteomic tools to investigate maize interactions with the fungal pathogens Aspergillus flavus, Fusarium spp., and Curvularia lunata, and the viral agents Rice Black-streaked Dwarf Virus and Sugarcane Mosaic Virus.

  13. Large-scale DNA Barcode Library Generation for Biomolecule Identification in High-throughput Screens.

    Science.gov (United States)

    Lyons, Eli; Sheridan, Paul; Tremmel, Georg; Miyano, Satoru; Sugano, Sumio

    2017-10-24

    High-throughput screens allow for the identification of specific biomolecules with characteristics of interest. In barcoded screens, DNA barcodes are linked to target biomolecules in a manner allowing for the target molecules making up a library to be identified by sequencing the DNA barcodes using Next Generation Sequencing. To be useful in experimental settings, the DNA barcodes in a library must satisfy certain constraints related to GC content, homopolymer length, Hamming distance, and blacklisted subsequences. Here we report a novel framework to quickly generate large-scale libraries of DNA barcodes for use in high-throughput screens. We show that our framework dramatically reduces the computation time required to generate large-scale DNA barcode libraries, compared with a naïve approach to DNA barcode library generation. As a proof of concept, we demonstrate that our framework is able to generate a library consisting of one million DNA barcodes for use in a fragment antibody phage display screening experiment. We also report generating a general-purpose one billion DNA barcode library, the largest such library yet reported in the literature. Our results demonstrate the value of our novel large-scale DNA barcode library generation framework for use in high-throughput screening applications.
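
The four constraint classes named in the abstract (GC content, homopolymer length, Hamming distance, blacklisted subsequences) are easy to make concrete. The sketch below uses rejection sampling with illustrative thresholds; this is the naive approach the paper improves upon, not their framework, which must scale to billions of barcodes:

```python
# Naive rejection-sampling generation of a small DNA barcode library
# under GC-content, homopolymer, blacklist, and pairwise Hamming-distance
# constraints. Thresholds and blacklist entries are illustrative only.
import itertools
import random

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def max_homopolymer(seq):
    # Length of the longest run of identical consecutive bases.
    return max(len(list(run)) for _, run in itertools.groupby(seq))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def valid(seq, library, blacklist, min_dist=3):
    return (0.4 <= gc_content(seq) <= 0.6
            and max_homopolymer(seq) <= 2
            and not any(b in seq for b in blacklist)
            and all(hamming(seq, other) >= min_dist for other in library))

def build_library(n, length=8, blacklist=("GTAC", "TCGA"), seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    library = []
    while len(library) < n:
        seq = "".join(rng.choice("ACGT") for _ in range(length))
        if valid(seq, library, blacklist):
            library.append(seq)
    return library

print(build_library(5))
```

Each accepted barcode must be rechecked against the whole library, which is exactly the quadratic cost that makes this naive scheme impractical at the million-to-billion scale the paper targets.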

  14. High-throughput scoring of seed germination

    NARCIS (Netherlands)

    Ligterink, Wilco; Hilhorst, Henk W.M.

    2017-01-01

    High-throughput analysis of seed germination for phenotyping large genetic populations or mutant collections is very labor intensive and would highly benefit from an automated setup. Although very often used, the total germination percentage after a nominated period of time is not very

  15. Recent advances on multidimensional liquid chromatography–mass spectrometry for proteomics: From qualitative to quantitative analysis—A review

    International Nuclear Information System (INIS)

    Wu Qi; Yuan Huiming; Zhang Lihua; Zhang Yukui

    2012-01-01

    Highlights: We discuss the progress of MDLC-MS systems in qualitative and quantitative proteomics. Both “top-down” and “bottom-up” strategies are discussed in detail. On-line integration of stable isotope labeling processes is highlighted. This review gives insights into further directions for higher-level integration. Abstract: With the acceleration of proteome research, increasing attention has been paid to multidimensional liquid chromatography-mass spectrometry (MDLC-MS) due to its high peak capacity and separation efficiency. Recently, much effort has been devoted to improving MDLC-based strategies, including “top-down” and “bottom-up”, to enable highly sensitive qualitative and quantitative analysis of proteins, as well as to accelerate the whole analytical procedure. Integrated platforms combining sample pretreatment, multidimensional separations and identification were also developed to achieve high-throughput and sensitive detection of proteomes, facilitating highly accurate and reproducible quantification. This review summarizes the recent advances of such techniques and their applications in qualitative and quantitative analysis of proteomes.

  16. Transcriptome and quantitative proteome analysis reveals molecular processes associated with larval metamorphosis in the polychaete pseudopolydora vexillosa

    KAUST Repository

    Chandramouli, Kondethimmanahalli

    2013-03-01

    Larval growth of the polychaete worm Pseudopolydora vexillosa involves the formation of segment-specific structures. When larvae attain competency to settle, they discard swimming chaetae and secrete mucus. The larvae build tubes around themselves and metamorphose into benthic juveniles. Understanding the molecular processes that regulate this complex and unique transition remains a major challenge because of the limited molecular information available. To improve this situation, we conducted high-throughput RNA sequencing and quantitative proteome analysis of the larval stages of P. vexillosa. Based on gene ontology (GO) analysis, transcripts related to cellular and metabolic processes, binding, and catalytic activities were highly represented during the larval-adult transition. Mitogen-activated protein kinase (MAPK), calcium-signaling, Wnt/β-catenin, and Notch signaling metabolic pathways were enriched in the transcriptome data. Quantitative proteomics identified 107 differentially expressed proteins in three distinct larval stages. Fourteen and 53 proteins exhibited specific differential expression during competency and metamorphosis, respectively. Dramatic up-regulation of proteins involved in signaling, metabolism, and cytoskeleton functions was found during the larval-juvenile transition. Several proteins involved in cell signaling, cytoskeleton and metabolism were up-regulated, whereas proteins related to transcription and oxidative phosphorylation were down-regulated during competency. The integration of high-throughput RNA sequencing and quantitative proteomics allowed a global-scale analysis of the molecular processes associated with larval transcripts/proteins in the metamorphosis of polychaete worms. Further, these transcriptomic and proteomic insights provide a new direction for understanding the fundamental mechanisms that regulate larval metamorphosis in polychaetes. © 2013 American Chemical Society.
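
The GO over-representation analysis mentioned above typically rests on a hypergeometric test: given N annotated genes of which K carry a term, how surprising is it that a list of n differentially expressed genes contains k or more with that term? A stdlib-only sketch; the 107 figure comes from the abstract, while the annotation counts are invented for illustration:

```python
# Hypergeometric enrichment p-value, P(X >= k) for
# X ~ Hypergeometric(N population, K marked, n drawn).
from math import comb

def hypergeom_pval(N, K, n, k):
    """Upper-tail hypergeometric probability (GO over-representation)."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# 10,000 annotated genes, 200 carrying a hypothetical GO term;
# 107 differentially expressed proteins, 12 of them with the term.
print(f"p = {hypergeom_pval(10_000, 200, 107, 12):.2e}")
```

In practice this p-value is computed per GO term and then corrected for multiple testing across all terms tested.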

  17. Transcriptome and quantitative proteome analysis reveals molecular processes associated with larval metamorphosis in the polychaete pseudopolydora vexillosa

    KAUST Repository

    Chandramouli, Kondethimmanahalli; Sun, Jin; Mok, FloraSy; Liu, Lingli; Qiu, Jianwen; Ravasi, Timothy; Qian, Peiyuan

    2013-01-01

    Larval growth of the polychaete worm Pseudopolydora vexillosa involves the formation of segment-specific structures. When larvae attain competency to settle, they discard swimming chaetae and secrete mucus. The larvae build tubes around themselves and metamorphose into benthic juveniles. Understanding the molecular processes that regulate this complex and unique transition remains a major challenge because of the limited molecular information available. To improve this situation, we conducted high-throughput RNA sequencing and quantitative proteome analysis of the larval stages of P. vexillosa. Based on gene ontology (GO) analysis, transcripts related to cellular and metabolic processes, binding, and catalytic activities were highly represented during the larval-adult transition. Mitogen-activated protein kinase (MAPK), calcium-signaling, Wnt/β-catenin, and Notch signaling metabolic pathways were enriched in the transcriptome data. Quantitative proteomics identified 107 differentially expressed proteins in three distinct larval stages. Fourteen and 53 proteins exhibited specific differential expression during competency and metamorphosis, respectively. Dramatic up-regulation of proteins involved in signaling, metabolism, and cytoskeleton functions was found during the larval-juvenile transition. Several proteins involved in cell signaling, cytoskeleton and metabolism were up-regulated, whereas proteins related to transcription and oxidative phosphorylation were down-regulated during competency. The integration of high-throughput RNA sequencing and quantitative proteomics allowed a global-scale analysis of the molecular processes associated with larval transcripts/proteins in the metamorphosis of polychaete worms. Further, these transcriptomic and proteomic insights provide a new direction for understanding the fundamental mechanisms that regulate larval metamorphosis in polychaetes. © 2013 American Chemical Society.

  18. Optimization and high-throughput screening of antimicrobial peptides.

    Science.gov (United States)

    Blondelle, Sylvie E; Lohner, Karl

    2010-01-01

    While a well-established process for lead-compound discovery in for-profit companies, high-throughput screening is becoming more popular in basic and applied research settings in academia. The development of combinatorial libraries, combined with easy and less expensive access to new technologies, has greatly contributed to the implementation of high-throughput screening in academic laboratories. While such techniques were earlier applied to simple assays involving single targets or based on binding affinity, they have now been extended to more complex systems such as whole-cell-based assays. In particular, the urgent need for new antimicrobial compounds that would overcome the rapid rise of drug-resistant microorganisms, where multiple-target assays or cell-based assays are often required, has forced scientists to focus on high-throughput technologies. Based on their existence in natural host defense systems and their different mode of action relative to commercial antibiotics, antimicrobial peptides represent a new hope for discovering novel antibiotics against multi-resistant bacteria. The ease of generating peptide libraries in different formats has allowed a rapid adaptation of high-throughput assays to the search for novel antimicrobial peptides. Similarly, the availability nowadays of high-quantity and high-quality antimicrobial peptide data has permitted the development of predictive algorithms to facilitate the optimization process. This review summarizes the various library formats that lead to de novo antimicrobial peptide sequences, as well as the latest structural knowledge and optimization processes aimed at improving the peptides' selectivity.
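
Predictive filtering of peptide libraries often begins with cheap physicochemical descriptors. A toy sketch: score candidates by approximate net charge at neutral pH (K/R as +1, D/E as -1, ignoring histidine and termini) and hydrophobic residue fraction. The cutoffs and example sequences are invented for illustration and are not from the review:

```python
# Crude physicochemical pre-filter for antimicrobial peptide candidates:
# cationic and moderately hydrophobic sequences pass, others are dropped.
HYDROPHOBIC = set("AILMFWVY")  # simple hydrophobic residue set

def net_charge(seq):
    """Approximate net charge at pH 7 (K/R = +1, D/E = -1)."""
    return sum(seq.count(r) for r in "KR") - sum(seq.count(r) for r in "DE")

def hydrophobic_fraction(seq):
    return sum(1 for r in seq if r in HYDROPHOBIC) / len(seq)

def passes_filter(seq, min_charge=2, min_hfrac=0.3):
    return net_charge(seq) >= min_charge and hydrophobic_fraction(seq) >= min_hfrac

candidates = ["KKLLWKAAK", "DDEESSGG", "GIGAVLKV"]  # made-up sequences
print([(c, passes_filter(c)) for c in candidates])
```

Filters like this only narrow a combinatorial library; the activity and selectivity optimization the review describes still requires the screening assays themselves.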

  19. 20180311 - High Throughput Transcriptomics: From screening to pathways (SOT 2018)

    Science.gov (United States)

    The EPA ToxCast effort has screened thousands of chemicals across hundreds of high-throughput in vitro screening assays. The project is now leveraging high-throughput transcriptomic (HTTr) technologies to substantially expand its coverage of biological pathways. The first HTTr sc...

  20. High throughput label-free platform for statistical bio-molecular sensing

    DEFF Research Database (Denmark)

    Bosco, Filippo; Hwu, En-Te; Chen, Ching-Hsiu

    2011-01-01

    Sensors are crucial in many daily operations including security, environmental control, human diagnostics and patient monitoring. Screening and online monitoring require reliable and high-throughput sensing. We report on the demonstration of a high-throughput label-free sensor platform utilizing...

  1. Fractal-like Distributions over the Rational Numbers in High-throughput Biological and Clinical Data

    Science.gov (United States)

    Trifonov, Vladimir; Pasqualucci, Laura; Dalla-Favera, Riccardo; Rabadan, Raul

    2011-12-01

    Recent developments in extracting and processing biological and clinical data are allowing quantitative approaches to studying living systems. High-throughput sequencing (HTS), expression profiles, proteomics, and electronic health records (EHR) are some examples of such technologies. Extracting meaningful information from these technologies requires careful analysis of the large volumes of data they produce. In this note, we present a set of fractal-like distributions that commonly appear in the analysis of such data. The first set of examples is drawn from an HTS experiment, where the distributions appear as part of the evaluation of the sequencing error rate and the identification of tumorigenic genomic alterations. The other examples are obtained from risk factor evaluation and analysis of relative disease prevalence and co-morbidity as these appear in EHR. The distributions are also relevant to the identification of subclonal populations in tumors and the study of quasi-species and intrahost diversity of viral populations.

  2. Development of a high-throughput method for the systematic identification of human proteins nuclear translocation potential

    Directory of Open Access Journals (Sweden)

    Kawai Jun

    2009-09-01

    Full Text Available Abstract Background Important clues to the function of novel and uncharacterized proteins can be obtained by identifying their ability to translocate to the nucleus. In addition, a comprehensive definition of the nuclear proteome undoubtedly represents a key step toward a better understanding of the biology of this organelle. Although several high-throughput experimental methods have been developed to explore the sub-cellular localization of proteins, these methods tend to focus on the predominant localizations of gene products and may fail to provide a complete catalog of proteins that are able to transiently localize to the nucleus. Results We have developed a method for examining the nuclear localization potential of human gene products at the proteome scale by adapting a mammalian two-hybrid system we have previously developed. Our system is composed of three constructs co-transfected into a mammalian cell line. First, it contains a PCR construct encoding a fusion protein composed of a tested protein, the PDZ-protein TIP-1, and the transactivation domain of TNNC2 (referred to as the ACT construct). Second, our system contains a PCR construct encoding a fusion protein composed of the DNA binding domain of GAL4 and the PDZ-binding domain of rhotekin (referred to as the BIND construct). Third, a GAL4-responsive luciferase reporter is used to detect the reconstitution of a transcriptionally active BIND-ACT complex through the interaction of TIP-1 and rhotekin, which indicates the ability of the tested protein to translocate into the nucleus. We validated our method in a small-scale feasibility study by comparing it to green fluorescent protein (GFP) fusion-based sub-cellular localization assays, sequence-based computational prediction of protein sub-cellular localization, and current sub-cellular localization data available from the literature for 22 gene products. 
Conclusion Our reporter-based system can rapidly screen gene products for their ability

  3. High-throughput molecular analysis in lung cancer: insights into biology and potential clinical applications.

    Science.gov (United States)

    Ocak, S; Sos, M L; Thomas, R K; Massion, P P

    2009-08-01

    During the last decade, high-throughput technologies, including genomics, epigenomics, transcriptomics and proteomics, have been applied to further our understanding of the molecular pathogenesis of this heterogeneous disease, and to develop strategies that aim to improve the management of patients with lung cancer. Ultimately, these approaches should lead to sensitive, specific and noninvasive methods for early diagnosis, and facilitate the prediction of response to therapy and outcome, as well as the identification of potential novel therapeutic targets. Genomic studies were the first to move this field forward by providing novel insights into the molecular biology of lung cancer and by generating candidate biomarkers of disease progression. Lung carcinogenesis is driven by genetic and epigenetic alterations that cause aberrant gene function; however, the challenge remains to pinpoint the key regulatory control mechanisms and to distinguish driver from passenger alterations that may have a small but additive effect on cancer development. Epigenetic regulation by DNA methylation and histone modifications modulates chromatin structure and, in turn, either activates or silences gene expression. Proteomic approaches critically complement these molecular studies, as the phenotype of a cancer cell is determined by proteins and cannot be predicted by genomics or transcriptomics alone. The present article focuses on the technological platforms available and some proposed clinical applications. We illustrate herein how the "-omics" have revolutionised our approach to lung cancer biology and hold promise for personalised management of lung cancer.

  4. Integrative Genomics: Quantifying significance of phenotype-genotype relationships from multiple sources of high-throughput data

    Directory of Open Access Journals (Sweden)

    Eric eGamazon

    2013-05-01

    Full Text Available Given recent advances in the generation of high-throughput data such as whole-genome genetic variation and transcriptome expression, it is critical to develop novel methods to integrate these heterogeneous datasets and to assess the significance of identified phenotype-genotype relationships. Recent studies show that genome-wide association findings are likely to fall in loci with gene regulatory effects, such as expression quantitative trait loci (eQTLs), demonstrating the utility of such integrative approaches. When genotype and gene expression data are available on the same individuals, we developed methods wherein top phenotype-associated genetic variants are prioritized if they are associated, as eQTLs, with gene expression traits that are themselves associated with the phenotype. Yet there has been no method to determine an overall p-value for the findings that arise specifically from the integrative nature of the approach. We propose a computationally feasible permutation method that accounts for the assimilative nature of the method and the correlation structure among gene expression traits and among genotypes. We apply the method to data from a study of cellular sensitivity to etoposide, one of the most widely used chemotherapeutic drugs. To our knowledge, this study is the first statistically sound quantification of the significance of the genotype-phenotype relationships resulting from applying an integrative approach. This method can be easily extended to cases in which gene expression data are replaced by other molecular phenotypes of interest, e.g., microRNA or proteomic data. This study has important implications for studies seeking to expand on genetic association studies by the use of omics data. Finally, we provide R code to compute the empirical FDR when p-values for the observed and simulated phenotypes are available.
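
    The empirical FDR computation described above, comparing observed p-values against permutation-derived null p-values, can be sketched as follows. The function name and inputs are illustrative, not the authors' R implementation:

```python
import numpy as np

def empirical_fdr(observed_p, permuted_p, threshold):
    """Estimate the FDR at a p-value threshold from permutation nulls.

    observed_p: p-values for the real phenotype (length n_tests)
    permuted_p: p-values for simulated/permuted phenotypes,
                shape (n_permutations, n_tests)
    """
    observed_p = np.asarray(observed_p)
    permuted_p = np.asarray(permuted_p)
    n_discoveries = np.sum(observed_p <= threshold)
    if n_discoveries == 0:
        return 0.0
    # Expected false discoveries: mean number of null p-values
    # passing the same threshold, per permutation.
    expected_false = np.mean(np.sum(permuted_p <= threshold, axis=1))
    return min(1.0, expected_false / n_discoveries)
```

    Because the null p-values come from permuted phenotypes on the same genotype and expression data, the correlation structure among tests is preserved, which is the point the abstract emphasizes.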

  5. Identification of redox-sensitive cysteines in the arabidopsis proteome using OxiTRAQ, a quantitative redox proteomics method

    KAUST Repository

    Liu, Pei

    2014-01-28

    Cellular redox status plays a key role in mediating various physiological and developmental processes often through modulating activities of redox-sensitive proteins. Various stresses trigger over-production of reactive oxygen/nitrogen species which lead to oxidative modifications of redox-sensitive proteins. Identification and characterization of redox-sensitive proteins are important steps toward understanding molecular mechanisms of stress responses. Here, we report a high-throughput quantitative proteomic approach termed OxiTRAQ for identifying proteins whose thiols undergo reversible oxidative modifications in Arabidopsis cells subjected to oxidative stress. In this approach, a biotinylated thiol-reactive reagent is used for differential labeling of reduced and oxidized thiols. The biotin-tagged peptides are affinity purified, labeled with iTRAQ reagents, and analyzed using a paralleled HCD-CID fragmentation mode in an LTQ-Orbitrap. With this approach, we identified 195 cysteine-containing peptides from 179 proteins whose thiols underwent oxidative modifications in Arabidopsis cells following the treatment with hydrogen peroxide. A majority of those redox-sensitive proteins, including several transcription factors, were not identified by previous redox proteomics studies. This approach allows identification of the specific redox-regulated cysteine residues, and offers an effective tool for elucidation of redox proteomes. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
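
    The quantitative readout of such an experiment reduces to comparing reporter-ion intensities between treated and control channels for each cysteine-containing peptide. A minimal sketch of that ratio test (the cutoff, data layout, and function name are hypothetical, not the OxiTRAQ pipeline):

```python
import math

def redox_sensitive_peptides(intensities, fold_cutoff=1.5):
    """Flag cysteine peptides whose treated/control reporter-ion ratio
    changes by at least fold_cutoff in either direction.

    intensities: dict mapping peptide -> (control, treated) intensities
    Returns (peptide, log2 ratio) pairs for the flagged peptides.
    """
    hits = []
    for peptide, (control, treated) in intensities.items():
        ratio = treated / control
        if ratio >= fold_cutoff or ratio <= 1.0 / fold_cutoff:
            hits.append((peptide, round(math.log2(ratio), 2)))
    return hits
```

    In the real workflow the two channels are iTRAQ reporter ions from the same LC-MS run, so the comparison is within-spectrum rather than across separate measurements.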

  6. A flexible statistical model for alignment of label-free proteomics data--incorporating ion mobility and product ion information.

    Science.gov (United States)

    Benjamin, Ashlee M; Thompson, J Will; Soderblom, Erik J; Geromanos, Scott J; Henao, Ricardo; Kraus, Virginia B; Moseley, M Arthur; Lucas, Joseph E

    2013-12-16

    The goal of many proteomics experiments is to determine the abundance of proteins in biological samples, and the variation thereof in various physiological conditions. High-throughput quantitative proteomics, specifically label-free LC-MS/MS, allows rapid measurement of thousands of proteins, enabling large-scale studies of various biological systems. Prior to analyzing these information-rich datasets, raw data must undergo several computational processing steps. We present a method to address one of the essential steps in proteomics data processing--the matching of peptide measurements across samples. We describe a novel method for label-free proteomics data alignment with the ability to incorporate previously unused aspects of the data, particularly ion mobility drift times and product ion information. We compare the results of our alignment method to PEPPeR and OpenMS, and compare alignment accuracy achieved by different versions of our method utilizing various data characteristics. Our method results in increased match recall rates and similar or improved mismatch rates compared to PEPPeR and OpenMS feature-based alignment. We also show that the inclusion of drift time and product ion information results in higher recall rates and more confident matches, without increases in error rates. Based on the results presented here, we argue that the incorporation of ion mobility drift time and product ion information are worthy pursuits. Alignment methods should be flexible enough to utilize all available data, particularly with recent advancements in experimental separation methods.
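
    The core matching step, pairing peptide features across runs within m/z, retention-time, and drift-time tolerances, can be illustrated with a greedy nearest-neighbour sketch (the tolerances and distance score are placeholders; the paper's statistical alignment model is more sophisticated):

```python
def match_features(ref, sample, mz_tol=0.01, rt_tol=1.0, dt_tol=0.5):
    """Greedily pair (m/z, retention time, drift time) features between
    a reference run and a sample run, within per-dimension tolerances."""
    matches, used = [], set()
    for i, (mz, rt, dt) in enumerate(ref):
        best, best_dist = None, None
        for j, (mz2, rt2, dt2) in enumerate(sample):
            if j in used:
                continue
            if (abs(mz - mz2) <= mz_tol and abs(rt - rt2) <= rt_tol
                    and abs(dt - dt2) <= dt_tol):
                # Tolerance-normalized squared distance over all dimensions.
                dist = ((mz - mz2) / mz_tol) ** 2 \
                     + ((rt - rt2) / rt_tol) ** 2 \
                     + ((dt - dt2) / dt_tol) ** 2
                if best_dist is None or dist < best_dist:
                    best, best_dist = j, dist
        if best is not None:
            matches.append((i, best))
            used.add(best)
    return matches
```

    Requiring agreement in the drift-time dimension as well mirrors the paper's finding that ion mobility information yields more confident matches without raising error rates.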

  7. High throughput imaging cytometer with acoustic focussing.

    Science.gov (United States)

    Zmijan, Robert; Jonnalagadda, Umesh S; Carugo, Dario; Kochi, Yu; Lemm, Elizabeth; Packham, Graham; Hill, Martyn; Glynne-Jones, Peter

    2015-10-31

    We demonstrate an imaging flow cytometer that uses acoustic levitation to assemble cells and other particles into a sheet structure. This technique enables a high resolution, low noise CMOS camera to capture images of thousands of cells with each frame. While ultrasonic focussing has previously been demonstrated for 1D cytometry systems, extending the technology to a planar, much higher throughput format and integrating imaging is non-trivial, and represents a significant jump forward in capability, leading to diagnostic possibilities not achievable with current systems. A galvo mirror is used to track the images of the moving cells permitting exposure times of 10 ms at frame rates of 50 fps with motion blur of only a few pixels. At 80 fps, we demonstrate a throughput of 208 000 beads per second. We investigate the factors affecting motion blur and throughput, and demonstrate the system with fluorescent beads, leukaemia cells and a chondrocyte cell line. Cells require more time to reach the acoustic focus than beads, resulting in lower throughputs; however a longer device would remove this constraint.

  8. Turbocharged molecular discovery of OLED emitters: from high-throughput quantum simulation to highly efficient TADF devices

    Science.gov (United States)

    Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Ha, Dong-Gwang; Einzinger, Markus; Wu, Tony; Baldo, Marc A.; Aspuru-Guzik, Alán.

    2016-09-01

    Discovering new OLED emitters requires many experiments to synthesize candidates and test performance in devices. Large scale computer simulation can greatly speed this search process but the problem remains challenging enough that brute force application of massive computing power is not enough to successfully identify novel structures. We report a successful High Throughput Virtual Screening study that leveraged a range of methods to optimize the search process. The generation of candidate structures was constrained to contain combinatorial explosion. Simulations were tuned to the specific problem and calibrated with experimental results. Experimentalists and theorists actively collaborated such that experimental feedback was regularly utilized to update and shape the computational search. Supervised machine learning methods prioritized candidate structures prior to quantum chemistry simulation to prevent wasting compute on likely poor performers. With this combination of techniques, each multiplying the strength of the search, this effort managed to navigate an area of molecular space and identify hundreds of promising OLED candidate structures. An experimentally validated selection of this set shows emitters with external quantum efficiencies as high as 22%.

  9. Contribution of MS-based proteomics to the understanding of Herpes Simplex Virus type 1 interaction with host cells

    Directory of Open Access Journals (Sweden)

    Enrique eSantamaría

    2012-03-01

    Full Text Available Like other DNA viruses, Herpes Simplex Virus type 1 (HSV-1) replicates and proliferates in host cells while continuously modulating the host molecular environment. Following a sophisticated temporal expression pattern, HSV-1 encodes at least 89 multifunctional proteins that interplay with and modify the host cell proteome. During the last decade, advances in mass spectrometry applications coupled to the development of proteomic separation methods have made it possible to partially monitor the impact of HSV-1 infection in human cells. In this review, we discuss the current use of different proteome fractionation strategies to define HSV-1 targets in two major application areas: (i) viral protein interactomics, to decipher viral protein interactions in host cells, and (ii) differential quantitative proteomics, to analyse the virally induced changes in the cellular proteome. Moreover, we also discuss the potential application of high-throughput proteomic approaches to study global proteome dynamics and post-translational modifications in HSV-1-infected cells, which will greatly improve our molecular knowledge of HSV-1 infection.

  10. Evaluating High Throughput Toxicokinetics and Toxicodynamics for IVIVE (WC10)

    Science.gov (United States)

    High-throughput screening (HTS) generates in vitro data for characterizing potential chemical hazard. TK models are needed to allow in vitro to in vivo extrapolation (IVIVE) to real world situations. The U.S. EPA has created a public tool (R package “httk” for high throughput tox...

  11. PACOM: A Versatile Tool for Integrating, Filtering, Visualizing, and Comparing Multiple Large Mass Spectrometry Proteomics Data Sets.

    Science.gov (United States)

    Martínez-Bartolomé, Salvador; Medina-Aunon, J Alberto; López-García, Miguel Ángel; González-Tejedo, Carmen; Prieto, Gorka; Navajas, Rosana; Salazar-Donate, Emilio; Fernández-Costa, Carolina; Yates, John R; Albar, Juan Pablo

    2018-04-06

    Mass-spectrometry-based proteomics has evolved into a high-throughput technology in which numerous large-scale data sets are generated from diverse analytical platforms. Furthermore, several scientific journals and funding agencies have emphasized the storage of proteomics data in public repositories to facilitate its evaluation, inspection, and reanalysis. (1) As a consequence, public proteomics data repositories are growing rapidly. However, tools are needed to integrate multiple proteomics data sets to compare different experimental features or to perform quality control analysis. Here, we present a new Java stand-alone tool, Proteomics Assay COMparator (PACOM), that is able to import, combine, and simultaneously compare numerous proteomics experiments to check the integrity of the proteomic data as well as verify data quality. With PACOM, the user can detect sources of error that may have been introduced at any step of a proteomics workflow and that influence the final results. Data sets can be easily compared and integrated, and data quality and reproducibility can be visually assessed through a rich set of graphical representations of proteomics data features as well as a wide variety of data filters. Its flexibility and easy-to-use interface make PACOM a unique tool for daily use in a proteomics laboratory. PACOM is available at https://github.com/smdb21/pacom.

  12. MStern Blotting–High Throughput Polyvinylidene Fluoride (PVDF) Membrane-Based Proteomic Sample Preparation for 96-Well Plates*

    OpenAIRE

    Berger, Sebastian T.; Ahmed, Saima; Muntel, Jan; Cuevas Polo, Nerea; Bachur, Richard; Kentsis, Alex; Steen, Judith; Steen, Hanno

    2015-01-01

    We describe a 96-well plate compatible membrane-based proteomic sample processing method, which enables the complete processing of 96 samples (or multiples thereof) within a single workday. This method uses a large-pore hydrophobic PVDF membrane that efficiently adsorbs proteins, resulting in fast liquid transfer through the membrane and significantly reduced sample processing times. Low liquid transfer speeds have prevented the useful 96-well plate implementation of FASP as a widely used mem...

  13. FPGA Compute Acceleration for High-Throughput Data Processing in High-Energy Physics Experiments

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    The upgrades of the four large experiments of the LHC at CERN in the coming years will result in a huge increase in data bandwidth for each experiment, which needs to be processed very efficiently. For example, the LHCb experiment will upgrade its detector in 2019/2020 to a 'triggerless' readout scheme, where all of the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the event filter farm to 40 Tbit/s, which must be processed to select the interesting proton-proton collisions for later storage. The architecture of such a computing farm, which can process this amount of data as efficiently as possible, is a challenging task, and several compute accelerator technologies are being considered. In the high-performance computing sector, more and more FPGA compute accelerators are being used to improve the compute performance and reduce the...

  14. Computational Lipidomics and Lipid Bioinformatics: Filling In the Blanks.

    Science.gov (United States)

    Pauling, Josch; Klipp, Edda

    2016-12-22

    Lipids are highly diverse metabolites of pronounced importance in health and disease. While metabolomics is a broad field under the omics umbrella that may also relate to lipids, lipidomics is an emerging field which specializes in the identification, quantification and functional interpretation of complex lipidomes. Today, it is possible to identify and distinguish lipids in a high-resolution, high-throughput manner, and simultaneously with substantial structural detail. However, doing so may produce thousands of mass spectra in a single experiment, which has created a high demand for specialized computational support to analyze these spectral libraries. The computational biology and bioinformatics community has so far established methodology in genomics, transcriptomics and proteomics, but there are many (combinatorial) challenges when it comes to the structural diversity of lipids and their identification, quantification and interpretation. This review gives an overview and outlook on lipidomics research and illustrates ongoing computational and bioinformatics efforts. These efforts are important and necessary steps to advance the lipidomics field alongside the analytical, biochemical, biomedical and biological communities, and to close the gap in available computational methodology between lipidomics and other omics sub-branches.

  15. Accurate Classification of Protein Subcellular Localization from High-Throughput Microscopy Images Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Tanel Pärnamaa

    2017-05-01

    Full Text Available High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per cell localization classification accuracy of 91%, and per protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy.
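
    The "network as a feature calculator" step described above amounts to training a standard classifier on fixed feature vectors. A minimal multinomial logistic regression over such precomputed features might look like this (the training loop and hyperparameters are generic illustrations, not the paper's setup):

```python
import numpy as np

def train_softmax(X, y, n_classes, lr=0.5, epochs=200):
    """Multinomial logistic regression on precomputed feature vectors
    (e.g. activations from a pretrained network's penultimate layer)."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                        # one-hot labels
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)
        G = (P - Y) / n                              # cross-entropy gradient
        W -= lr * X.T @ G
        b -= lr * G.sum(axis=0)
    return W, b

def predict(W, b, X):
    """Assign each feature vector to the highest-scoring class."""
    return np.argmax(X @ W + b, axis=1)
```

    Because the expensive part (the deep network) is frozen, such a classifier can learn a previously unseen compartment from only a handful of labeled examples, which is the few-shot behaviour the abstract reports.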

  16. Accurate Classification of Protein Subcellular Localization from High-Throughput Microscopy Images Using Deep Learning.

    Science.gov (United States)

    Pärnamaa, Tanel; Parts, Leopold

    2017-05-05

    High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per cell localization classification accuracy of 91%, and per protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy. Copyright © 2017 Parnamaa and Parts.

  17. A Customizable Flow Injection System for Automated, High Throughput, and Time Sensitive Ion Mobility Spectrometry and Mass Spectrometry Measurements.

    Science.gov (United States)

    Orton, Daniel J; Tfaily, Malak M; Moore, Ronald J; LaMarche, Brian L; Zheng, Xueyun; Fillmore, Thomas L; Chu, Rosalie K; Weitz, Karl K; Monroe, Matthew E; Kelly, Ryan T; Smith, Richard D; Baker, Erin S

    2018-01-02

    To better understand disease conditions and environmental perturbations, multiomic studies combining proteomic, lipidomic, and metabolomic analyses are vastly increasing in popularity. In a multiomic study, a single sample is typically extracted in multiple ways, and various analyses are performed using different instruments, most often based upon mass spectrometry (MS). Thus, one sample becomes many measurements, making high throughput and reproducible evaluations a necessity. One way to address the numerous samples and varying instrumental conditions is to utilize a flow injection analysis (FIA) system for rapid sample injections. While some FIA systems have been created to address these challenges, many have limitations such as costly consumables, low pressure capabilities, limited pressure monitoring, and fixed flow rates. To address these limitations, we created an automated, customizable FIA system capable of operating at a range of flow rates (∼50 nL/min to 500 μL/min) to accommodate both low- and high-flow MS ionization sources. This system also functions at varying analytical throughputs from 24 to 1200 samples per day to enable different MS analysis approaches. Applications ranging from native protein analyses to molecular library construction were performed using the FIA system, and results showed a highly robust and reproducible platform capable of providing consistent performance over many days without carryover, as long as washing buffers specific to each molecular analysis were utilized.

  18. High-throughput optical system for HDES hyperspectral imager

    Science.gov (United States)

    Václavík, Jan; Melich, Radek; Pintr, Pavel; Pleštil, Jan

    2015-01-01

    Affordable, long-wave infrared hyperspectral imaging calls for the use of an uncooled FPA with high-throughput optics. This paper describes the design of the optical part of a stationary hyperspectral imager in a spectral range of 7-14 μm with a field of view of 20°×10°. The imager employs a push-broom method implemented with a scanning mirror. High throughput and a demand for simplicity and rigidity led to a fully refractive design with highly aspheric surfaces and off-axis positioning of the detector array. The design was optimized to exploit the machinability of infrared materials by the SPDT method and simple assembly.

  19. Positional proteomics in the era of the human proteome project on the doorstep of precision medicine.

    Science.gov (United States)

    Eckhard, Ulrich; Marino, Giada; Butler, Georgina S; Overall, Christopher M

    2016-03-01

    Proteolytic processing is a pervasive and irreversible post-translational modification that expands the protein universe by generating new proteoforms (protein isoforms). Unlike signal peptide or prodomain removal, protease-generated proteoforms can rarely be predicted from gene sequences. Positional proteomic techniques that enrich for N- or C-terminal peptides from proteomes are indispensable for a comprehensive understanding of a protein's function in biological environments, since protease cleavage frequently results in altered protein activity and localization. Proteases often process other proteases and protease inhibitors, which perturbs proteolytic networks and potentiates the initial cleavage event to affect other molecular networks and cellular processes in physiological and pathological conditions. This review is aimed at researchers with a keen interest in state-of-the-art, systems-level positional proteomic approaches that: (i) enable the study of complex protease-protease, protease-inhibitor and protease-substrate crosstalk and networks; (ii) allow the identification of proteolytic signatures as candidate disease biomarkers; and (iii) are expected to fill the Human Proteome Project missing-proteins gap. We predict that these methodologies will be an integral part of emerging precision medicine initiatives that aim to customize healthcare, converting reactive medicine into a personalized and proactive approach, improving clinical care and maximizing patient health and wellbeing, while decreasing health costs by eliminating ineffective therapies, trial-and-error prescribing, and adverse drug effects. Such initiatives require quantitative and functional proteome profiling and dynamic disease biomarkers in addition to current pharmacogenomics approaches. With proteases at the pathogenic center of many diseases, high-throughput protein termini identification techniques such as TAILS (Terminal Amine Isotopic Labeling of Substrates) and COFRADIC (COmbined

  20. Proteome stability analysis of snap frozen, RNAlater preserved, and formalin-fixed paraffin-embedded human colon mucosal biopsies

    Directory of Open Access Journals (Sweden)

    Tue Bjerg Bennike

    2016-03-01

    Full Text Available Large repositories of well-characterized RNAlater-preserved samples and formalin-fixed, paraffin-embedded samples have been generated worldwide. However, the impact of the preservation methods on the proteome remains poorly described. Therefore, we analyzed the impact of preserving samples in RNAlater, and by formalin fixation and paraffin embedding, on the proteome of human soft tissue, using directly frozen samples as a control (“Comparing the proteome of snap frozen, RNAlater preserved, and formalin-fixed paraffin-embedded human tissue samples” [1]). We here report the data from the analysis. The comparative analysis was performed on 24 colon mucosa biopsies, extracted from the sigmoideum of two gastroenterologically healthy participants for the purpose of this study. A set of biopsies was additionally stored for 30 min at room temperature prior to formalin fixation. The samples were analyzed by high-throughput, gel-free quantitative proteomics. The MS proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PRIDE: http://www.ebi.ac.uk/pride/archive/projects/PXD002029. Keywords: Human, Colon, Mucosa, RNAlater, FFPE, Snap-frozen, Stability, LC–MS, Proteomics

  1. High throughput computing: a solution for scientific analysis

    Science.gov (United States)

    O'Donnell, M.

    2011-01-01

    Public land management agencies continually face resource management problems that are exacerbated by climate warming, land-use change, and other human activities. As the U.S. Geological Survey (USGS) Fort Collins Science Center (FORT) works with managers in U.S. Department of the Interior (DOI) agencies and other federal, state, and private entities, researchers are finding that the science needed to address these complex ecological questions across time and space produces substantial amounts of data. The additional data and the volume of computations needed to analyze it require expanded computing resources well beyond single- or even multiple-computer workstations. To meet this need for greater computational capacity, FORT investigated how to resolve the many computational shortfalls previously encountered when analyzing data for such projects. Our objectives included finding a solution that would:

  2. TCP Throughput Profiles Using Measurements over Dedicated Connections

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S. [ORNL; Liu, Qiang [ORNL; Sen, Satyabrata [ORNL; Towsley, Don [University of Massachusetts, Amherst; Vardoyan, Gayane [University of Massachusetts, Amherst; Kettimuthu, R. [Argonne National Laboratory (ANL); Foster, Ian [University of Chicago

    2017-06-01

    Wide-area data transfers in high-performance computing infrastructures are increasingly being carried over dynamically provisioned dedicated network connections that provide high capacities with no competing traffic. We present extensive TCP throughput measurements and time traces over a suite of physical and emulated 10 Gbps connections with 0-366 ms round-trip times (RTTs). Contrary to the general expectation, they show significant statistical and temporal variations, in addition to the overall dependencies on the congestion control mechanism, buffer size, and the number of parallel streams. We analyze several throughput profiles that have highly desirable concave regions wherein the throughput decreases slowly with RTTs, in stark contrast to the convex profiles predicted by various TCP analytical models. We present a generic throughput model that abstracts the ramp-up and sustainment phases of TCP flows, which provides insights into qualitative trends observed in measurements across TCP variants: (i) slow-start followed by well-sustained throughput leads to concave regions; (ii) large buffers and multiple parallel streams expand the concave regions in addition to improving the throughput; and (iii) stable throughput dynamics, indicated by a smoother Poincare map and smaller Lyapunov exponents, lead to wider concave regions. These measurements and analytical results together enable us to select a TCP variant and its parameters for a given connection to achieve high throughput with statistical guarantees.
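
    The ramp-up/sustainment abstraction can be illustrated numerically. The sketch below is not the authors' model; it assumes, with purely illustrative parameters, a flow that doubles its rate every RTT (slow-start) until reaching link capacity and sustains it afterwards, and shows that the resulting average-throughput profile is concave in RTT:

```python
import math

def mean_throughput(rtt, capacity=10e9, duration=10.0, init_window=1.2e5):
    """Average throughput (bit/s) over a `duration`-second transfer for a flow
    that doubles its sending window every RTT (slow-start) until the window
    reaches the bandwidth-delay product, then sustains link capacity.
    Bits sent during the ramp are ignored; every parameter is illustrative."""
    ramp_rtts = max(0.0, math.log2(capacity * rtt / init_window))  # doublings needed
    ramp_time = min(duration, ramp_rtts * rtt)
    return capacity * (duration - ramp_time) / duration

rtts = [0.01 * i for i in range(1, 37)]              # 10 ms .. 360 ms
profile = [mean_throughput(r) for r in rtts]
```

    Because the ramp time grows like RTT·log2(RTT), the profile falls slowly at small RTTs and steeply at large ones, i.e., it is concave, matching the qualitative trend reported for slow-start followed by well-sustained throughput.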

  3. SAMNet: a network-based approach to integrate multi-dimensional high throughput datasets.

    Science.gov (United States)

    Gosline, Sara J C; Spencer, Sarah J; Ursu, Oana; Fraenkel, Ernest

    2012-11-01

    The rapid development of high throughput biotechnologies has led to an onslaught of data describing genetic perturbations and changes in mRNA and protein levels in the cell. Because each assay provides a one-dimensional snapshot of active signaling pathways, it has become desirable to perform multiple assays (e.g. mRNA expression and phospho-proteomics) to measure a single condition. However, as experiments expand to accommodate various cellular conditions, proper analysis and interpretation of these data have become more challenging. Here we introduce a novel approach called SAMNet, for Simultaneous Analysis of Multiple Networks, that is able to interpret diverse assays over multiple perturbations. The algorithm uses a constrained optimization approach to integrate mRNA expression data with upstream genes, selecting edges in the protein-protein interaction network that best explain the changes across all perturbations. The result is a putative set of protein interactions that succinctly summarizes the results from all experiments, highlighting the network elements unique to each perturbation. We evaluated SAMNet in both yeast and human datasets. The yeast dataset measured the cellular response to seven different transition metals, and the human dataset measured cellular changes in four different lung cancer models of Epithelial-Mesenchymal Transition (EMT), a crucial process in tumor metastasis. SAMNet was able to identify canonical yeast metal-processing genes unique to each commodity in the yeast dataset, as well as human genes such as β-catenin and TCF7L2/TCF4 that are required for EMT signaling but escaped detection in the mRNA and phospho-proteomic data. Moreover, SAMNet also highlighted drugs likely to modulate EMT, identifying a series of less canonical genes known to be affected by the BCR-ABL inhibitor imatinib (Gleevec), suggesting a possible influence of this drug on EMT.

  4. A high throughput architecture for a low complexity soft-output demapping algorithm

    Science.gov (United States)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

    Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and are therefore part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low-complexity algorithm identified, that delivers a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
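
    For context, what any soft-output demapper has to compute can be sketched with the standard max-log approximation for a Gray-mapped 16-QAM constellation. This is a generic reference model, not the paper's low-complexity architecture; the unit-energy scaling and the sign convention are assumptions:

```python
import itertools

# Gray-coded 16-QAM: two bits select the I level, two bits the Q level.
GRAY = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
SCALE = 10 ** 0.5  # assumed normalization to unit average symbol energy

CONSTELLATION = []  # (complex symbol, 4-bit label (b0, b1, b2, b3))
for b0, b1, b2, b3 in itertools.product((0, 1), repeat=4):
    sym = complex(GRAY[(b0, b1)], GRAY[(b2, b3)]) / SCALE
    CONSTELLATION.append((sym, (b0, b1, b2, b3)))

def max_log_llr(y, n0=0.5):
    """Max-log LLRs for one received symbol y:
    LLR_i = (min over bit_i=1 of |y-s|^2  -  min over bit_i=0) / n0,
    so a positive LLR favours bit value 0."""
    llrs = []
    for i in range(4):
        d0 = min(abs(y - s) ** 2 for s, bits in CONSTELLATION if bits[i] == 0)
        d1 = min(abs(y - s) ** 2 for s, bits in CONSTELLATION if bits[i] == 1)
        llrs.append((d1 - d0) / n0)
    return llrs
```

    A received symbol near a constellation point yields four LLRs whose signs match that point's bit label; the low-complexity algorithms the paper evaluates approximate exactly this computation with fewer operations.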

  5. Analysis of high-throughput biological data using their rank values.

    Science.gov (United States)

    Dembélé, Doulaye

    2018-01-01

    High-throughput biological technologies are routinely used to generate gene expression profiling or cytogenetics data. To achieve high performance, methods available in the literature have become more specialized and often require substantial computational resources. Here, we propose a new versatile method based on data-ordering rank values. We use linear algebra and the Perron-Frobenius theorem, and we also extend a method presented earlier for searching differentially expressed genes to the detection of recurrent copy number aberrations. A result derived from the proposed method is a one-sample Student's t-test based on rank values. The proposed method is, to our knowledge, the only one that applies both to gene expression profiling and to cytogenetics data sets. This new method is fast, deterministic, and requires a low computational load. Probabilities are associated with genes to allow selection of a statistically significant subset in the data set. Stability scores are also introduced as quality parameters. The performance and comparative analyses were carried out using real data sets. The proposed method can be accessed through an R package available from the CRAN (Comprehensive R Archive Network) website: https://cran.r-project.org/web/packages/fcros .
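
    The rank-based one-sample t-test can be illustrated as follows. This is a toy sketch of the idea, not the fcros implementation; the null mean rank (n_genes + 1)/2 is the standard expectation when a gene's rank is uniform over n_genes genes per comparison:

```python
import math

def rank(values):
    """1-based ranks, with ties given their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def one_sample_t_on_ranks(gene_ranks, n_genes):
    """t statistic testing whether a gene's mean rank across comparisons
    deviates from the null expectation (n_genes + 1) / 2."""
    m = len(gene_ranks)
    mean = sum(gene_ranks) / m
    var = sum((r - mean) ** 2 for r in gene_ranks) / (m - 1)
    return (mean - (n_genes + 1) / 2) / math.sqrt(var / m)
```

    A gene that is consistently ranked near the top across comparisons gets a large positive t, which is the deterministic, low-cost signal the method exploits.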

  6. DESIGN OF LOW EPI AND HIGH THROUGHPUT CORDIC CELL TO IMPROVE THE PERFORMANCE OF MOBILE ROBOT

    Directory of Open Access Journals (Sweden)

    P. VELRAJKUMAR

    2014-04-01

    Full Text Available This paper mainly focuses on a pass-logic-based design that yields a low Energy Per Instruction (EPI) and high-throughput COordinate Rotation DIgital Computer (CORDIC) cell for robotic exploration applications. The basic components of the CORDIC cell, namely the register, the multiplexer, and the proposed adder, are designed using pass transistor logic (PTL). The proposed adder is implemented in a bit-parallel iterative CORDIC circuit designed using the DSCH2 VLSI CAD tool, and the layouts are generated by the Microwind 3 VLSI CAD tool. The propagation delay, area, and power dissipation are calculated from the simulated results for the proposed adder-based CORDIC cell. The EPI, throughput, and effect of temperature are calculated from the generated layout, whose output parameters are analysed using the BSIM4 advanced analyzer. The simulated results of the proposed adder-based CORDIC circuit are compared with other adder-based CORDIC circuits. From this analysis, it was found that the proposed adder-based CORDIC circuit dissipates less power, gives a faster response, and achieves lower EPI and higher throughput.
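
    For readers unfamiliar with the algorithm the cell implements, here is a plain software model of CORDIC in rotation mode (illustrative only; the paper's contribution is the PTL hardware realization of these iterations, not the algorithm itself):

```python
import math

def cordic_sin_cos(theta, iterations=32):
    """Compute (cos, sin) of theta (|theta| <= pi/2) with CORDIC rotations:
    in hardware only shifts, adds and a small arctan(2^-i) table are needed."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Pre-computed gain K = prod(1 / sqrt(1 + 2^-2i)), folded into the start value
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0    # rotate towards z = 0
        x, y, z = (x - d * y * 2.0 ** -i,
                   y + d * x * 2.0 ** -i,
                   z - d * angles[i])
    return x, y                        # (cos(theta), sin(theta))
```

    Each iteration contributes roughly one bit of precision, which is why a fixed number of shift-add stages suffices in the hardware cell.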

  7. High-Throughput Block Optical DNA Sequence Identification.

    Science.gov (United States)

    Sagar, Dodderi Manjunatha; Korshoj, Lee Erik; Hanson, Katrina Bethany; Chowdhury, Partha Pratim; Otoupal, Peter Britton; Chatterjee, Anushree; Nagpal, Prashant

    2018-01-01

    Optical techniques for molecular diagnostics or DNA sequencing generally rely on small molecule fluorescent labels, which utilize light with a wavelength of several hundred nanometers for detection. Developing a label-free optical DNA sequencing technique will require nanoscale focusing of light, a high-throughput and multiplexed identification method, and a data compression technique to rapidly identify sequences and analyze genomic heterogeneity for big datasets. Such a method should identify characteristic molecular vibrations using optical spectroscopy, especially in the "fingerprinting region" from ≈400–1400 cm⁻¹. Here, surface-enhanced Raman spectroscopy is used to demonstrate label-free identification of DNA nucleobases with multiplexed 3D plasmonic nanofocusing. While nanometer-scale mode volumes prevent identification of single nucleobases within a DNA sequence, the block optical technique can identify A, T, G, and C content in DNA k-mers. The content of each nucleotide in a DNA block can be a unique and high-throughput method for identifying sequences, genes, and other biomarkers as an alternative to single-letter sequencing. Additionally, coupling two complementary vibrational spectroscopy techniques (infrared and Raman) can improve block characterization. These results pave the way for developing a novel, high-throughput block optical sequencing method with lossy genomic data compression using k-mer identification from multiplexed optical data acquisition. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
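
    The block-content idea can be made concrete: each k-mer is reduced to its (A, T, G, C) counts, discarding the within-block order. The following is a hypothetical sketch of that data-compression step, with illustrative names:

```python
from collections import Counter

def block_signatures(sequence, k=10):
    """Reduce each k-mer of a DNA sequence to its base-content signature
    (counts of A, T, G, C), discarding the within-block letter order."""
    sigs = []
    for i in range(len(sequence) - k + 1):
        c = Counter(sequence[i:i + k])
        sigs.append((c["A"], c["T"], c["G"], c["C"]))
    return sigs
```

    For k = 10 this maps the 4^10 ≈ 10^6 distinct k-mers onto only C(13, 3) = 286 content signatures, which is the lossy compression the abstract describes.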

  8. The Urine Proteome as a Biomarker of Radiation Injury: Submitted to Proteomics- Clinical Applications Special Issue: "Renal and Urinary Proteomics (Thongboonkerd)"

    Science.gov (United States)

    Sharma, Mukut; Halligan, Brian D; Wakim, Bassam T; Savin, Virginia J; Cohen, Eric P; Moulder, John E

    2008-06-18

    Terrorist attacks or nuclear accidents could expose large numbers of people to ionizing radiation, and early biomarkers of radiation injury would be critical for triage, treatment, and follow-up of such individuals. However, no such biomarkers have yet been proven to exist. We tested the potential of high-throughput proteomics to identify protein biomarkers of radiation injury after total body X-ray irradiation (TBI) in a rat model. Subtle functional changes in the kidney are suggested by an increased glomerular permeability for macromolecules measured within 24 hours after TBI. Ultrastructural changes in glomerular podocytes include partial loss of the interdigitating organization of foot processes. Analysis of urine by LC-MS/MS and 2D-GE showed significant changes in the urine proteome within 24 hours after TBI. Tissue kallikrein 1-related peptidase, the cysteine proteinase inhibitor cystatin C, and oxidized histidine were found to be increased, while a number of proteinase inhibitors, including kallikrein-binding protein, and albumin were found to be decreased post-irradiation. Thus, TBI causes immediately detectable changes in renal structure and function and in the urinary protein profile. This suggests that both systemic and renal changes are induced by radiation and that it may be possible to identify a set of biomarkers unique to radiation injury.

  9. A high-throughput screening approach to discovering good forms of biologically inspired visual representation.

    Science.gov (United States)

    Pinto, Nicolas; Doukhan, David; DiCarlo, James J; Cox, David D

    2009-11-01

    While many models of biological object recognition share a common set of "broad-stroke" properties, the performance of any one model depends strongly on the choice of parameters in a particular instantiation of that model--e.g., the number of units per layer, the size of pooling kernels, exponents in normalization operations, etc. Since the number of such parameters (explicit or implicit) is typically large and the computational cost of evaluating one particular parameter set is high, the space of possible model instantiations goes largely unexplored. Thus, when a model fails to approach the abilities of biological visual systems, we are left uncertain whether this failure is because we are missing a fundamental idea or because the correct "parts" have not been tuned correctly, assembled at sufficient scale, or provided with enough training. Here, we present a high-throughput approach to the exploration of such parameter sets, leveraging recent advances in stream processing hardware (high-end NVIDIA graphic cards and the PlayStation 3's IBM Cell Processor). In analogy to high-throughput screening approaches in molecular biology and genetics, we explored thousands of potential network architectures and parameter instantiations, screening those that show promising object recognition performance for further analysis. We show that this approach can yield significant, reproducible gains in performance across an array of basic object recognition tasks, consistently outperforming a variety of state-of-the-art purpose-built vision systems from the literature. As the scale of available computational power continues to expand, we argue that this approach has the potential to greatly accelerate progress in both artificial vision and our understanding of the computational underpinning of biological vision.
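
    The screening loop itself is straightforward; the expensive part is the scoring step that runs on the stream-processing hardware. Below is a schematic version with a stand-in evaluation function (all names, parameter ranges, and sizes are illustrative, not taken from the paper):

```python
import random

SEARCH_SPACE = {                      # illustrative hyperparameters only
    "units_per_layer": [64, 128, 256, 512],
    "pool_size": [3, 5, 7, 9],
    "norm_exponent": [1.0, 1.5, 2.0],
}

def sample_model(rng):
    """Draw one random instantiation of the model family."""
    return {name: rng.choice(choices) for name, choices in SEARCH_SPACE.items()}

def screen(evaluate, n_candidates=1000, top_k=10, seed=0):
    """Score many random instantiations and keep the top_k for further
    analysis -- the high-throughput-screening analogy from the paper."""
    rng = random.Random(seed)
    scored = []
    for _ in range(n_candidates):
        model = sample_model(rng)
        scored.append((evaluate(model), model))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:top_k]
```

    In the paper's setting, `evaluate` would train and test a full vision model on GPU hardware; here it is simply whatever callable the user supplies.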

  10. A high-throughput screening approach to discovering good forms of biologically inspired visual representation.

    Directory of Open Access Journals (Sweden)

    Nicolas Pinto

    2009-11-01

    Full Text Available While many models of biological object recognition share a common set of "broad-stroke" properties, the performance of any one model depends strongly on the choice of parameters in a particular instantiation of that model--e.g., the number of units per layer, the size of pooling kernels, exponents in normalization operations, etc. Since the number of such parameters (explicit or implicit) is typically large and the computational cost of evaluating one particular parameter set is high, the space of possible model instantiations goes largely unexplored. Thus, when a model fails to approach the abilities of biological visual systems, we are left uncertain whether this failure is because we are missing a fundamental idea or because the correct "parts" have not been tuned correctly, assembled at sufficient scale, or provided with enough training. Here, we present a high-throughput approach to the exploration of such parameter sets, leveraging recent advances in stream processing hardware (high-end NVIDIA graphic cards and the PlayStation 3's IBM Cell Processor). In analogy to high-throughput screening approaches in molecular biology and genetics, we explored thousands of potential network architectures and parameter instantiations, screening those that show promising object recognition performance for further analysis. We show that this approach can yield significant, reproducible gains in performance across an array of basic object recognition tasks, consistently outperforming a variety of state-of-the-art purpose-built vision systems from the literature. As the scale of available computational power continues to expand, we argue that this approach has the potential to greatly accelerate progress in both artificial vision and our understanding of the computational underpinning of biological vision.

  11. High-Throughput Scoring of Seed Germination.

    Science.gov (United States)

    Ligterink, Wilco; Hilhorst, Henk W M

    2017-01-01

    High-throughput analysis of seed germination for phenotyping large genetic populations or mutant collections is very labor intensive and would highly benefit from an automated setup. Although very often used, the total germination percentage after a nominated period of time is not very informative as it lacks information about start, rate, and uniformity of germination, which are highly indicative of such traits as dormancy, stress tolerance, and seed longevity. The calculation of cumulative germination curves requires information about germination percentage at various time points. We developed the GERMINATOR package: a simple, highly cost-efficient, and flexible procedure for high-throughput automatic scoring and evaluation of germination that can be implemented without the use of complex robotics. The GERMINATOR package contains three modules: (I) design of experimental setup with various options to replicate and randomize samples; (II) automatic scoring of germination based on the color contrast between the protruding radicle and seed coat on a single image; and (III) curve fitting of cumulative germination data and the extraction, recap, and visualization of the various germination parameters. GERMINATOR is a freely available package that allows the monitoring and analysis of several thousand germination tests, several times a day, by a single person.
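
    Module III amounts to reading parameters off each cumulative germination curve. A minimal stand-in using linear interpolation is sketched below (illustrative thresholds and names; GERMINATOR's actual curve fitting is more sophisticated):

```python
def time_to_fraction(times, cum_pct, target):
    """Linearly interpolate the first time the cumulative curve reaches
    target; None if it never does."""
    for i in range(len(times) - 1):
        p0, p1 = cum_pct[i], cum_pct[i + 1]
        if p0 <= target <= p1 and p1 > p0:
            return times[i] + (times[i + 1] - times[i]) * (target - p0) / (p1 - p0)
    return None

def germination_parameters(times, cum_pct):
    """Gmax, t50 (time to half of Gmax) and a t84 - t16 uniformity window
    (a common spread measure) from one cumulative germination curve."""
    gmax = max(cum_pct)
    t50 = time_to_fraction(times, cum_pct, 0.50 * gmax)
    t16 = time_to_fraction(times, cum_pct, 0.16 * gmax)
    t84 = time_to_fraction(times, cum_pct, 0.84 * gmax)
    u8416 = t84 - t16 if t84 is not None and t16 is not None else None
    return gmax, t50, u8416
```

    These three numbers capture exactly what a single end-point germination percentage misses: maximum, rate, and uniformity of germination.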

  12. Fusarium graminearum and its interactions with cereal heads: studies in the proteomics era

    OpenAIRE

    Fen eYang; Fen eYang; Susanne eJacobsen; Hans J. L. Jørgensen; David B. Collinge; Birte eSvensson; Christine eFinnie

    2013-01-01

    The ascomycete fungal pathogen Fusarium graminearum is the causal agent of Fusarium head blight (FHB) in wheat and barley. This disease leads to significant losses of crop yield, and especially quality through the contamination by diverse fungal mycotoxins, which constitute a significant threat to the health of humans and animals. In recent years, high-throughput proteomics, aiming at identifying a broad spectrum of proteins with a potential role in the pathogenicity and host resistance, has ...

  13. High throughput salt separation from uranium deposits

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, S.W.; Park, K.M.; Kim, J.G.; Kim, I.T.; Park, S.B., E-mail: swkwon@kaeri.re.kr [Korea Atomic Energy Research Inst. (Korea, Republic of)

    2014-07-01

    It is very important to increase the throughput of the salt separation system owing to the high uranium content of spent nuclear fuel and the high salt fraction of uranium dendrites in pyroprocessing. A multilayer porous crucible system was proposed in this study to increase the throughput of the salt distiller. An integrated sieve-crucible assembly was also investigated for practical use of the porous crucible system. The salt evaporation behaviors were compared between a conventional nonporous crucible and the porous crucible. A two-step weight reduction took place in the porous crucible, whereas in the nonporous crucible the salt weight decreased only at high temperature, by distillation. The first weight reduction in the porous crucible was caused by liquid salt penetrating out through the perforated crucible as the temperature was raised to the distillation temperature. Multilayer porous crucibles have the benefit of an expanded evaporation surface area. (author)

  14. High-Throughput Computational Screening of the Metal Organic Framework Database for CH4/H2 Separations.

    Science.gov (United States)

    Altintas, Cigdem; Erucar, Ilknur; Keskin, Seda

    2018-01-31

    Metal organic frameworks (MOFs) have been considered one of the most exciting porous materials discovered in the last decade. Large surface areas, high pore volumes, and tailorable pore sizes make MOFs highly promising in a variety of applications, mainly in gas separations. The number of MOFs has been increasing very rapidly, and experimental identification of materials exhibiting high gas separation potential is simply impractical. High-throughput computational screening studies, in which thousands of MOFs are evaluated to identify the best candidates for a target gas separation, are crucial in directing experimental efforts to the most useful materials. In this work, we used molecular simulations to screen the most complete and recent collection of MOFs from the Cambridge Structural Database to unlock their CH4/H2 separation performances. This is the first study in the literature to examine the potential of all existing MOFs for adsorption-based CH4/H2 separation. The 4350 MOFs were ranked based on several adsorbent evaluation metrics, including selectivity, working capacity, adsorbent performance score, sorbent selection parameter, and regenerability. A large number of MOFs were identified to have extraordinarily large CH4/H2 selectivities compared to traditional adsorbents such as zeolites and activated carbons. We examined the relations between structural properties of MOFs, such as pore sizes, porosities, and surface areas, and their selectivities. Correlations between the heat of adsorption, adsorbability, metal type of MOFs, and selectivities were also studied. On the basis of these relations, a simple mathematical model that can predict the CH4/H2 selectivity of MOFs was suggested, which will be very useful in guiding the design and development of new MOFs with extraordinarily high CH4/H2 separation performances.
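
    The ranking metrics can be computed from simulated uptakes. The sketch below uses definitions common in the adsorbent-screening literature; they are assumed here, and the paper's exact formulas may differ:

```python
def adsorbent_metrics(q_ads, q_des, y_feed=(0.5, 0.5)):
    """Common evaluation metrics for CH4/H2 separation from (CH4, H2)
    uptakes in mol/kg at adsorption and desorption pressures.
    Definitions are the widely used ones, assumed for illustration."""
    q_ch4_ads, q_h2_ads = q_ads
    q_ch4_des, q_h2_des = q_des
    selectivity = (q_ch4_ads / q_h2_ads) / (y_feed[0] / y_feed[1])
    working_capacity = q_ch4_ads - q_ch4_des              # CH4 delta loading
    aps = selectivity * working_capacity                  # adsorbent performance score
    regenerability = 100.0 * working_capacity / q_ch4_ads # percent
    return selectivity, working_capacity, aps, regenerability
```

    Ranking thousands of MOFs then reduces to evaluating these few numbers per structure from the simulated adsorption data.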

  15. High throughput, low set-up time reconfigurable linear feedback shift registers

    NARCIS (Netherlands)

    Nas, R.J.M.; Berkel, van C.H.

    2010-01-01

    This paper presents a hardware design for a scalable, high throughput, configurable LFSR. High throughput is achieved by producing L consecutive outputs per clock cycle with a clock cycle period that, for practical cases, increases only logarithmically with the block size L and the length of the
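
    The L-outputs-per-cycle behaviour can be modelled in software by applying the feedback recurrence L times per call. This is a behavioural sketch, not the hardware design; the 4-bit tap choice is just one maximal-length example:

```python
def make_lfsr(n, taps, seed, L):
    """Fibonacci LFSR over GF(2): n-bit register, feedback = XOR of the
    `taps` bit positions, serial output taken from the LSB. Each call of
    the returned function produces the next L output bits, mimicking a
    block-unrolled hardware implementation with L outputs per clock."""
    state = seed
    def step():
        nonlocal state
        out = []
        for _ in range(L):
            fb = 0
            for t in taps:
                fb ^= (state >> t) & 1
            out.append(state & 1)                 # serial output bit
            state = (state >> 1) | (fb << (n - 1))
        return out
    return step

# taps (0, 1) give one maximal-length (period-15) 4-bit configuration
lfsr = make_lfsr(n=4, taps=(0, 1), seed=0b1000, L=5)
```

    In hardware, the L-fold unrolling is what trades a slightly longer critical path for L bits of output per clock edge.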

  16. High-throughput epitope identification for snakebite antivenom

    DEFF Research Database (Denmark)

    Engmark, Mikael; De Masi, Federico; Laustsen, Andreas Hougaard

    Insight into the epitopic recognition pattern for polyclonal antivenoms is a strong tool for accurate prediction of antivenom cross-reactivity and provides a basis for design of novel antivenoms. In this work, a high-throughput approach was applied to characterize linear epitopes in 966 individual toxins from pit vipers (Crotalidae) using the ICP Crotalidae antivenom. Due to an abundance of snake venom metalloproteinases and phospholipase A2s in the venoms used for production of the investigated antivenom, this study focuses on these toxin families.

  17. Testing and Validation of Computational Methods for Mass Spectrometry.

    Science.gov (United States)

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  18. RAPID PROCESSING OF ARCHIVAL TISSUE SAMPLES FOR PROTEOMIC ANALYSIS USING PRESSURE-CYCLING TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Vinuth N. Puttamallesh1,2

    2017-06-01

    Full Text Available The advent of mass spectrometry-based proteomics has revolutionized our ability to study proteins from biological specimens in a high-throughput manner. Unlike cell line-based studies, biomedical research involving tissue specimens is often challenging due to limited sample availability. In addition, investigation of clinically relevant research questions often requires an enormous amount of time for prospective sample collection. Formalin-fixed, paraffin-embedded (FFPE) archived tissue samples are a rich source of tissue specimens for biomedical research. However, there are several challenges associated with analysing FFPE samples; protein cross-linking and degradation particularly affect proteomic analysis. We demonstrate that a barocycler that uses pressure-cycling technology enables efficient protein extraction and processing of small amounts of FFPE tissue samples for proteomic analysis. We identified 3,525 proteins from six 10 µm esophageal squamous cell carcinoma (ESCC) tissue sections. The barocycler allows efficient protein extraction and proteolytic digestion of proteins from FFPE tissue sections on par with conventional methods.

  19. Towards a high throughput droplet-based agglutination assay

    KAUST Repository

    Kodzius, Rimantas; Castro, David; Foulds, Ian G.

    2013-01-01

    This work demonstrates a detection method for a high-throughput droplet-based agglutination assay system. Using simple hydrodynamic forces to mix and aggregate functionalized microbeads, we avoid the need for magnetic assistance or mixing structures. The concentration of our target molecules was estimated by agglutination strength, obtained through optical image analysis. Agglutination in droplets was performed with flow rates of 150 µl/min and occurred in under a minute, with the potential to perform high-throughput measurements. The lowest target concentration detected in droplet microfluidics was 0.17 nM, which is three orders of magnitude more sensitive than a conventional card-based agglutination assay.

  20. Towards a high throughput droplet-based agglutination assay

    KAUST Repository

    Kodzius, Rimantas

    2013-10-22

    This work demonstrates a detection method for a high-throughput droplet-based agglutination assay system. Using simple hydrodynamic forces to mix and aggregate functionalized microbeads, we avoid the need for magnetic assistance or mixing structures. The concentration of our target molecules was estimated by agglutination strength, obtained through optical image analysis. Agglutination in droplets was performed with flow rates of 150 µl/min and occurred in under a minute, with the potential to perform high-throughput measurements. The lowest target concentration detected in droplet microfluidics was 0.17 nM, which is three orders of magnitude more sensitive than a conventional card-based agglutination assay.

  1. Reduced dimensionality (3,2)D NMR experiments and their automated analysis: implications to high-throughput structural studies on proteins.

    Science.gov (United States)

    Reddy, Jithender G; Kumar, Dinesh; Hosur, Ramakrishna V

    2015-02-01

    Protein NMR spectroscopy has expanded dramatically over the last decade into a powerful tool for the study of their structure, dynamics, and interactions. The primary requirement for all such investigations is sequence-specific resonance assignment. The demand now is to obtain this information as rapidly as possible and in all types of protein systems, stable/unstable, soluble/insoluble, small/big, structured/unstructured, and so on. In this context, we introduce here two reduced dimensionality experiments – (3,2)D-hNCOcanH and (3,2)D-hNcoCAnH – which enhance the previously described 2D NMR-based assignment methods quite significantly. Both the experiments can be recorded in just about 2-3 h each and hence would be of immense value for high-throughput structural proteomics and drug discovery research. The applicability of the method has been demonstrated using alpha-helical bovine apo calbindin-D9k P43M mutant (75 aa) protein. Automated assignment of this data using AUTOBA has been presented, which enhances the utility of these experiments. The backbone resonance assignments so derived are utilized to estimate secondary structures and the backbone fold using Web-based algorithms. Taken together, we believe that the method and the protocol proposed here can be used for routine high-throughput structural studies of proteins. Copyright © 2014 John Wiley & Sons, Ltd.

  2. Data reduction for a high-throughput neutron activation analysis system

    International Nuclear Information System (INIS)

    Bowman, W.W.

    1979-01-01

    To analyze samples collected as part of a geochemical survey for the National Uranium Resource Evaluation program, Savannah River Laboratory has installed a high-throughput neutron activation analysis system. As part of that system, computer programs have been developed to reduce raw data to elemental concentrations in two steps. Program RAGS reduces gamma-ray spectra to lists of photopeak energies, peak areas, and statistical errors. Program RICHES determines the elemental concentrations from photopeak and delayed-neutron data, detector efficiencies, analysis parameters (neutron flux and activation, decay, and counting times), and spectrometric and cross-section data from libraries. Both programs have been streamlined for on-line operation with a minicomputer, each requiring approx. 64 kbytes of core. 3 tables

  3. iAB-RBC-283: A proteomically derived knowledge-base of erythrocyte metabolism that can be used to simulate its physiological and patho-physiological states.

    Science.gov (United States)

    Bordbar, Aarash; Jamshidi, Neema; Palsson, Bernhard O

    2011-07-12

    The development of high-throughput technologies capable of whole cell measurements of genes, proteins, and metabolites has led to the emergence of systems biology. Integrated analysis of the resulting omic data sets has proved to be hard to achieve. Metabolic network reconstructions enable complex relationships amongst molecular components to be represented formally in a biologically relevant manner while respecting physical constraints. In silico models derived from such reconstructions can then be queried or interrogated through mathematical simulations. Proteomic profiling studies of the mature human erythrocyte have shown more proteins present related to metabolic function than previously thought; however, the significance and the causal consequences of these findings have not been explored. Erythrocyte proteomic data was used to reconstruct the most expansive description of erythrocyte metabolism to date, following extensive manual curation, assessment of the literature, and functional testing. The reconstruction contains 281 enzymes representing functions from glycolysis to cofactor and amino acid metabolism. Such a comprehensive view of erythrocyte metabolism implicates the erythrocyte as a potential biomarker for different diseases as well as a 'cell-based' drug-screening tool. The analysis shows that 94 erythrocyte enzymes are implicated in morbid single nucleotide polymorphisms, representing 142 pathologies. In addition, over 230 FDA-approved and experimental pharmaceuticals have enzymatic targets in the erythrocyte. The advancement of proteomic technologies and increased generation of high-throughput proteomic data have created the need for a means to analyze these data in a coherent manner. Network reconstructions provide a systematic means to integrate and analyze proteomic data in a biologically meaningful manner. Analysis of the red cell proteome has revealed an unexpected level of complexity in the functional capabilities of human erythrocyte metabolism.

  4. XMRF: an R package to fit Markov Networks to high-throughput genetics data.

    Science.gov (United States)

    Wan, Ying-Wooi; Allen, Genevera I; Baker, Yulia; Yang, Eunho; Ravikumar, Pradeep; Anderson, Matthew; Liu, Zhandong

    2016-08-26

    Technological advances in medicine have led to a rapid proliferation of high-throughput "omics" data. Tools to mine this data and discover disrupted disease networks are needed as they hold the key to understanding complicated interactions between genes, mutations and aberrations, and epi-genetic markers. We developed an R software package, XMRF, that can be used to fit Markov Networks to various types of high-throughput genomics data. Encoding the models and estimation techniques of the recently proposed exponential family Markov Random Fields (Yang et al., 2012), our software can be used to learn genetic networks from RNA-sequencing data (counts via Poisson graphical models), mutation and copy number variation data (categorical via Ising models), and methylation data (continuous via Gaussian graphical models). XMRF is the only tool that allows network structure learning using the native distribution of the data instead of the standard Gaussian. Moreover, the parallelization feature of the implemented algorithms computes the large-scale biological networks efficiently. XMRF is available from CRAN and Github ( https://github.com/zhandong/XMRF ).
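    XMRF itself is an R package, but the idea behind its Gaussian graphical model mode can be sketched briefly: an edge between two variables exists when their partial correlation, controlling for the others, is nonzero. The following is an illustrative Python sketch of a first-order partial correlation (a standard statistic, not XMRF code; the data are hypothetical):

```python
import math

def pearson(x, y):
    """Plain Pearson correlation of two equal-length, non-constant lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """Correlation of x and y after controlling for a single variable z.

    In a Gaussian graphical model, a (near-)zero partial correlation
    suggests no direct edge between x and y."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

Counts and categorical data, which XMRF handles via Poisson and Ising models, need node-wise regressions in the matching exponential family rather than this Gaussian shortcut.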

  5. Using ALFA for high throughput, distributed data transmission in the ALICE O2 system

    Science.gov (United States)

    Wegrzynek, A.; ALICE Collaboration

    2017-10-01

    ALICE (A Large Ion Collider Experiment) is a heavy-ion detector designed to study the physics of strongly interacting matter (the Quark-Gluon Plasma) at the CERN LHC (Large Hadron Collider). ALICE has been successfully collecting physics data in Run 2 since spring 2015. In parallel, preparations for a major upgrade of the computing system, called O2 (Online-Offline) and scheduled for the Long Shutdown 2 in 2019-2020, are being made. One of the major requirements of the system is the capacity to transport data between the so-called FLPs (First Level Processors), equipped with readout cards, and the EPNs (Event Processing Nodes), which perform data aggregation, frame building and partial reconstruction. It is foreseen to have 268 FLPs dispatching data to 1500 EPNs with an average output of 20 Gb/s each. Overall, the O2 processing system will operate at terabits per second of throughput while handling millions of concurrent connections. The ALFA framework will standardize and handle software-related tasks such as readout, data transport, frame building, calibration, online reconstruction and more in the upgraded computing system. ALFA supports two data transport libraries: ZeroMQ and nanomsg. This paper discusses the efficiency of ALFA in terms of high-throughput data transport. The tests were performed with multiple FLPs pushing data to multiple EPNs. The transfer was done using push-pull communication patterns and two socket configurations: bind and connect. The set of benchmarks was prepared to get the most performant results on each hardware setup. The paper presents the measurement process and final results: data throughput combined with computing resource usage as a function of block size. The high number of nodes and connections in the final setup may cause race conditions that can lead to uneven load balancing and poor scalability. The performed tests allow us to validate whether the traffic is distributed evenly over all receivers. It also measures the behaviour of
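    The even-distribution property the benchmarks validate can be illustrated with a small simulation. The sketch below (plain Python, not ALFA code) models the idealized round-robin behaviour of a push-pull pipeline when every receiver keeps up, using the node counts quoted above; the block counts are arbitrary:

```python
from collections import Counter
from itertools import cycle

# Aggregate input: 268 FLPs at ~20 Gb/s each, i.e. ~5.4 Tb/s,
# consistent with the "terabits per second" figure in the abstract.
AGGREGATE_GBPS = 268 * 20

def distribute(n_blocks, receivers):
    """Idealized round-robin dispatch, as a ZeroMQ PUSH socket does
    when all connected PULL peers keep up (a simplifying assumption)."""
    load = Counter()
    ring = cycle(receivers)
    for _ in range(n_blocks):
        load[next(ring)] += 1
    return load

load = distribute(15000, [f"epn{i}" for i in range(1500)])
spread = max(load.values()) - min(load.values())  # 0 means perfectly even
```

In practice a slow receiver breaks this ideal, which is exactly the uneven-load-balancing risk the paper's tests probe.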

  6. High-throughput screening of small molecule libraries using SAMDI mass spectrometry.

    Science.gov (United States)

    Gurard-Levin, Zachary A; Scholle, Michael D; Eisenberg, Adam H; Mrksich, Milan

    2011-07-11

    High-throughput screening is a common strategy used to identify compounds that modulate biochemical activities, but many approaches depend on cumbersome fluorescent reporters or antibodies and often produce false-positive hits. The development of "label-free" assays addresses many of these limitations, but current approaches still lack the throughput needed for applications in drug discovery. This paper describes a high-throughput, label-free assay that combines self-assembled monolayers with mass spectrometry, in a technique called SAMDI, as a tool for screening libraries of 100,000 compounds in one day. This method is fast, has high discrimination, and is amenable to a broad range of chemical and biological applications.

  7. Updates on resources, software tools, and databases for plant proteomics in 2016-2017.

    Science.gov (United States)

    Misra, Biswapriya B

    2018-02-08

    Proteomics data processing, annotation, and analysis can often lead to major hurdles in large-scale high-throughput bottom-up proteomics experiments. Given the recent rise in protein-based big datasets being generated, efforts in in silico tool development have increased at an unprecedented rate, so much so that it has become increasingly difficult to keep track of all the advances in a particular academic year. However, these tools benefit the plant proteomics community in circumventing critical issues in data analysis and visualization, and these continually developing open-source and community-developed tools hold potential for future research efforts. This review aims to introduce and summarize more than 50 software tools, databases, and resources developed and published during 2016-2017 under the following categories: tools for data pre-processing and analysis, statistical analysis tools, peptide identification tools, databases and spectral libraries, and data visualization and interpretation tools. Finally, efforts in data archiving and in validation datasets, intended for a well-informed proteomics community, will be discussed as well. Additionally, the author delineates the current and most commonly used proteomics tools in order to introduce novice readers to this -omics discovery platform. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Preliminary High-Throughput Metagenome Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Dusheyko, Serge; Furman, Craig; Pangilinan, Jasmyn; Shapiro, Harris; Tu, Hank

    2007-03-26

    Metagenome data sets present a qualitatively different assembly problem than traditional single-organism whole-genome shotgun (WGS) assembly. The unique aspects of such projects include the presence of a potentially large number of distinct organisms and their representation in the data set at widely different fractions. In addition, multiple closely related strains could be present, which would be difficult to assemble separately. Failure to take these issues into account can result in poor assemblies that either jumble together different strains or which fail to yield useful results. The DOE Joint Genome Institute has sequenced a number of metagenomic projects and plans to considerably increase this number in the coming year. As a result, the JGI has a need for high-throughput tools and techniques for handling metagenome projects. We present the techniques developed to handle metagenome assemblies in a high-throughput environment. This includes a streamlined assembly wrapper, based on the JGI's in-house WGS assembler, Jazz. It also includes the selection of sensible defaults targeted for metagenome data sets, as well as quality control automation for cleaning up the raw results. While analysis is ongoing, we will discuss preliminary assessments of the quality of the assembly results (http://fames.jgi-psf.org).

  9. High-throughput determination of RNA structure by proximity ligation.

    Science.gov (United States)

    Ramani, Vijay; Qiu, Ruolan; Shendure, Jay

    2015-09-01

    We present an unbiased method to globally resolve RNA structures through pairwise contact measurements between interacting regions. RNA proximity ligation (RPL) uses proximity ligation of native RNA followed by deep sequencing to yield chimeric reads with ligation junctions in the vicinity of structurally proximate bases. We apply RPL in both baker's yeast (Saccharomyces cerevisiae) and human cells and generate contact probability maps for ribosomal and other abundant RNAs, including yeast snoRNAs, the RNA subunit of the signal recognition particle and the yeast U2 spliceosomal RNA homolog. RPL measurements correlate with established secondary structures for these RNA molecules, including stem-loop structures and long-range pseudoknots. We anticipate that RPL will complement the current repertoire of computational and experimental approaches in enabling the high-throughput determination of secondary and tertiary RNA structures.
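    The step from chimeric reads to a contact map can be sketched simply: each ligation junction contributes a count to a symmetric, binned matrix. Below is an illustrative Python sketch (the binning scheme and the toy coordinates are our own, not from the paper):

```python
def contact_map(junctions, length, bin_size=10):
    """Accumulate ligation junctions (pairs of base positions on one RNA)
    into a symmetric, binned contact-count matrix."""
    n = (length + bin_size - 1) // bin_size   # number of bins, rounded up
    counts = [[0] * n for _ in range(n)]
    for i, j in junctions:
        bi, bj = i // bin_size, j // bin_size
        counts[bi][bj] += 1
        if bi != bj:
            counts[bj][bi] += 1               # keep the matrix symmetric
    return counts

# Toy junctions on a 100-nt RNA: a stem pairing ~position 5 with ~position 95
reads = [(5, 95), (6, 94), (4, 96), (50, 52)]
cmap = contact_map(reads, length=100)
```

Normalizing each row by its total would turn these counts into the contact probabilities the abstract describes.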

  10. Quantitative high-throughput profiling of snake venom gland transcriptomes and proteomes (Ovophis okinavensis and Protobothrops flavoviridis)

    Science.gov (United States)

    2013-01-01

    Background Advances in DNA sequencing and proteomics have facilitated quantitative comparisons of snake venom composition. Most studies have employed one approach or the other. Here, both Illumina cDNA sequencing and LC/MS were used to compare the transcriptomes and proteomes of two pit vipers, Protobothrops flavoviridis and Ovophis okinavensis, which differ greatly in their biology. Results Sequencing of venom gland cDNA produced 104,830 transcripts. The Protobothrops transcriptome contained transcripts for 103 venom-related proteins, while the Ovophis transcriptome contained 95. In both, transcript abundances spanned six orders of magnitude. Mass spectrometry identified peptides from 100% of transcripts that occurred at higher than contaminant (e.g. human keratin) levels, including a number of proteins never before sequenced from snakes. These transcriptomes reveal fundamentally different envenomation strategies. Adult Protobothrops venom promotes hemorrhage, hypotension, incoagulable blood, and prey digestion, consistent with mammalian predation. Ovophis venom composition is less readily interpreted, owing to insufficient pharmacological data for venom serine and metalloproteases, which comprise more than 97.3% of Ovophis transcripts, but only 38.0% of Protobothrops transcripts. Ovophis venom apparently represents a hybrid strategy optimized for frogs and small mammals. Conclusions This study illustrates the power of cDNA sequencing combined with MS profiling. The former quantifies transcript composition, allowing detection of novel proteins, but cannot indicate which proteins are actually secreted, as does MS. We show, for the first time, that transcript and peptide abundances are correlated. This means that MS can be used for quantitative, non-invasive venom profiling, which will be beneficial for studies of endangered species. PMID:24224955
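    Because both transcript and peptide abundances span several orders of magnitude, the transcript-peptide correlation the authors report is naturally assessed on ranks. A minimal Python sketch of a Spearman rank correlation (a standard method, not the authors' code) could look like:

```python
def rankdata(values):
    """Ranks of values, with ties sharing their average rank (1-based)."""
    order = sorted(range(len(values)), key=lambda k: values[k])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                          # extend over a run of ties
        avg = (i + j) / 2 + 1               # average of tied positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    m = (n + 1) / 2                         # mean rank is always (n+1)/2
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var_x = sum((a - m) ** 2 for a in rx)
    var_y = sum((b - m) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5
```

Being rank-based, the statistic is insensitive to the nonlinear scaling between transcript counts and MS peptide intensities.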

  11. Mass Spectrometry–based Proteomic Profiling of Lung Cancer

    Science.gov (United States)

    Ocak, Sebahat; Chaurand, Pierre; Massion, Pierre P.

    2009-01-01

    In an effort to further our understanding of lung cancer biology and to identify new candidate biomarkers to be used in the management of lung cancer, we need to probe these tissues and biological fluids with tools that address the biology of lung cancer directly at the protein level. Proteins are responsible for the function and phenotype of cells. Cancer cells express proteins that distinguish them from normal cells. Proteomics is defined as the study of the proteome, the complete set of proteins produced by a species, using the technologies of large-scale protein separation and identification. As a result, new technologies are being developed to allow the rapid and systematic analysis of thousands of proteins. The analytical advantages of mass spectrometry (MS), including sensitivity and high throughput, promise to make it a mainstay of novel biomarker discovery to differentiate cancer from normal cells and to identify individuals likely to develop lung cancer or to experience recurrence. In this review, we summarize the progress made in clinical proteomics as it applies to the management of lung cancer. We will focus our discussion on how MS approaches may advance the areas of early detection, response to therapy, and prognostic evaluation. PMID:19349484

  12. Continuing Development of Alternative High-Throughput Screens to Determine Endocrine Disruption, Focusing on Androgen Receptor, Steroidogenesis, and Thyroid Pathways

    Science.gov (United States)

    The focus of this meeting is the SAP's review and comment on the Agency's proposed high-throughput computational model of androgen receptor pathway activity as an alternative to the current Tier 1 androgen receptor assay (OCSPP 890.1150: Androgen Receptor Binding Rat Prostate Cyt...

  13. High-throughput transformation of Saccharomyces cerevisiae using liquid handling robots.

    Directory of Open Access Journals (Sweden)

    Guangbo Liu

    Full Text Available Saccharomyces cerevisiae (budding yeast) is a powerful eukaryotic model organism ideally suited to high-throughput genetic analyses, which time and again has yielded insights that further our understanding of cell biology processes conserved in humans. Lithium acetate (LiAc) transformation of yeast with DNA for the purposes of exogenous protein expression (e.g., plasmids) or genome mutation (e.g., gene mutation, deletion, epitope tagging) is a useful and long-established method. However, a reliable and optimized high-throughput transformation protocol that runs almost no risk of human error has not been described in the literature. Here, we describe such a method that is broadly transferable to most liquid-handling high-throughput robotic platforms, which are now commonplace in academic and industry settings. Using our optimized method, we are able to comfortably transform approximately 1200 individual strains per day, allowing complete transformation of typical genomic yeast libraries within 6 days. In addition, use of our protocol for gene knockout purposes also provides a potentially quicker, easier and more cost-effective approach to generating collections of double mutants than the popular and elegant synthetic genetic array methodology. In summary, our methodology will be of significant use to anyone interested in high-throughput molecular and/or genetic analysis of yeast.

  14. Genecentric: a package to uncover graph-theoretic structure in high-throughput epistasis data.

    Science.gov (United States)

    Gallant, Andrew; Leiserson, Mark D M; Kachalov, Maxim; Cowen, Lenore J; Hescott, Benjamin J

    2013-01-18

    New technology has resulted in high-throughput screens for pairwise genetic interactions in yeast and other model organisms. For each pair in a collection of non-essential genes, an epistasis score is obtained, representing how much sicker (or healthier) the double-knockout organism will be compared to what would be expected from the sickness of the component single knockouts. Recent algorithmic work has identified graph-theoretic patterns in this data that can indicate functional modules, and even sets of genes that may occur in compensatory pathways, such as a BPM-type schema first introduced by Kelley and Ideker. However, to date, any algorithms for finding such patterns in the data were implemented internally, with no software being made publicly available. Genecentric is a new package that implements a parallelized version of the Leiserson et al. algorithm (J Comput Biol 18:1399-1409, 2011) for generating generalized BPMs from high-throughput genetic interaction data. Given a matrix of weighted epistasis values for a set of double knock-outs, Genecentric returns a list of generalized BPMs that may represent compensatory pathways. Genecentric also has an extension, GenecentricGO, to query FuncAssociate (Bioinformatics 25:3043-3044, 2009) to retrieve GO enrichment statistics on generated BPMs. Python is the only dependency, and our web site provides working examples and documentation. We find that Genecentric can be used to find coherent functional and perhaps compensatory gene sets from high-throughput genetic interaction data. Genecentric is made freely available for download under the GPLv2 from http://bcb.cs.tufts.edu/genecentric.
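    The epistasis scores such screens produce are commonly defined against a multiplicative neutral model: ε = W_ab − W_a·W_b, where the W values are single- and double-knockout fitnesses. A minimal sketch of that definition (standard in the field; the cutoff value is illustrative, not taken from Genecentric):

```python
def epistasis_score(w_ab, w_a, w_b):
    """Deviation of double-knockout fitness from the multiplicative
    expectation: negative = sicker than expected (aggravating),
    positive = healthier than expected (alleviating)."""
    return w_ab - w_a * w_b

def classify(eps, cutoff=0.08):
    """Bucket an epistasis score; the cutoff here is a hypothetical choice."""
    if eps <= -cutoff:
        return "aggravating"
    if eps >= cutoff:
        return "alleviating"
    return "neutral"
```

A matrix of such scores over all gene pairs is exactly the input Genecentric mines for BPM-type patterns.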

  15. Molecular system analysis, multidimensional, dynamic, ultra-sensitive exploration of proteomes

    International Nuclear Information System (INIS)

    Schrattenholz, A.; Soskic, V.; Stegmann, W.; Schroer, K.; Godovac-Zimmermann, J.; Cabuk, A.; Pejovic, V.; Wozny, W.; Cahill, M.A.; Drukier, A.K.; Volkovitsky, P.

    2001-01-01

    ProteoSys AG's holistic proteomics strategy extends beyond classical proteome research as a new paradigm. Our concept of multidimensional molecular systems analysis of complex model systems employs the innovative ProteoDyn™ approach. This enables us to correlate dynamic changes of proteomes with their biophysical and biochemical environment. Our supersensitive Multi Photon Detection (MPD) technology enables ultra-sensitive detection of proteins, deep into the low-abundance domain. Our technology platform includes the affinity analysis of phospho- and glyco-proteomes, and with our 'fish hook' methods we can capture and fully characterize even serpentine G-coupled receptors and associated proteins, including routine comprehensive post-translational analyses performed by a well-equipped mass spectrometry group. Throughput and quality are obtained by automation and high-end robotics, with data management handled by a dedicated bioinformatics department. Thus ProteoSys AG has a range of state-of-the-art and proprietary tools at its disposal to analyse even the most difficult complex model systems. MPD is an isotopic detection method proprietary to ProteoSys. For MPD analysis we have implemented protocols where over 99% of proteins can be iodinated, and where the iodinated proteins can be identified by mass spectrometry. Because MPD measures the energy of detected particles, it can discriminate between signals originating from different isotopes co-electrophoresed by 2D-PAGE. Thus MPD imagers have a 'multicolour' functionality suitable for differential display and improved throughput, eliminating inter-gel variations. Importantly, MPD opens up not only the world of detection of low-abundance proteins, but also identification and characterization. Radioactive low-abundance protein spots containing less than one attomole of protein can be excised from a 2D-gel, mixed with unlabelled proteins, and 'tracked' by MPD. The identity of the labeled protein is determined by

  16. A high-throughput multiplex method adapted for GMO detection.

    Science.gov (United States)

    Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique

    2008-12-24

    A high-throughput multiplex assay for the detection of genetically modified organisms (GMOs) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") drawn from endogenous taxon reference genes, from GMO constructs (screening, construct-specific, and event-specific targets), and finally from donor organisms. This assay avoids certain shortcomings of the multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.

  17. The Perseus computational platform for comprehensive analysis of (prote)omics data.

    Science.gov (United States)

    Tyanova, Stefka; Temu, Tikira; Sinitcyn, Pavel; Carlson, Arthur; Hein, Marco Y; Geiger, Tamar; Mann, Matthias; Cox, Jürgen

    2016-09-01

    A main bottleneck in proteomics is the downstream biological analysis of highly multivariate quantitative protein abundance data generated using mass-spectrometry-based analysis. We developed the Perseus software platform (http://www.perseus-framework.org) to support biological and biomedical researchers in interpreting protein quantification, interaction and post-translational modification data. Perseus contains a comprehensive portfolio of statistical tools for high-dimensional omics data analysis covering normalization, pattern recognition, time-series analysis, cross-omics comparisons and multiple-hypothesis testing. A machine learning module supports the classification and validation of patient groups for diagnosis and prognosis, and it also detects predictive protein signatures. Central to Perseus is a user-friendly, interactive workflow environment that provides complete documentation of computational methods used in a publication. All activities in Perseus are realized as plugins, and users can extend the software by programming their own, which can be shared through a plugin store. We anticipate that Perseus's arsenal of algorithms and its intuitive usability will empower interdisciplinary analysis of complex large data sets.
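    Among the multiple-hypothesis testing tools a platform like Perseus provides, Benjamini-Hochberg FDR control is the workhorse for protein-level significance calls. A minimal Python sketch of the standard step-up procedure (a textbook method, not Perseus code):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR control).

    Each adjusted value is p * m / rank, made monotone by taking a
    running minimum from the largest p-value downward."""
    m = len(pvals)
    order = sorted(range(m), key=lambda k: pvals[k])
    adjusted = [0.0] * m
    prev = 1.0
    for rank_from_end, idx in enumerate(reversed(order)):
        rank = m - rank_from_end            # 1-based rank of this p-value
        prev = min(prev, pvals[idx] * m / rank)
        adjusted[idx] = prev
    return adjusted
```

Proteins whose adjusted p-value falls below the chosen FDR threshold (say 0.05) are reported as significant.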

  18. Development and operation of a high-throughput accurate-wavelength lens-based spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Ronald E., E-mail: rbell@pppl.gov [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)

    2014-11-15

    A high-throughput spectrometer for the 400–820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm⁻¹ grating is matched with fast f/1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy ≤0.075 arc sec. A high quantum efficiency low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount at the entrance slit. Computer-controlled hardware allows automated control of wavelength, timing, f-number, automated data collection, and wavelength calibration.
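    For readers unfamiliar with grating spectrometers, the geometry follows the standard grating equation m·λ = d·(sin α + sin β). The sketch below plugs in the stated 2160 mm⁻¹ groove density; the incidence angle in the example is arbitrary, since the instrument's actual mount geometry is not given here:

```python
import math

GROOVE_DENSITY = 2160e3          # grooves per metre (2160 mm^-1)
D = 1.0 / GROOVE_DENSITY         # groove spacing in metres (~463 nm)

def diffraction_angle(wavelength_nm, incidence_deg, order=1):
    """Solve the grating equation m*lambda = d*(sin alpha + sin beta)
    for the diffraction angle beta, in degrees."""
    lam = wavelength_nm * 1e-9
    s = order * lam / D - math.sin(math.radians(incidence_deg))
    if abs(s) > 1:
        raise ValueError("no propagating diffracted order at this geometry")
    return math.degrees(math.asin(s))
```

In first order the equation caps the reachable wavelength at 2d ≈ 925 nm, comfortably covering the instrument's 400–820 nm range.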

  19. Contribution of proteomics to the study of plant pathogenic fungi.

    Science.gov (United States)

    Gonzalez-Fernandez, Raquel; Jorrin-Novo, Jesus V

    2012-01-01

    Phytopathogenic fungi are one of the most damaging plant parasitic organisms, and can cause serious diseases and important yield losses in crops. The study of the biology of these microorganisms and the interaction with their hosts has experienced great advances in recent years due to the development of modern, holistic and high-throughput -omic techniques, together with the increasing number of genome sequencing projects and the development of mutants and reverse genetics tools. We highlight among these -omic techniques the importance of proteomics, which has become a relevant tool in plant-fungus pathosystem research. Proteomics aims to identify gene products with a key role in pathogenicity and virulence. These studies would help in the search for key protein targets and in the development of agrochemicals, which may open new ways for crop disease diagnosis and protection. In this review, we provide an overview of the contribution of proteomics to the knowledge of the life cycle, infection mechanisms, and virulence of plant pathogenic fungi. Data from the current literature are summarized and discussed, organized by both methodology and experimental system. Specific sections are devoted to the most studied fungal phytopathogens: Botrytis cinerea, Sclerotinia sclerotiorum, and Fusarium graminearum.

  20. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    Science.gov (United States)

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  1. A community proposal to integrate proteomics activities in ELIXIR.

    Science.gov (United States)

    Vizcaíno, Juan Antonio; Walzer, Mathias; Jiménez, Rafael C; Bittremieux, Wout; Bouyssié, David; Carapito, Christine; Corrales, Fernando; Ferro, Myriam; Heck, Albert J R; Horvatovich, Peter; Hubalek, Martin; Lane, Lydie; Laukens, Kris; Levander, Fredrik; Lisacek, Frederique; Novak, Petr; Palmblad, Magnus; Piovesan, Damiano; Pühler, Alfred; Schwämmle, Veit; Valkenborg, Dirk; van Rijswijk, Merlijn; Vondrasek, Jiri; Eisenacher, Martin; Martens, Lennart; Kohlbacher, Oliver

    2017-01-01

    Computational approaches have been major drivers behind the progress of proteomics in recent years. The aim of this white paper is to provide a framework for integrating computational proteomics into ELIXIR in the near future, and thus to broaden the portfolio of omics technologies supported by this European distributed infrastructure. This white paper is the direct result of a strategy meeting on 'The Future of Proteomics in ELIXIR' that took place in March 2017 in Tübingen (Germany), and involved representatives of eleven ELIXIR nodes. These discussions led to a list of priority areas in computational proteomics that would complement existing activities and close gaps in the portfolio of tools and services offered by ELIXIR so far. We provide some suggestions on how these activities could be integrated into ELIXIR's existing platforms, and how it could lead to a new ELIXIR use case in proteomics. We also highlight connections to the related field of metabolomics, where similar activities are ongoing. This white paper could thus serve as a starting point for the integration of computational proteomics into ELIXIR. Over the next few months we will be working closely with all stakeholders involved, and in particular with other representatives of the proteomics community, to further refine this paper.

  2. High Throughput Analysis of Photocatalytic Water Purification

    NARCIS (Netherlands)

    Sobral Romao, J.I.; Baiao Barata, David; Habibovic, Pamela; Mul, Guido; Baltrusaitis, Jonas

    2014-01-01

    We present a novel high throughput photocatalyst efficiency assessment method based on 96-well microplates and UV-Vis spectroscopy. We demonstrate the reproducibility of the method using methyl orange (MO) decomposition, and compare kinetic data obtained with those provided in the literature for
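    Methyl orange decolorization is typically reported as apparent first-order kinetics, with k obtained from the slope of ln(A/A₀) versus time. A minimal Python sketch of that fit (an illustrative standard analysis, not the authors' code; the data in the test are synthetic):

```python
import math

def first_order_rate(times, absorbances):
    """Apparent first-order rate constant k (positive) from a
    least-squares fit of ln(A/A0) versus time."""
    y = [math.log(a / absorbances[0]) for a in absorbances]
    n = len(times)
    mt, my = sum(times) / n, sum(y) / n
    slope = (sum((t - mt) * (v - my) for t, v in zip(times, y))
             / sum((t - mt) ** 2 for t in times))
    return -slope    # decay slope is negative; report k as positive
```

With 96-well UV-Vis readings, one such fit per well gives a rate constant per photocatalyst candidate, which is what makes the assessment high-throughput.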

  3. Community and Proteomic Analysis of Anaerobic Consortia Converting Tetramethylammonium to Methane

    Directory of Open Access Journals (Sweden)

    Wei-Yu Chen

    2017-01-01

    Full Text Available Tetramethylammonium-degrading methanogenic consortia from a complete-mixing suspended sludge (CMSS) reactor and an upflow anaerobic sludge blanket (UASB) reactor were studied using multiple PCR-based molecular techniques and a shotgun proteomic approach. The prokaryotic 16S rRNA genes of the consortia were analyzed by quantitative PCR, high-throughput sequencing, and DGGE-cloning methods. The results showed that methanogenic archaea were highly predominant in both reactors but differed markedly in community structure. Community and proteomic analysis revealed that Methanomethylovorans and Methanosarcina were the major players for the demethylation of methylated substrates and methane formation through the reduction pathway of methyl-S-CoM and possibly, acetyl-CoA synthase/decarbonylase-related pathways. Unlike the high dominance of one Methanomethylovorans population in the CMSS reactor, diverse methylotrophic Methanosarcina species inhabited the granular sludge of the UASB reactor in syntrophy-like association with hydrogenotrophic Methanobacterium. The overall findings indicated the reactor-dependent community structures of quaternary amine degradation and provided microbial insight for an improved understanding of engineering applications.

  4. Community and Proteomic Analysis of Anaerobic Consortia Converting Tetramethylammonium to Methane

    Science.gov (United States)

    Chen, Wei-Yu; Kraková, Lucia; Pangallo, Domenico; Jeszeová, Lenka; Liu, Bing; Yasui, Hidenari

    2017-01-01

    Tetramethylammonium-degrading methanogenic consortia from a complete-mixing suspended sludge (CMSS) reactor and an upflow anaerobic sludge blanket (UASB) reactor were studied using multiple PCR-based molecular techniques and a shotgun proteomic approach. The prokaryotic 16S rRNA genes of the consortia were analyzed by quantitative PCR, high-throughput sequencing, and DGGE-cloning methods. The results showed that methanogenic archaea were highly predominant in both reactors but differed markedly in community structure. Community and proteomic analysis revealed that Methanomethylovorans and Methanosarcina were the major players for the demethylation of methylated substrates and methane formation through the reduction pathway of methyl-S-CoM and possibly, acetyl-CoA synthase/decarbonylase-related pathways. Unlike the high dominance of one Methanomethylovorans population in the CMSS reactor, diverse methylotrophic Methanosarcina species inhabited the granular sludge of the UASB reactor in syntrophy-like association with hydrogenotrophic Methanobacterium. The overall findings indicated the reactor-dependent community structures of quaternary amine degradation and provided microbial insight for an improved understanding of engineering applications. PMID:29391857

  5. High Throughput Synthesis and Screening for Agents Inhibiting Androgen Receptor Mediated Gene Transcription

    National Research Council Canada - National Science Library

    Boger, Dale L

    2005-01-01

    .... This entails the high throughput synthesis of DNA binding agents related to distamycin, their screening for binding to androgen response elements using a new high throughput DNA binding screen...

  6. High Throughput Synthesis and Screening for Agents Inhibiting Androgen Receptor Mediated Gene Transcription

    National Research Council Canada - National Science Library

    Boger, Dale

    2004-01-01

    .... This entails the high throughput synthesis of DNA binding agents related to distamycin, their screening for binding to androgen response elements using a new high throughput DNA binding screen...

  7. Five years later: the current status of the use of proteomics and transcriptomics in EMF research.

    Science.gov (United States)

    Leszczynski, Dariusz; de Pomerai, David; Koczan, Dirk; Stoll, Dieter; Franke, Helmut; Albar, Juan Pablo

    2012-08-01

    The World Health Organization's and the Radiation and Nuclear Safety Authority's "Workshop on Application of Proteomics and Transcriptomics in Electromagnetic Fields Research" was held in Helsinki in October/November 2005. As a consequence of this meeting, the journal Proteomics published a special issue in 2006, "Application of Proteomics and Transcriptomics in EMF Research" (Vol. 6, No. 17; Guest Editor: D. Leszczynski). That issue presented the status, as of 2005, of research on the effects of electromagnetic fields (EMF) using proteomics and transcriptomics methods. The current overview/opinion article presents the status of research in this area by reviewing all studies that were published by the end of 2010. The review work was part of the European Cooperation in the Field of Scientific and Technical Research (COST) Action BM0704, which created a structure in which researchers in the field of EMF and health shared knowledge and information. The review was prepared by the members of the COST Action BM0704 task group on high-throughput screening techniques and electromagnetic fields (TG-HTST-EMF). © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. A CRISPR CASe for High-Throughput Silencing

    Directory of Open Access Journals (Sweden)

    Jacob eHeintze

    2013-10-01

    Full Text Available Manipulation of gene expression on a genome-wide level is one of the most important systematic tools in the post-genome era. Such manipulations have largely been enabled by expression cloning approaches using sequence-verified cDNA libraries, large-scale RNA interference libraries (shRNA or siRNA) and zinc finger nuclease technologies. More recently, the CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) and CRISPR-associated (Cas9)-mediated gene editing technology has been described that holds great promise for genomic manipulation. It has been suggested that the CRISPR system has the potential to be used in high-throughput, large-scale loss-of-function screening. Here we discuss some of the challenges in engineering CRISPR/Cas genomic libraries and some of the aspects that need to be addressed in order to use this technology on a high-throughput scale.

  9. High-Throughput Fabrication of Nanocomplexes Using 3D-Printed Micromixers

    DEFF Research Database (Denmark)

    Bohr, Adam; Boetker, Johan; Wang, Yingya

    2017-01-01

    3D printing allows rapid and inexpensive manufacturing of custom-made and prototype devices. Micromixers are used for rapid and controlled production of nanoparticles intended for therapeutic delivery. In this study, we demonstrate the fabrication of micromixers using computational design and 3D printing, which enables continuous and industrial-scale production of nanocomplexes formed by electrostatic complexation, using the polymers poly(diallyldimethylammonium chloride) and poly(sodium 4-styrenesulfonate). Several parameters including polymer concentration, flow rate, and flow ratio were ... via bulk mixing. Moreover, each micromixer could process more than 2 liters per hour with unaffected performance, and the setup could easily be scaled up by aligning several micromixers in parallel. This demonstrates that 3D printing can be used to prepare disposable high-throughput micromixers ...

  10. msBiodat analysis tool, big data analysis for high-throughput experiments.

    Science.gov (United States)

    Muñoz-Torres, Pau M; Rokć, Filip; Belužic, Robert; Grbeša, Ivana; Vugrek, Oliver

    2016-01-01

    Mass spectrometry (MS) covers a group of high-throughput techniques used to increase knowledge about biomolecules. These techniques produce large amounts of data, typically presented as lists of hundreds or thousands of proteins. Filtering those data efficiently is the first step in extracting biologically relevant information. The filtering can be enriched by merging previous data with data obtained from public databases, resulting in an accurate list of proteins that meet the predetermined conditions. In this article we present msBiodat Analysis Tool, a web-based application designed to bring proteomics closer to big data analysis. With this tool, researchers can easily select the most relevant information from their MS experiments using an easy-to-use web interface. An interesting feature of the msBiodat analysis tool is the possibility of selecting proteins by their Gene Ontology annotation using Gene ID, Ensembl or UniProt codes. The msBiodat analysis tool is a web-based application that allows researchers with any level of programming experience to take advantage of efficient database querying. Its versatility and user-friendly interface make it easy to perform fast and accurate data screening using complex queries. Once the analysis is finished, the result is delivered by e-mail. The msBiodat analysis tool is freely available at http://msbiodata.irb.hr.
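
    The annotation-based filtering such a tool performs can be sketched in a few lines. This is an illustration only, not msBiodat code; the accessions and GO terms below are placeholder values:

```python
# Sketch: keep only proteins from an MS hit list whose Gene Ontology
# annotations contain a queried term. All data here is illustrative.

def filter_by_go(ms_hits, annotations, go_term):
    """Return accessions from ms_hits annotated with the given GO term."""
    return [acc for acc in ms_hits if go_term in annotations.get(acc, set())]

ms_hits = ["P69905", "P68871", "P02768"]      # example accession list
annotations = {                               # illustrative GO annotations
    "P69905": {"GO:0005344", "GO:0015671"},
    "P68871": {"GO:0005344"},
    "P02768": {"GO:0006810"},
}

print(filter_by_go(ms_hits, annotations, "GO:0005344"))  # ['P69905', 'P68871']
```

    In practice such a filter would sit on top of a database query joining the MS results against the annotation tables, but the set-membership logic is the same.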

  11. High-throughput materials discovery and development: breakthroughs and challenges in the mapping of the materials genome

    Science.gov (United States)

    Buongiorno Nardelli, Marco

    High-Throughput Quantum-Mechanics computation of materials properties by ab initio methods has become the foundation of an effective approach to materials design, discovery and characterization. This data-driven approach to materials science currently presents the most promising path to the development of advanced technological materials that could solve or mitigate important social and economic challenges of the 21st century. In particular, the rapid proliferation of computational data on materials properties presents the possibility to complement and extend materials property databases where the experimental data is lacking and difficult to obtain. Enhanced repositories such as AFLOWLIB open novel opportunities for structure discovery and optimization, including the uncovering of unsuspected compounds, metastable structures and correlations between various properties. The practical realization of these opportunities depends almost exclusively on the design of efficient algorithms for electronic structure simulations of realistic material systems beyond the limitations of the current standard theories. In this talk, I will review recent progress in theoretical and computational tools and, in particular, discuss the development and validation of novel functionals within Density Functional Theory and of local basis representations for effective ab-initio tight-binding schemes. Marco Buongiorno Nardelli is a pioneer in the development of computational platforms for theory/data/applications integration, rooted in his profound and extensive expertise in the design of electronic structure codes and in his vision for sustainable and innovative software development for high-performance materials simulations. His research activities range from the design and discovery of novel materials for 21st-century applications in renewable energy, environment, nano-electronics and devices, to the development of advanced electronic structure theories and high-throughput techniques in

  12. High throughput route selection in multi-rate wireless mesh networks

    Institute of Scientific and Technical Information of China (English)

    WEI Yi-fei; GUO Xiang-li; SONG Mei; SONG Jun-de

    2008-01-01

    Most existing ad-hoc routing protocols use the shortest-path algorithm with a hop-count metric to select paths. This is appropriate in single-rate wireless networks, but in multi-rate networks it tends to select paths containing long-distance links that have low data rates and reduced reliability. This article introduces a high-throughput routing algorithm utilizing the multi-rate capability and some mesh characteristics of wireless fidelity (WiFi) mesh networks. It uses the medium access control (MAC) transmission time as the routing metric, estimated from information passed up from the physical layer. When the proposed algorithm is adopted, Ad-hoc On-demand Distance Vector (AODV) routing can be improved into high-throughput AODV (HT-AODV). Simulation results show that HT-AODV is capable of establishing routes with high data rates, short end-to-end delay and high network throughput.
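
    The difference between a hop-count metric and a transmission-time metric can be sketched as follows. This is a minimal illustration, not the paper's implementation; the topology, link rates and packet size are invented:

```python
# Dijkstra over a small mesh, once with a hop-count metric and once
# with a per-link transmission-time metric (bits / link rate).
import heapq

def shortest_path(graph, src, dst, weight):
    """graph[u] = {v: link_rate_bps}; weight(rate) gives the edge cost."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, rate in graph[u].items():
            nd = d + weight(rate)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# One long, slow link (A-C at 1 Mbit/s) vs. two short, fast hops (54 Mbit/s).
graph = {
    "A": {"B": 54e6, "C": 1e6},
    "B": {"A": 54e6, "C": 54e6},
    "C": {"A": 1e6, "B": 54e6},
}
PKT = 8000  # bits per packet

print(shortest_path(graph, "A", "C", lambda rate: 1))           # ['A', 'C']
print(shortest_path(graph, "A", "C", lambda rate: PKT / rate))  # ['A', 'B', 'C']
```

    The hop-count metric picks the single slow link; the transmission-time metric pays two hops but roughly 27x less airtime, which is the effect the HT-AODV metric exploits.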

  13. High-throughput phenotyping and genomic selection: the frontiers of crop breeding converge.

    Science.gov (United States)

    Cabrera-Bosquet, Llorenç; Crossa, José; von Zitzewitz, Jarislav; Serret, María Dolors; Araus, José Luis

    2012-05-01

    Genomic selection (GS) and high-throughput phenotyping have recently been captivating the interest of the crop breeding community from both the public and private sectors world-wide. Both approaches promise to revolutionize the prediction of complex traits, including growth, yield and adaptation to stress. Whereas high-throughput phenotyping may help to improve understanding of crop physiology, most powerful techniques for high-throughput field phenotyping are empirical rather than analytical and comparable to genomic selection. Despite the fact that the two methodological approaches represent the extremes of what is understood as the breeding process (phenotype versus genome), they both consider the targeted traits (e.g. grain yield, growth, phenology, plant adaptation to stress) as a black box instead of dissecting them as a set of secondary traits (i.e. physiological) putatively related to the target trait. Both GS and high-throughput phenotyping have in common their empirical approach enabling breeders to use genome profile or phenotype without understanding the underlying biology. This short review discusses the main aspects of both approaches and focuses on the case of genomic selection of maize flowering traits and near-infrared spectroscopy (NIRS) and plant spectral reflectance as high-throughput field phenotyping methods for complex traits such as crop growth and yield. © 2012 Institute of Botany, Chinese Academy of Sciences.

  14. High-throughput single nucleotide polymorphism genotyping using nanofluidic Dynamic Arrays

    Directory of Open Access Journals (Sweden)

    Crenshaw Andrew

    2009-01-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) have emerged as the genetic marker of choice for mapping disease loci and candidate gene association studies, because of their high density and relatively even distribution in the human genome. There is a need for systems allowing medium multiplexing (ten to hundreds of SNPs) with high throughput, which can efficiently and cost-effectively generate genotypes for a very large sample set (thousands of individuals). Methods that are flexible, fast, accurate and cost-effective are urgently needed. This is also important for those who work on high-throughput genotyping in non-model systems, where off-the-shelf assays are not available and a flexible platform is needed. Results We demonstrate the use of a nanofluidic Integrated Fluidic Circuit (IFC)-based genotyping system for medium-throughput multiplexing known as the Dynamic Array, by genotyping 994 individual human DNA samples on 47 different SNP assays, using nanoliter volumes of reagents. Call rates of greater than 99.5% and call accuracies of greater than 99.8% were achieved in our study, which demonstrates that this is a formidable genotyping platform. The experimental set-up is very simple, with a time-to-result for each sample of about 3 hours. Conclusion Our results demonstrate that the Dynamic Array is an excellent genotyping system for medium-throughput multiplexing (30-300 SNPs), which is simple to use and combines rapid throughput with excellent call rates, high concordance and low cost. The exceptional call rates and call accuracy obtained may be of particular interest to those working on validation and replication of genome-wide association (GWA) studies.

  15. Bioinformatical Analysis of Organ-Related (Heart, Brain, Liver, and Kidney) and Serum Proteomic Data to Identify Protein Regulation Patterns and Potential Sepsis Biomarkers

    Directory of Open Access Journals (Sweden)

    Andreas Hohn

    2018-01-01

    Full Text Available During the last years, proteomic studies have revealed several interesting findings in experimental sepsis models and septic patients. However, most studies investigated protein alterations only in single organs or in whole blood. To identify possible sepsis biomarkers and to evaluate the relationship between protein alterations in sepsis-affected organs and blood, proteomics data from the heart, brain, liver, kidney, and serum were analysed. Using functional network analyses in combination with hierarchical cluster analysis, we found that protein regulation patterns in organ tissues as well as in serum are highly dynamic. In the tissue proteome, the main functions and pathways affected were oxidoreductive activity, cell energy generation, and metabolism, whereas in the serum proteome, functions were associated with lipoprotein metabolism and, to a minor extent, with coagulation, inflammatory response, and organ regeneration. Proteins from network analyses of organ tissue did not correlate with statistically significantly regulated serum proteins or with predicted proteins of serum functions. In this study, the combination of proteomic network analyses with cluster analyses is introduced as an approach to deal with high-throughput proteomics data and to evaluate the dynamics of protein regulation during sepsis.

  16. SINA: accurate high-throughput multiple sequence alignment of ribosomal RNA genes.

    Science.gov (United States)

    Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver

    2012-07-15

    In the analysis of homologous sequences, computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA), where millions of sequences are already publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. In this study, we present the SILVA Incremental Aligner (SINA) used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high-throughput performance demands. SINA was evaluated in comparison with the commonly used high-throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3%, 97.6% and 96.1% accuracy. A larger benchmark MSA comprising 38,772 sequences could be reproduced with 98.9% and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences. SINA was able to achieve higher accuracy than PyNAST and mothur in all performed benchmarks. Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, user manual and tutorial. SINA is made available under a personal use license.
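
    The k-mer search idea, used to pick candidate reference sequences before the alignment step, can be illustrated with a toy index. The sequences below are invented and this is not SINA's code:

```python
# Toy k-mer search: rank reference sequences by the number of k-mers
# they share with a query. Real aligners use far larger k and indexes.

def kmers(seq, k=4):
    """Set of all overlapping k-mers of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_references(query, refs, k=4):
    """Reference names sorted by shared-k-mer count, best first."""
    q = kmers(query, k)
    scores = {name: len(q & kmers(s, k)) for name, s in refs.items()}
    return sorted(scores, key=scores.get, reverse=True)

refs = {
    "ref1": "ACGTACGTTGCA",   # similar to the query
    "ref2": "TTTTGGGGCCCC",   # unrelated
}
print(best_references("ACGTACGTAGCA", refs))  # ['ref1', 'ref2']
```

    The shared-k-mer count is only a crude similarity proxy, but it is cheap enough to scan millions of references, after which an exact method such as POA is applied to the few best hits.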

  17. HTTK: R Package for High-Throughput Toxicokinetics

    Science.gov (United States)

    Thousands of chemicals have been profiled by high-throughput screening programs such as ToxCast and Tox21; these chemicals are tested in part because most of them have limited or no data on hazard, exposure, or toxicokinetics. Toxicokinetic models aid in predicting tissue concent...

  18. iAB-RBC-283: A proteomically derived knowledge-base of erythrocyte metabolism that can be used to simulate its physiological and patho-physiological states

    Directory of Open Access Journals (Sweden)

    Palsson Bernhard O

    2011-07-01

    Full Text Available Abstract Background The development of high-throughput technologies capable of whole cell measurements of genes, proteins, and metabolites has led to the emergence of systems biology. Integrated analysis of the resulting omic data sets has proved to be hard to achieve. Metabolic network reconstructions enable complex relationships amongst molecular components to be represented formally in a biologically relevant manner while respecting physical constraints. In silico models derived from such reconstructions can then be queried or interrogated through mathematical simulations. Proteomic profiling studies of the mature human erythrocyte have shown more proteins present related to metabolic function than previously thought; however the significance and the causal consequences of these findings have not been explored. Results Erythrocyte proteomic data was used to reconstruct the most expansive description of erythrocyte metabolism to date, following extensive manual curation, assessment of the literature, and functional testing. The reconstruction contains 281 enzymes representing functions from glycolysis to cofactor and amino acid metabolism. Such a comprehensive view of erythrocyte metabolism implicates the erythrocyte as a potential biomarker for different diseases as well as a 'cell-based' drug-screening tool. The analysis shows that 94 erythrocyte enzymes are implicated in morbid single nucleotide polymorphisms, representing 142 pathologies. In addition, over 230 FDA-approved and experimental pharmaceuticals have enzymatic targets in the erythrocyte. Conclusion The advancement of proteomic technologies and increased generation of high-throughput proteomic data have created the need for a means to analyze these data in a coherent manner. Network reconstructions provide a systematic means to integrate and analyze proteomic data in a biologically meaningful manner.
Analysis of the red cell proteome has revealed an unexpected level of complexity in

  19. PCAS – a precomputed proteome annotation database resource

    Directory of Open Access Journals (Sweden)

    Luo Jingchu

    2003-11-01

    Full Text Available Abstract Background Many model proteomes or "complete" sets of proteins of given organisms are now publicly available. Much effort has been invested in computational annotation of those "draft" proteomes. Motif- or domain-based algorithms play a pivotal role in the functional classification of proteins. Employing most available computational algorithms, mainly motif or domain recognition algorithms, we set out to develop an online proteome annotation system with integrated proteome annotation data to complement existing resources. Results We report here the development of PCAS (Protein-Centric Annotation System) as an online resource of pre-computed proteome annotation data. We applied most available motif or domain databases and their analysis methods, including hmmpfam search of HMMs in Pfam, SMART and TIGRFAM, RPS-PSIBLAST search of PSSMs in CDD, pfscan of PROSITE patterns and profiles, as well as PSI-BLAST search of SUPERFAMILY PSSMs. In addition, signal peptides and transmembrane (TM) regions are predicted using SignalP and TMHMM, respectively. We mapped SUPERFAMILY and COGs to InterPro, so the motif or domain databases are integrated through InterPro. PCAS displays table summaries of pre-computed data and a graphical presentation of motifs or domains relative to the protein. As of now, PCAS contains the human IPI, mouse IPI, rat IPI, A. thaliana, C. elegans, D. melanogaster, S. cerevisiae, and S. pombe proteomes. PCAS is available at http://pak.cbi.pku.edu.cn/proteome/gca.php Conclusion PCAS gives better annotation coverage for model proteomes by employing a wider collection of available algorithms. Besides presenting the most confident annotation data, PCAS also allows customized queries so users can inspect statistically less significant boundary information as well. Therefore, besides providing general annotation information, PCAS could be used as a discovery platform. We plan to update PCAS twice a year. We will upgrade PCAS when new proteome annotation algorithms

  20. High-throughput volumetric reconstruction for 3D wheat plant architecture studies

    Directory of Open Access Journals (Sweden)

    Wei Fang

    2016-09-01

    Full Text Available For many tiller crops, the plant architecture (PA), including the plant fresh weight, plant height, number of tillers, tiller angle and stem diameter, significantly affects the grain yield. In this study, we propose a method based on volumetric reconstruction for high-throughput three-dimensional (3D) wheat PA studies. The proposed methodology involves plant volumetric reconstruction from multiple images, plant model processing and phenotypic parameter estimation and analysis. This study was performed on 80 Triticum aestivum plants, and the results were analyzed. Comparing the automated measurements with manual measurements, the mean absolute percentage error (MAPE) in the plant height and the plant fresh weight was 2.71% (1.08 cm, with an average plant height of 40.07 cm) and 10.06% (1.41 g, with an average plant fresh weight of 14.06 g), respectively. The root mean square error (RMSE) was 1.37 cm and 1.79 g for the plant height and plant fresh weight, respectively. The correlation coefficients were 0.95 and 0.96 for the plant height and plant fresh weight, respectively. Additionally, the proposed methodology, including plant reconstruction, model processing and trait extraction, required only approximately 20 s on average per plant using parallel computing on a graphics processing unit (GPU), demonstrating that the methodology would be valuable for a high-throughput phenotyping platform.
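
    The reported accuracy metrics (MAPE and RMSE) are straightforward to compute; the manual/automated plant-height pairs below are invented for illustration, not data from the study:

```python
# MAPE and RMSE between manual (ground-truth) and automated measurements.
import math

def mape(truth, pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(t - p) / t for t, p in zip(truth, pred)) / len(truth)

def rmse(truth, pred):
    """Root mean square error, in the units of the measurements."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(truth, pred)) / len(truth))

manual    = [38.0, 40.0, 42.0, 41.0]  # cm, illustrative manual measurements
automated = [37.2, 41.1, 41.5, 42.0]  # cm, illustrative automated estimates

print(round(mape(manual, automated), 2))  # 2.12 (%)
print(round(rmse(manual, automated), 2))  # 0.88 (cm)
```

    Reporting MAPE alongside RMSE, as the study does, is useful because MAPE is scale-free while RMSE stays in the measurement units.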

  1. A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard

    Science.gov (United States)

    Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid

    2005-07-01

    The Discrete Wavelet Transform (DWT) is increasingly used in image and video compression standards, as indicated by its adoption in JPEG2000. The lifting scheme is an alternative DWT implementation with lower computational complexity and reduced resource requirements. The JPEG2000 standard introduces two lifting-scheme-based filter banks: the 5/3 and the 9/7. In this paper a high-throughput, two-channel DWT architecture for both JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process incoming samples simultaneously with minimal memory requirements per channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. The proposed architecture applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirements make this architecture a proper choice for real-time applications such as Digital Cinema.
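
    A minimal sketch of the reversible 5/3 lifting step used in JPEG2000, applied to a 1-D integer signal, may help make the predict/update structure concrete. Boundary handling here uses simple symmetric extension; this is an illustration of the algorithm, not the paper's hardware architecture:

```python
# One level of the 5/3 integer lifting DWT: a predict step producing
# detail (high-pass) coefficients d, then an update step producing
# approximation (low-pass) coefficients s. Even-length input assumed.

def fwd53(x):
    n, h = len(x), len(x) // 2
    d = []
    for i in range(h):                          # predict: odd samples
        right = x[2*i + 2] if 2*i + 2 < n else x[n - 2]
        d.append(x[2*i + 1] - (x[2*i] + right) // 2)
    s = []
    for i in range(h):                          # update: even samples
        left = d[i - 1] if i > 0 else d[0]
        s.append(x[2*i] + (left + d[i] + 2) // 4)
    return s, d

def inv53(s, d):
    h = len(s)
    x = [0] * (2 * h)
    for i in range(h):                          # undo update
        left = d[i - 1] if i > 0 else d[0]
        x[2*i] = s[i] - (left + d[i] + 2) // 4
    for i in range(h):                          # undo predict
        right = x[2*i + 2] if 2*i + 2 < 2*h else x[2*h - 2]
        x[2*i + 1] = d[i] + (x[2*i] + right) // 2
    return x

sig = [3, 7, 1, 8, 2, 9, 4, 6]
s, d = fwd53(sig)
print(inv53(s, d) == sig)  # True: the transform is perfectly reversible
```

    Because each lifting step is inverted by simply flipping its sign, the integer 5/3 transform is exactly reversible, which is why JPEG2000 uses it for lossless coding.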

  2. Space Link Extension Protocol Emulation for High-Throughput, High-Latency Network Connections

    Science.gov (United States)

    Tchorowski, Nicole; Murawski, Robert

    2014-01-01

    New space missions require higher data rates and new protocols to meet these requirements. These high-data-rate space communication links push the limitations not only of the space communication links themselves, but also of the ground communication networks and protocols which forward user data to remote ground stations (GS) for transmission. The Consultative Committee for Space Data Systems (CCSDS) Space Link Extension (SLE) standard protocol is one protocol that has been proposed for use by the NASA Space Network (SN) Ground Segment Sustainment (SGSS) program. New protocol implementations must be carefully tested to ensure that they provide the required functionality, especially because of the remote nature of spacecraft. The SLE protocol standard has been tested in the NASA Glenn Research Center's SCENIC Emulation Lab in order to observe its operation under realistic network delay conditions. More specifically, the delay between the NASA Integrated Services Network (NISN) and the spacecraft has been emulated. The round-trip time (RTT) delay for the continental NISN network has been shown to be up to 120 ms; as such, the SLE protocol was tested with network delays ranging from 0 ms to 200 ms. Both a base network condition and an SLE connection were tested with these RTT delays, and the reaction of both network tests to the delay conditions was recorded. Throughput for both of these links was set at 1.2 Gbps. The results show that, in the presence of realistic network delay, the SLE link throughput is significantly reduced, while the base network throughput remained at the 1.2 Gbps specification. The decrease in SLE throughput has been attributed to the implementation's use of blocking calls. The decrease in throughput is not acceptable for high-data-rate links, as the link requires a constant data flow in order for spacecraft and ground radios to stay synchronized, unless significant data is queued at the ground station. In cases where queuing the data is not an option
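
    Why blocking calls collapse throughput on a high-latency link can be seen with a back-of-the-envelope model: if each buffer must be acknowledged before the next is sent, one RTT is paid per buffer. The 64 KiB per-call buffer size below is an assumption for illustration, not a figure from the SGSS implementation:

```python
# Effective throughput of a sender that blocks for one RTT per buffer.

def blocking_throughput_bps(buffer_bits, rtt_s, link_bps):
    """Rate when serialization of one buffer is followed by an RTT stall."""
    serialize = buffer_bits / link_bps
    return buffer_bits / (serialize + rtt_s)

LINK = 1.2e9            # 1.2 Gbps link, as in the test setup above
BUF = 8 * 64 * 1024     # assumed 64 KiB buffer per blocking call, in bits

for rtt_ms in (0, 120, 200):
    bps = blocking_throughput_bps(BUF, rtt_ms / 1000.0, LINK)
    print(f"RTT {rtt_ms:3d} ms -> {bps / 1e6:8.1f} Mbit/s")
```

    With zero delay the sender achieves the full link rate, but at a 120 ms RTT the same code manages only a few Mbit/s, roughly the qualitative collapse reported above; pipelining (non-blocking sends with queued data) removes the per-buffer RTT penalty.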

  3. Analysis of high accuracy, quantitative proteomics data in the MaxQB database.

    Science.gov (United States)

    Schaab, Christoph; Geiger, Tamar; Stoehr, Gabriele; Cox, Juergen; Mann, Matthias

    2012-03-01

    MS-based proteomics generates rapidly increasing amounts of precise and quantitative information. Analysis of individual proteomic experiments has made great strides, but the crucial ability to compare and store information across different proteome measurements still presents many challenges. For example, it has been difficult to avoid contamination of databases with low quality peptide identifications, to control for the inflation in false positive identifications when combining data sets, and to integrate quantitative data. Although, for example, the contamination with low quality identifications has been addressed by joint analysis of deposited raw data in some public repositories, we reasoned that there should be a role for a database specifically designed for high resolution and quantitative data. Here we describe a novel database termed MaxQB that stores and displays collections of large proteomics projects and allows joint analysis and comparison. We demonstrate the analysis tools of MaxQB using proteome data of 11 different human cell lines and 28 mouse tissues. The database-wide false discovery rate is controlled by adjusting the project specific cutoff scores for the combined data sets. The 11 cell line proteomes together identify proteins expressed from more than half of all human genes. For each protein of interest, expression levels estimated by label-free quantification can be visualized across the cell lines. Similarly, the expression rank order and estimated amount of each protein within each proteome are plotted. We used MaxQB to calculate the signal reproducibility of the detected peptides for the same proteins across different proteomes. Spearman rank correlation between peptide intensity and detection probability of identified proteins was greater than 0.8 for 64% of the proteome, whereas a minority of proteins have negative correlation. This information can be used to pinpoint false protein identifications, independently of peptide database
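
    The Spearman rank correlation used above to relate peptide intensity to detection probability can be sketched without external libraries. The intensity and probability values below are invented, and tie handling is omitted for brevity:

```python
# Spearman rank correlation: Pearson correlation of the rank vectors.

def ranks(v):
    """1-based ranks of the values in v (no tie handling, for brevity)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = rank + 1.0
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

intensity   = [1e5, 3e5, 2e6, 8e6, 5e7]    # made-up peptide intensities
detect_prob = [0.2, 0.35, 0.5, 0.8, 0.95]  # made-up detection probabilities
print(round(spearman(intensity, detect_prob), 3))  # 1.0 (perfectly monotone)
```

    A value near +1, as for most proteins in the study, means more intense peptides are detected more reliably; the minority with negative correlation is what MaxQB uses to flag suspect identifications.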

  4. DOGMA: domain-based transcriptome and proteome quality assessment.

    Science.gov (United States)

    Dohmen, Elias; Kremer, Lukas P M; Bornberg-Bauer, Erich; Kemena, Carsten

    2016-09-01

    Genome studies have become cheaper and easier than ever before, due to the decreased costs of high-throughput sequencing and the free availability of analysis software. However, the quality of genome or transcriptome assemblies can vary a lot. Therefore, quality assessment of assemblies and annotations is a crucial aspect of genome analysis pipelines. We developed DOGMA, a program for fast and easy quality assessment of transcriptome and proteome data based on conserved protein domains. DOGMA measures the completeness of a given transcriptome or proteome and provides information about domain content for further analysis. DOGMA provides a very fast way to do quality assessment within seconds. DOGMA is implemented in Python and published under the GNU GPL v3 license. The source code is available at https://ebbgit.uni-muenster.de/domainWorld/DOGMA/ CONTACTS: e.dohmen@wwu.de or c.kemena@wwu.de Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. High-throughput screening of effective siRNAs using luciferase-linked chimeric mRNA.

    Directory of Open Access Journals (Sweden)

    Shen Pang

    Full Text Available The use of siRNAs to knock down gene expression can potentially be an approach to treat various diseases. To avoid siRNA toxicity, the less transcriptionally active H1 pol III promoter, rather than the U6 promoter, was proposed for siRNA expression. To identify highly efficacious siRNA sequences, extensive screening is required, since current computer programs may not render ideal results. Here, we used CCR5 gene silencing as a model to investigate a rapid and efficient screening approach. We constructed a chimeric luciferase-CCR5 gene for high-throughput screening of siRNA libraries. After screening approximately 900 shRNA clones, 12 siRNA sequences were identified. Sequence analysis demonstrated that most (11 of the 12) of these siRNA sequences did not match those identified by available siRNA prediction algorithms. Significant inhibition of CCR5 in a T-lymphocyte cell line and primary T cells by these identified siRNAs was confirmed by using siRNA lentiviral vectors to infect these cells. The inhibition of CCR5 expression significantly protected cells from R5 HIV-1 (JRCSF) infection. These results indicate that the high-throughput screening method allows efficient identification of siRNA sequences to inhibit the target genes at low levels of expression.

  6. Clinical validation of an ultra high-throughput spiral microfluidics for the detection and enrichment of viable circulating tumor cells.

    Directory of Open Access Journals (Sweden)

    Bee Luan Khoo

    Full Text Available Circulating tumor cells (CTCs) are cancer cells that can be isolated via liquid biopsy from blood and can be phenotypically and genetically characterized to provide critical information for guiding cancer treatment. Current analysis of CTCs is hindered by the throughput, selectivity and specificity of the devices or assays used in CTC detection and isolation. Here, we enriched and characterized putative CTCs from blood samples of patients with advanced-stage metastatic breast and lung cancers using a novel multiplexed spiral microfluidic chip. This system detected putative CTCs with high sensitivity (100%, n = 56; breast cancer samples: 12-1275 CTCs/ml; lung cancer samples: 10-1535 CTCs/ml) rapidly from clinically relevant blood volumes (7.5 ml in under 5 min). Blood samples were completely separated into plasma, CTC and PBMC components, and each fraction was characterized with immunophenotyping (pan-cytokeratin/CD45, CD44/CD24, EpCAM), fluorescence in-situ hybridization (FISH; EML4-ALK) or targeted somatic mutation analysis. We used an ultra-sensitive mass spectrometry based system to highlight the presence of an EGFR-activating mutation in both isolated CTCs and plasma cell-free DNA (cfDNA), and demonstrate concordance with the original tumor-biopsy samples. We have clinically validated our multiplexed microfluidic chip for the ultra-high-throughput, low-cost and label-free enrichment of CTCs. Retrieved cells were unlabeled and viable, enabling potential propagation and real-time downstream analysis using next-generation sequencing (NGS) or proteomic analysis.

  7. YPED: an integrated bioinformatics suite and database for mass spectrometry-based proteomics research.

    Science.gov (United States)

    Colangelo, Christopher M; Shifman, Mark; Cheung, Kei-Hoi; Stone, Kathryn L; Carriero, Nicholas J; Gulcicek, Erol E; Lam, TuKiet T; Wu, Terence; Bjornson, Robert D; Bruce, Can; Nairn, Angus C; Rinehart, Jesse; Miller, Perry L; Williams, Kenneth R

    2015-02-01

    We report a significantly enhanced bioinformatics suite and database for proteomics research called the Yale Protein Expression Database (YPED) that is used by investigators at more than 300 institutions worldwide. YPED meets the data management, archival, and analysis needs of high-throughput mass spectrometry-based proteomics research, ranging from a single laboratory, to groups of laboratories within and beyond an institution, to the entire proteomics community. The current version is a significant improvement over the first version in that it contains new modules for liquid chromatography-tandem mass spectrometry (LC-MS/MS) database search results, label-based and label-free quantitative proteomic analysis, and several scoring outputs for phosphopeptide site localization. In addition, we have added both peptide and protein comparative analysis tools to enable pairwise analysis of distinct peptides/proteins in each sample and of overlapping peptides/proteins between all samples in multiple datasets. We have also implemented a targeted proteomics module for automated multiple reaction monitoring (MRM)/selected reaction monitoring (SRM) assay development. We have linked YPED's database search results and both label-based and label-free fold-change analysis to the Skyline Panorama repository for online spectra visualization. In addition, we have built enhanced functionality to curate peptide identifications into an MS/MS peptide spectral library for all of our protein database search identification results. Copyright © 2015 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.

  8. Two-dimensional materials from high-throughput computational exfoliation of experimentally known compounds

    Science.gov (United States)

    Mounet, Nicolas; Gibertini, Marco; Schwaller, Philippe; Campi, Davide; Merkys, Andrius; Marrazzo, Antimo; Sohier, Thibault; Castelli, Ivano Eligio; Cepellotti, Andrea; Pizzi, Giovanni; Marzari, Nicola

    2018-02-01

    Two-dimensional (2D) materials have emerged as promising candidates for next-generation electronic and optoelectronic applications. Yet, only a few dozen 2D materials have been successfully synthesized or exfoliated. Here, we search for 2D materials that can be easily exfoliated from their parent compounds. Starting from 108,423 unique, experimentally known 3D compounds, we identify a subset of 5,619 compounds that appear layered according to robust geometric and bonding criteria. High-throughput calculations using van der Waals density functional theory, validated against experimental structural data and calculated random phase approximation binding energies, further allowed the identification of 1,825 compounds that are either easily or potentially exfoliable. In particular, the subset of 1,036 easily exfoliable cases provides novel structural prototypes and simple ternary compounds as well as a large portfolio of materials to search from for optimal properties. For a subset of 258 compounds, we explore vibrational, electronic, magnetic and topological properties, identifying 56 ferromagnetic and antiferromagnetic systems, including half-metals and half-semiconductors.

  9. Leaderboard Now Open: CPTAC’s DREAM Proteogenomics Computational Challenge | Office of Cancer Clinical Proteomics Research

    Science.gov (United States)

    The National Cancer Institute’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) is pleased to announce the opening of the leaderboard for its Proteogenomics Computational DREAM Challenge. The leaderboard remains open for submissions from September 25, 2017 through October 8, 2017, with the Challenge expected to run until November 17, 2017.

  10. Proteomic evaluation of genetically modified crops: current status and challenges

    Science.gov (United States)

    Gong, Chun Yan; Wang, Tai

    2013-01-01

    Hectares of genetically modified (GM) crops have increased exponentially since 1996, when such crops began to be commercialized. GM biotechnology, together with conventional breeding, has become the main approach to improving agronomic traits of crops. However, people are concerned about the safety of GM crops, especially GM-derived food and feed. Many efforts have been made to evaluate the unintended effects caused by the introduction of exogenous genes. “Omics” techniques have advantages over targeted analysis in evaluating such crops because of their use of high-throughput screening. Proteins are key players in gene function and are directly involved in metabolism and cellular development or have roles as toxins, antinutrients, or allergens, which are essential for human health. Thus, proteomics can be expected to become one of the most useful tools in safety assessment. This review assesses the potential of proteomics in evaluating various GM crops. We further describe the challenges in ensuring homogeneity and sensitivity in detection techniques. PMID:23471542

  11. Proteomic evaluation of genetically modified crops: current status and challenges.

    Science.gov (United States)

    Gong, Chun Yan; Wang, Tai

    2013-01-01

    Hectares of genetically modified (GM) crops have increased exponentially since 1996, when such crops began to be commercialized. GM biotechnology, together with conventional breeding, has become the main approach to improving agronomic traits of crops. However, people are concerned about the safety of GM crops, especially GM-derived food and feed. Many efforts have been made to evaluate the unintended effects caused by the introduction of exogenous genes. "Omics" techniques have advantages over targeted analysis in evaluating such crops because of their use of high-throughput screening. Proteins are key players in gene function and are directly involved in metabolism and cellular development or have roles as toxins, antinutrients, or allergens, which are essential for human health. Thus, proteomics can be expected to become one of the most useful tools in safety assessment. This review assesses the potential of proteomics in evaluating various GM crops. We further describe the challenges in ensuring homogeneity and sensitivity in detection techniques.

  12. Proteomic evaluation of genetically modified crops: current status and challenges

    Directory of Open Access Journals (Sweden)

    Chun Yan Gong

    2013-03-01

    Full Text Available Hectares of genetically modified (GM) crops have increased exponentially since 1996, when such crops began to be commercialized. GM biotechnology, together with conventional breeding, has become the main approach to improving agronomic traits of crops. However, people are concerned about the safety of GM crops, especially GM-derived food and feed. Many efforts have been made to evaluate the unintended effects caused by the introduction of exogenous genes. "Omics" techniques have advantages over targeted analysis in evaluating such crops because of their use of high-throughput screening. Proteins are key players in gene function and are directly involved in metabolism and cellular development or have roles as toxins, antinutrients or allergens, which are essential for human health. Thus, proteomics can be expected to become one of the most useful tools in safety assessment. This review assesses the potential of proteomics in evaluating various GM crops. We further describe the challenges in ensuring homogeneity and sensitivity in detection techniques.

  13. BioInfra.Prot: A comprehensive proteomics workflow including data standardization, protein inference, expression analysis and data publication.

    Science.gov (United States)

    Turewicz, Michael; Kohl, Michael; Ahrens, Maike; Mayer, Gerhard; Uszkoreit, Julian; Naboulsi, Wael; Bracht, Thilo; Megger, Dominik A; Sitek, Barbara; Marcus, Katrin; Eisenacher, Martin

    2017-11-10

    The analysis of high-throughput mass spectrometry-based proteomics data must address the specific challenges of this technology. To this end, the comprehensive proteomics workflow offered by the de.NBI service center BioInfra.Prot provides indispensable components for the computational and statistical analysis of this kind of data. These components include tools and methods for spectrum identification and protein inference, protein quantification, expression analysis, as well as data standardization and data publication. All of the workflow's methods addressing these tasks are state of the art or cutting edge. As has been shown in previous publications, each of these methods is adequate for its specific task and gives competitive results. However, the methods included in the workflow are continuously reviewed, updated and improved to adapt to new scientific developments. All of these components and methods are available as stand-alone BioInfra.Prot services or as a complete workflow. Since BioInfra.Prot provides manifold fast communication channels for access to all components of the workflow (e.g., via the BioInfra.Prot ticket system: bioinfraprot@rub.de), users can easily benefit from this service and get support from experts. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Link Analysis of High Throughput Spacecraft Communication Systems for Future Science Missions

    Science.gov (United States)

    Simons, Rainee N.

    2015-01-01

    NASA's plan to launch several spacecraft into low Earth orbit (LEO) to support science missions in the next ten years and beyond requires downlink throughput on the order of several terabits per day. The ability to handle such a large volume of data far exceeds the capabilities of current systems. This paper proposes two solutions: first, a high-data-rate link between the LEO spacecraft and the ground via relay satellites in geostationary orbit (GEO); second, a high-data-rate direct-to-ground link from LEO. Next, the paper presents results from computer simulations carried out for both types of links, taking into consideration spacecraft transmitter frequency, EIRP, and waveform; elevation-angle-dependent path loss through Earth's atmosphere; and ground station receiver G/T.
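
    For readers who want to sanity-check such figures, a back-of-the-envelope link-budget sketch is easy to write down; the carrier frequency, data rate, and pass geometry below are invented for illustration and are not taken from the paper.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

def daily_volume_terabits(rate_bps: float, passes_per_day: int,
                          pass_seconds: float) -> float:
    """Total data downlinked per day, in terabits."""
    return rate_bps * passes_per_day * pass_seconds / 1e12

# Assumed Ka-band downlink: 26 GHz carrier, ~2000 km slant range at low elevation
loss = fspl_db(2_000_000.0, 26e9)              # free-space loss only, no atmosphere
# 10 Gb/s sustained over six 10-minute passes per day
volume = daily_volume_terabits(10e9, 6, 600.0)  # terabit-per-day regime
print(round(loss, 1), volume)
```

    A real link budget would also subtract atmospheric, pointing and implementation losses and balance the result against transmitter EIRP and receiver G/T, as in the paper's simulations.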

  15. High-throughput screening to identify inhibitors of lysine demethylases.

    Science.gov (United States)

    Gale, Molly; Yan, Qin

    2015-01-01

    Lysine demethylases (KDMs) are epigenetic regulators whose dysfunction is implicated in the pathology of many human diseases, including various types of cancer, inflammation and X-linked intellectual disability. Particular demethylases have been identified as promising therapeutic targets, and tremendous efforts are being devoted toward developing suitable small-molecule inhibitors for clinical and research use. Several high-throughput screening strategies have been developed to screen for small-molecule inhibitors of KDMs, each with advantages and disadvantages in terms of time, cost, effort, reliability and sensitivity. In this Special Report, we review and evaluate the high-throughput screening methods utilized for the discovery of novel small-molecule KDM inhibitors.

  16. High throughput protein production screening

    Science.gov (United States)

    Beernink, Peter T [Walnut Creek, CA; Coleman, Matthew A [Oakland, CA; Segelke, Brent W [San Ramon, CA

    2009-09-08

    Methods, compositions, and kits for the cell-free production and analysis of proteins are provided. The invention allows for the production of proteins from prokaryotic or eukaryotic sequences, including human cDNAs, using PCR and IVT methods, and for detecting the proteins through fluorescence or immunoblot techniques. This invention can be used to identify optimized PCR and IVT conditions, codon usages and mutations. The methods are readily automated and can be used for high-throughput analysis of protein expression levels, interactions, and functional states.

  17. Performance Measurements in a High Throughput Computing Environment

    CERN Document Server

    AUTHOR|(CDS)2145966; Gribaudo, Marco

    The IT infrastructures of companies and research centres are implementing new technologies to satisfy the increasing need of computing resources for big data analysis. In this context, resource profiling plays a crucial role in identifying areas where utilisation efficiency can be improved. In order to deal with the profiling and optimisation of computing resources, two complementary approaches can be adopted: the measurement-based approach and the model-based approach. The measurement-based approach gathers and analyses performance metrics by executing benchmark applications on computing resources. The model-based approach, instead, implies the design and implementation of a model as an abstraction of the real system, selecting only those aspects relevant to the study. This thesis originates from a project carried out by the author within the CERN IT department. CERN is an international scientific laboratory that conducts fundamental research in the domain of elementary particle physics. The p...

  18. High throughput electrospinning of high-quality nanofibers via an aluminum disk spinneret

    Science.gov (United States)

    Zheng, Guokuo

    In this work, a simple and efficient needleless high-throughput electrospinning process using an aluminum disk spinneret with 24 holes is described. Electrospun mats produced by this setup consisted of fine (nano-sized) fibers of the highest quality, while the productivity (yield) was many times that obtained from conventional single-needle electrospinning. The goal was to produce scaled-up amounts of nanofibers of the same or better quality than those produced with the single-needle lab setting, under variable concentration, voltage, and working distance. The fiber mats produced were either polymer or ceramic (such as molybdenum trioxide nanofibers). Through experimentation, the optimum process conditions were determined to be a voltage of 24 kV and a distance to collector of 15 cm. More dilute solutions resulted in smaller-diameter fibers. Comparing the morphologies of the MoO3 nanofibers produced by the traditional and the high-throughput setups, it was found that they were very similar. Moreover, the nanofiber production rate is nearly 10 times that of traditional needle electrospinning. Thus, the high-throughput process has the potential to become an industrial nanomanufacturing process, and the materials processed by it may be used in filtration devices, in tissue engineering, and as sensors.

  19. Bacterial Survival under Extreme UV Radiation: A Comparative Proteomics Study of Rhodobacter sp., Isolated from High Altitude Wetlands in Chile

    Directory of Open Access Journals (Sweden)

    Vilma Pérez

    2017-06-01

    Full Text Available Salar de Huasco, defined as a polyextreme environment, is a high-altitude saline wetland in the Chilean Altiplano (3800 m.a.s.l.), permanently exposed to the highest solar radiation doses registered in the world. We present here the first comparative proteomics study of a photoheterotrophic bacterium, Rhodobacter sp., isolated from this remote and hostile habitat. We developed an innovative experimental approach using different sources of radiation (in situ sunlight and UVB lamps), cut-off filters (Mylar, Lee filters), and a high-throughput, label-free quantitative proteomics method to comprehensively analyze the effect of seven spectral bands on protein regulation. A hierarchical cluster analysis of 40 common proteins revealed that all conditions containing the most damaging UVB radiation induced a similar pattern of protein regulation compared with UVA and visible light spectral bands. Moreover, it appeared that the cellular adaptation of Rhodobacter sp. to the osmotic stress encountered in the hypersaline environment from which it was originally isolated might confer a higher resistance to damaging UV radiation. Indeed, proteins involved in the synthesis and transport of key osmoprotectants, such as glycine betaine and inositol, were found in very high abundance under UV radiation compared to the dark control, suggesting a function of osmolytes as efficient reactive oxygen scavengers. Our study also revealed a RecA-independent response and a tightly regulated network of protein quality control involving proteases and chaperones to selectively degrade misfolded and/or damaged proteins.

  20. Solion ion source for high-efficiency, high-throughput solar cell manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Koo, John, E-mail: john-koo@amat.com; Binns, Brant; Miller, Timothy; Krause, Stephen; Skinner, Wesley; Mullin, James [Applied Materials, Inc., Varian Semiconductor Equipment Business Unit, 35 Dory Road, Gloucester, Massachusetts 01930 (United States)

    2014-02-15

    In this paper, we introduce the Solion ion source for high-throughput solar cell doping. As the source power is increased to enable higher throughput, negative effects degrade the lifetime of the plasma chamber and the extraction electrodes. In order to improve efficiency, we have explored a wide range of electron energies and determined the conditions which best suit production. To extend the lifetime of the source we have developed an in situ cleaning method using only existing hardware. With these combinations, source lifetimes of >200 h for phosphorus and >100 h for boron ion beams have been achieved while maintaining 1100 cell-per-hour production.

  1. High throughput integrated thermal characterization with non-contact optical calorimetry

    Science.gov (United States)

    Hou, Sichao; Huo, Ruiqing; Su, Ming

    2017-10-01

    Commonly used thermal analysis tools such as calorimeters and thermal conductivity meters are separate instruments limited by low throughput, where only one sample is examined at a time. This work reports an infrared-based optical calorimetry method, with its theoretical foundation, which provides an integrated solution for characterizing the thermal properties of materials with high throughput. By taking time-domain temperature information from spatially distributed samples, this method allows a single device (an infrared camera) to determine the thermal properties of both phase-change systems (melting temperature and latent heat of fusion) and non-phase-change systems (thermal conductivity and heat capacity). The method further allows these thermal properties to be determined for multiple samples rapidly, remotely, and simultaneously. In this proof-of-concept experiment, the thermal properties of a panel of 16 samples, including melting temperatures, latent heats of fusion, heat capacities, and thermal conductivities, were determined in 2 min with high accuracy. Given the high thermal, spatial, and temporal resolutions of the advanced infrared camera, this method has the potential to revolutionize the thermal characterization of materials by providing an integrated solution with high throughput, high sensitivity, and short analysis time.
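
    As a toy illustration of how a thermal property falls out of a time-domain temperature trace, the sketch below fits a lumped-capacitance cooling model. The model choice, the calibrated convective conductance, and all numerical values are assumptions for illustration, not the authors' inversion procedure.

```python
import math

# Lumped-capacitance model: T(t) = T_env + (T0 - T_env) * exp(-h*A*t / (m*c_p)).
# The cooling time constant tau read off an infrared temperature trace gives
# the heat capacity c_p = h*A*tau / m once the conductance h*A is calibrated.

def tau_from_trace(times, temps, t_env):
    """Time constant from two samples of an exponential cooling decay."""
    (t1, T1), (t2, T2) = (times[0], temps[0]), (times[-1], temps[-1])
    return (t2 - t1) / math.log((T1 - t_env) / (T2 - t_env))

def heat_capacity_from_tau(tau_s, hA_w_per_k, mass_kg):
    """c_p in J/(kg*K) from the time constant and calibrated conductance."""
    return hA_w_per_k * tau_s / mass_kg

# Synthetic trace generated with tau = 30 s; the fit recovers it exactly.
t_env, T0, tau = 25.0, 80.0, 30.0
times = [0.0, 60.0]
temps = [t_env + (T0 - t_env) * math.exp(-t / tau) for t in times]
print(round(tau_from_trace(times, temps, t_env), 6))   # 30.0
print(heat_capacity_from_tau(30.0, 0.02, 0.001))       # 600.0 J/(kg*K)
```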

  2. SSR_pipeline--computer software for the identification of microsatellite sequences from paired-end Illumina high-throughput DNA sequence data

    Science.gov (United States)

    Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (SSRs; for example, microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains three analysis modules along with a fourth control module that can be used to automate analyses of large volumes of data. The modules are used to (1) identify the subset of paired-end sequences that pass quality standards, (2) align paired-end reads into a single composite DNA sequence, and (3) identify sequences that possess microsatellites conforming to user-specified parameters. Each of the three separate analysis modules can also be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). All modules are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, Windows). The program suite relies on a compiled Python extension module to perform paired-end alignments. Instructions for compiling the extension from source code are provided in the documentation. Users who do not have Python installed on their computers or who do not have the ability to compile software may also choose to download packaged executable files. These files include all Python scripts, a copy of the compiled extension module, and a minimal installation of Python in a single binary executable. See program documentation for more information.

  3. Filtering high-throughput protein-protein interaction data using a combination of genomic features

    Directory of Open Access Journals (Sweden)

    Patil Ashwini

    2005-04-01

    Full Text Available Abstract Background Protein-protein interaction data used in the creation or prediction of molecular networks is usually obtained from large-scale or high-throughput experiments. This experimental data is liable to contain a large number of spurious interactions. Hence, there is a need to validate the interactions and filter out the incorrect data before using them in prediction studies. Results In this study, we use a combination of three genomic features – structurally known interacting Pfam domains, Gene Ontology annotations and sequence homology – as a means to assign reliability to the protein-protein interactions in Saccharomyces cerevisiae determined by high-throughput experiments. Using Bayesian network approaches, we show that protein-protein interactions from high-throughput data supported by one or more genomic features have a higher likelihood ratio and hence are more likely to be real interactions. Our method has a high sensitivity (90%) and good specificity (63%). We show that 56% of the interactions from high-throughput experiments in Saccharomyces cerevisiae have high reliability. We use the method to estimate the number of true interactions in the high-throughput protein-protein interaction data sets in Caenorhabditis elegans, Drosophila melanogaster and Homo sapiens to be 27%, 18% and 68% respectively. Our results are available for searching and downloading at http://helix.protein.osaka-u.ac.jp/htp/. Conclusion A combination of genomic features that includes sequence, structure and annotation information is a good predictor of true interactions in large and noisy high-throughput data sets. The method has a very high sensitivity and good specificity and can be used to assign a likelihood ratio, corresponding to the reliability, to each interaction.
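
    The scoring idea can be sketched under a naive Bayes assumption: the likelihood ratio for an interaction multiplies per-feature ratios P(feature | true interaction) / P(feature | false interaction). The per-feature values and the threshold below are invented for illustration; they are not the paper's trained estimates.

```python
# Illustrative per-feature likelihood ratios (assumed, not from the paper)
FEATURE_LR = {
    "pfam_domain": 8.0,  # structurally known interacting Pfam domain pair
    "go_overlap": 3.0,   # shared Gene Ontology annotation
    "homology": 5.0,     # supported by sequence homology (interologs)
}

def likelihood_ratio(features):
    """Combined LR under the naive Bayes independence assumption."""
    lr = 1.0
    for f in features:
        lr *= FEATURE_LR[f]
    return lr

def is_reliable(features, threshold=10.0):
    """Flag an interaction as reliable when its combined LR passes a cutoff."""
    return likelihood_ratio(features) >= threshold

print(likelihood_ratio({"pfam_domain", "go_overlap"}))  # 24.0
print(is_reliable({"go_overlap"}))                      # False
```

    Interactions supported by more than one feature accumulate multiplicatively higher ratios, which matches the paper's observation that multi-feature support raises the likelihood of a true interaction.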

  4. The main challenges that remain in applying high-throughput sequencing to clinical diagnostics.

    Science.gov (United States)

    Loeffelholz, Michael; Fofanov, Yuriy

    2015-01-01

    Over the last 10 years, the quality, price and availability of high-throughput sequencing instruments have improved to the point that this technology may be close to becoming a routine tool in the diagnostic microbiology laboratory. Two groups of challenges, however, have to be resolved in order to move this powerful research technology into routine use in the clinical microbiology laboratory. The computational/bioinformatics challenges include data storage cost and privacy concerns, requiring analysis to be performed without access to cloud storage or expensive computational infrastructure. The logistical challenges include interpretation of complex results and acceptance and understanding of the advantages and limitations of this technology by the medical community. This article focuses on the approaches to address these challenges, such as file formats, algorithms, data collection, reporting and good laboratory practices.

  5. Less is More: Membrane Protein Digestion Beyond Urea–Trypsin Solution for Next-level Proteomics*

    Science.gov (United States)

    Zhang, Xi

    2015-01-01

    The goal of next-level bottom-up membrane proteomics is protein function investigation, via high-coverage high-throughput peptide-centric quantitation of expression, modifications and dynamic structures at systems scale. Yet efficient digestion of mammalian membrane proteins presents a daunting barrier, and prevalent day-long urea–trypsin in-solution digestion proved insufficient to reach this goal. Many efforts contributed incremental advances over past years, but involved protein denaturation that disconnected measurement from functional states. Beyond denaturation, the recent discovery of structure/proteomics omni-compatible detergent n-dodecyl-β-d-maltopyranoside, combined with pepsin and PNGase F columns, enabled breakthroughs in membrane protein digestion: a 2010 DDM-low-TCEP (DLT) method for H/D-exchange (HDX) using human G protein-coupled receptor, and a 2015 flow/detergent-facilitated protease and de-PTM digestions (FDD) for integrative deep sequencing and quantitation using full-length human ion channel complex. Distinguishing protein solubilization from denaturation, protease digestion reliability from theoretical specificity, and reduction from alkylation, these methods shifted day(s)-long paradigms into minutes, and afforded fully automatable (HDX)-protein-peptide-(tandem mass tag)-HPLC pipelines to instantly measure functional proteins at deep coverage, high peptide reproducibility, low artifacts and minimal leakage. Promoting—not destroying—structures and activities harnessed membrane proteins for the next-level streamlined functional proteomics. This review analyzes recent advances in membrane protein digestion methods and highlights critical discoveries for future proteomics. PMID:26081834

  6. Less is More: Membrane Protein Digestion Beyond Urea-Trypsin Solution for Next-level Proteomics.

    Science.gov (United States)

    Zhang, Xi

    2015-09-01

    The goal of next-level bottom-up membrane proteomics is protein function investigation, via high-coverage high-throughput peptide-centric quantitation of expression, modifications and dynamic structures at systems scale. Yet efficient digestion of mammalian membrane proteins presents a daunting barrier, and prevalent day-long urea-trypsin in-solution digestion proved insufficient to reach this goal. Many efforts contributed incremental advances over past years, but involved protein denaturation that disconnected measurement from functional states. Beyond denaturation, the recent discovery of structure/proteomics omni-compatible detergent n-dodecyl-β-d-maltopyranoside, combined with pepsin and PNGase F columns, enabled breakthroughs in membrane protein digestion: a 2010 DDM-low-TCEP (DLT) method for H/D-exchange (HDX) using human G protein-coupled receptor, and a 2015 flow/detergent-facilitated protease and de-PTM digestions (FDD) for integrative deep sequencing and quantitation using full-length human ion channel complex. Distinguishing protein solubilization from denaturation, protease digestion reliability from theoretical specificity, and reduction from alkylation, these methods shifted day(s)-long paradigms into minutes, and afforded fully automatable (HDX)-protein-peptide-(tandem mass tag)-HPLC pipelines to instantly measure functional proteins at deep coverage, high peptide reproducibility, low artifacts and minimal leakage. Promoting-not destroying-structures and activities harnessed membrane proteins for the next-level streamlined functional proteomics. This review analyzes recent advances in membrane protein digestion methods and highlights critical discoveries for future proteomics. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  7. HDAT: web-based high-throughput screening data analysis tools

    International Nuclear Information System (INIS)

    Liu, Rong; Hassan, Taimur; Rallo, Robert; Cohen, Yoram

    2013-01-01

    The increasing utilization of high-throughput screening (HTS) in toxicity studies of engineered nanomaterials (ENMs) requires tools for rapid and reliable processing and analysis of large HTS datasets. In order to meet this need, a web-based platform of HTS data analysis tools (HDAT) was developed that provides statistical methods suitable for ENM toxicity data. As a publicly available computational nanoinformatics infrastructure, HDAT provides different plate normalization methods, various HTS summarization statistics, self-organizing map (SOM)-based clustering analysis, and visualization of raw and processed data using both heat maps and SOMs. HDAT has been successfully used in a number of HTS studies of ENM toxicity, thereby enabling analysis of toxicity mechanisms and development of structure–activity relationships for ENM toxicity. The online approach afforded by HDAT should encourage standardization of, and future advances in, HTS as well as facilitate convenient inter-laboratory comparisons of HTS datasets. (paper)
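
    The abstract mentions "different plate normalization methods" without naming them. As one representative option, the sketch below applies a robust z-score (median/MAD per plate), a common HTS normalization; its use here is an assumption for illustration, not HDAT's documented method.

```python
import statistics

def robust_z(plate):
    """Normalize one plate's raw well readings with median/MAD scaling.

    Robust statistics keep a few strongly active wells from distorting
    the plate-wide center and spread estimates.
    """
    med = statistics.median(plate)
    mad = statistics.median(abs(x - med) for x in plate)
    return [(x - med) / (1.4826 * mad) for x in plate]

# Toy 6-well plate with one active well (reading 150)
z = robust_z([100, 102, 98, 101, 99, 150])
print([round(v, 2) for v in z])  # the active well stands out at z > 20
```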

  8. Yeast expression proteomics by high-resolution mass spectrometry

    DEFF Research Database (Denmark)

    Walther, Tobias C; Olsen, Jesper Velgaard; Mann, Matthias

    2010-01-01

    Post-translational controls contribute substantially to the regulation of protein abundance, for example in the heat shock stress response. The development of new sample preparation methods, high-resolution mass spectrometry and novel bioinformatic tools closes this gap and allows the global quantitation of the yeast proteome under different...

  9. MIPHENO: Data normalization for high throughput metabolic analysis.

    Science.gov (United States)

    High throughput methodologies such as microarrays, mass spectrometry and plate-based small molecule screens are increasingly used to facilitate discoveries from gene function to drug candidate identification. These large-scale experiments are typically carried out over the course...

  10. Preparation of polymer brushes grafted graphene oxide by atom transfer radical polymerization as a new support for trypsin immobilization and efficient proteome digestion.

    Science.gov (United States)

    Guo, Cong; Zhao, Xinyuan; Zhang, Wanjun; Bai, Haihong; Qin, Weijie; Song, Haifeng; Qian, Xiaohong

    2017-08-01

    Highly efficient protein digestion is one of the key issues in "bottom-up" proteomic studies. Compared with time-consuming solution-based free-protease digestion, immobilized-protease digestion offers a promising alternative with markedly improved sample-processing throughput. In this study, we proposed a new immobilized-protease digestion strategy using trypsin conjugated to two kinds of polymer-grafted graphene oxide (GO). The polymer-brush-grafted GO was prepared by in situ polymer growth on initiator-functionalized GO using surface-initiated atom transfer radical polymerization (SI-ATRP) and characterized by AFM, TEM, TGA, and XPS. The polymer brushes support three-dimensional trypsin immobilization, which not only increases the loading amount but also improves accessibility towards protein substrates. Both types of immobilized trypsin provide a 700-fold shorter digestion time while maintaining a protein/peptide identification scale comparable to that of free-trypsin digestion. More interestingly, combined application of the two types of immobilized trypsin with different surface-grafted polymers leads to at least an 18.3%/31.3% enhancement in protein/peptide identification compared with digestion using a single type, indicating the potential of this digestion strategy for deeper proteome coverage using limited mass spectrometer machine hours. We expect these advantages may find valuable application in high-throughput clinical proteomic studies, which often involve processing a large number of samples. Graphical abstract: Preparation of polymer-brush-grafted, trypsin-immobilized graphene oxide and its application in proteome digestion and mass spectrometry identification.

  11. High-Throughput Analysis and Automation for Glycomics Studies

    NARCIS (Netherlands)

    Shubhakar, A.; Reiding, K.R.; Gardner, R.A.; Spencer, D.I.R.; Fernandes, D.L.; Wuhrer, M.

    2015-01-01

    This review covers advances in analytical technologies for high-throughput (HTP) glycomics. Our focus is on structural studies of glycoprotein glycosylation to support biopharmaceutical realization and the discovery of glycan biomarkers for human disease. For biopharmaceuticals, there is increasing

  12. Evaluation of Meta scheduler Architectures and Task assignment Policies for High throughput Computing

    CERN Document Server

    Caron, E; Tsaregorodtsev, A Yu

    2006-01-01

    In this paper we present a model and simulator for many clusters of heterogeneous PCs belonging to a local network. These clusters are assumed to be connected to each other through a global network, and each cluster is managed via a local scheduler which is shared by many users. We validate our simulator by comparing the experimental and analytical results of an M/M/4 queuing system. These studies indicate that the simulator is consistent. Next, we compare with a real batch system, obtaining an average error of 10.5% for the response time and 12% for the makespan. We conclude that the simulator is realistic and describes well the behaviour of a large-scale system. Thus we can study the scheduling of our system, called DIRAC, in a high-throughput context. We justify our decentralized, adaptive and opportunistic approach in comparison to a centralized approach in such a context.
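
    The analytical side of the M/M/4 validation is standard queueing theory. The sketch below computes the mean response time via the Erlang C formula; the arrival and service rates are invented for illustration.

```python
import math

def mmc_response_time(lam: float, mu: float, c: int) -> float:
    """Mean response time of an M/M/c queue via the Erlang C formula."""
    rho = lam / (c * mu)          # server utilization, must be < 1
    a = lam / mu                  # offered load in Erlangs
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1.0 - rho)))
    erlang_c = (a**c / (math.factorial(c) * (1.0 - rho))) * p0
    wq = erlang_c / (c * mu - lam)  # mean wait in queue
    return wq + 1.0 / mu            # plus mean service time

# 4 servers, arrival rate 3 jobs/s, service rate 1 job/s per server
print(round(mmc_response_time(lam=3.0, mu=1.0, c=4), 4))  # 1.5094
```

    A discrete-event simulation of the same system should converge to this value, which is the kind of consistency check the paper performs before trusting the simulator on the real batch system.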

  13. Accelerating the design of biomimetic materials by integrating RNA-seq with proteomics and materials science.

    Science.gov (United States)

    Guerette, Paul A; Hoon, Shawn; Seow, Yiqi; Raida, Manfred; Masic, Admir; Wong, Fong T; Ho, Vincent H B; Kong, Kiat Whye; Demirel, Melik C; Pena-Francesch, Abdon; Amini, Shahrouz; Tay, Gavin Z; Ding, Dawei; Miserez, Ali

    2013-10-01

    Efforts to engineer new materials inspired by biological structures are hampered by the lack of genomic data from many model organisms studied in biomimetic research. Here we show that biomimetic engineering can be accelerated by integrating high-throughput RNA-seq with proteomics and advanced materials characterization. This approach can be applied to a broad range of systems, as we illustrate by investigating diverse high-performance biological materials involved in embryo protection, adhesion and predation. In one example, we rapidly engineer recombinant squid sucker ring teeth proteins into a range of structural and functional materials, including nanopatterned surfaces and photo-cross-linked films that exceed the mechanical properties of most natural and synthetic polymers. Integrating RNA-seq with proteomics and materials science facilitates the molecular characterization of natural materials and the effective translation of their molecular designs into a wide range of bio-inspired materials.

  14. Arioc: high-throughput read alignment with GPU-accelerated exploration of the seed-and-extend search space

    Directory of Open Access Journals (Sweden)

    Richard Wilton

    2015-03-01

    Full Text Available When computing alignments of DNA sequences to a large genome, a key element in achieving high processing throughput is to prioritize locations in the genome where high-scoring mappings might be expected. We formulated this task as a series of list-processing operations that can be efficiently performed on graphics processing unit (GPU) hardware. We followed this approach in implementing a read aligner called Arioc that uses GPU-based parallel sort and reduction techniques to identify high-priority locations where potential alignments may be found. We then carried out a read-by-read comparison of Arioc's reported alignments with the alignments found by several leading read aligners. With simulated reads, Arioc has comparable or better accuracy than the other read aligners we tested. With human sequencing reads, Arioc demonstrates significantly greater throughput than the other aligners we evaluated across a wide range of sensitivity settings. The Arioc software is available at https://github.com/RWilton/Arioc. It is released under a BSD open-source license.
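    The prioritization idea can be illustrated with a minimal CPU stand-in for the GPU sort-and-reduce pass (function names and hit values are hypothetical, not Arioc's API): seed hits are bucketed by alignment diagonal, and the densest diagonals are extended first.

```python
from collections import Counter

def prioritize_diagonals(seed_hits):
    """Bucket exact-match seed hits by alignment diagonal
    (reference position minus offset in read) and rank diagonals by
    hit count; high-count diagonals are the high-priority locations."""
    counts = Counter(ref_pos - read_pos for read_pos, ref_pos in seed_hits)
    return [d for d, _ in counts.most_common()]

# hypothetical seed hits for one read: (offset_in_read, position_in_reference)
hits = [(0, 1000), (5, 1005), (12, 1012), (3, 2003), (7, 5321)]
order = prioritize_diagonals(hits)  # diagonal 1000 ranks first (3 hits)
```

On a GPU the same effect is achieved by sorting the hit list by diagonal key and reducing runs of equal keys, which is what makes the operation data-parallel.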

  15. HASE: Framework for efficient high-dimensional association analyses

    NARCIS (Netherlands)

    G.V. Roshchupkin (Gennady); H.H.H. Adams (Hieab); M.W. Vernooij (Meike); A. Hofman (Albert); C.M. van Duijn (Cornelia); M.K. Ikram (Kamran); W.J. Niessen (Wiro)

    2016-01-01

    High-throughput technology can now provide rich information on a person's biological makeup and environmental surroundings. Important discoveries have been made by relating these data to various health outcomes in fields such as genomics, proteomics, and medical imaging. However,

  16. HASE : Framework for efficient high-dimensional association analyses

    NARCIS (Netherlands)

    Roshchupkin, G. V.; Adams, H; Vernooij, Meike W.; Hofman, A; Van Duijn, C. M.; Ikram, M. Arfan; Niessen, W.J.

    2016-01-01

    High-throughput technology can now provide rich information on a person's biological makeup and environmental surroundings. Important discoveries have been made by relating these data to various health outcomes in fields such as genomics, proteomics, and medical imaging. However,

  17. High-throughput protein crystallization on the World Community Grid and the GPU

    International Nuclear Information System (INIS)

    Kotseruba, Yulia; Cumbaa, Christian A; Jurisica, Igor

    2012-01-01

    We have developed CPU and GPU versions of an automated image analysis and classification system for protein crystallization trial images from the Hauptman-Woodward Institute's High-Throughput Screening lab. The analysis step computes 12,375 numerical features per image. Using these features, we have trained a classifier that distinguishes 11 different crystallization outcomes, recognizing 80% of all crystals, 94% of clear drops, and 94% of precipitates. The computing requirements for this analysis system are large. The complete HWI archive of 120 million images is being processed by donated CPU cycles on World Community Grid, with a GPU phase launching in early 2012. The main computational burden of the analysis is the measurement of textural (GLCM) features within the image at multiple neighbourhoods, distances, and greyscale intensity resolutions. CPU runtime averages 4,092 seconds (single-threaded) on an Intel Xeon, but only 65 seconds on an NVIDIA Tesla C2050. We report on the process of adapting the C++ code to OpenCL, optimized for multiple platforms.
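    The GLCM features dominate the runtime because a co-occurrence matrix must be built for every neighbourhood, displacement, and grey-level resolution. A minimal pure-Python sketch of one such matrix and a derived texture feature (illustrative only; the production code is C++/OpenCL):

```python
def glcm(img, dx, dy, levels=8):
    """Grey-level co-occurrence matrix for one pixel displacement (dx, dy),
    normalized so its entries sum to 1."""
    h, w = len(img), len(img[0])
    mx = max(max(row) for row in img)
    # quantize to `levels` grey levels
    q = [[v * (levels - 1) // max(mx, 1) for v in row] for row in img]
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[q[y][x]][q[y2][x2]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def contrast(p):
    """GLCM contrast feature: sum of (i - j)^2 * p[i][j]."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))
```

The per-pixel accumulation is independent across pixels, which is why the computation maps well onto GPU threads.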

  18. A qualitative and quantitative evaluation of the peptide characteristics of microwave- and ultrasound-assisted digestion in discovery and targeted proteomic analyses.

    Science.gov (United States)

    Guo, Zhengguang; Cheng, Jie; Sun, Haidan; Sun, Wei

    2017-08-30

    Fast digestion methods can dramatically accelerate enzymatic digestion and increase the throughput of proteomic analysis. However, the peptide characteristics of fast digestion methods and their performance in discovery and targeted proteomic analyses must be systematically evaluated. Three digestion methods, namely overnight digestion, microwave-assisted protein enzymatic digestion (MAPED), and high-intensity focused ultrasonic-assisted enzymatic digestion (HIFUSAED), with trypsin or trypsin/Lys-C, were comprehensively compared in both discovery and targeted proteomic analyses using the HeLa cell proteome. In discovery proteomic analysis, the highest numbers of peptides and proteins were identified when the sample was digested via the MAPED method with trypsin/Lys-C. The fast digestion methods showed a higher mis-cleavage rate and a lower semi-tryptic rate than the overnight digestion method. In both label-free quantitative analysis and targeted proteomic analysis, fully cleaved peptides (FCPs) and mis-cleaved peptides (MCPs) from both the fast digestion methods and the overnight digestion method showed good reproducibility provided they were of good abundance. When both the FCPs and MCPs were included in the analysis, MAPED with trypsin/Lys-C gave the best results for both discovery proteomic analysis and relative quantitative targeted proteomic analysis. These results will be beneficial for the application of fast digestion methods to proteomics. Copyright © 2017 John Wiley & Sons, Ltd.
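    The mis-cleavage rate compared above can be computed directly from identified peptide sequences. A short sketch under the usual tryptic rule (cleavage after K/R except before P; function names and sequences are illustrative):

```python
def missed_cleavages(peptide):
    """Count internal tryptic sites (K or R not followed by P) that
    trypsin left uncut within an identified peptide."""
    return sum(1 for i, aa in enumerate(peptide[:-1])
               if aa in "KR" and peptide[i + 1] != "P")

def miscleavage_rate(peptides):
    """Fraction of identified peptides carrying at least one missed cleavage."""
    return sum(missed_cleavages(p) > 0 for p in peptides) / len(peptides)
```

Applied to the peptide lists from each digestion protocol, this yields the per-method mis-cleavage rates used in such comparisons.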

  19. Characterizing ncRNAs in human pathogenic protists using high-throughput sequencing technology

    Directory of Open Access Journals (Sweden)

    Lesley Joan Collins

    2011-12-01

    Full Text Available ncRNAs are key genes in many human diseases including cancer and viral infection, as well as providing critical functions in pathogenic organisms such as fungi, bacteria, viruses and protists. Until now the identification and characterization of ncRNAs associated with disease has been slow or inaccurate requiring many years of testing to understand complicated RNA and protein gene relationships. High-throughput sequencing now offers the opportunity to characterize miRNAs, siRNAs, snoRNAs and long ncRNAs on a genomic scale making it faster and easier to clarify how these ncRNAs contribute to the disease state. However, this technology is still relatively new, and ncRNA discovery is not an application of high priority for streamlined bioinformatics. Here we summarize background concepts and practical approaches for ncRNA analysis using high-throughput sequencing, and how it relates to understanding human disease. As a case study, we focus on the parasitic protists Giardia lamblia and Trichomonas vaginalis, where large evolutionary distance has meant difficulties in comparing ncRNAs with those from model eukaryotes. A combination of biological, computational and sequencing approaches has enabled easier classification of ncRNA classes such as snoRNAs, but has also aided the identification of novel classes. It is hoped that a higher level of understanding of ncRNA expression and interaction may aid in the development of less harsh treatment for protist-based diseases.

  20. Characterizing ncRNAs in Human Pathogenic Protists Using High-Throughput Sequencing Technology

    Science.gov (United States)

    Collins, Lesley Joan

    2011-01-01

    ncRNAs are key genes in many human diseases including cancer and viral infection, as well as providing critical functions in pathogenic organisms such as fungi, bacteria, viruses, and protists. Until now the identification and characterization of ncRNAs associated with disease has been slow or inaccurate requiring many years of testing to understand complicated RNA and protein gene relationships. High-throughput sequencing now offers the opportunity to characterize miRNAs, siRNAs, small nucleolar RNAs (snoRNAs), and long ncRNAs on a genomic scale, making it faster and easier to clarify how these ncRNAs contribute to the disease state. However, this technology is still relatively new, and ncRNA discovery is not an application of high priority for streamlined bioinformatics. Here we summarize background concepts and practical approaches for ncRNA analysis using high-throughput sequencing, and how it relates to understanding human disease. As a case study, we focus on the parasitic protists Giardia lamblia and Trichomonas vaginalis, where large evolutionary distance has meant difficulties in comparing ncRNAs with those from model eukaryotes. A combination of biological, computational, and sequencing approaches has enabled easier classification of ncRNA classes such as snoRNAs, but has also aided the identification of novel classes. It is hoped that a higher level of understanding of ncRNA expression and interaction may aid in the development of less harsh treatment for protist-based diseases. PMID:22303390

  1. Accelerated proteomic visualization of individual predatory venoms of Conus purpurascens reveals separately evolved predation-evoked venom cabals.

    Science.gov (United States)

    Himaya, S W A; Marí, Frank; Lewis, Richard J

    2018-01-10

    Cone snail venoms have separately evolved for predation and defense. Despite remarkable inter- and intra-species variability, defined sets of synergistic venom peptides (cabals) are considered essential for prey capture by cone snails. To better understand the role of predatory cabals in cone snails, we used a high-throughput proteomic data mining and visualisation approach. Using this approach, the relationships among the predatory venom peptides from nine C. purpurascens specimens were systematically analysed. Surprisingly, potentially synergistic levels of κ-PVIIA and δ-PVIA were identified in only five of the nine specimens. In contrast, the remaining four specimens lacked significant levels of these known excitotoxins and instead contained high levels of the muscle nAChR blockers ψ-PIIIE and αA-PIVA. Interestingly, one of the nine specimens expressed both cabals, suggesting that these sub-groups might represent inter-breeding sub-species of C. purpurascens. High-throughput cluster analysis also revealed that these two cabals clustered with distinct groups of venom peptides that are presently uncharacterised. This is the first report showing that cone snails of the same species can deploy two separate and distinct predatory cabals for prey capture, and it shows that the cabals deployed by this species can be more complex than presently realized. Our semi-automated proteomic analysis facilitates the deconvolution of complex venoms to identify co-evolved families of peptides and helps unravel their evolutionary relationships.

  2. A community proposal to integrate proteomics activities in ELIXIR

    NARCIS (Netherlands)

    Vizcaíno, Juan Antonio; Walzer, Mathias; Jiménez, Rafael C; Bittremieux, Wout; Bouyssié, David; Carapito, Christine; Corrales, Fernando; Ferro, Myriam; Heck, Albert J R; Horvatovich, Peter; Hubalek, Martin; Lane, Lydie; Laukens, Kris; Levander, Fredrik; Lisacek, Frederique; Novak, Petr; Palmblad, Magnus; Piovesan, Damiano; Pühler, Alfred; Schwämmle, Veit; Valkenborg, Dirk; van Rijswijk, Merlijn; Vondrasek, Jiri; Eisenacher, Martin; Martens, Lennart; Kohlbacher, Oliver

    2017-01-01

    Computational approaches have been major drivers behind the progress of proteomics in recent years. The aim of this white paper is to provide a framework for integrating computational proteomics into ELIXIR in the near future, and thus to broaden the portfolio of omics technologies supported by this

  3. High throughput materials research and development for lithium ion batteries

    Directory of Open Access Journals (Sweden)

    Parker Liu

    2017-09-01

    Full Text Available Development of next-generation batteries requires a breakthrough in materials. The traditional one-by-one method, which synthesizes one single-composition material at a time, is time-consuming and costly. High-throughput and combinatorial experimentation is an effective method to synthesize and characterize large numbers of materials over a broad compositional region in a short time, greatly speeding up the discovery and optimization of materials at lower cost. In this work, high-throughput and combinatorial materials synthesis technologies for lithium-ion battery research are discussed, and our efforts on developing such instrumentation are introduced.

  4. Towards sensitive, high-throughput, biomolecular assays based on fluorescence lifetime

    Science.gov (United States)

    Ioanna Skilitsi, Anastasia; Turko, Timothé; Cianfarani, Damien; Barre, Sophie; Uhring, Wilfried; Hassiepen, Ulrich; Léonard, Jérémie

    2017-09-01

    Time-resolved fluorescence detection for robust sensing of biomolecular interactions is developed by implementing time-correlated single photon counting in high-throughput conditions. Droplet microfluidics is used as a promising platform for the very fast handling of low-volume samples. We illustrate the potential of this very sensitive and cost-effective technology in the context of an enzymatic activity assay based on fluorescently-labeled biomolecules. Fluorescence lifetime detection by time-correlated single photon counting is shown to enable reliable discrimination between positive and negative control samples at a throughput as high as several hundred samples per second.
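    The lifetime read-out described above rests on a simple statistical fact: for a mono-exponential decay, the maximum-likelihood estimate of the lifetime from TCSPC photon arrival times is their mean. A hedged sketch with synthetic photons (ignoring the instrument response function and background, which a real assay must handle):

```python
import random

def fit_lifetime(arrival_times_ns):
    """ML estimate of a mono-exponential fluorescence lifetime:
    for I(t) ~ exp(-t/tau), the ML estimator of tau is the mean
    photon arrival time (IRF and background ignored)."""
    return sum(arrival_times_ns) / len(arrival_times_ns)

random.seed(42)
tau_true = 4.0  # ns, hypothetical lifetime of the labelled substrate
photons = [random.expovariate(1.0 / tau_true) for _ in range(50_000)]
tau_hat = fit_lifetime(photons)  # close to 4.0 ns
```

With tens of thousands of photons per droplet, the estimator's standard error (tau/sqrt(N)) is small enough to discriminate positive from negative control samples reliably.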

  5. High-throughput cloning and expression in recalcitrant bacteria

    NARCIS (Netherlands)

    Geertsma, Eric R.; Poolman, Bert

    We developed a generic method for high-throughput cloning in bacteria that are less amenable to conventional DNA manipulations. The method involves ligation-independent cloning in an intermediary Escherichia coli vector, which is rapidly converted via vector-backbone exchange (VBEx) into an

  6. High-throughput Molecular Simulations of MOFs for CO2 Separation: Opportunities and Challenges

    Science.gov (United States)

    Erucar, Ilknur; Keskin, Seda

    2018-02-01

    Metal organic frameworks (MOFs) have emerged as great alternatives to traditional nanoporous materials for CO2 separation applications. MOFs are porous materials that are formed by self-assembly of transition metals and organic ligands. The most important advantage of MOFs over well-known porous materials is the possibility to generate multiple materials with varying structural properties and chemical functionalities by changing the combination of metal centers and organic linkers during the synthesis. This leads to a large diversity of materials with various pore sizes and shapes that can be efficiently used for CO2 separations. Since the number of synthesized MOFs has already reached several thousand, experimental investigation of each MOF at the lab scale is not practical. High-throughput computational screening of MOFs is a great opportunity to identify the best materials for CO2 separation and to gain molecular-level insights into structure-performance relationships. This type of knowledge can be used to design new materials with the desired structural features that can lead to extraordinarily high CO2 selectivities. In this mini-review, we focus on developments in high-throughput molecular simulations of MOFs for CO2 separations. After reviewing the current studies on this topic, we discuss the opportunities and challenges in the field and address potential future developments.
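    A typical ranking metric in such screening studies is the adsorption selectivity computed from simulated mixture uptakes. A minimal sketch for ranking hypothetical MOFs (all names and uptake values invented for illustration):

```python
def co2_selectivity(q_co2, q_n2, y_co2=0.15, y_n2=0.85):
    """Adsorption selectivity from simulated mixture uptakes q (mol/kg)
    at gas-phase mole fractions y: S = (q_CO2/q_N2) / (y_CO2/y_N2)."""
    return (q_co2 / q_n2) / (y_co2 / y_n2)

# hypothetical MOFs -> (CO2 uptake, N2 uptake) from molecular simulation
mofs = {"MOF-A": (2.4, 0.30), "MOF-B": (1.8, 0.12), "MOF-C": (3.1, 0.60)}
ranked = sorted(mofs, key=lambda m: co2_selectivity(*mofs[m]), reverse=True)
```

Ranking thousands of structures by such metrics is what lets screening narrow a MOF database down to a short list for experimental follow-up.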

  7. Target identification of natural and traditional medicines with quantitative chemical proteomics approaches.

    Science.gov (United States)

    Wang, Jigang; Gao, Liqian; Lee, Yew Mun; Kalesh, Karunakaran A; Ong, Yong Siang; Lim, Jaehong; Jee, Joo-Eun; Sun, Hongyan; Lee, Su Seong; Hua, Zi-Chun; Lin, Qingsong

    2016-06-01

    Natural and traditional medicines, being a great source of drugs and drug leads, have regained wide interest due to the limited success of high-throughput screening of compound libraries in the past few decades and recent technological advancements. Many drugs/bioactive compounds exert their functions through interaction with their protein targets, with more and more drugs showing the ability to target multiple proteins; thus, target identification plays an important role in drug discovery and biomedical research. Identifying drug targets not only furthers the understanding of the mechanism of action (MOA) of a drug but also reveals its potential therapeutic applications and adverse side effects. Chemical proteomics makes use of affinity chromatography approaches coupled with mass spectrometry to systematically identify small molecule-protein interactions. Although traditional affinity-based chemical proteomics approaches have made great progress in the identification of cellular targets and elucidation of MOAs of many bioactive molecules, nonspecific binding remains a major issue that may reduce the accuracy of target identification and hamper the drug development process. Recently, quantitative proteomics approaches, namely metabolic labeling, chemical labeling, or label-free approaches, have been implemented in target identification to overcome such limitations. In this review, we summarize and discuss recent advances in the application of various quantitative chemical proteomics approaches for the identification of targets of natural and traditional medicines. Copyright © 2016. Published by Elsevier Inc.

  8. Characterization of the porcine synovial fluid proteome and a comparison to the plasma proteome

    Directory of Open Access Journals (Sweden)

    Tue Bjerg Bennike

    2015-12-01

    In addition, we analyzed the proteome of human plasma and compared it to the obtained porcine synovial fluid proteome. The proteomes of the two body fluids were found to be highly similar, underlining the plasma-derived nature of many synovial fluid components. The healthy porcine synovial fluid proteomics data, the human rheumatoid arthritis synovial fluid proteomics data used in the method optimization, the human plasma proteomics data, and the search results have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD000935.

  9. Effects of High-Pressure Treatment on the Muscle Proteome of Hake by Bottom-Up Proteomics.

    Science.gov (United States)

    Carrera, Mónica; Fidalgo, Liliana G; Saraiva, Jorge A; Aubourg, Santiago P

    2018-05-02

    A bottom-up proteomics approach was applied to study the effects of high-pressure (HP) treatment on the muscle proteome of fish. The performance of the approach was established for a prior HP treatment (150-450 MPa for 2 min) of frozen (up to 5 months at -10 °C) European hake (Merluccius merluccius). Concerning possible protein biomarkers of quality changes, significant degradation after applying a pressure ≥430 MPa could be observed for phosphoglycerate mutase-1, enolase, creatine kinase, fructose bisphosphate aldolase, triosephosphate isomerase, and nucleoside diphosphate kinase; conversely, electrophoretic bands assigned to tropomyosin, glyceraldehyde-3-phosphate dehydrogenase, and beta parvalbumin increased in intensity after applying a pressure ≥430 MPa. This repository of potential protein biomarkers may be very useful for further HP investigations related to fish quality.

  10. Fluorescence-based high-throughput screening of dicer cleavage activity.

    Science.gov (United States)

    Podolska, Katerina; Sedlak, David; Bartunek, Petr; Svoboda, Petr

    2014-03-01

    Production of small RNAs by ribonuclease III Dicer is a key step in microRNA and RNA interference pathways, which employ Dicer-produced small RNAs as sequence-specific silencing guides. Further studies and manipulations of microRNA and RNA interference pathways would benefit from identification of small-molecule modulators. Here, we report a study of a fluorescence-based in vitro Dicer cleavage assay, which was adapted for high-throughput screening. The kinetic assay can be performed under single-turnover conditions (35 nM substrate and 70 nM Dicer) in a small volume (5 µL), which makes it suitable for high-throughput screening in a 1536-well format. As a proof of principle, a small library of bioactive compounds was analyzed, demonstrating potential of the assay.
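    Under the single-turnover conditions described (enzyme in excess of substrate), the cleaved fraction follows f(t) = 1 − exp(−k_obs·t), so k_obs can be read off by linear regression of −ln(1 − f) against time. A small sketch with synthetic data (not measurements from the assay):

```python
import math

def kobs_from_timecourse(times, frac_cleaved):
    """Estimate the single-turnover rate constant k_obs by least-squares
    regression (through the origin-consistent linear form) of
    -ln(1 - f) versus t, where f(t) = 1 - exp(-k_obs * t)."""
    ys = [-math.log(1.0 - f) for f in frac_cleaved]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(x * x for x in times)
    sxy = sum(x * y for x, y in zip(times, ys))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

t = [1, 2, 4, 8, 16]                          # minutes (synthetic)
f = [1 - math.exp(-0.2 * ti) for ti in t]     # ideal curve, k_obs = 0.2
k = kobs_from_timecourse(t, f)
```

In a screen, a compound that shifts k_obs relative to the DMSO control would be flagged as a Dicer modulator.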

  11. Repurposing a Benchtop Centrifuge for High-Throughput Single-Molecule Force Spectroscopy.

    Science.gov (United States)

    Yang, Darren; Wong, Wesley P

    2018-01-01

    We present high-throughput single-molecule manipulation using a benchtop centrifuge, overcoming limitations common in other single-molecule approaches such as high cost, low throughput, technical difficulty, and strict infrastructure requirements. An inexpensive and compact Centrifuge Force Microscope (CFM) adapted to a commercial centrifuge enables use by nonspecialists, and integration with DNA nanoswitches facilitates both reliable measurements and repeated molecular interrogation. Here, we provide detailed protocols for constructing the CFM, creating DNA nanoswitch samples, and carrying out single-molecule force measurements.
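    The force applied in a CFM follows directly from the spin: the buoyancy-corrected centripetal force on a tethered bead is F = V·Δρ·ω²·R. A sketch with illustrative bead and rotor parameters (densities and geometry are assumptions, not values from the protocol):

```python
import math

def bead_force_pN(radius_um, rpm, r_cm, rho_bead=1.05e3, rho_buffer=1.00e3):
    """Buoyancy-corrected centripetal force on a spherical bead:
    F = V * (rho_bead - rho_buffer) * omega^2 * R, returned in pN."""
    V = 4.0 / 3.0 * math.pi * (radius_um * 1e-6) ** 3   # bead volume, m^3
    omega = 2.0 * math.pi * rpm / 60.0                  # angular speed, rad/s
    F = V * (rho_bead - rho_buffer) * omega ** 2 * (r_cm * 1e-2)  # newtons
    return F * 1e12                                     # convert to pN

# a 1.5 um polystyrene bead in water at 3000 rpm, 10 cm from the axis
f = bead_force_pN(1.5, 3000, 10)  # ~7 pN, in the single-molecule range
```

Because force scales with ω², sweeping the centrifuge speed sweeps the applied force across the piconewton range relevant to DNA nanoswitch experiments.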

  12. Genomics and proteomics: Applications in autoimmune diseases

    Directory of Open Access Journals (Sweden)

    Wolfgang Hueber

    2009-08-01

    Full Text Available Wolfgang Hueber(1,2,3), William H Robinson(1,2). (1) VA Palo Alto Health Care System, Palo Alto, CA, USA; (2) Division of Immunology and Rheumatology, Stanford University School of Medicine, Stanford, CA, USA; (3) Novartis Institutes of Biomedical Research, Novartis, Basle, Switzerland. Abstract: Tremendous progress has been made over the past decade in the development and refinement of genomic and proteomic technologies for the identification of novel drug targets and molecular signatures associated with clinically important disease states, disease subsets, or differential responses to therapies. The rapid progress in high-throughput technologies has been preceded and paralleled by the elucidation of cytokine networks, followed by the stepwise clinical development of pathway-specific biological therapies that revolutionized the treatment of autoimmune diseases. Together, these advances provide opportunities for a long-anticipated personalized medicine approach to the treatment of autoimmune disease. The ever-increasing number of novel, innovative therapies will need to be harnessed wisely to achieve optimal long-term outcomes in as many patients as possible while complying with the demands of health authorities and health care providers for evidence-based, economically sound prescription of these expensive drugs. Genomic and proteomic profiling of patients with autoimmune diseases holds great promise in two major clinical areas: (1) rapid identification of new targets for the development of innovative therapies and (2) identification of patients who will experience optimal benefit and minimal risk from a specific (targeted) therapy. In this review, we attempt to capture important recent developments in the application of genomic and proteomic technologies to translational research by discussing informative examples covering a diversity of autoimmune diseases. Keywords: proteomics, genomics, autoimmune diseases, antigen microarrays, 2-Dih, rheumatoid arthritis

  13. Integration of Proteomics, Bioinformatics, and Systems Biology in Traumatic Brain Injury Biomarker Discovery

    Science.gov (United States)

    Guingab-Cagmat, J.D.; Cagmat, E.B.; Hayes, R.L.; Anagli, J.

    2013-01-01

    Traumatic brain injury (TBI) is a major medical crisis without any FDA-approved pharmacological therapies that have been demonstrated to improve functional outcomes. It has been argued that discovery of disease-relevant biomarkers might help to guide successful clinical trials for TBI. Major advances in mass spectrometry (MS) have revolutionized the field of proteomic biomarker discovery and facilitated the identification of several candidate markers that are being further evaluated for their efficacy as TBI biomarkers. However, several hurdles have to be overcome even during the discovery phase, which is only the first step in the long process of biomarker development. The high-throughput nature of MS-based proteomic experiments generates a massive amount of mass spectral data, presenting great challenges in downstream interpretation. Currently, different bioinformatics platforms are available for functional analysis and data mining of MS-generated proteomic data. These tools provide a way to convert data sets to biologically interpretable results and functional outcomes. A strategy that has promise in advancing biomarker development involves the triad of proteomics, bioinformatics, and systems biology. In this review, a brief overview of how bioinformatics and systems biology tools analyze, transform, and interpret complex MS datasets into biologically relevant results is discussed. In addition, challenges and limitations of proteomics, bioinformatics, and systems biology in TBI biomarker discovery are presented. A brief survey of studies that utilized these three overlapping disciplines in TBI biomarker discovery is also presented. Finally, examples of TBI biomarkers and their applications are discussed. PMID:23750150

  14. Application of targeted quantitative proteomics analysis in human cerebrospinal fluid using a liquid chromatography matrix-assisted laser desorption/ionization time-of-flight tandem mass spectrometer (LC MALDI TOF/TOF) platform.

    Science.gov (United States)

    Pan, Sheng; Rush, John; Peskind, Elaine R; Galasko, Douglas; Chung, Kathryn; Quinn, Joseph; Jankovic, Joseph; Leverenz, James B; Zabetian, Cyrus; Pan, Catherine; Wang, Yan; Oh, Jung Hun; Gao, Jean; Zhang, Jianpeng; Montine, Thomas; Zhang, Jing

    2008-02-01

    Targeted quantitative proteomics by mass spectrometry aims to selectively detect one or a panel of peptides/proteins in a complex sample and is particularly appealing for novel biomarker verification/validation because it does not require specific antibodies. Here, we demonstrated the application of targeted quantitative proteomics in searching, identifying, and quantifying selected peptides in human cerebrospinal fluid (CSF) using a matrix-assisted laser desorption/ionization time-of-flight tandem mass spectrometer (MALDI TOF/TOF)-based platform. The approach involved two major components: the use of isotopic-labeled synthetic peptides as references for targeted identification and quantification, and a highly selective mass spectrometric analysis based on the unique characteristics of the MALDI instrument. The platform provides high confidence for targeted peptide detection in a complex system and can potentially be developed into a high-throughput system. Using the liquid chromatography (LC) MALDI TOF/TOF platform and the complementary identification strategy, we were able to selectively identify and quantify a panel of targeted peptides in the whole proteome of CSF without prior depletion of abundant proteins. The effectiveness and robustness of the approach with respect to different sample complexity, sample preparation strategies, and mass spectrometric quantification were evaluated. Other issues related to chromatographic separation and the feasibility of high-throughput analysis were also discussed. Finally, we applied targeted quantitative proteomics to analyze a subset of previously identified candidate markers in CSF samples of patients with Parkinson's disease (PD) at different stages and Alzheimer's disease (AD), along with normal controls.
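    The quantification step with isotopic-labeled reference peptides reduces to a peak-area ratio against a known spiked amount. A minimal sketch (all values hypothetical):

```python
def quantify_fmol(light_area, heavy_area, spike_fmol):
    """Absolute quantification with a stable-isotope-labeled peptide:
    the endogenous (light) amount equals the light/heavy peak-area
    ratio times the spiked amount of the heavy standard."""
    return light_area / heavy_area * spike_fmol

# hypothetical extracted peak areas for one targeted peptide
amount = quantify_fmol(light_area=2.4e6, heavy_area=1.2e6, spike_fmol=50.0)
```

Because the light and heavy forms co-elute and ionize identically, the ratio cancels matrix and instrument effects, which is what gives the method its robustness in unfractionated CSF.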

  15. Mass Spectrometry-Based Proteomics in Molecular Diagnostics: Discovery of Cancer Biomarkers Using Tissue Culture

    Directory of Open Access Journals (Sweden)

    Debasish Paul

    2013-01-01

    Full Text Available Accurate diagnosis and proper monitoring of cancer patients remain key obstacles for successful cancer treatment and prevention. Therein comes the need for biomarker discovery, which is crucial to current oncological and other clinical practices, having the potential to impact diagnosis and prognosis. In fact, most biomarkers have been discovered utilizing proteomics-based approaches. Although high-throughput mass spectrometry-based proteomic approaches like SILAC, 2D-DIGE, and iTRAQ are filling in the pitfalls of the conventional techniques, serum proteomics still poses the hurdle of a wide range of protein concentrations, and the availability of patient tissue samples is a limitation for biomarker discovery. Thus, researchers have looked for alternatives, and profiling of candidate biomarkers through tissue culture of tumor cell lines comes up as a promising option. It is a rich source of tumor cell-derived proteins, thereby representing a wide array of potential biomarkers. Interestingly, most of the clinical biomarkers in use today (CA 125, CA 15.3, CA 19.9, and PSA) were discovered through tissue culture-based systems and tissue extracts. This paper tries to emphasize the tissue culture-based discovery of candidate biomarkers through various mass spectrometry-based proteomic approaches.

  16. Mass Spectrometry-Based Proteomics in Molecular Diagnostics: Discovery of Cancer Biomarkers Using Tissue Culture

    Science.gov (United States)

    Paul, Debasish; Kumar, Avinash; Gajbhiye, Akshada; Santra, Manas K.; Srikanth, Rapole

    2013-01-01

    Accurate diagnosis and proper monitoring of cancer patients remain key obstacles for successful cancer treatment and prevention. Therein comes the need for biomarker discovery, which is crucial to current oncological and other clinical practices, having the potential to impact diagnosis and prognosis. In fact, most biomarkers have been discovered utilizing proteomics-based approaches. Although high-throughput mass spectrometry-based proteomic approaches like SILAC, 2D-DIGE, and iTRAQ are filling in the pitfalls of the conventional techniques, serum proteomics still poses the hurdle of a wide range of protein concentrations, and the availability of patient tissue samples is a limitation for biomarker discovery. Thus, researchers have looked for alternatives, and profiling of candidate biomarkers through tissue culture of tumor cell lines comes up as a promising option. It is a rich source of tumor cell-derived proteins, thereby representing a wide array of potential biomarkers. Interestingly, most of the clinical biomarkers in use today (CA 125, CA 15.3, CA 19.9, and PSA) were discovered through tissue culture-based systems and tissue extracts. This paper tries to emphasize the tissue culture-based discovery of candidate biomarkers through various mass spectrometry-based proteomic approaches. PMID:23586059

  17. Uncertainty Quantification in High Throughput Screening ...

    Science.gov (United States)

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
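The bootstrap procedure described above can be sketched in miniature. The toy data and the interpolation-based AC50 estimator below are illustrative assumptions only; ToxCast's actual pipeline fits parametric (e.g. Hill) concentration-response models.

```python
import random

def ac50_by_interpolation(concs, responses):
    """Estimate the concentration giving 50% of the maximal response by
    linear interpolation on the mean response curve (illustrative only,
    not the model fitting used by ToxCast)."""
    half_max = max(responses) / 2.0
    points = list(zip(concs, responses))
    for (c1, r1), (c2, r2) in zip(points, points[1:]):
        if r1 < half_max <= r2:
            frac = (half_max - r1) / (r2 - r1)
            return c1 + frac * (c2 - c1)
    return None

def bootstrap_ac50(concs, replicate_data, n_boot=1000, seed=42):
    """Resample replicates at each concentration with replacement to get
    a 95% confidence interval on the AC50 estimate."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        means = []
        for reps in replicate_data:
            sample = [rng.choice(reps) for _ in reps]
            means.append(sum(sample) / len(sample))
        est = ac50_by_interpolation(concs, means)
        if est is not None:
            estimates.append(est)
    estimates.sort()
    return (estimates[int(0.025 * len(estimates))],
            estimates[int(0.975 * len(estimates))])

# hypothetical replicate responses at four test concentrations (uM)
concs = [0.1, 1.0, 10.0, 100.0]
replicate_data = [[2, 3, 1], [20, 25, 22], [60, 55, 65], [95, 100, 98]]
low, high = bootstrap_ac50(concs, replicate_data)
print(f"AC50 95% CI: [{low:.2f}, {high:.2f}] uM")
```

The width of the resulting interval is exactly the fitting uncertainty the abstract argues should be propagated into downstream models rather than discarded at the hit-call stage.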

  18. Evaluation of Capacity on a High Throughput Vol-oxidizer for Operability

    International Nuclear Information System (INIS)

    Kim, Young Hwan; Park, Geun Il; Lee, Jung Won; Jung, Jae Hoo; Kim, Ki Ho; Lee, Yong Soon; Lee, Do Youn; Kim, Su Sung

    2010-01-01

    KAERI is developing a pyro-process. As a piece of process equipment, a high throughput vol-oxidizer that can handle several tens of kg HM/batch was developed to supply U3O8 powders to an electrolytic reduction (ER) reactor. To increase the reduction yield, UO2 pellets should be converted into uniform powders. In this paper, we evaluate the operability of the high throughput vol-oxidizer. The evaluation covered three targets: a mechanical motion test, a heating test, and a hull separation test. Using a control system, mechanical motion tests of the vol-oxidizer were conducted and heating rates were analyzed. Separation tests of hulls for recovery rate were also conducted. The test results of the vol-oxidizer will be applied to its operability assessment. A study on the characteristics of the volatile gas produced during the vol-oxidation process is not included in this study

  19. Fun with High Throughput Toxicokinetics (CalEPA webinar)

    Science.gov (United States)

    Thousands of chemicals have been profiled by high-throughput screening (HTS) programs such as ToxCast and Tox21. These chemicals are tested in part because there are limited or no data on hazard, exposure, or toxicokinetics (TK). TK models aid in predicting tissue concentrations ...

  20. Beyond the Natural Proteome: Nondegenerate Saturation Mutagenesis-Methodologies and Advantages.

    Science.gov (United States)

    Ferreira Amaral, M M; Frigotto, L; Hine, A V

    2017-01-01

    Beyond the natural proteome, high-throughput mutagenesis offers the protein engineer an opportunity to "tweak" the wild-type activity of a protein to create a recombinant protein with the required attributes. Of the various approaches available, saturation mutagenesis is one of the core techniques employed by protein engineers, and in recent times, nondegenerate saturation mutagenesis has emerged as the approach of choice. This review compares the current methodologies available for conducting nondegenerate saturation mutagenesis with traditional degenerate saturation mutagenesis, and briefly outlines the options available for screening the resulting libraries to discover a novel protein with the required activity and/or specificity. © 2017 Elsevier Inc. All rights reserved.

  1. Best Performers Announced for the NCI-CPTAC DREAM Proteogenomics Computational Challenge | Office of Cancer Clinical Proteomics Research

    Science.gov (United States)

    The National Cancer Institute (NCI) Clinical Proteomic Tumor Analysis Consortium (CPTAC) is pleased to announce that teams led by Jaewoo Kang (Korea University) and Yuanfang Guan with Hongyang Li (University of Michigan) are the best performers of the NCI-CPTAC DREAM Proteogenomics Computational Challenge. Over 500 participants from 20 countries registered for the Challenge, which offered $25,000 in cash awards contributed by the NVIDIA Foundation through its Compute the Cure initiative.

  2. RaftProt: mammalian lipid raft proteome database.

    Science.gov (United States)

    Shah, Anup; Chen, David; Boda, Akash R; Foster, Leonard J; Davis, Melissa J; Hill, Michelle M

    2015-01-01

    RaftProt (http://lipid-raft-database.di.uq.edu.au/) is a database of mammalian lipid raft-associated proteins as reported in high-throughput mass spectrometry studies. Lipid rafts are specialized membrane microdomains enriched in cholesterol and sphingolipids thought to act as dynamic signalling and sorting platforms. Given their fundamental roles in cellular regulation, there is a plethora of information on the size, composition and regulation of these membrane microdomains, including a large number of proteomics studies. To facilitate the mining and analysis of published lipid raft proteomics studies, we have developed a searchable database, RaftProt. In addition to browsing the studies, performing basic queries by protein and gene name, and searching experiments by cell, tissue and organism, we have implemented several advanced features to facilitate data mining. To address the issue of potential bias due to the biochemical preparation procedures used, we have captured the lipid raft preparation methods and implemented an advanced search option for methodology and sample treatment conditions, such as cholesterol depletion. Furthermore, we have identified a list of high confidence proteins, and enabled searching restricted to this list of likely bona fide lipid raft proteins. Given the apparent biological importance of lipid rafts and their associated proteins, this database constitutes a key resource for the scientific community. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. High-throughput landslide modelling using computational grids

    Science.gov (United States)

    Wallace, M.; Metson, S.; Holcombe, L.; Anderson, M.; Newbold, D.; Brook, N.

    2012-04-01

    Landslides are an increasing problem in developing countries. Multiple landslides can be triggered by heavy rainfall resulting in loss of life, homes and critical infrastructure. Through computer simulation of individual slopes it is possible to predict the causes, timing and magnitude of landslides and estimate the potential physical impact. Geographical scientists at the University of Bristol have developed software that integrates a physically-based slope hydrology and stability model (CHASM) with an econometric model (QUESTA) in order to predict landslide risk over time. These models allow multiple scenarios to be evaluated for each slope, accounting for data uncertainties, different engineering interventions, risk management approaches and rainfall patterns. Individual scenarios can be computationally intensive; however, each scenario is independent and so multiple scenarios can be executed in parallel. As more simulations are carried out the overhead involved in managing input and output data becomes significant. This is a greater problem if multiple slopes are considered concurrently, as is required both for landslide research and for effective disaster planning at national levels. There are two critical factors in this context: generated data volumes can be in the order of tens of terabytes, and greater numbers of simulations result in long total runtimes. Users of such models, in both the research community and in developing countries, need to develop a means for handling the generation and submission of landslide modelling experiments, and the storage and analysis of the resulting datasets. Additionally, governments in developing countries typically lack the necessary computing resources and infrastructure. Consequently, knowledge that could be gained by aggregating simulation results from many different scenarios across many different slopes remains hidden within the data.
To address these data and workload management issues, University of Bristol particle
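The embarrassing parallelism noted above (every scenario is independent) can be sketched with a worker pool. Here `run_scenario` is a hypothetical stand-in for a CHASM/QUESTA run, and a thread pool is used for simplicity; a real deployment would use process pools or grid middleware, and each worker would read and write its own datasets.

```python
from concurrent.futures import ThreadPoolExecutor
import itertools

def run_scenario(params):
    """Hypothetical stand-in for one CHASM/QUESTA slope-stability run:
    returns a toy 'factor of safety' for a slope under a rainfall scenario."""
    slope_id, rainfall_mm = params
    factor_of_safety = 1.5 - 0.002 * rainfall_mm  # toy relationship, not CHASM
    return slope_id, rainfall_mm, factor_of_safety

slopes = ["slope-A", "slope-B", "slope-C"]
rainfalls = [50, 100, 200]  # mm/day rainfall scenarios
scenarios = list(itertools.product(slopes, rainfalls))

# each scenario is independent, so the pool can run them concurrently
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_scenario, scenarios))

failures = [r for r in results if r[2] < 1.2]
print(f"{len(results)} scenarios run, {len(failures)} predict instability")
```

The cross-scenario aggregation step at the end is precisely where the data-management burden described above appears once thousands of slopes and scenarios are involved.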

  4. High throughput experimentation for the discovery of new catalysts

    International Nuclear Information System (INIS)

    Thomson, S.; Hoffmann, C.; Johann, T.; Wolf, A.; Schmidt, H.-W.; Farrusseng, D.; Schueth, F.

    2002-01-01

    Full text: The use of combinatorial chemistry to obtain new materials has been developed extensively by the pharmaceutical and biochemical industries, but such approaches have been slow to impact the field of heterogeneous catalysis. The reasons lie in the difficulties associated with the synthesis, characterisation and determination of catalytic properties of such materials. In many synthetic and catalytic reactions, the conditions used are difficult to emulate using High Throughput Experimentation (HTE). Furthermore, the ability to screen these catalysts simultaneously in real time requires the development and/or modification of characterisation methods. Clearly, there is a need for both high throughput synthesis and screening of new and novel reactions, and we describe several new concepts that help to achieve these goals. Although such problems have impeded the development of combinatorial catalysis, the fact remains that many highly attractive processes still exist for which no suitable catalysts have been developed. The ability to decrease the time needed to evaluate catalysts is therefore essential, and this makes the use of high throughput techniques highly desirable. In this presentation we will describe the synthesis, catalytic testing, and novel screening methods developed at the Max Planck Institute. Automated synthesis procedures, performed with a modified Gilson pipette robot, will be described, as will the development of two fixed bed reactors (16 and 49 samples) and two three-phase reactors (25 and 29 samples) for catalytic testing. We will also present new techniques for the characterisation of catalysts and catalytic products using standard IR microscopy and infrared focal plane array detection, respectively

  5. Proteomics and Metabolomics: two emerging areas for legume improvement

    Directory of Open Access Journals (Sweden)

    Abirami eRamalingam

    2015-12-01

    Full Text Available The crop legumes such as chickpea, common bean, cowpea, peanut, pigeonpea, soybean, etc. are important sources of nutrition and contribute a significant amount of biological nitrogen fixation (>20 million tons of fixed nitrogen) in agriculture. However, the production of legumes is constrained by abiotic and biotic stresses. It is therefore imperative to understand the molecular mechanisms of plant responses to different stresses and identify key candidate genes regulating tolerance that can be deployed in breeding programs. The information obtained from transcriptomics has facilitated the identification of candidate genes for the trait of interest and their utilization in crop breeding programs to improve stress tolerance. However, the mechanisms of stress tolerance are complex due to the influence of multiple genes and post-transcriptional regulation. Furthermore, stress conditions greatly affect gene expression, which in turn causes modifications in the composition of plant proteomes and metabolomes. Therefore, functional genomics involving various proteomics and metabolomics approaches has been indispensable for understanding plant stress tolerance. These approaches have also been found useful for unravelling different pathways related to plant and seed development as well as symbiosis. Proteome and metabolome profiling using high-throughput systems has been extensively applied in the model legume species Medicago truncatula and Lotus japonicus, as well as in the model crop legume, soybean, to examine stress signalling pathways, cellular and developmental processes and nodule symbiosis. Moreover, the availability of protein reference maps as well as proteomics and metabolomics databases greatly supports research and understanding of various biological processes in legumes. Protein-protein interaction techniques, particularly the yeast two-hybrid system, have been advantageous for studying symbiosis and stress signalling in legumes. In

  6. Quantitative proteomic analysis for high-throughput screening of differential glycoproteins in hepatocellular carcinoma serum

    International Nuclear Information System (INIS)

    Gao, Hua-Jun; Chen, Ya-Jing; Zuo, Duo; Xiao, Ming-Ming; Li, Ying; Guo, Hua; Zhang, Ning; Chen, Rui-Bing

    2015-01-01

    Hepatocellular carcinoma (HCC) is a leading cause of cancer-related deaths. Novel serum biomarkers are required to increase the sensitivity and specificity of serum screening for early HCC diagnosis. This study employed a quantitative proteomic strategy to analyze the differential expression of serum glycoproteins between HCC and normal control serum samples. Lectin affinity chromatography (LAC) was used to enrich glycoproteins from the serum samples. Quantitative mass spectrometric analysis combined with stable isotope dimethyl labeling and 2D liquid chromatography (LC) separations were performed to examine the differential levels of the detected proteins between HCC and control serum samples. Western blot was used to analyze the differential expression levels of the three serum proteins. A total of 2,280 protein groups were identified in the serum samples from HCC patients by using the 2D LC-MS/MS method. Up to 36 proteins were up-regulated in the HCC serum, whereas 19 proteins were down-regulated. Three differential glycoproteins, namely, fibrinogen gamma chain (FGG), FOS-like antigen 2 (FOSL2), and α-1,6-mannosylglycoprotein 6-β-N-acetylglucosaminyltransferase B (MGAT5B) were validated by Western blot. All these three proteins were up-regulated in the HCC serum samples. A quantitative glycoproteomic method was established and proven useful to determine potential novel biomarkers for HCC
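A minimal sketch of how ratio-based differential calls are made from a light (control) versus heavy (disease) dimethyl-labeling experiment. The intensities and the 1.5-fold cutoff below are illustrative assumptions, not values or thresholds from the study.

```python
import math

def differential_call(light, heavy, fold_cutoff=1.5):
    """Classify a protein from summed light (control) and heavy (HCC)
    channel intensities; the 1.5-fold cutoff is an assumed threshold."""
    ratio = heavy / light
    log2_ratio = math.log2(ratio)
    if ratio >= fold_cutoff:
        return "up", log2_ratio
    if ratio <= 1.0 / fold_cutoff:
        return "down", log2_ratio
    return "unchanged", log2_ratio

# hypothetical channel intensities, not data from the paper
proteins = {"FGG": (1.0e6, 2.4e6), "FOSL2": (5.0e5, 9.0e5), "APOA1": (2.0e6, 1.0e6)}
for name, (light, heavy) in proteins.items():
    call, lr = differential_call(light, heavy)
    print(f"{name}: log2(H/L) = {lr:+.2f} -> {call}")
```

In a real workflow the ratios would first be normalized across the 2D LC fractions and aggregated over all peptides of a protein group before any call is made.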

  7. Modeling Steroidogenesis Disruption Using High-Throughput ...

    Science.gov (United States)

    Environmental chemicals can elicit endocrine disruption by altering steroid hormone biosynthesis and metabolism (steroidogenesis) causing adverse reproductive and developmental effects. Historically, a lack of assays resulted in few chemicals having been evaluated for effects on steroidogenesis. The steroidogenic pathway is a series of hydroxylation and dehydrogenation steps carried out by CYP450 and hydroxysteroid dehydrogenase enzymes, yet the only enzyme in the pathway for which a high-throughput screening (HTS) assay has been developed is aromatase (CYP19A1), responsible for the aromatization of androgens to estrogens. Recently, the ToxCast HTS program adapted the OECD validated H295R steroidogenesis assay using human adrenocortical carcinoma cells into a high-throughput model to quantitatively assess the concentration-dependent (0.003-100 µM) effects of chemicals on 10 steroid hormones including progestagens, androgens, estrogens and glucocorticoids. These results, in combination with two CYP19A1 inhibition assays, comprise a large dataset amenable to clustering approaches supporting the identification and characterization of putative mechanisms of action (pMOA) for steroidogenesis disruption. In total, 514 chemicals were tested in all CYP19A1 and steroidogenesis assays. 216 chemicals were identified as CYP19A1 inhibitors in at least one CYP19A1 assay. 208 of these chemicals also altered hormone levels in the H295R assay, suggesting 96% sensitivity in the

  8. Towards low-delay and high-throughput cognitive radio vehicular networks

    Directory of Open Access Journals (Sweden)

    Nada Elgaml

    2017-12-01

    Full Text Available Cognitive Radio Vehicular Ad-hoc Networks (CR-VANETs exploit cognitive radios to allow vehicles to access the unused channels in their radio environment. Thus, CR-VANETs do not only suffer the traditional CR problems, especially spectrum sensing, but also suffer new challenges due to the highly dynamic nature of VANETs. In this paper, we present a low-delay and high-throughput radio environment assessment scheme for CR-VANETs that can be easily incorporated with the IEEE 802.11p standard developed for VANETs. Simulation results show that the proposed scheme significantly reduces the time to get the radio environment map and increases the CR-VANET throughput.

  9. High throughput 16S rRNA gene amplicon sequencing

    DEFF Research Database (Denmark)

    Nierychlo, Marta; Larsen, Poul; Jørgensen, Mads Koustrup

    16S rRNA gene amplicon sequencing has been developed over the past few years and is now ready to use for more comprehensive studies related to plant operation and optimization thanks to short analysis time, low cost, high throughput, and high taxonomic resolution. In this study we show how 16S rRNA gene amplicon sequencing can be used to reveal factors of importance for the operation of full-scale nutrient removal plants related to settling problems and floc properties. Using optimized DNA extraction protocols, indexed primers and our in-house Illumina platform, we prepared multiple samples...... be correlated to the presence of the species that are regarded as “strong” and “weak” floc formers. In conclusion, 16S rRNA gene amplicon sequencing provides a high throughput approach for a rapid and cheap community profiling of activated sludge that in combination with multivariate statistics can be used
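Before any multivariate statistics, community profiling of this kind reduces to converting per-taxon read counts into relative abundances. A minimal sketch with made-up counts (taxa chosen as typical activated-sludge genera, purely for illustration):

```python
def relative_abundance(counts):
    """Convert raw amplicon read counts per taxon into relative abundances (%)."""
    total = sum(counts.values())
    return {taxon: 100.0 * n / total for taxon, n in counts.items()}

# hypothetical read counts for one activated-sludge sample
sample = {"Candidatus Microthrix": 120, "Zoogloea": 300, "Tetrasphaera": 580}
profile = relative_abundance(sample)
dominant = max(profile, key=profile.get)
print(dominant, round(profile[dominant], 1))
```

Profiles like this, computed per sample, form the abundance matrix that is then fed into ordination or correlation analysis against plant-operation variables such as settling behaviour.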

  10. Automated degenerate PCR primer design for high-throughput sequencing improves efficiency of viral sequencing

    Directory of Open Access Journals (Sweden)

    Li Kelvin

    2012-11-01

    Full Text Available Abstract Background In a high-throughput environment, to PCR amplify and sequence a large set of viral isolates from populations that are potentially heterogeneous and continuously evolving, the use of degenerate PCR primers is an important strategy. Degenerate primers allow for the PCR amplification of a wider range of viral isolates with only one set of pre-mixed primers, thus increasing amplification success rates and minimizing the necessity for genome finishing activities. To successfully select a large set of degenerate PCR primers necessary to tile across an entire viral genome and maximize their success, this process is best performed computationally. Results We have developed a fully automated degenerate PCR primer design system that plays a key role in the J. Craig Venter Institute’s (JCVI high-throughput viral sequencing pipeline. A consensus viral genome, or a set of consensus segment sequences in the case of a segmented virus, is specified using IUPAC ambiguity codes in the consensus template sequence to represent the allelic diversity of the target population. PCR primer pairs are then selected computationally to produce a minimal amplicon set capable of tiling across the full length of the specified target region. As part of the tiling process, primer pairs are computationally screened to meet the criteria for successful PCR with one of two described amplification protocols. The actual sequencing success rates for designed primers for measles virus, mumps virus, human parainfluenza virus 1 and 3, human respiratory syncytial virus A and B and human metapneumovirus are described, where >90% of designed primer pairs were able to consistently successfully amplify >75% of the isolates. Conclusions Augmenting our previously developed and published JCVI Primer Design Pipeline, we achieved similarly high sequencing success rates with only minor software modifications. The recommended methodology for the construction of the consensus
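The IUPAC ambiguity codes mentioned above determine how many concrete oligonucleotides a single degenerate primer represents. A minimal sketch of degeneracy computation and primer expansion (an illustration of the concept, not the JCVI pipeline itself):

```python
# IUPAC nucleotide ambiguity codes and the bases each represents
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def degeneracy(primer):
    """Number of distinct plain-base sequences a degenerate primer encodes."""
    product = 1
    for base in primer.upper():
        product *= len(IUPAC[base])
    return product

def expand(primer):
    """Enumerate every concrete sequence for a degenerate primer."""
    seqs = [""]
    for base in primer.upper():
        seqs = [s + b for s in seqs for b in IUPAC[base]]
    return seqs

print(degeneracy("ATGYR"))  # 2 choices for Y x 2 for R = 4
print(expand("AY"))         # ['AC', 'AT']
```

A primer-design pipeline would cap this degeneracy (and check melting temperature and dimer formation) for each candidate pair while tiling amplicons across the consensus genome.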

  11. Reverse Phase Protein Arrays for High-throughput Toxicity Screening

    DEFF Research Database (Denmark)

    Pedersen, Marlene Lemvig; Block, Ines; List, Markus

    High-throughput screening is extensively applied for identification of drug targets and drug discovery, and recently it has found entry into toxicity testing. Reverse phase protein arrays (RPPAs) are widely used for quantification of protein markers. We reasoned that RPPAs also can be utilized...... beneficially in automated high-throughput toxicity testing. An advantage of using RPPAs is that, in addition to the baseline toxicity readout, they allow testing of multiple markers of toxicity, such as inflammatory responses, which do not necessarily culminate in cell death. We used transfection of siRNAs with known killing effects as a model system to demonstrate that RPPA-based protein quantification can serve as a substitute readout of cell viability, hereby reliably reflecting toxicity. In terms of automation, cell exposure, protein harvest, serial dilution and sample reformatting were performed using

  12. Use of a Fluorometric Imaging Plate Reader in high-throughput screening

    Science.gov (United States)

    Groebe, Duncan R.; Gopalakrishnan, Sujatha; Hahn, Holly; Warrior, Usha; Traphagen, Linda; Burns, David J.

    1999-04-01

    High-throughput screening (HTS) efforts at Abbott Laboratories have been greatly facilitated by the use of a Fluorometric Imaging Plate Reader (FLIPR). The FLIPR consists of an incubated cabinet with an integrated 96-channel pipettor and fluorometer. An argon laser is used to excite fluorophores in a 96-well microtiter plate, and the emitted fluorescence is imaged by a cooled CCD camera. The image data is downloaded from the camera and processed to average the signal from each well of the microtiter plate for each time point. The data is presented in real time on the computer screen, facilitating interpretation and trouble-shooting. In addition to fluorescence, the camera can also detect luminescence from firefly luciferase.

  13. Development of automatic image analysis methods for high-throughput and high-content screening

    NARCIS (Netherlands)

    Di, Zi

    2013-01-01

    This thesis focuses on the development of image analysis methods for ultra-high content analysis of high-throughput screens in which cellular phenotype responses to various genetic or chemical perturbations are under investigation. Our primary goal is to deliver efficient and robust image analysis

  14. Quantitative in vitro-to-in vivo extrapolation in a high-throughput environment

    International Nuclear Information System (INIS)

    Wetmore, Barbara A.

    2015-01-01

    High-throughput in vitro toxicity screening provides an efficient way to identify potential biological targets for environmental and industrial chemicals while conserving limited testing resources. However, reliance on the nominal chemical concentrations in these in vitro assays as an indicator of bioactivity may misrepresent potential in vivo effects of these chemicals due to differences in clearance, protein binding, bioavailability, and other pharmacokinetic factors. Development of high-throughput in vitro hepatic clearance and protein binding assays and refinement of quantitative in vitro-to-in vivo extrapolation (QIVIVE) methods have provided key tools to predict xenobiotic steady state pharmacokinetics. Using a process known as reverse dosimetry, knowledge of the chemical steady state behavior can be incorporated with HTS data to determine the external in vivo oral exposure needed to achieve internal blood concentrations equivalent to those eliciting bioactivity in the assays. These daily oral doses, known as oral equivalents, can be compared to chronic human exposure estimates to assess whether in vitro bioactivity would be expected at the dose-equivalent level of human exposure. This review will describe the use of QIVIVE methods in a high-throughput environment and the promise they hold in shaping chemical testing priorities and, potentially, high-throughput risk assessment strategies
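The reverse dosimetry step described above can be sketched with a very simplified one-compartment steady-state model that combines renal clearance (GFR times fraction unbound) with well-stirred hepatic clearance. All parameter values, function names, and the linear dose-to-Css scaling below are simplifying assumptions for illustration, not the published QIVIVE implementation.

```python
def steady_state_css(dose_mg_per_kg_day, fub, cl_int_lph, gfr_lph=6.7,
                     liver_flow_lph=90.0, body_weight_kg=70.0):
    """Simplified steady-state plasma concentration (mg/L) for a constant
    oral dose: renal clearance = GFR * fraction unbound; hepatic clearance
    from the well-stirred liver model. Defaults are illustrative adult values."""
    cl_hep = (liver_flow_lph * fub * cl_int_lph) / (liver_flow_lph + fub * cl_int_lph)
    cl_total_l_per_day = 24.0 * (gfr_lph * fub + cl_hep)
    dose_mg_per_day = dose_mg_per_kg_day * body_weight_kg
    return dose_mg_per_day / cl_total_l_per_day

def oral_equivalent(ac50_uM, mw_g_per_mol, fub, cl_int_lph):
    """Reverse dosimetry: oral dose (mg/kg/day) whose steady-state Css equals
    the in vitro AC50, assuming Css scales linearly with dose."""
    ac50_mg_per_l = ac50_uM * mw_g_per_mol / 1000.0
    css_per_unit_dose = steady_state_css(1.0, fub, cl_int_lph)
    return ac50_mg_per_l / css_per_unit_dose

# hypothetical chemical: AC50 = 3 uM, MW = 250 g/mol, fub and CLint assumed
dose = oral_equivalent(ac50_uM=3.0, mw_g_per_mol=250.0, fub=0.1, cl_int_lph=10.0)
print(f"oral equivalent ~ {dose:.2f} mg/kg/day")
```

The resulting oral equivalent would then be compared against a chronic human exposure estimate to judge whether the in vitro bioactivity is relevant at realistic exposures.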

  15. Using high throughput experimental data and in silico models to discover alternatives to toxic chromate corrosion inhibitors

    International Nuclear Information System (INIS)

    Winkler, D.A.; Breedon, M.; White, P.; Hughes, A.E.; Sapper, E.D.; Cole, I.

    2016-01-01

    Highlights: • We screened a large library of organic compounds as replacements for toxic chromates. • High throughput automated corrosion testing was used to assess inhibitor performance. • Robust, predictive machine learning models of corrosion inhibition were developed. • Models indicated molecular features contributing to performance of organic inhibitors. • We also showed that quantum chemistry descriptors do not correlate with performance. - Abstract: Restrictions on the use of toxic chromate-based corrosion inhibitors have created important issues for the aerospace and other industries. Benign alternatives that offer similar or superior performance are needed. We used high throughput experiments to assess 100 small organic molecules as potential inhibitors of corrosion in aerospace aluminium alloys AA2024 and AA7075. We generated robust, predictive, quantitative computational models of inhibitor efficiency at two pH values using these data. The models identified molecular features of inhibitor molecules that had the greatest impact on corrosion inhibition. Models can be used to discover better corrosion inhibitors by screening libraries of organic compounds for candidates with high corrosion inhibition.
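As a toy stand-in for the machine-learning models described, a single-descriptor least-squares fit illustrates the basic idea of predicting inhibition efficiency from molecular features. The descriptor values and efficiencies below are hypothetical, and the real models used many descriptors with nonlinear learners.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b (single-descriptor stand-in
    for the multi-descriptor machine-learning models in the study)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# hypothetical data: descriptor value vs measured inhibition efficiency (%)
descriptor = [0.5, 1.0, 1.5, 2.0, 2.5]
efficiency = [22.0, 41.0, 58.0, 81.0, 98.0]
slope, intercept = fit_line(descriptor, efficiency)
predicted = slope * 2.2 + intercept
print(f"predicted efficiency at descriptor=2.2: {predicted:.1f}%")
```

A trained model like this is what enables virtual screening: computing descriptors for an untested library of organic compounds and ranking candidates by predicted inhibition before any corrosion testing.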

  16. High-throughput gated photon counter with two detection windows programmable down to 70 ps width

    Energy Technology Data Exchange (ETDEWEB)

    Boso, Gianluca; Tosi, Alberto, E-mail: alberto.tosi@polimi.it; Zappa, Franco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano (Italy); Mora, Alberto Dalla [Dipartimento di Fisica, Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano (Italy)

    2014-01-15

    We present the design and characterization of a high-throughput gated photon counter able to count electrical pulses occurring within two well-defined and programmable detection windows. We extensively characterized and validated this instrument up to 100 Mcounts/s and with detection window width down to 70 ps. This instrument is suitable for many applications and proves to be a cost-effective and compact alternative to time-correlated single-photon counting equipment, thanks to its easy configurability, user-friendly interface, and fully adjustable settings via a Universal Serial Bus (USB) link to a remote computer.

  17. High-throughput gated photon counter with two detection windows programmable down to 70 ps width

    International Nuclear Information System (INIS)

    Boso, Gianluca; Tosi, Alberto; Zappa, Franco; Mora, Alberto Dalla

    2014-01-01

    We present the design and characterization of a high-throughput gated photon counter able to count electrical pulses occurring within two well-defined and programmable detection windows. We extensively characterized and validated this instrument up to 100 Mcounts/s and with detection window width down to 70 ps. This instrument is suitable for many applications and proves to be a cost-effective and compact alternative to time-correlated single-photon counting equipment, thanks to its easy configurability, user-friendly interface, and fully adjustable settings via a Universal Serial Bus (USB) link to a remote computer
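The two-window gating logic described above can be modeled in a few lines: a pulse is counted in a window if its arrival time falls within that window's programmable start and width. The timestamps and window settings below are illustrative, not the instrument's firmware behavior.

```python
def gated_count(timestamps_ps, window_a, window_b):
    """Count pulse arrival times (ps) falling inside each of two detection
    windows, each given as a (start, width) tuple -- a toy model of a
    two-window gated photon counter."""
    counts = [0, 0]
    for t in timestamps_ps:
        for i, (start, width) in enumerate((window_a, window_b)):
            if start <= t < start + width:
                counts[i] += 1
    return tuple(counts)

# hypothetical pulse arrival times in picoseconds
pulses = [10, 95, 130, 160, 205, 400]
# window A: 70 ps wide starting at 90 ps; window B: 70 ps wide starting at 150 ps
a, b = gated_count(pulses, (90, 70), (150, 70))
print(a, b)
```

The hardware does this discrimination in real time at up to 100 Mcounts/s; the point of the sketch is only the window semantics, with the 70 ps width matching the narrowest programmable window reported.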

  18. Quality control methodology for high-throughput protein-protein interaction screening.

    Science.gov (United States)

    Vazquez, Alexei; Rual, Jean-François; Venkatesan, Kavitha

    2011-01-01

    Protein-protein interactions are key to many aspects of the cell, including its cytoskeletal structure, the signaling processes in which it is involved, or its metabolism. Failure to form protein complexes or signaling cascades may sometimes translate into pathologic conditions such as cancer or neurodegenerative diseases. The set of all protein interactions between the proteins encoded by an organism constitutes its protein interaction network, representing a scaffold for biological function. Knowing the protein interaction network of an organism, combined with other sources of biological information, can unravel fundamental biological circuits and may help better understand the molecular basics of human diseases. The protein interaction network of an organism can be mapped by combining data obtained from both low-throughput screens, i.e., "one gene at a time" experiments and high-throughput screens, i.e., screens designed to interrogate large sets of proteins at once. In either case, quality controls are required to deal with the inherent imperfect nature of experimental assays. In this chapter, we discuss experimental and statistical methodologies to quantify error rates in high-throughput protein-protein interactions screens.
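One common quality-control scheme for such screens is to benchmark the assay against a positive reference set (PRS) of well-documented interactions and a random reference set (RRS) of presumed non-interacting pairs. The counts below are hypothetical; real analyses also propagate sampling uncertainty on these rates.

```python
def screen_error_rates(prs_hits, prs_total, rrs_hits, rrs_total):
    """Estimate assay sensitivity from a positive reference set (PRS) and
    the background (false-positive) hit rate from a random reference set
    (RRS) -- a simplified version of common PPI quality-control practice."""
    sensitivity = prs_hits / prs_total
    background_rate = rrs_hits / rrs_total
    return sensitivity, background_rate

# hypothetical benchmark counts for a yeast two-hybrid style screen
sens, bg = screen_error_rates(prs_hits=40, prs_total=100, rrs_hits=2, rrs_total=100)
print(f"sensitivity={sens:.0%}, background hit rate={bg:.0%}")
```

With these two rates in hand, the expected number of false positives among the screen's hits can be estimated and used to score the confidence of each reported interaction.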

  19. High-throughput screening to enhance oncolytic virus immunotherapy

    Directory of Open Access Journals (Sweden)

    Allan KJ

    2016-04-01

    Full Text Available KJ Allan,1,2 David F Stojdl,1–3 SL Swift1 1Children’s Hospital of Eastern Ontario (CHEO Research Institute, 2Department of Biology, Microbiology and Immunology, 3Department of Pediatrics, University of Ottawa, Ottawa, ON, Canada Abstract: High-throughput screens can rapidly scan and capture large amounts of information across multiple biological parameters. Although many screens have been designed to uncover potential new therapeutic targets capable of crippling viruses that cause disease, there have been relatively few directed at improving the efficacy of viruses that are used to treat disease. Oncolytic viruses (OVs are biotherapeutic agents with an inherent specificity for treating malignant disease. Certain OV platforms – including those based on herpes simplex virus, reovirus, and vaccinia virus – have shown success against solid tumors in advanced clinical trials. Yet, many of these OVs have only undergone minimal engineering to solidify tumor specificity, with few extra modifications to manipulate additional factors. Several aspects of the interaction between an OV and a tumor-bearing host have clear value as targets to improve therapeutic outcomes. At the virus level, these include delivery to the tumor, infectivity, productivity, oncolysis, bystander killing, spread, and persistence. At the host level, these include engaging the immune system and manipulating the tumor microenvironment. Here, we review the chemical- and genome-based high-throughput screens that have been performed to manipulate such parameters during OV infection and analyze their impact on therapeutic efficacy. We further explore emerging themes that represent key areas of focus for future research. Keywords: oncolytic, virus, screen, high-throughput, cancer, chemical, genomic, immunotherapy

  20. Current status, new frontiers and challenges in radiation biodosimetry using cytogenetic, transcriptomic and proteomic technologies

    Energy Technology Data Exchange (ETDEWEB)

    Fenech, Michael, E-mail: michael.fenech@csiro.au [Commonwealth Scientific and Industrial Research Organisation, Gate 13 Kintore Avenue, Adelaide, SA 5000 (Australia)

    2011-09-15

    Biodosimetric methods for determining exposure dose in individuals following a radiation accident are important for the health management of the exposed cohort and prioritisation of high dose exposure cases to receive emergency medical treatment. This brief review provides a succinct outline of (i) the current status of standard cytogenetic methods used in radiation biodosimetry; (ii) development of high-throughput systems for current standard cytogenetic methods; (iii) emerging minimally invasive methods; (iv) the impact of nutrition and genotype on observed dose-response relationships and (v) new frontiers in biodosimetry using molecular biology techniques such as transcriptomics and proteomics.

  1. High throughput electrophysiology: new perspectives for ion channel drug discovery

    DEFF Research Database (Denmark)

    Willumsen, Niels J; Bech, Morten; Olesen, Søren-Peter

    2003-01-01

    Proper function of ion channels is crucial for all living cells. Ion channel dysfunction may lead to a number of diseases, so-called channelopathies, and a number of common diseases, including epilepsy, arrhythmia, and type II diabetes, are primarily treated by drugs that modulate ion channels....... A cornerstone in current drug discovery is high throughput screening assays which allow examination of the activity of specific ion channels though only to a limited extent. Conventional patch clamp remains the sole technique with sufficiently high time resolution and sensitivity required for precise and direct...... characterization of ion channel properties. However, patch clamp is a slow, labor-intensive, and thus expensive, technique. New techniques combining the reliability and high information content of patch clamping with the virtues of high throughput philosophy are emerging and predicted to make a number of ion...

  2. High throughput screening of starch structures using carbohydrate microarrays

    DEFF Research Database (Denmark)

    Tanackovic, Vanja; Rydahl, Maja Gro; Pedersen, Henriette Lodberg

    2016-01-01

    In this study we introduce the starch-recognising carbohydrate binding module family 20 (CBM20) from Aspergillus niger for screening biological variations in starch molecular structure using high throughput carbohydrate microarray technology. Defined linear, branched and phosphorylated...

  3. Achieving high data throughput in research networks

    International Nuclear Information System (INIS)

    Matthews, W.; Cottrell, L.

    2001-01-01

    After less than a year of operation, the BaBar experiment at SLAC has collected almost 100 million particle collision events in a database approaching 165 TB. Around 20 TB of data has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, and around 40 TB of simulated data has been imported from the Lawrence Livermore National Laboratory (LLNL). BaBar collaborators plan to double data collection each year and export a third of the data to IN2P3. So within a few years the SLAC OC3 (155 Mbps) connection will be fully utilized by file transfer to France alone. Upgrades to infrastructure are essential, and a detailed understanding of performance issues and the requirements for reliable high throughput transfers is critical. In this talk, results from active and passive monitoring and direct measurements of throughput will be reviewed. Methods for achieving the ambitious requirements will be discussed.
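    The arithmetic behind the link-saturation claim above is easy to check. A minimal sketch, using the volumes and link speed from the abstract; the efficiency factor is a hypothetical knob, since real transfers rarely saturate a link:

```python
def transfer_days(terabytes: float, link_mbps: float, efficiency: float = 1.0) -> float:
    """Days needed to move `terabytes` over a `link_mbps` link at the
    given utilisation efficiency (1.0 = fully saturated)."""
    bits = terabytes * 1e12 * 8                    # TB -> bits (decimal units)
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86400.0

# A fully saturated OC3 (155 Mbps) needs about 12 days to move 20 TB,
# so doubling data volumes each year quickly exhausts the link.
print(round(transfer_days(20, 155), 1))
```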

  4. Achieving High Data Throughput in Research Networks

    International Nuclear Information System (INIS)

    Matthews, W

    2004-01-01

    After less than a year of operation, the BaBar experiment at SLAC has collected almost 100 million particle collision events in a database approaching 165 TB. Around 20 TB of data has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, and around 40 TB of simulated data has been imported from the Lawrence Livermore National Laboratory (LLNL). BaBar collaborators plan to double data collection each year and export a third of the data to IN2P3. So within a few years the SLAC OC3 (155 Mbps) connection will be fully utilized by file transfer to France alone. Upgrades to infrastructure are essential, and a detailed understanding of performance issues and the requirements for reliable high throughput transfers is critical. In this talk, results from active and passive monitoring and direct measurements of throughput will be reviewed. Methods for achieving the ambitious requirements will be discussed.

  5. Moving Toward Integrating Gene Expression Profiling into High-throughput Testing: A Gene Expression Biomarker Accurately Predicts Estrogen Receptor α Modulation in a Microarray Compendium

    Science.gov (United States)

    Microarray profiling of chemical-induced effects is being increasingly used in medium and high-throughput formats. In this study, we describe computational methods to identify molecular targets from whole-genome microarray data using as an example the estrogen receptor α (ERα), ...

  6. A priori Considerations When Conducting High-Throughput Amplicon-Based Sequence Analysis

    Directory of Open Access Journals (Sweden)

    Aditi Sengupta

    2016-03-01

    Full Text Available Amplicon-based sequencing strategies that include 16S rRNA and functional genes, alongside “meta-omics” analyses of communities of microorganisms, have allowed researchers to pose questions and find answers to “who” is present in the environment and “what” they are doing. Next-generation sequencing approaches that aid microbial ecology studies of agricultural systems are fast gaining popularity among agronomy, crop, soil, and environmental science researchers. Given the rapid development of these high-throughput sequencing techniques, researchers with no prior experience will desire information about the best practices that can be used before actually starting high-throughput amplicon-based sequence analyses. We have outlined items that need to be carefully considered in experimental design, sampling, basic bioinformatics, sequencing of mock communities and negative controls, acquisition of metadata, and standardization of reaction conditions as per experimental requirements. Not all considerations mentioned here may pertain to a particular study. The overall goal is to inform researchers about considerations that must be taken into account when conducting high-throughput microbial DNA sequencing and sequence analysis.

  7. Proteomics of Neisseria gonorrhoeae: the treasure hunt for countermeasures against an old disease

    Directory of Open Access Journals (Sweden)

    Benjamin I Baarda

    2015-10-01

    Full Text Available Neisseria gonorrhoeae is an exquisitely adapted, strictly human pathogen and the causative agent of the sexually transmitted infection gonorrhea. This ancient human disease remains a serious problem, occurring at high incidence globally and having a major impact on reproductive and neonatal health. N. gonorrhoeae is rapidly evolving into a superbug and no effective vaccine exists to prevent gonococcal infections. Untreated or inadequately treated gonorrhea can lead to severe sequelae, including pelvic inflammatory disease and infertility in women, epididymitis in men, and sight-threatening conjunctivitis in infants born to infected mothers. Therefore, there is an immediate need for accelerated research toward the identification of molecular targets for development of drugs with new mechanisms of action and preventive vaccine(s). Global proteomic approaches are ideally suited to guide these studies. Recent quantitative proteomics (SILAC, iTRAQ, and ICAT) have illuminated the pathways utilized by N. gonorrhoeae to adapt to different lifestyles and micro-ecological niches within the host, while comparative 2D SDS-PAGE analysis has been used to elucidate spectinomycin resistance mechanisms. Further, high-throughput examinations of cell envelopes and naturally released membrane vesicles have unveiled the ubiquitous and differentially expressed proteins between temporally and geographically diverse N. gonorrhoeae isolates. This review will focus on these different approaches, emphasizing the role of proteomics in the search for vaccine candidates. Although our knowledge of N. gonorrhoeae has been expanded, still far less is known about this bacterium than the closely related N. meningitidis, where genomics- and proteomics-driven studies have led to the successful development of vaccines.

  8. Enzyme free cloning for high throughput gene cloning and expression

    NARCIS (Netherlands)

    de Jong, R.N.; Daniëls, M.; Kaptein, R.; Folkers, G.E.

    2006-01-01

    Structural and functional genomics initiatives significantly improved cloning methods over the past few years. Although recombinational cloning is highly efficient, its costs urged us to search for an alternative high throughput (HTP) cloning method. We implemented a modified Enzyme Free Cloning

  9. High-throughput design of low-activation, high-strength creep-resistant steels for nuclear-reactor applications

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Qi; Zwaag, Sybrand van der [Novel Aerospace Materials Group, Faculty of Aerospace Engineering, Delft University of Technology, Kluyverweg 1, 2629 HS, Delft (Netherlands); Xu, Wei, E-mail: xuwei@ral.neu.edu.cn [State Key Laboratory of Rolling and Automation, Northeastern University, 110819, Shenyang (China); Novel Aerospace Materials Group, Faculty of Aerospace Engineering, Delft University of Technology, Kluyverweg 1, 2629 HS, Delft (Netherlands)

    2016-02-15

    Reduced-activation ferritic/martensitic steels are prime candidate materials for structural applications in nuclear power reactors. However, their creep strength is much lower than that of creep-resistant steels developed for conventional fossil-fired power plants, as alloying elements with a high neutron activation cannot be used. To improve the creep strength while maintaining a low activation, a high-throughput computational alloy-design model, coupling thermodynamics, precipitate-coarsening kinetics and a genetic optimization algorithm, is developed. Twelve relevant alloying elements with either low or high activation are considered simultaneously. The activity levels at 0–10 years after the end of irradiation are taken as the optimization parameter. The creep-strength values (after exposure for 10 years at 650 °C) are estimated on the basis of solid-solution strengthening and precipitation hardening (taking into account precipitate coarsening). Potential alloy compositions leading to a high austenite fraction or a high percentage of undesirable second-phase particles are rejected automatically in the optimization cycle. The newly identified alloys have much higher precipitation hardening and solid-solution strengthening at the same activity level as existing reduced-activation ferritic/martensitic steels.
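    The optimization loop described above can be sketched as a small genetic algorithm with constraint rejection. The objective and feasibility test below are toy placeholders, not the paper's thermodynamic or coarsening models:

```python
# Genetic search over toy "compositions": maximise a placeholder strengthening
# score while automatically rejecting infeasible candidates, mirroring the
# rejection of high-austenite-fraction alloys described in the abstract.
import random

random.seed(0)
N = 4                          # toy composition vector: fractions of 4 solutes

def score(x):                  # placeholder strengthening objective
    return sum(xi * (1 - xi) for xi in x)

def feasible(x):               # placeholder constraint, e.g. austenite fraction
    return sum(x) <= 1.0

def random_candidate():
    while True:
        x = [random.uniform(0, 0.5) for _ in range(N)]
        if feasible(x):
            return x

def evolve(generations=50, pop_size=20, mut=0.05):
    pop = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [(ai + bi) / 2 + random.gauss(0, mut) for ai, bi in zip(a, b)]
            child = [min(max(c, 0.0), 0.5) for c in child]
            if feasible(child):                  # infeasible candidates rejected
                children.append(child)
        pop = parents + children
    return max(pop, key=score)

best = evolve()
```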

  10. High-Throughput Analysis of Enzyme Activities

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Guoxin [Iowa State Univ., Ames, IA (United States)

    2007-01-01

    High-throughput screening (HTS) techniques have been applied to many research fields. Robotic microarray printing and automated microtiter-plate handling allow HTS to be performed in both heterogeneous and homogeneous formats, with minimal sample required for each assay element. In this dissertation, new HTS techniques for enzyme activity analysis were developed. First, patterns of immobilized enzyme on nylon screen were detected by a multiplexed capillary system. The imaging resolution is limited by the outer diameter of the capillaries. In order to get finer images, capillaries with smaller outer diameters can be used to form the imaging probe. Application of capillary electrophoresis allows separation of the product from the substrate in the reaction mixture, so that the product does not need to have optical properties different from those of the substrate. UV absorption detection allows almost universal detection of organic molecules. Thus, no modifications of either the substrate or the product molecules are necessary. This technique has the potential to be used in screening of local distribution variations of specific bio-molecules in a tissue or in screening of multiple immobilized catalysts. Another high-throughput screening technique was developed by directly monitoring the light intensity of the immobilized-catalyst surface using a scientific charge-coupled device (CCD). Briefly, the surface of the enzyme microarray is focused onto a scientific CCD using an objective lens. By carefully choosing the detection wavelength, generation of product on an enzyme spot can be seen by the CCD. Analyzing the light intensity change over time on an enzyme spot can give information about the reaction rate. The same microarray can be used many times. Thus, high-throughput kinetic studies of hundreds of catalytic reactions are made possible. Finally, we studied the fluorescence emission spectra of ADP and obtained the detection limits for ADP under three different

  11. A rapid enzymatic assay for high-throughput screening of adenosine-producing strains

    Science.gov (United States)

    Dong, Huina; Zu, Xin; Zheng, Ping; Zhang, Dawei

    2015-01-01

    Adenosine is a major local regulator of tissue function and is industrially useful as a precursor for the production of medicinal nucleoside substances. High-throughput screening of adenosine overproducers is important for industrial microorganism breeding. An enzymatic assay for adenosine was developed by combining adenosine deaminase (ADA) with the indophenol method. ADA catalyzes the cleavage of adenosine to inosine and NH3; the latter can be accurately determined by the indophenol method. The assay system was optimized to deliver good performance and could tolerate the addition of inorganic salts and many nutrition components to the assay mixtures. Adenosine could be accurately determined by this assay using 96-well microplates. Spike and recovery tests showed that this assay can accurately and reproducibly determine increases in adenosine in fermentation broth without any pretreatment to remove proteins and potentially interfering low-molecular-weight molecules. This assay was also applied to high-throughput screening for high adenosine-producing strains. The high selectivity and accuracy of the ADA assay provide rapid and high-throughput analysis of adenosine in large numbers of samples. PMID:25580842
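    Quantitation in a colorimetric assay like this typically reads unknowns off a linear standard curve of absorbance versus concentration. The values below are hypothetical, and whether the authors used exactly this regression step is an assumption:

```python
# Fit A = m*c + b to (concentration, absorbance) standards by least squares,
# then invert the line to read an unknown sample off the curve.
xs = [0.0, 0.5, 1.0, 2.0, 4.0]          # adenosine standards, mM (made up)
ys = [0.02, 0.27, 0.52, 1.02, 2.02]     # hypothetical indophenol absorbances

xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)
m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
b = ybar - m * xbar

def adenosine_mM(absorbance: float) -> float:
    """Read an unknown off the standard curve A = m*c + b."""
    return (absorbance - b) / m

print(round(adenosine_mM(0.77), 2))   # → 1.5
```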

  12. eRNA: a graphic user interface-based tool optimized for large data analysis from high-throughput RNA sequencing.

    Science.gov (United States)

    Yuan, Tiezheng; Huang, Xiaoyi; Dittmar, Rachel L; Du, Meijun; Kohli, Manish; Boardman, Lisa; Thibodeau, Stephen N; Wang, Liang

    2014-03-05

    RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high-throughput sequencers. We developed a standalone tool with graphic user interface (GUI)-based analytic modules, known as eRNA. The capacity for parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module "miRNA identification" includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module "mRNA identification" includes GUIs for reference sequences, genome mapping, transcript assembling, and differential expression. The module "Target screening" provides expression profiling analyses and graphic visualization. The module "Self-testing" offers the directory setups, sample management, and a check for third-party package dependency. Integration of other GUIs including Bowtie, miRDeep2, and miRspring extends the program's functionality. eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory.

  13. High throughput nonparametric probability density estimation.

    Science.gov (United States)

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
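    The scaled quantile residual diagnostic mentioned above can be illustrated directly: push samples through a trial CDF and compare the sorted values with the known means and variances of uniform order statistics (the k-th order statistic of n uniforms is Beta(k, n+1-k)). This is an illustrative reimplementation of the idea, not the authors' code:

```python
# Residuals stay O(1) when the trial CDF is correct and blow up when it is not.
import math
import random

def scaled_quantile_residuals(sample, trial_cdf):
    """Standardised gaps between sorted F(x) values and uniform
    order-statistic means; large values flag a poor trial CDF."""
    u = sorted(trial_cdf(x) for x in sample)
    n = len(u)
    out = []
    for k, uk in enumerate(u, start=1):
        mean = k / (n + 1)                               # E[U_(k)]
        var = k * (n + 1 - k) / ((n + 1) ** 2 * (n + 2)) # Var[U_(k)]
        out.append((uk - mean) / math.sqrt(var))
    return out

random.seed(1)
data = [random.gauss(0, 1) for _ in range(500)]
std_normal_cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))

good = scaled_quantile_residuals(data, std_normal_cdf)                  # correct CDF
bad = scaled_quantile_residuals(data, lambda x: std_normal_cdf(x / 3))  # wrong scale
print(max(abs(r) for r in good), max(abs(r) for r in bad))
```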

  14. High-throughput full-automatic synchrotron-based tomographic microscopy

    International Nuclear Information System (INIS)

    Mader, Kevin; Marone, Federica; Hintermueller, Christoph; Mikuljan, Gordan; Isenegger, Andreas; Stampanoni, Marco

    2011-01-01

    At the TOMCAT (TOmographic Microscopy and Coherent rAdiology experimenTs) beamline of the Swiss Light Source, with an energy range of 8-45 keV and voxel sizes from 0.37 µm to 7.4 µm, full tomographic datasets are typically acquired in 5 to 10 min. To exploit the speed of the system and enable high-throughput studies to be performed in a fully automatic manner, a package of automation tools has been developed. The samples are automatically exchanged, aligned, moved to the correct region of interest, and scanned. This task is accomplished through the coordination of Python scripts, a robot-based sample-exchange system, sample positioning motors and a CCD camera. The tools are suited for any samples that can be mounted on a standard SEM stub, and require no specific environmental conditions. Up to 60 samples can be analyzed at a time without user intervention. The throughput of the system is dependent on resolution, energy and sample size, but rates of four samples per hour have been achieved with 0.74 µm voxel size at 17.5 keV. The maximum intervention-free scanning time is theoretically unlimited, and in practice experiments have been running unattended as long as 53 h (the average beam time allocation at TOMCAT is 48 h per user). The system is the first fully automated high-throughput tomography station: mounting samples, finding regions of interest, scanning and reconstructing can be performed without user intervention. The system also includes many features which accelerate and simplify the process of tomographic microscopy.
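    The throughput figures quoted above combine straightforwardly:

```python
def batch_hours(n_samples: int, samples_per_hour: float) -> float:
    """Unattended running time for one fully loaded sample holder."""
    return n_samples / samples_per_hour

# 60 samples at 4 samples/hour fit comfortably inside a 48 h allocation:
print(batch_hours(60, 4))   # → 15.0
```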

  15. Proteomic analysis reveals strong mitochondrial involvement in cytoplasmic male sterility of pepper (Capsicum annuum L.).

    Science.gov (United States)

    Guo, Jinju; Wang, Peng; Cheng, Qing; Sun, Limin; Wang, Hongyu; Wang, Yutong; Kao, Lina; Li, Yanan; Qiu, Tuoyu; Yang, Wencai; Shen, Huolin

    2017-09-25

    Although cytoplasmic male sterility (CMS) is widely used for developing pepper hybrids, its molecular mechanism remains unclear. In this study, we used a high-throughput proteomics method called label-free to compare protein abundance across a pepper CMS line (A-line) and its isogenic maintainer line (B-line). Data are available via ProteomeXchange with identifier PXD006104. Approximately 324 differentially abundant protein species were identified and quantified, among which 47 were up-accumulated and 140 were down-accumulated in the A-line; additionally, 75 and 62 protein species were specifically accumulated in the A-line and B-line, respectively. Protein species involved in pollen exine formation, pyruvate metabolic processes, the tricarboxylic acid cycle, the mitochondrial electron transport chain, and the oxidative stress response were observed to be differentially accumulated between the A-line and B-line, suggesting their potential roles in the regulation of pepper pollen abortion. Based on our data, we proposed a potential regulatory network for pepper CMS that unifies these processes. Artificial emasculation is a major obstacle in pepper hybrid breeding because of its high labor cost and poor seed purity. Meanwhile, the use of cytoplasmic male sterility (CMS) in hybrid systems is hampered by the long time needed to develop a male-sterile line and its isogenic restorer line. Transgenic technology is an effective and rapid method for obtaining male-sterile lines, and its wide application is of great significance for speeding up the breeding process in pepper. Although numerous studies have been conducted to identify genes related to male sterility, the molecular mechanism of cytoplasmic male sterility in pepper remains unknown.
    In this study, we used the high-throughput proteomic method called "label-free", coupled with liquid chromatography-quadrupole mass spectrometry (LC-MS/MS), to perform a novel comparison of expression profiles in a CMS pepper line

  16. High-throughput characterization for solar fuels materials discovery

    Science.gov (United States)

    Mitrovic, Slobodan; Becerra, Natalie; Cornell, Earl; Guevarra, Dan; Haber, Joel; Jin, Jian; Jones, Ryan; Kan, Kevin; Marcin, Martin; Newhouse, Paul; Soedarmadji, Edwin; Suram, Santosh; Xiang, Chengxiang; Gregoire, John; High-Throughput Experimentation Team

    2014-03-01

    In this talk I will present the status of the High-Throughput Experimentation (HTE) project of the Joint Center for Artificial Photosynthesis (JCAP). JCAP is an Energy Innovation Hub of the U.S. Department of Energy with a mandate to deliver a solar fuel generator based on an integrated photoelectrochemical cell (PEC). However, efficient and commercially viable catalysts or light absorbers for the PEC do not exist. The mission of HTE is to provide the accelerated discovery through combinatorial synthesis and rapid screening of material properties. The HTE pipeline also features high-throughput material characterization using x-ray diffraction and x-ray photoemission spectroscopy (XPS). In this talk I present the currently operating pipeline and focus on our combinatorial XPS efforts to build the largest free database of spectra from mixed-metal oxides, nitrides, sulfides and alloys. This work was performed at Joint Center for Artificial Photosynthesis, a DOE Energy Innovation Hub, supported through the Office of Science of the U.S. Department of Energy under Award No. DE-SC0004993.

  17. Fluorescent foci quantitation for high-throughput analysis

    Directory of Open Access Journals (Sweden)

    Elena Ledesma-Fernández

    2015-06-01

    Full Text Available A number of cellular proteins localize to discrete foci within cells, for example DNA repair proteins, microtubule organizing centers, P bodies or kinetochores. It is often possible to measure the fluorescence emission from tagged proteins within these foci as a surrogate for the concentration of that specific protein. We wished to develop tools that would allow quantitation of fluorescence foci intensities in high-throughput studies. As proof of principle we have examined the kinetochore, a large multi-subunit complex that is critical for the accurate segregation of chromosomes during cell division. Kinetochore perturbations lead to aneuploidy, which is a hallmark of cancer cells. Hence, understanding kinetochore homeostasis and regulation are important for a global understanding of cell division and genome integrity. The 16 budding yeast kinetochores colocalize within the nucleus to form a single focus. Here we have created a set of freely-available tools to allow high-throughput quantitation of kinetochore foci fluorescence. We use this ‘FociQuant’ tool to compare methods of kinetochore quantitation and we show proof of principle that FociQuant can be used to identify changes in kinetochore protein levels in a mutant that affects kinetochore function. This analysis can be applied to any protein that forms discrete foci in cells.
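    The core measurement, integrating fluorescence above background within a focus, can be sketched with plain NumPy on a synthetic image. FociQuant itself uses more careful background correction, so this is only illustrative:

```python
# Threshold a noisy frame and integrate the background-subtracted signal
# inside the bright region standing in for a kinetochore focus.
import numpy as np

def focus_intensity(img: np.ndarray, n_sigma: float = 3.0) -> float:
    """Integrated intensity of pixels more than n_sigma above background."""
    background = np.median(img)              # crude background estimate
    threshold = background + n_sigma * img.std()
    mask = img > threshold
    return float((img[mask] - background).sum())

# Synthetic 32x32 frame: flat background around 100 counts plus a 3x3 focus.
rng = np.random.default_rng(0)
img = rng.normal(100, 1, size=(32, 32))
img[10:13, 10:13] += 500                     # the "kinetochore" focus
print(round(focus_intensity(img)))           # ≈ 9 pixels x 500 counts
```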

  18. High-throughput literature mining to support read-across ...

    Science.gov (United States)

    Building scientific confidence in the development and evaluation of read-across remains an ongoing challenge. Approaches include establishing systematic frameworks to identify sources of uncertainty and ways to address them. One source of uncertainty is related to characterizing biological similarity. Many research efforts are underway such as structuring mechanistic data in adverse outcome pathways and investigating the utility of high throughput (HT)/high content (HC) screening data. A largely untapped resource for read-across to date is the biomedical literature. This information has the potential to support read-across by facilitating the identification of valid source analogues with similar biological and toxicological profiles as well as providing the mechanistic understanding for any prediction made. A key challenge in using biomedical literature is to convert and translate its unstructured form into a computable format that can be linked to chemical structure. We developed a novel text-mining strategy to represent literature information for read across. Keywords were used to organize literature into toxicity signatures at the chemical level. These signatures were integrated with HT in vitro data and curated chemical structures. A rule-based algorithm assessed the strength of the literature relationship, providing a mechanism to rank and visualize the signature as literature ToxPIs (LitToxPIs). LitToxPIs were developed for over 6,000 chemicals for a varie
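    The keyword-to-signature step described above can be caricatured in a few lines. The keyword list, scoring rule, and documents below are illustrative, not the authors' LitToxPI algorithm:

```python
# Tally toxicity-related keyword hits across a chemical's abstracts into a
# simple per-chemical signature that could then be ranked or visualised.
TOX_KEYWORDS = {"hepatotoxicity", "carcinogenic", "estrogenic"}

def literature_signature(abstracts):
    """Count keyword occurrences across a chemical's abstracts."""
    sig = {k: 0 for k in TOX_KEYWORDS}
    for text in abstracts:
        for word in text.lower().split():
            word = word.strip(".,;()")
            if word in sig:
                sig[word] += 1
    return sig

docs = ["Evidence of hepatotoxicity in rats.",
        "Compound shown to be estrogenic; hepatotoxicity not observed."]
sig = literature_signature(docs)
print(sorted(sig.items()))
```

    Note that bare counting ignores negation ("not observed"), which is one reason the authors pair keyword signatures with a rule-based strength assessment.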

  19. High-throughput image analysis of tumor spheroids: a user-friendly software application to measure the size of spheroids automatically and accurately.

    Science.gov (United States)

    Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y

    2014-07-08

    The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application - SpheroidSizer, which measures the major and minor axial length of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools provide the flexibility to SpheroidSizer in dealing with various types of spheroids and diverse quality images. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model
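    Once the major and minor axes are measured, the volume step reduces to a one-line formula. A common convention models the spheroid as a prolate ellipsoid whose third axis equals the minor one; whether SpheroidSizer uses exactly this formula is an assumption here:

```python
# V = pi/6 * major * minor^2 for a prolate spheroid; equal axes give a sphere.
import math

def spheroid_volume(major_um: float, minor_um: float) -> float:
    """Volume (um^3) of a prolate spheroid with the given axial lengths."""
    return math.pi / 6.0 * major_um * minor_um ** 2

# A 400 um x 300 um spheroid:
print(round(spheroid_volume(400, 300)))
```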

  20. High-throughput electrical characterization for robust overlay lithography control

    Science.gov (United States)

    Devender, Devender; Shen, Xumin; Duggan, Mark; Singh, Sunil; Rullan, Jonathan; Choo, Jae; Mehta, Sohan; Tang, Teck Jung; Reidy, Sean; Holt, Jonathan; Kim, Hyung Woo; Fox, Robert; Sohn, D. K.

    2017-03-01

    Realizing sensitive, high throughput and robust overlay measurement is a challenge in the current 14nm and upcoming advanced nodes with the transition to 300mm and upcoming 450mm semiconductor manufacturing, where slight deviations in overlay have a significant impact on reliability and yield [1]. The exponentially increasing number of critical masks in multi-patterning litho-etch, litho-etch (LELE) and subsequent LELELE semiconductor processes requires even tighter overlay specifications [2]. Here, we discuss limitations of current image- and diffraction-based overlay measurement techniques in meeting these stringent processing requirements due to sensitivity, throughput and low contrast [3]. We demonstrate a new electrical measurement based technique where resistance is measured for a macro with intentional misalignment between two layers. Overlay is quantified by a parabolic fitting model to resistance where minima and inflection points are extracted to characterize overlay control and process window, respectively. Analyses using transmission electron microscopy show good correlation between actual overlay performance and overlay obtained from fitting. Additionally, excellent correlation of overlay from electrical measurements to existing image- and diffraction-based techniques is found. We also discuss challenges of integrating an electrical measurement based approach in semiconductor manufacturing from a Back End of Line (BEOL) perspective. Our findings open up a new pathway for accessing simultaneous overlay as well as process window and margins from a robust, high throughput and electrical measurement approach.
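    The fitting step described above can be sketched directly: resistance is recorded for a set of intentionally misaligned structures, a parabola is fitted, and the overlay error is read off as the offset of the parabola's vertex. The data below are synthetic:

```python
# Fit R(x) = a*x^2 + b*x + c to resistance vs programmed offset and recover
# the overlay error from the vertex location -b/(2a).
import numpy as np

offsets_nm = np.array([-30, -20, -10, 0, 10, 20, 30], dtype=float)
true_overlay = 4.0                                   # nm, to be recovered
resistance = 100 + 0.05 * (offsets_nm - true_overlay) ** 2

a, b, c = np.polyfit(offsets_nm, resistance, 2)      # highest power first
overlay_estimate = -b / (2 * a)                      # vertex of the parabola
print(round(overlay_estimate, 2))                    # → 4.0
```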

  1. High-throughput Sequencing Based Immune Repertoire Study during Infectious Disease

    Directory of Open Access Journals (Sweden)

    Dongni Hou

    2016-08-01

    Full Text Available The selectivity of the adaptive immune response is based on the enormous diversity of T and B cell antigen-specific receptors. The immune repertoire, the collection of T and B cells with functional diversity in the circulatory system at any given time, is dynamic and reflects the essence of immune selectivity. In this article, we review recent advances in immune repertoire studies of infectious diseases, achieved by both traditional techniques and high-throughput sequencing techniques. High-throughput sequencing techniques enable the determination of complementary regions of lymphocyte receptors with unprecedented efficiency and scale. This progress in methodology enhances the understanding of immunologic changes during pathogen challenge, and also provides a basis for further development of novel diagnostic markers, immunotherapies and vaccines.

  2. Protein identification and quantification from riverbank grape, Vitis riparia: Comparing SDS-PAGE and FASP-GPF techniques for shotgun proteomic analysis.

    Science.gov (United States)

    George, Iniga S; Fennell, Anne Y; Haynes, Paul A

    2015-09-01

    Protein sample preparation optimisation is critical for establishing reproducible high-throughput proteomic analysis. In this study, two different fractionation sample preparation techniques (in-gel digestion and in-solution digestion) for shotgun proteomics were used to quantitatively compare the proteins identified in Vitis riparia leaf samples. The total numbers of proteins and peptides identified were compared between filter-aided sample preparation (FASP) coupled with gas-phase fractionation (GPF) and SDS-PAGE methods. There was a 24% increase in the total number of reproducibly identified proteins when FASP-GPF was used. FASP-GPF was more reproducible and less expensive than SDS-PAGE, making it the better method for shotgun proteomics of grapevine samples, as it significantly increased protein identification across biological replicates. Total peptide and protein information from the two fractionation techniques is available in PRIDE with the identifier PXD001399 (http://proteomecentral.proteomexchange.org/dataset/PXD001399). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lent, Wineke A.M. van, E-mail: w.v.lent@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands); Deetman, Joost W., E-mail: j.deetman@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Teertstra, H. Jelle, E-mail: h.teertstra@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Muller, Sara H., E-mail: s.muller@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Hans, Erwin W., E-mail: e.w.hans@utwente.nl [University of Twente, School of Management and Governance, Dept. of Industrial Engineering and Business Intelligence Systems, Enschede (Netherlands); Harten, Wim H. van, E-mail: w.v.harten@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands)

    2012-11-15

    Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (the diagnostic track) while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on access time, overtime and idle time of the CT; after implementation, the same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Conclusion: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of the changes, the average diagnostic track duration was 12.6 days, with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44%; utilization remained equal at 82%, idle time increased by 11% and overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days thus improved by 52%. Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased awareness that optimizing capacity allocation can reduce access times.
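
    The kind of capacity-allocation question such a simulation addresses can be illustrated with a toy slot-booking model (purely illustrative; the hospital's actual simulation model, parameters, and patient mix are not reproduced here). Each request books the earliest free CT slot, and access time is the gap between request day and booked day:

```python
import random

def simulate_access_times(days=2000, arrivals_per_day=8, slots_per_day=9, seed=1):
    """Toy slot-booking model: each request books the first free CT slot.
    Access time = booked day - request day. Illustrates how slack capacity
    (slots vs. demand) drives access times in scenarios like those evaluated."""
    random.seed(seed)
    free = [slots_per_day] * (days * 2)   # free slots per (future) day
    waits = []
    for day in range(days):
        # Daily demand fluctuates around the average
        for _ in range(random.randint(arrivals_per_day - 3, arrivals_per_day + 3)):
            d = day
            while free[d] == 0:           # find the first day with a free slot
                d += 1
            free[d] -= 1
            waits.append(d - day)
    return sum(waits) / len(waits)

# Adding slack capacity shortens the mean access time for the same demand
tight = simulate_access_times(slots_per_day=9)
slack = simulate_access_times(slots_per_day=12)
```

    Even this crude model reproduces the qualitative lesson of the study: access times fall sharply once allocated capacity comfortably exceeds average demand.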

  4. Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation

    International Nuclear Information System (INIS)

    Lent, Wineke A.M. van; Deetman, Joost W.; Teertstra, H. Jelle; Muller, Sara H.; Hans, Erwin W.; Harten, Wim H. van

    2012-01-01

    Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (the diagnostic track) while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on access time, overtime and idle time of the CT; after implementation, the same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Conclusion: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of the changes, the average diagnostic track duration was 12.6 days, with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44%; utilization remained equal at 82%, idle time increased by 11% and overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days thus improved by 52%. Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased awareness that optimizing capacity allocation can reduce access times.

  5. DockoMatic: automated peptide analog creation for high throughput virtual screening.

    Science.gov (United States)

    Jacob, Reed B; Bullock, Casey W; Andersen, Tim; McDougal, Owen M

    2011-10-01

    The purpose of this manuscript is threefold: (1) to describe an update to DockoMatic that allows the user to generate cyclic peptide analog structure files based on Protein Data Bank (PDB) files, (2) to test the accuracy of the peptide analog structure generation utility, and (3) to evaluate the high-throughput capacity of DockoMatic. The DockoMatic graphical user interface works with the software program Treepack to create user-defined peptide analogs. To validate this approach, DockoMatic-produced cyclic peptide analogs were tested for three-dimensional structure consistency and binding affinity against four experimentally determined peptide structure files available in the Research Collaboratory for Structural Bioinformatics database. The peptides used to evaluate this new functionality were the alpha-conotoxins ImI, PnIA, and their published analogs. Peptide analogs were generated by DockoMatic and tested for their ability to bind to X-ray crystal structure models of the acetylcholine binding protein from Aplysia californica. The results, comprising more than 300 simulations, demonstrate that DockoMatic predicts the binding energy of peptide structures to within 3.5 kcal/mol, and the orientation of the bound ligand agrees with experimental data to within 1.8 Å root-mean-square deviation. Evaluation of high-throughput virtual screening capacity demonstrated that DockoMatic can collect, evaluate, and summarize the output of 10,000 AutoDock jobs in less than 2 hours of computational time, while 100,000 jobs require approximately 15 hours and 1,000,000 jobs are estimated to take up to a week. Copyright © 2011 Wiley Periodicals, Inc.
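
    The pose-accuracy metric quoted in this abstract, root-mean-square deviation (RMSD) between a docked and an experimental ligand, is straightforward to compute once the two coordinate sets are matched atom-for-atom. A minimal sketch (coordinates are invented; it assumes both structures are already in the same reference frame, with no superposition step):

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two matched sets of atomic
    coordinates (N x 3 arrays), assumed to share a reference frame."""
    a = np.asarray(coords_a, dtype=float)
    b = np.asarray(coords_b, dtype=float)
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

# Two-atom toy ligand: docked pose vs. crystal pose (coordinates in Å)
docked = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
crystal = [[0.0, 0.0, 0.3], [1.0, 0.4, 0.0]]
deviation = rmsd(docked, crystal)   # ~0.354 Å
```

    A value under the 1.8 Å threshold reported above would count the docked orientation as consistent with experiment.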

  6. Cancer panomics: computational methods and infrastructure for integrative analysis of cancer high-throughput "omics" data

    DEFF Research Database (Denmark)

    Brunak, Søren; De La Vega, Francisco M.; Rätsch, Gunnar

    2014-01-01

    Targeted cancer treatment is becoming the goal of newly developed oncology medicines and has already shown promise in some spectacular cases, such as BRAF kinase inhibitors in BRAF-mutant (e.g. V600E) melanoma. These developments are driven by the advent of high-throughput sequencing, which continues to drop in cost and has enabled the sequencing of the genome, transcriptome, and epigenome of the tumors of a large number of cancer patients in order to discover the molecular aberrations that drive the oncogenesis of several types of cancer. Applying these technologies in the clinic promises to transform cancer treatment by identifying the therapeutic vulnerabilities of each patient's tumor. These approaches will need to address the panomics of cancer: the integration of the complex combination of patient-specific characteristics that drive the development of each person's tumor...

  7. A high throughput array microscope for the mechanical characterization of biomaterials

    Science.gov (United States)

    Cribb, Jeremy; Osborne, Lukas D.; Hsiao, Joe Ping-Lin; Vicci, Leandra; Meshram, Alok; O'Brien, E. Tim; Spero, Richard Chasen; Taylor, Russell; Superfine, Richard

    2015-02-01

    In the last decade, the emergence of high-throughput screening has enabled the development of novel drug therapies and elucidated many complex cellular processes. Concurrently, the mechanobiology community has developed tools and methods showing that the dysregulation of biophysical properties, and of the biochemical mechanisms controlling those properties, contributes significantly to many human diseases. Despite these advances, a complete understanding of the connection between biomechanics and disease will require advances in instrumentation that enable parallelized, high-throughput assays capable of probing complex signaling pathways, studying biology in physiologically relevant conditions, and capturing specimen and mechanical heterogeneity. Traditional biophysical instruments are unable to meet this need. To address the challenge of large-scale, parallelized biophysical measurements, we have developed an automated high-throughput array microscope system that uses passive microbead diffusion to characterize the mechanical properties of biomaterials. The instrument is capable of acquiring data on twelve channels simultaneously, where each channel can independently drive two-channel fluorescence imaging at up to 50 frames per second. We employ this system to measure the concentration-dependent apparent viscosity of hyaluronan, an essential polymer found in connective tissue whose expression has been implicated in cancer progression.

  8. High-throughput bioinformatics with the Cyrille2 pipeline system.

    NARCIS (Netherlands)

    Fiers, M.W.E.J.; Burgt, van der A.; Datema, E.; Groot, de J.C.W.; Ham, van R.C.H.J.

    2008-01-01

    Background - Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses

  9. Single cell proteomics in biomedicine: High-dimensional data acquisition, visualization, and analysis.

    Science.gov (United States)

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-02-01

    New insights into cellular heterogeneity over the last decade have provoked the development of a variety of single cell omics tools at a lightning pace. The resultant high-dimensional single cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single cell proteomic tools, with a particular focus on data acquisition and quantification, followed by an elaboration of the statistical and computational approaches developed to date for dissecting high-dimensional single cell data. The underlying assumptions, unique features, and limitations of these analytical methods, along with the designated biological questions they seek to answer, will be discussed. Particular attention is given to information-theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. High-throughput preparation and testing of ion-exchanged zeolites

    International Nuclear Information System (INIS)

    Janssen, K.P.F.; Paul, J.S.; Sels, B.F.; Jacobs, P.A.

    2007-01-01

    A high-throughput research platform was developed for the preparation and subsequent catalytic liquid-phase screening of ion-exchanged zeolites, for instance with regard to their use as heterogeneous catalysts. In this system, aqueous solutions and other liquid as well as solid reagents are employed as starting materials, and 24 samples are prepared on a library plate with a 4 x 6 layout. Volumetric dispensing of metal precursor solutions, weighing of zeolite, the subsequent mixing/washing cycles of the starting materials, and distribution of the reaction mixtures to the library plate are performed automatically by liquid and solid handlers controlled by a single, common, easy-to-use programming software interface. The prepared materials are then automatically contacted with reagent solutions, heated, stirred and sampled continuously using a modified liquid-handling system. The high-throughput platform is highly promising for accelerating both catalyst synthesis and screening. In this paper the preparation of lanthanum-exchanged NaY zeolites (LaNaY) on the platform is reported, along with their use as catalysts for the conversion of renewables

  11. A Self-Reporting Photocatalyst for Online Fluorescence Monitoring of High Throughput RAFT Polymerization.

    Science.gov (United States)

    Yeow, Jonathan; Joshi, Sanket; Chapman, Robert; Boyer, Cyrille Andre Jean Marie

    2018-04-25

    Translating controlled/living radical polymerization (CLRP) from batch to the high throughput production of polymer libraries presents several challenges in terms of both polymer synthesis and characterization. Although recently there have been significant advances in the field of low volume, high throughput CLRP, techniques able to simultaneously monitor multiple polymerizations in an "online" manner have not yet been developed. Here, we report our discovery that 5,10,15,20-tetraphenyl-21H,23H-porphine zinc (ZnTPP) is a self-reporting photocatalyst that can mediate PET-RAFT polymerization as well as report on monomer conversion via changes in its fluorescence properties. This enables the use of a microplate reader to conduct high throughput "online" monitoring of PET-RAFT polymerizations performed directly in 384-well, low volume microtiter plates. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. The APEX Quantitative Proteomics Tool: Generating protein quantitation estimates from LC-MS/MS proteomics results

    Directory of Open Access Journals (Sweden)

    Saeed Alexander I

    2008-12-01

    utility to merge multiple APEX results into a standardized format in preparation for further statistical analysis. Conclusion The APEX Quantitative Proteomics Tool provides a simple means to quickly derive hundreds to thousands of protein abundance values from standard liquid chromatography-tandem mass spectrometry proteomics datasets. The APEX tool provides a straightforward intuitive interface design overlaying a highly customizable computational workflow to produce protein abundance values from LC-MS/MS datasets.
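
    The core APEX calculation is simple enough to sketch: observed spectral counts are corrected by each protein's predicted peptide observability and then normalized into abundance estimates. A schematic version follows (function and variable names are ours; the actual tool adds machine-learned observability scores, false-positive corrections, and export utilities):

```python
def apex_abundances(spectral_counts, observability, total_molecules=1.0):
    """APEX-style estimate: divide each protein's spectral count by its
    predicted peptide observability O_i, then scale the corrected counts
    so they sum to a chosen total (a per-cell copy number in the original
    method). All inputs here are illustrative, not real data."""
    adjusted = {p: spectral_counts[p] / observability[p] for p in spectral_counts}
    norm = sum(adjusted.values())
    return {p: total_molecules * v / norm for p, v in adjusted.items()}

# Toy two-protein example: protA's peptides are twice as observable,
# so its raw spectral-count advantage is partly discounted.
counts = {"protA": 40, "protB": 10}
obs = {"protA": 2.0, "protB": 1.0}
est = apex_abundances(counts, obs, total_molecules=100.0)
# est["protA"] -> ~66.7, est["protB"] -> ~33.3
```

    The normalization step is why APEX outputs are directly comparable across proteins within one LC-MS/MS run, which is the property the tool's merge utility relies on when combining results.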

  13. Insight into dynamic genome imaging: Canonical framework identification and high-throughput analysis.

    Science.gov (United States)

    Ronquist, Scott; Meixner, Walter; Rajapakse, Indika; Snyder, John

    2017-07-01

    The human genome is dynamic in structure, complicating researchers' attempts at fully understanding it. Time-series fluorescence in situ hybridization (FISH) imaging has increased our ability to observe genome structure, but due to cell-type and experimental variability these data are often noisy and difficult to analyze. Furthermore, computational analysis techniques are needed for homolog discrimination and canonical framework detection in time-series images. In this paper we introduce novel ideas for nucleus imaging analysis, present findings extracted using dynamic genome imaging, and propose an objective algorithm for high-throughput, time-series FISH imaging. While a canonical framework could not be detected beyond statistical significance in the analyzed dataset, a mathematical framework for detection has been outlined, with extension to 3D image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. An Extremely Halophilic Proteobacterium Combines a Highly Acidic Proteome with a Low Cytoplasmic Potassium Content*

    Science.gov (United States)

    Deole, Ratnakar; Challacombe, Jean; Raiford, Douglas W.; Hoff, Wouter D.

    2013-01-01

    Halophilic archaea accumulate molar concentrations of KCl in their cytoplasm as an osmoprotectant and have evolved highly acidic proteomes that function only at high salinity. We examined osmoprotection in the photosynthetic Proteobacteria Halorhodospira halophila and Halorhodospira halochloris. Genome sequencing and isoelectric focusing gel electrophoresis showed that the proteome of H. halophila is acidic. In line with this finding, H. halophila accumulated molar concentrations of KCl when grown in high salt medium as detected by x-ray microanalysis and plasma emission spectrometry. This result extends the taxonomic range of organisms using KCl as a main osmoprotectant to the Proteobacteria. The closely related organism H. halochloris does not exhibit an acidic proteome, matching its inability to accumulate K+. This observation indicates recent evolutionary changes in the osmoprotection strategy of these organisms. Upon growth of H. halophila in low salt medium, its cytoplasmic K+ content matches that of Escherichia coli, revealing an acidic proteome that can function in the absence of high cytoplasmic salt concentrations. These findings necessitate a reassessment of two central aspects of theories for understanding extreme halophiles. First, we conclude that proteome acidity is not driven by stabilizing interactions between K+ ions and acidic side chains but by the need for maintaining sufficient solvation and hydration of the protein surface at high salinity through strongly hydrated carboxylates. Second, we propose that obligate protein halophilicity is a non-adaptive property resulting from genetic drift in which constructive neutral evolution progressively incorporates weakly stabilizing K+-binding sites on an increasingly acidic protein surface. PMID:23144460
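
    The proteome acidity discussed in this abstract is ultimately a residue-composition property. A crude per-sequence proxy (our own toy metric, not the isoelectric-focusing or pI analysis used in the study) is the excess of acidic over basic residues:

```python
def acidity_index(protein_seq):
    """Fraction of acidic (D, E) minus basic (K, R) residues in a protein
    sequence: a crude proxy for the acidic-proteome trend described for
    H. halophila. Positive values indicate an acidic-skewed sequence."""
    seq = protein_seq.upper()
    acidic = sum(seq.count(a) for a in "DE")
    basic = sum(seq.count(b) for b in "KR")
    return (acidic - basic) / len(seq)

# Invented sequences: a halophile-like acidic stretch vs. a basic one
acidic_like = acidity_index("MDEEADDLEKEGGDE")   # > 0: acidic-skewed
basic_like = acidity_index("MKARLKAGTSLV")       # < 0: basic-skewed
```

    Averaged over a whole predicted proteome, a metric of this kind would separate H. halophila from H. halochloris in the direction the abstract reports; real analyses use full pI calculations rather than this simple count.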

  15. Quantitative proteomic analysis of microdissected oral epithelium for cancer biomarker discovery.

    Science.gov (United States)

    Xiao, Hua; Langerman, Alexander; Zhang, Yan; Khalid, Omar; Hu, Shen; Cao, Cheng-Xi; Lingen, Mark W; Wong, David T W

    2015-11-01

    Specific biomarkers are urgently needed for the detection and progression of oral cancer. The objective of this study was to discover cancer biomarkers from oral epithelium through utilizing high throughput quantitative proteomics approaches. Morphologically malignant, epithelial dysplasia, and adjacent normal epithelial tissues were laser capture microdissected (LCM) from 19 patients and used for proteomics analysis. Total proteins from each group were extracted, digested and then labelled with corresponding isobaric tags for relative and absolute quantitation (iTRAQ). Labelled peptides from each sample were combined and analyzed by liquid chromatography-mass spectrometry (LC-MS/MS) for protein identification and quantification. In total, 500 proteins were identified and 425 of them were quantified. When compared with adjacent normal oral epithelium, 17 and 15 proteins were consistently up-regulated or down-regulated in malignant and epithelial dysplasia, respectively. Half of these candidate biomarkers were discovered for oral cancer for the first time. Cornulin was initially confirmed in tissue protein extracts and was further validated in tissue microarray. Its presence in the saliva of oral cancer patients was also explored. Myoglobin and S100A8 were pre-validated by tissue microarray. These data demonstrated that the proteomic biomarkers discovered through this strategy are potential targets for oral cancer detection and salivary diagnostics. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. High-throughput fragment screening by affinity LC-MS.

    Science.gov (United States)

    Duong-Thi, Minh-Dao; Bergström, Maria; Fex, Tomas; Isaksson, Roland; Ohlson, Sten

    2013-02-01

    Fragment screening, an emerging approach for hit finding in drug discovery, has recently been proven effective by its first approved drug, vemurafenib, for cancer treatment. Techniques such as nuclear magnetic resonance, surface plasmon resonance, and isothermal titration calorimetry, each with their own pros and cons, have been employed for screening fragment libraries. As an alternative, screening based on high-performance liquid chromatography separation has been developed. In this work, we present weak affinity LC/MS as a method to screen fragments under high-throughput conditions. Affinity-based capillary columns with immobilized thrombin were used to screen a collection of 590 compounds from a fragment library. The collection was divided into 11 mixtures (each containing 35 to 65 fragments) and screened by MS detection. The primary screening was performed at a throughput of over 3500 fragments per day. Thirty hits were identified and subsequently entered a secondary screening using an active-site-blocked thrombin column for confirmation of specificity. One hit showed selective binding to thrombin with an estimated dissociation constant (KD) in the 0.1 mM range. This study shows that affinity LC/MS is characterized by high throughput, ease of operation, and low consumption of target and fragments, and it therefore promises to be a valuable method for fragment screening.

  17. AOPs and Biomarkers: Bridging High Throughput Screening ...

    Science.gov (United States)

    As high-throughput screening (HTS) plays a larger role in toxicity testing, computational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models designed to quantify potential adverse effects based on HTS data will benefit from additional data sources that connect the magnitude of perturbation in the in vitro system to a level of concern at the organism or population level. The adverse outcome pathway (AOP) concept provides an ideal framework for combining these complementary data. Recent international efforts under the auspices of the Organisation for Economic Co-operation and Development (OECD) have resulted in an AOP wiki designed to house formal descriptions of AOPs suitable for use in regulatory decision making. Subsequent efforts have built upon this to include an ontology describing the AOP, with linkages to biological pathways, physiological terminology, and taxonomic applicability domains. Incorporation of an AOP network tool developed by the U.S. Army Corps of Engineers also allows consideration of cumulative risk from chemical and non-chemical stressors. Biomarkers are an important complement to formal AOP descriptions, particularly when dealing with susceptible subpopulations or lifestages in human health risk assessment. To address the issue of non-chemical stressors that may modify the effects of criteria air pollutants, a novel method was used to integrate blood gene expression data with hema

  18. Links among available integral benchmarks and differential data evaluations, computational biases and uncertainties, and nuclear criticality safety biases on potential MOX production throughput

    International Nuclear Information System (INIS)

    Goluoglu, S.; Hopper, C.M.

    2004-01-01

    Through the use of Oak Ridge National Laboratory's recently developed and applied sensitivity and uncertainty computational analysis techniques, this paper presents the relevance and importance of available and needed integral benchmarks and differential data evaluations impacting potential MOX production throughput determinations relative to low-moderated MOX fuel blending operations. The relevance and importance in the availability of or need for critical experiment benchmarks and data evaluations are presented in terms of computational biases as influenced by computational and experimental sensitivities and uncertainties relative to selected MOX production powder blending processes. Recent developments for estimating the safe margins of subcriticality for assuring nuclear criticality safety for process approval are presented. In addition, the impact of the safe margins (due to computational biases and uncertainties) on potential MOX production throughput will also be presented. (author)

  19. Proteomic evidences for rex regulation of metabolism in toxin-producing Bacillus cereus ATCC 14579.

    Directory of Open Access Journals (Sweden)

    Sabrina Laouami

    Full Text Available The facultative anaerobe, Bacillus cereus, causes diarrheal diseases in humans. Its ability to deal with oxygen availability is recognized to be critical for pathogenesis. The B. cereus genome comprises a gene encoding a protein with high similarities to the redox regulator, Rex, which is a central regulator of anaerobic metabolism in Bacillus subtilis and other Gram-positive bacteria. Here, we showed that B. cereus rex is monocistronic and down-regulated in the absence of oxygen. The protein encoded by rex is an authentic Rex transcriptional factor since its DNA binding activity depends on the NADH/NAD+ ratio. Rex deletion compromised the ability of B. cereus to cope with external oxidative stress under anaerobiosis while increasing B. cereus resistance against such stress under aerobiosis. The deletion of rex affects anaerobic fermentative and aerobic respiratory metabolism of B. cereus by decreasing and increasing, respectively, the carbon flux through the NADH-recycling lactate pathway. We compared both the cellular proteome and exoproteome of the wild-type and Δrex cells using a high throughput shotgun label-free quantitation approach and identified proteins that are under control of Rex-mediated regulation. Proteomics data have been deposited to the ProteomeXchange with identifier PXD000886. The data suggest that Rex regulates both the cross-talk between metabolic pathways that produce NADH and NADPH and toxinogenesis, especially in oxic conditions.

  20. Multiplex enrichment quantitative PCR (ME-qPCR): a high-throughput, highly sensitive detection method for GMO identification.

    Science.gov (United States)

    Fu, Wei; Zhu, Pengyu; Wei, Shuang; Zhixin, Du; Wang, Chenguang; Wu, Xiyang; Li, Feiwu; Zhu, Shuifang

    2017-04-01

    Among high-throughput detection methods, PCR-based methodologies are regarded as the most cost-efficient and feasible compared with next-generation sequencing or ChIP-based methods. However, PCR-based methods can only achieve multiplex detection up to 15-plex due to limitations imposed by multiplex primer interactions. Their detection throughput therefore cannot meet the demands of high-throughput applications such as SNP or gene expression analysis. In our study, we have developed a new high-throughput PCR-based detection method, multiplex enrichment quantitative PCR (ME-qPCR), which is a combination of qPCR and nested PCR. The GMO content detection results in our study showed that ME-qPCR could achieve high-throughput detection up to 26-plex. Compared to the original qPCR, the Ct values of ME-qPCR were lower for the same group, showing that the sensitivity of ME-qPCR is higher than that of the original qPCR. The absolute limit of detection of ME-qPCR was as low as a single copy of the plant genome. Moreover, the specificity results showed that no cross-amplification occurred for irrelevant GMO events. After evaluation of all of the parameters, a practical evaluation was performed with different foods. The more stable amplification results, compared to qPCR, showed that ME-qPCR is suitable for GMO detection in foods. In conclusion, ME-qPCR achieved sensitive, high-throughput GMO detection in complex substrates, such as crop or food samples. In the future, ME-qPCR-based GMO content identification may positively impact SNP analysis or multiplex gene expression studies of food or agricultural samples. Graphical abstract: For the first-step amplification, four primers (A, B, C, and D) are added to the reaction volume. In this manner, four kinds of amplicons are generated. All four amplicons can be regarded as targets of the second-step PCR.
For the second-step amplification, three parallels have been taken for

  1. A gas trapping method for high-throughput metabolic experiments.

    Science.gov (United States)

    Krycer, James R; Diskin, Ciana; Nelson, Marin E; Zeng, Xiao-Yi; Fazakerley, Daniel J; James, David E

    2018-01-01

    Research into cellular metabolism has become more high-throughput, with typical cell-culture experiments being performed in multiwell plates (microplates). This format presents a challenge when trying to collect gaseous products, such as carbon dioxide (CO2), which requires a sealed environment and a vessel separate from the biological sample. To address this limitation, we developed a gas trapping protocol using perforated plastic lids in sealed cell-culture multiwell plates. We used this trap design to measure CO2 production from glucose and fatty acid metabolism, as well as hydrogen sulfide production from cysteine-treated cells. Our data clearly show that this gas trap can be applied to liquid and solid gas-collection media and can be used to study gaseous product generation by both adherent cells and cells in suspension. Since our gas traps can be adapted to multiwell plates of various sizes, they present a convenient, cost-effective solution that can accommodate the trend toward high-throughput measurements in metabolic research.

  2. High-throughput technology for novel SO2 oxidation catalysts

    International Nuclear Information System (INIS)

    Loskyll, Jonas; Stoewe, Klaus; Maier, Wilhelm F

    2011-01-01

    We review the state of the art and explain the need for better SO2 oxidation catalysts for the production of sulfuric acid. A high-throughput technology has been developed for the study of potential catalysts in the oxidation of SO2 to SO3. High-throughput methods are reviewed, and the problems encountered in adapting them to the corrosive conditions of SO2 oxidation are described. We show that while emissivity-corrected infrared thermography (ecIRT) can be used for primary screening, it is prone to errors because of the large variations in the emissivity of the catalyst surface. UV-visible (UV-Vis) spectrometry was selected instead as a reliable analysis method for monitoring SO2 conversion. Installing plain sugar absorbents at the reactor outlets proved valuable for the detection and quantitative removal of SO3 from the product gas before the UV-Vis analysis. We also give an overview of the elements used for prescreening and of those remaining after screening of the first catalyst generations. (topical review)

  3. Salinity tolerance loci revealed in rice using high-throughput non-invasive phenotyping

    KAUST Repository

    Al-Tamimi, Nadia Ali

    2016-11-17

    High-throughput phenotyping produces multiple measurements over time, which require new methods of analyses that are flexible in their quantification of plant growth and transpiration, yet are computationally economic. Here we develop such analyses and apply this to a rice population genotyped with a 700k SNP high-density array. Two rice diversity panels, indica and aus, containing a total of 553 genotypes, are phenotyped in waterlogged conditions. Using cubic smoothing splines to estimate plant growth and transpiration, we identify four time intervals that characterize the early responses of rice to salinity. Relative growth rate, transpiration rate and transpiration use efficiency (TUE) are analysed using a new association model that takes into account the interaction between treatment (control and salt) and genetic marker. This model allows the identification of previously undetected loci affecting TUE on chromosome 11, providing insights into the early responses of rice to salinity, in particular into the effects of salinity on plant growth and transpiration.
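
    The cubic-smoothing-spline step can be sketched as follows. The growth data here are simulated and the smoothing parameter is an illustrative choice, not the study's settings; the relative growth rate (RGR) is recovered as the derivative of the spline fitted to log-transformed shoot area:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Simulated projected shoot area (pixels), measured daily over 20 days,
# with ~8% per day exponential growth and multiplicative noise
days = np.arange(0, 21, dtype=float)
rng = np.random.default_rng(0)
area = 5000.0 * np.exp(0.08 * days) * rng.normal(1.0, 0.02, days.size)

# Cubic smoothing spline on log(area); its derivative is the relative
# growth rate, RGR = d(ln A)/dt
spline = UnivariateSpline(days, np.log(area), k=3, s=days.size * 0.001)
rgr = spline.derivative()(days)

print(np.round(rgr.mean(), 3))  # close to the simulated rate of 0.08
```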

  4. Salinity tolerance loci revealed in rice using high-throughput non-invasive phenotyping

    KAUST Repository

    Al-Tamimi, Nadia Ali; Brien, Chris; Oakey, Helena; Berger, Bettina; Saade, Stephanie; Ho, Yung Shwen; Schmöckel, Sandra M.; Tester, Mark A.; Negrão, Sónia

    2016-01-01

    High-throughput phenotyping produces multiple measurements over time, which require new methods of analyses that are flexible in their quantification of plant growth and transpiration, yet are computationally economic. Here we develop such analyses and apply this to a rice population genotyped with a 700k SNP high-density array. Two rice diversity panels, indica and aus, containing a total of 553 genotypes, are phenotyped in waterlogged conditions. Using cubic smoothing splines to estimate plant growth and transpiration, we identify four time intervals that characterize the early responses of rice to salinity. Relative growth rate, transpiration rate and transpiration use efficiency (TUE) are analysed using a new association model that takes into account the interaction between treatment (control and salt) and genetic marker. This model allows the identification of previously undetected loci affecting TUE on chromosome 11, providing insights into the early responses of rice to salinity, in particular into the effects of salinity on plant growth and transpiration.

  5. High throughput nanoimprint lithography for semiconductor memory applications

    Science.gov (United States)

    Ye, Zhengmao; Zhang, Wei; Khusnatdinov, Niyaz; Stachowiak, Tim; Irving, J. W.; Longsine, Whitney; Traub, Matthew; Fletcher, Brian; Liu, Weijun

    2017-03-01

    Imprint lithography is a promising technology for replication of nano-scale features. For semiconductor device applications, Canon deposits a low-viscosity resist on a field-by-field basis using jetting technology. A patterned mask is lowered into the resist fluid, which then quickly flows into the relief patterns in the mask by capillary action. Following this filling step, the resist is crosslinked under UV radiation, and then the mask is removed, leaving a patterned resist on the substrate. There are two critical components to meeting throughput requirements for imprint lithography. Using an approach similar to what is already done for many deposition and etch processes, imprint stations can be clustered to enhance throughput. The FPA-1200NZ2C is a four-station cluster system designed for high-volume manufacturing. For a single station, throughput includes overhead, resist dispense, resist fill time (or spread time), exposure and separation. Resist exposure and mask/wafer separation are well-understood processing steps with typical durations on the order of 0.10 to 0.20 seconds. To achieve a total process throughput of 17 wafers per hour (wph) for a single station, it is necessary to complete the fluid fill step in 1.2 seconds. For a throughput of 20 wph, fill time must be reduced to only 1.1 seconds. There are several parameters that can impact resist filling. Key parameters include resist drop volume (smaller is better), system controls (which address drop spreading after jetting), Design for Imprint or DFI (to accelerate drop spreading) and material engineering (to promote wetting between the resist and the underlying adhesion layer). In addition, it is mandatory to maintain fast filling, even for edge-field imprinting. In this paper, we address the improvements made in all of these parameters to first enable a 1.20-second filling process for a device-like pattern and have demonstrated this capability for both full fields and edge fields. Non
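
    The throughput arithmetic above can be sketched as a simple per-field budget. All step times, the field count per wafer and the per-wafer overhead below are illustrative assumptions (not values from the paper); the sketch only shows how wafer throughput responds to fill time:

```python
def wafers_per_hour(fill_s, fields_per_wafer=63, dispense_s=0.5,
                    expose_s=0.15, separate_s=0.15, wafer_overhead_s=15.0):
    """Single-station throughput (wafers/hour) from a per-field time budget.
    Field count and overheads are hypothetical, for illustration only."""
    field_cycle_s = dispense_s + fill_s + expose_s + separate_s
    wafer_time_s = fields_per_wafer * field_cycle_s + wafer_overhead_s
    return 3600.0 / wafer_time_s

# Shorter fill time translates directly into higher throughput
print(round(wafers_per_hour(1.2), 1), round(wafers_per_hour(1.1), 1))
```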

  6. Combining RNA-seq and proteomic profiling to identify seminal fluid proteins in the migratory grasshopper Melanoplus sanguinipes (F).

    Science.gov (United States)

    Bonilla, Martha L; Todd, Christopher; Erlandson, Martin; Andres, Jose

    2015-12-22

    Seminal fluid proteins control many aspects of fertilization and in turn, they play a key role in post-mating sexual selection and possibly reproductive isolation. Because effective proteome profiling relies on the availability of high-quality DNA reference databases, our knowledge of these proteins is still largely limited to model organisms with ample genetic resources. New advances in sequencing technology allow for the rapid characterization of transcriptomes at low cost. By combining high throughput RNA-seq and shotgun proteomic profiling, we have characterized the seminal fluid proteins secreted by the primary male accessory gland of the migratory grasshopper (Melanoplus sanguinipes), one of the main agricultural pests in central North America. Using RNA sequencing, we characterized the transcripts of ~8,100 genes expressed in the long hyaline tubules (LHT) of the accessory glands. Proteomic profiling identified 353 proteins expressed in the long hyaline tubules (LHT). Of special interest are seminal fluid proteins (SFPs), such as EJAC-SP, ACE and prostaglandin synthetases, which are known to regulate female oviposition in insects. Our study provides new insights into the proteomic components of male ejaculate in Orthopterans, and highlights several important patterns. First, the presence of proteins that lack predicted classical secretory tags in accessory gland proteomes is common in male accessory glands. Second, the products of a few highly expressed genes dominate the accessory gland secretions. Third, accessory gland transcriptomes are enriched for novel transcripts. Fourth, there is conservation of SFPs' functional classes across distantly related taxonomic groups with very different life histories, mating systems and sperm transferring mechanisms. The identified SFPs may serve as targets of future efforts to develop species-specific genetic control strategies.

  7. High-Throughput Molecular Simulations of Metal Organic Frameworks for CO2 Separation: Opportunities and Challenges

    Directory of Open Access Journals (Sweden)

    Ilknur Erucar

    2018-02-01

    Full Text Available Metal organic frameworks (MOFs) have emerged as great alternatives to traditional nanoporous materials for CO2 separation applications. MOFs are porous materials that are formed by self-assembly of transition metals and organic ligands. The most important advantage of MOFs over well-known porous materials is the possibility to generate multiple materials with varying structural properties and chemical functionalities by changing the combination of metal centers and organic linkers during the synthesis. This leads to a large diversity of materials with various pore sizes and shapes that can be efficiently used for CO2 separations. Since the number of synthesized MOFs has already reached several thousand, experimental investigation of each MOF at the lab-scale is not practical. High-throughput computational screening of MOFs is a great opportunity to identify the best materials for CO2 separation and to gain molecular-level insights into the structure–performance relationships. This type of knowledge can be used to design new materials with the desired structural features that can lead to extraordinarily high CO2 selectivities. In this mini-review, we focused on developments in high-throughput molecular simulations of MOFs for CO2 separations. After reviewing the current studies on this topic, we discussed the opportunities and challenges in the field and addressed the potential future developments.

  8. High-throughput screening of carbohydrate-degrading enzymes using novel insoluble chromogenic substrate assay kits

    DEFF Research Database (Denmark)

    Schückel, Julia; Kracun, Stjepan Kresimir; Willats, William George Tycho

    2016-01-01

    for this is that advances in genome and transcriptome sequencing, together with associated bioinformatics tools allow for rapid identification of candidate CAZymes, but technology for determining an enzyme's biochemical characteristics has advanced more slowly. To address this technology gap, a novel high-throughput assay...... CPH and ICB substrates are provided in a 96-well high-throughput assay system. The CPH substrates can be made in four different colors, enabling them to be mixed together and thus increasing assay throughput. The protocol describes a 96-well plate assay and illustrates how this assay can be used...... for screening the activities of enzymes, enzyme cocktails, and broths....

  9. Integrated Automation of High-Throughput Screening and Reverse Phase Protein Array Sample Preparation

    DEFF Research Database (Denmark)

    Pedersen, Marlene Lemvig; Block, Ines; List, Markus

    into automated robotic high-throughput screens, which allows subsequent protein quantification. In this integrated solution, samples are directly forwarded to automated cell lysate preparation and preparation of dilution series, including reformatting to a protein spotter-compatible format after the high......-throughput screening. Tracking of huge sample numbers and data analysis from a high-content screen to RPPAs is accomplished via MIRACLE, a custom made software suite developed by us. To this end, we demonstrate that the RPPAs generated in this manner deliver reliable protein readouts and that GAPDH and TFR levels can...

  10. The Mysterious Unfoldome: Structureless, Underappreciated, Yet Vital Part of Any Given Proteome

    Directory of Open Access Journals (Sweden)

    Vladimir N. Uversky

    2010-01-01

    Full Text Available Contrary to general belief, many biologically active proteins lack stable tertiary and/or secondary structure under physiological conditions in vitro. These intrinsically disordered proteins (IDPs) are highly abundant in nature and many of them are associated with various human diseases. The functional repertoire of IDPs complements the functions of ordered proteins. Since IDPs constitute a significant portion of any given proteome, they can be combined into an unfoldome: the portion of the proteome comprising all IDPs (also known as natively unfolded proteins, hence "unfoldome") together with their functions, structures, interactions, evolution, and so forth. The amino acid sequences and compositions of IDPs are very different from those of ordered proteins, making reliable identification of IDPs at the proteome level possible by various computational means. Furthermore, IDPs possess a number of unique structural properties and are characterized by peculiar conformational behavior, including high stability against low pH and high temperature and structural indifference toward unfolding by strong denaturants. These peculiarities have proven useful for the development of experimental techniques for the large-scale identification of IDPs in various organisms. Some of the computational and experimental tools for unfoldome discovery are discussed in this review.

  11. High-throughput screening of filamentous fungi using nanoliter-range droplet-based microfluidics

    Science.gov (United States)

    Beneyton, Thomas; Wijaya, I. Putu Mahendra; Postros, Prexilia; Najah, Majdi; Leblond, Pascal; Couvent, Angélique; Mayot, Estelle; Griffiths, Andrew D.; Drevelle, Antoine

    2016-06-01

    Filamentous fungi are an extremely important source of industrial enzymes because of their capacity to secrete large quantities of proteins. Currently, functional screening of fungi is associated with low throughput and high costs, which severely limits the discovery of novel enzymatic activities and better production strains. Here, we describe a nanoliter-range droplet-based microfluidic system specially adapted for the high-throughput screening (HTS) of large filamentous fungi libraries for secreted enzyme activities. The platform allowed (i) compartmentalization of single spores in ~10 nl droplets, (ii) germination and mycelium growth and (iii) high-throughput sorting of fungi based on enzymatic activity. A 10^4-clone UV-mutated library of Aspergillus niger was screened based on α-amylase activity in just 90 minutes. Active clones were enriched 196-fold after a single round of microfluidic HTS. The platform is a powerful tool for the development of new production strains with low cost, space and time footprint and should bring enormous benefit for improving the viability of biotechnological processes.
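
    Fold enrichment of the kind reported above is the ratio of active-clone fractions after versus before sorting. A minimal sketch, using invented counts (not the study's data) chosen to give a comparable result:

```python
def fold_enrichment(pre_active, pre_total, post_active, post_total):
    """Fold enrichment of active clones after one round of sorting:
    (active fraction after) / (active fraction before)."""
    return (post_active / post_total) / (pre_active / pre_total)

# Illustrative numbers only: 0.4% active before sorting, 78% after
print(round(fold_enrichment(4, 1000, 78, 100), 1))  # → 195.0
```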

  12. The application of the high throughput sequencing technology in the transposable elements.

    Science.gov (United States)

    Liu, Zhen; Xu, Jian-hong

    2015-09-01

    High throughput sequencing technology has dramatically improved the efficiency of DNA sequencing, and decreased the costs to a great extent. Meanwhile, this technology usually has advantages of better specificity, higher sensitivity and accuracy. Therefore, it has been applied to the research on genetic variations, transcriptomics and epigenomics. Recently, this technology has been widely employed in the studies of transposable elements and has achieved fruitful results. In this review, we summarize the application of high throughput sequencing technology in the fields of transposable elements, including the estimation of transposon content, preference of target sites and distribution, insertion polymorphism and population frequency, identification of rare copies, transposon horizontal transfers as well as transposon tagging. We also briefly introduce the major common sequencing strategies and algorithms, their advantages and disadvantages, and the corresponding solutions. Finally, we envision the developing trends of high throughput sequencing technology, especially the third generation sequencing technology, and its application in transposon studies in the future, hopefully providing a comprehensive understanding and reference for related scientific researchers.

  13. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Threading Building Blocks (TBB) library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  14. Out-of-Core Computations of High-Resolution Level Sets by Means of Code Transformation

    DEFF Research Database (Denmark)

    Christensen, Brian Bunch; Nielsen, Michael Bang; Museth, Ken

    2012-01-01

    We propose a storage efficient, fast and parallelizable out-of-core framework for streaming computations of high resolution level sets. The fundamental techniques are skewing and tiling transformations of streamed level set computations which allow for the combination of interface propagation, re...... computations are now CPU bound and consequently the overall performance is unaffected by disk latency and bandwidth limitations. We demonstrate this with several benchmark tests that show sustained out-of-core throughputs close to that of in-core level set simulations....

  15. beachmat: A Bioconductor C++ API for accessing high-throughput biological data from a variety of R matrix types.

    Directory of Open Access Journals (Sweden)

    Aaron T L Lun

    2018-05-01

    Full Text Available Biological experiments involving genomics or other high-throughput assays typically yield a data matrix that can be explored and analyzed using the R programming language with packages from the Bioconductor project. Improvements in the throughput of these assays have resulted in an explosion of data even from routine experiments, which poses a challenge to the existing computational infrastructure for statistical data analysis. For example, single-cell RNA sequencing (scRNA-seq) experiments frequently generate large matrices containing expression values for each gene in each cell, requiring sparse or file-backed representations for memory-efficient manipulation in R. These alternative representations are not easily compatible with high-performance C++ code used for computationally intensive tasks in existing R/Bioconductor packages. Here, we describe a C++ interface named beachmat, which enables agnostic data access from various matrix representations. This allows package developers to write efficient C++ code that is interoperable with dense, sparse and file-backed matrices, amongst others. We evaluated the performance of beachmat for accessing data from each matrix representation using both simulated and real scRNA-seq data, and defined a clear memory/speed trade-off to motivate the choice of an appropriate representation. We also demonstrate how beachmat can be incorporated into the code of other packages to drive analyses of a very large scRNA-seq data set.

  16. beachmat: A Bioconductor C++ API for accessing high-throughput biological data from a variety of R matrix types.

    Science.gov (United States)

    Lun, Aaron T L; Pagès, Hervé; Smith, Mike L

    2018-05-01

    Biological experiments involving genomics or other high-throughput assays typically yield a data matrix that can be explored and analyzed using the R programming language with packages from the Bioconductor project. Improvements in the throughput of these assays have resulted in an explosion of data even from routine experiments, which poses a challenge to the existing computational infrastructure for statistical data analysis. For example, single-cell RNA sequencing (scRNA-seq) experiments frequently generate large matrices containing expression values for each gene in each cell, requiring sparse or file-backed representations for memory-efficient manipulation in R. These alternative representations are not easily compatible with high-performance C++ code used for computationally intensive tasks in existing R/Bioconductor packages. Here, we describe a C++ interface named beachmat, which enables agnostic data access from various matrix representations. This allows package developers to write efficient C++ code that is interoperable with dense, sparse and file-backed matrices, amongst others. We evaluated the performance of beachmat for accessing data from each matrix representation using both simulated and real scRNA-seq data, and defined a clear memory/speed trade-off to motivate the choice of an appropriate representation. We also demonstrate how beachmat can be incorporated into the code of other packages to drive analyses of a very large scRNA-seq data set.
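
    The idea of backend-agnostic matrix access that beachmat provides for R/C++ can be illustrated with a Python analogue (this is not beachmat's API, just a sketch of the same design goal): one function computes per-gene means whether the expression matrix is dense or sparse, without ever densifying the sparse backend.

```python
import numpy as np
from scipy import sparse

def row_means(mat):
    """Per-gene mean expression, agnostic to the matrix backend."""
    if sparse.issparse(mat):
        # scipy's sparse .mean() works on the stored values only
        return np.asarray(mat.mean(axis=1)).ravel()
    return np.asarray(mat).mean(axis=1)

# Same (tiny, invented) genes-by-cells matrix in two representations
dense = np.array([[0.0, 2.0, 4.0], [1.0, 1.0, 1.0]])
sp = sparse.csr_matrix(dense)

print(row_means(dense), row_means(sp))  # identical results for both backends
```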

  17. beachmat: A Bioconductor C++ API for accessing high-throughput biological data from a variety of R matrix types

    Science.gov (United States)

    Pagès, Hervé

    2018-01-01

    Biological experiments involving genomics or other high-throughput assays typically yield a data matrix that can be explored and analyzed using the R programming language with packages from the Bioconductor project. Improvements in the throughput of these assays have resulted in an explosion of data even from routine experiments, which poses a challenge to the existing computational infrastructure for statistical data analysis. For example, single-cell RNA sequencing (scRNA-seq) experiments frequently generate large matrices containing expression values for each gene in each cell, requiring sparse or file-backed representations for memory-efficient manipulation in R. These alternative representations are not easily compatible with high-performance C++ code used for computationally intensive tasks in existing R/Bioconductor packages. Here, we describe a C++ interface named beachmat, which enables agnostic data access from various matrix representations. This allows package developers to write efficient C++ code that is interoperable with dense, sparse and file-backed matrices, amongst others. We evaluated the performance of beachmat for accessing data from each matrix representation using both simulated and real scRNA-seq data, and defined a clear memory/speed trade-off to motivate the choice of an appropriate representation. We also demonstrate how beachmat can be incorporated into the code of other packages to drive analyses of a very large scRNA-seq data set. PMID:29723188

  18. A community proposal to integrate proteomics activities in ELIXIR [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Juan Antonio Vizcaíno

    2017-06-01

    Full Text Available Computational approaches have been major drivers behind the progress of proteomics in recent years. The aim of this white paper is to provide a framework for integrating computational proteomics into ELIXIR in the near future, and thus to broaden the portfolio of omics technologies supported by this European distributed infrastructure. This white paper is the direct result of a strategy meeting on ‘The Future of Proteomics in ELIXIR’ that took place in March 2017 in Tübingen (Germany), and involved representatives of eleven ELIXIR nodes. These discussions led to a list of priority areas in computational proteomics that would complement existing activities and close gaps in the portfolio of tools and services offered by ELIXIR so far. We provide some suggestions on how these activities could be integrated into ELIXIR’s existing platforms, and how it could lead to a new ELIXIR use case in proteomics. We also highlight connections to the related field of metabolomics, where similar activities are ongoing. This white paper could thus serve as a starting point for the integration of computational proteomics into ELIXIR. Over the next few months we will be working closely with all stakeholders involved, and in particular with other representatives of the proteomics community, to further refine this paper.

  19. High-Throughput Cancer Cell Sphere Formation for 3D Cell Culture.

    Science.gov (United States)

    Chen, Yu-Chih; Yoon, Euisik

    2017-01-01

    Three-dimensional (3D) cell culture is critical in studying cancer pathology and drug response. Though 3D cancer sphere culture can be performed in low-adherent dishes or well plates, the unregulated cell aggregation may skew the results. By contrast, microfluidic 3D culture can allow precise control of cell microenvironments, and provide higher throughput by orders of magnitude. In this chapter, we will look into engineering innovations in a microfluidic platform for high-throughput cancer cell sphere formation and review the implementation methods in detail.

  20. A high-throughput in vitro ring assay for vasoactivity using magnetic 3D bioprinting

    Science.gov (United States)

    Tseng, Hubert; Gage, Jacob A.; Haisler, William L.; Neeley, Shane K.; Shen, Tsaiwei; Hebel, Chris; Barthlow, Herbert G.; Wagoner, Matthew; Souza, Glauco R.

    2016-01-01

    Vasoactive liabilities are typically assayed using wire myography, which is limited by its high cost and low throughput. To meet the demand for higher throughput in vitro alternatives, this study introduces a magnetic 3D bioprinting-based vasoactivity assay. The principle behind this assay is the magnetic printing of vascular smooth muscle cells into 3D rings that functionally represent blood vessel segments, whose contraction can be altered by vasodilators and vasoconstrictors. A cost-effective imaging modality employing a mobile device is used to capture contraction with high throughput. The goal of this study was to validate ring contraction as a measure of vasoactivity, using a small panel of known vasoactive drugs. In vitro responses of the rings matched outcomes predicted by in vivo pharmacology, and were supported by immunohistochemistry. Altogether, this ring assay robustly models vasoactivity, which could meet the need for higher throughput in vitro alternatives. PMID:27477945

  1. High-throughput shotgun lipidomics by quadrupole time-of-flight mass spectrometry

    DEFF Research Database (Denmark)

    Ståhlman, Marcus; Ejsing, Christer S.; Tarasov, Kirill

    2009-01-01

    Technological advances in mass spectrometry and meticulous method development have produced several shotgun lipidomic approaches capable of characterizing lipid species by direct analysis of total lipid extracts. Shotgun lipidomics by hybrid quadrupole time-of-flight mass spectrometry allows...... the absolute quantification of hundreds of molecular glycerophospholipid species, glycerolipid species, sphingolipid species and sterol lipids. Future applications in clinical cohort studies demand detailed lipid molecule information and the application of high-throughput lipidomics platforms. In this review...... we describe a novel high-throughput shotgun lipidomic platform based on 96-well robot-assisted lipid extraction, automated sample infusion by microfluidic-based nanoelectrospray ionization, and quantitative multiple precursor ion scanning analysis on a quadrupole time-of-flight mass spectrometer...

  2. Infra-red thermography for high throughput field phenotyping in Solanum tuberosum.

    Directory of Open Access Journals (Sweden)

    Ankush Prashar

    Full Text Available The rapid development of genomic technology has made high throughput genotyping widely accessible but the associated high throughput phenotyping is now the major limiting factor in genetic analysis of traits. This paper evaluates the use of thermal imaging for the high throughput field phenotyping of Solanum tuberosum for differences in stomatal behaviour. A large multi-replicated trial of a potato mapping population was used to investigate the consistency in genotypic rankings across different trials and across measurements made at different times of day and on different days. The results confirmed a high degree of consistency between the genotypic rankings based on relative canopy temperature on different occasions. Genotype discrimination was enhanced both through normalising data by expressing genotype temperatures as differences from image means and through the enhanced replication obtained by using overlapping images. A Monte Carlo simulation approach was used to confirm the magnitude of genotypic differences that it is possible to discriminate. The results showed a clear negative association between canopy temperature and final tuber yield for this population, when grown under ample moisture supply. We have therefore established infrared thermography as an easy, rapid and non-destructive screening method for evaluating large population trials for genetic analysis. We also envisage this approach as having great potential for evaluating plant response to stress under field conditions.
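
    The normalisation step described above (expressing each genotype's canopy temperature as a difference from its image mean) can be sketched with simulated data. The drift and effect sizes below are invented purely to show why the subtraction stabilises genotype rankings across imaging occasions:

```python
import numpy as np

# Simulated canopy temperatures (°C): 5 imaging occasions x 3 genotypes;
# occasion-to-occasion drift dominates the raw values
rng = np.random.default_rng(1)
genotype_effect = np.array([-0.5, 0.0, 0.5])          # stable genotype ranking
image_drift = rng.normal(25.0, 2.0, size=5)            # ambient drift per image
raw = image_drift[:, None] + genotype_effect + rng.normal(0, 0.05, (5, 3))

# Normalise by expressing each genotype as a difference from its image mean
norm = raw - raw.mean(axis=1, keepdims=True)

# After normalisation the within-image ranking is consistent across occasions
print(np.argsort(norm, axis=1))
```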

  3. High-throughput purification of recombinant proteins using self-cleaving intein tags.

    Science.gov (United States)

    Coolbaugh, M J; Shakalli Tang, M J; Wood, D W

    2017-01-01

    High throughput methods for recombinant protein production using E. coli typically involve the use of affinity tags for simple purification of the protein of interest. One drawback of these techniques is the occasional need for tag removal before study, which can be hard to predict. In this work, we demonstrate two high throughput purification methods for untagged protein targets based on simple and cost-effective self-cleaving intein tags. Two model proteins, E. coli beta-galactosidase (βGal) and superfolder green fluorescent protein (sfGFP), were purified using self-cleaving versions of the conventional chitin-binding domain (CBD) affinity tag and the nonchromatographic elastin-like-polypeptide (ELP) precipitation tag in a 96-well filter plate format. Initial tests with shake flask cultures confirmed that the intein purification scheme could be scaled down, with >90% pure product generated in a single step using both methods. The scheme was then validated in a high throughput expression platform using 24-well plate cultures followed by purification in 96-well plates. For both tags and with both target proteins, the purified product was consistently obtained in a single-step, with low well-to-well and plate-to-plate variability. This simple method thus allows the reproducible production of highly pure untagged recombinant proteins in a convenient microtiter plate format. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. The potato tuber mitochondrial proteome

    DEFF Research Database (Denmark)

    Salvato, Fernanda; Havelund, Jesper Foged; Chen, Mingjie

    2014-01-01

    Mitochondria are called the powerhouses of the cell. To better understand the role of mitochondria in maintaining and regulating metabolism in storage tissues, highly purified mitochondria were isolated from dormant potato tubers (Solanum tuberosum 'Folva') and their proteome investigated. Proteins...... manner using normalized spectral counts including as many as 5-fold more "extreme" proteins (low mass, high isoelectric point, hydrophobic) than previous mitochondrial proteome studies. We estimate that this compendium of proteins represents a high coverage of the potato tuber mitochondrial proteome...

  5. Optimizing transformations for automated, high throughput analysis of flow cytometry data.

    Science.gov (United States)

    Finak, Greg; Perez, Juan-Manuel; Weng, Andrew; Gottardo, Raphael

    2010-11-04

    In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. 
We find that parameter-optimized transformations improve visualization, reduce

  6. Optimizing transformations for automated, high throughput analysis of flow cytometry data

    Directory of Open Access Journals (Sweden)

    Weng Andrew

    2010-11-01

Full Text Available Background: In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While the usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results: We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter
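The maximum-likelihood criterion described in this abstract can be sketched for one common case, the hyperbolic arcsine transform: assuming the transformed data are approximately normal, the cofactor is chosen to maximize the normal log-likelihood plus the log-Jacobian of the transform. The function names and the simple grid search below are illustrative assumptions, not flowCore's actual implementation.

```python
import math

def arcsinh_loglik(data, b):
    """Normal log-likelihood of asinh(x/b)-transformed data,
    including the log-Jacobian |d/dx asinh(x/b)| = 1/sqrt(x^2 + b^2)."""
    y = [math.asinh(x / b) for x in data]
    n = len(y)
    mu = sum(y) / n
    var = sum((v - mu) ** 2 for v in y) / n
    ll = -0.5 * n * math.log(2 * math.pi * var) - 0.5 * n  # plug-in normal fit
    ll += sum(-0.5 * math.log(x * x + b * b) for x in data)  # Jacobian term
    return ll

def optimize_cofactor(data, candidates=(5, 50, 150, 500, 1000)):
    """Grid search for the cofactor maximizing the transformed-data likelihood."""
    return max(candidates, key=lambda b: arcsinh_loglik(data, b))
```

In practice the abstract's point is exactly this: the best cofactor is data specific, so it should be estimated per dataset rather than left at a default.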

  7. Alignment of high-throughput sequencing data inside in-memory databases.

    Science.gov (United States)

    Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias

    2014-01-01

In the era of high-throughput DNA sequencing, high-performance analysis of DNA sequences is of great importance, yet computer-supported DNA analysis remains an intensive, time-consuming task. In this paper we explore the potential of a new in-memory database technology using SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL to compare execution time and memory management. To ensure comparable results, MySQL was also run in memory, utilizing its integrated memory engine for database table creation. The stored procedures performed exact and inexact searches of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, the performance of HANA and MySQL was compared using the execution times of the exact search procedures. HANA was approximately 27 times faster than MySQL, which indicates substantial potential in the new in-memory concepts and motivates further development of DNA analysis procedures.
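The exact-search logic benchmarked in this record can be expressed outside a database as well. The sketch below is a plain-Python stand-in for a stored procedure that loops over LOCATE()-style substring searches against a reference sequence; it is illustrative only, not the authors' HANA or MySQL code.

```python
def exact_match_positions(reference, reads):
    """Report every 0-based position where each read occurs exactly
    in the reference (analogous to a stored procedure repeatedly
    calling LOCATE() with an advancing start offset)."""
    hits = {}
    for read in reads:
        positions = []
        start = reference.find(read)
        while start != -1:
            positions.append(start)
            start = reference.find(read, start + 1)  # resume past last hit
        hits[read] = positions
    return hits
```

For example, searching the reads `ACGT`, `CGTA`, and `TTTT` in the reference `ACGTACGT` yields hits at positions [0, 4], [1], and [] respectively.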

  8. Free Flow Zonal Electrophoresis for Fractionation of Plant Membrane Compartments Prior to Proteomic Analysis.

    Science.gov (United States)

    Barkla, Bronwyn J

    2018-01-01

    Free flow zonal electrophoresis (FFZE) is a versatile, reproducible, and potentially high-throughput technique for the separation of plant organelles and membranes by differences in membrane surface charge. It offers considerable benefits over traditional fractionation techniques, such as density gradient centrifugation and two-phase partitioning, as it is relatively fast, sample recovery is high, and the method provides unparalleled sample purity. It has been used to successfully purify chloroplasts and mitochondria from plants but also, to obtain highly pure fractions of plasma membrane, tonoplast, ER, Golgi, and thylakoid membranes. Application of the technique can significantly improve protein coverage in large-scale proteomics studies by decreasing sample complexity. Here, we describe the method for the fractionation of plant cellular membranes from leaves by FFZE.

  9. Simultaneous measurements of auto-immune and infectious disease specific antibodies using a high throughput multiplexing tool.

    Directory of Open Access Journals (Sweden)

    Atul Asati

Full Text Available Considering the importance of ganglioside antibodies as biomarkers in various immune-mediated neuropathies and neurological disorders, we developed a high throughput multiplexing tool for the assessment of ganglioside-specific antibodies based on the Bioplex/Luminex platform. In this report, we demonstrate that the ganglioside high throughput multiplexing tool is robust and highly specific, demonstrating ∼100-fold higher concentration sensitivity for IgG detection than ELISA. In addition to the ganglioside-coated array, the high throughput multiplexing tool contains beads coated with influenza hemagglutinins derived from the H1N1 A/Brisbane/59/07 and H1N1 A/California/07/09 strains. The influenza beads provided the added advantage of simultaneous detection of ganglioside- and influenza-specific antibodies, a capacity important for assaying both infectious antigen-specific and autoimmune antibodies following vaccination or disease. Taken together, these results support the potential adoption of the ganglioside high throughput multiplexing tool for measuring ganglioside antibodies in various neuropathic and neurological disorders.

  10. Low Complexity Approach for High Throughput Belief-Propagation based Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    BOT, A.

    2013-11-01

Full Text Available The paper proposes a low complexity belief propagation (BP) based decoding algorithm for LDPC codes. In spite of the iterative nature of the decoding process, the proposed algorithm provides both reduced complexity and improved BER performance compared with the classic min-sum (MS) algorithm generally used for hardware implementations. Linear approximations of the check-node update function are used to reduce the complexity of the BP algorithm. Based on this decoding approach, an FPGA-based hardware architecture is proposed for implementing the decoding algorithm, aiming to increase decoder throughput. FPGA technology was chosen for the LDPC decoder implementation due to its parallel computation and reconfiguration capabilities. The obtained results show improvements in decoding throughput and BER performance compared with state-of-the-art approaches.
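The relationship between the exact BP check-node update and its min-sum simplification, with a linear correction of the kind this paper exploits, can be sketched as follows. The normalized (scaled) and offset variants shown are the standard linear corrections; they are not necessarily the paper's exact approximation.

```python
import math

def check_node_exact(llrs):
    """Exact BP check-node update for one extrinsic output, given the
    incoming LLRs from all *other* variable nodes."""
    prod = 1.0
    for l in llrs:
        prod *= math.tanh(l / 2.0)
    prod = max(min(prod, 1 - 1e-12), -1 + 1e-12)  # keep atanh finite
    return 2.0 * math.atanh(prod)

def check_node_min_sum(llrs, alpha=1.0, beta=0.0):
    """Min-sum approximation with a linear correction:
    alpha=1, beta=0     -> plain min-sum
    alpha<1 (e.g. 0.8)  -> normalized min-sum
    beta>0              -> offset min-sum."""
    sign = 1.0
    for l in llrs:
        if l < 0:
            sign = -sign
    mag = min(abs(l) for l in llrs)
    return sign * max(alpha * mag - beta, 0.0)
```

Plain min-sum overestimates the extrinsic magnitude; a scaling factor alpha < 1 (or an offset beta > 0) pulls it back toward the exact BP value, which is why the corrected variant recovers most of the BER loss at a fraction of the hardware cost.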

  11. Modular high-throughput test stand for versatile screening of thin-film materials libraries

    International Nuclear Information System (INIS)

    Thienhaus, Sigurd; Hamann, Sven; Ludwig, Alfred

    2011-01-01

    Versatile high-throughput characterization tools are required for the development of new materials using combinatorial techniques. Here, we describe a modular, high-throughput test stand for the screening of thin-film materials libraries, which can carry out automated electrical, magnetic and magnetoresistance measurements in the temperature range of −40 to 300 °C. As a proof of concept, we measured the temperature-dependent resistance of Fe–Pd–Mn ferromagnetic shape-memory alloy materials libraries, revealing reversible martensitic transformations and the associated transformation temperatures. Magneto-optical screening measurements of a materials library identify ferromagnetic samples, whereas resistivity maps support the discovery of new phases. A distance sensor in the same setup allows stress measurements in materials libraries deposited on cantilever arrays. A combination of these methods offers a fast and reliable high-throughput characterization technology for searching for new materials. Using this approach, a composition region has been identified in the Fe–Pd–Mn system that combines ferromagnetism and martensitic transformation.

  12. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.

    2014-12-01

As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation of existing computationally-driven protein quantification methods is that most ignore protein variation, such as alternative splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that fall outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study of mouse plasma samples and demonstrate that BP-Quant achieves accuracy similar to current state-of-the-art methods at proteoform identification, with significantly better specificity. BP-Quant is available as MATLAB® and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.
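A much-simplified, non-Bayesian illustration of the peptide-signature idea: give each peptide a ternary signature from its per-comparison test results and group peptides by signature, so that peptides outside the dominant group flag possible proteoforms. The function names and the 0.05 threshold are hypothetical; the actual BP-Quant model is a Bayesian formulation, not this heuristic.

```python
def peptide_signature(pvalues, effects, alpha=0.05):
    """Ternary signature per peptide: +1/-1 for a significant
    up/down change in each comparison, 0 otherwise."""
    return tuple(
        0 if p > alpha else (1 if e > 0 else -1)
        for p, e in zip(pvalues, effects)
    )

def group_by_signature(peptides):
    """peptides: {name: (pvalues, effects)} -> (signature groups, dominant).
    Peptides outside the dominant group hint at distinct proteoforms."""
    groups = {}
    for name, (pv, ef) in peptides.items():
        groups.setdefault(peptide_signature(pv, ef), []).append(name)
    dominant = max(groups, key=lambda s: len(groups[s]))
    return groups, dominant
```

Only the peptides sharing the dominant signature would then be rolled up into the protein-level abundance estimate, which is the intuition behind the improved specificity reported above.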

  13. A draft map of the human ovarian proteome for tissue engineering and clinical applications.

    Science.gov (United States)

    Ouni, Emna; Vertommen, Didier; Chiti, Maria Costanza; Dolmans, Marie-Madeleine; Amorim, Christiani Andrade

    2018-02-23

    Fertility preservation research in women today is increasingly taking advantage of bioengineering techniques to develop new biomimetic materials and solutions to safeguard ovarian cell function and microenvironment in vitro and in vivo. However, available data on the human ovary are limited and fundamental differences between animal models and humans are hampering researchers in their quest for more extensive knowledge of human ovarian physiology and key reproductive proteins that need to be preserved. We therefore turned to multi-dimensional label-free mass spectrometry to analyze human ovarian cortex, as it is a high-throughput and conclusive technique providing information on the proteomic composition of complex tissues like the ovary. In-depth proteomic profiling through two-dimensional liquid chromatography-mass spectrometry, western blot, histological and immunohistochemical analyses, and data mining helped us to confidently identify 1,508 proteins. Moreover, our method allowed us to chart the most complete representation so far of the ovarian matrisome, defined as the ensemble of extracellular matrix proteins and associated factors, including more than 80 proteins. In conclusion, this study will provide a better understanding of ovarian proteomics, with a detailed characterization of the ovarian follicle microenvironment, in order to enable bioengineers to create biomimetic scaffolds for transplantation and three-dimensional in vitro culture. By publishing our proteomic data, we also hope to contribute to accelerating biomedical research into ovarian health and disease in general. Published under license by The American Society for Biochemistry and Molecular Biology, Inc.

  14. Spectrophotometric Enzyme Assays for High-Throughput Screening

    Directory of Open Access Journals (Sweden)

    Jean-Louis Reymond

    2004-01-01

    Full Text Available This paper reviews high-throughput screening enzyme assays developed in our laboratory over the last ten years. These enzyme assays were initially developed for the purpose of discovering catalytic antibodies by screening cell culture supernatants, but have proved generally useful for testing enzyme activities. Examples include TLC-based screening using acridone-labeled substrates, fluorogenic assays based on the β-elimination of umbelliferone or nitrophenol, and indirect assays such as the back-titration method with adrenaline and the copper-calcein fluorescence assay for aminoacids.

  15. Proteomic profiling during the pre-competent to competent transition of the biofouling polychaete Hydroides elegans

    KAUST Repository

    Zhang, Yu

    2014-08-22

    The polychaete, Hydroides elegans, is a tube-building worm that is widely distributed in tropical and subtropical seas. It is a dominant fouling species and thus a major target organism in antifouling research. Here, the first high-throughput proteomic profiling of pre-competent and competent larvae of H. elegans is reported with the identification of 1,519 and 1,322 proteins, respectively. These proteins were associated with a variety of biological processes. However, a large proportion was involved in energy metabolism, redox homeostasis, and microtubule-based processes. A comparative analysis revealed 21 proteins that were differentially regulated in larvae approaching competency.

  16. Proteomic profiling during the pre-competent to competent transition of the biofouling polychaete Hydroides elegans

    KAUST Repository

    Zhang, Yu; Sun, Jin; Zhang, Huoming; Chandramouli, Kondethimmanahalli; Xu, Ying; He, Lisheng; Ravasi, Timothy; Qian, Peiyuan

    2014-01-01

    The polychaete, Hydroides elegans, is a tube-building worm that is widely distributed in tropical and subtropical seas. It is a dominant fouling species and thus a major target organism in antifouling research. Here, the first high-throughput proteomic profiling of pre-competent and competent larvae of H. elegans is reported with the identification of 1,519 and 1,322 proteins, respectively. These proteins were associated with a variety of biological processes. However, a large proportion was involved in energy metabolism, redox homeostasis, and microtubule-based processes. A comparative analysis revealed 21 proteins that were differentially regulated in larvae approaching competency.

  17. Proteomic profiling during the pre-competent to competent transition of the biofouling polychaete Hydroides elegans.

    Science.gov (United States)

    Zhang, Yu; Sun, Jin; Zhang, Huoming; Chandramouli, Kondethimmanahalli H; Xu, Ying; He, Li-Sheng; Ravasi, Timothy; Qian, Pei-Yuan

    2014-09-01

    The polychaete, Hydroides elegans, is a tube-building worm that is widely distributed in tropical and subtropical seas. It is a dominant fouling species and thus a major target organism in antifouling research. Here, the first high-throughput proteomic profiling of pre-competent and competent larvae of H. elegans is reported with the identification of 1,519 and 1,322 proteins, respectively. These proteins were associated with a variety of biological processes. However, a large proportion was involved in energy metabolism, redox homeostasis, and microtubule-based processes. A comparative analysis revealed 21 proteins that were differentially regulated in larvae approaching competency.

  18. Green throughput taxation

    International Nuclear Information System (INIS)

    Bruvoll, A.; Ibenholt, K.

    1998-01-01

According to optimal taxation theory, raw materials should be taxed to capture the embedded scarcity rent in their value. To reduce both natural resource use and the corresponding emissions, or the throughput in the economic system, the best policy may be a tax on material inputs. As a first approach to throughput taxation, this paper considers a tax on intermediates in the framework of a dynamic computable general equilibrium model with environmental feedbacks. To balance the budget, payroll taxes are reduced. As a result, welfare indicators such as material consumption and leisure time consumption are reduced, while all the environmental indicators improve. 27 refs

  19. Scalable Computational Methods for the Analysis of High-Throughput Biological Data

    Energy Technology Data Exchange (ETDEWEB)

    Langston, Michael A. [Univ. of Tennessee, Knoxville, TN (United States)

    2012-09-06

The primary focus of this research project is elucidating genetic regulatory mechanisms that control an organism's responses to low-dose ionizing radiation. Although low doses (at most ten centigrays) are not lethal to humans, they elicit a highly complex physiological response, with the ultimate outcome in terms of risk to human health unknown. The tools of molecular biology and computational science will be harnessed to study coordinated changes in gene expression that orchestrate the mechanisms a cell uses to manage the radiation stimulus. High performance implementations of novel algorithms that exploit the principles of fixed-parameter tractability will be used to extract gene sets suggestive of co-regulation. Genomic mining will be performed to scrutinize, winnow and highlight the most promising gene sets for more detailed investigation. The overall goal is to increase our understanding of the health risks associated with exposures to low levels of radiation.
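As a toy illustration of the graph-based gene-set extraction mentioned above: threshold a gene-gene correlation matrix into a graph and enumerate maximal cliques as candidate co-regulated sets. The project's fixed-parameter-tractable algorithms (for clique and its relaxations) are far more scalable than this basic Bron-Kerbosch sketch, and the threshold value is a hypothetical example.

```python
def correlation_graph(corr, genes, threshold=0.85):
    """Adjacency sets for gene pairs whose |correlation| passes threshold."""
    adj = {g: set() for g in genes}
    for i, a in enumerate(genes):
        for j, b in enumerate(genes):
            if i < j and abs(corr[i][j]) >= threshold:
                adj[a].add(b)
                adj[b].add(a)
    return adj

def maximal_cliques(adj):
    """Bron-Kerbosch enumeration (no pivoting) of maximal cliques,
    i.e. candidate co-regulated gene sets."""
    cliques = []
    def bk(r, p, x):
        if not p and not x:
            cliques.append(r)
            return
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)
    bk(set(), set(adj), set())
    return cliques
```

Clique enumeration is NP-hard in general, which is precisely why fixed-parameter techniques matter once the graph covers tens of thousands of genes.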

  20. High-resolution and high-throughput multichannel Fourier transform spectrometer with two-dimensional interferogram warping compensation

    Science.gov (United States)

    Watanabe, A.; Furukawa, H.

    2018-04-01

The resolution of multichannel Fourier transform (McFT) spectroscopy is insufficient for many applications despite its extreme advantage in throughput. We propose an improved configuration that achieves both high resolution and high throughput using a two-dimensional area sensor. For spectral resolution, we obtained an interferogram with a larger optical path difference by shifting the area sensor without altering any optical components. The non-linear phase error of the interferometer was successfully corrected using a phase-compensation calculation. Warping compensation was also applied to achieve higher throughput by accumulating the signal across vertical pixels. Our approach significantly improved the resolution and signal-to-noise ratio by factors of 1.7 and 34, respectively. This high-resolution, high-sensitivity McFT spectrometer will be useful for detecting weak light signals such as those in non-invasive diagnosis.

  1. A pocket device for high-throughput optofluidic holographic microscopy

    Science.gov (United States)

    Mandracchia, B.; Bianco, V.; Wang, Z.; Paturzo, M.; Bramanti, A.; Pioggia, G.; Ferraro, P.

    2017-06-01

    Here we introduce a compact holographic microscope embedded onboard a Lab-on-a-Chip (LoC) platform. A wavefront division interferometer is realized by writing a polymer grating onto the channel to extract a reference wave from the object wave impinging the LoC. A portion of the beam reaches the samples flowing along the channel path, carrying their information content to the recording device, while one of the diffraction orders from the grating acts as an off-axis reference wave. Polymeric micro-lenses are delivered forward the chip by Pyro-ElectroHydroDynamic (Pyro-EHD) inkjet printing techniques. Thus, all the required optical components are embedded onboard a pocket device, and fast, non-iterative, reconstruction algorithms can be used. We use our device in combination with a novel high-throughput technique, named Space-Time Digital Holography (STDH). STDH exploits the samples motion inside microfluidic channels to obtain a synthetic hologram, mapped in a hybrid space-time domain, and with intrinsic useful features. Indeed, a single Linear Sensor Array (LSA) is sufficient to build up a synthetic representation of the entire experiment (i.e. the STDH) with unlimited Field of View (FoV) along the scanning direction, independently from the magnification factor. The throughput of the imaging system is dramatically increased as STDH provides unlimited FoV, refocusable imaging of samples inside the liquid volume with no need for hologram stitching. To test our embedded STDH microscopy module, we counted, imaged and tracked in 3D with high-throughput red blood cells moving inside the channel volume under non ideal flow conditions.

  2. In-field High Throughput Phenotyping and Cotton Plant Growth Analysis Using LiDAR.

    Science.gov (United States)

    Sun, Shangpeng; Li, Changying; Paterson, Andrew H; Jiang, Yu; Xu, Rui; Robertson, Jon S; Snider, John L; Chee, Peng W

    2018-01-01

Plant breeding programs and a wide range of plant science applications would greatly benefit from the development of in-field high throughput phenotyping technologies. In this study, a terrestrial LiDAR-based high throughput phenotyping system was developed. A 2D LiDAR was used to scan plants from overhead in the field, and an RTK-GPS provided spatial coordinates. Precise 3D models of the scanned plants were reconstructed from the LiDAR and RTK-GPS data. The ground plane of each 3D model was separated with the RANSAC algorithm, and a Euclidean clustering algorithm was applied to remove noise generated by weeds. Clean 3D surface models of cotton plants were thus obtained, from which three plot-level morphological traits were derived: canopy height, projected canopy area, and plant volume. Canopy heights from the 85th percentile to the maximum were computed from the histogram of the z coordinates of all measured points; projected canopy area was derived by projecting all points onto the ground plane; and a trapezoidal-rule-based algorithm was proposed to estimate plant volume. Validation experiments showed good agreement between LiDAR measurements and manual measurements for maximum canopy height, projected canopy area, and plant volume, with R²-values of 0.97, 0.97, and 0.98, respectively. The developed system was used to scan the whole field repeatedly from 43 to 109 days after planting. Growth trends and growth rate curves for all three derived morphological traits were established over the monitoring period for each cultivar. Overall, the four cultivars showed similar growth trends and growth rate patterns: each continued to grow until ~88 days after planting and varied little thereafter, although the actual values were cultivar specific. Correlation analysis between the morphological traits and final yield was conducted over the monitoring period. When considering each cultivar individually
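The three plot-level traits can be sketched from a cleaned point cloud as follows. The percentile band for height follows the abstract; the grid-based area and the per-cell column sum for volume are stand-ins (the paper proposes a trapezoidal-rule algorithm whose details are not reproduced here), and the cell size is a hypothetical parameter.

```python
def canopy_traits(points, cell=0.05):
    """points: (x, y, z) tuples after ground/weed removal (metres).
    Returns (height, projected_area, volume) where:
    - height: mean of the 85th-percentile-to-max band of z values
    - area:   occupied cells of a ground-plane grid, times cell area
    - volume: per-cell maximum height summed over occupied cells
              (a simple column-sum stand-in for the trapezoidal rule)."""
    zs = sorted(p[2] for p in points)
    band = zs[int(0.85 * (len(zs) - 1)):]
    height = sum(band) / len(band)

    cells = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        cells[key] = max(cells.get(key, 0.0), z)
    area = len(cells) * cell * cell
    volume = sum(h * cell * cell for h in cells.values())
    return height, area, volume
```

A finer cell size raises fidelity but also sensitivity to point-cloud gaps, which is one reason a percentile band, rather than the single highest return, is used for height.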

  3. A comparison of high-throughput techniques for assaying circadian rhythms in plants.

    Science.gov (United States)

    Tindall, Andrew J; Waller, Jade; Greenwood, Mark; Gould, Peter D; Hartwell, James; Hall, Anthony

    2015-01-01

    Over the last two decades, the development of high-throughput techniques has enabled us to probe the plant circadian clock, a key coordinator of vital biological processes, in ways previously impossible. With the circadian clock increasingly implicated in key fitness and signalling pathways, this has opened up new avenues for understanding plant development and signalling. Our tool-kit has been constantly improving through continual development and novel techniques that increase throughput, reduce costs and allow higher resolution on the cellular and subcellular levels. With circadian assays becoming more accessible and relevant than ever to researchers, in this paper we offer a review of the techniques currently available before considering the horizons in circadian investigation at ever higher throughputs and resolutions.

  4. Multicapillary SDS-gel electrophoresis for the analysis of fluorescently labeled mAb preparations: a high throughput quality control process for the production of QuantiPlasma and PlasmaScan mAb libraries.

    Science.gov (United States)

    Székely, Andrea; Szekrényes, Akos; Kerékgyártó, Márta; Balogh, Attila; Kádas, János; Lázár, József; Guttman, András; Kurucz, István; Takács, László

    2014-08-01

Molecular heterogeneity of mAb preparations is the result of various co- and post-translational modifications and of contaminants related to the production process. Changes in molecular composition result in alterations of functional performance; therefore, quality control and validation of therapeutic or diagnostic protein products is essential. A special case is the consistent production of mAb libraries (QuantiPlasma™ and PlasmaScan™) for proteome profiling, whose quality control represents a challenge because of the high number of mAbs (>1000). Here, we devise a generally applicable multicapillary SDS-gel electrophoresis process for the analysis of fluorescently labeled mAb preparations, for the high throughput quality control of mAbs in the QuantiPlasma™ and PlasmaScan™ libraries. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Centroid based clustering of high throughput sequencing reads based on n-mer counts.

    Science.gov (United States)

    Solovyov, Alexander; Lipkin, W Ian

    2013-09-08

Many problems in computational biology require alignment-free sequence comparison. One common task involving sequence comparison is sequence clustering. Here we apply methods of alignment-free comparison (in particular, comparison using sequence composition) to the challenge of sequence clustering. We study several centroid based algorithms for clustering sequences based on word counts. A study of their performance shows that the k-means algorithm, with or without data whitening, is efficient from the computational point of view. Higher clustering accuracy can be achieved using the soft expectation maximization method, whereby each sequence is attributed to each cluster with a specific probability. We implement an open source tool for alignment-free clustering, publicly available from github: https://github.com/luscinius/afcluster. We show the utility of alignment-free sequence clustering for high throughput sequencing analysis despite its limitations. In particular, it allows one to perform assembly with reduced resources and a minimal loss of quality. The major factor affecting the performance of alignment-free read clustering is the read length.
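The composition-based clustering described above can be sketched as k-mer counting followed by Lloyd's k-means. The deterministic seeding and parameters below are illustrative simplifications (the authors' tool, afcluster, also offers the soft EM variant mentioned in the abstract).

```python
from itertools import product

def kmer_vector(seq, k=3):
    """Normalized k-mer count vector over the ACGT alphabet."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {m: i for i, m in enumerate(kmers)}
    vec = [0.0] * len(kmers)
    for i in range(len(seq) - k + 1):
        j = index.get(seq[i:i + k])  # skip k-mers with ambiguous bases
        if j is not None:
            vec[j] += 1.0
    total = sum(vec) or 1.0
    return [v / total for v in vec]

def kmeans(vectors, k, iters=20):
    """Plain Lloyd's k-means on composition vectors; returns labels.
    Seeds are spread deterministically for illustration (real tools
    use k-means++ or random restarts)."""
    centroids = [list(v) for v in vectors[::max(1, len(vectors) // k)]][:k]
    labels = [0] * len(vectors)
    for _ in range(iters):
        for i, v in enumerate(vectors):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])),
            )
        for c in range(k):
            members = [v for v, l in zip(vectors, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels
```

On two AT-rich and two GC-rich reads this separates the sequences by composition without any alignment, which is the core idea of the paper.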

  6. High-Throughput Tabular Data Processor - Platform independent graphical tool for processing large data sets.

    Science.gov (United States)

    Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz

    2018-01-01

High-throughput technologies generate a considerable amount of data that often requires bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI); therefore, no prior expertise in programming, regular expressions, or command line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks, including microarray and massively parallel sequencing, i.e. identification of disease-predisposing variants in next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks, including data merging, reduction, and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility in input file handling provides long-term functionality in high-throughput analysis pipelines, as the program is not limited by currently existing applications and data formats. HTDP is available as Open Source software (https://github.com/pmadanecki/htdp).

  7. The JCSG high-throughput structural biology pipeline

    International Nuclear Information System (INIS)

    Elsliger, Marc-André; Deacon, Ashley M.; Godzik, Adam; Lesley, Scott A.; Wooley, John; Wüthrich, Kurt; Wilson, Ian A.

    2010-01-01

The Joint Center for Structural Genomics (JCSG) high-throughput structural biology pipeline has delivered more than 1000 structures to the community over the past ten years. The JCSG has made a significant contribution to the overall goal of the NIH Protein Structure Initiative (PSI) of expanding structural coverage of the protein universe, as well as making substantial inroads into structural coverage of an entire organism. Targets are processed through an extensive combination of bioinformatics and biophysical analyses to efficiently characterize and optimize each target prior to selection for structure determination. The pipeline uses parallel processing methods at almost every step and can adapt to a wide range of protein targets, from bacterial to human. The construction, expansion and optimization of the JCSG gene-to-structure pipeline over the years have resulted in many technological and methodological advances. The vast number of targets and the enormous amounts of associated data processed through the multiple stages of the experimental pipeline required the development of a variety of valuable resources that, wherever feasible, have been converted to free-access web-based tools and applications.

  8. High-throughput selection for cellulase catalysts using chemical complementation.

    Science.gov (United States)

    Peralta-Yahya, Pamela; Carter, Brian T; Lin, Hening; Tao, Haiyan; Cornish, Virginia W

    2008-12-24

    Efficient enzymatic hydrolysis of lignocellulosic material remains one of the major bottlenecks to cost-effective conversion of biomass to ethanol. Improvement of glycosylhydrolases, however, is limited by existing medium-throughput screening technologies. Here, we report the first high-throughput selection for cellulase catalysts. This selection was developed by adapting chemical complementation to provide a growth assay for bond-cleavage reactions. First, a URA3 counter-selection was adapted to link chemical-dimerizer-activated gene transcription to cell death. Next, the URA3 counter-selection was shown to detect cellulase activity based on cleavage of a tetrasaccharide chemical-dimerizer substrate and the resulting decrease in expression of the toxic URA3 reporter. Finally, the utility of the cellulase selection was assessed by isolating cellulases with improved activity from a cellulase library created by family DNA shuffling. This application provides further evidence that chemical complementation can be readily adapted to detect different enzymatic activities for important chemical transformations for which no natural selection exists. Because selections can test far larger numbers of enzyme variants than existing medium-throughput screens for cellulases, this assay has the potential to accelerate the discovery of improved cellulases and other glycosylhydrolases for biomass conversion from libraries created by mutagenesis or obtained from natural biodiversity.

  9. Blood group genotyping: from patient to high-throughput donor screening.

    Science.gov (United States)

    Veldhuisen, B; van der Schoot, C E; de Haas, M

    2009-10-01

    Blood group antigens, present on the cell membrane of red blood cells and platelets, can either be defined serologically or predicted from the genotypes of the genes encoding them. At present, the molecular basis of many antigens of the 30 blood group systems and 17 human platelet antigens is known. In many laboratories, blood group genotyping assays are routinely used for diagnostics in cases where patient red cells cannot be used for serological typing due to the presence of auto-antibodies or after recent transfusions. In addition, DNA genotyping is used to support (un)expected serological findings. Fetal genotyping is routinely performed when there is a risk of alloimmune-mediated red cell or platelet destruction. When typing a patient's blood group antigens, it is important that a genotyping result is quickly available to support the selection of donor blood, and high throughput of the genotyping method is not a prerequisite. In addition, genotyping of blood donors will be extremely useful for obtaining donor blood with rare phenotypes, for example lacking a high-frequency antigen, and for building a fully typed donor database that allows better matching between recipient and donor to prevent adverse transfusion reactions. Serological typing of large cohorts of donors is a labour-intensive and expensive exercise, hampered by the lack of sufficient amounts of approved typing reagents for all blood group systems of interest. Currently, high-throughput genotyping based on DNA microarrays is a very feasible method to obtain a large pool of well-typed blood donors. Several systems for high-throughput blood group genotyping have been developed and are discussed in this review.
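
    The genotype-to-phenotype prediction at the heart of such genotyping arrays reduces, per locus, to a lookup from an allele pair to a predicted antigen profile. A minimal sketch for the Duffy (FY) system follows; the allele names and phenotype strings follow common usage, but treat the table as an illustrative subset, not a diagnostic resource:

```python
# Allele-pair -> predicted red-cell phenotype for the Duffy (FY)
# blood group system; an illustrative subset only.
DUFFY_TABLE = {
    ("FY*A", "FY*A"): "Fy(a+b-)",
    ("FY*A", "FY*B"): "Fy(a+b+)",
    ("FY*B", "FY*B"): "Fy(a-b+)",
}

def predict_phenotype(allele1, allele2):
    """Look up the predicted phenotype for an unordered pair of
    alleles; returns None for pairs not present in the table."""
    return (DUFFY_TABLE.get((allele1, allele2))
            or DUFFY_TABLE.get((allele2, allele1)))
```

    A real genotyping platform applies the same idea across dozens of loci, with additional rules for null and weak alleles.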

  10. High-throughput assessment of context-dependent effects of chromatin proteins

    NARCIS (Netherlands)

    Brueckner, L. (Laura); Van Arensbergen, J. (Joris); Akhtar, W. (Waseem); L. Pagie (Ludo); B. van Steensel (Bas)

    2016-01-01

    Background: Chromatin proteins control gene activity in a concerted manner. We developed a high-throughput assay to study the effects of the local chromatin environment on the regulatory activity of a protein of interest. The assay combines a previously reported multiplexing strategy

  11. tcpl: The ToxCast Pipeline for High-Throughput Screening Data

    Science.gov (United States)

    Motivation: The large and diverse high-throughput chemical screening efforts carried out by the US EPA ToxCast program require an efficient, transparent, and reproducible data pipeline. Summary: The tcpl R package and its associated MySQL database provide a generalized platform for ...

  12. High-Throughput Characterization of Porous Materials Using Graphics Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jihan; Martin, Richard L.; Rübel, Oliver; Haranczyk, Maciej; Smit, Berend

    2012-05-08

    We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is used to accelerate energy grid calculations, where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH$_{4}$ and CO$_{2}$) and the material's framework atoms. Using a parallel flood-fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than those considered in earlier studies. For structures selected by such a prescreening algorithm, full adsorption isotherms can be calculated by running multiple grand canonical Monte Carlo simulations concurrently on the GPU.
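
    The Widom-insertion step can be illustrated with a minimal serial sketch: given a precomputed guest-host energy grid restricted to the accessible space, the Henry coefficient follows from the Boltzmann-weighted average over random insertion sites. The function name, units, and the simplified form K_H = <exp(-E/RT)>/(RT) are assumptions for illustration, not the paper's GPU implementation:

```python
import math
import random

def henry_coefficient(energy_grid, temperature, n_insertions=100_000, seed=0):
    """Estimate a Henry coefficient from a precomputed guest-host energy
    grid (one energy per accessible grid point, in J/mol) via Widom
    test-particle insertions. Simplified: K_H = <exp(-E/RT)> / (R*T)."""
    R = 8.314  # gas constant, J/(mol*K)
    beta = 1.0 / (R * temperature)
    rng = random.Random(seed)
    boltz_sum = 0.0
    for _ in range(n_insertions):
        # a random grid point plays the role of a random insertion site
        e = rng.choice(energy_grid)
        boltz_sum += math.exp(-beta * e)
    return (boltz_sum / n_insertions) / (R * temperature)
```

    The GPU version in the paper parallelizes the grid evaluation itself; the averaging logic is the same.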

  13. Automated Sample Preparation Platform for Mass Spectrometry-Based Plasma Proteomics and Biomarker Discovery

    Directory of Open Access Journals (Sweden)

    Vilém Guryča

    2014-03-01

    The identification of novel biomarkers from human plasma remains a critical need in order to develop and monitor drug therapies for nearly all disease areas. The discovery of novel plasma biomarkers is, however, significantly hampered by the complexity and dynamic range of proteins within plasma, as well as the inherent variability in composition from patient to patient. In addition, it is widely accepted that most soluble plasma biomarkers for diseases such as cancer will be represented by tissue leakage products, circulating in plasma at low levels. It is therefore necessary to find approaches with the prerequisite level of sensitivity in such a complex biological matrix. Strategies for fractionating the plasma proteome have been suggested, but improvements in sensitivity are often negated by the resultant process variability. Here we describe an approach using multidimensional chromatography and on-line protein derivatization, which allows for higher sensitivity whilst minimizing the process variability. In order to evaluate this automated process fully, we demonstrate three levels of processing and compare sensitivity, throughput and reproducibility. We demonstrate that high-sensitivity analysis of the human plasma proteome is possible down to the low ng/mL or even high pg/mL level with a high degree of technical reproducibility.

  14. The Urine Proteome as a Biomarker of Radiation Injury

    Science.gov (United States)

    Sharma, Mukut; Halligan, Brian D.; Wakim, Bassam T.; Savin, Virginia J.; Cohen, Eric P.; Moulder, John E.

    2009-01-01

    Terrorist attacks or nuclear accidents could expose large numbers of people to ionizing radiation, and early biomarkers of radiation injury would be critical for triage, treatment and follow-up of such individuals. However, no such biomarkers have yet been proven to exist. We tested the potential of high-throughput proteomics to identify protein biomarkers of radiation injury after total body X-ray irradiation (TBI) in a rat model. Subtle functional changes in the kidney are suggested by an increased glomerular permeability for macromolecules measured within 24 hours after TBI. Ultrastructural changes in glomerular podocytes include partial loss of the interdigitating organization of foot processes. Analysis of urine by LC-MS/MS and 2D-GE showed significant changes in the urine proteome within 24 hours after TBI. Tissue kallikrein 1-related peptidase, the cysteine proteinase inhibitor cystatin C and oxidized histidine were found to be increased, while a number of proteinase inhibitors, including kallikrein-binding protein and albumin, were found to be decreased post-irradiation. Thus, TBI causes immediately detectable changes in renal structure and function and in the urinary protein profile. This suggests that both systemic and renal changes are induced by radiation and that it may be possible to identify a set of biomarkers unique to radiation injury. PMID:19746194

  15. Combinatorial chemoenzymatic synthesis and high-throughput screening of sialosides.

    Science.gov (United States)

    Chokhawala, Harshal A; Huang, Shengshu; Lau, Kam; Yu, Hai; Cheng, Jiansong; Thon, Vireak; Hurtado-Ziola, Nancy; Guerrero, Juan A; Varki, Ajit; Chen, Xi

    2008-09-19

    Although the vital roles of structures containing sialic acid in biomolecular recognition are well documented, limited information is available on how sialic acid structural modifications, sialyl linkages, and the underlying glycan structures affect the binding or the activity of sialic acid-recognizing proteins and related downstream biological processes. A novel combinatorial chemoenzymatic method has been developed for the highly efficient synthesis of biotinylated sialosides containing different sialic acid structures and different underlying glycans in 96-well plates from biotinylated sialyltransferase acceptors and sialic acid precursors. By transferring the reaction mixtures to NeutrAvidin-coated plates and assaying for the yields of enzymatic reactions using lectins recognizing sialyltransferase acceptors but not the sialylated products, the biotinylated sialoside products can be directly used, without purification, for high-throughput screening to quickly identify the ligand specificity of sialic acid-binding proteins. For a proof-of-principle experiment, 72 biotinylated alpha2,6-linked sialosides were synthesized in 96-well plates from 4 biotinylated sialyltransferase acceptors and 18 sialic acid precursors using a one-pot three-enzyme system. High-throughput screening assays performed in NeutrAvidin-coated microtiter plates show that whereas Sambucus nigra Lectin binds to alpha2,6-linked sialosides with high promiscuity, human Siglec-2 (CD22) is highly selective for a number of sialic acid structures and the underlying glycans in its sialoside ligands.

  16. 40 CFR Table 3 to Subpart Eeee of... - Operating Limits-High Throughput Transfer Racks

    Science.gov (United States)

    2010-07-01

    40 CFR Part 63, Subpart EEEE, Table 3: Operating Limits for High Throughput Transfer Racks (Protection of Environment, Title 40, revised as of 2010-07-01). As stated in § 63.2346(e), you must comply with the operating limits for existing...

  17. Milk bottom-up proteomics: method optimisation.

    Directory of Open Access Journals (Sweden)

    Delphine eVincent

    2016-01-01

    Milk is a complex fluid whose proteome displays a diverse set of proteins, from high-abundance caseins to medium- and low-abundance whey proteins such as β-lactoglobulin, lactoferrin, immunoglobulins, glycoproteins, peptide hormones and enzymes. A sample preparation method that enables high reproducibility and throughput is key to reliably identifying the proteins present, or those responding to conditions such as diet, health or genetics. Using skim milk samples from Jersey and Holstein-Friesian cows, we compared three extraction procedures which had not previously been applied to cows' milk. Method A (urea) involved a simple dilution of the milk in a urea-based buffer, method B (TCA/acetone) involved a trichloroacetic acid (TCA)/acetone precipitation, and method C (methanol/chloroform) involved a tri-phasic partition in a chloroform/methanol solution. Protein assays, SDS-PAGE profiling, and trypsin digestion followed by nanoHPLC-electrospray ionisation-tandem mass spectrometry (nLC-ESI-MS/MS) analyses were performed to assess their efficiency. Replicates were used at each analytical step (extraction, digestion, injection) to assess reproducibility. Mass spectrometry (MS) data are available via ProteomeXchange with identifier PXD002529. Overall, 186 unique accessions, covering major and minor proteins, were identified with a combination of the methods. Method C (methanol/chloroform) yielded the best-resolved SDS patterns and the highest protein recovery rates, method A (urea) yielded the greatest number of accessions, and, of the three procedures, method B (TCA/acetone) was the least compatible with a wide range of downstream analytical procedures. Our results also highlighted breed differences between the proteins in milk of Jersey and Holstein-Friesian cows.

  18. Management of High-Throughput DNA Sequencing Projects: Alpheus.

    Science.gov (United States)

    Miller, Neil A; Kingsmore, Stephen F; Farmer, Andrew; Langley, Raymond J; Mudge, Joann; Crow, John A; Gonzalez, Alvaro J; Schilkey, Faye D; Kim, Ryan J; van Velkinburgh, Jennifer; May, Gregory D; Black, C Forrest; Myers, M Kathy; Utsey, John P; Frost, Nicholas S; Sugarbaker, David J; Bueno, Raphael; Gullans, Stephen R; Baxter, Susan M; Day, Steve W; Retzel, Ernest F

    2008-12-26

    High-throughput DNA sequencing has enabled systems biology to begin to address areas in health, agricultural and basic biological research. Concomitant with the opportunities is an absolute necessity to manage significant volumes of high-dimensional and inter-related data and analysis. Alpheus is an analysis pipeline, database and visualization software for use with massively parallel DNA sequencing technologies that feature multi-gigabase throughput characterized by relatively short reads, such as Illumina-Solexa (sequencing-by-synthesis), Roche-454 (pyrosequencing) and Applied Biosystems' SOLiD (sequencing-by-ligation). Alpheus enables alignment to reference sequence(s), detection of variants and enumeration of sequence abundance, including expression levels in transcriptome sequence. Alpheus is able to detect several types of variants, including non-synonymous and synonymous single nucleotide polymorphisms (SNPs), insertions/deletions (indels), premature stop codons, and splice isoforms. Variant detection is aided by the ability to filter variant calls based on consistency, expected allele frequency, sequence quality, coverage, and variant type in order to minimize false positives while maximizing the identification of true positives. Alpheus also enables comparisons of genes with variants between cases and controls or bulk segregant pools. Sequence-based differential expression comparisons can be developed, with data export to SAS JMP Genomics for statistical analysis.
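
    The variant-call filtering described above amounts to a threshold pass over each call's quality metrics. A minimal sketch follows; the field names and default cutoffs are hypothetical, not Alpheus's actual schema:

```python
def filter_variants(variants, min_coverage=10, min_quality=20,
                    min_allele_fraction=0.2):
    """Keep variant calls that meet coverage, quality and
    allele-fraction thresholds, the kind of filtering used to
    minimize false positives. Each variant is a dict with
    illustrative keys: coverage, quality, alt_reads."""
    kept = []
    for v in variants:
        if (v["coverage"] >= min_coverage
                and v["quality"] >= min_quality
                and v["alt_reads"] / v["coverage"] >= min_allele_fraction):
            kept.append(v)
    return kept
```

    Real pipelines add per-variant-type rules (e.g. stricter cutoffs for indels), but the structure is the same.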

  19. High-throughput screening with micro-x-ray fluorescence

    International Nuclear Information System (INIS)

    Havrilla, George J.; Miller, Thomasin C.

    2005-01-01

    Micro-x-ray fluorescence (MXRF) is a useful characterization tool for high-throughput screening of combinatorial libraries. Due to the increasing threat of the use of chemical warfare (CW) agents, both in military actions and against civilians by terrorist extremists, there is a strong push to improve existing methods and to develop means for detecting a broad spectrum of CW agents in a minimal amount of time, to increase national security. This paper describes a combinatorial high-throughput screening technique for CW receptor discovery to aid in sensor development. MXRF can screen materials for elemental composition at the mesoscale level (tens to hundreds of micrometers). The key aspect of this work is the use of commercial MXRF instrumentation, coupled with the heteroatoms inherent in the target molecules of the combinatorial reaction, to provide rapid and specific identification of lead species. The method is demonstrated by screening an 11-mer oligopeptide library for selective binding of the degradation products of the nerve agent VX. The identified oligopeptides can be used as selective molecular receptors for sensor development. The MXRF screening method is nondestructive, requires minimal sample preparation and no special tags for analysis, and the screening time depends on the desired sensitivity.

  20. Label-free cell-cycle analysis by high-throughput quantitative phase time-stretch imaging flow cytometry

    Science.gov (United States)

    Mok, Aaron T. Y.; Lee, Kelvin C. M.; Wong, Kenneth K. Y.; Tsia, Kevin K.

    2018-02-01

    Biophysical properties of cells could complement and correlate with biochemical markers to characterize a multitude of cellular states. Changes in cell size, dry mass and subcellular morphology, for instance, are relevant to cell-cycle progression, which is prevalently evaluated by DNA-targeted fluorescence measurements. Quantitative-phase microscopy (QPM) is among the effective biophysical phenotyping tools that can quantify cell size and sub-cellular dry mass density distribution of single cells at high spatial resolution. However, limited camera frame rates, and thus imaging throughput, make QPM incompatible with high-throughput flow cytometry, a gold standard in multiparametric cell-based assays. Here we present a high-throughput approach for label-free analysis of the cell cycle based on quantitative-phase time-stretch imaging flow cytometry at a throughput of >10,000 cells/s. Our time-stretch QPM system retains sub-cellular resolution even at high speed, allowing us to extract a multitude (at least 24) of single-cell biophysical phenotypes from both amplitude and phase images. These phenotypes can be combined to track cell-cycle progression based on a t-distributed stochastic neighbor embedding (t-SNE) algorithm. Using multivariate analysis of variance (MANOVA) discriminant analysis, cell-cycle phases can also be predicted label-free with high accuracy: >90% in G1 and G2 phase, and >80% in S phase. We anticipate that high-throughput label-free cell cycle characterization could open new approaches for large-scale single-cell analysis, bringing new mechanistic insights into complex biological processes including disease pathogenesis.
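
    As a simplified stand-in for the MANOVA discriminant analysis mentioned above, a nearest-centroid classifier shows the basic idea of predicting cell-cycle phase from a vector of biophysical phenotypes. All names and data here are illustrative, not the authors' pipeline:

```python
import math
from collections import defaultdict

def train_centroids(samples):
    """samples: iterable of (phase_label, feature_vector) pairs,
    e.g. features like cell area and dry mass density.
    Returns the per-phase mean (centroid) of the feature vectors."""
    sums = {}
    counts = defaultdict(int)
    for label, vec in samples:
        if label not in sums:
            sums[label] = [0.0] * len(vec)
        sums[label] = [s + x for s, x in zip(sums[label], vec)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def predict_phase(centroids, vec):
    """Assign the phase whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda lab: dist(centroids[lab], vec))
```

    A discriminant analysis additionally accounts for within-class covariance, but the classify-by-phenotype-vector structure is the same.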

  1. Finding Biomass Degrading Enzymes Through an Activity-Correlated Quantitative Proteomics Platform (ACPP)

    Science.gov (United States)

    Ma, Hongyan; Delafield, Daniel G.; Wang, Zhe; You, Jianlan; Wu, Si

    2017-04-01

    The microbial secretome, a pool of biomass-degrading (i.e., plant-material-degrading) enzymes, can be mined to discover industrial enzyme candidates for biofuel production. Proteomics approaches have been applied to discover novel enzyme candidates by comparing protein expression profiles with the enzyme activity of the whole secretome under different growth conditions. However, confident assignment of "active" enzymes requires an activity measurement for each individual candidate, which such whole-secretome comparisons cannot provide. To address this challenge, we developed an Activity-Correlated Quantitative Proteomics Platform (ACPP) that systematically correlates protein-level enzymatic activity patterns with protein elution profiles using a label-free quantitative proteomics approach. The ACPP uses an optimized high-performance anion-exchange separation to efficiently fractionate complex protein samples while preserving enzymatic activities. The enzymatic activity patterns detected in sequential fractions using microplate-based assays are cross-correlated with protein elution profiles using a customized pattern-matching algorithm that yields a correlation R-score. The ACPP was successfully applied to the identification of two types of "active" biomass-degrading enzymes (starch-hydrolyzing and cellulose-hydrolyzing enzymes) from the Aspergillus niger secretome in a multiplexed fashion. By determining the elution profiles of 156 proteins in the A. niger secretome, we confidently identified 1,4-α-glucosidase as the major "active" starch hydrolysis enzyme (R = 0.96) and endoglucanase as the major "active" cellulose hydrolysis enzyme (R = 0.97). These results demonstrate that the ACPP facilitates the discovery of bioactive enzymes from complex protein samples in a high-throughput, multiplexed, and untargeted fashion.
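
    The pattern-matching R-score is, in essence, a correlation between a fraction-wise activity pattern and a protein's elution profile. A minimal sketch using the Pearson correlation (the actual ACPP algorithm may differ in detail):

```python
import math

def r_score(activity_pattern, elution_profile):
    """Pearson correlation between a per-fraction enzymatic activity
    pattern and a protein's per-fraction abundance (elution) profile.
    A value near 1 flags the protein as a candidate "active" enzyme."""
    n = len(activity_pattern)
    mean_a = sum(activity_pattern) / n
    mean_p = sum(elution_profile) / n
    cov = sum((a - mean_a) * (p - mean_p)
              for a, p in zip(activity_pattern, elution_profile))
    sd_a = math.sqrt(sum((a - mean_a) ** 2 for a in activity_pattern))
    sd_p = math.sqrt(sum((p - mean_p) ** 2 for p in elution_profile))
    return cov / (sd_a * sd_p)
```

    Ranking all 156 detected proteins by this score against the starch-assay pattern is what singles out a candidate like 1,4-α-glucosidase.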

  2. High-Throughput Next-Generation Sequencing of Polioviruses

    Science.gov (United States)

    Montmayeur, Anna M.; Schmidt, Alexander; Zhao, Kun; Magaña, Laura; Iber, Jane; Castro, Christina J.; Chen, Qi; Henderson, Elizabeth; Ramos, Edward; Shaw, Jing; Tatusov, Roman L.; Dybdahl-Sissoko, Naomi; Endegue-Zanga, Marie Claire; Adeniji, Johnson A.; Oberste, M. Steven; Burns, Cara C.

    2016-01-01

    The poliovirus (PV) is currently targeted for worldwide eradication and containment. Sanger-based sequencing of the viral protein 1 (VP1) capsid region is currently the standard method for PV surveillance. However, the whole-genome sequence is sometimes needed for higher-resolution global surveillance. In this study, we optimized whole-genome sequencing protocols for poliovirus isolates and FTA cards using next-generation sequencing (NGS), aiming for high sequence coverage, efficiency, and throughput. We found that DNase treatment of poliovirus RNA followed by random reverse transcription (RT), amplification, and use of the Nextera XT DNA library preparation kit produced significantly better results than other preparations. The average proportion of viral reads per total reads, a measure of efficiency, was as high as 84.2% ± 15.6%. PV genomes covering >99 to 100% of the reference length were obtained and validated with Sanger sequencing. A total of 52 PV genomes were generated, multiplexing as many as 64 samples in a single Illumina MiSeq run. This high-throughput, sequence-independent NGS approach facilitated the detection of a diverse range of PVs, especially vaccine-derived polioviruses (VDPVs), circulating VDPVs, and immunodeficiency-related VDPVs. In contrast to results from previous studies on other viruses, our results showed that filtration and nuclease treatment did not discernibly increase the sequencing efficiency of PV isolates. However, DNase treatment after nucleic acid extraction to remove host DNA significantly improved the sequencing results. This NGS method has been successfully implemented to generate PV genomes for molecular epidemiology of the most recent PV isolates. Additionally, the ability to obtain full PV genomes from FTA cards will aid global poliovirus surveillance. PMID:27927929
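
    The two quality metrics reported above, viral reads as a fraction of total reads and percent of the reference length covered, can be computed from an alignment summary. The input representation here is an assumption for illustration:

```python
def run_metrics(read_labels, covered_positions, genome_length):
    """read_labels: per-read classification ("viral" or "host");
    covered_positions: set of reference positions with >= 1 aligned read;
    genome_length: length of the reference genome in bases.
    Returns (viral read fraction, percent of reference covered)."""
    viral_fraction = sum(1 for lab in read_labels if lab == "viral") / len(read_labels)
    coverage_pct = 100.0 * len(covered_positions) / genome_length
    return viral_fraction, coverage_pct
```

    In practice both numbers would be derived from a BAM file rather than precomputed lists, but these are the quantities behind "84.2% viral reads" and ">99% of the reference length".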

  3. Tackling probiotic and gut microbiota functionality through proteomics.

    Science.gov (United States)

    Ruiz, Lorena; Hidalgo, Claudio; Blanco-Míguez, Aitor; Lourenço, Anália; Sánchez, Borja; Margolles, Abelardo

    2016-09-16

    Probiotics are live microorganisms which, when administered in adequate amounts, confer a health benefit on the host. Many strains exert their beneficial effects after transiently colonizing the human gut, where they interact with the rest of the intestinal microorganisms and with the host mucosa. Indeed, the human gut harbours a huge number of microorganisms, collectively known as the gut microbiota. Imbalances in the relative abundances of the individual components of the gut microbiota may determine the health status of the host, and alterations in specific groups have been related to different diseases and metabolic disorders. Proteomics provides a set of high-throughput methodologies for protein identification that are extremely useful for studying probiotic functionality and helping in the assessment of specific health-promoting activities, such as immunomodulatory activity, intestinal colonization processes, and crosstalk mechanisms with the host. Furthermore, proteomics has been used to identify markers of technological performance and stress adaptation, which helps to predict traits such as behaviour in food matrices and the ability to survive passage through the gastrointestinal tract. The aim of this review is to compile studies in which proteomics has been used to assess probiotic functionality and to identify the molecular players supporting probiotic mechanisms of action. The molecular basis underlying the functional properties of probiotic bacteria responsible for their health-promoting effects remained in the background for many years. The breakthrough of omics technologies in the probiotic and microbiota fields has had a very relevant impact on the elucidation of probiotic mechanisms and on the procedures used to select these microorganisms based on solid scientific evidence. It is unquestionable that, in the near future, the evolution of proteomic techniques

  4. Assessment of network perturbation amplitudes by applying high-throughput data to causal biological networks

    Directory of Open Access Journals (Sweden)

    Martin Florian

    2012-05-01

    Background: High-throughput measurement technologies produce data sets that have the potential to elucidate the biological impact of disease, drug treatment, and environmental agents on humans. The scientific community faces an ongoing challenge in analysing these rich data sources to more accurately characterize biological processes that have been perturbed at the mechanistic level. Here, a new approach builds on previous methodologies in which high-throughput data are interpreted using prior biological knowledge of cause-and-effect relationships. These relationships are structured into network models that describe specific biological processes, such as inflammatory signaling or cell cycle progression. This enables quantitative assessment of network perturbation in response to a given stimulus. Results: Four complementary methods were devised to quantify treatment-induced activity changes in processes described by network models. In addition, companion statistics were developed to qualify the significance and specificity of the results. This approach is called Network Perturbation Amplitude (NPA) scoring because the amplitudes of treatment-induced perturbations are computed for biological network models. The NPA methods were tested on two transcriptomic data sets: normal human bronchial epithelial (NHBE) cells treated with the pro-inflammatory signaling mediator TNFα, and HCT116 colon cancer cells treated with the CDK cell cycle inhibitor R547. Each data set was scored against network models representing different aspects of inflammatory signaling and cell cycle progression, and these scores were compared with independent measures of pathway activity in NHBE cells to verify the approach. The NPA scoring method successfully quantified the amplitude of TNFα-induced perturbation for each network model when compared against NF-κB nuclear localization and cell number. In addition, the degree and specificity to which CDK
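
    As a toy illustration of the amplitude idea (not the published NPA algorithm), a network model can be reduced to a set of (gene, expected-direction) pairs, and a signed mean of the measured log2 fold-changes gives a direction-aware perturbation score:

```python
def npa_like_score(log2_fc, model):
    """Toy direction-aware perturbation score. `model` is a list of
    (gene, expected_sign) pairs, where expected_sign is +1 if the
    model predicts the gene goes up under perturbation and -1 if
    down; `log2_fc` maps genes to measured log2 fold-changes.
    The score is the mean sign-adjusted fold-change over the genes
    the model covers: large and positive when the data move
    coherently in the predicted directions."""
    vals = [sign * log2_fc.get(gene, 0.0) for gene, sign in model]
    return sum(vals) / len(vals)
```

    The published methods add backbone-node inference and companion statistics for significance and specificity; this sketch only conveys why a signed network model turns expression changes into a single amplitude.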

  5. elegantRingAnalysis An Interface for High-Throughput Analysis of Storage Ring Lattices Using elegant

    CERN Document Server

    Borland, Michael

    2005-01-01

    The code {\\tt elegant} is widely used for simulation of linacs for drivers for free-electron lasers. Less well known is that elegant is also a very capable code for simulation of storage rings. In this paper, we show a newly-developed graphical user interface that allows the user to easily take advantage of these capabilities. The interface is designed for use on a Linux cluster, providing very high throughput. It can also be used on a single computer. Among the features it gives access to are basic calculations (Twiss parameters, radiation integrals), phase-space tracking, nonlinear dispersion, dynamic aperture (on- and off-momentum), frequency map analysis, and collective effects (IBS, bunch-lengthening). Using a cluster, it is easy to get highly detailed dynamic aperture and frequency map results in a surprisingly short time.

  6. Virtual high-throughput screening and design of 14α-lanosterol ...

    African Journals Online (AJOL)

    STORAGESEVER

    2009-07-06

    Jul 6, 2009 ... Virtual high-throughput screening and design of 14α-lanosterol demethylase inhibitors against Mycobacterium tuberculosis. Hildebert B. Maurice, Esther Tuarira and Kennedy Mwambete. School of Pharmaceutical Sciences, Institute of Allied Health Sciences, Muhimbili University of Health and ...

  7. High-resolution proteome maps of Bacillus licheniformis cells growing in minimal medium.

    Science.gov (United States)

    Voigt, Birgit; Albrecht, Dirk; Sievers, Susanne; Becher, Dörte; Bongaerts, Johannes; Evers, Stefan; Schweder, Thomas; Maurer, Karl-Heinz; Hecker, Michael

    2015-08-01

    Bacillus licheniformis is an important host for the industrial production of enzymes, mainly because of its ability to secrete large amounts of protein. We analyzed the proteome of B. licheniformis cells growing in a minimal medium. Besides the cytosolic proteome, the membrane and extracellular proteomes were studied. We identified 1470 proteins: 1168 were classified as cytosolic proteins, 195 proteins with membrane-spanning domains were classified as membrane proteins, and 107 proteins, with either putative signal peptides or flagellin-like sequences, were classified as secreted proteins. The identified proteins were grouped into functional categories and used to reconstruct cellular functions and metabolic pathways of growing B. licheniformis cells. The largest group comprised proteins with functions in basic metabolic pathways such as carbon metabolism, amino acid and nucleotide synthesis, and synthesis of fatty acids and cofactors. Many of the detected proteins were involved in DNA replication, transcription, and translation. Furthermore, a large number of proteins employed in the transport of a wide variety of compounds were found to be expressed in the cells. All MS data have been deposited in ProteomeXchange with identifier PXD000791 (http://proteomecentral.proteomexchange.org/dataset/PXD000791). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. PUFKEY: A High-Security and High-Throughput Hardware True Random Number Generator for Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dongfang Li

    2015-10-01

    Random number generators (RNGs) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise in the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input to a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard National Institute of Standards and Technology (NIST) randomness tests and is resilient to a wide range of security attacks.

  9. PUFKEY: a high-security and high-throughput hardware true random number generator for sensor networks.

    Science.gov (United States)

    Li, Dongfang; Lu, Zhaojun; Zou, Xuecheng; Liu, Zhenglin

    2015-10-16

    Random number generators (RNGs) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise in the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input to a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard National Institute of Standards and Technology (NIST) randomness tests and is resilient to a wide range of security attacks.
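
    The seed-extraction step can be sketched generically: condense the raw, noisy SRAM start-up pattern into a fixed-size seed with a cryptographic hash. SHA-256 here is a stand-in conditioning function, an assumption for illustration; the abstract does not specify PUFKEY's actual conditioning algorithm:

```python
import hashlib

def condition_sram_pattern(startup_bits):
    """Condense a raw SRAM start-up bit pattern (a sequence of 0/1
    values read after power-up) into a 256-bit seed. Hashing is a
    common conditioning step that concentrates the entropy of a
    long, partially noisy pattern into a short full-entropy seed."""
    bitstring = "".join("1" if b else "0" for b in startup_bits)
    return hashlib.sha256(bitstring.encode("ascii")).digest()
```

    The resulting 32-byte seed would then feed the downstream hardware RNG; conditioning is deterministic, so the same start-up pattern always yields the same seed, while any flipped bit yields a completely different one.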

  10. Volunteer Computing for Science Gateways

    OpenAIRE

    Anderson, David

    2017-01-01

    This poster offers information about volunteer computing for science gateways that offer high-throughput computing services. Volunteer computing can be used to obtain computing power at little cost, while also increasing the visibility of the gateway to the general public.

  11. A DIGE proteomic analysis for high-intensity exercise-trained rat skeletal muscle.

    Science.gov (United States)

    Yamaguchi, Wataru; Fujimoto, Eri; Higuchi, Mitsuru; Tabata, Izumi

    2010-09-01

    Exercise training induces various adaptations in skeletal muscles. However, the mechanisms remain unclear. In this study, we conducted 2D-DIGE proteomic analysis, which has not yet been used for elucidating adaptations of skeletal muscle after high-intensity exercise training (HIT). For 5 days, rats performed HIT, which consisted of 14 swimming bouts of 20 s each while carrying a weight (14% of body weight), with 10-s pauses between bouts. The 2D-DIGE analysis was conducted on epitrochlearis muscles excised 18 h after the final training exercise. Proteomic profiling revealed that, out of 800 detected and matched spots, 13 proteins exhibited changed expression after HIT compared with sedentary rats. All proteins were identified by MALDI-TOF/MS. Furthermore, using western immunoblot analyses, the significantly changed expression of NDUFS1 and parvalbumin (PV) was validated in relation to HIT. In conclusion, 2D-DIGE proteomic analysis following HIT identified changed expression of NDUFS1 and PV, proteins not previously known to have functions related to exercise-training adaptations.

  12. Inference of RhoGAP/GTPase regulation using single-cell morphological data from a combinatorial RNAi screen

    OpenAIRE

    Nir, Oaz; Bakal, Chris; Perrimon, Norbert; Berger, Bonnie

    2010-01-01

    Biological networks are highly complex systems, consisting largely of enzymes that act as molecular switches to activate/inhibit downstream targets via post-translational modification. Computational techniques have been developed to perform signaling network inference using some high-throughput data sources, such as those generated from transcriptional and proteomic studies, but comparable methods have not been developed to use high-content morphological data, which are emerging principally f...

  13. Chromatographic Monoliths for High-Throughput Immunoaffinity Isolation of Transferrin from Human Plasma

    Directory of Open Access Journals (Sweden)

    Irena Trbojević-Akmačić

    2016-06-01

    Full Text Available Changes in protein glycosylation are related to different diseases and have potential as diagnostic and prognostic disease biomarkers. Transferrin (Tf) glycosylation changes are a common marker for congenital disorders of glycosylation. However, the biological interindividual variability of Tf N-glycosylation and the genes involved in glycosylation regulation are not known. Therefore, a high-throughput Tf isolation method and large-scale glycosylation studies are needed in order to address these questions. Due to their unique chromatographic properties, the use of chromatographic monoliths enables a very fast analysis cycle, thus significantly increasing sample preparation throughput. Here, we describe the characterization of novel immunoaffinity-based monolithic columns in a 96-well plate format for specific high-throughput purification of human Tf from blood plasma. We optimized the isolation and glycan preparation procedure for subsequent ultra-performance liquid chromatography (UPLC) analysis of Tf N-glycosylation and managed to increase the sensitivity approximately three-fold compared to initial experimental conditions, with very good reproducibility. This work is licensed under a Creative Commons Attribution 4.0 International License.

  14. Differential alkylation-based redox proteomics--Lessons learnt.

    Science.gov (United States)

    Wojdyla, Katarzyna; Rogowska-Wrzesinska, Adelina

    2015-12-01

    Cysteine is one of the most reactive amino acids, owing to the electronegativity of the sulphur atom in its side-chain thiol group. As a result, cysteine is present in several distinct redox forms inside the cell. Amongst these, the reversible oxidations S-nitrosylation and S-sulfenylation are crucial mediators of intracellular redox signalling, with known associations to health and disease. Study of their functionalities has intensified thanks to the development of various analytical strategies, with a particular contribution from differential alkylation-based proteomics methods. Presented here is a critical evaluation of differential alkylation-based strategies for the analysis of S-nitrosylation and S-sulfenylation. The aim is to assess the current status and to provide insights for future directions in the dynamically evolving field of redox proteomics. To achieve that, we collected 35 original research articles published since 2010 and analysed them considering the following parameters: (i) resolution of the modification site, (ii) quantitative information, including correction of modification levels by protein abundance changes and determination of modification site occupancy, and (iii) throughput, including the amount of starting material required for analysis. The results of this meta-analysis are the core of this review, complemented by issues related to biological models and sample preparation in redox proteomics, including conditions for free thiol blocking and labelling of target cysteine oxoforms. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Combined comparative and chemical proteomics on the mechanisms of levo-tetrahydropalmatine-induced antinociception in the formalin test.

    Science.gov (United States)

    Wang, Chen; Zhou, Jiangrui; Wang, Shuowen; Ye, Mingliang; Jiang, Chunlei; Fan, Guorong; Zou, Hanfa

    2010-06-04

    This study investigated the mechanisms involved in the antinociceptive action induced by levo-tetrahydropalmatine (l-THP) in the formalin test by combined comparative and chemical proteomics. Rats were pretreated with l-THP by the oral route (40 mg/kg) 1 h before formalin injection. The antinociceptive effect of l-THP was shown in the first and second phases of the formalin test. To address the mechanisms by which l-THP inhibits formalin-induced nociception in rats, combined comparative and chemical proteomics were applied. A novel high-throughput comparative proteomic approach based on 2D-nano-LC-MS/MS was applied to simultaneously evaluate the deregulated proteins involved in the response to l-THP treatment in formalin-induced pain rats. Thousands of proteins were identified, among which 17 proteins survived the stringent filter criteria and were further included for functional discussion. Two proteins (Neurabin-1 and Calcium-dependent secretion activator 1) were randomly selected, and their expression levels were further confirmed by western blots. The results matched well with those of proteomics. In the present study, we also described the development and application of l-THP-immobilized beads to bind the targets. Following incubation with cellular lysates, the proteome interacting with the immobilized l-THP was identified. The results of comparative and chemical proteomics were quite complementary. Although the precise roles of these identified molecules in l-THP-induced antinociception need further study, the combined results indicated that proteins associated with signal transduction, vesicular trafficking and neurotransmitter release, energy metabolism, and ion transport play important roles in l-THP-induced antinociception in the formalin test.

  16. High-throughput verification of transcriptional starting sites by Deep-RACE

    DEFF Research Database (Denmark)

    Olivarius, Signe; Plessy, Charles; Carninci, Piero

    2009-01-01

    We present a high-throughput method for investigating the transcriptional starting sites of genes of interest, which we named Deep-RACE (Deep–rapid amplification of cDNA ends). Taking advantage of the latest sequencing technology, it allows the parallel analysis of multiple genes and is free...

  17. High-throughput microfluidics automated cytogenetic processing for effectively lowering biological process time and aid triage during radiation accidents

    International Nuclear Information System (INIS)

    Ramakumar, Adarsh

    2016-01-01

    Nuclear or radiation mass casualties require individual, rapid, and accurate dose-based triage of exposed subjects for cytokine therapy and supportive care, to save lives. Radiation mass casualties will demand high-throughput individual diagnostic dose assessment for the medical management of exposed subjects. Cytogenetic techniques are widely used for triage and definitive radiation biodosimetry. A prototype platform has been developed to demonstrate high-throughput microfluidic micro-incubation, supporting the transport of samples in miniaturized incubators from the accident site to analytical labs. Efforts have been made, both at the level of developing concepts and of advanced systems, to achieve higher sample-processing throughput and to implement better, more efficient logistics enabling lab-on-chip analyses. An automated high-throughput platform with automated feature extraction, storage, cross-platform data linkage, cross-platform validation, and multi-parametric biomarker approaches will provide the first generation of high-throughput platform systems for effective medical management, particularly during radiation mass-casualty events

  18. A High-Throughput Biological Calorimetry Core: Steps to Startup, Run, and Maintain a Multiuser Facility.

    Science.gov (United States)

    Yennawar, Neela H; Fecko, Julia A; Showalter, Scott A; Bevilacqua, Philip C

    2016-01-01

    Many labs have conventional calorimeters where denaturation and binding experiments are set up and run one at a time. While these systems are highly informative for biopolymer folding and ligand interaction, they require considerable manual intervention for cleaning and setup. As such, the throughput of such setups is typically limited to a few runs a day. With a large number of experimental parameters to explore, including different buffers, macromolecule concentrations, temperatures, ligands, mutants, controls, replicates, and instrument tests, the need for high-throughput automated calorimeters is on the rise. Lower sample volume requirements and reduced user intervention time compared to the manual instruments have improved the turnover of calorimetry experiments in a high-throughput format, where 25 or more runs can be conducted per day. The cost and effort to maintain high-throughput equipment typically demand that these instruments be housed in a multiuser core facility. We describe here the steps taken to successfully start and run an automated biological calorimetry facility at Pennsylvania State University. Scientists from various departments at Penn State, including Chemistry, Biochemistry and Molecular Biology, Bioengineering, Biology, Food Science, and Chemical Engineering, are benefiting from this core facility. Samples studied include proteins, nucleic acids, sugars, lipids, synthetic polymers, small molecules, natural products, and virus capsids. This facility has led to higher data throughput, which has been leveraged into grant support, has helped attract new faculty hires, and has led to some exciting publications. © 2016 Elsevier Inc. All rights reserved.

  19. Crystal Symmetry Algorithms in a High-Throughput Framework for Materials

    Science.gov (United States)

    Taylor, Richard

    The high-throughput framework AFLOW, which has been developed and used successfully over the last decade, is improved to include fully integrated software for crystallographic symmetry characterization. The standards used in the symmetry algorithms conform to the conventions and prescriptions given in the International Tables for Crystallography (ITC). A standard cell choice with standard origin is selected, and the space group, point group, Bravais lattice, crystal system, lattice system, and representative symmetry operations are determined. Following the conventions of the ITC, the Wyckoff sites are also determined, and their labels and site symmetry are provided. The symmetry code makes no assumptions about the input cell orientation, origin, or reduction, and has been integrated into the AFLOW high-throughput framework for materials discovery by adding to the existing code base and making use of existing classes and functions. The software is written in object-oriented C++ for flexibility and reuse. A performance analysis and an examination of the algorithms' scaling with cell size and symmetry are also reported.
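
    As a minimal illustration of the kind of check such symmetry code performs (a sketch, not the AFLOW implementation): a Cartesian rotation R is a point symmetry of a lattice whose row vectors form the matrix L exactly when L R^T L^{-1} is an integer matrix, i.e. R maps every lattice vector onto an integer combination of lattice vectors.

    ```python
    import numpy as np

    def is_lattice_symmetry(L, R, tol=1e-8):
        # L: 3x3 matrix whose rows are the lattice vectors; R: candidate rotation.
        # R maps the lattice onto itself iff M = L R^T L^{-1} is integral.
        M = L @ R.T @ np.linalg.inv(L)
        return bool(np.allclose(M, np.round(M), atol=tol))

    cubic = np.eye(3)
    rot90_z = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    c, s = np.cos(2 * np.pi / 5), np.sin(2 * np.pi / 5)
    rot72_z = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

    print(is_lattice_symmetry(cubic, rot90_z))   # four-fold axis: True
    print(is_lattice_symmetry(cubic, rot72_z))   # five-fold axis: False
    ```

    The five-fold rotation correctly fails the test, consistent with the crystallographic restriction theorem; a production code additionally handles origin shifts, cell reduction, and numerical tolerances, as the abstract notes.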

  20. Development of Control Applications for High-Throughput Protein Crystallography Experiments

    International Nuclear Information System (INIS)

    Gaponov, Yurii A.; Matsugaki, Naohiro; Honda, Nobuo; Sasajima, Kumiko; Igarashi, Noriyuki; Hiraki, Masahiko; Yamada, Yusuke; Wakatsuki, Soichi

    2007-01-01

    An integrated client-server control system (PCCS) with a unified relational database (PCDB) has been developed for high-throughput protein crystallography experiments on synchrotron beamlines. The major steps in protein crystallographic experiments (purification, crystallization, crystal harvesting, data collection, and data processing) are integrated into the software. All information necessary for performing protein crystallography experiments is stored in the PCDB database (except raw X-ray diffraction data, which is stored in the Network File Server). To allow all members of a protein crystallography group to participate in experiments, the system was developed as a multi-user system with secure network access based on TCP/IP secure UNIX sockets. Secure remote access to the system is possible from any operating system with X-terminal and SSH/X11 (Secure Shell with graphical user interface) support. Currently, the system covers the high-throughput X-ray data collection stages and is being commissioned at BL5A and NW12A (PF, PF-AR, KEK, Tsukuba, Japan)

  1. Fluorescence-based high-throughput screening of dicer cleavage activity

    Czech Academy of Sciences Publication Activity Database

    Podolská, Kateřina; Sedlák, David; Bartůněk, Petr; Svoboda, Petr

    2014-01-01

    Roč. 19, č. 3 (2014), s. 417-426 ISSN 1087-0571 R&D Projects: GA ČR GA13-29531S; GA MŠk(CZ) LC06077; GA MŠk LM2011022 Grant - others:EMBO(DE) 1483 Institutional support: RVO:68378050 Keywords : Dicer * siRNA * high-throughput screening Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 2.423, year: 2014

  2. High performance computing environment for multidimensional image analysis.

    Science.gov (United States)

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256-megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478x speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large-scale experiments with massive datasets.
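
    The decomposition strategy described above can be shown in miniature (a toy NumPy sketch under assumed names, not the Blue Gene code): split the volume along one axis into segments that overlap by the filter radius (the nearest-neighbor "halo"), filter each segment independently, and the concatenated result matches the single-processor computation exactly.

    ```python
    import numpy as np

    def median3d(vol, r=1):
        # Naive 3D median filter with a (2r+1)^3 window; borders are cropped.
        out = np.empty((vol.shape[0] - 2*r, vol.shape[1] - 2*r, vol.shape[2] - 2*r))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                for k in range(out.shape[2]):
                    out[i, j, k] = np.median(vol[i:i + 2*r + 1,
                                                 j:j + 2*r + 1,
                                                 k:k + 2*r + 1])
        return out

    r = 1
    vol = np.random.default_rng(0).random((8, 8, 8))

    # Single-"node" reference result.
    full = median3d(vol, r)

    # Two "nodes": each segment carries an r-voxel halo of neighbor data.
    mid = vol.shape[0] // 2
    parts = [median3d(vol[0:mid + r], r), median3d(vol[mid - r:], r)]
    tiled = np.concatenate(parts, axis=0)

    assert np.array_equal(full, tiled)  # halo exchange reproduces the global filter
    ```

    Because each output voxel depends only on its (2r+1)^3 neighborhood, an r-voxel overlap is all the inter-segment communication the parallel version needs, which is why nearest-neighbor-only messaging maps so well onto the torus network.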

  3. A high-throughput investigation of Fe-Cr-Al as a novel high-temperature coating for nuclear cladding materials.

    Science.gov (United States)

    Bunn, Jonathan Kenneth; Fang, Randy L; Albing, Mark R; Mehta, Apurva; Kramer, Matthew J; Besser, Matthew F; Hattrick-Simpers, Jason R

    2015-07-10

    High-temperature alloy coatings that can resist oxidation are urgently needed as nuclear cladding materials to mitigate the danger of hydrogen explosions during meltdown. Here we apply a combination of computationally guided materials synthesis, high-throughput structural characterization and data analysis tools to investigate the feasibility of coatings from the Fe–Cr–Al alloy system. Composition-spread samples were synthesized to cover the region of the phase diagram previous bulk studies have identified as forming protective oxides. The metallurgical and oxide phase evolution were studied via in situ synchrotron glancing incidence x-ray diffraction at temperatures up to 690 K. A composition region with an Al concentration greater than 3.08 at%, and between 20.0 at% and 32.9 at% Cr showed the least overall oxide growth. Subsequently, a series of samples were deposited on stubs and their oxidation behavior at 1373 K was observed. The continued presence of a passivating oxide was confirmed in this region over a period of 6 h.

  4. An Unsupervised kNN Method to Systematically Detect Changes in Protein Localization in High-Throughput Microscopy Images.

    Directory of Open Access Journals (Sweden)

    Alex Xijie Lu

    Full Text Available Despite the importance of characterizing genes that exhibit subcellular localization changes between conditions in proteome-wide imaging experiments, many recent studies still rely upon manual evaluation to assess the results of high-throughput imaging experiments. We describe and demonstrate an unsupervised k-nearest neighbours method for the detection of localization changes. Compared to previous classification-based supervised change detection methods, our method is much simpler and faster, and operates directly on the feature space to overcome limitations in needing to manually curate training sets that may not generalize well between screens. In addition, the output of our method is flexible in its utility, generating both a quantitatively ranked list of localization changes that permit user-defined cut-offs, and a vector for each gene describing feature-wise direction and magnitude of localization changes. We demonstrate that our method is effective at the detection of localization changes using the Δrpd3 perturbation in Saccharomyces cerevisiae, where we capture 71.4% of previously known changes within the top 10% of ranked genes, and find at least four new localization changes within the top 1% of ranked genes. The results of our analysis indicate that simple unsupervised methods may be able to identify localization changes in images without laborious manual image labelling steps.

  5. An Unsupervised kNN Method to Systematically Detect Changes in Protein Localization in High-Throughput Microscopy Images.

    Science.gov (United States)

    Lu, Alex Xijie; Moses, Alan M

    2016-01-01

    Despite the importance of characterizing genes that exhibit subcellular localization changes between conditions in proteome-wide imaging experiments, many recent studies still rely upon manual evaluation to assess the results of high-throughput imaging experiments. We describe and demonstrate an unsupervised k-nearest neighbours method for the detection of localization changes. Compared to previous classification-based supervised change detection methods, our method is much simpler and faster, and operates directly on the feature space to overcome limitations in needing to manually curate training sets that may not generalize well between screens. In addition, the output of our method is flexible in its utility, generating both a quantitatively ranked list of localization changes that permit user-defined cut-offs, and a vector for each gene describing feature-wise direction and magnitude of localization changes. We demonstrate that our method is effective at the detection of localization changes using the Δrpd3 perturbation in Saccharomyces cerevisiae, where we capture 71.4% of previously known changes within the top 10% of ranked genes, and find at least four new localization changes within the top 1% of ranked genes. The results of our analysis indicate that simple unsupervised methods may be able to identify localization changes in images without laborious manual image labelling steps.
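
    A simplified feature-space change detector in the same spirit (a sketch, not the authors' exact kNN scoring; gene names and data are illustrative) needs only each gene's mean image-feature profile per condition: the per-feature difference vector gives direction and magnitude, and its norm gives a rankable change score with user-defined cut-offs.

    ```python
    import numpy as np

    def rank_changes(feats_wt, feats_mut, genes):
        # feats_*: (n_genes, n_features) mean image-feature profiles per condition.
        delta = feats_mut - feats_wt             # feature-wise direction/magnitude
        score = np.linalg.norm(delta, axis=1)    # overall change per gene
        order = np.argsort(score)[::-1]          # strongest change first
        return [(genes[i], float(score[i]), delta[i]) for i in order]

    genes = ["YFG1", "YFG2", "YFG3"]
    wt  = np.array([[1.0, 0.0], [0.2, 0.8], [0.5, 0.5]])
    mut = np.array([[1.0, 0.1], [0.9, 0.1], [0.5, 0.5]])  # YFG2 relocalizes

    ranked = rank_changes(wt, mut, genes)
    print(ranked[0][0])  # YFG2 tops the ranked list
    ```

    Operating directly on the feature space like this avoids curating labelled training sets, which is the key advantage the abstract claims over supervised classification-based detectors.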

  6. High throughput electrophysiology: new perspectives for ion channel drug discovery

    DEFF Research Database (Denmark)

    Willumsen, Niels J; Bech, Morten; Olesen, Søren-Peter

    2003-01-01

    A cornerstone in current drug discovery is high-throughput screening assays, which allow examination of the activity of specific ion channels, though only to a limited extent. Conventional patch clamp remains the sole technique with the sufficiently high time resolution and sensitivity required for precise and direct... The introduction of new powerful HTS electrophysiological techniques is predicted to cause a revolution in ion channel drug discovery.

  7. Novel high-throughput cell-based hybridoma screening methodology using the Celigo Image Cytometer.

    Science.gov (United States)

    Zhang, Haohai; Chan, Leo Li-Ying; Rice, William; Kassam, Nasim; Longhi, Maria Serena; Zhao, Haitao; Robson, Simon C; Gao, Wenda; Wu, Yan

    2017-08-01

    Hybridoma screening is a critical step for antibody discovery, which necessitates prompt identification of potential clones from hundreds to thousands of hybridoma cultures against the desired immunogen. Technical issues associated with ELISA- and flow cytometry-based screening limit accuracy and diminish high-throughput capability, increasing time and cost. Conventional ELISA screening with coated antigen is also impractical for difficult-to-express hydrophobic membrane antigens or multi-chain protein complexes. Here, we demonstrate novel high-throughput screening methodology employing the Celigo Image Cytometer, which avoids nonspecific signals by contrasting antibody binding signals directly on living cells, with and without recombinant antigen expression. The image cytometry-based high-throughput screening method was optimized by detecting the binding of hybridoma supernatants to the recombinant antigen CD39 expressed on Chinese hamster ovary (CHO) cells. Next, the sensitivity of the image cytometer was demonstrated by serial dilution of purified CD39 antibody. Celigo was used to measure antibody affinities of commercial and in-house antibodies to membrane-bound CD39. This cell-based screening procedure can be completely accomplished within one day, significantly improving the throughput and efficiency of hybridoma screening. Furthermore, measuring direct antibody binding to living cells eliminated both false positive and false negative hits. The image cytometry method was highly sensitive and versatile, and could detect positive antibody in supernatants at concentrations as low as ~5 ng/mL, with concurrent determination of the Kd binding-affinity coefficient. We propose that this screening method will greatly facilitate antibody discovery and screening technologies. Copyright © 2017 Elsevier B.V. All rights reserved.
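
    The serial-dilution readout described above supports Kd estimation by fitting a one-site binding curve, S = Smax·c/(Kd + c), to signal versus antibody concentration. A minimal grid-search least-squares fit is sketched below (illustrative only, not the Celigo software's routine; concentration units are assumed arbitrary):

    ```python
    import numpy as np

    def fit_kd(conc, signal):
        # One-site binding S = Smax * c / (Kd + c); least squares over a Kd grid.
        best_err, best_kd, best_smax = np.inf, None, None
        for kd in np.logspace(-2, 3, 2000):          # candidate Kd values
            shape = conc / (kd + conc)               # binding curve shape for this Kd
            smax = signal @ shape / (shape @ shape)  # closed-form optimal Smax
            err = np.sum((signal - smax * shape) ** 2)
            if err < best_err:
                best_err, best_kd, best_smax = err, kd, smax
        return best_kd, best_smax

    conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])  # dilution series
    signal = 100.0 * conc / (5.0 + conc)                      # ideal data, Kd = 5
    kd, smax = fit_kd(conc, signal)
    ```

    For each candidate Kd the optimal Smax has a closed form (a linear least-squares projection), so only Kd needs a one-dimensional search; with noisy plate data a dedicated nonlinear fitter would normally replace the grid.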

  8. Rapid dopaminergic modulation of the fish hypothalamic transcriptome and proteome.

    Directory of Open Access Journals (Sweden)

    Jason T Popesku

    2010-08-01

    Full Text Available Dopamine (DA) is a major neurotransmitter playing an important role in the regulation of vertebrate reproduction. We developed a novel method for the comparison of transcriptomic and proteomic data obtained from in vivo experiments designed to study the neuroendocrine actions of DA. Female goldfish were injected (i.p.) with DA agonists (D1-specific: SKF 38393; or D2-specific: LY 171555) and sacrificed after 5 h. Serum LH levels were reduced by 57% and 75% by SKF 38393 and LY 171555, respectively, indicating that the treatments produced physiologically relevant responses in vivo. Bioinformatic strategies and a ray-finned fish database were established for microarray and iTRAQ proteomic analysis of the hypothalamus, revealing a total of 3088 mRNAs and 42 proteins as being differentially regulated by the treatments. Twenty-one proteins, and the mRNAs corresponding to these proteins, appeared on both lists. Many of the mRNAs and proteins affected by the treatments were grouped into the Gene Ontology categories of protein complex, signal transduction, response to stimulus, and regulation of cellular processes. There was a 57% and 14% directional agreement between the differentially regulated mRNAs and proteins for SKF 38393 and LY 171555, respectively. The results demonstrate the applicability of advanced high-throughput genomic and proteomic analyses in an amenable, well-studied teleost model species whose genome has yet to be sequenced. We demonstrate that DA rapidly regulates multiple hypothalamic pathways and processes that are also known to be involved in pathologies of the central nervous system.

  9. A High-Resolution Proteomic Landscaping of Primary Human Dental Stem Cells: Identification of SHED- and PDLSC-Specific Biomarkers

    Directory of Open Access Journals (Sweden)

    Vasiliki Taraslia

    2018-01-01

    Full Text Available Dental stem cells (DSCs) have emerged as a promising tool for basic research and clinical practice. A variety of adult stem cell (ASC) populations can be isolated from different areas within the dental tissue, which, due to their cellular and molecular characteristics, could give rise to different outcomes when used in potential applications. In this study, we performed a high-throughput molecular comparison of two primary human adult dental stem cell (hADSC) sub-populations: Stem Cells from Human Exfoliated Deciduous Teeth (SHEDs) and Periodontal Ligament Stem Cells (PDLSCs). A detailed proteomic mapping of SHEDs and PDLSCs, via employment of nano-LC tandem mass spectrometry (MS/MS), revealed 2032 identified proteins in SHEDs and 3235 in PDLSCs. In total, 1516 proteins were expressed in both populations, while 517 were unique for SHEDs and 1721 were exclusively expressed in PDLSCs. Further analysis of the recorded proteins suggested that SHEDs predominantly expressed molecules that are involved in organizing the cytoskeletal network, cellular migration and adhesion, whereas PDLSCs are highly energy-producing cells, vastly expressing proteins that are implicated in various aspects of cell metabolism and proliferation. Applying the Rho-GDI signaling pathway as a paradigm, we propose potential biomarkers for SHEDs and for PDLSCs, reflecting their unique features, properties and engaged molecular pathways.

  10. High-throughput proteomic characterization of plasma rich in growth factors (PRGF-Endoret)-derived fibrin clot interactome.

    Science.gov (United States)

    Anitua, Eduardo; Prado, Roberto; Azkargorta, Mikel; Rodriguez-Suárez, Eva; Iloro, Ibon; Casado-Vela, Juan; Elortza, Felix; Orive, Gorka

    2015-11-01

    Plasma rich in growth factors (PRGF®-Endoret®) is an autologous technology that contains a set of proteins specifically addressed to wound healing and tissue regeneration. The scaffold formed by using this technology is a clot mainly composed of fibrin protein, forming a three-dimensional (3D) macroscopic network. This biomaterial is easily obtained by biotechnological means from blood and can be used in a range of situations to help wound healing and tissue regeneration. Although the main constituent of this clot is the fibrin scaffold, little is known about other proteins interacting in this clot that may act as adjuvants in the healing process. The aim of this study was to characterize the proteins enclosed by PRGF-Endoret scaffold, using a double-proteomic approach that combines 1D-SDS-PAGE approach followed by LC-MS/MS, and 2-DE followed by MALDI-TOF/TOF. The results presented here provide a description of the catalogue of key proteins in close contact with the fibrin scaffold. The obtained lists of proteins were grouped into families and networks according to gene ontology. Taken together, an enrichment of both proteins and protein families specifically involved in tissue regeneration and wound healing has been found. Copyright © 2013 John Wiley & Sons, Ltd.

  11. imFASP: An integrated approach combining in-situ filter-aided sample pretreatment with microwave-assisted protein digestion for fast and efficient proteome sample preparation.

    Science.gov (United States)

    Zhao, Qun; Fang, Fei; Wu, Ci; Wu, Qi; Liang, Yu; Liang, Zhen; Zhang, Lihua; Zhang, Yukui

    2016-03-17

    An integrated sample preparation method, termed "imFASP", which combines in-situ filter-aided sample pretreatment and microwave-assisted trypsin digestion, was developed for the preparation of microgram and even nanogram amounts of complex protein samples with high efficiency in 1 h. For the imFASP method, proteins dissolved in 8 M urea were loaded onto a filter device with a molecular weight cut-off (MWCO) of 10 kDa, followed by in-situ protein preconcentration, denaturation, reduction, alkylation, and microwave-assisted tryptic digestion. Compared with the traditional in-solution sample preparation method, the imFASP method generated more protein and peptide identifications (IDs) from the preparation of 45 μg of Escherichia coli protein sample due to its higher efficiency, and the sample preparation throughput was significantly improved, by 14 times (1 h vs. 15 h). More importantly, when the starting amounts of E. coli cell lysate decreased to the nanogram level (50-500 ng), the numbers of proteins and peptides identified by the imFASP method improved by at least 30% and 44%, respectively, compared with the traditional in-solution preparation method, suggesting dramatically higher peptide recovery of the imFASP method for trace amounts of complex proteome samples. All these results demonstrate that the imFASP method developed here has high potential for highly efficient and high-throughput preparation of trace amounts of complex proteome samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Toward reliable and repeatable automated STEM-EDS metrology with high throughput

    Science.gov (United States)

    Zhong, Zhenxin; Donald, Jason; Dutrow, Gavin; Roller, Justin; Ugurlu, Ozan; Verheijen, Martin; Bidiuk, Oleksii

    2018-03-01

    New materials and designs in complex 3D architectures in logic and memory devices have increased the complexity of S/TEM metrology. In this paper, we report on a newly developed, automated, scanning transmission electron microscopy (STEM) based, energy-dispersive X-ray spectroscopy (STEM-EDS) metrology method that addresses these challenges. Different methodologies toward repeatable and efficient automated STEM-EDS metrology with high throughput are presented: we introduce the best-known auto-EDS acquisition and quantification methods for robust and reliable metrology, and present how electron exposure dose impacts EDS metrology reproducibility, whether due to a poor signal-to-noise ratio (SNR) at low dose or due to sample modifications at high-dose conditions. Finally, we discuss the limitations of the STEM-EDS metrology technique and propose strategies to optimize the process both in terms of throughput and metrology reliability.

  13. Modeling Disordered Materials with a High Throughput ab-initio Approach

    Science.gov (United States)

    2015-11-13

    Kesong Yang, Corey Oses, and Stefano Curtarolo

  14. Proteomics in medical microbiology.

    Science.gov (United States)

    Cash, P

    2000-04-01

    The techniques of proteomics (high resolution two-dimensional electrophoresis and protein characterisation) are widely used for microbiological research to analyse global protein synthesis as an indicator of gene expression. The rapid progress in microbial proteomics has been achieved through the wide availability of whole genome sequences for a number of bacterial groups. Beyond providing a basic understanding of microbial gene expression, proteomics has also played a role in medical areas of microbiology. Progress has been made in the use of the techniques for investigating the epidemiology and taxonomy of human microbial pathogens, the identification of novel pathogenic mechanisms and the analysis of drug resistance. In each of these areas, proteomics has provided new insights that complement genomic-based investigations. This review describes the current progress in these research fields and highlights some of the technical challenges existing for the application of proteomics in medical microbiology. The latter concern the analysis of genetically heterogeneous bacterial populations and the integration of the proteomic and genomic data for these bacteria. The characterisation of the proteomes of bacterial pathogens growing in their natural hosts remains a future challenge.

  15. High-throughput, temperature-controlled microchannel acoustophoresis device made with rapid prototyping

    DEFF Research Database (Denmark)

    Adams, Jonathan D; Ebbesen, Christian L.; Barnkob, Rune

    2012-01-01

    -slide format using low-cost, rapid-prototyping techniques. This high-throughput acoustophoresis chip (HTAC) utilizes a temperature-stabilized, standing ultrasonic wave, which imposes differential acoustic radiation forces that can separate particles according to size, density and compressibility. The device...
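
    The size-, density- and compressibility-dependent radiation force mentioned in this record is conventionally modeled in acoustofluidics with the acoustic contrast factor Φ: particles with Φ > 0 migrate to the pressure node of the standing wave, and the force scales with particle volume. A sketch under that standard theory (the material values below are approximate literature numbers for polystyrene beads in water, not taken from this paper):

```python
def acoustic_contrast_factor(rho_p, rho_f, kappa_p, kappa_f):
    """Standard acoustofluidic contrast factor for a small particle in an
    inviscid fluid: Phi = (1/3) * [(5*rho~ - 2)/(2*rho~ + 1) - kappa~],
    with rho~ = rho_p/rho_f and kappa~ = kappa_p/kappa_f."""
    rho_t = rho_p / rho_f
    kappa_t = kappa_p / kappa_f
    return ((5 * rho_t - 2) / (2 * rho_t + 1) - kappa_t) / 3

# Approximate values: polystyrene (1050 kg/m^3, 2.16e-10 1/Pa)
# suspended in water (998 kg/m^3, 4.48e-10 1/Pa).
phi = acoustic_contrast_factor(1050.0, 998.0, 2.16e-10, 4.48e-10)
# Phi > 0: the beads are pushed toward the pressure node,
# which is the basis of size-selective acoustophoretic separation.
assert phi > 0
```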

  16. A Functional High-Throughput Assay of Myelination in Vitro

    Science.gov (United States)

    2014-07-01

    Human induced pluripotent stem cells, hydrogels, 3D culture, electrophysiology, high-throughput assay... image the 3D rat dorsal root ganglion (DRG) cultures with sufficiently low background as to detect electrically-evoked depolarization events... of voltage-sensitive dyes. We have made substantial progress in Task 4.1. We have fabricated neural fiber tracts from DRG explants and...

  17. mIMT-visHTS: A novel method for multiplexing isobaric mass tagged datasets with an accompanying visualization high throughput screening tool for protein profiling.

    Science.gov (United States)

    Ricchiuto, Piero; Iwata, Hiroshi; Yabusaki, Katsumi; Yamada, Iwao; Pieper, Brett; Sharma, Amitabh; Aikawa, Masanori; Singh, Sasha A

    2015-10-14

    Isobaric mass tagging (IMT) methods enable the analysis of thousands of proteins simultaneously. We used tandem mass tagging reagents (TMT™) to monitor relative changes in the proteome of the mouse macrophage cell line RAW264.7 at the same six time points after no stimulation (baseline phenotype), stimulation with interferon gamma (pro-inflammatory phenotype), or stimulation with interleukin-4 (anti-inflammatory phenotype). The combined TMT datasets yielded nearly 12,000 protein profiles for comparison. To facilitate this large analysis, we developed a novel method that combines, or multiplexes, the separate IMT (mIMT) datasets into a single super dataset for subsequent model-based clustering and co-regulation analysis. Specially designed visualization High Throughput Screening (visHTS) software was used to screen for co-regulated proteins: it generates an interactive, visually intuitive color-coded bullseye plot that enables users to browse the cluster outputs and identify co-regulated proteins. Copyright © 2015 Elsevier B.V. All rights reserved.
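
    The multiplexing step described above can be sketched generically: each dataset's per-protein profiles are normalized so the separate TMT runs become comparable, merged into one super dataset keyed by protein, and then scored for co-regulation. This is a minimal illustration using z-score normalization and Pearson correlation as a generic co-regulation score; it is not the authors' mIMT implementation, and all names below are hypothetical:

```python
from math import sqrt

def zscore(profile):
    """Normalize one protein's time-course so profiles from different runs compare."""
    n = len(profile)
    mean = sum(profile) / n
    sd = sqrt(sum((x - mean) ** 2 for x in profile) / n) or 1.0  # guard flat profiles
    return [(x - mean) / sd for x in profile]

def multiplex(*datasets):
    """Merge labeled datasets ({protein: profile}) into one super dataset."""
    merged = {}
    for label, data in datasets:
        for protein, profile in data.items():
            merged.setdefault(protein, {})[label] = zscore(profile)
    return merged

def pearson(a, b):
    """Co-regulation score between two z-scored profiles (Pearson correlation)."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

# Two stimulation conditions for the same protein:
merged = multiplex(("IFN-gamma", {"p1": [1.0, 2.0, 3.0]}),
                   ("IL-4",      {"p1": [3.0, 2.0, 1.0]}))
# The two profiles are perfectly anti-correlated (score -1).
assert abs(pearson(merged["p1"]["IFN-gamma"], merged["p1"]["IL-4"]) + 1.0) < 1e-9
```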

  18. High-throughput anisotropic plasma etching of polyimide for MEMS

    International Nuclear Information System (INIS)

    Bliznetsov, Vladimir; Manickam, Anbumalar; Ranganathan, Nagarajan; Chen, Junwei

    2011-01-01

    This note describes a new high-throughput process of polyimide etching for the fabrication of MEMS devices with an organic sacrificial layer approach. Using dual-frequency superimposed capacitively coupled plasma, we achieved a vertical profile of polyimide with an etching rate as high as 3.5 µm min⁻¹. After the fabrication of vertical structures in the polyimide, additional steps were performed to fabricate structural elements of MEMS by depositing a SiO₂ layer and performing a release etch of the polyimide. (technical note)
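
    At the reported 3.5 µm min⁻¹ rate, the time to clear a sacrificial layer is a simple rate calculation; a tiny sketch (the example layer thickness is arbitrary, not from the note):

```python
def etch_time_minutes(thickness_um, rate_um_per_min=3.5):
    """Time to etch through a polyimide layer at the reported rate."""
    return thickness_um / rate_um_per_min

# A 14 um sacrificial layer would etch through in about 4 minutes,
# which is the sense in which the process is "high-throughput".
assert etch_time_minutes(14.0) == 4.0
```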

  19. Identification of lactoferricin B intracellular targets using an Escherichia coli proteome chip.

    Science.gov (United States)

    Tu, Yu-Hsuan; Ho, Yu-Hsuan; Chuang, Ying-Chih; Chen, Po-Chung; Chen, Chien-Sheng

    2011-01-01

    Lactoferricin B (LfcinB) is a well-known antimicrobial peptide. Several studies have indicated that it can inhibit bacteria by affecting intracellular activities, but the intracellular targets of this antimicrobial peptide have not been identified. Therefore, we used E. coli proteome chips to identify the intracellular target proteins of LfcinB in a high-throughput manner. We probed LfcinB with E. coli proteome chips and further conducted normalization and Gene Ontology (GO) analyses. The results of the GO analyses showed that the identified proteins were associated with metabolic processes. Moreover, we validated the interactions between LfcinB and chip assay-identified proteins with fluorescence polarization (FP) assays. Sixteen proteins were identified, and an E. coli interaction database (EcID) analysis revealed that the majority of the proteins that interact with these 16 proteins affected the tricarboxylic acid (TCA) cycle. Knockout assays were conducted to further validate the FP assay results. These results showed that phosphoenolpyruvate carboxylase was a target of LfcinB, indicating that one of its mechanisms of action may be associated with pyruvate metabolism. Thus, we used pyruvate assays to conduct an in vivo validation of the relationship between LfcinB and pyruvate level in E. coli. These results showed that E. coli exposed to LfcinB had abnormal pyruvate amounts, indicating that LfcinB caused an accumulation of pyruvate. In conclusion, this study successfully revealed the intracellular targets of LfcinB using an E. coli proteome chip approach.
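
    GO analyses like the one described are conventionally backed by a hypergeometric over-representation test: given N proteins on the chip, K of which carry a GO term (e.g. "TCA cycle"), it asks how surprising it is that k of the n LfcinB-binding hits carry that term. Whether the authors used exactly this statistic is not stated, so the following is a generic sketch:

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    """Hypergeometric upper tail P(X >= k): probability of drawing k or more
    annotated proteins when n hits are sampled without replacement from N
    proteins, K of which carry the GO term of interest."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy numbers (hypothetical): chip of 10 proteins, 5 annotated with the term,
# and all 5 hits annotated -- the most extreme, hence most significant, outcome.
p = enrichment_pvalue(N=10, K=5, n=5, k=5)
assert p < 0.01
```

    A small p-value indicates the term is over-represented among the hits relative to the chip background, which is the logic behind statements such as "the identified proteins were associated with metabolic processes".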

  20. Identification of lactoferricin B intracellular targets using an Escherichia coli proteome chip.

    Directory of Open Access Journals (Sweden)

    Yu-Hsuan Tu

    Full Text Available Lactoferricin B (LfcinB) is a well-known antimicrobial peptide. Several studies have indicated that it can inhibit bacteria by affecting intracellular activities, but the intracellular targets of this antimicrobial peptide have not been identified. Therefore, we used E. coli proteome chips to identify the intracellular target proteins of LfcinB in a high-throughput manner. We probed LfcinB with E. coli proteome chips and further conducted normalization and Gene Ontology (GO) analyses. The results of the GO analyses showed that the identified proteins were associated with metabolic processes. Moreover, we validated the interactions between LfcinB and chip assay-identified proteins with fluorescence polarization (FP) assays. Sixteen proteins were identified, and an E. coli interaction database (EcID) analysis revealed that the majority of the proteins that interact with these 16 proteins affected the tricarboxylic acid (TCA) cycle. Knockout assays were conducted to further validate the FP assay results. These results showed that phosphoenolpyruvate carboxylase was a target of LfcinB, indicating that one of its mechanisms of action may be associated with pyruvate metabolism. Thus, we used pyruvate assays to conduct an in vivo validation of the relationship between LfcinB and pyruvate level in E. coli. These results showed that E. coli exposed to LfcinB had abnormal pyruvate amounts, indicating that LfcinB caused an accumulation of pyruvate. In conclusion, this study successfully revealed the intracellular targets of LfcinB using an E. coli proteome chip approach.