WorldWideScience

Sample records for improved microarray-based decision

  1. Improved microarray-based decision support with graph encoded interactome data.

    Directory of Open Access Journals (Sweden)

    Anneleen Daemen

    Full Text Available In the past, microarray studies have been criticized due to noise and the limited overlap between gene signatures. Prior biological knowledge should therefore be incorporated as side information in models based on gene expression data to improve the accuracy of diagnosis and prognosis in cancer. As prior knowledge, we investigated interaction and pathway information from the human interactome on different aspects of biological systems. By exploiting the properties of kernel methods, relations between genes with similar functions but active in alternative pathways could be incorporated in a support vector machine classifier based on spectral graph theory. Using 10 microarray data sets, we first reduced the number of data sources relevant for multiple cancer types and outcomes. Three sources, on metabolic pathway information (KEGG), protein-protein interactions (OPHID), and miRNA-gene targeting (microRNA.org), outperformed the other sources with regard to the considered class of models. Both fixed and adaptive approaches were subsequently considered to combine the three corresponding classifiers. Averaging the predictions of these classifiers performed best and was significantly better than the model based on microarray data only. These results were confirmed on 6 validation microarray sets, with a significantly improved performance in 4 of them. Integrating interactome data thus improves classification of cancer outcome for the investigated microarray technologies and cancer types. Moreover, this strategy can be incorporated in any kernel method or non-linear version of a non-kernel method.
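
The fixed combination strategy that performed best, averaging the predictions of the individual classifiers, can be sketched as follows (the decision scores and source names below are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical real-valued decision scores from three kernel classifiers
# (e.g., built from KEGG, OPHID, and microRNA.org derived kernels),
# one value per test sample. All numbers here are made up.
scores_kegg  = np.array([0.8, -0.2, 0.4])
scores_ophid = np.array([0.5, 0.1, -0.3])
scores_mirna = np.array([0.9, -0.4, 0.2])

def average_ensemble(*score_sets):
    """Fixed combination: average the decision scores of several
    classifiers and threshold the mean at zero."""
    mean = np.mean(score_sets, axis=0)
    return np.where(mean >= 0, 1, -1)

labels = average_ensemble(scores_kegg, scores_ophid, scores_mirna)
print(labels)  # → [ 1 -1  1]
```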

  2. Cross-platform analysis of cancer microarray data improves gene expression based classification of phenotypes

    Directory of Open Access Journals (Sweden)

    Eils Roland

    2005-11-01

    Full Text Available Abstract Background The extensive use of DNA microarray technology in the characterization of the cell transcriptome is leading to an ever increasing amount of microarray data from cancer studies. Although similar questions for the same type of cancer are addressed in these different studies, a comparative analysis of their results is hampered by the use of heterogeneous microarray platforms and analysis methods. Results In contrast to a meta-analysis approach where results of different studies are combined on an interpretative level, we investigate here how to directly integrate raw microarray data from different studies for the purpose of supervised classification analysis. We use median rank scores and quantile discretization to derive numerically comparable measures of gene expression from different platforms. These transformed data are then used for training of classifiers based on support vector machines. We apply this approach to six publicly available cancer microarray gene expression data sets, which consist of three pairs of studies, each examining the same type of cancer, i.e. breast cancer, prostate cancer or acute myeloid leukemia. For each pair, one study was performed by means of cDNA microarrays and the other by means of oligonucleotide microarrays. In each pair, high classification accuracies (> 85%) were achieved with training and testing on data instances randomly chosen from both data sets in a cross-validation analysis. To exemplify the potential of this cross-platform classification analysis, we use two leukemia microarray data sets to show that important genes with regard to the biology of leukemia are selected in an integrated analysis, which are missed in either single-set analysis. Conclusion Cross-platform classification of multiple cancer microarray data sets yields discriminative gene expression signatures that are found and validated on a large number of microarray samples, generated by different laboratories and
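
The quantile discretization step described above can be sketched as follows (a simplified illustration of the idea, not the authors' code; the input values are made up). Replacing each measurement by its quantile bin index makes values from different platforms numerically comparable:

```python
import numpy as np

def quantile_discretize(values, n_bins=8):
    """Map each measurement on an array to the index of the quantile
    bin it falls into (0 .. n_bins-1), based on its rank."""
    ranks = np.argsort(np.argsort(values))  # rank of each value, 0 .. n-1
    return ranks * n_bins // len(values)

cdna  = np.array([0.2, 1.5, -0.7, 3.1])   # illustrative cDNA log-ratios
oligo = np.array([120, 980, 45, 2600.0])  # illustrative oligo intensities
print(quantile_discretize(cdna, 4))   # → [1 2 0 3]
print(quantile_discretize(oligo, 4))  # → [1 2 0 3]
```

Although the two platforms report on completely different scales, the discretized profiles agree, which is what allows a single classifier to be trained across both.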

  3. BASE - 2nd generation software for microarray data management and analysis

    Directory of Open Access Journals (Sweden)

    Nordborg Nicklas

    2009-10-01

    Full Text Available Abstract Background Microarray experiments are increasing in size and samples are collected asynchronously over long periods of time. Available data are re-analysed as more samples are hybridized. Systematic use of collected data requires tracking of biomaterials, array information, raw data, and assembly of annotations. To meet the information tracking and data analysis challenges in microarray experiments, we reimplemented and improved BASE version 1.2. Results The new BASE presented in this report is a comprehensive, annotatable local microarray data repository and analysis application providing researchers with an efficient information management and analysis tool. The information management system tracks all material from biosource, via sample and through extraction and labelling to raw data and analysis. All items in BASE can be annotated and the annotations can be used as experimental factors in downstream analysis. BASE stores all microarray experiment related data regardless of whether analysis tools for specific techniques or data formats are readily available. The BASE team is committed to continuing to improve and extend BASE to make it usable for even more experimental setups and techniques, and we encourage other groups to target their specific needs by leveraging the infrastructure provided by BASE. Conclusion BASE is a comprehensive management application for information, data, and analysis of microarray experiments, available as free open source software at http://base.thep.lu.se under the terms of the GPLv3 license.

  4. BASE--2nd generation software for microarray data management and analysis.

    Science.gov (United States)

    Vallon-Christersson, Johan; Nordborg, Nicklas; Svensson, Martin; Häkkinen, Jari

    2009-10-12

    Microarray experiments are increasing in size and samples are collected asynchronously over long periods of time. Available data are re-analysed as more samples are hybridized. Systematic use of collected data requires tracking of biomaterials, array information, raw data, and assembly of annotations. To meet the information tracking and data analysis challenges in microarray experiments, we reimplemented and improved BASE version 1.2. The new BASE presented in this report is a comprehensive, annotatable local microarray data repository and analysis application providing researchers with an efficient information management and analysis tool. The information management system tracks all material from biosource, via sample and through extraction and labelling to raw data and analysis. All items in BASE can be annotated and the annotations can be used as experimental factors in downstream analysis. BASE stores all microarray experiment related data regardless of whether analysis tools for specific techniques or data formats are readily available. The BASE team is committed to continuing to improve and extend BASE to make it usable for even more experimental setups and techniques, and we encourage other groups to target their specific needs by leveraging the infrastructure provided by BASE. BASE is a comprehensive management application for information, data, and analysis of microarray experiments, available as free open source software at http://base.thep.lu.se under the terms of the GPLv3 license.

  5. Multi-test decision tree and its application to microarray data classification.

    Science.gov (United States)

    Czajkowski, Marcin; Grześ, Marek; Kretowski, Marek

    2014-05-01

    A desirable property of tools used to investigate biological data is that they produce easy-to-understand models and predictive decisions. Decision trees are particularly promising in this regard due to their comprehensible nature, which resembles the hierarchical process of human decision making. However, existing algorithms for learning decision trees tend to underfit gene expression data. The main aim of this work is to improve the performance and stability of decision trees with only a small increase in their complexity. We propose a multi-test decision tree (MTDT); our main contribution is the application of several univariate tests in each non-terminal node of the decision tree. We also search for alternative, lower-ranked features in order to obtain more stable and reliable predictions. Experimental validation was performed on several real-life gene expression datasets. Comparison results with eight classifiers show that MTDT has a statistically significantly higher accuracy than popular decision tree classifiers, and it was highly competitive with ensemble learning algorithms. The proposed solution outperformed its baseline algorithm on 14 datasets by an average of 6%. A study performed on one of the datasets showed that the discovered genes used in the MTDT classification model are supported by biological evidence in the literature. This paper introduces a new type of decision tree which is more suitable for solving biological problems. MTDTs are relatively easy to analyze and much more powerful in modeling high-dimensional microarray data than their popular counterparts. Copyright © 2014 Elsevier B.V. All rights reserved.
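
The core multi-test idea can be sketched as follows (gene names, thresholds, and the majority-vote combination are illustrative, not the published MTDT algorithm): each internal node holds several univariate tests, and a sample follows the branch chosen by the majority of them.

```python
def multi_test_split(sample, tests):
    """Route a sample at a multi-test node.

    tests: list of (gene, threshold) pairs; the sample goes left if a
    majority of the univariate comparisons sample[gene] <= threshold
    agree, which makes the split more robust than any single test.
    """
    votes = sum(1 for gene, thr in tests if sample[gene] <= thr)
    return "left" if votes > len(tests) / 2 else "right"

# Hypothetical node with three univariate tests on different genes.
node_tests = [("GENE_A", 5.0), ("GENE_B", 2.3), ("GENE_C", 7.1)]
sample = {"GENE_A": 4.2, "GENE_B": 3.0, "GENE_C": 6.5}
print(multi_test_split(sample, node_tests))  # → left
```

Even though the GENE_B test disagrees, two of the three tests vote left, so the sample is routed left; a single noisy measurement no longer flips the decision.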

  6. Improvement in the amine glass platform by bubbling method for a DNA microarray.

    Science.gov (United States)

    Jee, Seung Hyun; Kim, Jong Won; Lee, Ji Hyeong; Yoon, Young Soo

    2015-01-01

    A glass platform with high sensitivity for a sexually transmitted disease microarray is described here. An amino-silane-based self-assembled monolayer was coated on the surface of a glass platform using a novel bubbling method. The optimized surface of the glass platform had highly uniform surface modifications using this method, as well as improved hybridization properties with capture probes in the DNA microarray. On the basis of these results, the improved glass platform serves as a highly reliable and optimal material for the DNA microarray. Moreover, in this study, we demonstrated that our glass platform, manufactured by utilizing the bubbling method, had higher uniformity, shorter processing time, lower background signal, and higher spot signal than the platforms manufactured by the general dipping method. The DNA microarray manufactured with a glass platform prepared using the bubbling method can be used as a clinical diagnostic tool.

  7. Predicting incomplete gene microarray data with the use of supervised learning algorithms

    CSIR Research Space (South Africa)

    Twala, B

    2010-10-01

    Full Text Available that prediction using supervised learning can be improved in probabilistic terms given incomplete microarray data. This imputation approach is based on the a priori probability of each value determined from the instances at that node of a decision tree (PDT...

  8. AN IMPROVED FUZZY CLUSTERING ALGORITHM FOR MICROARRAY IMAGE SPOTS SEGMENTATION

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-11-01

    Full Text Available An automatic cDNA microarray image processing method using an improved fuzzy clustering algorithm is presented in this paper. The spot segmentation algorithm proposed uses the gridding technique developed by the authors earlier for finding the co-ordinates of each spot in an image. Automatic cropping of spots from the microarray image is done using these co-ordinates. The present paper proposes an improved fuzzy clustering algorithm, possibilistic fuzzy local information c-means (PFLICM), to segment the spot foreground (FG) from the background (BG). PFLICM improves the fuzzy local information c-means (FLICM) algorithm by incorporating the typicality of a pixel along with gray level information and local spatial information. The performance of the algorithm is validated using a set of simulated cDNA microarray images corrupted with different levels of additive white Gaussian noise (AWGN). The strength of the algorithm is tested by computing parameters such as the segmentation matching factor (SMF), probability of error (pe), discrepancy distance (D) and normalized mean square error (NMSE). The SMF value obtained for the PFLICM algorithm shows an improvement of 0.9% and 0.7% for high-noise and low-noise microarray images respectively, compared to the FLICM algorithm. The PFLICM algorithm is also applied to real microarray images and gene expression values are computed.
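
One of the evaluation metrics above, the segmentation matching factor, can be computed as follows (this uses one common overlap-over-union definition of SMF as an assumption; the paper may use a slightly different variant, and the masks below are toy data):

```python
import numpy as np

def segmentation_matching_factor(segmented, ground_truth):
    """SMF as the overlap between the segmented spot mask and the
    reference mask divided by their union, as a percentage."""
    seg = np.asarray(segmented).astype(bool)
    ref = np.asarray(ground_truth).astype(bool)
    return 100.0 * np.logical_and(seg, ref).sum() / np.logical_or(seg, ref).sum()

# Toy 3x3 spot masks: the segmenter misses one foreground pixel.
truth = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
found = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
print(round(segmentation_matching_factor(found, truth), 1))  # → 75.0
```

A perfect segmentation gives 100%; here 3 of the 4 union pixels match, hence 75%.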

  9. Knowledge-based analysis of microarrays for the discovery of transcriptional regulation relationships.

    Science.gov (United States)

    Seok, Junhee; Kaushal, Amit; Davis, Ronald W; Xiao, Wenzhong

    2010-01-18

    The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data.

  10. Improvement in the amine glass platform by bubbling method for a DNA microarray

    Directory of Open Access Journals (Sweden)

    Jee SH

    2015-10-01

    Full Text Available Seung Hyun Jee,1 Jong Won Kim,2 Ji Hyeong Lee,2 Young Soo Yoon1; 1Department of Chemical and Biological Engineering, Gachon University, Seongnam, Gyeonggi, Republic of Korea; 2Genomics Clinical Research Institute, LabGenomics Co., Ltd., Bundang-gu, Seongnam-si, Gyeonggi-do, Republic of Korea. Abstract: A glass platform with high sensitivity for a sexually transmitted disease microarray is described here. An amino-silane-based self-assembled monolayer was coated on the surface of a glass platform using a novel bubbling method. The optimized surface of the glass platform had highly uniform surface modifications using this method, as well as improved hybridization properties with capture probes in the DNA microarray. On the basis of these results, the improved glass platform serves as a highly reliable and optimal material for the DNA microarray. Moreover, in this study, we demonstrated that our glass platform, manufactured by utilizing the bubbling method, had higher uniformity, shorter processing time, lower background signal, and higher spot signal than the platforms manufactured by the general dipping method. The DNA microarray manufactured with a glass platform prepared using the bubbling method can be used as a clinical diagnostic tool. Keywords: DNA microarray, glass platform, bubbling method, self-assembled monolayer

  11. A Bayesian decision procedure for testing multiple hypotheses in DNA microarray experiments.

    Science.gov (United States)

    Gómez-Villegas, Miguel A; Salazar, Isabel; Sanz, Luis

    2014-02-01

    DNA microarray experiments require the use of multiple hypothesis testing procedures because thousands of hypotheses are simultaneously tested. We deal with this problem from a Bayesian decision theory perspective. We propose a decision criterion based on an estimation of the number of false null hypotheses (FNH), taking as an error measure the proportion of the posterior expected number of false positives with respect to the estimated number of true null hypotheses. The methodology is applied to a Gaussian model when testing bilateral hypotheses. The procedure is illustrated with both simulated and real data examples and the results are compared to those obtained by the Bayes rule when an additive loss function is considered for each joint action and the generalized loss 0-1 function for each individual action. Our procedure significantly reduced the percentage of false negatives whereas the percentage of false positives remained at an acceptable level.
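
The flavor of the decision criterion can be sketched as follows (a deliberately simplified illustration, not the authors' exact rule; the posterior probabilities are made up): given the posterior probability that each null hypothesis is true, reject the most promising hypotheses while the posterior expected number of false positives, relative to the estimated number of true nulls, stays below a tolerance.

```python
import numpy as np

def bayes_reject(post_null, alpha=0.05):
    """Greedy sketch: reject hypotheses in order of increasing
    posterior null probability while the ratio of the posterior
    expected false positives to the estimated number of true nulls
    stays below alpha."""
    post_null = np.asarray(post_null, float)
    order = np.argsort(post_null)        # most likely alternatives first
    est_true_nulls = post_null.sum()     # posterior expected # of true nulls
    rejected, expected_fp = [], 0.0
    for i in order:
        if (expected_fp + post_null[i]) / est_true_nulls > alpha:
            break
        expected_fp += post_null[i]
        rejected.append(int(i))
    return sorted(rejected)

# Five hypothetical genes; indices 1 and 3 are almost surely true nulls.
posteriors = np.array([0.01, 0.90, 0.02, 0.95, 0.005])
print(bayes_reject(posteriors, alpha=0.05))  # → [0, 2, 4]
```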

  12. Assessing Bacterial Interactions Using Carbohydrate-Based Microarrays

    Directory of Open Access Journals (Sweden)

    Andrea Flannery

    2015-12-01

    Full Text Available Carbohydrates play a crucial role in host-microorganism interactions and many host glycoconjugates are receptors or co-receptors for microbial binding. Host glycosylation varies with species and location in the body, and this contributes to species specificity and tropism of commensal and pathogenic bacteria. Additionally, bacterial glycosylation is often the first bacterial molecular species encountered and responded to by the host system. Accordingly, characterising and identifying the exact structures involved in these critical interactions is an important priority in deciphering microbial pathogenesis. Carbohydrate-based microarray platforms have been an underused tool for screening bacterial interactions with specific carbohydrate structures, but they are growing in popularity in recent years. In this review, we discuss carbohydrate-based microarrays that have been profiled with whole bacteria, recombinantly expressed adhesins or serum antibodies. Three main types of carbohydrate-based microarray platform are considered: (i) conventional carbohydrate or glycan microarrays; (ii) whole mucin microarrays; and (iii) microarrays constructed from bacterial polysaccharides or their components. Determining the nature of the interactions between bacteria and host can help clarify the molecular mechanisms of carbohydrate-mediated interactions in microbial pathogenesis, infectious disease and host immune response and may lead to new strategies to boost therapeutic treatments.

  13. Sensitivity and fidelity of DNA microarray improved with integration of Amplified Differential Gene Expression (ADGE)

    Directory of Open Access Journals (Sweden)

    Ile Kristina E

    2003-07-01

    Full Text Available Abstract Background The ADGE technique is a method designed to magnify the ratios of gene expression before detection. It improves the detection sensitivity to small changes in gene expression and requires a small amount of starting material. However, the throughput of ADGE is low. We integrated ADGE with DNA microarray (ADGE microarray) and compared it with regular microarray. Results When ADGE was integrated with DNA microarray, a quantitative relationship of a power function between detected and input ratios was found. Because of ratio magnification, ADGE microarray was better able to detect small changes in gene expression in a drug-resistant model cell line system. The PCR amplification of templates and efficient labeling reduced the requirement of starting material to as little as 125 ng of total RNA for one slide hybridization and enhanced the signal intensity. Integration of ratio magnification, template amplification and efficient labeling in ADGE microarray reduced artifacts in microarray data and improved detection fidelity. The results of ADGE microarray were less variable and more reproducible than those of regular microarray. A gene expression profile generated with ADGE microarray characterized the drug-resistant phenotype, particularly with reference to glutathione, proliferation and kinase pathways. Conclusion ADGE microarray magnified the ratios of differential gene expression in a power function, improved the detection sensitivity and fidelity and reduced the requirement for starting material while maintaining high throughput. ADGE microarray generated a more informative expression pattern than regular microarray.
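
A power-function relationship between detected and input ratios, detected = a * input**b, can be recovered by fitting a straight line in log-log space, as this small check illustrates (the values below are generated for the example, not the paper's data):

```python
import numpy as np

# Synthetic input ratios and detected ratios following a power function
# detected = a * input**b with a = 1.5, b = 0.7 (illustrative values).
inputs   = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
detected = 1.5 * inputs ** 0.7

# A power function is a straight line in log-log space:
# log(detected) = b * log(input) + log(a), so a degree-1 polyfit
# recovers the exponent b (slope) and the scale a (exp of intercept).
b, log_a = np.polyfit(np.log(inputs), np.log(detected), 1)
print(round(b, 2), round(np.exp(log_a), 2))  # → 0.7 1.5
```

An exponent b > 1 would correspond to ratio magnification, the effect ADGE is designed to produce.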

  14. The IronChip evaluation package: a package of perl modules for robust analysis of custom microarrays

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2010-03-01

    Full Text Available Abstract Background Gene expression studies greatly contribute to our understanding of complex relationships in gene regulatory networks. However, the complexity of array design, production and manipulation is a limiting factor affecting data quality. The use of customized DNA microarrays improves overall data quality in many situations, provided that analysis tools are available for these specifically designed microarrays. Results The IronChip Evaluation Package (ICEP) is a collection of Perl utilities and an easy-to-use data evaluation pipeline for the analysis of microarray data, with a focus on the data quality of custom-designed microarrays. The package has been developed for the statistical and bioinformatical analysis of the custom cDNA microarray IronChip, but can be easily adapted for other cDNA or oligonucleotide-based microarray platforms. ICEP uses decision tree-based algorithms to assign quality flags and performs robust analysis based on chip design properties regarding multiple repetitions, ratio cut-off, background and negative controls. Conclusions ICEP is a stand-alone Windows application to obtain optimal data quality from custom-designed microarrays and is freely available here (see "Additional Files" section) and at: http://www.alice-dsl.net/evgeniy.vainshtein/ICEP/
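
Decision-tree style quality flagging of a spot works like a cascade of rules, where the first rule that fires determines the flag. The rules and thresholds below are purely illustrative, not ICEP's actual criteria:

```python
def quality_flag(spot):
    """Assign a quality flag to a spot by walking a rule cascade.
    All rule names and thresholds are hypothetical examples."""
    if spot["n_replicates"] < 2:
        return "BAD_too_few_replicates"
    if spot["signal"] < 2 * spot["background"]:
        return "BAD_low_signal"
    if spot["is_negative_control"] and spot["signal"] > 3 * spot["background"]:
        return "BAD_control_signal"
    return "OK"

print(quality_flag({"n_replicates": 3, "signal": 900,
                    "background": 100, "is_negative_control": False}))  # → OK
```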

  15. Microarray-based screening of heat shock protein inhibitors.

    Science.gov (United States)

    Schax, Emilia; Walter, Johanna-Gabriela; Märzhäuser, Helene; Stahl, Frank; Scheper, Thomas; Agard, David A; Eichner, Simone; Kirschning, Andreas; Zeilinger, Carsten

    2014-06-20

    Based on the importance of heat shock proteins (HSPs) in diseases such as cancer, Alzheimer's disease or malaria, inhibitors of these chaperones are needed. Today's state-of-the-art techniques to identify HSP inhibitors are performed in microplate format, requiring large amounts of proteins and potential inhibitors. In contrast, we have developed a miniaturized protein microarray-based assay to identify novel inhibitors, allowing analysis with 300 pmol of protein. The assay is based on competitive binding of fluorescence-labeled ATP and potential inhibitors to the ATP-binding site of HSP. Therefore, the developed microarray enables the parallel analysis of different ATP-binding proteins on a single microarray. We have demonstrated the possibility of multiplexing by immobilizing full-length human HSP90α and HtpG of Helicobacter pylori on microarrays. Fluorescence-labeled ATP was competed by novel geldanamycin/reblastatin derivatives with IC50 values in the range of 0.5 nM to 4 μM and Z'-factors between 0.60 and 0.96. Our results demonstrate the potential of a target-oriented multiplexed protein microarray to identify novel inhibitors for different members of the HSP90 family. Copyright © 2014 Elsevier B.V. All rights reserved.
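
The Z'-factor quoted above is a standard screening-assay quality statistic computed from positive and negative control readings; values above roughly 0.5 are usually considered an excellent assay. A minimal computation (the control readings below are made-up numbers):

```python
import numpy as np

def z_prime(pos, neg):
    """Standard Z'-factor:
    1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# Illustrative fluorescence readings for positive and negative controls.
pos_ctrl = [100, 102, 98, 101, 99]
neg_ctrl = [10, 12, 8, 11, 9]
print(round(z_prime(pos_ctrl, neg_ctrl), 2))  # → 0.89
```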

  16. Informed Decision-Making in the Context of Prenatal Chromosomal Microarray.

    Science.gov (United States)

    Baker, Jessica; Shuman, Cheryl; Chitayat, David; Wasim, Syed; Okun, Nan; Keunen, Johannes; Hofstedter, Renee; Silver, Rachel

    2018-03-07

    The introduction of chromosomal microarray (CMA) into the prenatal setting has involved considerable deliberation due to the wide range of possible outcomes (e.g., copy number variants of uncertain clinical significance). Such issues are typically discussed in pre-test counseling for pregnant women to support informed decision-making regarding prenatal testing options. This research study aimed to assess the level of informed decision-making with respect to prenatal CMA and the factor(s) influencing decision-making to accept CMA for the selected prenatal testing procedure (i.e., chorionic villus sampling or amniocentesis). We employed a questionnaire that was adapted from a three-dimensional measure previously used to assess informed decision-making with respect to prenatal screening for Down syndrome and neural tube defects. This measure classifies an informed decision as one that is knowledgeable, value-consistent, and deliberated. Our questionnaire also included an optional open-ended question, soliciting factors that may have influenced the participants' decision to accept prenatal CMA; these responses were analyzed qualitatively. Data analysis on 106 participants indicated that 49% made an informed decision (i.e., meeting all three criteria of knowledgeable, deliberated, and value-consistent). Analysis of 59 responses to the open-ended question showed that "the more information the better" emerged as the dominant factor influencing both informed and uninformed participants' decisions to accept prenatal CMA. Despite learning about the key issues in pre-test genetic counseling, our study classified a significant portion of women as making uninformed decisions due to insufficient knowledge, lack of deliberation, value-inconsistency, or a combination of these three measures. Future efforts should focus on developing educational approaches and counseling strategies to effectively increase the rate of informed decision-making among women offered prenatal CMA.

  17. Cell-Based Microarrays for In Vitro Toxicology

    Science.gov (United States)

    Wegener, Joachim

    2015-07-01

    DNA/RNA and protein microarrays have proven their outstanding bioanalytical performance throughout the past decades, given the unprecedented level of parallelization by which molecular recognition assays can be performed and analyzed. Cell microarrays (CMAs) make use of similar construction principles. They are applied to profile a given cell population with respect to the expression of specific molecular markers and also to measure functional cell responses to drugs and chemicals. This review focuses on the use of cell-based microarrays for assessing the cytotoxicity of drugs, toxins, or chemicals in general. It also summarizes CMA construction principles with respect to the cell types that are used for such microarrays, the readout parameters to assess toxicity, and the various formats that have been established and applied. The review ends with a critical comparison of CMAs and well-established microtiter plate (MTP) approaches.

  18. Big-Data Based Decision-Support Systems to Improve Clinicians' Cognition.

    Science.gov (United States)

    Roosan, Don; Samore, Matthew; Jones, Makoto; Livnat, Yarden; Clutter, Justin

    2016-01-01

    Complex clinical decision-making could be facilitated by using population health data to inform clinicians. In two previous studies, we interviewed 16 infectious disease experts to understand complex clinical reasoning. For this study, we focused on the experts' answers about how clinical reasoning can be supported by population-based Big-Data. We found that cognitive strategies such as trajectory tracking, perspective taking, and metacognition have the potential to improve clinicians' cognition when dealing with complex problems. These cognitive strategies could be supported by population health data, and all have important implications for the design of Big-Data based decision-support tools that could be embedded in electronic health records. Our findings provide directions for task allocation and for the design of Big-Data based decision-support applications in the health care industry.

  19. Data-based decision making for instructional improvement in primary education

    NARCIS (Netherlands)

    Gelderblom, Gerrit; Schildkamp, Kim; Pieters, Julius Marie; Ehren, Melanie Catharina Margaretha

    2016-01-01

    Data-based decision making can help teachers improve their instruction. Research shows that instruction has a strong impact on students' learning outcomes. This study investigates whether Dutch primary school teachers use data to improve their instruction. Four aspects of instruction were

  20. Expert Knowledge Influences Decision-Making for Couples Receiving Positive Prenatal Chromosomal Microarray Testing Results.

    Science.gov (United States)

    Rubel, M A; Werner-Lin, A; Barg, F K; Bernhardt, B A

    2017-09-01

    To assess how participants receiving abnormal prenatal genetic testing results seek information and understand the implications of results, 27 US female patients and 12 of their male partners receiving positive prenatal microarray testing results completed semi-structured phone interviews. These interviews documented participant experiences with chromosomal microarray testing, understanding of and emotional response to receiving results, factors affecting decision-making about testing and pregnancy termination, and psychosocial needs throughout the testing process. Interview data were analyzed using a modified grounded theory approach. In the absence of certainty about the implications of results, understanding of results is shaped by biomedical expert knowledge (BEK) and cultural expert knowledge (CEK). When there is a dearth of BEK, as in the case of receiving results of uncertain significance, participants rely on CEK, including religious/spiritual beliefs, "gut instinct," embodied knowledge, and social network informants. CEK is a powerful platform to guide understanding of prenatal genetic testing results. The utility of culturally situated expert knowledge during testing uncertainty emphasizes that decision-making occurs within discourses beyond the biomedical domain. These forms of "knowing" may be integrated into clinical consideration of efficacious patient assessment and counseling.


  2. A Java-based tool for the design of classification microarrays.

    Science.gov (United States)

    Meng, Da; Broschat, Shira L; Call, Douglas R

    2008-08-04

    Classification microarrays are used for purposes such as identifying strains of bacteria and determining genetic relationships to understand the epidemiology of an infectious disease. For these cases, mixed microarrays, which are composed of DNA from more than one organism, are more effective than conventional microarrays composed of DNA from a single organism. Selection of probes is a key factor in designing successful mixed microarrays because redundant sequences are inefficient and limited representation of diversity can restrict application of the microarray. We have developed a Java-based software tool, called PLASMID, for use in selecting the minimum set of probe sequences needed to classify different groups of plasmids or bacteria. The software program was successfully applied to several different sets of data. The utility of PLASMID was illustrated using existing mixed-plasmid microarray data as well as data from a virtual mixed-genome microarray constructed from different strains of Streptococcus. Moreover, use of data from expression microarray experiments demonstrated the generality of PLASMID. In this paper we describe a new software tool for selecting a set of probes for a classification microarray. While the tool was developed for the design of mixed microarrays-and mixed-plasmid microarrays in particular-it can also be used to design expression arrays. The user can choose from several clustering methods (including hierarchical, non-hierarchical, and a model-based genetic algorithm), several probe ranking methods, and several different display methods. A novel approach is used for probe redundancy reduction, and probe selection is accomplished via stepwise discriminant analysis. Data can be entered in different formats (including Excel and comma-delimited text), and dendrogram, heat map, and scatter plot images can be saved in several different formats (including jpeg and tiff). 
Weights generated using stepwise discriminant analysis can be stored for
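The stepwise discriminant analysis used for probe selection can be approximated by a greedy forward search that adds, at each step, the probe with the best class-separation score. Below is a minimal pure-Python sketch; the Fisher-style score and the marginal (rather than conditional) selection criterion are simplifications, and the data layout is an illustrative assumption, not PLASMID's actual implementation.

```python
# Hedged sketch: greedy forward probe selection by a Fisher-style
# separation score (between-class vs. within-class variance ratio).
# This approximates, but does not reproduce, stepwise discriminant
# analysis as used by PLASMID.

def fisher_score(values, labels):
    """Between-class over within-class variance for one probe."""
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(y, []).append(v)
    grand = sum(values) / len(values)
    between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                  for g in groups.values())
    within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g)
                 for g in groups.values())
    return between / within if within else float("inf")

def stepwise_select(matrix, labels, k):
    """Pick k probes (columns) one at a time by best marginal score."""
    n_probes = len(matrix[0])
    chosen = []
    for _ in range(k):
        best, best_score = None, -1.0
        for j in range(n_probes):
            if j in chosen:
                continue
            s = fisher_score([row[j] for row in matrix], labels)
            if s > best_score:
                best, best_score = j, s
        chosen.append(best)
    return chosen
```

A probe whose values perfectly separate the classes (zero within-class variance) receives an infinite score and is selected first.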

  3. A Fisheye Viewer for microarray-based gene expression data.

    Science.gov (United States)

    Wu, Min; Thao, Cheng; Mu, Xiangming; Munson, Ethan V

    2006-10-13

    Microarray has been widely used to measure the relative amounts of every mRNA transcript from the genome in a single scan. Biologists have been accustomed to reading their experimental data directly from tables. However, microarray data are quite large and are stored in a series of files in a machine-readable format, so direct reading of the full data set is not feasible. The challenge is to design a user interface that allows biologists to usefully view large tables of raw microarray-based gene expression data. This paper presents one such interface--an electronic table (E-table) that uses fisheye distortion technology. The Fisheye Viewer for microarray-based gene expression data has been successfully developed to view MIAME data stored in the MAGE-ML format. The viewer can be downloaded from the project web site http://polaris.imt.uwm.edu:7777/fisheye/. The fisheye viewer was implemented in Java so that it could run on multiple platforms. We implemented the E-table by adapting JTable, a default table implementation in the Java Swing user interface library. Fisheye views use variable magnification to balance magnification for easy viewing and compression for maximizing the amount of data on the screen. This Fisheye Viewer is a lightweight but useful tool for biologists to quickly overview the raw microarray-based gene expression data in an E-table.

  4. A Java-based tool for the design of classification microarrays

    Directory of Open Access Journals (Sweden)

    Broschat Shira L

    2008-08-01

Full Text Available Abstract Background Classification microarrays are used for purposes such as identifying strains of bacteria and determining genetic relationships to understand the epidemiology of an infectious disease. For these cases, mixed microarrays, which are composed of DNA from more than one organism, are more effective than conventional microarrays composed of DNA from a single organism. Selection of probes is a key factor in designing successful mixed microarrays because redundant sequences are inefficient and limited representation of diversity can restrict application of the microarray. We have developed a Java-based software tool, called PLASMID, for use in selecting the minimum set of probe sequences needed to classify different groups of plasmids or bacteria. Results The software program was successfully applied to several different sets of data. The utility of PLASMID was illustrated using existing mixed-plasmid microarray data as well as data from a virtual mixed-genome microarray constructed from different strains of Streptococcus. Moreover, use of data from expression microarray experiments demonstrated the generality of PLASMID. Conclusion In this paper we describe a new software tool for selecting a set of probes for a classification microarray. While the tool was developed for the design of mixed microarrays–and mixed-plasmid microarrays in particular–it can also be used to design expression arrays. The user can choose from several clustering methods (including hierarchical, non-hierarchical, and a model-based genetic algorithm), several probe ranking methods, and several different display methods. A novel approach is used for probe redundancy reduction, and probe selection is accomplished via stepwise discriminant analysis. Data can be entered in different formats (including Excel and comma-delimited text), and dendrogram, heat map, and scatter plot images can be saved in several different formats (including jpeg and tiff). Weights

  5. Microintaglio Printing for Soft Lithography-Based in Situ Microarrays

    Directory of Open Access Journals (Sweden)

    Manish Biyani

    2015-07-01

Full Text Available Advances in lithographic approaches to fabricating bio-microarrays have been extensively explored over the last two decades. However, the need for pattern flexibility, a high density, a high resolution, affordability and on-demand fabrication is promoting the development of unconventional routes for microarray fabrication. This review highlights the development and uses of a new molecular lithography approach, called “microintaglio printing technology”, for large-scale bio-microarray fabrication using a microreactor array (µRA)-based chip consisting of uniformly-arranged, femtoliter-size µRA molds. In this method, a single-molecule-amplified DNA microarray pattern is self-assembled onto a µRA mold and subsequently converted into a messenger RNA or protein microarray pattern by simultaneously producing and transferring (immobilizing) a messenger RNA or a protein from a µRA mold to a glass surface. Microintaglio printing allows the self-assembly and patterning of in situ-synthesized biomolecules into high-density (kilo-giga-density), ordered arrays on a chip surface with µm-order precision. This holistic aim, which is difficult to achieve using conventional printing and microarray approaches, is expected to revolutionize and reshape proteomics. This review is not written comprehensively, but rather substantively, highlighting the versatility of microintaglio printing for developing a prerequisite platform for microarray technology for the postgenomic era.

  6. Improving cluster-based missing value estimation of DNA microarray data.

    Science.gov (United States)

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNN has a smaller detrimental effect on the detection of differentially expressed genes.
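The iterative refinement idea behind IKNNimpute can be illustrated in a few lines: fill missing entries with an initial estimate, then repeatedly re-impute them from the K nearest genes, whose distances now reflect previously estimated values. This sketch uses an unweighted neighbour average and a fixed round count; both are simplifications of the published method.

```python
# Hedged sketch of iterative KNN imputation: missing entries start
# at the gene's observed mean, then are refined for a few rounds
# using the K nearest genes (including previously imputed values).

import math

def iknn_impute(matrix, k=2, rounds=3):
    missing = [(i, j) for i, row in enumerate(matrix)
               for j, v in enumerate(row) if v is None]
    # Initial fill: per-gene mean of the observed values.
    filled = []
    for row in matrix:
        obs = [v for v in row if v is not None]
        mean = sum(obs) / len(obs)
        filled.append([mean if v is None else v for v in row])
    for _ in range(rounds):
        for i, j in missing:
            dists = sorted((math.dist(filled[i], filled[g]), g)
                           for g in range(len(filled)) if g != i)
            neighbours = [g for _, g in dists[:k]]
            # Unweighted neighbour average (the real method weights
            # neighbours by distance).
            filled[i][j] = sum(filled[g][j] for g in neighbours) / k
    return filled
```

Each round re-selects neighbours using the refreshed estimates, which is the mechanism the abstract credits for the gain at high missing rates.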

  7. Improved porous silicon (P-Si) microarray based PSA (prostate specific antigen) immunoassay by optimized surface density of the capture antibody

    Science.gov (United States)

    Lee, SangWook; Kim, Soyoun; Malm, Johan; Jeong, Ok Chan; Lilja, Hans; Laurell, Thomas

    2014-01-01

Enriching the surface density of immobilized capture antibodies enhances the detection signal of antibody sandwich microarrays. In this study, we improved the detection sensitivity of our previously developed P-Si (porous silicon) antibody microarray by optimizing the concentration of the capture antibody. We investigated immunoassays using a P-Si microarray at three different concentrations of the capture antibody against PSA (prostate specific antigen), analyzing the influence of the antibody density on the assay detection sensitivity. The LOD (limit of detection) for PSA was 2.5 ng mL−1, 80 pg mL−1, and 800 fg mL−1 when arraying the PSA antibody H117 at concentrations of 15 µg mL−1, 35 µg mL−1, and 154 µg mL−1, respectively. We further investigated PSA spiked into human female serum in the range of 800 fg mL−1 to 500 ng mL−1. The microarray showed a LOD of 800 fg mL−1 and a dynamic range of 800 fg mL−1 to 80 ng mL−1 in serum-spiked samples. PMID:24016590
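For readers unfamiliar with the LOD figures above: one common (though not the only) convention estimates the limit of detection as the mean blank signal plus three standard deviations. The abstract does not state which definition the authors used, so the helper below is purely illustrative.

```python
# Hedged sketch: a common LOD estimate in signal terms is
# mean(blank) + 3 * sd(blank). Whether this matches the paper's
# definition is an assumption; numbers here are illustrative.

import statistics

def lod(blank_signals):
    """Limit of detection from replicate blank measurements."""
    return statistics.mean(blank_signals) + 3 * statistics.stdev(blank_signals)
```

The signal-level LOD is then converted to a concentration via the assay's calibration curve.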

  8. Improved precision and accuracy for microarrays using updated probe set definitions

    Directory of Open Access Journals (Sweden)

    Larsson Ola

    2007-02-01

Full Text Available Abstract Background Microarrays enable high throughput detection of transcript expression levels. Different investigators have recently introduced updated probe set definitions to more accurately map probes to our current knowledge of genes and transcripts. Results We demonstrate that updated probe set definitions provide both better precision and accuracy in probe set estimates compared to the original Affymetrix definitions. We show that the improved precision mainly depends on the increased number of probes that are integrated into each probe set, but we also demonstrate an improvement when the same number of probes is used. Conclusion Updated probe set definitions not only offer expression levels that are more accurately associated with genes and transcripts but also improve the estimated transcript expression levels. These results give support for the use of updated probe set definitions for analysis and meta-analysis of microarray data.
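The finding that precision improves mainly with the number of probes per probe set is what simple averaging predicts: under an independence assumption, the standard error of a probe-set mean scales as sd/sqrt(n). The one-line helper below just makes that scaling explicit; the numbers are illustrative, not from the paper.

```python
# Hedged sketch: standard error of a probe-set summary that averages
# n_probes independent probe signals with per-probe noise sd.
# Real probe noise is correlated, so this is only the idealized case.

import math

def standard_error(sd, n_probes):
    """Standard error of the mean of n_probes independent probes."""
    return sd / math.sqrt(n_probes)
```

Quadrupling the probe count halves the standard error, which is the direction of the precision gain reported above.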

  9. Hybrid Feature Selection Approach Based on GRASP for Cancer Microarray Data

    Directory of Open Access Journals (Sweden)

    Arpita Nagpal

    2017-01-01

Full Text Available Microarray data usually contain a large number of genes, but a small number of samples. Feature subset selection for microarray data aims at reducing the number of genes so that useful information can be extracted from the samples. Reducing the dimension of data sets further helps in improving the computational efficiency of the learning model. In this paper, we propose a modified algorithm that uses tabu search as the local search procedure in a Greedy Randomized Adaptive Search Procedure (GRASP) for high dimensional microarray data sets. The proposed tabu-based Greedy Randomized Adaptive Search Procedure algorithm is named TGRASP. In TGRASP, a new parameter named Tabu Tenure has been introduced, and the existing parameters NumIter and size have been modified. We observed that different parameter settings affect the quality of the optimum. The second proposed algorithm, FFGRASP (Firefly Greedy Randomized Adaptive Search Procedure), uses a firefly optimization algorithm in the local search optimization phase of GRASP. The firefly algorithm is one of the powerful algorithms for optimizing multimodal applications. Experimental results show that the proposed TGRASP and FFGRASP algorithms are much better than the existing algorithm with respect to three performance parameters, viz. accuracy, run time, and the size of the selected feature subset. We have also compared both approaches with a unified metric (Extended Adjusted Ratio of Ratios), which shows that the TGRASP approach outperforms the existing approach on six out of nine cancer microarray datasets and FFGRASP performs better on seven out of nine datasets.
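The GRASP skeleton underlying both TGRASP and FFGRASP alternates a greedy randomized construction (sampling from a restricted candidate list, RCL) with a local search phase, which the paper replaces by tabu search or a firefly algorithm. The sketch below shows only the construction step over precomputed per-feature scores; the additive surrogate objective and the omitted local search are simplifications, not the authors' algorithm.

```python
# Hedged sketch of GRASP's greedy randomized construction for
# feature selection. Each step builds an RCL of features whose score
# is within alpha of the best remaining score, then picks one at
# random. The local search phase (tabu / firefly in the paper) is
# intentionally omitted.

import random

def grasp_select(scores, subset_size, alpha=0.5, iters=20, seed=0):
    rng = random.Random(seed)
    best, best_val = None, -1.0
    for _ in range(iters):
        remaining = list(range(len(scores)))
        chosen = []
        while len(chosen) < subset_size:
            remaining.sort(key=lambda j: scores[j], reverse=True)
            hi, lo = scores[remaining[0]], scores[remaining[-1]]
            cutoff = hi - alpha * (hi - lo)
            rcl = [j for j in remaining if scores[j] >= cutoff]
            pick = rng.choice(rcl)
            chosen.append(pick)
            remaining.remove(pick)
        val = sum(scores[j] for j in chosen)  # surrogate for accuracy
        if val > best_val:
            best, best_val = sorted(chosen), val
    return best
```

In a real run the surrogate objective would be replaced by classifier accuracy under cross-validation, which is what makes the randomized restarts worthwhile.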

  10. A fisheye viewer for microarray-based gene expression data

    Directory of Open Access Journals (Sweden)

    Munson Ethan V

    2006-10-01

Full Text Available Abstract Background Microarray has been widely used to measure the relative amounts of every mRNA transcript from the genome in a single scan. Biologists have been accustomed to reading their experimental data directly from tables. However, microarray data are quite large and are stored in a series of files in a machine-readable format, so direct reading of the full data set is not feasible. The challenge is to design a user interface that allows biologists to usefully view large tables of raw microarray-based gene expression data. This paper presents one such interface – an electronic table (E-table) that uses fisheye distortion technology. Results The Fisheye Viewer for microarray-based gene expression data has been successfully developed to view MIAME data stored in the MAGE-ML format. The viewer can be downloaded from the project web site http://polaris.imt.uwm.edu:7777/fisheye/. The fisheye viewer was implemented in Java so that it could run on multiple platforms. We implemented the E-table by adapting JTable, a default table implementation in the Java Swing user interface library. Fisheye views use variable magnification to balance magnification for easy viewing and compression for maximizing the amount of data on the screen. Conclusion This Fisheye Viewer is a lightweight but useful tool for biologists to quickly overview the raw microarray-based gene expression data in an E-table.

  11. Comparison of RNA-seq and microarray-based models for clinical endpoint prediction.

    Science.gov (United States)

    Zhang, Wenqian; Yu, Ying; Hertwig, Falk; Thierry-Mieg, Jean; Zhang, Wenwei; Thierry-Mieg, Danielle; Wang, Jian; Furlanello, Cesare; Devanarayan, Viswanath; Cheng, Jie; Deng, Youping; Hero, Barbara; Hong, Huixiao; Jia, Meiwen; Li, Li; Lin, Simon M; Nikolsky, Yuri; Oberthuer, André; Qing, Tao; Su, Zhenqiang; Volland, Ruth; Wang, Charles; Wang, May D; Ai, Junmei; Albanese, Davide; Asgharzadeh, Shahab; Avigad, Smadar; Bao, Wenjun; Bessarabova, Marina; Brilliant, Murray H; Brors, Benedikt; Chierici, Marco; Chu, Tzu-Ming; Zhang, Jibin; Grundy, Richard G; He, Min Max; Hebbring, Scott; Kaufman, Howard L; Lababidi, Samir; Lancashire, Lee J; Li, Yan; Lu, Xin X; Luo, Heng; Ma, Xiwen; Ning, Baitang; Noguera, Rosa; Peifer, Martin; Phan, John H; Roels, Frederik; Rosswog, Carolina; Shao, Susan; Shen, Jie; Theissen, Jessica; Tonini, Gian Paolo; Vandesompele, Jo; Wu, Po-Yen; Xiao, Wenzhong; Xu, Joshua; Xu, Weihong; Xuan, Jiekun; Yang, Yong; Ye, Zhan; Dong, Zirui; Zhang, Ke K; Yin, Ye; Zhao, Chen; Zheng, Yuanting; Wolfinger, Russell D; Shi, Tieliu; Malkas, Linda H; Berthold, Frank; Wang, Jun; Tong, Weida; Shi, Leming; Peng, Zhiyu; Fischer, Matthias

    2015-06-25

Gene expression profiling is being widely applied in cancer research to identify biomarkers for clinical endpoint prediction. Since RNA-seq provides a powerful tool for transcriptome-based applications beyond the limitations of microarrays, we sought to systematically evaluate the performance of RNA-seq-based and microarray-based classifiers in this MAQC-III/SEQC study for clinical endpoint prediction using neuroblastoma as a model. We generate gene expression profiles from 498 primary neuroblastomas using both RNA-seq and 44 k microarrays. Characterization of the neuroblastoma transcriptome by RNA-seq reveals that more than 48,000 genes and 200,000 transcripts are expressed in this malignancy. We also find that RNA-seq provides much more detailed information on specific transcript expression patterns in clinico-genetic neuroblastoma subgroups than microarrays. To systematically compare the power of RNA-seq and microarray-based models in predicting clinical endpoints, we divide the cohort randomly into training and validation sets and develop 360 predictive models on six clinical endpoints of varying predictability. Evaluation of factors potentially affecting model performances reveals that prediction accuracies are most strongly influenced by the nature of the clinical endpoint, whereas technological platforms (RNA-seq vs. microarrays), RNA-seq data analysis pipelines, and feature levels (gene vs. transcript vs. exon-junction level) do not significantly affect performances of the models. We demonstrate that RNA-seq outperforms microarrays in determining the transcriptomic characteristics of cancer, while RNA-seq and microarray-based models perform similarly in clinical endpoint prediction. Our findings may be valuable to guide future studies on the development of gene expression-based predictive models and their implementation in clinical practice.

  12. Density based pruning for identification of differentially expressed genes from microarray data

    Directory of Open Access Journals (Sweden)

    Xu Jia

    2010-11-01

Full Text Available Abstract Motivation Identification of differentially expressed genes from microarray datasets is one of the most important analyses for microarray data mining. Popular algorithms such as the statistical t-test rank genes based on a single statistic. The false positive rate of these methods can be improved by considering other features of differentially expressed genes. Results We proposed a pattern recognition strategy for identifying differentially expressed genes. Genes are mapped to a two-dimensional feature space composed of the average difference of gene expression and the average expression level. A density based pruning algorithm (DB Pruning) is developed to screen out potential differentially expressed genes usually located in the sparse boundary region. Biases of popular algorithms for identifying differentially expressed genes are visually characterized. Experiments on 17 datasets from the Gene Expression Omnibus (GEO) database with experimentally verified differentially expressed genes showed that DB pruning can significantly improve the prediction accuracy of popular identification algorithms such as t-test, rank product, and fold change. Conclusions Density based pruning of non-differentially expressed genes is an effective method for enhancing statistical testing based algorithms for identifying differentially expressed genes. It improves t-test, rank product, and fold change by 11% to 50% in the numbers of identified true differentially expressed genes. The source code of DB pruning is freely available on our website http://mleg.cse.sc.edu/degprune
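The pruning step can be pictured as a grid density filter over the two-dimensional feature space described above (average expression difference vs. average expression level): genes falling in densely populated cells are discarded, keeping candidates near the sparse boundary region. The cell size and density threshold below are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch of density-based pruning: bucket each gene's 2D
# feature point into a grid cell, then keep only genes in sparsely
# populated cells (candidate differentially expressed genes tend to
# lie in the sparse boundary of the feature space).

from collections import Counter

def db_prune(points, cell=1.0, max_density=2):
    """points: list of (avg_difference, avg_expression) per gene."""
    cells = Counter((int(x // cell), int(y // cell)) for x, y in points)
    return [i for i, (x, y) in enumerate(points)
            if cells[(int(x // cell), int(y // cell))] <= max_density]
```

Downstream, a statistic such as the t-test is then applied only to the surviving genes, which is how pruning lowers the false positive rate.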

  13. Integrative missing value estimation for microarray data.

    Science.gov (United States)

    Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine

    2006-10-12

Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in Stanford Microarray Database contain less than eight samples. We present the integrative Missing Value Estimation method (iMISS) by incorporating information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking reference data sets into consideration. To determine whether the given reference data sets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Square (LLS) imputation algorithm by up to 15% in our benchmark tests. We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over the state-of-the-art missing value estimation approaches such as LLS and are especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.

  14. DNA microarray-based PCR ribotyping of Clostridium difficile.

    Science.gov (United States)

    Schneeberg, Alexander; Ehricht, Ralf; Slickers, Peter; Baier, Vico; Neubauer, Heinrich; Zimmermann, Stefan; Rabold, Denise; Lübke-Becker, Antina; Seyboldt, Christian

    2015-02-01

    This study presents a DNA microarray-based assay for fast and simple PCR ribotyping of Clostridium difficile strains. Hybridization probes were designed to query the modularly structured intergenic spacer region (ISR), which is also the template for conventional and PCR ribotyping with subsequent capillary gel electrophoresis (seq-PCR) ribotyping. The probes were derived from sequences available in GenBank as well as from theoretical ISR module combinations. A database of reference hybridization patterns was set up from a collection of 142 well-characterized C. difficile isolates representing 48 seq-PCR ribotypes. The reference hybridization patterns calculated by the arithmetic mean were compared using a similarity matrix analysis. The 48 investigated seq-PCR ribotypes revealed 27 array profiles that were clearly distinguishable. The most frequent human-pathogenic ribotypes 001, 014/020, 027, and 078/126 were discriminated by the microarray. C. difficile strains related to 078/126 (033, 045/FLI01, 078, 126, 126/FLI01, 413, 413/FLI01, 598, 620, 652, and 660) and 014/020 (014, 020, and 449) showed similar hybridization patterns, confirming their genetic relatedness, which was previously reported. A panel of 50 C. difficile field isolates was tested by seq-PCR ribotyping and the DNA microarray-based assay in parallel. Taking into account that the current version of the microarray does not discriminate some closely related seq-PCR ribotypes, all isolates were typed correctly. Moreover, seq-PCR ribotypes without reference profiles available in the database (ribotype 009 and 5 new types) were correctly recognized as new ribotypes, confirming the performance and expansion potential of the microarray. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  15. Kernel Based Nonlinear Dimensionality Reduction and Classification for Genomic Microarray

    Directory of Open Access Journals (Sweden)

    Lan Shu

    2008-07-01

Full Text Available Genomic microarrays are powerful research tools in bioinformatics and modern medicinal research because they enable massively parallel assays and simultaneous monitoring of the expression of thousands of genes in biological samples. However, even a simple microarray experiment leads to very high-dimensional data, and this vast amount of information challenges researchers to extract the important features and reduce the dimensionality. In this paper, a kernel-based nonlinear dimensionality reduction method built on locally linear embedding (LLE) is proposed, and a fuzzy K-nearest neighbours algorithm, which denoises datasets, is introduced as a replacement for the classical KNN step of LLE. In addition, a kernel-based support vector machine (SVM) is used to classify genomic microarray data sets. We demonstrate the application of these techniques to two published DNA microarray data sets. The experimental results confirm the superiority and high success rates of the presented method.
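At the core of both kernel ingredients above (the kernelized LLE and the SVM) is a kernel function; a Gaussian (RBF) kernel between two expression profiles is the usual choice and is sketched here. The gamma value is an illustrative assumption, not a parameter reported by the paper.

```python
# Hedged sketch: Gaussian (RBF) kernel between two expression
# profiles, the similarity measure commonly plugged into kernel
# methods such as kernel LLE and the SVM. gamma is illustrative.

import math

def rbf_kernel(x, y, gamma=0.5):
    """exp(-gamma * squared Euclidean distance) between profiles."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))
```

Identical profiles score 1.0 and the similarity decays smoothly with distance, which is what lets the downstream methods capture nonlinear structure.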

  16. A new method for class prediction based on signed-rank algorithms applied to Affymetrix® microarray experiments

    Directory of Open Access Journals (Sweden)

    Vassal Aurélien

    2008-01-01

Full Text Available Abstract Background The huge amount of data generated by DNA chips is a powerful basis to classify various pathologies. However, constant evolution of microarray technology makes it difficult to mix data from different chip types for class prediction of limited sample populations. Affymetrix® technology provides both a quantitative fluorescence signal and a decision (detection call: absent or present) based on signed-rank algorithms applied to several hybridization repeats of each gene, with a per-chip normalization. We developed a new prediction method for class belonging based on the detection call only from recent Affymetrix chip types. Biological data were obtained by hybridization on U133A, U133B and U133Plus 2.0 microarrays of purified normal B cells and cells from three independent groups of multiple myeloma (MM) patients. Results After a call-based data reduction step to filter out non class-discriminative probe sets, the gene list obtained was reduced to a predictor with correction for multiple testing by iterative deletion of probe sets that sequentially improve inter-class comparisons and their significance. The error rate of the method was determined using leave-one-out and 5-fold cross-validation. It was successfully applied to (i) determine a sex predictor with the normal donor group, classifying gender with no error in all patient groups except for male MM samples with a Y chromosome deletion, (ii) predict the immunoglobulin light and heavy chains expressed by the malignant myeloma clones of the validation group and (iii) predict sex, light and heavy chain nature for every new patient. Finally, this method was shown to be powerful when compared to the popular classification method Prediction Analysis of Microarray (PAM). Conclusion This normalization-free method is routinely used for quality control and correction of collection errors in patient reports to clinicians. It can be easily extended to multiple class prediction suitable with
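The call-based data reduction step can be sketched as a filter that keeps only probe sets whose detection call is uniform within each class yet differs between classes. The data layout (a dict of per-sample 'P'/'A' calls) and the strict-uniformity criterion are assumptions for illustration, not the authors' exact filter, which also corrects for multiple testing.

```python
# Hedged sketch of call-based filtering: retain probe sets whose
# Affymetrix detection call ('P' present / 'A' absent) is identical
# within every class but not identical across classes.

def discriminative_probesets(calls, labels):
    """calls: dict probeset -> list of 'P'/'A' calls, one per sample."""
    keep = []
    for ps, sample_calls in calls.items():
        per_class = {}
        for call, label in zip(sample_calls, labels):
            per_class.setdefault(label, set()).add(call)
        uniform = all(len(s) == 1 for s in per_class.values())
        distinct = len({next(iter(s)) for s in per_class.values()}) > 1
        if uniform and distinct:
            keep.append(ps)
    return keep
```

Because it uses only the present/absent decision, a filter like this needs no cross-chip normalization of the fluorescence signal, which is the property the abstract emphasizes.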

  17. Broad spectrum microarray for fingerprint-based bacterial species identification

    Directory of Open Access Journals (Sweden)

    Frey Jürg E

    2010-02-01

    Full Text Available Abstract Background Microarrays are powerful tools for DNA-based molecular diagnostics and identification of pathogens. Most target a limited range of organisms and are based on only one or a very few genes for specific identification. Such microarrays are limited to organisms for which specific probes are available, and often have difficulty discriminating closely related taxa. We have developed an alternative broad-spectrum microarray that employs hybridisation fingerprints generated by high-density anonymous markers distributed over the entire genome for identification based on comparison to a reference database. Results A high-density microarray carrying 95,000 unique 13-mer probes was designed. Optimized methods were developed to deliver reproducible hybridisation patterns that enabled confident discrimination of bacteria at the species, subspecies, and strain levels. High correlation coefficients were achieved between replicates. A sub-selection of 12,071 probes, determined by ANOVA and class prediction analysis, enabled the discrimination of all samples in our panel. Mismatch probe hybridisation was observed but was found to have no effect on the discriminatory capacity of our system. Conclusions These results indicate the potential of our genome chip for reliable identification of a wide range of bacterial taxa at the subspecies level without laborious prior sequencing and probe design. With its high resolution capacity, our proof-of-principle chip demonstrates great potential as a tool for molecular diagnostics of broad taxonomic groups.

  18. Integrative missing value estimation for microarray data

    Directory of Open Access Journals (Sweden)

    Zhou Xianghong

    2006-10-01

Full Text Available Abstract Background Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in Stanford Microarray Database contain less than eight samples. Results We present the integrative Missing Value Estimation method (iMISS) by incorporating information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking reference data sets into consideration. To determine whether the given reference data sets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Square (LLS) imputation algorithm by up to 15% in our benchmark tests. Conclusion We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over the state-of-the-art missing value estimation approaches such as LLS and are especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.

  19. Improving medical diagnosis reliability using Boosted C5.0 decision tree empowered by Particle Swarm Optimization.

    Science.gov (United States)

    Pashaei, Elnaz; Ozen, Mustafa; Aydin, Nizamettin

    2015-08-01

Improving the accuracy of supervised classification algorithms in biomedical applications is an active area of research. In this study, we improve the performance of the Particle Swarm Optimization (PSO) combined with C4.5 decision tree (PSO+C4.5) classifier by applying a Boosted C5.0 decision tree as the fitness function. To evaluate the effectiveness of our proposed method, it is implemented on one microarray dataset and 5 different medical data sets obtained from UCI machine learning databases. Moreover, the results of the PSO + Boosted C5.0 implementation are compared to eight well-known benchmark classification methods (PSO+C4.5, support vector machine under the kernel of Radial Basis Function, Classification And Regression Tree (CART), C4.5 decision tree, C5.0 decision tree, Boosted C5.0 decision tree, Naive Bayes and Weighted K-Nearest neighbor). A repeated five-fold cross-validation method was used to justify the performance of classifiers. Experimental results show that our proposed method not only improves the performance of PSO+C4.5 but also obtains higher classification accuracy compared to the other classification methods.

  20. PIIKA 2: an expanded, web-based platform for analysis of kinome microarray data.

    Directory of Open Access Journals (Sweden)

    Brett Trost

Full Text Available Kinome microarrays are comprised of peptides that act as phosphorylation targets for protein kinases. This platform is growing in popularity due to its ability to measure phosphorylation-mediated cellular signaling in a high-throughput manner. While software for analyzing data from DNA microarrays has also been used for kinome arrays, differences between the two technologies and associated biologies previously led us to develop Platform for Intelligent, Integrated Kinome Analysis (PIIKA), a software tool customized for the analysis of data from kinome arrays. Here, we report the development of PIIKA 2, a significantly improved version with new features and improvements in the areas of clustering, statistical analysis, and data visualization. Among other additions to the original PIIKA, PIIKA 2 now allows the user to: evaluate statistically how well groups of samples cluster together; identify sets of peptides that have consistent phosphorylation patterns among groups of samples; perform hierarchical clustering analysis with bootstrapping; view false negative probabilities and positive and negative predictive values for t-tests between pairs of samples; easily assess experimental reproducibility; and visualize the data using volcano plots, scatterplots, and interactive three-dimensional principal component analyses. Also new in PIIKA 2 is a web-based interface, which allows users unfamiliar with command-line tools to easily provide input and download the results. Collectively, the additions and improvements described here enhance both the breadth and depth of analyses available, simplify the user interface, and make the software an even more valuable tool for the analysis of kinome microarray data. Both the web-based and stand-alone versions of PIIKA 2 can be accessed via http://saphire.usask.ca.

  1. PIIKA 2: an expanded, web-based platform for analysis of kinome microarray data.

    Science.gov (United States)

    Trost, Brett; Kindrachuk, Jason; Määttänen, Pekka; Napper, Scott; Kusalik, Anthony

    2013-01-01

    Kinome microarrays are comprised of peptides that act as phosphorylation targets for protein kinases. This platform is growing in popularity due to its ability to measure phosphorylation-mediated cellular signaling in a high-throughput manner. While software for analyzing data from DNA microarrays has also been used for kinome arrays, differences between the two technologies and associated biologies previously led us to develop Platform for Intelligent, Integrated Kinome Analysis (PIIKA), a software tool customized for the analysis of data from kinome arrays. Here, we report the development of PIIKA 2, a significantly improved version with new features and improvements in the areas of clustering, statistical analysis, and data visualization. Among other additions to the original PIIKA, PIIKA 2 now allows the user to: evaluate statistically how well groups of samples cluster together; identify sets of peptides that have consistent phosphorylation patterns among groups of samples; perform hierarchical clustering analysis with bootstrapping; view false negative probabilities and positive and negative predictive values for t-tests between pairs of samples; easily assess experimental reproducibility; and visualize the data using volcano plots, scatterplots, and interactive three-dimensional principal component analyses. Also new in PIIKA 2 is a web-based interface, which allows users unfamiliar with command-line tools to easily provide input and download the results. Collectively, the additions and improvements described here enhance both the breadth and depth of analyses available, simplify the user interface, and make the software an even more valuable tool for the analysis of kinome microarray data. Both the web-based and stand-alone versions of PIIKA 2 can be accessed via http://saphire.usask.ca.

  2. Robust Feature Selection from Microarray Data Based on Cooperative Game Theory and Qualitative Mutual Information

    Directory of Open Access Journals (Sweden)

    Atiyeh Mortazavi

    2016-01-01

    Full Text Available High dimensionality of microarray data sets may lead to low efficiency and overfitting. In this paper, a multiphase cooperative game theoretic feature selection approach is proposed for microarray data classification. In the first phase, due to the high dimension of microarray data sets, the features are reduced using one of two filter-based feature selection methods, namely, mutual information and Fisher ratio. In the second phase, the Shapley index is used to evaluate the power of each feature. The main innovation of the proposed approach is to employ Qualitative Mutual Information (QMI) for this purpose. Using Qualitative Mutual Information makes the selected features more stable, and this stability helps deal with the problems of data imbalance and scarcity. In the third phase, a forward selection scheme is applied which uses a scoring function to weight each feature. The performance of the proposed method is compared with other popular feature selection algorithms such as Fisher ratio, minimum redundancy maximum relevance, and previous work on cooperative game-based feature selection. The average classification accuracy on eleven microarray data sets shows that the proposed method improves both average accuracy and average stability compared to other approaches.
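
    The first, filter phase described above reduces dimensionality by ranking features against the class labels. The paper's Qualitative Mutual Information is not reproduced here; the following is a minimal stdlib sketch of a plain mutual-information filter for discretized expression values, with hypothetical names (`mutual_information`, `filter_rank`):

    ```python
    import math
    from collections import Counter

    def mutual_information(feature, labels):
        """Empirical mutual information I(X;Y) between discrete values, in bits."""
        n = len(feature)
        pxy = Counter(zip(feature, labels))
        px = Counter(feature)
        py = Counter(labels)
        mi = 0.0
        for (x, y), c in pxy.items():
            p_xy = c / n
            # p(x,y) * log2( p(x,y) / (p(x) p(y)) )
            mi += p_xy * math.log2(p_xy * n * n / (px[x] * py[y]))
        return mi

    def filter_rank(features, labels, k):
        """Rank feature columns by MI with the labels; keep the top k indices."""
        scored = [(mutual_information(col, labels), i) for i, col in enumerate(features)]
        scored.sort(reverse=True)
        return [i for _, i in scored[:k]]
    ```

    A feature that perfectly tracks the class labels scores 1 bit against binary labels, while an independent feature scores 0, so the filter keeps the informative columns.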

  3. Video-based training to improve perceptual-cognitive decision-making performance of Australian football umpires.

    Science.gov (United States)

    Larkin, Paul; Mesagno, Christopher; Berry, Jason; Spittle, Michael; Harvey, Jack

    2018-02-01

    Decision-making is a central component of the in-game performance of Australian football umpires; however, current umpire training focuses largely on physiological development with decision-making skills development conducted via explicit lecture-style meetings with limited practice devoted to making actual decisions. Therefore, this study investigated the efficacy of a video-based training programme, aimed to provide a greater amount of contextualised visual experiences without explicit instruction, to improve decision-making skills of umpires. Australian football umpires (n = 52) were recruited from metropolitan and regional Division 1 competitions. Participants were randomly assigned to an intervention or control group and classified according to previous umpire game experience (i.e., experienced; less experienced). The intervention group completed a 12-week video-based decision-making training programme, with decision-making performance assessed at pre-training, and 1-week retention and 3-week retention periods. The control group did not complete any video-based training. Results indicated a significant Group (Intervention; Control) × Test interaction (F(1, 100) = 3.98; P = 0.02, partial η² = 0.074), with follow-up pairwise comparisons indicating significant within-group differences over time for the intervention group. In addition, decision-making performance of the less experienced umpires in the intervention group significantly improved (F(2, 40) = 5.03, P = 0.01, partial η² = 0.201). Thus, video-based training programmes may be a viable adjunct to current training programmes to hasten decision-making development, especially for less experienced umpires.

  4. Facilitating RNA structure prediction with microarrays.

    Science.gov (United States)

    Kierzek, Elzbieta; Kierzek, Ryszard; Turner, Douglas H; Catrina, Irina E

    2006-01-17

    Determining RNA secondary structure is important for understanding structure-function relationships and identifying potential drug targets. This paper reports the use of microarrays with heptamer 2'-O-methyl oligoribonucleotides to probe the secondary structure of an RNA and thereby improve the prediction of that secondary structure. When experimental constraints from hybridization results are added to a free-energy minimization algorithm, the prediction of the secondary structure of Escherichia coli 5S rRNA improves from 27 to 92% of the known canonical base pairs. Optimization of buffer conditions for hybridization and application of 2'-O-methyl-2-thiouridine to enhance binding and improve discrimination between AU and GU pairs are also described. The results suggest that probing RNA with oligonucleotide microarrays can facilitate determination of secondary structure.

  5. Microarray MAPH: accurate array-based detection of relative copy number in genomic DNA

    Directory of Open Access Journals (Sweden)

    Chan Alan

    2006-06-01

    Full Text Available Abstract Background Current methods for measurement of copy number do not combine all the desirable qualities of convenience, throughput, economy, accuracy and resolution. In this study, to improve the throughput associated with Multiplex Amplifiable Probe Hybridisation (MAPH) we aimed to develop a modification based on the 3-Dimensional, Flow-Through Microarray Platform from PamGene International. In this new method, electrophoretic analysis of amplified products is replaced with photometric analysis of a probed oligonucleotide array. Copy number analysis of hybridised probes is based on a dual-label approach by comparing the intensity of Cy3-labelled MAPH probes amplified from test samples co-hybridised with similarly amplified Cy5-labelled reference MAPH probes. The key feature of using a hybridisation-based end point with MAPH is that discrimination of amplified probes is based on sequence and not fragment length. Results In this study we showed that microarray MAPH measurement of PMP22 gene dosage correlates well with PMP22 gene dosage determined by capillary MAPH and that copy number was accurately reported in analyses of DNA from 38 individuals, 12 of which were known to have Charcot-Marie-Tooth disease type 1A (CMT1A). Conclusion Measurement of microarray-based endpoints for MAPH appears to be of comparable accuracy to electrophoretic methods, and holds the prospect of fully exploiting the potential multiplicity of MAPH. The technology has the potential to simplify copy number assays for genes with a large number of exons, or of expanded sets of probes from dispersed genomic locations.

  6. Microarray MAPH: accurate array-based detection of relative copy number in genomic DNA.

    Science.gov (United States)

    Gibbons, Brian; Datta, Parikkhit; Wu, Ying; Chan, Alan; Al Armour, John

    2006-06-30

    Current methods for measurement of copy number do not combine all the desirable qualities of convenience, throughput, economy, accuracy and resolution. In this study, to improve the throughput associated with Multiplex Amplifiable Probe Hybridisation (MAPH) we aimed to develop a modification based on the 3-Dimensional, Flow-Through Microarray Platform from PamGene International. In this new method, electrophoretic analysis of amplified products is replaced with photometric analysis of a probed oligonucleotide array. Copy number analysis of hybridised probes is based on a dual-label approach by comparing the intensity of Cy3-labelled MAPH probes amplified from test samples co-hybridised with similarly amplified Cy5-labelled reference MAPH probes. The key feature of using a hybridisation-based end point with MAPH is that discrimination of amplified probes is based on sequence and not fragment length. In this study we showed that microarray MAPH measurement of PMP22 gene dosage correlates well with PMP22 gene dosage determined by capillary MAPH and that copy number was accurately reported in analyses of DNA from 38 individuals, 12 of which were known to have Charcot-Marie-Tooth disease type 1A (CMT1A). Measurement of microarray-based endpoints for MAPH appears to be of comparable accuracy to electrophoretic methods, and holds the prospect of fully exploiting the potential multiplicity of MAPH. The technology has the potential to simplify copy number assays for genes with a large number of exons, or of expanded sets of probes from dispersed genomic locations.
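
    The dual-label readout described above reduces, at its core, to comparing normalized Cy3/Cy5 intensity ratios per probe. A minimal sketch of that ratio calculation follows; the function name and the median normalization are illustrative assumptions, not the authors' actual pipeline:

    ```python
    from statistics import median

    def relative_copy_number(cy3, cy5, normal_copies=2):
        """Estimate per-probe copy number from dual-label intensities.

        cy3: test-sample intensities; cy5: matched reference intensities.
        Per-probe ratios are normalized by the array-wide median ratio so
        that a probe at the normal copy number scores ~normal_copies.
        """
        ratios = [t / r for t, r in zip(cy3, cy5)]
        med = median(ratios)
        return [normal_copies * r / med for r in ratios]
    ```

    Under this scheme, a probe within a heterozygous duplication such as the CMT1A region would score near 3 against a diploid background of 2.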

  7. Microarrays for Universal Detection and Identification of Phytoplasmas

    DEFF Research Database (Denmark)

    Nicolaisen, Mogens; Nyskjold, Henriette; Bertaccini, Assunta

    2013-01-01

    Detection and identification of phytoplasmas is a laborious process often involving nested PCR followed by restriction enzyme analysis and fine-resolution gel electrophoresis. To improve throughput, other methods are needed. Microarray technology offers a generic assay that can potentially detect and differentiate all types of phytoplasmas in one assay. The present protocol describes a microarray-based method for identification of phytoplasmas to the 16Sr group level.

  8. FiGS: a filter-based gene selection workbench for microarray data

    Directory of Open Access Journals (Sweden)

    Yun Taegyun

    2010-01-01

    Full Text Available Abstract Background The selection of genes that discriminate disease classes from microarray data is widely used for the identification of diagnostic biomarkers. Although various gene selection methods are currently available and some of them have shown excellent performance, no single method can retain the best performance for all types of microarray datasets. It is desirable to use a comparative approach to find the best gene selection result after rigorous testing of different methodological strategies for a given microarray dataset. Results FiGS is a web-based workbench that automatically compares various gene selection procedures and provides the optimal gene selection result for an input microarray dataset. FiGS builds up diverse gene selection procedures by aligning different feature selection techniques and classifiers. In addition to the highly reputed techniques, FiGS diversifies the gene selection procedures by incorporating gene clustering options in the feature selection step and different data pre-processing options in the classifier training step. All candidate gene selection procedures are evaluated by the .632+ bootstrap errors and listed with their classification accuracies and selected gene sets. FiGS runs on parallelized computing nodes that can handle heavy computations. FiGS is freely accessible at http://gexp.kaist.ac.kr/figs. Conclusion FiGS is a web-based application that automates an extensive search for the optimized gene selection analysis for a microarray dataset in a parallel computing environment. FiGS will provide both an efficient and comprehensive means of acquiring optimal gene sets that discriminate disease states from microarray datasets.
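
    FiGS scores each candidate procedure by its .632+ bootstrap error. The combining formula behind that estimate (Efron & Tibshirani, 1997) can be sketched as below; computing `err_train`, `err_boot`, and the no-information rate `gamma` for a real classifier is left out of this sketch:

    ```python
    def err_632plus(err_train, err_boot, gamma):
        """The .632+ bootstrap error combination (Efron & Tibshirani, 1997).

        err_train: resubstitution (training) error
        err_boot:  leave-one-out bootstrap error
        gamma:     no-information error rate
        """
        err_boot = min(err_boot, gamma)  # Err_1' in the paper
        if err_boot > err_train and gamma > err_train:
            # relative overfitting rate, in [0, 1]
            r = (err_boot - err_train) / (gamma - err_train)
        else:
            r = 0.0
        w = 0.632 / (1.0 - 0.368 * r)
        return (1.0 - w) * err_train + w * err_boot
    ```

    When there is no overfitting (r = 0) this reduces to the plain .632 estimator; as overfitting grows, the weight shifts toward the more pessimistic bootstrap error.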

  9. Microengineering methods for cell-based microarrays and high-throughput drug-screening applications

    International Nuclear Information System (INIS)

    Xu Feng; Wu Jinhui; Wang Shuqi; Gurkan, Umut Atakan; Demirci, Utkan; Durmus, Naside Gozde

    2011-01-01

    Screening for effective therapeutic agents from millions of drug candidates is costly, time consuming, and often faces concerns due to the extensive use of animals. To improve cost effectiveness, and to minimize animal testing in pharmaceutical research, in vitro monolayer cell microarrays with multiwell plate assays have been developed. Integration of cell microarrays with microfluidic systems has facilitated automated and controlled component loading, significantly reducing the consumption of the candidate compounds and the target cells. Even though these methods significantly increased the throughput compared to conventional in vitro testing systems and in vivo animal models, the cost associated with these platforms remains prohibitively high. Besides, there is a need for three-dimensional (3D) cell-based drug-screening models which can mimic the in vivo microenvironment and the functionality of the native tissues. Here, we present the state-of-the-art microengineering approaches that can be used to develop 3D cell-based drug-screening assays. We highlight the 3D in vitro cell culture systems with live cell-based arrays, microfluidic cell culture systems, and their application to high-throughput drug screening. We conclude that among the emerging microengineering approaches, bioprinting holds great potential to provide repeatable 3D cell-based constructs with high temporal, spatial control and versatility.

  10. Microengineering methods for cell-based microarrays and high-throughput drug-screening applications

    Energy Technology Data Exchange (ETDEWEB)

    Xu Feng; Wu Jinhui; Wang Shuqi; Gurkan, Umut Atakan; Demirci, Utkan [Department of Medicine, Demirci Bio-Acoustic-MEMS in Medicine (BAMM) Laboratory, Center for Biomedical Engineering, Brigham and Women's Hospital, Harvard Medical School, Boston, MA (United States); Durmus, Naside Gozde, E-mail: udemirci@rics.bwh.harvard.edu [School of Engineering and Division of Biology and Medicine, Brown University, Providence, RI (United States)

    2011-09-15

    Screening for effective therapeutic agents from millions of drug candidates is costly, time consuming, and often faces concerns due to the extensive use of animals. To improve cost effectiveness, and to minimize animal testing in pharmaceutical research, in vitro monolayer cell microarrays with multiwell plate assays have been developed. Integration of cell microarrays with microfluidic systems has facilitated automated and controlled component loading, significantly reducing the consumption of the candidate compounds and the target cells. Even though these methods significantly increased the throughput compared to conventional in vitro testing systems and in vivo animal models, the cost associated with these platforms remains prohibitively high. Besides, there is a need for three-dimensional (3D) cell-based drug-screening models which can mimic the in vivo microenvironment and the functionality of the native tissues. Here, we present the state-of-the-art microengineering approaches that can be used to develop 3D cell-based drug-screening assays. We highlight the 3D in vitro cell culture systems with live cell-based arrays, microfluidic cell culture systems, and their application to high-throughput drug screening. We conclude that among the emerging microengineering approaches, bioprinting holds great potential to provide repeatable 3D cell-based constructs with high temporal, spatial control and versatility.

  11. Improved elucidation of biological processes linked to diabetic nephropathy by single probe-based microarray data analysis.

    Directory of Open Access Journals (Sweden)

    Clemens D Cohen

    Full Text Available BACKGROUND: Diabetic nephropathy (DN) is a complex and chronic metabolic disease that evolves into a progressive fibrosing renal disorder. Effective transcriptomic profiling of slowly evolving disease processes such as DN can be problematic. The changes that occur are often subtle and can escape detection by conventional oligonucleotide DNA array analyses. METHODOLOGY/PRINCIPAL FINDINGS: We examined microdissected human renal tissue with or without DN using Affymetrix oligonucleotide microarrays (HG-U133A) by standard Robust Multi-array Analysis (RMA). Subsequent gene ontology analysis by Database for Annotation, Visualization and Integrated Discovery (DAVID) showed limited detection of biological processes previously identified as central mechanisms in the development of DN (e.g. inflammation and angiogenesis). This apparent lack of sensitivity may be associated with the gene-oriented averaging of oligonucleotide probe signals, as this includes signals from cross-hybridizing probes and gene annotation that is based on out-of-date genomic data. We then examined the same CEL file data using a different methodology to determine how well it could correlate transcriptomic data with observed biology. ChipInspector (CI) is based on single probe analysis and de novo gene annotation that bypasses probe set definitions. Both methods, RMA and CI, used at default settings yielded comparable numbers of differentially regulated genes. However, when verified by RT-PCR, the single probe based analysis demonstrated reduced background noise with enhanced sensitivity and fewer false positives. CONCLUSIONS/SIGNIFICANCE: Using a single probe based analysis approach with de novo gene annotation allowed an improved representation of the biological processes linked to the development and progression of DN. The improved analysis was exemplified by the detection of Wnt signaling pathway activation in DN, a process not previously reported to be involved in this disease.

  12. A dynamic bead-based microarray for parallel DNA detection

    International Nuclear Information System (INIS)

    Sochol, R D; Lin, L; Casavant, B P; Dueck, M E; Lee, L P

    2011-01-01

    A microfluidic system has been designed and constructed by means of micromachining processes to integrate both microfluidic mixing of mobile microbeads and hydrodynamic microbead arraying capabilities on a single chip to simultaneously detect multiple bio-molecules. The prototype system has four parallel reaction chambers, which include microchannels of 18 × 50 µm² cross-sectional area and a microfluidic mixing section of 22 cm length. Parallel detection of multiple DNA oligonucleotide sequences was achieved via molecular beacon probes immobilized on polystyrene microbeads of 16 µm diameter. Experimental results show quantitative detection of three distinct DNA oligonucleotide sequences from the Hepatitis C viral (HCV) genome with single base-pair mismatch specificity. Our dynamic bead-based microarray offers an effective microfluidic platform to increase parallelization of reactions and improve microbead handling for various biological applications, including bio-molecule detection, medical diagnostics and drug screening.

  13. Microarray-based ultra-high resolution discovery of genomic deletion mutations

    Science.gov (United States)

    2014-01-01

    Background Oligonucleotide microarray-based comparative genomic hybridization (CGH) offers an attractive possible route for the rapid and cost-effective genome-wide discovery of deletion mutations. CGH typically involves comparison of the hybridization intensities of genomic DNA samples with microarray chip representations of entire genomes, and has widespread potential application in experimental research and medical diagnostics. However, the power to detect small deletions is low. Results Here we use a graduated series of Arabidopsis thaliana genomic deletion mutations (of sizes ranging from 4 bp to ~5 kb) to optimize CGH-based genomic deletion detection. We show that the power to detect smaller deletions (4, 28 and 104 bp) depends upon oligonucleotide density (essentially the number of genome-representative oligonucleotides on the microarray chip), and determine the oligonucleotide spacings necessary to guarantee detection of deletions of specified size. Conclusions Our findings will enhance a wide range of research and clinical applications, and in particular will aid in the discovery of genomic deletions in the absence of a priori knowledge of their existence. PMID:24655320
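
    In CGH data, a deletion appears as a run of consecutive probes with depressed log2(test/reference) intensity ratios, and the paper's finding is that denser probe spacing is what makes short runs detectable. A minimal sketch of such a run-based caller (the cutoff and minimum run length are illustrative; production pipelines use segmentation and noise models):

    ```python
    import math

    def call_deletions(test, ref, log2_cut=-0.5, min_probes=3):
        """Flag runs of consecutive probes whose log2(test/ref) falls below
        a cutoff. Returns (start, end) index pairs, end exclusive."""
        low = [math.log2(t / r) < log2_cut for t, r in zip(test, ref)]
        calls, i = [], 0
        while i < len(low):
            if low[i]:
                j = i
                while j < len(low) and low[j]:
                    j += 1                      # extend the low-ratio run
                if j - i >= min_probes:
                    calls.append((i, j))        # long enough to call
                i = j
            else:
                i += 1
        return calls
    ```

    A hemizygous deletion halves the test intensity, giving log2(0.5) = -1 per probe, so the minimum run length directly ties detectable deletion size to probe density, mirroring the oligonucleotide-spacing argument in the abstract.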

  14. Neural signatures of experience-based improvements in deterministic decision-making

    OpenAIRE

    Tremel, Joshua J.; Laurent, Patryk A.; Wolk, David A.; Wheeler, Mark E.; Fiez, Julie A.

    2016-01-01

    Feedback about our choices is a crucial part of how we gather information and learn from our environment. It provides key information about decision experiences that can be used to optimize future choices. However, our understanding of the processes through which feedback translates into improved decision-making is lacking. Using neuroimaging (fMRI) and cognitive models of decision-making and learning, we examined the influence of feedback on multiple aspects of decision processes across learning.

  15. THE MAQC PROJECT: ESTABLISHING QC METRICS AND THRESHOLDS FOR MICROARRAY QUALITY CONTROL

    Science.gov (United States)

    Microarrays represent a core technology in pharmacogenomics and toxicogenomics; however, before this technology can successfully and reliably be applied in clinical practice and regulatory decision-making, standards and quality measures need to be developed. The Microarray Qualit...

  16. Parallel scan hyperspectral fluorescence imaging system and biomedical application for microarrays

    International Nuclear Information System (INIS)

    Liu Zhiyi; Ma Suihua; Liu Le; Guo Jihua; He Yonghong; Ji Yanhong

    2011-01-01

    Microarray research offers great potential for analysis of gene expression profiles and leads to greatly improved experimental throughput. A number of instruments have been reported for microarray detection, such as chemiluminescence, surface plasmon resonance, and fluorescence markers. Fluorescence imaging is popular for the readout of microarrays. In this paper we develop a quasi-confocal, multichannel parallel scan hyperspectral fluorescence imaging system for microarray research. Hyperspectral imaging records the entire emission spectrum for every voxel within the imaged area, in contrast to recording only fluorescence intensities as filter-based scanners do. Coupled with data analysis, the recorded spectral information allows for quantitative identification of the contributions of multiple, spectrally overlapping fluorescent dyes and elimination of unwanted artifacts. The mechanism of quasi-confocal imaging provides a high signal-to-noise ratio, and parallel scanning makes this approach a high-throughput technique for microarray analysis. This system is improved with a specifically designed spectrometer which can offer a spectral resolution of 0.2 nm, and operates with spatial resolutions ranging from 2 to 30 μm. Finally, the application of the system is demonstrated by reading out microarrays for identification of bacteria.
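
    Separating the contributions of spectrally overlapping dyes from a recorded emission spectrum is, at its simplest, a linear least-squares unmixing problem. A stdlib-only sketch for two known reference dye spectra (the function name is assumed; the instrument's actual analysis is not described at this level of detail):

    ```python
    def unmix_two_dyes(spectrum, dye_a, dye_b):
        """Least-squares abundances (alpha, beta) such that
        spectrum ~= alpha*dye_a + beta*dye_b, via the 2x2 normal equations."""
        aa = sum(x * x for x in dye_a)
        bb = sum(y * y for y in dye_b)
        ab = sum(x * y for x, y in zip(dye_a, dye_b))
        sa = sum(s * x for s, x in zip(spectrum, dye_a))
        sb = sum(s * y for s, y in zip(spectrum, dye_b))
        det = aa * bb - ab * ab          # Gram determinant; nonzero if spectra differ
        return ((sa * bb - sb * ab) / det, (sb * aa - sa * ab) / det)
    ```

    With the full spectrum recorded per voxel, this kind of fit recovers each dye's abundance even where a filter-based scanner would see only a single summed intensity.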

  17. Evaluation of gene importance in microarray data based upon probability of selection

    Directory of Open Access Journals (Sweden)

    Fu Li M

    2005-03-01

    Full Text Available Abstract Background Microarray devices permit a genome-scale evaluation of gene function. This technology has catalyzed biomedical research and development in recent years. As many important diseases can be traced to the gene level, a long-standing research problem is to identify specific gene expression patterns linking to metabolic characteristics that contribute to disease development and progression. The microarray approach offers an expedited solution to this problem. However, recognizing disease-related gene expression patterns embedded in microarray data remains a challenging issue. In selecting a small set of biologically significant genes for classifier design, the high data dimensionality inherent in this problem creates a substantial amount of uncertainty. Results Here we present a model for probability analysis of selected genes in order to determine their importance. Our contribution is that we show how to derive the P value of each selected gene in multiple gene selection trials based on different combinations of data samples and how to conduct a reliability analysis accordingly. The importance of a gene is indicated by its associated P value in that a smaller value implies higher information content from information theory. On the microarray data concerning the subtype classification of small round blue cell tumors, we demonstrate that the method is capable of finding the smallest set of genes (19 genes) with optimal classification performance, compared with results reported in the literature. Conclusion In classifier design based on microarray data, the probability value derived from gene selection based on multiple combinations of data samples enables an effective mechanism for reducing the tendency of fitting local data particularities.
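
    The reliability analysis above assigns each gene a P value from how often it is selected across repeated trials. A simplified binomial version of that idea, sketched here with illustrative names (the paper's exact null model may differ):

    ```python
    from math import comb

    def selection_p_value(k, n_trials, picked_per_trial, total_genes):
        """P-value that a gene is selected >= k times in n_trials by chance,
        modeling each trial as picking `picked_per_trial` of `total_genes`
        at random (binomial upper tail)."""
        p0 = picked_per_trial / total_genes   # null per-trial selection probability
        return sum(comb(n_trials, i) * p0**i * (1 - p0)**(n_trials - i)
                   for i in range(k, n_trials + 1))
    ```

    A gene that keeps reappearing across sample resamplings gets a vanishingly small P value, which is exactly the stability signal used to guard against fitting local data particularities.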

  18. MicroArray Facility: a laboratory information management system with extended support for Nylon based technologies

    Directory of Open Access Journals (Sweden)

    Beaudoing Emmanuel

    2006-09-01

    Full Text Available Abstract Background High throughput gene expression profiling (GEP) is becoming a routine technique in life science laboratories. With experimental designs that repeatedly span thousands of genes and hundreds of samples, relying on a dedicated database infrastructure is no longer an option. GEP technology is a fast moving target, with new approaches constantly broadening the field diversity. This technology heterogeneity, compounded by the informatics complexity of GEP databases, means that software developments have so far focused on mainstream techniques, leaving less typical yet established techniques such as Nylon microarrays at best partially supported. Results MAF (MicroArray Facility) is the laboratory database system we have developed for managing the design, production and hybridization of spotted microarrays. Although it can support the widely used glass microarrays and oligo-chips, MAF was designed with the specific idiosyncrasies of Nylon based microarrays in mind. Notably single channel radioactive probes, microarray stripping and reuse, vector control hybridizations and spike-in controls are all natively supported by the software suite. MicroArray Facility is MIAME supportive and dynamically provides feedback on missing annotations to help users estimate effective MIAME compliance. Genomic data such as clone identifiers and gene symbols are also directly annotated by MAF software using standard public resources. The MAGE-ML data format is implemented for full data export. Journalized database operations (audit tracking), data anonymization, material traceability and user/project level confidentiality policies are also managed by MAF. Conclusion MicroArray Facility is a complete data management system for microarray producers and end-users. Particular care has been devoted to adequately model Nylon based microarrays. The MAF system, developed and implemented in both private and academic environments, has proved a robust solution for

  19. Development and Use of Integrated Microarray-Based Genomic Technologies for Assessing Microbial Community Composition and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    J. Zhou; S.-K. Rhee; C. Schadt; T. Gentry; Z. He; X. Li; X. Liu; J. Liebich; S.C. Chong; L. Wu

    2004-03-17

    different microbial communities and processes at the NABIR-FRC in Oak Ridge, TN. One project involves the monitoring of the development and dynamics of the microbial community of a fluidized bed reactor (FBR) used for reducing nitrate and the other project monitors microbial community responses to stimulation of uranium reducing populations via ethanol donor additions in situ and in a model system. Additionally, we are developing novel strategies for increasing microarray hybridization sensitivity. Finally, great improvements to our methods of probe design were made by the development of a new computer program, CommOligo. CommOligo designs unique and group-specific oligo probes for whole-genomes, metagenomes, and groups of environmental sequences and uses a new global alignment algorithm to design single or multiple probes for each gene or group. We are now using this program to design a more comprehensive functional gene array for environmental studies. Overall, our results indicate that the 50mer-based microarray technology has potential as a specific and quantitative tool to reveal the composition of microbial communities and their dynamics important to processes within contaminated environments.

  20. Neural signatures of experience-based improvements in deterministic decision-making.

    Science.gov (United States)

    Tremel, Joshua J; Laurent, Patryk A; Wolk, David A; Wheeler, Mark E; Fiez, Julie A

    2016-12-15

    Feedback about our choices is a crucial part of how we gather information and learn from our environment. It provides key information about decision experiences that can be used to optimize future choices. However, our understanding of the processes through which feedback translates into improved decision-making is lacking. Using neuroimaging (fMRI) and cognitive models of decision-making and learning, we examined the influence of feedback on multiple aspects of decision processes across learning. Subjects learned correct choices to a set of 50 word pairs across eight repetitions of a concurrent discrimination task. Behavioral measures were then analyzed with both a drift-diffusion model and a reinforcement learning model. Parameter values from each were then used as fMRI regressors to identify regions whose activity fluctuates with specific cognitive processes described by the models. The patterns of intersecting neural effects across models support two main inferences about the influence of feedback on decision-making. First, frontal, anterior insular, fusiform, and caudate nucleus regions behave like performance monitors, reflecting errors in performance predictions that signal the need for changes in control over decision-making. Second, temporoparietal, supplementary motor, and putamen regions behave like mnemonic storage sites, reflecting differences in learned item values that inform optimal decision choices. As information about optimal choices is accrued, these neural systems dynamically adjust, likely shifting the burden of decision processing from controlled performance monitoring to bottom-up, stimulus-driven choice selection. Collectively, the results provide a detailed perspective on the fundamental ability to use past experiences to improve future decisions. Copyright © 2016 Elsevier B.V. All rights reserved.
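
    The drift-diffusion model used as a source of fMRI regressors above treats each choice as noisy evidence accumulating toward one of two bounds. A minimal Euler-step simulation sketch (parameter values are illustrative, not the study's fitted values):

    ```python
    import random

    def simulate_ddm(drift, boundary, noise=1.0, dt=0.001, max_t=5.0, rng=None):
        """One drift-diffusion trial: evidence random-walks until it crosses
        +boundary (choice 1) or -boundary (choice 0). Returns (choice, rt)."""
        rng = rng or random.Random()
        x, t = 0.0, 0.0
        sd = noise * dt ** 0.5          # per-step noise scales with sqrt(dt)
        while t < max_t:
            x += drift * dt + rng.gauss(0.0, sd)
            t += dt
            if x >= boundary:
                return 1, t
            if x <= -boundary:
                return 0, t
        return (1 if x > 0 else 0), t   # timeout: report the leaning side
    ```

    In a fitting context, the drift rate tends to rise with learning while response times fall, which is the kind of trial-by-trial parameter trajectory that can then be regressed against BOLD signal.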

  1. Involving patients in care decisions improves satisfaction: an outcomes-based quality improvement project.

    Science.gov (United States)

    Leff, Ellen W

    2004-05-01

    A home care agency used quality improvement processes to improve patient satisfaction survey ratings. The focus was on involving patients in decisions about their care. A multidisciplinary team developed creative strategies to increase staff awareness and enhance customer service skills, which had dramatic results.

  2. Does improved decision-making ability reduce the physiological demands of game-based activities in field sport athletes?

    Science.gov (United States)

    Gabbett, Tim J; Carius, Josh; Mulvey, Mike

    2008-11-01

    This study investigated the effects of video-based perceptual training on pattern recognition and pattern prediction ability in elite field sport athletes and determined whether enhanced perceptual skills influenced the physiological demands of game-based activities. Sixteen elite women soccer players (mean ± SD age, 18.3 ± 2.8 years) were allocated to either a video-based perceptual training group (N = 8) or a control group (N = 8). The video-based perceptual training group watched video footage of international women's soccer matches. Twelve training sessions, each 15 minutes in duration, were conducted during a 4-week period. Players performed assessments of speed (5-, 10-, and 20-m sprint), repeated-sprint ability (6 × 20-m sprints, with active recovery on a 15-second cycle), estimated maximal aerobic power (VO2max, multistage fitness test), and a game-specific video-based perceptual test of pattern recognition and pattern prediction before and after the 4 weeks of video-based perceptual training. The on-field assessments included time-motion analysis completed on all players during a standardized 45-minute small-sided training game, and assessments of passing, shooting, and dribbling decision-making ability. No significant changes were detected in speed, repeated-sprint ability, or estimated VO2max during the training period. However, video-based perceptual training improved decision accuracy and reduced the number of recall errors, indicating improved game awareness and decision-making ability. Importantly, the improvements in pattern recognition and prediction ability transferred to on-field improvements in passing, shooting, and dribbling decision-making skills. No differences were detected between groups for the time spent standing, walking, jogging, striding, and sprinting during the small-sided training game. These findings demonstrate that video-based perceptual training can be used effectively to enhance the decision-making ability of field sport athletes.

  3. Integration of microarray analysis into the clinical diagnosis of hematological malignancies: How much can we improve cytogenetic testing?

    Science.gov (United States)

    Peterson, Jess F.; Aggarwal, Nidhi; Smith, Clayton A.; Gollin, Susanne M.; Surti, Urvashi; Rajkovic, Aleksandar; Swerdlow, Steven H.; Yatsenko, Svetlana A.

    2015-01-01

    Purpose: To evaluate the clinical utility, diagnostic yield and rationale of integrating microarray analysis into the clinical diagnosis of hematological malignancies, in comparison with classical chromosome karyotyping/fluorescence in situ hybridization (FISH). Methods: G-banded chromosome analysis, FISH and microarray studies using customized CGH and CGH+SNP designs were performed on 27 samples from patients with hematological malignancies. A comprehensive comparison of the results obtained by the three methods was conducted to evaluate the benefits and limitations of these techniques for clinical diagnosis. Results: Overall, 89.7% of chromosomal abnormalities identified by karyotyping/FISH studies were also detectable by microarray. Among 183 acquired copy number alterations (CNAs) identified by microarray, 94 were additional findings revealed in 14 cases (52%), and at least 30% of CNAs were in genomic regions of diagnostic/prognostic significance. Approximately 30% of the novel alterations detected by microarray were >20 Mb in size. Balanced abnormalities were not detected by microarray; however, of the 19 apparently “balanced” rearrangements, 55% (6/11) of recurrent and 13% (1/8) of non-recurrent translocations had alterations at the breakpoints discovered by microarray. Conclusion: Microarray technology enables accurate, cost-effective and time-efficient whole-genome analysis at a resolution significantly higher than that of conventional karyotyping and FISH. Array-CGH showed an advantage in the identification of cryptic imbalances and in the detection of clonal aberrations in populations of non-dividing cancer cells and samples with poor chromosome morphology. The integration of microarray analysis into the cytogenetic diagnosis of hematologic malignancies has the potential to improve patient management by providing clinicians with additional disease-specific and potentially clinically actionable genomic alterations. PMID:26299921

  4. Current Knowledge on Microarray Technology - An Overview

    African Journals Online (AJOL)

    Erah

    This paper reviews basics and updates of each microarray technology and serves to .... through protein microarrays. Protein microarrays also known as protein chips are nothing but grids that ... conditioned media, patient sera, plasma and urine. Clontech ... based antibody arrays) is similar to membrane-based antibody ...

  5. Development and evaluation of a high-throughput, low-cost genotyping platform based on oligonucleotide microarrays in rice

    Directory of Open Access Journals (Sweden)

    Liu Bin

    2008-05-01

    Background: We report the development of a microarray platform for rapid and cost-effective genetic mapping, and its evaluation using rice as a model. In contrast to methods employing whole-genome tiling microarrays for genotyping, our method is based on low-cost spotted microarray production, focusing only on known polymorphic features. Results: We have produced a genotyping microarray for rice, comprising 880 single feature polymorphism (SFP) elements derived from insertions/deletions identified by aligning genomic sequences of the japonica cultivar Nipponbare and the indica cultivar 93-11. The SFPs were experimentally verified by hybridization with labeled genomic DNA prepared from the two cultivars. Using the genotyping microarrays, we found high levels of polymorphism across diverse rice accessions, and were able to classify all five subpopulations of rice with high bootstrap support. The microarrays were used for mapping of a gene conferring resistance to Magnaporthe grisea, the causative organism of rice blast disease, by quantitative genotyping of samples from a recombinant inbred line population pooled by phenotype. Conclusion: We anticipate this microarray-based genotyping platform, given its low cost per sample, to be particularly useful in applications requiring whole-genome molecular marker coverage across large numbers of individuals.

  6. SoFoCles: feature filtering for microarray classification based on gene ontology.

    Science.gov (United States)

    Papachristoudis, Georgios; Diplaris, Sotiris; Mitkas, Pericles A

    2010-02-01

    Marker gene selection has been an important research topic in the classification analysis of gene expression data. Current methods try to reduce the "curse of dimensionality" by using statistical intra-feature set calculations, or classifiers that are based on the given dataset. In this paper, we present SoFoCles, an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external, well-defined knowledge retrieved from the Gene Ontology. The notion of semantic similarity is used to derive genes that are involved in the same biological path during the microarray experiment, by enriching a feature set that has been initially produced with legacy methods. Among its other functionalities, SoFoCles offers a large repository of semantic similarity methods that are used in order to derive feature sets and marker genes. The structure and functionality of the tool are discussed in detail, as well as its ability to improve classification accuracy. Through experimental evaluation, SoFoCles is shown to outperform other classification schemes in terms of classification accuracy in two real datasets using different semantic similarity computation approaches.

  7. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua

    2004-01-01

    This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measure for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies with the quality of the cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.
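
    The rank statistic behind this kind of segmentation can be sketched in a few lines: compute the Mann-Whitney U between a candidate spot's pixel intensities and nearby background pixels, and keep the spot as foreground when U is near its maximum. This is a toy illustration of the test itself, not BASICA's actual (fast, approximate) implementation.

```python
def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U of sample_a versus sample_b, using midranks for ties.
    U ranges from 0 to len(a)*len(b); values near the maximum mean sample_a
    (e.g. candidate foreground pixels) tends to be brighter than sample_b."""
    combined = sorted((v, i) for i, v in enumerate(list(sample_a) + list(sample_b)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[combined[k][1]] = midrank
        i = j + 1
    n_a = len(sample_a)
    rank_sum_a = sum(ranks[:n_a])  # ranks of sample_a in the pooled ordering
    return rank_sum_a - n_a * (n_a + 1) / 2
```

    A spot whose pixels give a U close to `len(a) * len(b)` is clearly brighter than its local background; ambiguous mid-range values are where postprocessing of segmentation irregularities matters.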

  8. Carbohydrate microarrays

    DEFF Research Database (Denmark)

    Park, Sungjin; Gildersleeve, Jeffrey C; Blixt, Klas Ola

    2012-01-01

    In the last decade, carbohydrate microarrays have been core technologies for analyzing carbohydrate-mediated recognition events in a high-throughput fashion. A number of methods have been exploited for immobilizing glycans on the solid surface in a microarray format. This microarray...... of substrate specificities of glycosyltransferases. This review covers the construction of carbohydrate microarrays, detection methods of carbohydrate microarrays and their applications in biological and biomedical research....

  9. Autoregressive-model-based missing value estimation for DNA microarray time series data.

    Science.gov (United States)

    Choong, Miew Keen; Charbit, Maurice; Yan, Hong

    2009-01-01

    Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
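
    The autoregressive idea can be sketched minimally: fit AR coefficients by least squares on fully observed windows of the series, then forecast each missing point from its lagged neighbors. This is an illustration of AR-based imputation in general, not the ARLSimpute algorithm itself, which additionally exploits local similarity structure across genes.

```python
import numpy as np

def ar_impute(series, order=2):
    """Fill NaNs in a 1-D time series using an AR(order) model whose
    coefficients are fit by least squares on fully observed windows."""
    x = np.asarray(series, dtype=float)
    rows, targets = [], []
    for t in range(order, len(x)):
        window = x[t - order:t + 1]
        if not np.isnan(window).any():
            rows.append(window[:-1])    # lagged values x[t-order .. t-1]
            targets.append(window[-1])  # value to predict, x[t]
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    filled = x.copy()
    for t in range(len(x)):
        if np.isnan(filled[t]):
            lags = filled[t - order:t]
            if t >= order and not np.isnan(lags).any():
                filled[t] = lags @ coef    # AR forecast from the (imputed) past
            else:
                filled[t] = np.nanmean(x)  # crude fallback for early gaps
    return filled
```

    On a series generated by an exact AR(2) law the fit recovers the missing point; real expression series are noisy, which is where pooling information across similarly behaving genes pays off.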

  10. Ontology-based, Tissue MicroArray oriented, image centered tissue bank

    Directory of Open Access Journals (Sweden)

    Viti Federica

    2008-04-01

    Background: The Tissue MicroArray technique is becoming increasingly important in pathology for the validation of experimental data from transcriptomic analysis. This approach produces many images which need to be properly managed, if possible with an infrastructure able to support tissue sharing between institutes. Moreover, the available frameworks oriented to Tissue MicroArray provide good storage for clinical patient, sample treatment and block construction information, but their utility is limited by the lack of data integration with biomolecular information. Results: In this work we propose a Tissue MicroArray web-oriented system that supports researchers in managing bio-samples and, through the use of ontologies, enables tissue sharing aimed at the design of Tissue MicroArray experiments and the evaluation of results. Indeed, our system provides an ontological description both for pre-analysis tissue images and for post-process analysis image results, which is crucial for information exchange. Moreover, working with well-defined terms makes it possible to query web resources for literature articles to integrate both pathology and bioinformatics data. Conclusions: Using this system, users associate an ontology-based description with each image uploaded into the database and also integrate results with the ontological description of biosequences identified in every tissue. Moreover, it is possible to integrate the ontological description provided by the user with a fully compliant Gene Ontology definition, enabling statistical studies of the correlation between the analyzed pathology and the most commonly related biological processes.

  11. Direct calibration of PICKY-designed microarrays

    Directory of Open Access Journals (Sweden)

    Ronald Pamela C

    2009-10-01

    Background: Few microarrays have been quantitatively calibrated to identify optimal hybridization conditions, because it is difficult to precisely determine the hybridization characteristics of a microarray using biologically variable cDNA samples. Results: Using synthesized samples with known concentrations of specific oligonucleotides, a series of microarray experiments was conducted to evaluate microarrays designed by PICKY, an oligo microarray design software tool, and to test a direct microarray calibration method based on the PICKY-predicted, thermodynamically closest nontarget information. The complete set of microarray experiment results is archived in the GEO database under series accession number GSE14717. Additional data files and Perl programs described in this paper can be obtained from the website http://www.complex.iastate.edu under the PICKY Download area. Conclusion: PICKY-designed microarray probes are highly reliable over a wide range of hybridization temperatures and sample concentrations. The microarray calibration method reported here allows researchers to experimentally optimize their hybridization conditions. Because this method is straightforward and uses existing microarrays and relatively inexpensive synthesized samples, it can be used by any lab that uses microarrays designed by PICKY. In addition, other microarrays can be reanalyzed by PICKY to obtain the thermodynamically closest nontarget information for calibration.

  12. Microarray Я US: a user-friendly graphical interface to Bioconductor tools that enables accurate microarray data analysis and expedites comprehensive functional analysis of microarray results.

    Science.gov (United States)

    Dai, Yilin; Guo, Ling; Li, Meng; Chen, Yi-Bu

    2012-06-08

    Microarray data analysis presents a significant challenge to researchers who are unable to use the powerful Bioconductor and its numerous tools due to their lack of knowledge of the R language. Among the few existing software programs that offer a graphic user interface to Bioconductor packages, none has implemented a comprehensive strategy to address the accuracy and reliability issues of microarray data analysis arising from the well-known probe design problems associated with many widely used microarray chips. There is also a lack of tools that would expedite the functional analysis of microarray results. We present Microarray Я US, an R-based graphical user interface that implements over a dozen popular Bioconductor packages to offer researchers a streamlined workflow for routine differential microarray expression data analysis without the need to learn the R language. In order to enable a more accurate analysis and interpretation of microarray data, we incorporated the latest custom probe re-definitions and re-annotations for Affymetrix and Illumina chips. A versatile microarray results output utility tool was also implemented for easy and fast generation of input files for over 20 of the most widely used functional analysis software programs. Coupled with a well-designed user interface, Microarray Я US leverages cutting-edge Bioconductor packages for researchers with no knowledge of the R language. It also enables more reliable and accurate microarray data analysis and expedites downstream functional analysis of microarray results.

  13. A DNA microarray-based methylation-sensitive (MS)-AFLP hybridization method for genetic and epigenetic analyses.

    Science.gov (United States)

    Yamamoto, F; Yamamoto, M

    2004-07-01

    We previously developed a PCR-based DNA fingerprinting technique named the Methylation Sensitive (MS)-AFLP method, which permits comparative genome-wide scanning of methylation status with a manageable number of fingerprinting experiments. The technique uses the methylation sensitive restriction enzyme NotI in the context of the existing Amplified Fragment Length Polymorphism (AFLP) method. Here we report the successful conversion of this gel electrophoresis-based DNA fingerprinting technique into a DNA microarray hybridization technique (DNA Microarray MS-AFLP). By performing a total of 30 (15 x 2 reciprocal labeling) DNA Microarray MS-AFLP hybridization experiments on genomic DNA from two breast and three prostate cancer cell lines in all pairwise combinations, and Southern hybridization experiments using more than 100 different probes, we have demonstrated that the DNA Microarray MS-AFLP is a reliable method for genetic and epigenetic analyses. No statistically significant differences were observed in the number of differences between the breast-prostate hybridization experiments and the breast-breast or prostate-prostate comparisons.

  14. ELISA-BASE: an integrated bioinformatics tool for analyzing and tracking ELISA microarray data

    OpenAIRE

    White, Amanda M.; Collett, James R.; Seurynck-Servoss, Shannon L.; Daly, Don S.; Zangar, Richard C.

    2009-01-01

    Summary: ELISA-BASE is an open source database for capturing, organizing and analyzing enzyme-linked immunosorbent assay (ELISA) microarray data. ELISA-BASE is an extension of the BioArray Software Environment (BASE) database system.

  15. GEPAS, a web-based tool for microarray data analysis and interpretation

    Science.gov (United States)

    Tárraga, Joaquín; Medina, Ignacio; Carbonell, José; Huerta-Cepas, Jaime; Minguez, Pablo; Alloza, Eva; Al-Shahrour, Fátima; Vegas-Azcárate, Susana; Goetz, Stefan; Escobar, Pablo; Garcia-Garcia, Francisco; Conesa, Ana; Montaner, David; Dopazo, Joaquín

    2008-01-01

    Gene Expression Profile Analysis Suite (GEPAS) is one of the most complete and extensively used web-based packages for microarray data analysis. During its more than 5 years of activity it has been updated continuously to keep pace with the state of the art in the changing microarray data analysis arena. GEPAS offers diverse analysis options that include well-established as well as novel algorithms for normalization, gene selection, class prediction, clustering and functional profiling of the experiment. New options for time-course (or dose-response) experiments, microarray-based class prediction, new clustering methods and new tests for differential expression have been included. The new pipeliner module allows automating the execution of sequential analysis steps by means of a simple but powerful graphic interface. An extensive re-engineering of GEPAS has been carried out, which includes the use of web services and Web 2.0 technology features, a new user interface with persistent sessions and a new extended database of gene identifiers. GEPAS is nowadays the most quoted web tool in its field; it is used extensively by researchers in many countries, and its records indicate an average usage rate of 500 experiments per day. GEPAS is available at http://www.gepas.org. PMID:18508806

  16. Development, characterization and experimental validation of a cultivated sunflower (Helianthus annuus L.) gene expression oligonucleotide microarray.

    Science.gov (United States)

    Fernandez, Paula; Soria, Marcelo; Blesa, David; DiRienzo, Julio; Moschen, Sebastian; Rivarola, Maximo; Clavijo, Bernardo Jose; Gonzalez, Sergio; Peluffo, Lucila; Príncipi, Dario; Dosio, Guillermo; Aguirrezabal, Luis; García-García, Francisco; Conesa, Ana; Hopp, Esteban; Dopazo, Joaquín; Heinz, Ruth Amelia; Paniego, Norma

    2012-01-01

    Oligonucleotide-based microarrays with accurate gene coverage represent a key strategy for transcriptional studies in orphan species such as sunflower, H. annuus L., which lacks full genome sequences. The goal of this study was the development and functional annotation of a comprehensive sunflower unigene collection and the design and validation of a custom sunflower oligonucleotide-based microarray. A large-scale EST (>130,000 ESTs) curation, assembly and sequence annotation was performed using Blast2GO (www.blast2go.de). The EST assembly comprises 41,013 putative transcripts (12,924 contigs and 28,089 singletons). The resulting Sunflower Unigen Resource (SUR version 1.0) was used to design an oligonucleotide-based Agilent microarray for cultivated sunflower. This microarray includes a total of 42,326 features: 1,417 Agilent controls, 74 control probes for sunflower replicated 10 times (740 controls) and 40,169 different non-control probes. Microarray performance was validated using a model experiment examining the induction of senescence by water deficit. Pre-processing and differential expression analysis of Agilent microarrays was performed using the Bioconductor limma package, with differential expression assessed through p-values calculated by eBayes. This study thus provides a comprehensive sunflower unigene collection and a custom, validated sunflower oligonucleotide-based microarray using Agilent technology. Both the curated unigene collection and the validated oligonucleotide microarray provide key resources for sunflower genome analysis, transcriptional studies, and molecular breeding for crop improvement.

  17. Evaluation of chronic lymphocytic leukemia by BAC-based microarray analysis

    Directory of Open Access Journals (Sweden)

    McDaniel Lisa D

    2011-02-01

    Background: Chronic lymphocytic leukemia (CLL) is a highly variable disease with life expectancies ranging from months to decades. Cytogenetic findings play an integral role in defining the prognostic significance and treatment for individual patients. Results: We have evaluated 25 clinical cases from a tertiary cancer center that have an established diagnosis of CLL and for which there was prior cytogenetic and/or fluorescence in situ hybridization (FISH) data. We performed microarray-based comparative genomic hybridization (aCGH) using a bacterial artificial chromosome (BAC)-based microarray designed for the detection of known constitutional genetic syndromes. In 15 of the 25 cases, aCGH detected all copy number imbalances identified by prior cytogenetic and/or FISH studies. For the majority of those not detected, the aberrations were present at low levels of mosaicism. Furthermore, for 15 of the 25 cases, additional abnormalities were detected. Four of those cases had deletions that mapped to intervals implicated in inherited predisposition to CLL. For most cases, aCGH was able to detect abnormalities present in as few as 10% of cells. Although changes in ploidy are not easily discernible by aCGH, results for two cases illustrate the detection of additional copy gains and losses present within a mosaic tetraploid cell population. Conclusions: Our results illustrate the successful evaluation of CLL using a microarray optimized for the interrogation of inherited disorders, and the identification of alterations with possible relevance to CLL susceptibility.

  18. Improving comparability between microarray probe signals by thermodynamic intensity correction

    DEFF Research Database (Denmark)

    Bruun, G. M.; Wernersson, Rasmus; Juncker, Agnieszka

    2007-01-01

    different probes. It is therefore of great interest to correct for the variation between probes. Much of this variation is sequence dependent. We demonstrate that a thermodynamic model for hybridization of either DNA or RNA to a DNA microarray, which takes the sequence-dependent probe affinities...... determination of transcription start sites for a subset of yeast genes. In another application, we identify present/absent calls for probes hybridized to the sequenced Escherichia coli strain O157:H7 EDL933. The model improves the correct calls from 85 to 95% relative to raw intensity measures. The model thus...... makes applications which depend on comparisons between probes aimed at different sections of the same target more reliable....

  19. Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data.

    Science.gov (United States)

    Shah, M; Marchand, M; Corbeil, J

    2012-01-01

    One of the objectives of designing feature selection learning algorithms is to obtain classifiers that depend on a small number of attributes and have verifiable future performance guarantees. There are few, if any, approaches that successfully address the two goals simultaneously. To the best of our knowledge, algorithms that give theoretical bounds on future performance have not been proposed so far in the context of the classification of gene expression data. In this work, we investigate the premise of learning a conjunction (or disjunction) of decision stumps in the Occam's Razor, Sample Compression, and PAC-Bayes learning settings for identifying a small subset of attributes that can be used to perform reliable classification tasks. We apply the proposed approaches to gene identification from DNA microarray data and compare our results to those of well-known successful approaches proposed for the task. We show that our algorithm not only finds hypotheses with a much smaller number of genes while giving competitive classification accuracy, but also provides tight risk guarantees on future performance, unlike other approaches. The proposed approaches are general and extensible in terms of both designing novel algorithms and application to other domains.
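
    To make the hypothesis class concrete, here is a toy greedy learner for a conjunction of decision stumps: each stump is a single-feature threshold test, and the conjunction predicts positive only when every stump fires. This is a hypothetical sketch for illustration; the paper's algorithms select stumps via Occam's Razor / Sample Compression / PAC-Bayes bounds rather than this plain greedy covering.

```python
def stump_fires(x, stump):
    """A stump is (feature, threshold, sign): sign=1 requires x[f] > thr,
    sign=-1 requires x[f] <= thr."""
    f, thr, sign = stump
    return x[f] > thr if sign == 1 else x[f] <= thr

def learn_conjunction(X, y, max_stumps=3):
    """Greedily add stumps that keep every positive example while
    excluding as many remaining negatives as possible."""
    pos = [x for x, lab in zip(X, y) if lab == 1]
    neg = [x for x, lab in zip(X, y) if lab == 0]
    n_features = len(X[0])
    stumps = []
    while neg and len(stumps) < max_stumps:
        best, best_excluded = None, 0
        for f in range(n_features):
            for thr in sorted({x[f] for x in X}):
                for sign in (1, -1):
                    cand = (f, thr, sign)
                    if all(stump_fires(x, cand) for x in pos):
                        excluded = sum(1 for x in neg if not stump_fires(x, cand))
                        if excluded > best_excluded:
                            best, best_excluded = cand, excluded
        if best is None:
            break  # no stump consistent with the positives helps further
        stumps.append(best)
        neg = [x for x in neg if stump_fires(x, best)]
    return stumps

def predict(x, stumps):
    """Conjunction: positive only if every stump fires."""
    return int(all(stump_fires(x, s) for s in stumps))
```

    The appeal of this hypothesis class for microarray data is that the learned rule touches only a handful of features (genes), which is exactly what makes sample-compression-style risk bounds tight.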

  20. Improved learning in U.S. history and decision competence with decision-focused curriculum.

    Directory of Open Access Journals (Sweden)

    David Jacobson

    Decision making is rarely taught in high school, even though improved decision skills could benefit young people facing life-shaping decisions. While decision competence has been shown to correlate with better life outcomes, few interventions designed to improve decision skills have been evaluated with rigorous quantitative measures. A randomized study showed that integrating decision making into U.S. history instruction improved students' history knowledge and decision-making competence, compared to traditional history instruction. Thus, integrating decision training enhanced academic performance and improved an important, general life skill associated with improved life outcomes.

  1. An improved K-means clustering method for cDNA microarray image segmentation.

    Science.gov (United States)

    Wang, T N; Li, T J; Shao, G F; Wu, S X

    2015-07-14

    Microarray technology is a powerful tool for human genetic research and other biomedical applications. Numerous improvements to the standard K-means algorithm have been carried out to complete the image segmentation step. However, most of the previous studies classify the image into two clusters. In this paper, we propose a novel K-means algorithm that first classifies the image into three clusters and then designates one of the three clusters as the background region and the other two as the foreground region. The proposed method was evaluated on six different data sets. The analyses of accuracy, efficiency, expression values, special gene spots, and noise images demonstrate the effectiveness of our method in improving the segmentation quality.
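
    A toy version of the three-cluster idea on raw pixel intensities can be sketched as follows: run plain Lloyd's k-means with k=3 and treat the darkest cluster as background and the two brighter clusters together as foreground. This is only an illustration of the clustering scheme under that assumption, not the paper's full method, which operates on cDNA image data with further refinements.

```python
def kmeans_1d(values, k=3, iters=50):
    """Plain Lloyd's k-means on scalar intensities; returns sorted centers."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda c: abs(v - centers[c]))
            buckets[idx].append(v)
        centers = [sum(b) / len(b) if b else centers[i]
                   for i, b in enumerate(buckets)]
    return sorted(centers)

def segment(pixels, k=3):
    """Label each pixel 0 (background) or 1 (foreground): the darkest of
    the k=3 clusters is background; the two brighter clusters are spots."""
    centers = kmeans_1d(pixels, k)
    background = centers[0]
    return [0 if min(centers, key=lambda c: abs(p - c)) == background else 1
            for p in pixels]
```

    Using two clusters for the foreground lets dim and bright spot pixels occupy separate clusters instead of forcing dim spots into the background, which is the motivation the abstract gives for moving beyond two-cluster segmentation.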

  2. Identification of listeria species isolated in Tunisia by Microarray based assay : results of a preliminary study

    International Nuclear Information System (INIS)

    Hmaied, Fatma; Helel, Salma; Barkallah, Insaf; Leberre, V.; Francois, J.M.; Kechrid, A.

    2008-01-01

    Microarray-based assay is a new molecular approach for genetic screening and identification of microorganisms. We have developed a rapid microarray-based assay for the reliable detection and discrimination of Listeria spp. in food and clinical isolates from Tunisia. The method used in the present study is based on PCR amplification of a virulence factor gene (the iap gene). The PCR mixture contained cyanine Cy5-labeled dCTP, so the PCR products were fluorescently labeled. The presence of multiple species-specific sequences within the iap gene enabled us to design different oligoprobes per species. The species-specific sequences of the iap gene used in this study were obtained from GenBank and then aligned for phylogenetic analysis in order to identify and retrieve the sequences of homologues of the amplified iap gene analysed. Twenty probes were used for the detection and identification of 22 food and clinical isolates of Listeria spp. (L. monocytogenes, L. ivanovii, L. welshimeri, L. seeligeri, and L. grayi). Each bacterial gene was identified by hybridization to oligoprobes specific for each Listeria species and immobilized on a glass surface. The microarray analysis showed that 5 clinical isolates and 2 food isolates were identified as Listeria monocytogenes. Of the remaining 15 food isolates, 13 were identified as Listeria innocua and 2 isolates could not be identified by the microarray-based assay. Further phylogenetic and molecular analyses are required to design more species-specific probes for the identification of Listeria spp. The microarray-based assay is a simple and rapid method for Listeria species discrimination.

  3. Improving societal acceptance of rad waste management policy decisions: an approach based on complex intelligence

    International Nuclear Information System (INIS)

    Rao, Suman

    2008-01-01

    In today's context, elaborate public participation exercises are conducted around the world to elicit and incorporate societal risk perceptions into nuclear policy decision-making. However, on many occasions, such as in the case of rad waste management, society remains unconvinced by these decisions. This naturally leads to the questions: are techniques for incorporating societal risk perceptions into rad waste policy decision-making processes sufficiently mature? How could societal risk perceptions and legal normative principles be better integrated in order to render the decisions more equitable and convincing to society? Based on guidance from socio-psychological research, this paper postulates that a critical factor for gaining and improving societal acceptance is the quality and adequacy of the criteria for option evaluation used in policy decision-making. After surveying three rad waste public participation cases, the paper identifies key lacunae in criteria abstraction processes as currently practiced. A new policy decision support model, CIRDA (Complex Intelligent Risk Discourse Abstraction model), based on the heuristic of Risk-Risk Analysis, is proposed to overcome these lacunae. CIRDA's functionality for rad waste policy decision-making is modelled as a policy decision-making Abstract Intelligent Agent, and the agent program/abstraction mappings are presented. CIRDA is then applied to a live U.K. rad waste management case, and the advantages of this method compared to the Value Tree Method as practiced in the GB case are demonstrated. (author)

  4. Design of an Enterobacteriaceae Pan-genome Microarray Chip

    DEFF Research Database (Denmark)

    Lukjancenko, Oksana; Ussery, David

    2010-01-01

    A high-density microarray chip has been designed, using 116 Enterobacteriaceae genome sequences, taking into account the enteric pan-genome. Probes for the microarray were checked in silico, and the performance of the chip, based on experimental strains from four different genera, demonstrates a relatively high ability...... to distinguish those strains at the genus, species, and pathotype/serovar levels. Additionally, the microarray performed well when investigating which genes were found in a given strain of interest. The Enterobacteriaceae pan-genome microarray, based on 116 genomes, provides a valuable tool for determination...

  5. Improving accountability in vaccine decision-making.

    Science.gov (United States)

    Timmis, James Kenneth; Black, Steven; Rappuoli, Rino

    2017-11-01

    Healthcare decisions, in particular those affecting entire populations, should be evidence-based and taken by decision-makers sharing broad alignment with affected stakeholders. However, criteria, priorities and procedures for decision-making are sometimes non-transparent, frequently vary considerably across equivalent decision bodies, do not always consider the broader benefits of new health measures, and therefore do not necessarily adequately represent the relevant stakeholder spectrum. Areas covered: To address these issues in the context of the evaluation of new vaccines, we have proposed a first baseline set of core evaluation criteria, primarily selected by members of the vaccine research community, and suggested their implementation in vaccine evaluation procedures. In this communication, we review the consequences and utility of stakeholder-centered core considerations for increasing transparency in and accountability of decision-making procedures in general, and the benefits gained by their inclusion in multi-criteria decision analysis tools, exemplified by SMART Vaccines, specifically. Expert commentary: To increase the effectiveness and comparability of health decision outcomes, decision procedures should be properly standardized across equivalent (national) decision bodies. To this end, including stakeholder-centered criteria in decision procedures would significantly increase their transparency and accountability, support international capacity building to improve health, and reduce the societal costs and inequity resulting from suboptimal health decision-making.

  6. A permutation-based multiple testing method for time-course microarray experiments

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2009-10-01

    Abstract Background Time-course microarray experiments are widely used to study the temporal profiles of gene expression. Storey et al. (2005) developed a method for analyzing time-course microarray studies that can be applied to discovering genes whose expression trajectories change over time within a single biological group, or those that follow different time trajectories among multiple groups. They estimated the expression trajectories of each gene using natural cubic splines under the null (no time-course) and alternative (time-course) hypotheses, and used a goodness-of-fit test statistic to quantify the discrepancy. The null distribution of the statistic was approximated through a bootstrap method. Gene expression levels in microarray data often exhibit complex correlation structure. Accurate type I error control adjusting for multiple testing requires the joint null distribution of the test statistics for a large number of genes. For this purpose, permutation methods have been widely used because of their computational ease and intuitive interpretation. Results In this paper, we propose a permutation-based multiple testing procedure based on the test statistic used by Storey et al. (2005). We also propose an efficient computation algorithm. Extensive simulations are conducted to investigate the performance of the permutation-based multiple testing procedure. The application of the proposed method is illustrated using the Caenorhabditis elegans dauer developmental data. Conclusion Our method is computationally efficient and applicable for identifying genes whose expression levels are time-dependent in a single biological group and for identifying genes for which the time-profile depends on the group in a multi-group setting.
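The core idea of the abstract, using permutations to approximate the joint null distribution of many correlated per-gene statistics, can be sketched generically. The snippet below is a hedged illustration only: it substitutes a two-group |t| statistic for the spline goodness-of-fit statistic of Storey et al. and uses the standard max-statistic adjustment for family-wise error control.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_maxT(expr, labels, n_perm=500):
    """FWER-adjusted p-values from a max-|t| permutation null.
    expr: genes x samples; labels: 0/1 group membership per sample.
    The two-group |t| statistic stands in for the goodness-of-fit
    statistic of Storey et al."""
    def tstats(y):
        a, b = expr[:, y == 0], expr[:, y == 1]
        num = a.mean(axis=1) - b.mean(axis=1)
        den = np.sqrt(a.var(axis=1, ddof=1) / a.shape[1]
                      + b.var(axis=1, ddof=1) / b.shape[1])
        return np.abs(num / den)
    obs = tstats(labels)
    max_null = np.array([tstats(rng.permutation(labels)).max()
                         for _ in range(n_perm)])
    # adjusted p-value per gene: fraction of permutations whose
    # maximum statistic reaches that gene's observed statistic
    return (max_null[None, :] >= obs[:, None]).mean(axis=1)

# Toy data: 50 genes, 10 + 10 samples, gene 0 truly differential
labels = np.array([0] * 10 + [1] * 10)
expr = rng.normal(size=(50, 20))
expr[0, labels == 1] += 4.0
padj = permutation_maxT(expr, labels)
```

Because the same permutation is applied to all genes at once, the null preserves inter-gene correlation, which single-gene permutation nulls would lose.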

  7. Bayes multiple decision functions

    OpenAIRE

    Wu, Wensong; Peña, Edsel A.

    2013-01-01

    This paper deals with the problem of simultaneously making many ($M$) binary decisions based on one realization of a random data matrix $\mathbf{X}$. $M$ is typically large and $\mathbf{X}$ will usually have $M$ rows associated with each of the $M$ decisions to make, but for each row the data may be low-dimensional. Such problems arise in many practical areas such as the biological and medical sciences, where the available dataset is from microarrays or other high-throughput technology and wi...

  8. Translating microarray data for diagnostic testing in childhood leukaemia

    International Nuclear Information System (INIS)

    Hoffmann, Katrin; Firth, Martin J; Beesley, Alex H; Klerk, Nicholas H de; Kees, Ursula R

    2006-01-01

    Recent findings from microarray studies have raised the prospect of a standardized diagnostic gene expression platform to enhance accurate diagnosis and risk stratification in paediatric acute lymphoblastic leukaemia (ALL). However, the robustness as well as the format for such a diagnostic test remain to be determined. As a step towards clinical application of these findings, we have systematically analyzed a published ALL microarray data set using Robust Multi-array Analysis (RMA) and Random Forest (RF). We examined published microarray data from 104 ALL patient specimens, representing six different subgroups defined by cytogenetic features and immunophenotypes. Using the decision-tree-based supervised learning algorithm RF, we determined a small set of genes for optimal subgroup distinction and subsequently validated their predictive power in an independent patient cohort. We achieved very high overall ALL subgroup prediction accuracies of about 98%, and were able to verify the robustness of these genes in an independent panel of 68 specimens obtained from a different institution and processed in a different laboratory. Our study established that the selection of discriminating genes is strongly dependent on the analysis method. This may have profound implications for clinical use, particularly when the classifier is reduced to a small set of genes. We have demonstrated that as few as 26 genes yield accurate class prediction and, importantly, almost 70% of these genes have not been previously identified as essential for class distinction of the six ALL subgroups. Our finding supports the feasibility of qRT-PCR technology for standardized diagnostic testing in paediatric ALL and should, in conjunction with conventional cytogenetics, lead to a more accurate classification of the disease. In addition, we have demonstrated that microarray findings from one study can be confirmed in an independent study, using an entirely independent patient cohort.
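Random Forest itself is too long to reproduce here, but the overall pattern the abstract describes — select a small discriminating gene panel on one cohort, then verify prediction on an independent cohort — can be sketched with simpler stand-ins. The F-statistic ranking and nearest-centroid classifier below are illustrative substitutes, not the authors' RF pipeline, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_scores(X, y):
    """One-way ANOVA F statistic per gene (rows of X are genes).
    Used here as a simple stand-in for Random Forest importance."""
    classes = np.unique(y)
    grand = X.mean(axis=1, keepdims=True)
    between = sum((y == c).sum()
                  * (X[:, y == c].mean(axis=1, keepdims=True) - grand) ** 2
                  for c in classes).ravel() / (len(classes) - 1)
    within = sum(((X[:, y == c] - X[:, y == c].mean(axis=1, keepdims=True)) ** 2)
                 .sum(axis=1) for c in classes) / (X.shape[1] - len(classes))
    return between / within

def fit_centroids(X, y, genes):
    """Per-class mean profile over the selected gene panel."""
    return {c: X[np.ix_(genes, np.where(y == c)[0])].mean(axis=1)
            for c in np.unique(y)}

def predict(X, genes, centroids):
    sub = X[genes]
    return np.array([min(centroids,
                         key=lambda c: np.linalg.norm(sub[:, j] - centroids[c]))
                     for j in range(sub.shape[1])])

# Simulated training cohort: 3 subgroups, 200 genes, 5 informative per class
y_train = np.repeat([0, 1, 2], 20)
X_train = rng.normal(size=(200, 60))
for c in range(3):
    X_train[c * 5:(c + 1) * 5, y_train == c] += 2.5
panel = np.argsort(f_scores(X_train, y_train))[::-1][:15]  # small gene panel
centroids = fit_centroids(X_train, y_train, panel)

# Simulated "independent cohort" drawn from the same generative model
y_test = np.repeat([0, 1, 2], 10)
X_test = rng.normal(size=(200, 30))
for c in range(3):
    X_test[c * 5:(c + 1) * 5, y_test == c] += 2.5
accuracy = (predict(X_test, panel, centroids) == y_test).mean()
```

On this toy three-subgroup problem a 15-gene panel suffices; the study's finding is analogous in spirit, with 26 genes carrying six-subgroup prediction at about 98% accuracy.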

  9. The Plasmodium falciparum Sexual Development Transcriptome: A Microarray Analysis using Ontology-Based Pattern Identification

    National Research Council Canada - National Science Library

    Young, Jason A; Fivelman, Quinton L; Blair, Peter L; de la Vega, Patricia; Le Roch, Karine G; Zhou, Yingyao; Carucci, Daniel J; Baker, David A; Winzeler, Elizabeth A

    2005-01-01

    ... a full-genome high-density oligonucleotide microarray. The interpretation of this transcriptional data was aided by applying a novel knowledge-based data-mining algorithm termed ontology-based pattern identification (OPI)...

  10. Classification across gene expression microarray studies

    Directory of Open Access Journals (Sweden)

    Kuner Ruprecht

    2009-12-01

    Abstract Background The increasing number of gene expression microarray studies represents an important resource in biomedical research. As a result, gene expression based diagnosis has entered clinical practice for patient stratification in breast cancer. However, the integration and combined analysis of microarray studies remains a challenge. We assessed the potential benefit of data integration on the classification accuracy and systematically evaluated the generalization performance of selected methods on four breast cancer studies comprising almost 1000 independent samples. To this end, we introduced an evaluation framework which aims to establish good statistical practice and a graphical way to monitor differences. The classification goal was to correctly predict estrogen receptor status (negative/positive) and histological grade (low/high) of each tumor sample in an independent study which was not used for the training. For the classification we chose support vector machines (SVM), predictive analysis of microarrays (PAM), random forest (RF) and k-top scoring pairs (kTSP). Guided by considerations relevant for classification across studies, we developed a generalization of kTSP which we evaluated in addition. Our derived version (DV) aims to improve the robustness of the intrinsic invariance of kTSP with respect to technologies and preprocessing. Results For each individual study the generalization error was benchmarked via complete cross-validation and was found to be similar for all classification methods. The misclassification rates were substantially higher in classification across studies, when each single study was used as an independent test set while all remaining studies were combined for the training of the classifier. However, with increasing number of independent microarray studies used in the training, the overall classification performance improved. DV performed better than the average and showed slightly less variance. In
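The kTSP family of classifiers underlying DV is rank-based: a prediction depends only on whether one gene's expression exceeds another's within the same sample, which is invariant to any monotone per-study transformation and hence attractive for cross-study classification. A minimal single-pair (TSP) sketch on simulated data (the full kTSP majority vote and the authors' DV extension are omitted):

```python
import numpy as np

def fit_tsp(X, y):
    """Find the gene pair (i, j) maximizing
    |P(X_i < X_j | y=0) - P(X_i < X_j | y=1)|; rows of X are genes."""
    best, pair = -1.0, None
    for i in range(X.shape[0]):
        for j in range(i + 1, X.shape[0]):
            p0 = (X[i, y == 0] < X[j, y == 0]).mean()
            p1 = (X[i, y == 1] < X[j, y == 1]).mean()
            if abs(p0 - p1) > best:
                best, pair = abs(p0 - p1), (i, j, p0 > p1)
    return pair

def predict_tsp(X, pair):
    """Class 0 when the within-sample ordering matches the class-0 pattern.
    Only the rank comparison X_i < X_j is used, so any monotone
    per-study normalisation leaves predictions unchanged."""
    i, j, class0_if_less = pair
    return np.where((X[i] < X[j]) == class0_if_less, 0, 1)

# Toy two-class data: gene 0 sits below gene 1 in class 0, above in class 1
rng = np.random.default_rng(2)
y = np.array([0] * 15 + [1] * 15)
X = rng.normal(size=(10, 30))
X[0, y == 0] -= 2.0
X[0, y == 1] += 2.0
pair = fit_tsp(X, y)
acc = (predict_tsp(X, pair) == y).mean()
```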

  11. A DNA Microarray-Based Assay to Detect Dual Infection with Two Dengue Virus Serotypes

    Directory of Open Access Journals (Sweden)

    Alvaro Díaz-Badillo

    2014-04-01

    Here we describe and test a microarray-based method for the screening of dengue virus (DENV) serotypes. This DNA microarray assay is specific and sensitive and can detect dual infections with two dengue virus serotypes as well as single-serotype infections. Other methodologies may underestimate samples containing more than one serotype. This technology can be used to discriminate between the four DENV serotypes. Single-stranded DNA targets were covalently attached to glass slides and hybridized with specific labelled probes. DENV isolates and dengue samples were used to evaluate microarray performance. Our results demonstrate that the probes hybridized specifically to DENV serotypes, with no detection of nonspecific signals. This finding provides evidence that specific probes can effectively identify single and double infections in DENV samples.

  12. A microarray-based genotyping and genetic mapping approach for highly heterozygous outcrossing species enables localization of a large fraction of the unassembled Populus trichocarpa genome sequence.

    Science.gov (United States)

    Drost, Derek R; Novaes, Evandro; Boaventura-Novaes, Carolina; Benedict, Catherine I; Brown, Ryan S; Yin, Tongming; Tuskan, Gerald A; Kirst, Matias

    2009-06-01

    Microarrays have demonstrated significant power for genome-wide analyses of gene expression, and recently have also revolutionized the genetic analysis of segregating populations by genotyping thousands of loci in a single assay. Although microarray-based genotyping approaches have been successfully applied in yeast and several inbred plant species, their power has not been proven in an outcrossing species with extensive genetic diversity. Here we have developed methods for high-throughput microarray-based genotyping in such species using a pseudo-backcross progeny of 154 individuals of Populus trichocarpa and P. deltoides analyzed with long-oligonucleotide in situ-synthesized microarray probes. Our analysis resulted in high-confidence genotypes for 719 single-feature polymorphism (SFP) and 1014 gene expression marker (GEM) candidates. Using these genotypes and an established microsatellite (SSR) framework map, we produced a high-density genetic map comprising over 600 SFPs, GEMs and SSRs. The abundance of gene-based markers allowed us to localize over 35 million base pairs of previously unplaced whole-genome shotgun (WGS) scaffold sequence to putative locations in the genome of P. trichocarpa. A high proportion of sampled scaffolds could be verified for their placement with independently mapped SSRs, demonstrating the previously un-utilized power that high-density genotyping can provide in the context of map-based WGS sequence reassembly. Our results provide a substantial contribution to the continued improvement of the Populus genome assembly, while demonstrating the feasibility of microarray-based genotyping in a highly heterozygous population. The strategies presented are applicable to genetic mapping efforts in all plant species with similarly high levels of genetic diversity.

  13. 16S rRNA gene-based phylogenetic microarray for simultaneous identification of members of the genus Burkholderia.

    Science.gov (United States)

    Schönmann, Susan; Loy, Alexander; Wimmersberger, Céline; Sobek, Jens; Aquino, Catharine; Vandamme, Peter; Frey, Beat; Rehrauer, Hubert; Eberl, Leo

    2009-04-01

    For cultivation-independent and highly parallel analysis of members of the genus Burkholderia, an oligonucleotide microarray (phylochip) consisting of 131 hierarchically nested 16S rRNA gene-targeted oligonucleotide probes was developed. A novel primer pair was designed for selective amplification of a 1.3 kb 16S rRNA gene fragment of Burkholderia species prior to microarray analysis. The diagnostic performance of the microarray for identification and differentiation of Burkholderia species was tested with 44 reference strains of the genera Burkholderia, Pandoraea, Ralstonia and Limnobacter. Hybridization patterns based on presence/absence of probe signals were interpreted semi-automatically using the novel likelihood-based strategy of the web-tool PhyloDetect. Eighty-eight per cent of the reference strains were correctly identified at the species level. The evaluated microarray was applied to investigate shifts in the Burkholderia community structure in acidic forest soil upon addition of cadmium, a condition that selected for Burkholderia species. The microarray results were in agreement with those obtained from phylogenetic analysis of Burkholderia 16S rRNA gene sequences recovered from the same cadmium-contaminated soil, demonstrating the value of the Burkholderia phylochip for determinative and environmental studies.

  14. Direct integration of intensity-level data from Affymetrix and Illumina microarrays improves statistical power for robust reanalysis

    Directory of Open Access Journals (Sweden)

    Turnbull Arran K

    2012-08-01

    Abstract Background Affymetrix GeneChips and Illumina BeadArrays are the most widely used commercial single-channel gene expression microarrays. Public data repositories are an extremely valuable resource, providing array-derived gene expression measurements from many thousands of experiments. Unfortunately many of these studies are underpowered and it is desirable to improve power by combining data from more than one study; we sought to determine whether platform-specific bias precludes direct integration of probe intensity signals for combined reanalysis. Results Using Affymetrix and Illumina data from the microarray quality control project, from our own clinical samples, and from additional publicly available datasets, we evaluated several approaches to directly integrate intensity-level expression data from the two platforms. After mapping probe sequences to Ensembl genes, we demonstrate that ComBat and cross-platform normalisation (XPN) significantly outperform mean-centering and distance-weighted discrimination (DWD) in terms of minimising inter-platform variance. In particular we observed that DWD, a popular method used in a number of previous studies, removed systematic bias at the expense of genuine biological variability, potentially reducing legitimate biological differences in integrated datasets. Conclusion Normalised and batch-corrected intensity-level data from Affymetrix and Illumina microarrays can be directly combined to generate biologically meaningful results with improved statistical power for robust, integrated reanalysis.
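Mean-centering, the weakest baseline in this comparison, illustrates what any of these integration methods must accomplish: removing per-platform, per-gene location differences before pooling samples. A sketch, assuming probes have already been mapped to shared gene identifiers; ComBat and XPN additionally model variance (ComBat shrinking its batch estimates via empirical Bayes):

```python
import numpy as np

def mean_center_by_platform(expr, platform):
    """expr: genes x samples; platform: platform label per sample.
    Subtract each platform's per-gene mean so pooled values share a
    common (zero) location per gene -- the mean-centering baseline."""
    out = expr.astype(float).copy()
    for p in np.unique(platform):
        cols = platform == p
        out[:, cols] -= out[:, cols].mean(axis=1, keepdims=True)
    return out

# Toy data: 100 genes, 6 samples per platform, simulated platform offset
rng = np.random.default_rng(3)
platform = np.array(["affy"] * 6 + ["illumina"] * 6)
expr = rng.normal(size=(100, 12))
expr[:, platform == "illumina"] += 1.5  # systematic platform bias
combined = mean_center_by_platform(expr, platform)
```

After centering, every gene has zero mean within each platform, so the pooled matrix no longer separates by platform on the first principal component; what mean-centering cannot do, and where ComBat and XPN gain their advantage, is reconcile per-platform variance differences.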

  15. Radioactive cDNA microarray in neuropsychiatry

    International Nuclear Information System (INIS)

    Choe, Jae Gol; Shin, Kyung Ho; Lee, Min Soo; Kim, Meyoung Kon

    2003-01-01

    Microarray technology allows the simultaneous analysis of gene expression patterns of thousands of genes, in a systematic fashion, under a similar set of experimental conditions, thus making the data highly comparable. In some cases arrays are used simply as a primary screen leading to downstream molecular characterization of individual gene candidates. In other cases, the goal of expression profiling is to begin to identify complex regulatory networks underlying developmental processes and disease states. Microarrays were originally used with cell lines or other simple model systems. More recently, microarrays have been used in the analysis of more complex biological tissues including neural systems and the brain. The application of cDNA arrays in neuropsychiatry has lagged behind other fields for a number of reasons. These include a requirement for a large amount of input probe RNA in fluorescent-glass-based array systems and the cellular complexity introduced by multicellular brain and neural tissues. An additional factor that impacts the general use of microarrays in neuropsychiatry is the lack of availability of sequenced clone sets from model systems. While human cDNA clones have been widely available, high-quality rat, mouse, and Drosophila clone sets, among others, are just becoming widely available. A final factor in the application of cDNA microarrays in neuropsychiatry is the cost of commercial arrays. As academic microarray facilities become more commonplace, custom-made arrays will become more widely available at lower cost, allowing more widespread applications. In summary, microarray technology is rapidly having an impact on many areas of biomedical research. Radioisotope-nylon based microarrays offer alternatives that may in some cases be more sensitive, flexible, inexpensive, and universal compared to other array formats, such as fluorescent-glass arrays. In some situations of limited RNA or exotic species, radioactive membrane microarrays may be the most

  16. Radioactive cDNA microarray in neuropsychiatry

    Energy Technology Data Exchange (ETDEWEB)

    Choe, Jae Gol; Shin, Kyung Ho; Lee, Min Soo; Kim, Meyoung Kon [Korea University Medical School, Seoul (Korea, Republic of)

    2003-02-01

    Microarray technology allows the simultaneous analysis of gene expression patterns of thousands of genes, in a systematic fashion, under a similar set of experimental conditions, thus making the data highly comparable. In some cases arrays are used simply as a primary screen leading to downstream molecular characterization of individual gene candidates. In other cases, the goal of expression profiling is to begin to identify complex regulatory networks underlying developmental processes and disease states. Microarrays were originally used with cell lines or other simple model systems. More recently, microarrays have been used in the analysis of more complex biological tissues including neural systems and the brain. The application of cDNA arrays in neuropsychiatry has lagged behind other fields for a number of reasons. These include a requirement for a large amount of input probe RNA in fluorescent-glass-based array systems and the cellular complexity introduced by multicellular brain and neural tissues. An additional factor that impacts the general use of microarrays in neuropsychiatry is the lack of availability of sequenced clone sets from model systems. While human cDNA clones have been widely available, high-quality rat, mouse, and Drosophila clone sets, among others, are just becoming widely available. A final factor in the application of cDNA microarrays in neuropsychiatry is the cost of commercial arrays. As academic microarray facilities become more commonplace, custom-made arrays will become more widely available at lower cost, allowing more widespread applications. In summary, microarray technology is rapidly having an impact on many areas of biomedical research. Radioisotope-nylon based microarrays offer alternatives that may in some cases be more sensitive, flexible, inexpensive, and universal compared to other array formats, such as fluorescent-glass arrays. In some situations of limited RNA or exotic species, radioactive membrane microarrays may be the most

  17. Development, characterization and experimental validation of a cultivated sunflower (Helianthus annuus L.) gene expression oligonucleotide microarray.

    Directory of Open Access Journals (Sweden)

    Paula Fernandez

    Oligonucleotide-based microarrays with accurate gene coverage represent a key strategy for transcriptional studies in orphan species such as sunflower, H. annuus L., which lacks full genome sequences. The goal of this study was the development and functional annotation of a comprehensive sunflower unigene collection and the design and validation of a custom sunflower oligonucleotide-based microarray. A large-scale EST (>130,000 ESTs) curation, assembly and sequence annotation was performed using Blast2GO (www.blast2go.de). The EST assembly comprises 41,013 putative transcripts (12,924 contigs and 28,089 singletons). The resulting Sunflower Unigene Resource (SUR) version 1.0 was used to design an oligonucleotide-based Agilent microarray for cultivated sunflower. This microarray includes a total of 42,326 features: 1,417 Agilent controls, 74 control probes for sunflower replicated 10 times (740 controls) and 40,169 different non-control probes. Microarray performance was validated using a model experiment examining the induction of senescence by water deficit. Pre-processing and differential expression analysis of the Agilent microarrays was performed using the Bioconductor limma package. The analyses based on p-values calculated by eBayes (p<0.01) allowed the detection of 558 differentially expressed genes between water stress and control conditions; of these, ten genes were further validated by qPCR. Over-represented ontologies were identified using FatiScan in the Babelomics suite. This work generated a curated and trustworthy sunflower unigene collection, and a custom, validated sunflower oligonucleotide-based microarray using Agilent technology. Both the curated unigene collection and the validated oligonucleotide microarray provide key resources for sunflower genome analysis, transcriptional studies, and molecular breeding for crop improvement.

  18. Development of a genotyping microarray for Usher syndrome.

    Science.gov (United States)

    Cremers, Frans P M; Kimberling, William J; Külm, Maigi; de Brouwer, Arjan P; van Wijk, Erwin; te Brinke, Heleen; Cremers, Cor W R J; Hoefsloot, Lies H; Banfi, Sandro; Simonelli, Francesca; Fleischhauer, Johannes C; Berger, Wolfgang; Kelley, Phil M; Haralambous, Elene; Bitner-Glindzicz, Maria; Webster, Andrew R; Saihan, Zubin; De Baere, Elfride; Leroy, Bart P; Silvestri, Giuliana; McKay, Gareth J; Koenekoop, Robert K; Millan, Jose M; Rosenberg, Thomas; Joensuu, Tarja; Sankila, Eeva-Marja; Weil, Dominique; Weston, Mike D; Wissinger, Bernd; Kremer, Hannie

    2007-02-01

    Usher syndrome, a combination of retinitis pigmentosa (RP) and sensorineural hearing loss with or without vestibular dysfunction, displays a high degree of clinical and genetic heterogeneity. Three clinical subtypes can be distinguished, based on the age of onset and severity of the hearing impairment, and the presence or absence of vestibular abnormalities. Thus far, eight genes have been implicated in the syndrome, together comprising 347 protein-coding exons. To improve DNA diagnostics for patients with Usher syndrome, we developed a genotyping microarray based on the arrayed primer extension (APEX) method. Allele-specific oligonucleotides corresponding to all 298 Usher syndrome-associated sequence variants known to date, 76 of which are novel, were arrayed. Approximately half of these variants were validated using original patient DNAs, which yielded an accuracy of >98%. The efficiency of the Usher genotyping microarray was tested using DNAs from 370 unrelated European and American patients with Usher syndrome. Sequence variants were identified in 64/140 (46%) patients with Usher syndrome type I, 45/189 (24%) patients with Usher syndrome type II, 6/21 (29%) patients with Usher syndrome type III and 6/20 (30%) patients with atypical Usher syndrome. The chip also identified two novel sequence variants, c.400C>T (p.R134X) in PCDH15 and c.1606T>C (p.C536S) in USH2A. The Usher genotyping microarray is a versatile and affordable screening tool for Usher syndrome. Its efficiency will improve with the addition of novel sequence variants with minimal extra costs, making it a very useful first-pass screening tool.

  19. Shared probe design and existing microarray reanalysis using PICKY

    Directory of Open Access Journals (Sweden)

    Chou Hui-Hsien

    2010-04-01

    Abstract Background Large genomes contain families of highly similar genes that cannot be individually identified by microarray probes. This limitation is due to thermodynamic restrictions and cannot be resolved by any computational method. Since gene annotations are updated more frequently than microarrays, another common issue facing microarray users is that existing microarrays must be routinely reanalyzed to determine probes that are still useful with respect to the updated annotations. Results PICKY 2.0 can design shared probes for sets of genes that cannot be individually identified using unique probes. PICKY 2.0 uses novel algorithms to track sharable regions among genes and to strictly distinguish them from other highly similar but nontarget regions during thermodynamic comparisons. Therefore, PICKY does not sacrifice the quality of shared probes when choosing them. The latest PICKY 2.1 includes the new capability to reanalyze existing microarray probes against updated gene sets to determine probes that are still valid to use. In addition, more precise nonlinear salt effect estimates and other improvements are added, making PICKY 2.1 more versatile to microarray users. Conclusions Shared probes allow expressed gene family members to be detected; this capability is generally more desirable than not knowing anything about these genes. Shared probes also enable the design of cross-genome microarrays, which facilitate multiple species identification in environmental samples. The new nonlinear salt effect calculation significantly increases the precision of probes at a lower buffer salt concentration, and the probe reanalysis function improves existing microarray result interpretations.

  20. Study of hepatitis B virus gene mutations with enzymatic colorimetry-based DNA microarray.

    Science.gov (United States)

    Mao, Hailei; Wang, Huimin; Zhang, Donglei; Mao, Hongju; Zhao, Jianlong; Shi, Jian; Cui, Zhichu

    2006-01-01

    To establish a modified microarray method for detecting HBV gene mutations in the clinic. Site-specific oligonucleotide probes were immobilized on microarray slides and hybridized to biotin-labeled HBV gene fragments amplified by two-step PCR. Hybridized targets were transferred to nitrocellulose membranes, followed by intensity measurement using BCIP/NBT colorimetry. HBV genes from 99 hepatitis B patients and 40 healthy blood donors were analyzed. Mutation frequencies in the HBV pre-core/core and basic core promoter (BCP) regions were found to be significantly higher in the patient group (42% and 40% versus 2.5% and 5%, respectively). The colorimetry method exhibited the same level of sensitivity and reproducibility. An enzymatic colorimetry-based DNA microarray assay was successfully established to monitor HBV mutations. Pre-core/core and BCP mutations of HBV genes could be major causes of HBV infection in HBeAg-negative patients and could also be relevant to the chronicity and aggravation of hepatitis B.

  1. Development and assessment of microarray-based DNA fingerprinting in Eucalyptus grandis.

    Science.gov (United States)

    Lezar, Sabine; Myburg, A A; Berger, D K; Wingfield, M J; Wingfield, B D

    2004-11-01

    Development of improved Eucalyptus genotypes involves the routine identification of breeding stock and superior clones. Currently, microsatellites and random amplified polymorphic DNA markers are the most widely used DNA-based techniques for fingerprinting of these trees. While these techniques have provided rapid and powerful fingerprinting assays, they are constrained by their reliance on gel or capillary electrophoresis, and therefore, relatively low throughput of fragment analysis. In contrast, recently developed microarray technology holds the promise of parallel analysis of thousands of markers in plant genomes. The aim of this study was to develop a DNA fingerprinting chip for Eucalyptus grandis and to investigate its usefulness for fingerprinting of eucalypt trees. A prototype chip was prepared using a partial genomic library from total genomic DNA of 23 E. grandis trees, of which 22 were full siblings. A total of 384 cloned genomic fragments were individually amplified and arrayed onto glass slides. DNA fingerprints were obtained for 17 individuals by hybridizing labeled genome representations of the individual trees to the 384-element chip. Polymorphic DNA fragments were identified by evaluating the binary distribution of their background-corrected signal intensities across full-sib individuals. Among 384 DNA fragments on the chip, 104 (27%) were found to be polymorphic. Hybridization of these polymorphic fragments was highly repeatable (R2>0.91) within the E. grandis individuals, and they allowed us to identify all 17 full-sib individuals. Our results suggest that DNA microarrays can be used to effectively fingerprint large numbers of closely related Eucalyptus trees.
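The marker-calling step described above — thresholding background-corrected hybridization signals into a binary present/absent fingerprint, then keeping fragments whose pattern segregates among individuals — can be sketched as follows. The threshold rule is a hypothetical stand-in for the authors' evaluation of the binary distribution of corrected signal intensities.

```python
import numpy as np

def binary_fingerprints(intensity, background, threshold=2.0):
    """intensity, background: fragments x individuals arrays.
    A fragment is called 'present' (1) for an individual when its
    background-corrected signal exceeds `threshold` times background
    (illustrative rule only)."""
    corrected = intensity - background
    return (corrected > threshold * background).astype(int)

def polymorphic(fp):
    """A fragment is polymorphic when present in some, but not all,
    individuals -- the informative markers among close relatives."""
    present = fp.sum(axis=1)
    return (present > 0) & (present < fp.shape[1])

# Toy data: 5 arrayed fragments x 4 individuals
background = np.full((5, 4), 100.0)
signal = background + np.array([
    [500, 500, 500, 500],   # present in all: monomorphic
    [500,  50, 500,  50],   # segregates: polymorphic
    [ 50,  50,  50,  50],   # absent in all: monomorphic
    [500, 500,  50, 500],   # segregates: polymorphic
    [ 50,  50,  50, 500],   # segregates: polymorphic
])
fp = binary_fingerprints(signal, background)
poly = polymorphic(fp)
```

Only the polymorphic rows (here 3 of 5, versus 104 of 384 fragments in the study) carry fingerprinting information for distinguishing full-sib individuals.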

  2. Incorporation of gene-specific variability improves expression analysis using high-density DNA microarrays

    Directory of Open Access Journals (Sweden)

    Spitznagel Edward

    2003-11-01

    Abstract Background The assessment of data reproducibility is essential for application of microarray technology to the exploration of biological pathways and disease states. Technical variability in data analysis largely depends on signal intensity. Within that context, the reproducibility of individual probe sets has not hitherto been addressed. Results We used an extraordinarily large replicate data set derived from human placental trophoblast to analyze probe-specific contributions to the variability of gene expression. We found that signal variability, in addition to being signal-intensity-dependent, is probe-set-specific. Importantly, we developed a novel method to quantify the contribution of this probe-set-specific variability. Furthermore, we devised a formula that incorporates a priori-computed, replicate-based information on probe-set- and intensity-specific variability in the determination of expression changes even without technical replicates. Conclusion The strategy of incorporating probe-set-specific variability is superior to analysis based on arbitrary fold-change thresholds. We recommend its incorporation in any computation of gene expression changes using high-density DNA microarrays. A Java application implementing our T-score is available at http://www.sadovsky.wustl.edu/tscore.html.
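The exact T-score formula lives in the authors' Java tool, but its general shape — scaling an observed expression change by variability estimated a priori from replicates, specific to both the probe set and its intensity range — can be sketched. The bin boundaries, the pooled per-bin SD, and the blending weight below are all assumptions for illustration.

```python
import numpy as np

def build_variability_table(replicates, n_bins=4):
    """replicates: probe sets x replicate arrays (log intensities).
    Returns a priori variability estimates: per-probe-set SD, intensity
    bin edges, mean SD per intensity bin, and each probe set's bin."""
    probe_sd = replicates.std(axis=1, ddof=1)
    mean_int = replicates.mean(axis=1)
    edges = np.quantile(mean_int, np.linspace(0, 1, n_bins + 1))
    bin_idx = np.clip(np.searchsorted(edges, mean_int, side="right") - 1,
                      0, n_bins - 1)
    bin_sd = np.array([probe_sd[bin_idx == b].mean() for b in range(n_bins)])
    return probe_sd, edges, bin_sd, bin_idx

def t_score(log_ratio, probe_sd, bin_sd, bin_idx, w=0.5):
    """Scale each observed log-ratio by a blend of probe-set-specific
    and intensity-bin variability (the weight w is an assumption;
    the published formula differs in detail)."""
    return log_ratio / (w * probe_sd + (1 - w) * bin_sd[bin_idx])

# Toy replicate set: 40 probe sets x 6 technical replicates
rng = np.random.default_rng(5)
replicates = rng.normal(loc=np.linspace(4, 12, 40)[:, None], scale=0.3,
                        size=(40, 6))
probe_sd, edges, bin_sd, bin_idx = build_variability_table(replicates)

log_ratio = np.zeros(40)
log_ratio[0] = 1.0           # one genuinely changed probe set
scores = t_score(log_ratio, probe_sd, bin_sd, bin_idx)
```

The same log-ratio thus yields a larger score for a historically quiet probe set than for a noisy one, which is the advantage over a flat fold-change threshold.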

  3. An Improved Fuzzy Based Missing Value Estimation in DNA Microarray Validated by Gene Ranking

    Directory of Open Access Journals (Sweden)

    Sujay Saha

    2016-01-01

Full Text Available Most of the gene expression data analysis algorithms require the entire gene expression matrix without any missing values. Hence, it is necessary to devise methods which would impute missing data values accurately. There exist a number of imputation algorithms to estimate those missing values. This work starts with a microarray dataset containing multiple missing values. We first apply the modified version of the fuzzy theory based existing method LRFDVImpute to impute multiple missing values of time series gene expression data and then validate the result of imputation by a genetic algorithm (GA)-based gene ranking methodology along with some regular statistical validation techniques, like the RMSE method. Gene ranking, to the best of our knowledge, has not yet been used to validate the result of missing value estimation. Firstly, the proposed method has been tested on the very popular Spellman dataset, and the results show that error margins have been drastically reduced compared to some previous works, which indirectly validates the statistical significance of the proposed method. Then it has been applied on four other 2-class benchmark datasets, namely the Colorectal Cancer tumours dataset (GDS4382), the Breast Cancer dataset (GSE349-350), the Prostate Cancer dataset, and DLBCL-FL (Leukaemia), for both missing value estimation and ranking the genes, and the results show that the proposed method can reach 100% classification accuracy with very few dominant genes, which indirectly validates the biological significance of the proposed method.
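The RMSE validation mentioned above has a straightforward form: hide entries whose true values are known, impute them, and compare. A minimal sketch (the toy expression values are invented for illustration):

```python
import math

def rmse(true_values, imputed_values):
    """Root-mean-square error between the known values and the values
    an imputation method filled in for artificially removed entries."""
    assert len(true_values) == len(imputed_values)
    sq = [(t - e) ** 2 for t, e in zip(true_values, imputed_values)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical expression values hidden from the imputer, and its estimates.
truth = [0.52, -1.10, 0.03, 2.40]
imputed = [0.48, -1.00, 0.10, 2.55]
print(round(rmse(truth, imputed), 4))  # → 0.0987
```

A lower RMSE on the hidden entries means the imputation reproduced the missing values more faithfully; the paper's gene-ranking validation complements this purely numeric check.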

  4. Advanced Data Mining of Leukemia Cells Micro-Arrays

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2009-12-01

Full Text Available This paper provides continuation and extensions of previous research by Segall and Pierce (2009a), which discussed data mining of micro-array databases of Leukemia cells primarily with self-organized maps (SOM). As in Segall and Pierce (2009a) and Segall and Pierce (2009b), the results of applying data mining are shown and discussed for the data categories of microarray databases of HL60, Jurkat, NB4 and U937 Leukemia cells, which are also described in this article. First, a background section is provided on the work of others pertaining to the applications of data mining to micro-array databases of Leukemia cells and micro-array databases in general. As noted in the predecessor article by Segall and Pierce (2009a), micro-array databases are one of the most popular functional genomics tools in use today. The research in this paper is intended to use advanced data mining technologies for better interpretations and knowledge discovery as generated by the patterns of gene expressions of HL60, Jurkat, NB4 and U937 Leukemia cells. The advanced data mining performed entailed using other data mining tools such as the cubic clustering criterion, variable importance rankings, decision trees, and more detailed examinations of data mining statistics and of other self-organized map (SOM) clustering regions of the workspace as generated by SAS Enterprise Miner version 4. Conclusions and future directions of the research are also presented.

  5. Optimal policy for value-based decision-making.

    Science.gov (United States)

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-08-18

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
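A drift diffusion model with collapsing decision boundaries, as described above, can be simulated directly. The linear collapse schedule and all parameter values below are illustrative assumptions, not the paper's fitted model:

```python
import random

def ddm_trial(drift, bound0, collapse_rate, dt=0.001, sigma=1.0, seed=None):
    """Simulate one drift-diffusion trial whose decision bounds collapse
    linearly over time, as optimal value-based choice requires.
    Returns (choice, decision_time)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while True:
        bound = max(bound0 - collapse_rate * t, 0.0)  # collapsing bound
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
        # Euler step of the diffusion: deterministic drift plus noise.
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt

choice, rt = ddm_trial(drift=1.5, bound0=1.0, collapse_rate=0.2, seed=42)
print(choice, round(rt, 3))
```

Because the bound shrinks to zero at t = bound0 / collapse_rate, every trial terminates: late in the trial even weak evidence forces a decision, which is the behavioural signature of the collapsing-bound policy.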

  6. The application of DNA microarrays in gene expression analysis

    NARCIS (Netherlands)

    Hal, van N.L.W.; Vorst, O.; Houwelingen, van A.M.M.L.; Kok, E.J.; Peijnenburg, A.A.C.M.; Aharoni, A.; Tunen, van A.J.; Keijer, J.

    2000-01-01

    DNA microarray technology is a new and powerful technology that will substantially increase the speed of molecular biological research. This paper gives a survey of DNA microarray technology and its use in gene expression studies. The technical aspects and their potential improvements are discussed.

  7. High quality protein microarray using in situ protein purification

    Directory of Open Access Journals (Sweden)

    Fleischmann Robert D

    2009-08-01

Full Text Available Abstract Background In the postgenomic era, high throughput protein expression and protein microarray technologies have progressed markedly permitting screening of therapeutic reagents and discovery of novel protein functions. Hexa-histidine is one of the most commonly used fusion tags for protein expression due to its small size and convenient purification via immobilized metal ion affinity chromatography (IMAC). This purification process has been adapted to the protein microarray format, but the quality of in situ His-tagged protein purification on slides has not been systematically evaluated. We established methods to determine the level of purification of such proteins on metal chelate-modified slide surfaces. Optimized in situ purification of His-tagged recombinant proteins has the potential to become the new gold standard for cost-effective generation of high-quality and high-density protein microarrays. Results Two slide surfaces were examined, chelated Cu2+ slides suspended on a polyethylene glycol (PEG) coating and chelated Ni2+ slides immobilized on a support without PEG coating. Using PEG-coated chelated Cu2+ slides, consistently higher purities of recombinant proteins were measured. An optimized wash buffer (PBST) composed of 10 mM phosphate buffer, 2.7 mM KCl, 140 mM NaCl and 0.05% Tween 20, pH 7.4, further improved protein purity levels. Using Escherichia coli cell lysates expressing 90 recombinant Streptococcus pneumoniae proteins, 73 proteins were successfully immobilized, and 66 proteins were in situ purified with greater than 90% purity. We identified several antigens among the in situ-purified proteins via assays with anti-S. pneumoniae rabbit antibodies and a human patient antiserum, as a demonstration project of large scale microarray-based immunoproteomics profiling. The methodology is compatible with higher throughput formats of in vivo protein expression, eliminates the need for resin-based purification and circumvents

  8. MARS: Microarray analysis, retrieval, and storage system

    Directory of Open Access Journals (Sweden)

    Scheideler Marcel

    2005-04-01

Full Text Available Abstract Background Microarray analysis has become a widely used technique for the study of gene-expression patterns on a genomic scale. As more and more laboratories are adopting microarray technology, there is a need for powerful and easy to use microarray databases facilitating array fabrication, labeling, hybridization, and data analysis. The wealth of data generated by this high throughput approach renders adequate database and analysis tools crucial for the pursuit of insights into the transcriptomic behavior of cells. Results MARS (Microarray Analysis and Retrieval System) provides a comprehensive MIAME-supportive suite for storing, retrieving, and analyzing multicolor microarray data. The system comprises a laboratory information management system (LIMS), quality control management, and a sophisticated user management system. MARS is fully integrated into an analytical pipeline of microarray image analysis, normalization, gene expression clustering, and mapping of gene expression data onto biological pathways. The incorporation of ontologies and the use of MAGE-ML enable an export of studies stored in MARS to public repositories and other databases accepting these documents. Conclusion We have developed an integrated system tailored to serve the specific needs of microarray based research projects using a unique fusion of Web based and standalone applications connected to the latest J2EE application server technology. The presented system is freely available for academic and non-profit institutions. More information can be found at http://genome.tugraz.at.

  9. Chromosomal microarrays testing in children with developmental disabilities and congenital anomalies

    Directory of Open Access Journals (Sweden)

    Guillermo Lay-Son

    2015-04-01

Full Text Available OBJECTIVES: Clinical use of microarray-based techniques for the analysis of many developmental disorders has emerged during the last decade. Thus, chromosomal microarray has been positioned as a first-tier test. This study reports the first experience in a Chilean cohort. METHODS: Chilean patients with developmental disabilities and congenital anomalies were studied with a high-density microarray (CytoScan™ HD Array; Affymetrix, Inc., Santa Clara, CA, USA). Patients had previous cytogenetic studies with either a normal result or a poorly characterized anomaly. RESULTS: This study tested 40 patients selected by two or more criteria, including: major congenital anomalies, facial dysmorphism, developmental delay, and intellectual disability. Copy number variants (CNVs) were found in 72.5% of patients, while a pathogenic CNV was found in 25% of patients and a CNV of uncertain clinical significance was found in 2.5% of patients. CONCLUSION: Chromosomal microarray analysis is a useful and powerful tool for diagnosis of developmental diseases, by allowing accurate diagnosis, improving the diagnosis rate, and discovering new etiologies. The higher cost is a limitation for widespread use in this setting.

  10. Facilitating functional annotation of chicken microarray data

    Directory of Open Access Journals (Sweden)

    Gresham Cathy R

    2009-10-01

Full Text Available Abstract Background Modeling results from chicken microarray studies is challenging for researchers due to the little functional annotation associated with these arrays. The Affymetrix GeneChip chicken genome array, one of the biggest arrays and a key research tool for the study of chicken functional genomics, is among the few arrays that link gene products to the Gene Ontology (GO). However, the GO annotation data presented by Affymetrix are incomplete; for example, they do not show references linked to manually annotated functions. In addition, there is no tool that enables microarray researchers to directly retrieve functional annotations for their datasets from the annotated arrays. This costs researchers a substantial amount of time searching multiple GO databases for functional information. Results We have improved the breadth of functional annotations of the gene products associated with probesets on the Affymetrix chicken genome array by 45% and the quality of annotation by 14%. We have also identified the most significant diseases and disorders, different types of genes, and known drug targets represented on the Affymetrix chicken genome array. To facilitate functional annotation of other arrays and microarray experimental datasets we developed an Array GO Mapper (AGOM) tool to help researchers quickly retrieve corresponding functional information for their datasets. Conclusion Results from this study will directly facilitate annotation of other chicken arrays and microarray experimental datasets. Researchers will be able to quickly model their microarray datasets into more reliable biological functional information by using the AGOM tool. The diseases, disorders, gene types and drug targets revealed in the study will allow researchers to learn more about how genes function in complex biological systems and may lead to new drug discovery and development of therapies.
The GO annotation data generated will be available for public use via AgBase website and

  11. Improving Adolescent Judgment and Decision Making

    Science.gov (United States)

    Dansereau, Donald F.; Knight, Danica K.; Flynn, Patrick M.

    2013-01-01

Human judgment and decision making (JDM) has substantial room for improvement, especially among adolescents. Increased technological and social complexity “ups the ante” for developing impactful JDM interventions and aids. Current explanatory advances in this field emphasize dual-processing models that incorporate both experiential and analytic processing systems. According to these models, judgments and decisions based on the experiential system are rapid and stem from automatic reference to previously stored episodes. Those based on the analytic system are viewed as slower and consciously developed. These models also hypothesize that metacognitive (self-monitoring) activities embedded in the analytic system influence how and when the two systems are used. What is not included in these models is the development of an intersection between the two systems. Because such an intersection is strongly suggested by memory and educational research as the basis of wisdom/expertise, the present paper describes an Integrated Judgment and Decision-Making Model (IJDM) that incorporates this component. Wisdom/expertise is hypothesized to contain a collection of schematic structures that can emerge from the accumulation of similar episodes or repeated analytic practice. As will be argued, in comparison to dual-system models, the addition of this component provides a broader basis for selecting and designing interventions to improve adolescent JDM. Its development also has implications for generally enhancing cognitive interventions by adopting principles from athletic training to create automated, expert behaviors. PMID:24391350

  12. Staged decision making based on probabilistic forecasting

    Science.gov (United States)

    Booister, Nikéh; Verkade, Jan; Werner, Micha; Cranston, Michael; Cumiskey, Lydia; Zevenbergen, Chris

    2016-04-01

Flood forecasting systems reduce, but cannot eliminate, uncertainty about the future. Probabilistic forecasts explicitly show that uncertainty remains. However, compared to deterministic forecasts, a dimension is added ('probability' or 'likelihood'), and this added dimension makes decision making slightly more complicated. One decision-support technique is the cost-loss approach, which defines whether or not to issue a warning or implement mitigation measures (a risk-based method). With the cost-loss method, a warning is issued when the ratio of the response costs to the damage reduction is less than or equal to the probability of the possible flood event. This cost-loss method is not widely used, because it motivates decisions based on economic values only and is relatively static (no reasoning, just a yes/no decision). Nevertheless, it has high potential to improve risk-based decision making based on probabilistic flood forecasting, because no other methods are known that deal with probabilities in decision making. The main aim of this research was to explore ways of making decision making based on probabilities with the cost-loss method more applicable in practice. The exploration began by identifying other situations in which decisions are taken based on uncertain forecasts or predictions. These cases spanned a range of degrees of uncertainty: from known uncertainty to deep uncertainty. Based on the types of uncertainty, concepts for dealing with these situations were analysed and possibly applicable concepts were chosen. From this analysis, the concepts of flexibility and robustness appeared to fit the existing method. Instead of taking big decisions with bigger consequences at once, the idea is that actions and decisions are cut up into smaller pieces, and the decision to implement is finally made based on the economic costs of decisions and measures and the reduced effect of flooding.
The more lead-time there is in
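The cost-loss rule described in this abstract reduces to a one-line comparison: act when the ratio of response cost to preventable loss does not exceed the forecast probability of the event. A minimal sketch (the cost figures are invented for illustration):

```python
def issue_warning(response_cost, preventable_loss, flood_probability):
    """Cost-loss rule: warn (or implement mitigation) when the
    cost/loss ratio does not exceed the forecast flood probability."""
    return response_cost / preventable_loss <= flood_probability

# Mitigation costs 20k and prevents 200k of damage -> cost/loss ratio 0.1.
print(issue_warning(20_000, 200_000, flood_probability=0.25))  # → True (warn)
print(issue_warning(20_000, 200_000, flood_probability=0.05))  # → False
```

The staged-decision idea in the abstract amounts to applying this comparison repeatedly as lead time shrinks and the forecast probability sharpens, rather than committing to one large yes/no decision up front.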

  13. Improvement of Statistical Decisions under Parametric Uncertainty

    Science.gov (United States)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Rozevskis, Uldis

    2011-10-01

    A large number of problems in production planning and scheduling, location, transportation, finance, and engineering design require that decisions be made in the presence of uncertainty. Decision-making under uncertainty is a central problem in statistical inference, and has been formally studied in virtually all approaches to inference. The aim of the present paper is to show how the invariant embedding technique, the idea of which belongs to the authors, may be employed in the particular case of finding the improved statistical decisions under parametric uncertainty. This technique represents a simple and computationally attractive statistical method based on the constructive use of the invariance principle in mathematical statistics. Unlike the Bayesian approach, an invariant embedding technique is independent of the choice of priors. It allows one to eliminate unknown parameters from the problem and to find the best invariant decision rule, which has smaller risk than any of the well-known decision rules. To illustrate the proposed technique, application examples are given.

  14. Spot detection and image segmentation in DNA microarray data.

    Science.gov (United States)

    Qin, Li; Rueda, Luis; Ali, Adnan; Ngom, Alioune

    2005-01-01

    Following the invention of microarrays in 1994, the development and applications of this technology have grown exponentially. The numerous applications of microarray technology include clinical diagnosis and treatment, drug design and discovery, tumour detection, and environmental health research. One of the key issues in the experimental approaches utilising microarrays is to extract quantitative information from the spots, which represent genes in a given experiment. For this process, the initial stages are important and they influence future steps in the analysis. Identifying the spots and separating the background from the foreground is a fundamental problem in DNA microarray data analysis. In this review, we present an overview of state-of-the-art methods for microarray image segmentation. We discuss the foundations of the circle-shaped approach, adaptive shape segmentation, histogram-based methods and the recently introduced clustering-based techniques. We analytically show that clustering-based techniques are equivalent to the one-dimensional, standard k-means clustering algorithm that utilises the Euclidean distance.
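The equivalence to one-dimensional k-means noted above can be illustrated on toy spot pixels: with k = 2, the lower centroid tracks the background and the upper centroid the spot foreground. The implementation and pixel values below are a minimal sketch, not any specific published segmenter:

```python
def kmeans_1d_two(values, iters=20):
    """One-dimensional k-means with k=2 on pixel intensities: the lower
    cluster is background, the upper cluster is the spot foreground."""
    c0, c1 = float(min(values)), float(max(values))  # centroids at extremes
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return c0, c1

# Toy pixel intensities from one spot region: background ~50, foreground ~200.
pixels = [48, 52, 50, 47, 55, 198, 205, 201, 195, 53, 49, 202]
bg, fg = kmeans_1d_two(pixels)
print(round(bg), round(fg))  # → 51 200
```

Thresholding each pixel by its nearest centroid then yields the foreground mask from which the spot's summary intensity is extracted.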

  15. RDFBuilder: a tool to automatically build RDF-based interfaces for MAGE-OM microarray data sources.

    Science.gov (United States)

    Anguita, Alberto; Martin, Luis; Garcia-Remesal, Miguel; Maojo, Victor

    2013-07-01

    This paper presents RDFBuilder, a tool that enables RDF-based access to MAGE-ML-compliant microarray databases. We have developed a system that automatically transforms the MAGE-OM model and microarray data stored in the ArrayExpress database into RDF format. Additionally, the system automatically enables a SPARQL endpoint. This allows users to execute SPARQL queries for retrieving microarray data, either from specific experiments or from more than one experiment at a time. Our system optimizes response times by caching and reusing information from previous queries. In this paper, we describe our methods for achieving this transformation. We show that our approach is complementary to other existing initiatives, such as Bio2RDF, for accessing and retrieving data from the ArrayExpress database. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. Development of a genotyping microarray for Usher syndrome

    Science.gov (United States)

    Cremers, Frans P M; Kimberling, William J; Külm, Maigi; de Brouwer, Arjan P; van Wijk, Erwin; te Brinke, Heleen; Cremers, Cor W R J; Hoefsloot, Lies H; Banfi, Sandro; Simonelli, Francesca; Fleischhauer, Johannes C; Berger, Wolfgang; Kelley, Phil M; Haralambous, Elene; Bitner‐Glindzicz, Maria; Webster, Andrew R; Saihan, Zubin; De Baere, Elfride; Leroy, Bart P; Silvestri, Giuliana; McKay, Gareth J; Koenekoop, Robert K; Millan, Jose M; Rosenberg, Thomas; Joensuu, Tarja; Sankila, Eeva‐Marja; Weil, Dominique; Weston, Mike D; Wissinger, Bernd; Kremer, Hannie

    2007-01-01

    Background Usher syndrome, a combination of retinitis pigmentosa (RP) and sensorineural hearing loss with or without vestibular dysfunction, displays a high degree of clinical and genetic heterogeneity. Three clinical subtypes can be distinguished, based on the age of onset and severity of the hearing impairment, and the presence or absence of vestibular abnormalities. Thus far, eight genes have been implicated in the syndrome, together comprising 347 protein‐coding exons. Methods: To improve DNA diagnostics for patients with Usher syndrome, we developed a genotyping microarray based on the arrayed primer extension (APEX) method. Allele‐specific oligonucleotides corresponding to all 298 Usher syndrome‐associated sequence variants known to date, 76 of which are novel, were arrayed. Results Approximately half of these variants were validated using original patient DNAs, which yielded an accuracy of >98%. The efficiency of the Usher genotyping microarray was tested using DNAs from 370 unrelated European and American patients with Usher syndrome. Sequence variants were identified in 64/140 (46%) patients with Usher syndrome type I, 45/189 (24%) patients with Usher syndrome type II, 6/21 (29%) patients with Usher syndrome type III and 6/20 (30%) patients with atypical Usher syndrome. The chip also identified two novel sequence variants, c.400C>T (p.R134X) in PCDH15 and c.1606T>C (p.C536S) in USH2A. Conclusion The Usher genotyping microarray is a versatile and affordable screening tool for Usher syndrome. Its efficiency will improve with the addition of novel sequence variants with minimal extra costs, making it a very useful first‐pass screening tool. PMID:16963483

  17. Nanotechnology: moving from microarrays toward nanoarrays.

    Science.gov (United States)

    Chen, Hua; Li, Jun

    2007-01-01

    Microarrays are important tools for high-throughput analysis of biomolecules. The use of microarrays for parallel screening of nucleic acid and protein profiles has become an industry standard. A few limitations of microarrays are the requirement for relatively large sample volumes and elongated incubation time, as well as the limit of detection. In addition, traditional microarrays make use of bulky instrumentation for the detection, and sample amplification and labeling are quite laborious, which increase analysis cost and delays the time for obtaining results. These problems limit microarray techniques from point-of-care and field applications. One strategy for overcoming these problems is to develop nanoarrays, particularly electronics-based nanoarrays. With further miniaturization, higher sensitivity, and simplified sample preparation, nanoarrays could potentially be employed for biomolecular analysis in personal healthcare and monitoring of trace pathogens. In this chapter, it is intended to introduce the concept and advantage of nanotechnology and then describe current methods and protocols for novel nanoarrays in three aspects: (1) label-free nucleic acids analysis using nanoarrays, (2) nanoarrays for protein detection by conventional optical fluorescence microscopy as well as by novel label-free methods such as atomic force microscopy, and (3) nanoarray for enzymatic-based assay. These nanoarrays will have significant applications in drug discovery, medical diagnosis, genetic testing, environmental monitoring, and food safety inspection.

  18. The application of DNA microarrays in gene expression analysis.

    Science.gov (United States)

    van Hal, N L; Vorst, O; van Houwelingen, A M; Kok, E J; Peijnenburg, A; Aharoni, A; van Tunen, A J; Keijer, J

    2000-03-31

    DNA microarray technology is a new and powerful technology that will substantially increase the speed of molecular biological research. This paper gives a survey of DNA microarray technology and its use in gene expression studies. The technical aspects and their potential improvements are discussed. These comprise array manufacturing and design, array hybridisation, scanning, and data handling. Furthermore, it is discussed how DNA microarrays can be applied in the working fields of: safety, functionality and health of food and gene discovery and pathway engineering in plants.

  19. Training ANFIS structure using genetic algorithm for liver cancer classification based on microarray gene expression data

    Directory of Open Access Journals (Sweden)

    Bülent Haznedar

    2017-02-01

Full Text Available Classification is an important data mining technique used in many fields, such as medicine, genetics and biomedical engineering. The number of studies on the classification of DNA microarray gene expression data has increased markedly in recent years. However, because of the large number of genes in microarray gene expression data and the mostly nonlinear relations among those data, the success of conventional classification algorithms can be limited. For these reasons, interest in classification methods based on artificial intelligence has gradually increased in recent times. In this study, a hybrid approach based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) and a Genetic Algorithm (GA) is suggested in order to classify a liver microarray cancer data set. Simulation results are compared with the results of other methods. According to the results obtained, the recommended method performs better than the other methods.

  20. Microarray-based RNA profiling of breast cancer

    DEFF Research Database (Denmark)

    Larsen, Martin J; Thomassen, Mads; Tan, Qihua

    2014-01-01

    analyzed the same 234 breast cancers on two different microarray platforms. One dataset contained known batch-effects associated with the fabrication procedure used. The aim was to assess the significance of correcting for systematic batch-effects when integrating data from different platforms. We here...

  1. "Harshlighting" small blemishes on microarrays

    Directory of Open Access Journals (Sweden)

    Wittkowski Knut M

    2005-03-01

Full Text Available Abstract Background Microscopists are familiar with many blemishes that fluorescence images can have due to dust and debris, glass flaws, uneven distribution of fluids or surface coatings, etc. Microarray scans show similar artefacts, which affect the analysis, particularly when one tries to detect subtle changes. However, most blemishes are hard to find by the unaided eye, particularly in high-density oligonucleotide arrays (HDONAs). Results We present a method that harnesses the statistical power provided by having several HDONAs available, which are obtained under similar conditions except for the experimental factor. This method "harshlights" blemishes and renders them evident. We find empirically that about 25% of our chips are blemished, and we analyze the impact of masking them on screening for differentially expressed genes. Conclusion Experiments attempting to assess subtle expression changes should be carefully screened for blemishes on the chips. The proposed method provides investigators with a novel robust approach to improve the sensitivity of microarray analyses. By utilizing topological information to identify and mask blemishes prior to model based analyses, the method prevents artefacts from confounding the process of background correction, normalization, and summarization.
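A much-simplified stand-in for this replicate-based blemish detection is to flag values that deviate strongly from the per-position median across chips. The MAD-based threshold below is an assumption for illustration, not the paper's actual neighbourhood-based algorithm:

```python
import statistics

def blemish_mask(chips, threshold=3.0):
    """Flag probe-level values that deviate strongly from the median
    across replicate chips (a simplified stand-in for the paper's
    topology-aware procedure).  chips: list of equal-length lists.
    Returns one boolean mask per chip; True marks a suspect value."""
    n = len(chips[0])
    medians = [statistics.median(chip[i] for chip in chips) for i in range(n)]
    # Median absolute deviation per position; fall back to 1.0 when zero.
    mads = [statistics.median(abs(chip[i] - medians[i]) for chip in chips) or 1.0
            for i in range(n)]
    return [[abs(chip[i] - medians[i]) / mads[i] > threshold for i in range(n)]
            for chip in chips]

# Three replicate chips; chip 1 has a bright blemish at position 2.
chips = [[10, 12, 11, 9],
         [11, 13, 95, 10],
         [10, 11, 12, 9]]
masks = blemish_mask(chips)
print(masks[1])  # → [False, False, True, False]
```

Masked positions would then be excluded before background correction and normalization, so a local defect on one chip cannot distort the model fit across all replicates.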

  2. Flexible hemispheric microarrays of highly pressure-sensitive sensors based on breath figure method.

    Science.gov (United States)

    Wang, Zhihui; Zhang, Ling; Liu, Jin; Jiang, Hao; Li, Chunzhong

    2018-05-30

Recently, flexible pressure sensors featuring high sensitivity, broad sensing range and real-time detection have aroused great attention owing to their crucial role in the development of intelligent artificial devices and healthcare systems. Herein, highly sensitive pressure sensors based on hemisphere-microarray flexible substrates are fabricated by inversely templating honeycomb structures derived from a facile and static breath figure process. The interlocked and subtle microstructures greatly improve the sensing characteristics and compressibility of the as-prepared pressure sensor, endowing it with a sensitivity as high as 196 kPa-1 and a wide pressure sensing range (0-100 kPa), as well as other superior performance, including a low detection limit of 0.5 Pa, a fast response time, and stable operation over 10 000 cycles. Based on this outstanding sensing performance, the potential of our pressure sensor for capturing physiological information and recognizing speech signals has been demonstrated, indicating promising applications in wearable and intelligent electronics.

  3. Customized oligonucleotide microarray gene expression-based classification of neuroblastoma patients outperforms current clinical risk stratification.

    Science.gov (United States)

    Oberthuer, André; Berthold, Frank; Warnat, Patrick; Hero, Barbara; Kahlert, Yvonne; Spitz, Rüdiger; Ernestus, Karen; König, Rainer; Haas, Stefan; Eils, Roland; Schwab, Manfred; Brors, Benedikt; Westermann, Frank; Fischer, Matthias

    2006-11-01

To develop a gene expression-based classifier for neuroblastoma patients that reliably predicts courses of the disease. Two hundred fifty-one neuroblastoma specimens were analyzed using a customized oligonucleotide microarray comprising 10,163 probes for transcripts with differential expression in clinical subgroups of the disease. Subsequently, the prediction analysis for microarrays (PAM) was applied to a first set of patients with maximally divergent clinical courses (n = 77). The classification accuracy was estimated by a complete 10-times-repeated 10-fold cross validation, and a 144-gene predictor was constructed from this set. This classifier's predictive power was evaluated in an independent second set (n = 174) by comparing results of the gene expression-based classification with those of risk stratification systems of current trials from Germany, Japan, and the United States. The first set of patients was accurately predicted by PAM (cross-validated accuracy, 99%). Within the second set, the PAM classifier significantly separated cohorts with distinct courses (3-year event-free survival [EFS] 0.86 +/- 0.03 [favorable; n = 115] v 0.52 +/- 0.07 [unfavorable; n = 59] and 3-year overall survival 0.99 +/- 0.01 v 0.84 +/- 0.05; both P < .001). In a multivariate model, the PAM predictor classified patients of the second set more accurately than risk stratification of current trials from Germany, Japan, and the United States (P < .001; hazard ratio, 4.756 [95% CI, 2.544 to 8.893]). Integration of gene expression-based class prediction of neuroblastoma patients may improve risk estimation of current neuroblastoma trials.

  4. Validation of the performance of a GMO multiplex screening assay based on microarray detection

    NARCIS (Netherlands)

    Leimanis, S.; Hamels, S.; Naze, F.; Mbongolo, G.; Sneyers, M.; Hochegger, R.; Broll, H.; Roth, L.; Dallmann, K.; Micsinai, A.; Dijk, van J.P.; Kok, E.J.

    2008-01-01

    A new screening method for the detection and identification of GMOs, based on the use of multiplex PCR followed by microarray, has been developed and is presented. The technology is based on the identification of relatively ubiquitous GMO genetic target elements, first amplified by PCR, followed by direct

  5. Improving the scaling normalization for high-density oligonucleotide GeneChip expression microarrays

    Directory of Open Access Journals (Sweden)

    Lu Chao

    2004-07-01

    Full Text Available Abstract Background Normalization is an important step in microarray data analysis to minimize biological and technical variations, and choosing a suitable approach can be critical. The default method for GeneChip expression microarrays uses a constant factor, the scaling factor (SF), for every gene on an array. The SF is obtained from a trimmed average signal of the array after excluding the 2% of the probe sets with the highest and the lowest values. Results Among the 76 U34A GeneChip experiments, the total signals on each array showed 25.8% variation in terms of the coefficient of variation, although all microarrays were hybridized with the same amount of biotin-labeled cRNA. The 2% of the probe sets with the highest signals that were normally excluded from the SF calculation accounted for 34% to 54% of the total signals (40.7% ± 4.4%, mean ± sd). In comparison with normalization factors obtained from the median signal or from the mean of the log-transformed signals, the SF showed the greatest variation, whereas the normalization factors obtained from log-transformed signals showed the least. Conclusions Eliminating 40% of the signal data during SF calculation failed to show any benefit. Normalization factors obtained with log-transformed signals performed best; we therefore suggest using the mean of the log-transformed data, rather than the arithmetic mean of the signals, for normalization of GeneChip gene expression microarrays.
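The two normalization factors this record compares can be sketched in a few lines. The signal distribution below is simulated, not the U34A data, so the numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def trimmed_mean_factor(signal, trim=0.02):
    """Scaling-factor style normalization: the trimmed average signal
    after dropping the top and bottom 2% of probe-set values."""
    lo, hi = np.quantile(signal, [trim, 1 - trim])
    kept = signal[(signal >= lo) & (signal <= hi)]
    return kept.mean()

def log_mean_factor(signal):
    """Normalization factor from the mean of log-transformed signals
    (equivalently, the geometric mean)."""
    return np.exp(np.log(signal).mean())

# Hypothetical arrays: log-normal signals with a heavy upper tail,
# mimicking the skew the abstract describes.
arrays = [rng.lognormal(mean=6.0, sigma=1.2, size=10_000) for _ in range(10)]

sf = np.array([trimmed_mean_factor(a) for a in arrays])
lf = np.array([log_mean_factor(a) for a in arrays])

cv = lambda x: x.std() / x.mean()  # coefficient of variation
print(f"CV of trimmed-mean factors: {cv(sf):.4f}")
print(f"CV of log-mean factors:     {cv(lf):.4f}")
```

Comparing the two coefficients of variation on real arrays is the kind of check the paper reports; the trim fraction and distribution parameters here are assumptions.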

  6. A regression-based differential expression detection algorithm for microarray studies with ultra-low sample size.

    Directory of Open Access Journals (Sweden)

    Daniel Vasiliu

    Full Text Available Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED. Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.
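The core idea of ranking genes by the importance a penalized classifier assigns them can be sketched as follows. This substitutes an ordinary L2 (ridge) penalty for the paper's penalized Euclidean distance, which is not reproduced here, and the data, gene count, and signal strength are invented for illustration:

```python
import numpy as np

def rank_genes(X, y, lam=1.0, lr=0.02, n_iter=2000):
    """Rank genes by importance using an L2-penalized logistic regression
    fit by gradient descent (a stand-in for the PED penalty described in
    the abstract). Returns gene indices, most important first, and the
    fitted weights."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        prob = 1.0 / (1.0 + np.exp(-(X @ w)))   # logistic predictions
        grad = X.T @ (prob - y) / n + lam * w   # penalized gradient
        w -= lr * grad
    return np.argsort(-np.abs(w)), w

# Hypothetical ultra-low-n data: 6 samples, 200 genes, with genes 0-4
# truly differential between the two conditions.
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 200))
y = np.array([0, 0, 0, 1, 1, 1])
X[y == 1, :5] += 4.0  # spike in the true signal
ranking, weights = rank_genes(X, y)
print("top-ranked genes:", ranking[:5])
```

The paper replaces cross-validation with a simulation-based cutoff on this rank-ordered list; only the ranking step is sketched here.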

  7. PCA based feature reduction to improve the accuracy of decision tree c4.5 classification

    Science.gov (United States)

    Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.

    2018-03-01

    Splitting on an attribute is a major step in Decision Tree C4.5 classification. However, this process by itself does not remove irrelevant features, which leads to a major problem in decision tree classification: over-fitting caused by noisy data and irrelevant features. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in classification modeling; it is intended to remove irrelevant data in order to improve accuracy. A feature reduction framework simplifies high-dimensional data to low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant, non-correlated feature subsets. We use principal component analysis (PCA) for feature reduction, to obtain non-correlated features, and the Decision Tree C4.5 algorithm for classification. From experiments conducted on the UCI cervical cancer data set, with 858 instances and 36 attributes, we evaluated the performance of our framework based on accuracy, specificity and precision. Experimental results show that our proposed framework is robust in enhancing classification accuracy, with an accuracy rate of 90.70%.
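The PCA reduction step this framework relies on can be sketched as follows; this is generic PCA via SVD, not the authors' code, and the data are synthetic stand-ins for the cervical cancer attributes:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project standardized data onto its top principal components,
    yielding a lower-dimensional, non-correlated feature set. (The C4.5
    classifier that would consume the output is not reproduced here.)"""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    # SVD of the standardized matrix gives the principal axes directly.
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    components = Vt[:n_components]
    scores = Xs @ components.T                      # reduced features
    explained = (S**2 / (S**2).sum())[:n_components]  # variance ratios
    return scores, explained

# Hypothetical data roughly matching the abstract: 858 instances,
# 36 correlated attributes, reduced to 10 non-correlated features.
rng = np.random.default_rng(0)
X = rng.normal(size=(858, 36)) @ rng.normal(size=(36, 36))
scores, explained = pca_reduce(X, n_components=10)
print(scores.shape, explained.sum().round(3))
```

The reduced `scores` matrix would then be fed to the decision tree in place of the raw attributes.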

  8. eSensor: an electrochemical detection-based DNA microarray technology enabling sample-to-answer molecular diagnostics

    Science.gov (United States)

    Liu, Robin H.; Longiaru, Mathew

    2009-05-01

    DNA microarrays are becoming a widespread tool in life science and drug screening due to the many benefits of miniaturization and integration, permitting highly multiplexed DNA analysis. Recently, the development of new detection methods and simplified methodologies has rapidly expanded the use of microarray technologies from predominantly gene expression analysis into the arena of diagnostics. Osmetech's eSensor® is an electrochemical detection platform based on a low-to-medium density DNA hybridization array on a cost-effective printed circuit board substrate. eSensor® has been cleared by the FDA for warfarin sensitivity testing and cystic fibrosis carrier detection. Other genetic-based diagnostic and infectious disease detection tests are under development. The eSensor® platform eliminates the need for an expensive laser-based optical system and fluorescent reagents, allowing hybridization and detection to be performed in a single, small instrument without any fluidic processing and handling. Furthermore, the platform is readily adaptable to on-chip sample-to-answer genetic analyses using microfluidics technology. By providing a cost-effective solution for direct sample-to-answer genetic analysis, eSensor® has potential impact in the fields of point-of-care genetic analysis, environmental testing, and biological warfare agent detection.

  9. Microarray-based identification of clinically relevant vaginal bacteria in relation to bacterial vaginosis

    NARCIS (Netherlands)

    Dols, J.A.M.; Smit, P.W.; Kort, R.; Reid, G.; Schuren, F.H.J.; Tempelman, H.; Bontekoe, T.R.; Korporaal, H.; Boon, M.E.

    2011-01-01

    Objective: The objective was to examine the use of a tailor-made DNA microarray containing probes representing the vaginal microbiota to examine bacterial vaginosis. Study Design: One hundred one women attending a health center for HIV testing in South Africa were enrolled. Stained, liquid-based

  10. Microarray-based identification of clinically relevant vaginal bacteria in relation to bacterial vaginosis

    NARCIS (Netherlands)

    Dols, Joke A M; Smit, Pieter W; Kort, Remco; Reid, Gregor; Schuren, Frank H J; Tempelman, Hugo; Bontekoe, Tj Romke; Korporaal, Hans; Boon, Mathilde E

    OBJECTIVE: The objective was to examine the use of a tailor-made DNA microarray containing probes representing the vaginal microbiota to examine bacterial vaginosis. STUDY DESIGN: One hundred one women attending a health center for HIV testing in South Africa were enrolled. Stained, liquid-based

  11. Annotating breast cancer microarray samples using ontologies

    Science.gov (United States)

    Liu, Hongfang; Li, Xin; Yoon, Victoria; Clarke, Robert

    2008-01-01

    As the most common cancer among women, breast cancer results from the accumulation of mutations in essential genes. Recent advances in high-throughput gene expression microarray technology have inspired researchers to use the technology to assist breast cancer diagnosis, prognosis, and treatment prediction. However, the high dimensionality of microarray experiments and the public availability of data from many experiments have caused inconsistencies, which motivated the development of controlled terminologies and ontologies for annotating microarray experiments, such as the standard Microarray Gene Expression Data (MGED) ontology (MO). In this paper, we developed BCM-CO, an ontology tailored specifically for indexing clinical annotations of breast cancer microarray samples, from the NCI Thesaurus. Our research showed that the coverage of the NCI Thesaurus is very limited with respect to (i) terms used by researchers to describe breast cancer histology (covering 22 out of 48 histology terms); (ii) breast cancer cell lines (covering one out of 12 cell lines); and (iii) classes corresponding to breast cancer grading and staging. By incorporating a wider range of these terms into BCM-CO, we were able to index breast cancer microarray samples from GEO using BCM-CO and the MGED ontology, and we developed a prototype system with a web interface that allows the retrieval of microarray data based on the ontology annotations. PMID:18999108

  12. The MGED Ontology: a resource for semantics-based description of microarray experiments.

    Science.gov (United States)

    Whetzel, Patricia L; Parkinson, Helen; Causton, Helen C; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Game, Laurence; Heiskanen, Mervi; Morrison, Norman; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Taylor, Chris; White, Joseph; Stoeckert, Christian J

    2006-04-01

    The generation of large amounts of microarray data and the need to share these data bring challenges for both data management and annotation, and highlight the need for standards. MIAME specifies the minimum information needed to describe a microarray experiment, and the Microarray Gene Expression Object Model (MAGE-OM) and the resulting MAGE-ML provide a mechanism to standardize data representation for data exchange; however, a common terminology for data annotation is needed to support these standards. Here we describe the MGED Ontology (MO) developed by the Ontology Working Group of the Microarray Gene Expression Data (MGED) Society. The MO provides terms for annotating all aspects of a microarray experiment, from the design of the experiment and array layout, through to the preparation of the biological sample and the protocols used to hybridize the RNA and analyze the data. The MO was developed to provide terms for annotating experiments in line with the MIAME guidelines, i.e. to provide the semantics to describe a microarray experiment according to the concepts specified in MIAME. The MO does not attempt to incorporate terms from existing ontologies, e.g. those that deal with anatomical parts or developmental stage terms, but provides a framework to reference terms in other ontologies and therefore facilitates the use of ontologies in microarray data annotation. The MGED Ontology version 1.2.0 is available as a file in both DAML and OWL formats at http://mged.sourceforge.net/ontologies/index.php. Release notes and annotation examples are provided. The MO is also provided via the NCICB's Enterprise Vocabulary System (http://nciterms.nci.nih.gov/NCIBrowser/Dictionary.do). Stoeckrt@pcbi.upenn.edu Supplementary data are available at Bioinformatics online.

  13. Deep learning for tissue microarray image-based outcome prediction in patients with colorectal cancer

    Science.gov (United States)

    Bychkov, Dmitrii; Turkki, Riku; Haglund, Caj; Linder, Nina; Lundin, Johan

    2016-03-01

    Recent advances in computer vision enable increasingly accurate automated pattern classification. In the current study we evaluate whether a convolutional neural network (CNN) can be trained to predict disease outcome in patients with colorectal cancer based on images of tumor tissue microarray samples. We compare the prognostic accuracy of CNN features extracted from the whole, unsegmented tissue microarray spot image with that of CNN features extracted from the epithelial and non-epithelial compartments, respectively. The prognostic accuracy of visually assessed histologic grade is used as a reference. The image data set consists of digitized hematoxylin-eosin (H&E) stained tissue microarray samples obtained from 180 patients with colorectal cancer. The patient samples represent a variety of histological grades, have data available on a series of clinicopathological variables including long-term outcome, and carry ground truth annotations performed by experts. The CNN features extracted from images of the epithelial tissue compartment significantly predicted outcome (hazard ratio (HR) 2.08; CI95% 1.04-4.16; area under the curve (AUC) 0.66) in a test set of 60 patients, as compared to the CNN features extracted from unsegmented images (HR 1.67; CI95% 0.84-3.31, AUC 0.57) and visually assessed histologic grade (HR 1.96; CI95% 0.99-3.88, AUC 0.61). In conclusion, a deep-learning classifier can be trained to predict outcome of colorectal cancer based on images of H&E stained tissue microarray samples, and the CNN features extracted from the epithelial compartment alone yielded prognostic discrimination comparable to that of visually determined histologic grade.

  14. Automatic Identification and Quantification of Extra-Well Fluorescence in Microarray Images.

    Science.gov (United States)

    Rivera, Robert; Wang, Jie; Yu, Xiaobo; Demirkan, Gokhan; Hopper, Marika; Bian, Xiaofang; Tahsin, Tasnia; Magee, D Mitchell; Qiu, Ji; LaBaer, Joshua; Wallstrom, Garrick

    2017-11-03

    In recent studies involving NAPPA microarrays, extra-well fluorescence is used as a key measure for identifying disease biomarkers, because there is evidence that it correlates better with strong antibody responses than statistical analysis of intraspot intensity. Because this feature is not well quantified by traditional image analysis software, identification and quantification of extra-well fluorescence are performed manually, which is both time-consuming and highly susceptible to variation between raters. A system that could automate this task efficiently and effectively would greatly improve the process of data acquisition in microarray studies, thereby accelerating the discovery of disease biomarkers. In this study, we experimented with different machine learning methods, as well as novel heuristics, for identifying spots exhibiting extra-well fluorescence (rings) in microarray images and assigning each ring a grade of 1-5 based on its intensity and morphology. The sensitivity of our final system for identifying rings was 72% at 99% specificity and 98% at 92% specificity. Our system performs this task significantly faster than a human while maintaining high performance, and therefore represents a valuable tool for microarray image analysis.

  15. Polyadenylation state microarray (PASTA) analysis.

    Science.gov (United States)

    Beilharz, Traude H; Preiss, Thomas

    2011-01-01

    Nearly all eukaryotic mRNAs terminate in a poly(A) tail that serves important roles in mRNA utilization. In the cytoplasm, the poly(A) tail promotes both mRNA stability and translation, and these functions are frequently regulated through changes in tail length. To identify the scope of poly(A) tail length control in a transcriptome, we developed the polyadenylation state microarray (PASTA) method. It involves the purification of mRNA based on poly(A) tail length using thermal elution from poly(U) sepharose, followed by microarray analysis of the resulting fractions. In this chapter we detail our PASTA approach and describe some methods for bulk and mRNA-specific poly(A) tail length measurements of use to monitor the procedure and independently verify the microarray data.

  16. cDNA microarray screening in food safety

    International Nuclear Information System (INIS)

    Roy, Sashwati; Sen, Chandan K.

    2006-01-01

    The cDNA microarray technology and related bioinformatics tools present a wide range of novel application opportunities, and the technology may be productively applied to address food safety. In this mini-review article, we present an update highlighting late-breaking discoveries that demonstrate the vitality of cDNA microarray technology as a tool to analyze food safety with reference to microbial pathogens and genetically modified foods. In order to bring microarray technology to mainstream food safety, it is important to develop robust, user-friendly tools that may be applied in a field setting. In addition, there needs to be a standardized process for regulatory agencies to interpret and act upon microarray-based data. The cDNA microarray approach is an emergent technology in diagnostics; its value lies in being able to provide complementary molecular insight when employed alongside traditional tests for food safety, as part of a more comprehensive battery of tests.

  17. Characterization of adjacent breast tumors using oligonucleotide microarrays

    International Nuclear Information System (INIS)

    Unger, Meredith A; Rishi, Mazhar; Clemmer, Virginia B; Hartman, Jennifer L; Keiper, Elizabeth A; Greshock, Joel D; Chodosh, Lewis A; Liebman, Michael N; Weber, Barbara L

    2001-01-01

    Current methodology often cannot distinguish second primary breast cancers from multifocal disease, a potentially important distinction for clinical management. In the present study we evaluated the use of oligonucleotide-based microarray analysis in determining the clonality of tumors by comparing gene expression profiles. Total RNA was extracted from two tumors with no apparent physical connection that were located in the right breast of an 87-year-old woman diagnosed with invasive ductal carcinoma (IDC). The RNA was hybridized to the Affymetrix Human Genome U95A GeneChip® (12,500 known human genes) and analyzed using the GeneChip Analysis Suite® 3.3 (Affymetrix, Inc, Santa Clara, CA, USA) and JMPIN® 3.2.6 (SAS Institute, Inc, Cary, NC, USA). Gene expression profiles of tumors from five additional patients were compared in order to evaluate the heterogeneity in gene expression between tumors with similar clinical characteristics. The adjacent breast tumors had a pairwise correlation coefficient of 0.987, and were essentially indistinguishable by microarray analysis. Analysis of gene expression profiles from different individuals, however, generated a pairwise correlation coefficient of 0.710. Transcriptional profiling may be a useful diagnostic tool for determining tumor clonality and heterogeneity, and may ultimately impact therapeutic decision making.
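The pairwise-correlation comparison at the heart of this analysis is easy to reproduce on synthetic profiles. The probe count below matches the U95A array, but the profiles and the noise level are invented:

```python
import numpy as np

def profile_correlation(expr_a, expr_b):
    """Pearson correlation between two tumors' expression profiles, the
    clonality measure used in the study (0.987 for the adjacent tumors
    vs. 0.710 across individuals)."""
    return float(np.corrcoef(expr_a, expr_b)[0, 1])

# Hypothetical profiles over 12,500 probes: tumor_b is tumor_a plus
# small noise (a clonal pair); tumor_c is an independent profile.
rng = np.random.default_rng(0)
tumor_a = rng.lognormal(5, 1, 12_500)
tumor_b = tumor_a + rng.normal(0, tumor_a.std() * 0.1, 12_500)
tumor_c = rng.lognormal(5, 1, 12_500)

print("clonal pair:     ", round(profile_correlation(tumor_a, tumor_b), 3))
print("different donors:", round(profile_correlation(tumor_a, tumor_c), 3))
```

Where the threshold between "clonal" and "independent" lies in practice is an empirical question the study addresses with real tumor pairs.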

  18. User-centered design to improve clinical decision support in primary care.

    Science.gov (United States)

    Brunner, Julian; Chuang, Emmeline; Goldzweig, Caroline; Cain, Cindy L; Sugar, Catherine; Yano, Elizabeth M

    2017-08-01

    A growing literature has demonstrated the ability of user-centered design to make clinical decision support systems more effective and easier to use. However, studies of user-centered design have rarely examined more than a handful of sites at a time, and have frequently neglected the implementation climate and organizational resources that influence clinical decision support. The inclusion of such factors was identified by a systematic review as "the most important improvement that can be made in health IT evaluations." (1) Identify the prevalence of four user-centered design practices at United States Veterans Affairs (VA) primary care clinics and assess the perceived utility of clinical decision support at those clinics; (2) Evaluate the association between those user-centered design practices and the perceived utility of clinical decision support. We analyzed clinic-level survey data collected in 2006-2007 from 170 VA primary care clinics. We examined four user-centered design practices: 1) pilot testing, 2) provider satisfaction assessment, 3) formal usability assessment, and 4) analysis of impact on performance improvement. We used a regression model to evaluate the association between user-centered design practices and the perceived utility of clinical decision support, while accounting for other important factors at those clinics, including implementation climate, available resources, and structural characteristics. We also examined associations separately at community-based clinics and at hospital-based clinics. User-centered design practices for clinical decision support varied across clinics: 74% conducted pilot testing, 62% conducted provider satisfaction assessment, 36% conducted a formal usability assessment, and 79% conducted an analysis of impact on performance improvement. Overall perceived utility of clinical decision support was high, with a mean rating of 4.17 (±.67) out of 5 on a composite measure. "Analysis of impact on performance

  19. Improved Frame Mode Selection for AMR-WB+ Based on Decision Tree

    Science.gov (United States)

    Kim, Jong Kyu; Kim, Nam Soo

    In this letter, we propose a coding mode selection method for the AMR-WB+ audio coder based on a decision tree. In order to reduce computation while maintaining good performance, a decision tree classifier is adopted, with the closed-loop mode selection results as the target classification labels. The size of the decision tree is controlled by pruning, so the proposed method does not significantly increase the memory requirement. In an evaluation test on a database covering both speech and music material, the proposed method achieved much better mode selection accuracy than the open-loop mode selection module in AMR-WB+.

  20. Improved TOPSIS decision model for NPP emergencies

    International Nuclear Information System (INIS)

    Zhang Jin; Liu Feng; Huang Lian

    2011-01-01

    In this paper, an improved decision model is developed as a tool for responding to emergencies at nuclear power plants. Given the complexity of multi-attribute emergency decision-making for nuclear accidents, the improved TOPSIS method is used to build a decision-making model that integrates the subjective and objective weights of each evaluation index. A comparison between the results of this new model and two traditional methods, the fuzzy hierarchy analysis method and the weighted analysis method, demonstrates that the improved TOPSIS model has a better evaluation effect. (authors)
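The classical TOPSIS procedure the paper builds on can be sketched as follows. The three-option emergency matrix, index names, and weights are hypothetical, and the paper's specific combination of subjective and objective weights is not reproduced; it would enter through the `weights` argument:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with the classical TOPSIS procedure: vector
    normalization, weighting, distances to the ideal and negative-ideal
    solutions, and relative closeness."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)        # vector-normalize columns
    v = norm * weights                          # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)   # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)    # distance to negative-ideal
    return d_neg / (d_pos + d_neg)              # closeness in [0, 1]

# Hypothetical emergency options scored on three indices: projected dose
# (cost), evacuation time (cost), and sheltering capacity (benefit).
options = [[12.0, 4.0, 0.8],
           [ 5.0, 9.0, 0.6],
           [ 8.0, 6.0, 0.9]]
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([False, False, True])
closeness = topsis(options, weights, benefit)
print("closeness:", closeness.round(3), "best option:", int(closeness.argmax()))
```

The option with the highest closeness coefficient is recommended; changing the weight vector is exactly where a subjective/objective weighting scheme would alter the ranking.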

  1. Towards the integration, annotation and association of historical microarray experiments with RNA-seq.

    Science.gov (United States)

    Chavan, Shweta S; Bauer, Michael A; Peterson, Erich A; Heuck, Christoph J; Johann, Donald J

    2013-01-01

    Transcriptome analysis by microarrays has produced important advances in biomedicine. For instance, in multiple myeloma (MM), microarray approaches led to the development of an effective disease subtyping via cluster assignment, and a 70-gene risk score. Both enabled an improved molecular understanding of MM and have provided prognostic information for the purposes of clinical management. Many researchers are now transitioning to Next Generation Sequencing (NGS) approaches, and RNA-seq in particular, due to its discovery-based nature, improved sensitivity, and dynamic range. Additionally, RNA-seq allows for the analysis of gene isoforms, splice variants, and novel gene fusions. Given the voluminous amounts of historical microarray data, there is now a need to associate and integrate microarray and RNA-seq data via advanced bioinformatic approaches. Custom software was developed following a model-view-controller (MVC) approach to integrate Affymetrix probe set IDs and gene annotation information from a variety of sources. The tool employs an assortment of strategies to integrate, cross-reference, and associate microarray and RNA-seq datasets. Output from a variety of transcriptome reconstruction and quantitation tools (e.g., Cufflinks) can be directly integrated and/or associated with Affymetrix probe set data, as well as the necessary gene identifiers and/or symbols from a diversity of sources. Strategies are employed to maximize the annotation and cross-referencing process. Custom gene sets (e.g., the MM 70-gene risk score (GEP-70)) can be specified, and the tool can be directly assimilated into an RNA-seq pipeline. This novel bioinformatic approach, aiding the annotation and association of historical microarray data with richer RNA-seq data, is now assisting with the study of MM cancer biology.

  2. Robust embryo identification using first polar body single nucleotide polymorphism microarray-based DNA fingerprinting.

    Science.gov (United States)

    Treff, Nathan R; Su, Jing; Kasabwala, Natasha; Tao, Xin; Miller, Kathleen A; Scott, Richard T

    2010-05-01

    This study sought to validate a novel, minimally invasive system for embryo tracking by single nucleotide polymorphism microarray-based DNA fingerprinting of the first polar body. First polar body-based assignments of which embryos implanted and were delivered after multiple ET were 100% consistent with previously validated embryo DNA fingerprinting-based assignments. Copyright 2010 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  3. Application of improved topsis method to accident emergency decision-making at nuclear power station

    International Nuclear Information System (INIS)

    Zhang Jin; Cai Qi; Zhang Fan; Chang Ling

    2009-01-01

    Given the complexity of multi-attribute decision-making on nuclear accident emergencies, and by integrating the subjective and objective weights of each evaluation index, a decision-making model for emergency plans at nuclear power stations is established through application of an improved TOPSIS model. The testing results indicate that the improved TOPSIS-based multi-attribute decision-making yields better assessment results. (authors)

  4. A cell spot microarray method for production of high density siRNA transfection microarrays

    Directory of Open Access Journals (Sweden)

    Mpindi John-Patrick

    2011-03-01

    Full Text Available Abstract Background High-throughput RNAi screening is widely applied in biological research, but remains expensive, infrastructure-intensive and conversion of many assays to HTS applications in microplate format is not feasible. Results Here, we describe the optimization of a miniaturized cell spot microarray (CSMA method, which facilitates utilization of the transfection microarray technique for disparate RNAi analyses. To promote rapid adaptation of the method, the concept has been tested with a panel of 92 adherent cell types, including primary human cells. We demonstrate the method in the systematic screening of 492 GPCR coding genes for impact on growth and survival of cultured human prostate cancer cells. Conclusions The CSMA method facilitates reproducible preparation of highly parallel cell microarrays for large-scale gene knockdown analyses. This will be critical towards expanding the cell based functional genetic screens to include more RNAi constructs, allow combinatorial RNAi analyses, multi-parametric phenotypic readouts or comparative analysis of many different cell types.

  5. Simulation of microarray data with realistic characteristics

    Directory of Open Access Journals (Sweden)

    Lehmussola Antti

    2006-07-01

    Full Text Available Abstract Background Microarray technologies have become common tools in biological research. As a result, a need for effective computational methods for data analysis has emerged. Numerous different algorithms have been proposed for analyzing the data. However, an objective evaluation of the proposed algorithms is not possible due to the lack of biological ground truth information. To overcome this fundamental problem, the use of simulated microarray data for algorithm validation has been proposed. Results We present a microarray simulation model which can be used to validate different kinds of data analysis algorithms. The proposed model is unique in the sense that it includes all the steps that affect the quality of real microarray data. These steps include the simulation of biological ground truth data, applying biological and measurement technology specific error models, and finally simulating the microarray slide manufacturing and hybridization. After all these steps are taken into account, the simulated data has realistic biological and statistical characteristics. The applicability of the proposed model is demonstrated by several examples. Conclusion The proposed microarray simulation model is modular and can be used in different kinds of applications. It includes several error models that have been proposed earlier and it can be used with different types of input data. The model can be used to simulate both spotted two-channel and oligonucleotide based single-channel microarrays. All this makes the model a valuable tool for example in validation of data analysis algorithms.

  6. Development and application of a microarray meter tool to optimize microarray experiments

    Directory of Open Access Journals (Sweden)

    Rouse Richard JD

    2008-07-01

    Full Text Available Abstract Background Successful microarray experimentation requires a complex interplay between the slide chemistry, the printing pins, the nucleic acid probes and targets, and the hybridization milieu. Optimization of these parameters and a careful evaluation of emerging slide chemistries are a prerequisite to any large-scale array fabrication effort. We have developed a 'microarray meter' tool which assesses the inherent variations associated with microarray measurement prior to embarking on large-scale projects. Findings The microarray meter consists of nucleic acid targets (reference and dynamic range controls) and probe components. Different plate designs containing identical probe material were formulated to accommodate different robotic and pin designs. We examined the variability in probe quality and quantity (as judged by the amount of DNA printed and remaining post-hybridization) using three robots equipped with capillary printing pins. Discussion The generation of microarray data with minimal variation requires consistent quality control of the (DNA) microarray manufacturing and experimental processes. Spot reproducibility is a measure primarily of the variations associated with printing. The microarray meter assesses array quality by measuring the DNA content for every feature. It provides a post-hybridization analysis of array quality by scoring probe performance using three metrics: (a) a measure of variability in the signal intensities, (b) a measure of the signal dynamic range, and (c) a measure of variability of the spot morphologies.

  7. Microarray-Based Identification of Transcription Factor Target Genes

    NARCIS (Netherlands)

    Gorte, M.; Horstman, A.; Page, R.B.; Heidstra, R.; Stromberg, A.; Boutilier, K.A.

    2011-01-01

    Microarray analysis is widely used to identify transcriptional changes associated with genetic perturbation or signaling events. Here we describe its application in the identification of plant transcription factor target genes with emphasis on the design of suitable DNA constructs for controlling TF

  8. Microarray-based DNA methylation study of Ewing's sarcoma of the bone.

    Science.gov (United States)

    Park, Hye-Rim; Jung, Woon-Won; Kim, Hyun-Sook; Park, Yong-Koo

    2014-10-01

    Alterations in DNA methylation patterns are a hallmark of malignancy. However, the majority of epigenetic studies of Ewing's sarcoma have focused on the analysis of only a few candidate genes. Comprehensive studies are thus lacking and are required. The aim of the present study was to identify novel methylation markers in Ewing's sarcoma using microarray analysis. The current study reports the microarray-based DNA methylation study of 1,505 CpG sites of 807 cancer-related genes from 69 Ewing's sarcoma samples. The Illumina GoldenGate Methylation Cancer Panel I microarray was used, and with the appropriate controls (n=14), a total of 92 hypermethylated genes were identified in the Ewing's sarcoma samples. The majority of the hypermethylated genes were associated with cell adhesion, cell regulation, development and signal transduction. The overall methylation mean values were compared between patients who survived and those who did not. The overall methylation mean was significantly higher in the patients who did not survive (0.25±0.03) than in those who did (0.22±0.05) (P=0.0322). However, the overall methylation mean was not found to significantly correlate with age, gender or tumor location. GDF10, OSM, APC and HOXA11 were the most significant differentially-methylated genes; however, their methylation levels were not found to significantly correlate with the survival rate. The DNA methylation profile of Ewing's sarcoma was characterized and 92 genes that were significantly hypermethylated were detected. A trend towards a more aggressive behavior was identified in the methylated group. The results of this study indicated that methylation may be significant in the development of Ewing's sarcoma.
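A group comparison like the survivor/non-survivor test above can be reproduced from summary statistics alone with Welch's unequal-variance t statistic. The sketch below uses the abstract's means and SDs, but the group sizes (20 vs. 49) are hypothetical, since the abstract does not report them; the p-value step is omitted because it needs the t distribution.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    from per-group summary statistics (mean, SD, sample size)."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Abstract's group means/SDs; group sizes are hypothetical placeholders.
t, df = welch_t(0.25, 0.03, 20, 0.22, 0.05, 49)
```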

  9. BioconductorBuntu: a Linux distribution that implements a web-based DNA microarray analysis server.

    Science.gov (United States)

    Geeleher, Paul; Morris, Dermot; Hinde, John P; Golden, Aaron

    2009-06-01

    BioconductorBuntu is a custom distribution of Ubuntu Linux that automatically installs a server-side microarray processing environment, providing a user-friendly web-based GUI to many of the tools developed by the Bioconductor Project, accessible locally or across a network. System installation is via booting off a CD image or by using a Debian package provided to upgrade an existing Ubuntu installation. In its current version, several microarray analysis pipelines are supported, including oligonucleotide and dual- or single-dye experiments, with post-processing by Gene Set Enrichment Analysis. BioconductorBuntu is designed to be extensible, by server-side integration of further relevant Bioconductor modules as required, facilitated by its straightforward underlying Python-based infrastructure. BioconductorBuntu offers an ideal environment for the development of processing procedures to facilitate the analysis of next-generation sequencing datasets. BioconductorBuntu is available for download under a Creative Commons license, along with additional documentation and a tutorial, from http://bioinf.nuigalway.ie.

  10. Correction of technical bias in clinical microarray data improves concordance with known biological information

    DEFF Research Database (Denmark)

    Eklund, Aron Charles; Szallasi, Zoltan Imre

    2008-01-01

    The performance of gene expression microarrays has been well characterized using controlled reference samples, but the performance on clinical samples remains less clear. We identified sources of technical bias affecting many genes in concert, thus causing spurious correlations in clinical data sets and false associations between genes and clinical variables. We developed a method to correct for technical bias in clinical microarray data, which increased concordance with known biological relationships in multiple data sets.
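The authors' correction method is not reproduced here. As a minimal, deliberately crude illustration of removing an array-wide technical shift that affects many genes in concert, per-array median centering can be sketched:

```python
from statistics import median

def median_center(matrix):
    """Subtract each array's median log-expression (rows = genes,
    columns = arrays). This removes array-wide additive shifts, one
    simple class of technical bias; it is a stand-in for illustration,
    not the method described in the abstract."""
    n_arrays = len(matrix[0])
    meds = [median(row[j] for row in matrix) for j in range(n_arrays)]
    return [[x - meds[j] for j, x in enumerate(row)] for row in matrix]
```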

  11. Development and application of an antibody-based protein microarray to assess physiological stress in grizzly bears (Ursus arctos).

    Science.gov (United States)

    Carlson, Ruth I; Cattet, Marc R L; Sarauer, Bryan L; Nielsen, Scott E; Boulanger, John; Stenhouse, Gordon B; Janz, David M

    2016-01-01

    A novel antibody-based protein microarray was developed that simultaneously determines expression of 31 stress-associated proteins in skin samples collected from free-ranging grizzly bears (Ursus arctos) in Alberta, Canada. The microarray determines proteins belonging to four broad functional categories associated with stress physiology: hypothalamic-pituitary-adrenal axis proteins, apoptosis/cell cycle proteins, cellular stress/proteotoxicity proteins and oxidative stress/inflammation proteins. Small skin samples (50-100 mg) were collected from captured bears using biopsy punches. Proteins were isolated and labelled with fluorescent dyes, with labelled protein homogenates loaded onto microarrays to hybridize with antibodies. Relative protein expression was determined by comparison with a pooled standard skin sample. The assay was sensitive, requiring 80 µg of protein per sample to be run in triplicate on the microarray. Intra-array and inter-array coefficients of variation for individual proteins were generally bears. This suggests that remotely delivered biopsy darts could be used in future sampling. Using generalized linear mixed models, certain proteins within each functional category demonstrated altered expression with respect to differences in year, season, geographical sampling location within Alberta and bear biological parameters, suggesting that these general variables may influence expression of specific proteins in the microarray. Our goal is to apply the protein microarray as a conservation physiology tool that can detect, evaluate and monitor physiological stress in grizzly bears and other species at risk over time in response to environmental change.

  12. A Training Method to Improve Police Use of Force Decision Making

    Directory of Open Access Journals (Sweden)

    Judith P. Andersen

    2016-04-01

    Full Text Available Police safety and use of force decisions during critical incidents are an ongoing source of concern for both police practitioners and the public. Prior research in the area of police performance reveals that psychological and physiological stress responses during critical incidents can shape the outcome of the incident, either positively or negatively. The goal of this study was to test a training method to improve use of force decision making among police. This randomized controlled pilot study consisted of training officers to apply techniques to enhance psychological and physiological control during stressful critical incidents. Of a pool of 80 police officers, potential participants were invited based on equivalent age, years of experience, physiological characteristics (i.e., body mass index [BMI] and cardiovascular reactivity), and expertise. Results revealed that the intervention group displayed significantly better physiological control, situational awareness, and overall performance, and made a greater number of correct use of force decisions than officers in the control group (all ps < .01). The relevant improvements in use of force decision-making found in this pilot study indicate that this training method warrants further investigation. Improved use of force decision making directly translates into potential lifesaving decisions for police and the civilians they are working with.

  13. Single-cell multiple gene expression analysis based on single-molecule-detection microarray assay for multi-DNA determination

    Energy Technology Data Exchange (ETDEWEB)

    Li, Lu [School of Chemistry and Chemical Engineering, Shandong University, Jinan 250100 (China); Wang, Xianwei [School of Life Sciences, Shandong University, Jinan 250100 (China); Zhang, Xiaoli [School of Chemistry and Chemical Engineering, Shandong University, Jinan 250100 (China); Wang, Jinxing [School of Life Sciences, Shandong University, Jinan 250100 (China); Jin, Wenrui, E-mail: jwr@sdu.edu.cn [School of Chemistry and Chemical Engineering, Shandong University, Jinan 250100 (China)

    2015-01-07

    Highlights: • A single-molecule-detection (SMD) microarray for 10 samples is fabricated. • The SMD-based microarray assay (SMA) can determine 8 DNAs for each sample. • The limit of detection of the SMA is as low as 1.3 × 10⁻¹⁶ mol L⁻¹. • The SMA can be applied in single-cell multiple gene expression analysis. - Abstract: We report a novel ultrasensitive and highly selective single-molecule-detection microarray assay (SMA) for multiple DNA determination. In the SMA, a capture DNA (DNAc) microarray consisting of 10 subarrays with 9 spots for each subarray is fabricated on a silanized glass coverslip as the substrate. On the subarrays, the spot-to-spot spacing is 500 μm and each spot has a diameter of ∼300 μm. The sequence of the DNAcs on the 9 spots of a subarray is different, to determine 8 types of target DNAs (DNAts). Thus, 8 types of DNAts are captured by their complementary DNAcs at 8 spots of a subarray, respectively, and then labeled with quantum dots (QDs) attached to 8 types of detection DNAs (DNAds) with different sequences. The ninth spot is used to detect the blank value. In order to determine the same 8 types of DNAts in 10 samples, the 10 DNAc-modified subarrays on the microarray are identical. Fluorescence single-molecule images of the QD-labeled DNAts on each spot of the subarray are acquired using a home-made single-molecule microarray reader. The amounts of the DNAts are quantified by counting the bright dots from the QDs. For a microarray, 8 types of DNAts in 10 samples can be quantified in parallel. The limit of detection of the SMA for DNA determination is as low as 1.3 × 10⁻¹⁶ mol L⁻¹. The SMA for multi-DNA determination can also be applied in single-cell multiple gene expression analysis through quantification of complementary DNAs (cDNAs) corresponding to multiple messenger RNAs (mRNAs) in single cells. To do so, total RNA in single cells is extracted and reverse-transcribed into their cDNAs. Three
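Quantification by "counting the bright dots" is essentially connected-component counting on a thresholded image. The sketch below shows that step only, on a toy 2-D intensity grid; the grid values and threshold are invented for illustration, and real single-molecule images need segmentation far more careful than a single fixed threshold.

```python
from collections import deque

def count_bright_dots(image, threshold):
    """Count 4-connected bright regions ('dots') in a 2-D intensity grid;
    each region is taken as one QD-labelled target molecule."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    dots = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                dots += 1                      # new dot found; flood-fill it
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not seen[ny][nx] and image[ny][nx] > threshold):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return dots
```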

  14. Semantics-based plausible reasoning to extend the knowledge coverage of medical knowledge bases for improved clinical decision support.

    Science.gov (United States)

    Mohammadhassanzadeh, Hossein; Van Woensel, William; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2017-01-01

    %, and 20% of missing values. This expansion in the KB coverage allowed solving complex disease diagnostic queries that were previously unresolvable, without losing the correctness of the answers. However, compared to deductive reasoning, data-intensive plausible reasoning mechanisms yield a significant performance overhead. We observed that plausible reasoning approaches, by generating tentative inferences and leveraging domain knowledge of experts, allow us to extend the coverage of medical knowledge bases, resulting in improved clinical decision support. Second, by leveraging OWL ontological knowledge, we are able to increase the expressivity and accuracy of plausible reasoning methods. Third, our approach is applicable to clinical decision support systems for a range of chronic diseases.

  15. An Intelligent Fleet Condition-Based Maintenance Decision Making Method Based on Multi-Agent

    Directory of Open Access Journals (Sweden)

    Bo Sun

    2012-01-01

    Full Text Available According to the demand for condition-based maintenance online decision making among a mission-oriented fleet, an intelligent maintenance decision making method based on multi-agent systems and heuristic rules is proposed. The process of condition-based maintenance within an aircraft fleet (each aircraft containing one or more Line Replaceable Modules), based on multiple maintenance thresholds, is analyzed. Then the process is abstracted into a multi-agent model, a 2-layer model structure containing host negotiation and independent negotiation is established, and the heuristic rules applied to global and local maintenance decision making are proposed. Based on the Contract Net Protocol and the heuristic rules, the maintenance decision making algorithm is put forward. Finally, a fleet consisting of 10 aircraft on a 3-wave continuous mission is used to verify this method. Simulation results indicate that this method can improve the availability of the fleet, meet mission demands, rationalize the utilization of support resources and provide support for online maintenance decision making among a mission-oriented fleet.
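A single round of the Contract Net Protocol the abstract relies on can be sketched in a few lines: the host announces a task, agents bid, the cheapest valid bid wins. The agent names, the task string, and the cost estimates below are all hypothetical; the paper's algorithm layers heuristic rules and a 2-layer negotiation structure on top of this basic award step.

```python
def contract_net_round(task, agents):
    """One Contract Net round: announce `task`, collect a cost bid from
    each agent (None = refusal), award the contract to the cheapest
    bidder, or return None if nobody bids."""
    bids = {name: bid_fn(task) for name, bid_fn in agents.items()}
    valid = {name: cost for name, cost in bids.items() if cost is not None}
    return min(valid, key=valid.get) if valid else None

agents = {
    "aircraft-1": lambda task: 5.0,   # hypothetical cost estimators
    "aircraft-2": lambda task: 3.0,
    "aircraft-3": lambda task: None,  # unavailable: refuses to bid
}
winner = contract_net_round("replace LRM-7", agents)
```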

  16. Feasibility of web-based decision aids in neurological patients

    NARCIS (Netherlands)

    van Til, Janine Astrid; Drossaert, Constance H.C.; Renzenbrink, Gerbert J.; Snoek, Govert J.; Dijkstra, Evelien; Stiggelbout, Anne M.; IJzerman, Maarten Joost

    2010-01-01

    Decision aids (DAs) may be helpful in improving patients' participation in medical decision-making. We investigated the potential for web-based DAs in a rehabilitation population. Two self-administered DAs focused on the treatment of acquired ankle-foot impairment in stroke and the treatment of

  17. An Intelligent Clinical Decision Support System for Patient-Specific Predictions to Improve Cervical Intraepithelial Neoplasia Detection

    Directory of Open Access Journals (Sweden)

    Panagiotis Bountris

    2014-01-01

    Full Text Available Nowadays, there are molecular biology techniques providing information related to cervical cancer and its cause: the human Papillomavirus (HPV), including DNA microarrays identifying HPV subtypes, mRNA techniques such as nucleic acid based amplification or flow cytometry identifying E6/E7 oncogenes, and immunocytochemistry techniques such as overexpression of p16. Each one of these techniques has its own performance, limitations and advantages, thus a combinatorial approach via computational intelligence methods could exploit the benefits of each method and produce more accurate results. In this article we propose a clinical decision support system (CDSS), composed by artificial neural networks, intelligently combining the results of classic and ancillary techniques for diagnostic accuracy improvement. We evaluated this method on 740 cases with complete series of cytological assessment, molecular tests, and colposcopy examination. The CDSS demonstrated high sensitivity (89.4%), high specificity (97.1%), high positive predictive value (89.4%), and high negative predictive value (97.1%), for detecting cervical intraepithelial neoplasia grade 2 or worse (CIN2+). In comparison to the tests involved in this study and their combinations, the CDSS produced the most balanced results in terms of sensitivity, specificity, PPV, and NPV. The proposed system may reduce the referral rate for colposcopy and guide personalised management and therapeutic interventions.

  18. An intelligent clinical decision support system for patient-specific predictions to improve cervical intraepithelial neoplasia detection.

    Science.gov (United States)

    Bountris, Panagiotis; Haritou, Maria; Pouliakis, Abraham; Margari, Niki; Kyrgiou, Maria; Spathis, Aris; Pappas, Asimakis; Panayiotides, Ioannis; Paraskevaidis, Evangelos A; Karakitsos, Petros; Koutsouris, Dimitrios-Dionyssios

    2014-01-01

    Nowadays, there are molecular biology techniques providing information related to cervical cancer and its cause: the human Papillomavirus (HPV), including DNA microarrays identifying HPV subtypes, mRNA techniques such as nucleic acid based amplification or flow cytometry identifying E6/E7 oncogenes, and immunocytochemistry techniques such as overexpression of p16. Each one of these techniques has its own performance, limitations and advantages, thus a combinatorial approach via computational intelligence methods could exploit the benefits of each method and produce more accurate results. In this article we propose a clinical decision support system (CDSS), composed by artificial neural networks, intelligently combining the results of classic and ancillary techniques for diagnostic accuracy improvement. We evaluated this method on 740 cases with complete series of cytological assessment, molecular tests, and colposcopy examination. The CDSS demonstrated high sensitivity (89.4%), high specificity (97.1%), high positive predictive value (89.4%), and high negative predictive value (97.1%), for detecting cervical intraepithelial neoplasia grade 2 or worse (CIN2+). In comparison to the tests involved in this study and their combinations, the CDSS produced the most balanced results in terms of sensitivity, specificity, PPV, and NPV. The proposed system may reduce the referral rate for colposcopy and guide personalised management and therapeutic interventions.

  19. Investigation of Parameters that Affect the Success Rate of Microarray-Based Allele-Specific Hybridization Assays

    DEFF Research Database (Denmark)

    Poulsen, Lena; Søe, Martin Jensen; Moller, Lisbeth Birk

    2011-01-01

    Background: The development of microarray-based genetic tests for diseases that are caused by known mutations is becoming increasingly important. The key obstacle to developing functional genotyping assays is that such mutations need to be genotyped regardless of their location in genomic regions...

  20. Interval-Valued Hesitant Fuzzy Multiattribute Group Decision Making Based on Improved Hamacher Aggregation Operators and Continuous Entropy

    Directory of Open Access Journals (Sweden)

    Jun Liu

    2017-01-01

    Full Text Available Under the interval-valued hesitant fuzzy information environment, we investigate a multiattribute group decision making (MAGDM) method with continuous entropy weights and improved Hamacher information aggregation operators. Firstly, we introduce the axiomatic definition of entropy for interval-valued hesitant fuzzy elements (IVHFEs) and construct a continuous entropy formula on the basis of the continuous ordered weighted averaging (COWA) operator. Then, based on the Hamacher t-norm and t-conorm, the adjusted operational laws for IVHFEs are defined. In order to aggregate interval-valued hesitant fuzzy information, some new improved interval-valued hesitant fuzzy Hamacher aggregation operators are investigated, including the improved interval-valued hesitant fuzzy Hamacher ordered weighted averaging (I-IVHFHOWA) operator and the improved interval-valued hesitant fuzzy Hamacher ordered weighted geometric (I-IVHFHOWG) operator, the desirable properties of which are discussed. In addition, the relationship among these proposed operators is analyzed in detail. Applying the continuous entropy and the proposed operators, an approach to MAGDM is developed. Finally, a numerical example for emergency operating center (EOC) selection is provided, and comparative analyses with existing methods are performed to demonstrate that the proposed approach is both valid and practical to deal with group decision making problems.
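The COWA operator that underlies the continuous entropy formula has a simple closed form for a single interval: with attitudinal character λ in [0, 1], the interval [a, b] collapses to λ·b + (1−λ)·a. The sketch below shows only this building block plus a naive averaging over an IVHFE's intervals; the paper's improved Hamacher operators and entropy formula are substantially more involved and are not reproduced here.

```python
def cowa(interval, attitude):
    """COWA value of an interval [a, b] under attitudinal character
    `attitude` in [0, 1]: attitude*b + (1 - attitude)*a
    (1 = optimistic, 0 = pessimistic)."""
    a, b = interval
    return attitude * b + (1 - attitude) * a

def ivhfe_score(ivhfe, attitude=0.5):
    """Collapse an interval-valued hesitant fuzzy element (a set of
    possible membership intervals) to one score by averaging the COWA
    values of its intervals -- a naive reduction for illustration."""
    values = [cowa(iv, attitude) for iv in ivhfe]
    return sum(values) / len(values)
```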

  1. Bayesian risk-based decision method for model validation under uncertainty

    International Nuclear Information System (INIS)

    Jiang Xiaomo; Mahadevan, Sankaran

    2007-01-01

    This paper develops a decision-making methodology for computational model validation, considering the risk of using the current model, data support for the current model, and cost of acquiring new information to improve the model. A Bayesian decision theory-based method is developed for this purpose, using a likelihood ratio as the validation metric for model assessment. An expected risk or cost function is defined as a function of the decision costs, and the likelihood and prior of each hypothesis. The risk is minimized through correctly assigning experimental data to two decision regions based on the comparison of the likelihood ratio with a decision threshold. A Bayesian validation metric is derived based on the risk minimization criterion. Two types of validation tests are considered: pass/fail tests and system response value measurement tests. The methodology is illustrated for the validation of reliability prediction models in a tension bar and an engine blade subjected to high cycle fatigue. The proposed method can effectively integrate optimal experimental design into model validation to simultaneously reduce the cost and improve the accuracy of reliability model assessment.
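The core decision rule of this abstract (compare a likelihood ratio against a threshold determined by priors and decision costs, assigning the data to one of two decision regions) can be sketched generically. The default priors and unit costs below are assumptions for the example, not values from the paper.

```python
def validate_model(lik_valid, lik_invalid, prior_valid=0.5,
                   cost_false_accept=1.0, cost_false_reject=1.0):
    """Risk-minimizing accept/reject decision: accept the model when the
    likelihood ratio P(data|valid)/P(data|invalid) exceeds the threshold
    set by the priors and decision costs."""
    likelihood_ratio = lik_valid / lik_invalid
    threshold = (cost_false_accept * (1.0 - prior_valid)) / (cost_false_reject * prior_valid)
    return "accept" if likelihood_ratio > threshold else "reject"
```

Raising `cost_false_accept` raises the threshold, so the same evidence can flip the decision to "reject"; this is the cost/accuracy trade-off the abstract describes.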

  2. Features of Computer-Based Decision Aids: Systematic Review, Thematic Synthesis, and Meta-Analyses

    Science.gov (United States)

    Krömker, Dörthe; Meguerditchian, Ari N; Tamblyn, Robyn

    2016-01-01

    Background Patient information and education, such as decision aids, are gradually moving toward online, computer-based environments. Considerable research has been conducted to guide content and presentation of decision aids. However, given the relatively new shift to computer-based support, little attention has been given to how multimedia and interactivity can improve upon paper-based decision aids. Objective The first objective of this review was to summarize published literature into a proposed classification of features that have been integrated into computer-based decision aids. Building on this classification, the second objective was to assess whether integration of specific features was associated with higher-quality decision making. Methods Relevant studies were located by searching MEDLINE, Embase, CINAHL, and CENTRAL databases. The review identified studies that evaluated computer-based decision aids for adults faced with preference-sensitive medical decisions and reported quality of decision-making outcomes. A thematic synthesis was conducted to develop the classification of features. Subsequently, meta-analyses were conducted based on standardized mean differences (SMD) from randomized controlled trials (RCTs) that reported knowledge or decisional conflict. Further subgroup analyses compared pooled SMDs for decision aids that incorporated a specific feature to other computer-based decision aids that did not incorporate the feature, to assess whether specific features improved quality of decision making. Results Of 3541 unique publications, 58 studies met the target criteria and were included in the thematic synthesis. The synthesis identified six features: content control, tailoring, patient narratives, explicit values clarification, feedback, and social support. A subset of 26 RCTs from the thematic synthesis was used to conduct the meta-analyses. 
As expected, computer-based decision aids performed better than usual care or alternative aids; however

  3. Features of Computer-Based Decision Aids: Systematic Review, Thematic Synthesis, and Meta-Analyses.

    Science.gov (United States)

    Syrowatka, Ania; Krömker, Dörthe; Meguerditchian, Ari N; Tamblyn, Robyn

    2016-01-26

    Patient information and education, such as decision aids, are gradually moving toward online, computer-based environments. Considerable research has been conducted to guide content and presentation of decision aids. However, given the relatively new shift to computer-based support, little attention has been given to how multimedia and interactivity can improve upon paper-based decision aids. The first objective of this review was to summarize published literature into a proposed classification of features that have been integrated into computer-based decision aids. Building on this classification, the second objective was to assess whether integration of specific features was associated with higher-quality decision making. Relevant studies were located by searching MEDLINE, Embase, CINAHL, and CENTRAL databases. The review identified studies that evaluated computer-based decision aids for adults faced with preference-sensitive medical decisions and reported quality of decision-making outcomes. A thematic synthesis was conducted to develop the classification of features. Subsequently, meta-analyses were conducted based on standardized mean differences (SMD) from randomized controlled trials (RCTs) that reported knowledge or decisional conflict. Further subgroup analyses compared pooled SMDs for decision aids that incorporated a specific feature to other computer-based decision aids that did not incorporate the feature, to assess whether specific features improved quality of decision making. Of 3541 unique publications, 58 studies met the target criteria and were included in the thematic synthesis. The synthesis identified six features: content control, tailoring, patient narratives, explicit values clarification, feedback, and social support. A subset of 26 RCTs from the thematic synthesis was used to conduct the meta-analyses. 
As expected, computer-based decision aids performed better than usual care or alternative aids; however, some features performed better than
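The meta-analytic machinery behind the pooled SMDs in the two records above is standard fixed-effect inverse-variance weighting, sketched below; the study effects and variances in the test are hypothetical numbers, and the review's subgroup comparisons add a further step not shown here.

```python
import math

def pooled_smd(studies):
    """Fixed-effect inverse-variance pooling of standardized mean
    differences; each study is a (smd, variance) pair. Returns the
    pooled SMD and its standard error."""
    weights = [1.0 / var for _, var in studies]
    pooled = sum(w * d for w, (d, _) in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se
```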

  4. Generalization of DNA microarray dispersion properties: microarray equivalent of t-distribution

    DEFF Research Database (Denmark)

    Novak, Jaroslav P; Kim, Seon-Young; Xu, Jun

    2006-01-01

    BACKGROUND: DNA microarrays are a powerful technology that can provide a wealth of gene expression data for disease studies, drug development, and a wide scope of other investigations. Because of the large volume and inherent variability of DNA microarray data, many new statistical methods have...

  5. Gene selection and classification for cancer microarray data based on machine learning and similarity measures

    Directory of Open Access Journals (Sweden)

    Liu Qingzhong

    2011-12-01

    Full Text Available Abstract Background Microarray data have a large number of variables and a small sample size. In microarray data analyses, two important issues are how to choose genes, which provide reliable and good prediction for disease status, and how to determine the final gene set that is best for classification. Associations among genetic markers mean one can exploit information redundancy to potentially reduce classification cost in terms of time and money. Results To deal with redundant information and improve classification, we propose a gene selection method, Recursive Feature Addition, which combines supervised learning and statistical similarity measures. To determine the final optimal gene set for prediction and classification, we propose an algorithm, Lagging Prediction Peephole Optimization. By using six benchmark microarray gene expression data sets, we compared Recursive Feature Addition with recently developed gene selection methods: Support Vector Machine Recursive Feature Elimination, Leave-One-Out Calculation Sequential Forward Selection and several others. Conclusions On average, with the use of popular learning machines including Nearest Mean Scaled Classifier, Support Vector Machine, Naive Bayes Classifier and Random Forest, Recursive Feature Addition outperformed other methods. Our studies also showed that Lagging Prediction Peephole Optimization is superior to a random strategy; Recursive Feature Addition with Lagging Prediction Peephole Optimization obtained better testing accuracies than the gene selection method varSelRF.
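The core idea of combining a relevance ranking with a similarity-based redundancy check can be sketched as a greedy forward selection. This is a simplified illustration of the Recursive-Feature-Addition idea, not the published algorithm: the relevance scores, the |correlation| cutoff of 0.9, and the gene data are all hypothetical.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def recursive_feature_addition(genes, scores, max_corr=0.9, k=3):
    """Greedy sketch: visit genes by decreasing relevance score and add a
    gene only if its |correlation| with every already-selected gene stays
    below max_corr (i.e., it is not redundant). Stop at k genes."""
    selected = []
    for name in sorted(scores, key=scores.get, reverse=True):
        if all(abs(pearson(genes[name], genes[s])) < max_corr for s in selected):
            selected.append(name)
        if len(selected) == k:
            break
    return selected
```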

  6. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł

    2010-01-01

    We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. The first kind of rules, called as inhibitory rules, are blocking only one decision value (i.e., they have all but one decisions from all possible decisions on their right hand sides). Contrary to this, any rule of the second kind, called as a bounded nondeterministic rule, can have on the right hand side only a few decisions. We show that both kinds of rules can be used for improving the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead of this, for any new object the considered algorithms extract from a given decision table efficiently some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in construction of rule based classifiers. We include the results of experiments showing that by combining rule based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.
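An inhibitory rule blocks exactly one decision value, so classification with such rules amounts to eliminating blocked decisions and keeping the survivors. The sketch below shows that elimination step with hypothetical attribute tests; the paper's lazy algorithms instead extract the needed rule information directly from the decision table per object, which is not modelled here.

```python
def inhibitory_classify(obj, rules, decisions):
    """Classification with inhibitory rules: each rule is a
    (condition, blocked_decision) pair; every rule whose condition fires
    on the object eliminates its blocked decision, and the surviving
    decisions are returned."""
    remaining = set(decisions)
    for condition, blocked in rules:
        if condition(obj):
            remaining.discard(blocked)
    return remaining

rules = [
    (lambda o: o["x"] > 0, "A"),   # hypothetical attribute tests
    (lambda o: o["y"] == 1, "B"),
]
survivors = inhibitory_classify({"x": 1, "y": 0}, rules, {"A", "B", "C"})
```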

  7. Comparing transformation methods for DNA microarray data

    NARCIS (Netherlands)

    Thygesen, Helene H.; Zwinderman, Aeilko H.

    2004-01-01

    Background: When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification the signal intensities are usually transformed and normalized in several steps in order to improve comparability and signal/noise ratio. These steps may include

  8. An algorithm for finding biologically significant features in microarray data based on a priori manifold learning.

    Directory of Open Access Journals (Sweden)

    Zena M Hira

    Full Text Available Microarray databases are a large source of genetic data, which, upon proper analysis, could enhance our understanding of biology and medicine. Many microarray experiments have been designed to investigate the genetic mechanisms of cancer, and analytical approaches have been applied in order to classify different types of cancer or distinguish between cancerous and non-cancerous tissue. However, microarrays are high-dimensional datasets with high levels of noise and this causes problems when using machine learning methods. A popular approach to this problem is to search for a set of features that will simplify the structure and to some degree remove the noise from the data. The most widely used approach to feature extraction is principal component analysis (PCA), which assumes a multivariate Gaussian model of the data. More recently, non-linear methods have been investigated. Among these, manifold learning algorithms, for example Isomap, aim to project the data from a higher-dimensional space onto a lower-dimensional one. We have proposed a priori manifold learning for finding a manifold in which a representative set of microarray data is fused with relevant data taken from the KEGG pathway database. Once the manifold has been constructed, the raw microarray data is projected onto it and clustering and classification can take place. In contrast to earlier fusion-based methods, the prior knowledge from the KEGG database is not used in, and does not bias, the classification process; it merely acts as an aid to find the best space in which to search the data. In our experiments we have found that using our new manifold method gives better classification results than using either PCA or conventional Isomap.

  9. In silico design and performance of peptide microarrays for breast cancer tumour-auto-antibody testing

    Directory of Open Access Journals (Sweden)

    Andreas Weinhäusel

    2012-06-01

    Full Text Available The simplicity and potential of minimally invasive testing using sera from patients makes auto-antibody based biomarkers a very promising tool for use in cancer diagnostics. Protein microarrays have been used for the identification of such auto-antibody signatures. Because high throughput protein expression and purification is laborious, synthetic peptides might be a good alternative for microarray generation and multiplexed analyses. In this study, we designed 1185 antigenic peptides, deduced from proteins expressed by 642 cDNA expression clones found to be sero-reactive in both breast tumour patients and controls. The sero-reactive proteins and the corresponding peptides were used for the production of protein and peptide microarrays. Serum samples from females with benign and malignant breast tumours and healthy control sera (n=16 per group) were then analysed. Correct classification of the serum samples on peptide microarrays was 78% for discrimination of ‘malignant versus healthy controls’, 72% for ‘benign versus malignant’ and 94% for ‘benign versus controls’. On protein arrays, correct classification for these contrasts was 69%, 59% and 59%, respectively. The over-representation analysis of the classifiers derived from class prediction showed enrichment of genes associated with ribosomes, spliceosomes, endocytosis and the pentose phosphate pathway. Sequence analyses of the peptides with the highest sero-reactivity demonstrated enrichment of the zinc-finger domain. Peptides’ sero-reactivities were found to be negatively correlated with hydrophobicity and positively correlated with positive charge, high inter-residue protein contact energies and a secondary structure propensity bias. This study hints at the possibility of using in silico designed antigenic peptide microarrays as an alternative to protein microarrays for the improvement of tumour auto-antibody based diagnostics.

  10. Fibre optic microarrays.

    Science.gov (United States)

    Walt, David R

    2010-01-01

    This tutorial review describes how fibre optic microarrays can be used to create a variety of sensing and measurement systems. This review covers the basics of optical fibres and arrays, the different microarray architectures, and describes a multitude of applications. Such arrays enable multiplexed sensing for a variety of analytes including nucleic acids, vapours, and biomolecules. Polymer-coated fibre arrays can be used for measuring microscopic chemical phenomena, such as corrosion and localized release of biochemicals from cells. In addition, these microarrays can serve as a substrate for fundamental studies of single molecules and single cells. The review covers topics of interest to chemists, biologists, materials scientists, and engineers.

  11. Multiplex Detection and Genotyping of Point Mutations Involved in Charcot-Marie-Tooth Disease Using a Hairpin Microarray-Based Assay

    Directory of Open Access Journals (Sweden)

    Yasser Baaj

    2009-01-01

    Full Text Available We previously developed a highly specific method for detecting SNPs with a microarray-based system using stem-loop probes. In this paper we demonstrate that coupling a multiplexing procedure with our microarray method is possible for the simultaneous detection and genotyping of four point mutations, in three different genes, involved in Charcot-Marie-Tooth disease. DNA from healthy individuals and patients was amplified and labeled with Cy3 by multiplex PCR, and hybridized to microarrays. Spot signal intensities were 18 to 74 times greater for perfect matches than for mismatched target sequences differing by a single nucleotide (discrimination ratio) for “homozygous” DNA from healthy individuals. “Heterozygous” mutant DNA samples gave signal intensity ratios close to 1 at the positions of the mutations as expected. Genotyping by this method was therefore reliable. This system now combines the principle of highly specific genotyping based on stem-loop structure probes with the advantages of multiplex analysis.

  12. Rule-based decision making model

    International Nuclear Information System (INIS)

    Sirola, Miki

    1998-01-01

    A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on e.g. value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is received. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
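
    The Bayesian-updating step mentioned above can be shown in a minimal sketch; the prior availability and the diagnostic-test error rates below are invented for illustration:

    ```python
    def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
        """Recompute P(component available | test result) via Bayes' theorem."""
        p_evidence = (p_evidence_given_h * prior
                      + p_evidence_given_not_h * (1.0 - prior))
        return p_evidence_given_h * prior / p_evidence

    # Prior availability 0.95; the test reports "OK" with probability 0.99
    # when the component is available and 0.20 when it is not (assumed rates).
    posterior = bayes_update(0.95, 0.99, 0.20)
    print(round(posterior, 4))  # 0.9895
    ```

    Each new observation can be folded in the same way, using the previous posterior as the next prior.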

  13. Design of a new therapy for patients with chronic kidney disease: use of microarrays for selective hemoadsorption of uremic wastes and toxins to improve homeostasis

    Directory of Open Access Journals (Sweden)

    Shahidi Bonjar MR

    2015-01-01

    Full Text Available Mohammad Rashid Shahidi Bonjar,1 Leyla Shahidi Bonjar2 1School of Dentistry, Kerman University of Medical Sciences, Kerman, Iran; 2Department of Pharmacology, College of Pharmacy, Kerman University of Medical Sciences, Kerman, Iran Abstract: The hypothesis proposed here would provide near-optimum homeostasis for patients with chronic kidney disease (CKD) without the need for hemodialysis. This strategy has not been described previously in the scientific literature. It involves a targeted therapy that may prevent progression of the disease and help to improve the well-being of CKD patients. It proposes a nanotechnological device, ie, a microarray-oriented homeostasis provider (MOHP), to improve homeostasis in CKD patients. MOHP would be an auxiliary kidney aid, and would improve the filtration functions that impaired kidneys cannot perform on their own. MOHP is composed of two main computer-oriented components, ie, a quantitative microarray detector (QMD) and a homeostasis-oriented microarray column (HOMC). QMD detects and HOMC selectively removes defined quantities of uremic wastes, toxins and any other metabolites for which it is programmed. The QMD and HOMC would accomplish this with the help of a peristaltic blood pump that would circulate blood aseptically in an extracorporeal closed circuit. During the passage of blood through the QMD, this microarray detector would quantitatively monitor all of the blood compounds that accumulate in the blood of a patient with impaired glomerular filtration, including small-sized, middle-sized and large-sized molecules. The electronic information collected by QMD would be electronically transmitted to the HOMC, which would adjust the molecules to the concentrations for which it is electronically programmed and/or which it receives from the QMD. This process of monitoring and removal of waste continues until the programmed homeostasis criteria are reached. Like a conventional kidney machine, MOHP can be used in hospitals and

  14. Prediction of transcriptional regulatory elements for plant hormone responses based on microarray data

    Directory of Open Access Journals (Sweden)

    Yamaguchi-Shinozaki Kazuko

    2011-02-01

    Full Text Available Abstract Background Phytohormones organize plant development and environmental adaptation through cell-to-cell signal transduction, and their action involves transcriptional activation. Recent international efforts to establish and maintain public databases of Arabidopsis microarray data have enabled the utilization of these data in the analysis of various phytohormone responses, providing genome-wide identification of promoters targeted by phytohormones. Results We utilized such microarray data for prediction of cis-regulatory elements with an octamer-based approach. Our test prediction on the drought-responsive RD29A promoter, with the aid of microarray data for responses to drought, ABA and overexpression of DREB1A, a key regulator of cold and drought response, agreed well with the experimentally identified regulatory elements. Following this success, we expanded the prediction to various phytohormone responses, including those for abscisic acid, auxin, cytokinin, ethylene, brassinosteroid, jasmonic acid, and salicylic acid, as well as for hydrogen peroxide, drought and DREB1A overexpression. In total, 622 promoters activated by phytohormones were subjected to the prediction. In addition, we have assigned putative functions to 53 octamers of the Regulatory Element Group (REG) that have been extracted as position-dependent cis-regulatory elements based on their preferential appearance in the promoter region. Conclusions Our prediction of Arabidopsis cis-regulatory elements for phytohormone responses provides guidance for experimental analysis of promoters to reveal the basis of the transcriptional network of phytohormone responses.

  15. GeneRank: Using search engine technology for the analysis of microarray experiments

    Directory of Open Access Journals (Sweden)

    Breitling Rainer

    2005-09-01

    Full Text Available Abstract Background Interpretation of simple microarray experiments is usually based on the fold-change of gene expression between a reference and a "treated" sample where the treatment can be of many types from drug exposure to genetic variation. Interpretation of the results usually combines lists of differentially expressed genes with previous knowledge about their biological function. Here we evaluate a method – based on the PageRank algorithm employed by the popular search engine Google – that tries to automate some of this procedure to generate prioritized gene lists by exploiting biological background information. Results GeneRank is an intuitive modification of PageRank that maintains many of its mathematical properties. It combines gene expression information with a network structure derived from gene annotations (gene ontologies) or expression profile correlations. Using both simulated and real data we find that the algorithm offers an improved ranking of genes compared to pure expression change rankings. Conclusion Our modification of the PageRank algorithm provides an alternative method of evaluating microarray experimental results which combines prior knowledge about the underlying network. GeneRank offers an improvement compared to assessing the importance of a gene based on its experimentally observed fold-change alone and may be used as a basis for further analytical developments.

  16. GeneRank: using search engine technology for the analysis of microarray experiments.

    Science.gov (United States)

    Morrison, Julie L; Breitling, Rainer; Higham, Desmond J; Gilbert, David R

    2005-09-21

    Interpretation of simple microarray experiments is usually based on the fold-change of gene expression between a reference and a "treated" sample where the treatment can be of many types from drug exposure to genetic variation. Interpretation of the results usually combines lists of differentially expressed genes with previous knowledge about their biological function. Here we evaluate a method--based on the PageRank algorithm employed by the popular search engine Google--that tries to automate some of this procedure to generate prioritized gene lists by exploiting biological background information. GeneRank is an intuitive modification of PageRank that maintains many of its mathematical properties. It combines gene expression information with a network structure derived from gene annotations (gene ontologies) or expression profile correlations. Using both simulated and real data we find that the algorithm offers an improved ranking of genes compared to pure expression change rankings. Our modification of the PageRank algorithm provides an alternative method of evaluating microarray experimental results which combines prior knowledge about the underlying network. GeneRank offers an improvement compared to assessing the importance of a gene based on its experimentally observed fold-change alone and may be used as a basis for further analytical developments.
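
    The core of GeneRank can be sketched as a damped, PageRank-style iteration whose personalization vector is the normalized absolute expression change, with links taken from the annotation-derived network. The 4-gene network and fold-changes below are invented for illustration:

    ```python
    import numpy as np

    def generank(W, expr_change, d=0.5, n_iter=200):
        """Iterate r = (1-d)*ex + d * W D^-1 r to a fixed point."""
        ex = np.abs(expr_change).astype(float)
        ex /= ex.sum()
        deg = np.maximum(W.sum(axis=0), 1.0)   # column degrees, guard zeros
        r = ex.copy()
        for _ in range(n_iter):
            r = (1.0 - d) * ex + d * (W @ (r / deg))
        return r

    W = np.array([[0, 1, 1, 0],                # symmetric adjacency from
                  [1, 0, 1, 0],                # (hypothetical) annotations
                  [1, 1, 0, 0],
                  [0, 0, 0, 0]], dtype=float)
    fc = np.array([0.1, 0.2, 0.1, 3.0])        # gene 3: large change, isolated
    r = generank(W, fc, d=0.5)
    print(int(np.argmax(r)))  # 3
    ```

    With `d = 0` the ranking reduces to pure fold-change; larger `d` shifts weight toward genes that are well connected in the network.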

  17. Hybridization chain reaction amplification for highly sensitive fluorescence detection of DNA with dextran coated microarrays.

    Science.gov (United States)

    Chao, Jie; Li, Zhenhua; Li, Jing; Peng, Hongzhen; Su, Shao; Li, Qian; Zhu, Changfeng; Zuo, Xiaolei; Song, Shiping; Wang, Lianhui; Wang, Lihua

    2016-07-15

    Microarrays of biomolecules hold great promise in the fields of genomics, proteomics, and clinical assays on account of their remarkably parallel and high-throughput assay capability. However, the fluorescence detection used in most conventional DNA microarrays is still limited by sensitivity. In this study, we have demonstrated a novel universal and highly sensitive platform for fluorescent detection of sequence specific DNA at the femtomolar level by combining dextran-coated microarrays with hybridization chain reaction (HCR) signal amplification. Three-dimensional dextran matrix was covalently coated on a glass surface as the scaffold to immobilize DNA recognition probes to increase the surface binding capacity and accessibility. DNA nanowire tentacles were formed on the matrix surface for efficient signal amplification by capturing multiple fluorescent molecules in a highly ordered way. By quantifying microscopic fluorescent signals, the synergetic effects of dextran and HCR greatly improved the sensitivity of DNA microarrays, with a detection limit of 10 fM (1×10^5 molecules). This detection assay could recognize a one-base mismatch, with the fluorescence signal dropping to ~20%. This cost-effective microarray platform also worked well with samples in serum and thus shows great potential for clinical diagnosis. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Design of a new therapy for patients with chronic kidney disease: use of microarrays for selective hemoadsorption of uremic wastes and toxins to improve homeostasis.

    Science.gov (United States)

    Shahidi Bonjar, Mohammad Rashid; Shahidi Bonjar, Leyla

    2015-01-01

    The hypothesis proposed here would provide near-optimum homeostasis for patients with chronic kidney disease (CKD) without the need for hemodialysis. This strategy has not been described previously in the scientific literature. It involves a targeted therapy that may prevent progression of the disease and help to improve the well-being of CKD patients. It proposes a nanotechnological device, ie, a microarray-oriented homeostasis provider (MOHP), to improve homeostasis in CKD patients. MOHP would be an auxiliary kidney aid, and would improve the filtration functions that impaired kidneys cannot perform on their own. MOHP is composed of two main computer-oriented components, ie, a quantitative microarray detector (QMD) and a homeostasis-oriented microarray column (HOMC). QMD detects and HOMC selectively removes defined quantities of uremic wastes, toxins and any other metabolites for which they are programmed. The QMD and HOMC would accomplish this with the help of a peristaltic blood pump that would circulate blood aseptically in an extracorporeal closed circuit. During the passage of blood through the QMD, this microarray detector would quantitatively monitor all of the blood compounds that accumulate in the blood of a patient with impaired glomerular filtration, including small-sized, middle-sized and large-sized molecules. The electronic information collected by QMD would be electronically transmitted to the HOMC, which would adjust the molecules to the concentrations for which it is electronically programmed and/or which it receives from the QMD. This process of monitoring and removal of waste continues until the programmed homeostasis criteria are reached. Like a conventional kidney machine, MOHP can be used in hospitals and homes under the supervision of a trained technician. 
The main advantages of this treatment would include improved homeostasis, a reduced likelihood of side effects and of the morbidity resulting from CKD, slower progression of kidney impairment, prevention of

  19. Automation of information decision support to improve e-learning resources quality

    Directory of Open Access Journals (Sweden)

    A.L. Danchenko

    2013-06-01

    Full Text Available Purpose. With the active development of e-learning, the quality of e-learning resources is critically important. Ensuring high quality under the conditions of mass higher education and rapid obsolescence of information requires automated decision support, achieved here through the development of a decision support system. Methodology. The problem is addressed with methods of artificial intelligence. A knowledge base for the decision support system, built on a frame model of knowledge representation, and production rules for inference are developed. Findings. Based on an analysis of life-cycle processes and of the quality requirements for e-learning resources, the information model of the knowledge base structure, the inference rules for automatic generation of recommendations, and a software implementation are developed. Practical value. The basic quality requirements are established to be performance, validity, reliability and manufacturability. Applying the software implementation of the decision support system to the courses studied yields a measurable gain in quality according to the composite quality criteria. The information structure of the knowledge base and the inference rules can be used by methodologists and content developers of learning systems.

  20. Statistical Redundancy Testing for Improved Gene Selection in Cancer Classification Using Microarray Data

    Directory of Open Access Journals (Sweden)

    J. Sunil Rao

    2007-01-01

    Full Text Available In gene selection for cancer classification using microarray data, we define an eigenvalue-ratio statistic to measure a gene’s contribution to the joint discriminability when this gene is included into a set of genes. Based on this eigenvalue-ratio statistic, we define a novel hypothesis test for gene statistical redundancy and propose two gene selection methods. Simulation studies illustrate the agreement between statistical redundancy testing and gene selection methods. Real data examples show that the proposed gene selection methods can select a compact gene subset which can not only be used to build high-quality cancer classifiers but also shows biological relevance.
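
    The abstract does not reproduce the statistic's formula, so the following is only a loose analogue of the idea, not the authors' definition: score a candidate gene by the ratio of the leading eigenvalue of the (regularized) within-class-whitened between-class scatter computed with and without that gene. All data and constants below are synthetic.

    ```python
    import numpy as np

    def leading_disc_eigenvalue(X, y):
        """Largest eigenvalue of Sw^-1 Sb: a crude proxy for the joint
        discriminability of the gene set in X (Sw regularized)."""
        p = X.shape[1]
        mu = X.mean(axis=0)
        Sw = 1e-6 * np.eye(p)                      # within-class scatter
        Sb = np.zeros((p, p))                      # between-class scatter
        for c in np.unique(y):
            Xc = X[y == c]
            d = (Xc.mean(axis=0) - mu)[:, None]
            Sb += len(Xc) * (d @ d.T)
            Xc0 = Xc - Xc.mean(axis=0)
            Sw += Xc0.T @ Xc0
        return float(np.real(np.linalg.eigvals(np.linalg.solve(Sw, Sb))).max())

    rng = np.random.default_rng(1)
    y = np.repeat([0, 1], 20)
    informative = rng.normal(size=40) + 1.5 * y            # shifts with class
    redundant = informative + 0.05 * rng.normal(size=40)   # near-duplicate gene
    base = informative[:, None]
    both = np.column_stack([informative, redundant])
    ratio = leading_disc_eigenvalue(both, y) / leading_disc_eigenvalue(base, y)
    print(round(ratio, 3))  # ratio >= 1; near 1 means little marginal gain
    ```

    A ratio close to 1 suggests the candidate gene is statistically redundant given the existing set, which is the kind of evidence a redundancy test would formalize.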

  1. Cultural targeting and tailoring of shared decision making technology: a theoretical framework for improving the effectiveness of patient decision aids in culturally diverse groups.

    Science.gov (United States)

    Alden, Dana L; Friend, John; Schapira, Marilyn; Stiggelbout, Anne

    2014-03-01

    Patient decision aids are known to positively impact outcomes critical to shared decision making (SDM), such as gist knowledge and decision preparedness. However, research on the potential improvement of these and other important outcomes through cultural targeting and tailoring of decision aids is very limited. This is the case despite extensive evidence supporting use of cultural targeting and tailoring to improve the effectiveness of health communications. Building on prominent psychological theory, we propose a two-stage framework incorporating cultural concepts into the design process for screening and treatment decision aids. The first phase recommends use of cultural constructs, such as collectivism and individualism, to differentially target patients whose cultures are known to vary on these dimensions. Decision aid targeting is operationalized through use of symbols and values that appeal to members of the given culture. Content dimensions within decision aids that appear particularly appropriate for targeting include surface level visual characteristics, language, beliefs, attitudes and values. The second phase of the framework is based on evidence that individuals vary in terms of how strongly cultural norms influence their approach to problem solving and decision making. In particular, the framework hypothesizes that differences in terms of access to cultural mindsets (e.g., access to interdependent versus independent self) can be measured up front and used to tailor decision aids. Thus, the second phase in the framework emphasizes the importance of not only targeting decision aid content, but also tailoring the information to the individual based on measurement of how strongly he/she is connected to dominant cultural mindsets. 
Overall, the framework provides a theory-based guide for researchers and practitioners who are interested in using cultural targeting and tailoring to develop and test decision aids that move beyond a "one-size fits all" approach

  2. Tumour auto-antibody screening: performance of protein microarrays using SEREX derived antigens

    International Nuclear Information System (INIS)

    Stempfer, René; Weinhäusel, Andreas; Syed, Parvez; Vierlinger, Klemens; Pichler, Rudolf; Meese, Eckart; Leidinger, Petra; Ludwig, Nicole; Kriegner, Albert; Nöhammer, Christa

    2010-01-01

    The simplicity and potential of minimally invasive testing using serum from patients make auto-antibody based biomarkers a very promising tool for use in diagnostics of cancer and auto-immune disease. Although several methods exist for elucidating candidate-protein markers, immobilizing these onto membranes and generating so-called macroarrays is of limited use for marker validation. Especially when several hundred samples have to be analysed, microarrays could serve as a good alternative, since processing macro membranes is cumbersome and reproducibility of results is moderate. Candidate markers identified by SEREX (serological identification of antigens by recombinant expression cloning) screenings of brain and lung tumour were used for macroarray and microarray production. For microarray production recombinant proteins were expressed in E. coli by autoinduction and purified His-tag (histidine-tagged) proteins were then used for the production of protein microarrays. Protein arrays were hybridized with the serum samples from brain and lung tumour patients. Methods for the generation of microarrays were successfully established when using antigens derived from membrane-based selection. Signal patterns obtained by microarray analysis of brain and lung tumour patients' sera were highly reproducible (R = 0.92-0.96). This provides the technical foundation for diagnostic applications on the basis of auto-antibody patterns. In this limited test set, the assay provided high reproducibility and a broad dynamic range to classify all brain and lung samples correctly. Protein microarray is an efficient means for auto-antibody-based detection when using SEREX-derived clones expressing antigenic proteins. Protein microarrays are preferred to macroarrays due to the easier handling and the high reproducibility of auto-antibody testing. Especially when using only a few microliters of patient samples, protein microarrays are ideally suited for validation of auto

  3. Advanced spot quality analysis in two-colour microarray experiments

    Directory of Open Access Journals (Sweden)

    Vetter Guillaume

    2008-09-01

    Full Text Available Abstract Background Image analysis of microarrays and, in particular, spot quantification and spot quality control, is one of the most important steps in statistical analysis of microarray data. Recent methods of spot quality control are still at an early stage of development, often leading to underestimation of true positive microarray features and, consequently, to loss of important biological information. Therefore, improving and standardizing the statistical approaches of spot quality control are essential to facilitate the overall analysis of microarray data and subsequent extraction of biological information. Findings We evaluated the performance of two image analysis packages, MAIA and GenePix (GP), using two complementary experimental approaches with a focus on the statistical analysis of spot quality factors. First, we developed control microarrays with a priori known fluorescence ratios to verify the accuracy and precision of the ratio estimation of signal intensities. Next, we developed advanced semi-automatic protocols of spot quality evaluation in MAIA and GP and compared their performance with the spot quality filtering facilities available in GP. We evaluated these algorithms for standardised spot quality analysis in a whole-genome microarray experiment assessing well-characterised transcriptional modifications induced by the transcription regulator SNAI1. Using a set of RT-PCR or qRT-PCR validated microarray data, we found that the semi-automatic protocol of spot quality control we developed with MAIA allowed the recovery of approximately 13% more spots and 38% more differentially expressed genes (at FDR = 5%) than GP with default spot filtering conditions. Conclusion Careful control of spot quality characteristics with advanced spot quality evaluation can significantly increase the amount of confident and accurate data resulting in more meaningful biological conclusions.

  4. MICROARRAY IMAGE GRIDDING USING GRID LINE REFINEMENT TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-05-01

    Full Text Available An important stage in microarray image analysis is gridding. Microarray image gridding is done to locate subarrays in a microarray image and find the co-ordinates of spots within each subarray. For accurate identification of spots, most of the proposed gridding methods require human intervention. In this paper a fully automatic gridding method is presented, which enhances spot intensity in the preprocessing step using a histogram-based threshold method. The gridding step finds the co-ordinates of spots from the horizontal and vertical profiles of the image. To correct errors due to grid line placement, a grid line refinement technique is proposed. The algorithm is applied to different image databases and results are compared based on spot detection accuracy and time. An average spot detection accuracy of 95.06% demonstrates the proposed method’s flexibility and accuracy in finding the spot co-ordinates for different database images.
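
    The profile-based gridding idea can be sketched on a synthetic image: sum intensities along each axis and take the strongest local maxima of the two profiles as the spot row and column coordinates. The grid geometry, spot model and noise level below are invented, and the paper's refinement stage is omitted:

    ```python
    import numpy as np

    # Synthetic "microarray": a 4x4 grid of Gaussian spots plus noise.
    rng = np.random.default_rng(2)
    img = rng.normal(0.0, 0.05, size=(100, 100))
    centers = [15, 40, 65, 90]
    yy, xx = np.mgrid[0:100, 0:100]
    for cy in centers:
        for cx in centers:
            img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)

    def profile_peaks(profile, n_peaks, min_sep=10):
        """Greedy pick of the n strongest maxima, suppressing neighbours."""
        p = profile.copy()
        peaks = []
        for _ in range(n_peaks):
            i = int(np.argmax(p))
            peaks.append(i)
            p[max(0, i - min_sep):min(len(p), i + min_sep)] = -np.inf
        return sorted(peaks)

    rows = profile_peaks(img.sum(axis=1), 4)   # horizontal profile -> rows
    cols = profile_peaks(img.sum(axis=0), 4)   # vertical profile -> columns
    print(rows, cols)
    ```

    On this synthetic image the recovered coordinates land within a pixel or two of the true centres [15, 40, 65, 90]; a refinement step such as the one proposed in the paper would then correct residual grid-line placement errors.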

  5. A molecular beacon microarray based on a quantum dot label for detecting single nucleotide polymorphisms.

    Science.gov (United States)

    Guo, Qingsheng; Bai, Zhixiong; Liu, Yuqian; Sun, Qingjiang

    2016-03-15

    In this work, we report the application of streptavidin-coated quantum dot (strAV-QD) in molecular beacon (MB) microarray assays by using the strAV-QD to label the immobilized MB, avoiding target labeling and meanwhile obviating the use of amplification. The MBs are stem-loop structured oligodeoxynucleotides, modified with a thiol and a biotin at two terminals of the stem. With the strAV-QD labeling an "opened" MB rather than a "closed" MB via streptavidin-biotin reaction, a sensitive and specific detection of label-free target DNA sequence is demonstrated by the MB microarray, with a signal-to-background ratio of 8. The immobilized MBs can be perfectly regenerated, allowing the reuse of the microarray. The MB microarray also is able to detect single nucleotide polymorphisms, exhibiting genotype-dependent fluorescence signals. It is demonstrated that the MB microarray can perform as a 4-to-2 encoder, compressing the genotype information into two outputs. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Network Expansion and Pathway Enrichment Analysis towards Biologically Significant Findings from Microarrays

    Directory of Open Access Journals (Sweden)

    Wu Xiaogang

    2012-06-01

    Full Text Available In many cases, crucial genes show relatively slight changes between groups of samples (e.g. normal vs. disease), and many genes selected from microarray differential analysis by statistical measurement of expression level are poorly annotated and lack biological significance. In this paper, we present an innovative approach - network expansion and pathway enrichment analysis (NEPEA) - for integrative microarray analysis. We assume that organized knowledge will help microarray data analysis in significant ways, and that this organized knowledge can be represented as molecular interaction networks or biological pathways. Based on this hypothesis, we develop the NEPEA framework based on network expansion from the human annotated and predicted protein interaction (HAPPI) database, and pathway enrichment from the human pathway database (HPD). We use a recently-published microarray dataset (GSE24215) related to insulin resistance and type 2 diabetes (T2D) as a case study, since this study provided thorough experimental validation for both genes and pathways identified computationally from classical microarray analysis and pathway analysis. We perform our NEPEA analysis for this dataset based on the results from the classical microarray analysis to identify biologically significant genes and pathways. Our findings are largely consistent with the original findings and receive additional support from other published studies.
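
    The pathway-enrichment component of such an analysis usually reduces to a one-sided hypergeometric test per pathway; a minimal standard-library sketch with invented counts:

    ```python
    from math import comb

    def enrichment_p(N, K, n, k):
        """P(X >= k), X ~ Hypergeometric(N, K, n): chance of drawing at
        least k pathway genes among n selected, given K pathway genes of N."""
        total = sum(comb(K, i) * comb(N - K, n - i)
                    for i in range(k, min(K, n) + 1))
        return total / comb(N, n)

    # 10000 genes measured, 100 in the pathway, 200 differentially
    # expressed, 8 of those in the pathway (all numbers hypothetical).
    p = enrichment_p(10000, 100, 200, 8)
    print(p < 0.01)  # True: 8 hits is well above the ~2 expected by chance
    ```

    In practice the resulting per-pathway p-values would also be corrected for multiple testing across all pathways tested.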

  7. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    Science.gov (United States)

    Qin, Jiangyi; Huang, Zhiping; Liu, Chunwu; Su, Shaojing; Zhou, Jing

    2015-01-01

    A novel blind recognition algorithm of frame synchronization words is proposed to recognize the frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method of frame synchronization words based on hard decisions is derived in detail, and the standards of parameter recognition are given. Compared with blind recognition based on hard decisions, utilizing soft decisions can improve the accuracy of blind recognition. Therefore, combining with the characteristics of Quadrature Phase Shift Keying (QPSK) signals, an improved blind recognition algorithm based on soft decisions is proposed. The improved algorithm can also be extended to other signal modulation forms. The complete blind recognition steps of the hard-decision algorithm and the soft-decision algorithm are then given in detail. Finally, the simulation results show that both the hard-decision algorithm and the soft-decision algorithm can blindly recognize the parameters of frame synchronization words. Moreover, the improved algorithm clearly enhances the accuracy of blind recognition.
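
    The recognition principle can be illustrated deterministically: slide the candidate ±1 synchronization word along the symbol stream and pick the offset with the largest correlation; a soft-decision variant correlates the unquantized received samples instead of hard sign decisions. The stream, the Barker-13 word and the embedding offset below are invented for the sketch:

    ```python
    import numpy as np

    sync = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])  # Barker-13

    # Toy stream: alternating symbols with the sync word embedded at offset 20.
    data = np.array([(-1) ** i for i in range(60)])
    data[20:33] = sync

    def correlate(stream, pattern):
        """Sliding dot product of the stream with the candidate sync word."""
        return np.array([stream[i:i + len(pattern)] @ pattern
                         for i in range(len(stream) - len(pattern) + 1)])

    scores = correlate(data, sync)
    print(int(np.argmax(scores)), int(scores.max()))  # 20 13

    # Soft-decision variant: correlate analog samples (here simply attenuated)
    # instead of their signs, which weights reliable symbols more heavily.
    received = data * 0.9
    print(int(np.argmax(correlate(received, sync))))  # 20
    ```

    With noisy samples, the soft correlation keeps each symbol's reliability information that hard slicing discards, which is the source of the accuracy gain the abstract reports.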

  8. Emerging use of gene expression microarrays in plant physiology.

    Science.gov (United States)

    Wullschleger, Stan D; Difazio, Stephen P

    2003-01-01

    Microarrays have become an important technology for the global analysis of gene expression in humans, animals, plants, and microbes. Implemented in the context of a well-designed experiment, cDNA and oligonucleotide arrays can provide high-throughput, simultaneous analysis of transcript abundance for hundreds, if not thousands, of genes. However, despite widespread acceptance, the use of microarrays as a tool to better understand processes of interest to the plant physiologist is still being explored. To help illustrate current uses of microarrays in the plant sciences, several case studies that we believe demonstrate the emerging application of gene expression arrays in plant physiology were selected from among the many posters and presentations at the 2003 Plant and Animal Genome XI Conference. Based on this survey, microarrays are being used to assess gene expression in plants exposed to the experimental manipulation of air temperature, soil water content and aluminium concentration in the root zone. Analysis often includes characterizing transcript profiles for multiple post-treatment sampling periods and categorizing genes with common patterns of response using hierarchical clustering techniques. In addition, microarrays are also providing insights into developmental changes in gene expression associated with fibre and root elongation in cotton and maize, respectively. Technical and analytical limitations of microarrays are discussed and projects attempting to advance areas of microarray design and data analysis are highlighted. Finally, although much work remains, we conclude that microarrays are a valuable tool for the plant physiologist interested in the characterization and identification of individual genes and gene families with potential application in the fields of agriculture, horticulture and forestry.

  9. Seeded Bayesian Networks: Constructing genetic networks from microarray data

    Directory of Open Access Journals (Sweden)

    Quackenbush John

    2008-07-01

    Full Text Available Abstract Background DNA microarrays and other genomics-inspired technologies provide large datasets that often include hidden patterns of correlation between genes reflecting the complex processes that underlie cellular metabolism and physiology. The challenge in analyzing large-scale expression data has been to extract biologically meaningful inferences regarding these processes – often represented as networks – in an environment where the datasets are often imperfect and biological noise can obscure the actual signal. Although many techniques have been developed in an attempt to address these issues, to date their ability to extract meaningful and predictive network relationships has been limited. Here we describe a method that draws on prior information about gene-gene interactions to infer biologically relevant pathways from microarray data. Our approach consists of using preliminary networks derived from the literature and/or protein-protein interaction data as seeds for a Bayesian network analysis of microarray results. Results Through a bootstrap analysis of gene expression data derived from a number of leukemia studies, we demonstrate that seeded Bayesian Networks have the ability to identify high-confidence gene-gene interactions which can then be validated by comparison to other sources of pathway data. Conclusion The use of network seeds greatly improves the ability of Bayesian Network analysis to learn gene interaction networks from gene expression data. We demonstrate that the use of seeds derived from the biomedical literature or high-throughput protein-protein interaction data, or the combination, provides improvement over a standard Bayesian Network analysis, allowing networks involving dynamic processes to be deduced from the static snapshots of biological systems that represent the most common source of microarray data. Software implementing these methods has been included in the widely used TM4 microarray analysis package.

  10. Identifying Fishes through DNA Barcodes and Microarrays.

    Directory of Open Access Journals (Sweden)

    Marc Kochzius

    2010-09-01

    Full Text Available International fish trade reached an import value of 62.8 billion Euro in 2006, of which 44.6% are covered by the European Union. Species identification is a key problem throughout the life cycle of fishes: from eggs and larvae to adults in fisheries research and control, as well as processed fish products in consumer protection. This study aims to evaluate the applicability of the three mitochondrial genes 16S rRNA (16S), cytochrome b (cyt b), and cytochrome oxidase subunit I (COI) for the identification of 50 European marine fish species by combining techniques of "DNA barcoding" and microarrays. In a DNA barcoding approach, Neighbour Joining (NJ) phylogenetic trees of 369 16S, 212 cyt b, and 447 COI sequences indicated that cyt b and COI are suitable for unambiguous identification, whereas 16S failed to discriminate closely related flatfish and gurnard species. In the course of probe design for DNA microarray development, each of the markers yielded a high number of potentially species-specific probes in silico, although many of them were rejected based on microarray hybridisation experiments. None of the markers provided probes to discriminate the sibling flatfish and gurnard species. However, since 16S probes were less negatively influenced by the "position of label" effect and showed the lowest rejection rate and the highest mean signal intensity, 16S is more suitable for DNA microarray probe design than cyt b and COI. The large portion of rejected COI probes after hybridisation experiments (>90%) renders the DNA barcoding marker rather unsuitable for this high-throughput technology. Based on these data, a DNA microarray containing 64 functional oligonucleotide probes for the identification of 30 out of the 50 fish species investigated was developed. It represents the next step towards an automated and easy-to-handle method to identify fish, ichthyoplankton, and fish products.

  11. Microarray-based genotyping of Salmonella: Inter-laboratory evaluation of reproducibility and standardization potential

    DEFF Research Database (Denmark)

    Grønlund, Hugo Ahlm; Riber, Leise; Vigre, Håkan

    2011-01-01

    Bacterial food-borne infections in humans caused by Salmonella spp. are considered a crucial food safety issue. Therefore, it is important for the risk assessments of Salmonella to consider the genomic variation among different isolates in order to control pathogen-induced infections. Microarray...... critical methodology parameters that differed between the two labs were identified. These related to printing facilities, choice of hybridization buffer, wash buffers used following the hybridization and choice of procedure for purifying genomic DNA. Critical parameters were randomized in a four...... DNA and different wash buffers. However, less agreement (Kappa=0.2–0.6) between microarray results was observed when using different hybridization buffers, indicating this parameter as being highly critical when transferring a standard microarray assay between laboratories. In conclusion, this study indicates...

  12. Fast gene ontology based clustering for microarray experiments.

    Science.gov (United States)

    Ovaska, Kristian; Laakso, Marko; Hautaniemi, Sampsa

    2008-11-21

    Analysis of a microarray experiment often results in a list of hundreds of disease-associated genes. In order to suggest common biological processes and functions for these genes, Gene Ontology annotations with statistical testing are widely used. However, these analyses can produce a very large number of significantly altered biological processes. Thus, it is often challenging to interpret GO results and identify novel testable biological hypotheses. We present fast software for advanced gene annotation using semantic similarity for Gene Ontology terms combined with clustering and heat map visualisation. The methodology allows rapid identification of genes sharing the same Gene Ontology cluster. Our R-based open-source semantic similarity package has a speed advantage of over 2000-fold compared to existing implementations. From the resulting hierarchical clustering dendrogram, genes sharing a GO term can be identified, and their differences in gene expression patterns can be seen from the heat map. These methods facilitate advanced annotation of genes resulting from data analysis.
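    The core operation described above, grouping genes or GO terms by semantic similarity and then reading off clusters, can be sketched as a simple single-linkage grouping via union-find. The similarity function and term names below are toy stand-ins, not the package's actual Resnik/Lin-style measures over the GO graph:

```python
def cluster_terms(terms, similarity, threshold):
    """Single-linkage grouping: terms whose pairwise similarity meets
    the threshold end up in the same cluster (union-find)."""
    parent = {t: t for t in terms}

    def find(t):
        # Walk to the root representative, compressing the path as we go.
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t

    for a in terms:
        for b in terms:
            if a < b and similarity(a, b) >= threshold:
                parent[find(a)] = find(b)

    clusters = {}
    for t in terms:
        clusters.setdefault(find(t), set()).add(t)
    return list(clusters.values())

# Toy similarity: terms sharing a namespace prefix count as "similar".
sim = lambda a, b: 1.0 if a.split(":")[0] == b.split(":")[0] else 0.0
groups = cluster_terms(["BP:growth", "BP:division", "MF:binding"], sim, 0.5)
print(sorted(len(g) for g in groups))  # → [1, 2]
```

    A real implementation would replace the toy lambda with an information-content-based similarity and feed the resulting distance matrix to hierarchical clustering for the dendrogram.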

  13. Association Study between BDNF Gene Polymorphisms and Autism by Three-Dimensional Gel-Based Microarray

    Directory of Open Access Journals (Sweden)

    Zuhong Lu

    2009-06-01

    Full Text Available Single nucleotide polymorphisms (SNPs) are important markers which can be used in association studies searching for susceptibility genes of complex diseases. High-throughput methods are needed for SNP genotyping in a large number of samples. In this study, we applied a polyacrylamide gel-based microarray combined with dual-color hybridization to an association study of four BDNF polymorphisms with autism. All the SNPs in both patients and controls could be analyzed quickly and correctly. Among the four SNPs, only the C270T polymorphism showed significant differences in the frequency of the allele (χ2 = 7.809, p = 0.005) and genotype (χ2 = 7.800, p = 0.020). In the haplotype association analysis, there was a significant difference in global haplotype distribution between the groups (χ2 = 28.19, p = 3.44e-005). We suggest that BDNF has a possible role in the pathogenesis of autism. The study also shows that the polyacrylamide gel-based microarray combined with dual-color hybridization is a rapid, simple and high-throughput method for SNP genotyping, and can be used for association studies of susceptibility genes with disorders in large samples.
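    The allele and genotype comparisons reported above are standard Pearson chi-square tests on contingency tables. A minimal sketch, using hypothetical allele counts rather than the study's data:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # Expected count for each cell: row total * column total / grand total.
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            chi2 += (obs - expected[i][j]) ** 2 / expected[i][j]
    return chi2

# Hypothetical counts: rows = cases/controls, columns = allele C/T.
print(chi_square_2x2([[90, 30], [110, 10]]))  # → 12.0
```

    A p-value would then be read from the chi-square distribution with one degree of freedom.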

  14. A Discrete Wavelet Based Feature Extraction and Hybrid Classification Technique for Microarray Data Analysis

    Directory of Open Access Journals (Sweden)

    Jaison Bennet

    2014-01-01

    Full Text Available In the past, cancer classification by doctors and radiologists was based on morphological and clinical features and had limited diagnostic ability. The recent arrival of DNA microarray technology has enabled the concurrent monitoring of thousands of gene expressions on a single chip, which stimulates progress in cancer classification. In this paper, we propose a hybrid approach for microarray data classification based on k-nearest neighbor (KNN), naive Bayes, and support vector machine (SVM) classifiers. Feature selection prior to classification plays a vital role, and a feature selection technique which combines the discrete wavelet transform (DWT) and the moving window technique (MWT) is used. The performance of the proposed method is compared with that of the conventional classifiers: support vector machine, k-nearest neighbor, and naive Bayes. Experiments have been conducted on both real and benchmark datasets, and the results indicate that the ensemble approach produces higher classification accuracy than the conventional classifiers. This approach serves as an automated system for the classification of cancer and can be applied by doctors in real cases, which serves as a boon to the medical community. This work further reduces the misclassification of cancers, which is unacceptable in cancer detection.
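    As a rough illustration of wavelet-based feature reduction (the paper's exact DWT/MWT pipeline is not specified here), a single-level Haar transform followed by non-overlapping window averaging might look like this. Both the signal and the window width are hypothetical:

```python
import math

def haar_dwt(signal):
    """One level of the Haar wavelet transform: returns (approximation,
    detail) coefficients; the signal length must be even."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def moving_window_mean(values, width):
    """Collapse features further by averaging non-overlapping windows."""
    return [sum(values[i:i + width]) / len(values[i:i + width])
            for i in range(0, len(values), width)]

# Toy "expression profile": 8 values -> 4 Haar coefficients -> 2 features.
expr = [2.0, 4.0, 6.0, 8.0, 1.0, 3.0, 5.0, 7.0]
approx, _ = haar_dwt(expr)
features = moving_window_mean(approx, 2)
print(len(features))  # → 2
```

    The reduced feature vectors would then be fed to the KNN/naive Bayes/SVM ensemble for classification.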

  15. Improving family satisfaction and participation in decision making in an intensive care unit.

    Science.gov (United States)

    Huffines, Meredith; Johnson, Karen L; Smitz Naranjo, Linda L; Lissauer, Matthew E; Fishel, Marmie Ann-Michelle; D'Angelo Howes, Susan M; Pannullo, Diane; Ralls, Mindy; Smith, Ruth

    2013-10-01

    Background Survey data revealed that families of patients in a surgical intensive care unit were not satisfied with their participation in decision making or with how well the multidisciplinary team worked together. Objectives To develop and implement an evidence-based communication algorithm and evaluate its effect in improving satisfaction among patients' families. Methods A multidisciplinary team developed an algorithm that included bundles of communication interventions at 24, 72, and 96 hours after admission to the unit. The algorithm included clinical triggers, which if present escalated the algorithm. A pre-post design using process improvement methods was used to compare families' satisfaction scores before and after implementation of the algorithm. Results Satisfaction scores for participation in decision making (45% vs 68%; z = -2.62, P = .009) and how well the health care team worked together (64% vs 83%; z = -2.10, P = .04) improved significantly after implementation. Conclusions Use of an evidence-based structured communication algorithm may be a way to improve satisfaction of families of intensive care patients with their participation in decision making and their perception of how well the unit's team works together.

  16. Launching a virtual decision lab: development and field-testing of a web-based patient decision support research platform.

    Science.gov (United States)

    Hoffman, Aubri S; Llewellyn-Thomas, Hilary A; Tosteson, Anna N A; O'Connor, Annette M; Volk, Robert J; Tomek, Ivan M; Andrews, Steven B; Bartels, Stephen J

    2014-12-12

    Over 100 trials show that patient decision aids effectively improve patients' information comprehension and values-based decision making. However, gaps remain in our understanding of several fundamental and applied questions, particularly related to the design of interactive, personalized decision aids. This paper describes an interdisciplinary development process for, and early field testing of, a web-based patient decision support research platform, or virtual decision lab, to address these questions. An interdisciplinary stakeholder panel designed the web-based research platform with three components: a) an introduction to shared decision making, b) a web-based patient decision aid, and c) interactive data collection items. Iterative focus groups provided feedback on paper drafts and online prototypes. A field test assessed a) feasibility for using the research platform, in terms of recruitment, usage, and acceptability; and b) feasibility of using the web-based decision aid component, compared to performance of a videobooklet decision aid in clinical care. This interdisciplinary, theory-based, patient-centered design approach produced a prototype for field-testing in six months. Participants (n = 126) reported that: the decision aid component was easy to use (98%), information was clear (90%), the length was appropriate (100%), it was appropriately detailed (90%), and it held their interest (97%). They spent a mean of 36 minutes using the decision aid and 100% preferred using their home/library computer. Participants scored a mean of 75% correct on the Decision Quality, Knowledge Subscale, and 74 out of 100 on the Preparation for Decision Making Scale. Completing the web-based decision aid reduced mean Decisional Conflict scores from 31.1 to 19.5 (p development of a web-based patient decision support research platform that was feasible for use in research studies in terms of recruitment, acceptability, and usage. Within this platform, the web-based

  17. A Reliable and Distributed LIMS for Efficient Management of the Microarray Experiment Environment

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2007-03-01

    Full Text Available A microarray is a principal technology in molecular biology. It generates thousands of expressions of genotypes at once. Typically, a microarray experiment contains many kinds of information, such as gene names, sequences, expression profiles, scanned images, and annotation, so the organization and analysis of vast amounts of data are required. A microarray LIMS (Laboratory Information Management System) provides data management, search, and basic analysis. Recently, joint microarray research, for example on skeletal system diseases and anti-cancer medicines, has been widely conducted. Such research requires data sharing among laboratories within the joint research group. In this paper, we introduce a web-based microarray LIMS, SMILE (Small and solid MIcroarray Lims for Experimenters), especially for shared data management. The data sharing function of SMILE is based on Friend-to-Friend (F2F) networking, which in turn is based on anonymous P2P (Peer-to-Peer), in which people connect directly with their "friends". It only allows its friends to exchange data directly using IP addresses or digital signatures you trust. In SMILE, there are two types of friends: the "service provider", which provides data, and the "client", which is provided with data. So the service provider provides shared data only to its clients. SMILE provides useful functions for microarray experiments, such as variant data management, image analysis, normalization, system management, project schedule management, and shared data management. Moreover, it connects with two systems: ArrayMall for analyzing microarray images and GENAW for constructing a genetic network. SMILE is available at http://neobio.cs.pusan.ac.kr:8080/smile.

  18. Public Transportation Hub Location with Stochastic Demand: An Improved Approach Based on Multiple Attribute Group Decision-Making

    Directory of Open Access Journals (Sweden)

    Sen Liu

    2015-01-01

    Full Text Available Urban public transportation hubs are the key nodes of the public transportation system. The location of such hubs is a combinatorial problem. Many factors can affect the location decision, including both quantitative and qualitative factors; however, most current research focuses solely on either the quantitative or the qualitative factors. Little has been done to combine these two approaches. To fill this gap in the research, this paper proposes a novel approach to the public transportation hub location problem which takes both quantitative and qualitative factors into account. An improved multiple attribute group decision-making (MAGDM) method based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and deviation is proposed to convert the qualitative factors of each hub into quantitative evaluation values. A location model with stochastic passenger flows is then established based on the above evaluation values. Finally, stochastic programming theory is applied to solve the model and to determine the location result. A numerical study shows that this approach is applicable and effective.
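    A plain TOPSIS ranking, the building block the improved MAGDM method extends, can be sketched as follows. The hub candidates, criterion scores, and weights are hypothetical:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS. matrix[i][j] is alternative i's score
    on criterion j; benefit[j] is True for benefit criteria and False for
    cost criteria. Returns relative-closeness scores in [0, 1]."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weight.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal (best) and anti-ideal (worst) value per criterion.
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical hub candidates: passenger flow (benefit) vs. cost (cost).
scores = topsis([[800, 5.0], [650, 3.0], [900, 7.0]],
                weights=[0.6, 0.4], benefit=[True, False])
print(max(range(len(scores)), key=scores.__getitem__))  # → 1
```

    The improved method in the paper additionally aggregates multiple decision-makers' qualitative judgments (via deviation) before this ranking step.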

  19. The construction and use of bacterial DNA microarrays based on an optimized two-stage PCR strategy

    Directory of Open Access Journals (Sweden)

    Pesta David

    2003-06-01

    Full Text Available Abstract Background DNA microarrays are a powerful tool with important applications such as global gene expression profiling. Construction of bacterial DNA microarrays from genomic sequence data using a two-stage PCR amplification approach for the production of arrayed DNA is attractive because it allows, in principle, the continued re-amplification of DNA fragments and facilitates further utilization of the DNA fragments for additional uses (e.g., over-expression of protein). We describe the successful construction and use of DNA microarrays by the two-stage amplification approach and discuss the technical challenges that were met and resolved during the project. Results Chimeric primers that contained both gene-specific and shared, universal sequence allowed the two-stage amplification of the 3,168 genes identified on the genome of Synechocystis sp. PCC6803, an important prokaryotic model organism for the study of oxygenic photosynthesis. The gene-specific component of the primer was of variable length to maintain uniform annealing temperatures during the 1st round of PCR synthesis, and situated to preserve full-length ORFs. Genes were truncated at 2 kb for efficient amplification, so that about 92% of the PCR fragments were full-length genes. The two-stage amplification had the additional advantage of normalizing the yield of PCR products, and this improved the uniformity of DNA features robotically deposited onto the microarray surface. We also describe the techniques utilized to optimize hybridization conditions and the signal-to-noise ratio of the transcription profile. The inter-lab transportability was demonstrated by the virtually error-free amplification of the entire genome complement of 3,168 genes using the universal primers in partner labs. The printed slides have been successfully used to identify differentially expressed genes in response to a number of environmental conditions, including salt stress. Conclusions The technique detailed

  20. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    Directory of Open Access Journals (Sweden)

    Jiangyi Qin

    Full Text Available A novel blind recognition algorithm of frame synchronization words is proposed to recognize the frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method of frame synchronization words based on the hard decision is deduced in detail, and the standards of parameter recognition are given. Compared with blind recognition based on the hard decision, utilizing the soft decision can improve the accuracy of blind recognition. Therefore, combining this with the characteristics of Quadrature Phase Shift Keying (QPSK) signals, an improved blind recognition algorithm based on the soft decision is proposed; the improved algorithm can also be extended to other signal modulation forms. The complete blind recognition steps of the hard-decision algorithm and the soft-decision algorithm are then given in detail. Finally, the simulation results show that both the hard-decision algorithm and the soft-decision algorithm can blindly recognize the parameters of frame synchronization words. Moreover, the improved algorithm clearly enhances the accuracy of blind recognition.
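    The essential difference between hard- and soft-decision recognition is that the correlation is computed on the demodulator's soft outputs rather than on sliced bits, so confident symbols weigh more. A minimal sketch (not the paper's algorithm) of locating a candidate sync word this way, with hypothetical samples:

```python
def sync_correlate(soft, word):
    """Slide a candidate sync word over soft-decision samples and return
    (best_offset, best_score). Bits are mapped 0 -> +1, 1 -> -1, so a
    confident match yields a large positive correlation."""
    signs = [1 - 2 * b for b in word]
    best_offset, best_score = 0, float("-inf")
    for off in range(len(soft) - len(word) + 1):
        score = sum(s * x for s, x in zip(signs, soft[off:off + len(word)]))
        if score > best_score:
            best_offset, best_score = off, score
    return best_offset, best_score

word = [0, 1, 1, 0, 1]
# Hypothetical noisy soft samples; the pattern actually starts at offset 3.
soft = [0.2, -0.1, 0.3, 0.9, -1.1, -0.8, 1.0, -0.9, 0.1, 0.2]
print(sync_correlate(soft, word)[0])  # → 3
```

    A hard-decision variant would first threshold each sample to ±1, discarding the reliability information that makes the soft version more robust at low SNR.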

  1. Emerging Use of Gene Expression Microarrays in Plant Physiology

    Directory of Open Access Journals (Sweden)

    Stephen P. Difazio

    2006-04-01

    Full Text Available Microarrays have become an important technology for the global analysis of gene expression in humans, animals, plants, and microbes. Implemented in the context of a well-designed experiment, cDNA and oligonucleotide arrays can provide high-throughput, simultaneous analysis of transcript abundance for hundreds, if not thousands, of genes. However, despite widespread acceptance, the use of microarrays as a tool to better understand processes of interest to the plant physiologist is still being explored. To help illustrate current uses of microarrays in the plant sciences, several case studies that we believe demonstrate the emerging application of gene expression arrays in plant physiology were selected from among the many posters and presentations at the 2003 Plant and Animal Genome XI Conference. Based on this survey, microarrays are being used to assess gene expression in plants exposed to the experimental manipulation of air temperature, soil water content and aluminium concentration in the root zone. Analysis often includes characterizing transcript profiles for multiple post-treatment sampling periods and categorizing genes with common patterns of response using hierarchical clustering techniques. In addition, microarrays are also providing insights into developmental changes in gene expression associated with fibre and root elongation in cotton and maize, respectively. Technical and analytical limitations of microarrays are discussed and projects attempting to advance areas of microarray design and data analysis are highlighted. Finally, although much work remains, we conclude that microarrays are a valuable tool for the plant physiologist interested in the characterization and identification of individual genes and gene families with potential application in the fields of agriculture, horticulture and forestry.

  2. Identification of Biomarkers for Esophageal Squamous Cell Carcinoma Using Feature Selection and Decision Tree Methods

    Directory of Open Access Journals (Sweden)

    Chun-Wei Tung

    2013-01-01

    Full Text Available Esophageal squamous cell cancer (ESCC) is one of the most common fatal human cancers. The identification of biomarkers for early detection could be a promising strategy to decrease mortality. Previous studies utilized microarray techniques to identify more than one hundred genes; however, it is desirable to identify a small set of biomarkers for clinical use. This study proposes a sequential forward feature selection algorithm to design decision tree models for discriminating ESCC from normal tissues. Two potential biomarkers, RUVBL1 and CNIH, were identified and validated based on two publicly available microarray datasets. To test the discrimination ability of the two biomarkers, 17 pairs of expression profiles of ESCC and normal tissues from Taiwanese male patients were measured using microarray techniques. The classification accuracies of the two biomarkers in all three datasets were higher than 90%. Interpretable decision tree models were constructed to analyze the expression patterns of the two biomarkers. RUVBL1 was consistently overexpressed in all three datasets, although we found inconsistent CNIH expression, possibly affected by the diverse major risk factors for ESCC across different areas.

  3. Calling biomarkers in milk using a protein microarray on your smartphone

    NARCIS (Netherlands)

    Ludwig, S.K.J.; Tokarski, Christian; Lang, Stefan N.; Ginkel, Van L.A.; Zhu, Hongying; Ozcan, Aydogan; Nielen, M.W.F.

    2015-01-01

    Here we present the concept of a protein microarray-based fluorescence immunoassay for multiple biomarker detection in milk extracts by an ordinary smartphone. A multiplex immunoassay was designed on a microarray chip, having built-in positive and negative quality controls. After the immunoassay

  4. Detection of NASBA amplified bacterial tmRNA molecules on SLICSel designed microarray probes

    Directory of Open Access Journals (Sweden)

    Toome Kadri

    2011-02-01

    Full Text Available Abstract Background We present a comprehensive technological solution for bacterial diagnostics using tmRNA as a marker molecule. A robust probe design algorithm for a microbial detection microarray is implemented. The probes were evaluated for specificity and, combined with NASBA (Nucleic Acid Sequence Based Amplification), for sensitivity. Results We developed a new web-based program, SLICSel, for the design of hybridization probes, based on nearest-neighbor thermodynamic modeling. A SLICSel minimum binding energy difference criterion of 4 kcal/mol was sufficient for the design of Streptococcus pneumoniae tmRNA-specific microarray probes. With lower binding energy difference criteria, additional hybridization specificity tests on the microarray were needed to eliminate non-specific probes. Using SLICSel-designed microarray probes and NASBA we were able to detect S. pneumoniae tmRNA from a series of total RNA dilutions equivalent to the RNA content of 0.1-10 CFU. Conclusions The described technological solution, and both of its components, SLICSel and the NASBA-microarray technology, independently are applicable to many different areas of microbial diagnostics.
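    The minimum binding-energy-difference criterion can be illustrated with a simple filter over candidate probes: keep a probe only if its target duplex is at least 4 kcal/mol more stable than any non-target duplex. The ΔG values below are hypothetical, not SLICSel output:

```python
def specific_probes(probe_energies, min_gap=4.0):
    """Filter candidate probes by a minimum binding-energy difference.
    probe_energies maps probe -> (target_dG, [nontarget_dGs]); more
    negative dG means stronger binding, so a probe is kept when every
    non-target duplex is at least `min_gap` kcal/mol weaker."""
    kept = []
    for probe, (target, nontargets) in probe_energies.items():
        if all(nt - target >= min_gap for nt in nontargets):
            kept.append(probe)
    return kept

# Hypothetical dG values (kcal/mol) for two candidate probes.
candidates = {
    "probe_A": (-22.0, [-15.0, -14.5]),  # gaps 7.0 and 7.5 -> specific
    "probe_B": (-20.0, [-18.5, -12.0]),  # gap of only 1.5 -> rejected
}
print(specific_probes(candidates))  # → ['probe_A']
```

    In practice the ΔG values would come from nearest-neighbor thermodynamic models of the probe against target and non-target sequences.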

  5. Detection of NASBA amplified bacterial tmRNA molecules on SLICSel designed microarray probes

    LENUS (Irish Health Repository)

    Scheler, Ott

    2011-02-28

    Abstract Background We present a comprehensive technological solution for bacterial diagnostics using tmRNA as a marker molecule. A robust probe design algorithm for a microbial detection microarray is implemented. The probes were evaluated for specificity and, combined with NASBA (Nucleic Acid Sequence Based Amplification), for sensitivity. Results We developed a new web-based program, SLICSel, for the design of hybridization probes, based on nearest-neighbor thermodynamic modeling. A SLICSel minimum binding energy difference criterion of 4 kcal/mol was sufficient for the design of Streptococcus pneumoniae tmRNA-specific microarray probes. With lower binding energy difference criteria, additional hybridization specificity tests on the microarray were needed to eliminate non-specific probes. Using SLICSel-designed microarray probes and NASBA we were able to detect S. pneumoniae tmRNA from a series of total RNA dilutions equivalent to the RNA content of 0.1-10 CFU. Conclusions The described technological solution, and both of its components, SLICSel and the NASBA-microarray technology, independently are applicable to many different areas of microbial diagnostics.

  6. Applying BI Techniques To Improve Decision Making And Provide Knowledge Based Management

    Directory of Open Access Journals (Sweden)

    Alexandra Maria Ioana FLOREA

    2015-07-01

    Full Text Available The paper focuses on BI techniques, especially data mining algorithms, that can support and improve the decision-making process, with applications within the financial sector. Considering data mining techniques to be the more efficient option, we applied several supervised and unsupervised learning algorithms. The case study in which these algorithms have been implemented regards the activity of a banking institution, with a focus on the management of lending activities.

  7. Are patient decision aids the best way to improve clinical decision making? Report of the IPDAS Symposium.

    Science.gov (United States)

    Holmes-Rovner, Margaret; Nelson, Wendy L; Pignone, Michael; Elwyn, Glyn; Rovner, David R; O'Connor, Annette M; Coulter, Angela; Correa-de-Araujo, Rosaly

    2007-01-01

    This article reports on the International Patient Decision Aid Standards Symposium held in 2006 at the annual meeting of the Society for Medical Decision Making in Cambridge, Massachusetts. The symposium featured a debate regarding the proposition that "decision aids are the best way to improve clinical decision making." The formal debate addressed the theoretical problem of the appropriate gold standard for an improved decision, the efficacy of decision aids, and the prospects for implementation. Audience comments and questions focused on both theory and practice: the often unacknowledged roots of decision aids in expected utility theory, and the practical problems of limited patient decision aid implementation in health care. The participants' vote on the proposition was approximately half for and half against.

  8. The Arabidopsis co-expression tool (act): a WWW-based tool and database for microarray-based gene expression analysis

    DEFF Research Database (Denmark)

    Jen, C. H.; Manfield, I. W.; Michalopoulos, D. W.

    2006-01-01

    We present a new WWW-based tool for plant gene analysis, the Arabidopsis Co-Expression Tool (act), based on a large Arabidopsis thaliana microarray data set obtained from the Nottingham Arabidopsis Stock Centre. The co-expression analysis tool allows users to identify genes whose expression...... be examined using the novel clique finder tool to determine the sets of genes most likely to be regulated in a similar manner. In combination, these tools offer three levels of analysis: creation of correlation lists of co-expressed genes, refinement of these lists using two-dimensional scatter plots......

  9. Improved estimation of the noncentrality parameter distribution from a large number of t-statistics, with applications to false discovery rate estimation in microarray data analysis.

    Science.gov (United States)

    Qu, Long; Nettleton, Dan; Dekkers, Jack C M

    2012-12-01

    Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.
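    For orientation, a far simpler baseline than the NCP-density approach above is the Benjamini-Hochberg adjustment computed directly from raw p-values (this is not the authors' method, just the standard point of comparison for FDR control):

```python
def benjamini_hochberg(pvalues):
    """BH adjusted p-values: for the i-th smallest p-value,
    q_(i) = min over j >= i of p_(j) * m / j, then restore input order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, carrying the running minimum.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

pvals = [0.001, 0.008, 0.039, 0.041, 0.6]
print([round(q, 3) for q in benjamini_hochberg(pvals)])
```

    Methods like the one in the paper aim to improve on this by modeling the distribution of effect sizes (noncentrality parameters) rather than treating p-values alone.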

  10. Microarray-based mutation detection and phenotypic characterization in Korean patients with retinitis pigmentosa

    Science.gov (United States)

    Kim, Cinoo; Kim, Kwang Joong; Bok, Jeong; Lee, Eun-Ju; Kim, Dong-Joon; Oh, Ji Hee; Park, Sung Pyo; Shin, Joo Young; Lee, Jong-Young

    2012-01-01

    Purpose To evaluate microarray-based genotyping technology for the detection of mutations responsible for retinitis pigmentosa (RP) and to perform phenotypic characterization of patients with pathogenic mutations. Methods DNA from 336 patients with RP and 360 controls was analyzed using the GoldenGate assay with microbeads containing 95 previously reported disease-associated mutations from 28 RP genes. Mutations identified by microarray-based genotyping were confirmed by direct sequencing. Segregation analysis and phenotypic characterization were performed in patients with mutations. The disease severity was assessed by visual acuity, electroretinography, optical coherence tomography, and kinetic perimetry. Results Ten RP-related mutations of five RP genes (PRP3 pre-mRNA processing factor 3 homolog [PRPF3], rhodopsin [RHO], phosphodiesterase 6B [PDE6B], peripherin 2 [PRPH2], and retinitis pigmentosa 1 [RP1]) were identified in 26 of the 336 patients (7.7%) and in six of the 360 controls (1.7%). The p.H557Y mutation in PDE6B, which was homozygous in four patients and heterozygous in nine patients, was the most frequent mutation (2.5%). Mutation segregation was assessed in four families. Among the patients with missense mutations, the most severe phenotype occurred in patients with p.D984G in RP1; less severe phenotypes occurred in patients with p.R135W in RHO; a relatively moderate phenotype occurred in patients with p.T494M in PRPF3, p.H557Y in PDE6B, or p.W316G in PRPH2; and a mild phenotype was seen in a patient with p.D190N in RHO. Conclusions The results reveal that the GoldenGate assay may not be an efficient method for molecular diagnosis in RP patients with rare mutations, although it has proven to be reliable and efficient for high-throughput genotyping of single-nucleotide polymorphisms. The clinical features varied according to the mutations. Continuous effort to identify novel RP genes and mutations in a population is needed to improve the efficiency and......

  11. How the RNA isolation method can affect microRNA microarray results

    DEFF Research Database (Denmark)

    Podolska, Agnieszka; Kaczkowski, Bogumil; Litman, Thomas

    2011-01-01

    The quality of RNA is crucial in gene expression experiments: RNA degradation interferes with the measurement of gene expression, and in this context microRNA quantification can lead to an incorrect estimation. In the present study, two different RNA isolation methods were used to perform microRNA microarray analysis on porcine brain tissue. One method is a phenol-guanidine isothiocyanate-based procedure that permits isolation of total RNA. The second method, miRVana™ microRNA isolation, is column based and recovers the small RNA fraction alone. We found that microarray analyses give different results that depend on the RNA fraction used, in particular because some microRNAs appear very sensitive to the RNA isolation method. We conclude that precautions need to be taken when comparing microarray studies based on RNA isolated with different methods.

  12. Support vector machine and principal component analysis for microarray data classification

    Science.gov (United States)

    Astuti, Widi; Adiwijaya

    2018-03-01

    Cancer is a leading cause of death worldwide, although a significant proportion of cases can be cured if detected early. In recent decades, microarray technology has taken an important role in the diagnosis of cancer. By using data mining techniques, microarray data classification can improve the accuracy of cancer diagnosis compared to traditional techniques. Microarray data are characterized by small sample sizes but huge dimensionality, so the challenge for researchers is to provide classification methods that perform well in both accuracy and running time. This research proposes Principal Component Analysis (PCA) as a dimension reduction method, together with a Support Vector Machine (SVM) with several kernel functions as the classifier, for microarray data classification. The proposed scheme was applied to seven data sets using 5-fold cross-validation and then evaluated in terms of both accuracy and running time. The results show that the scheme obtains 100% accuracy on the Ovarian and Lung Cancer data when the linear and cubic kernel functions are used. In terms of running time, PCA greatly reduced the running time for every data set.
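The PCA-then-SVM pipeline described above is easy to reproduce in outline. The sketch below is not the paper's code: it uses synthetic data with a hypothetical "few samples, many features" shape and scikit-learn defaults, with a degree-3 polynomial kernel standing in for the cubic kernel.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# 60 samples x 2000 features mimics the small-sample, high-dimension setting.
X, y = make_classification(n_samples=60, n_features=2000, n_informative=20,
                           random_state=0)

for kernel in ("linear", "poly", "rbf"):     # "poly" with degree=3 ~ cubic
    clf = make_pipeline(PCA(n_components=20), SVC(kernel=kernel, degree=3))
    acc = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold CV as in the paper
    print(f"{kernel}: {acc:.2f}")
```

Reducing 2000 features to 20 principal components before the SVM is what cuts the running time; the kernel choice then controls the decision boundary.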

  13. Multi-task feature selection in microarray data by binary integer programming.

    Science.gov (United States)

    Lan, Liang; Vucetic, Slobodan

    2013-12-20

    A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
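The core idea above, maximizing relevance minus redundancy over a relaxed binary indicator vector, can be sketched as follows. This is a simplified single-task stand-in for the paper's method (no low-rank approximation, and projected gradient ascent instead of a QP solver), on synthetic data where the first five features are informative by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 100, 50, 5
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, size=n)
X[:, :k] += 2.0 * y[:, None]              # first k features carry the signal

# Relevance: squared correlation of each feature with the class label.
f = np.array([np.corrcoef(X[:, j], y)[0, 1] ** 2 for j in range(p)])
# Redundancy: squared feature-feature correlations.
Q = np.corrcoef(X, rowvar=False) ** 2

# Relax the binary indicator x in {0,1}^p to the box [0,1]^p and run
# projected gradient ascent on f^T x - 0.5 x^T Q x; keep the k largest weights.
x = np.full(p, k / p)
for _ in range(300):
    x = np.clip(x + 0.05 * (f - Q @ x), 0.0, 1.0)
selected = sorted(int(j) for j in np.argsort(x)[-k:])
print(selected)
```

The relaxation turns an intractable binary search into a smooth box-constrained problem; rounding back to the top-k weights recovers a feature subset.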

  14. Applications of decision theory to test-based decision making

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1987-01-01

    The use of Bayesian decision theory to solve problems in test-based decision making is discussed. Four basic decision problems are distinguished: (1) selection; (2) mastery; (3) placement; and (4) classification, the situation where each treatment has its own criterion. Each type of decision can be

  15. A kernel-based multivariate feature selection method for microarray data classification.

    Directory of Open Access Journals (Sweden)

    Shiquan Sun

    Full Text Available High dimensionality and small sample sizes, with their inherent risk of overfitting, pose great challenges for constructing efficient classifiers in microarray data classification. A feature selection step should therefore be conducted prior to classification to enhance prediction performance. In general, filter methods can serve as the principal or an auxiliary selection mechanism because of their simplicity, scalability, and low computational complexity. However, a series of trivial examples shows that filter methods yield less accurate performance because they ignore dependencies among features. Although a few publications have attempted to reveal relationships among features with multivariate methods, these methods describe the relationships only linearly, and simple linear combinations restrict the achievable improvement in performance. In this paper, we used a kernel method to discover inherent nonlinear correlations among features, as well as between features and the target. Moreover, the number of orthogonal components was determined by kernel Fisher's linear discriminant analysis (FLDA) in a self-adaptive manner rather than by manual parameter settings. To demonstrate the effectiveness of our method, we performed several experiments and compared the results between our method and other competitive multivariate feature selectors. In the comparison we used two classifiers (support vector machine, [Formula: see text]-nearest neighbor) on two groups of datasets, namely two-class and multi-class datasets. Experimental results demonstrate that the performance of our method is better than the others, especially on three hard-to-classify datasets, namely Wang's Breast Cancer, Gordon's Lung Adenocarcinoma and Pomeroy's Medulloblastoma.

  16. The EADGENE Microarray Data Analysis Workshop

    DEFF Research Database (Denmark)

    de Koning, Dirk-Jan; Jaffrézic, Florence; Lund, Mogens Sandø

    2007-01-01

    Microarray analyses have become an important tool in animal genomics. While their use is becoming widespread, there is still a lot of ongoing research regarding the analysis of microarray data. In the context of a European Network of Excellence, 31 researchers representing 14 research groups from 10 countries performed and discussed the statistical analyses of real and simulated 2-colour microarray data that were distributed among participants. The real data consisted of 48 microarrays from a disease challenge experiment in dairy cattle, while the simulated data consisted of 10 microarrays. Approaches to handling poor-quality data ranged from applying statistical weights, to omitting a large number of spots, to omitting entire slides. Surprisingly, these very different approaches gave quite similar results when applied to the simulated data, although not all participating groups analysed both real and simulated data. The workshop was very successful...

  17. Fast Gene Ontology based clustering for microarray experiments

    Directory of Open Access Journals (Sweden)

    Ovaska Kristian

    2008-11-01

    Full Text Available Abstract Background Analysis of a microarray experiment often results in a list of hundreds of disease-associated genes. In order to suggest common biological processes and functions for these genes, Gene Ontology annotations with statistical testing are widely used. However, these analyses can produce a very large number of significantly altered biological processes. Thus, it is often challenging to interpret GO results and identify novel testable biological hypotheses. Results We present fast software for advanced gene annotation using semantic similarity for Gene Ontology terms combined with clustering and heat map visualisation. The methodology allows rapid identification of genes sharing the same Gene Ontology cluster. Conclusion Our R based semantic similarity open-source package has a speed advantage of over 2000-fold compared to existing implementations. From the resulting hierarchical clustering dendrogram genes sharing a GO term can be identified, and their differences in the gene expression patterns can be seen from the heat map. These methods facilitate advanced annotation of genes resulting from data analysis.
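The underlying idea, clustering genes by the similarity of their GO annotations and reading clusters off the dendrogram, can be illustrated with a toy stand-in. The sketch below is not the authors' R package: it uses a plain Jaccard similarity over hypothetical GO-term sets rather than a true semantic similarity measure, then average-linkage hierarchical clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

annotations = {                      # hypothetical gene -> GO-term sets
    "geneA": {"GO:0006915", "GO:0008219"},
    "geneB": {"GO:0006915", "GO:0008219", "GO:0012501"},
    "geneC": {"GO:0006412", "GO:0006414"},
    "geneD": {"GO:0006412"},
}
genes = sorted(annotations)

def jaccard_distance(a, b):          # 1 - |A ∩ B| / |A ∪ B|
    return 1.0 - len(a & b) / len(a | b)

n = len(genes)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = jaccard_distance(annotations[genes[i]], annotations[genes[j]])
        dist[i, j] = dist[j, i] = d

tree = linkage(squareform(dist), method="average")   # build the dendrogram
labels = fcluster(tree, t=0.6, criterion="distance") # cut it at distance 0.6
print(dict(zip(genes, labels)))
```

Cutting the dendrogram at a distance threshold groups genes that share most of their annotation terms, mirroring how clusters are read off the heat map.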

  18. DNA microarray data and contextual analysis of correlation graphs

    Directory of Open Access Journals (Sweden)

    Hingamp Pascal

    2003-04-01

    Full Text Available Abstract Background DNA microarrays are used to produce large sets of expression measurements from which specific biological information is sought. Their analysis requires efficient and reliable algorithms for dimensional reduction, classification and annotation. Results We study networks of co-expressed genes obtained from DNA microarray experiments. The mathematical concept of curvature on graphs is used to group genes or samples into clusters to which relevant gene or sample annotations are automatically assigned. Application to publicly available yeast and human lymphoma data demonstrates the reliability of the method in spite of its simplicity, especially with respect to the small number of parameters involved. Conclusions We provide a method for automatically determining relevant gene clusters among the many genes monitored with microarrays. The automatic annotations and the graphical interface improve the readability of the data. A C++ implementation, called Trixy, is available from http://tagc.univ-mrs.fr/bioinformatics/trixy.html.

  19. Leukemia and colon tumor detection based on microarray data classification using momentum backpropagation and genetic algorithm as a feature selection method

    Science.gov (United States)

    Wisesty, Untari N.; Warastri, Riris S.; Puspitasari, Shinta Y.

    2018-03-01

    Cancer is one of the major causes of morbidity and mortality worldwide. There is therefore a need for a system that can analyze microarray data derived from a patient's deoxyribonucleic acid (DNA) and identify whether that person has cancer. Microarray data, however, have thousands of attributes, which makes processing them challenging; this is often referred to as the curse of dimensionality. In this study, we therefore built a system capable of detecting whether or not a patient has cancer. The algorithms used are a Genetic Algorithm for feature selection and a Momentum Backpropagation Neural Network for classification, with data taken from the Kent Ridge Bio-medical Dataset. Based on system testing, the system can detect Leukemia and Colon Tumor with best accuracies of 98.33% for the colon tumor data and 100% for the leukemia data. The Genetic Algorithm as a feature selection method improves system accuracy, from 64.52% to 98.33% for the colon tumor data and from 65.28% to 100% for the leukemia data, and the use of the momentum parameter accelerates the convergence of the Neural Network training process.
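A GA wrapper around a momentum-backpropagation classifier can be sketched roughly as below. This is a hypothetical miniature, not the study's system: it uses scikit-learn's MLPClassifier (SGD solver with a momentum term) as the fitness function on synthetic data, with truncation selection, one-point crossover, and bit-flip mutation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=60, n_features=80, n_informative=10,
                           random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a small momentum-SGD network on the mask."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(5,), solver="sgd", momentum=0.9,
                        max_iter=150, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=2).mean()

pop = rng.random((8, X.shape[1])) < 0.1          # sparse random feature masks
for _ in range(3):                               # a few GA generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:4]]  # truncation selection
    children = []
    for _ in range(4):
        a, b = parents[rng.integers(4, size=2)]
        cut = int(rng.integers(1, X.shape[1]))
        child = np.concatenate([a[:cut], b[cut:]])              # crossover
        children.append(child ^ (rng.random(X.shape[1]) < 0.01))  # mutation
    pop = np.vstack([parents, *children])
best = pop[int(np.argmax([fitness(m) for m in pop]))]
print(int(best.sum()))
```

The chromosome is a binary mask over genes, so the GA searches feature subsets directly while the network's momentum term is left to the solver, as in the study.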

  20. Using M and S to Improve Human Decision Making and Achieve Effective Problem Solving in an International Environment

    Science.gov (United States)

    Christie, Vanessa L.; Landess, David J.

    2012-01-01

    In the international arena, decision makers are often swayed away from fact-based analysis by their own individual cultural and political bias. Modeling and Simulation-based training can raise awareness of individual predisposition and improve the quality of decision making by focusing solely on fact vice perception. This improved decision making methodology will support the multinational collaborative efforts of military and civilian leaders to solve challenges more effectively. The intent of this experimental research is to create a framework that allows decision makers to "come to the table" with the latest and most significant facts necessary to determine an appropriate solution for any given contingency.

  1. Deep brain stimulation of the subthalamic nucleus improves reward-based decision-learning in Parkinson's disease

    NARCIS (Netherlands)

    van Wouwe, N.C.; Ridderinkhof, K.R.; van den Wildenberg, W.P.M.; Band, G.P.H.; Abisogun, A.; Elias, W.J.; Frysinger, R.; Wylie, S.A.

    2011-01-01

    Recently, the subthalamic nucleus (STN) has been shown to be critically involved in decision-making, action selection, and motor control. Here we investigate the effect of deep brain stimulation (DBS) of the STN on reward-based decision-learning in patients diagnosed with Parkinson's disease (PD).

  2. Detection and identification of intestinal pathogenic bacteria by hybridization to oligonucleotide microarrays

    Science.gov (United States)

    Jin, Lian-Qun; Li, Jun-Wen; Wang, Sheng-Qi; Chao, Fu-Huan; Wang, Xin-Wei; Yuan, Zheng-Quan

    2005-01-01

    AIM: To detect the common intestinal pathogenic bacteria quickly and accurately. METHODS: A rapid (<3 h) experimental procedure was set up based upon gene chip technology. Target genes were amplified and hybridized to oligonucleotide microarrays. RESULTS: One hundred and seventy strains of bacteria in pure culture belonging to 11 genera were successfully discriminated under comparable conditions, and a series of specific hybridization maps corresponding to each kind of bacteria were obtained. When this method was applied to 26 divided cultures, 25 (96.2%) were identified. CONCLUSION: Salmonella sp., Escherichia coli, Shigella sp., Listeria monocytogenes, Vibrio parahaemolyticus, Staphylococcus aureus, Proteus sp., Bacillus cereus, Vibrio cholerae, Enterococcus faecalis, Yersinia enterocolitica, and Campylobacter jejuni can be detected and identified by our microarrays. The accuracy, range, and discrimination power of this assay can be continually improved by adding further oligonucleotides to the arrays without any significant increase of complexity or cost. PMID:16437687

  3. Brachyury, SOX-9, and Podoplanin, New Markers in the Skull Base Chordoma Vs Chondrosarcoma Differential: A Tissue Microarray Based Comparative Analysis

    Science.gov (United States)

    Oakley, GJ; Fuhrer, K; Seethala, RR

    2014-01-01

    The distinction between chondrosarcoma and chordoma of the skull base/head and neck is prognostically important; however, both have sufficient morphologic overlap to make distinction difficult. As a result of gene expression studies, additional candidate markers have been proposed to help in this distinction. Hence, we sought to evaluate the performance of new markers: brachyury, SOX-9, and podoplanin alongside the more traditional markers glial fibrillary acid protein, carcinoembryonic antigen, CD24 and epithelial membrane antigen. Paraffin blocks from 103 skull base/head and neck chondroid tumors from 70 patients were retrieved (1969-2007). Diagnoses were made based on morphology and/or whole section immunohistochemistry for cytokeratin and S100 protein yielding 79 chordomas (comprising 45 chondroid chordomas and 34 conventional chordomas), and 24 chondrosarcomas. A tissue microarray containing 0.6 mm cores of each tumor in triplicate was constructed using a manual array (MTA-1, Beecher Instruments). For visualization of staining, the ImmPRESS detection system (Vector Laboratories) with 2 - diaminobenzidine substrate was used. Sensitivities and specificities were calculated for each marker. Core loss from the microarray ranged from 25-29% yielding 66-78 viable cases per stain. The classic marker, cytokeratin, still has the best performance characteristics. When combined with brachyury, accuracy improves slightly (sensitivity and specificity for detection of chordoma 98% and 100%, respectively). Positivity for both epithelial membrane antigen and AE1/AE3 had a sensitivity of 90% and a specificity of 100% for detecting chordoma in this study. SOX-9 is apparently common to both notochordal and cartilaginous differentiation, and is not useful in the chordoma-chondrosarcoma differential diagnosis. Glial fibrillary acid protein, carcinoembryonic antigen, CD24, and epithelial membrane antigen did not outperform other markers, and are less useful in the diagnosis of

  4. Experience With Rapid Microarray-Based Diagnostic Technology and Antimicrobial Stewardship for Patients With Gram-Positive Bacteremia.

    Science.gov (United States)

    Neuner, Elizabeth A; Pallotta, Andrea M; Lam, Simon W; Stowe, David; Gordon, Steven M; Procop, Gary W; Richter, Sandra S

    2016-11-01

    OBJECTIVE To describe the impact of rapid diagnostic microarray technology and antimicrobial stewardship for patients with Gram-positive blood cultures. DESIGN Retrospective pre-intervention/post-intervention study. SETTING A 1,200-bed academic medical center. PATIENTS Inpatients with blood cultures positive for Staphylococcus aureus, Enterococcus faecalis, E. faecium, Streptococcus pneumoniae, S. pyogenes, S. agalactiae, S. anginosus, Streptococcus spp., and Listeria monocytogenes during the 6 months before and after implementation of Verigene Gram-positive blood culture microarray (BC-GP) with an antimicrobial stewardship intervention. METHODS Before the intervention, no rapid diagnostic technology was used or antimicrobial stewardship intervention was undertaken, except for the use of peptide nucleic acid fluorescent in situ hybridization and MRSA agar to identify staphylococcal isolates. After the intervention, all Gram-positive blood cultures underwent BC-GP microarray and the antimicrobial stewardship intervention consisting of real-time notification and pharmacist review. RESULTS In total, 513 patients with bacteremia were included in this study: 280 patients with S. aureus, 150 patients with enterococci, 82 patients with streptococci, and 1 patient with L. monocytogenes. The number of antimicrobial switches was similar in the pre-BC-GP (52%; 155 of 300) and post-BC-GP (50%; 107 of 213) periods. The time to antimicrobial switch was significantly shorter in the post-BC-GP group than in the pre-BC-GP group: 48±41 hours versus 75±46 hours, respectively (P<.001). The most common antimicrobial switch was de-escalation, and time to de-escalation was significantly shorter in the post-BC-GP group than in the pre-BC-GP group: 53±41 hours versus 82±48 hours, respectively (P<.001). There was no difference in mortality or hospital length of stay as a result of the intervention. CONCLUSIONS The combination of a rapid microarray diagnostic test with an antimicrobial

  5. A Combinatory Approach for Selecting Prognostic Genes in Microarray Studies of Tumour Survivals

    Directory of Open Access Journals (Sweden)

    Qihua Tan

    2009-01-01

    Full Text Available Unlike significant gene expression analysis, which looks for genes that are differentially regulated, feature selection in microarray-based prognostic gene expression analysis aims at finding a subset of marker genes that are not only differentially expressed but also informative for prediction. Unfortunately, feature selection in the microarray literature is dominated by the simple heuristic of univariate gene filtering, which selects differentially expressed genes according to their statistical significance. We introduce a combinatory feature selection strategy that integrates differential gene expression analysis with the Gram-Schmidt process to identify prognostic genes that are both statistically significant and highly informative for predicting tumour survival outcomes. Empirical application to leukemia and ovarian cancer survival data, through within- and cross-study validations, shows that the feature space can be largely reduced while achieving improved testing performance.
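The Gram-Schmidt part of the strategy can be sketched independently of the differential-expression filter. The toy below is not the authors' code: it repeatedly picks the feature most correlated with the residual outcome, then orthogonalises the remaining features and the outcome against it; features 0, 5 and 9 are informative by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 50, 30, 3
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.8 * X[:, 5] - 0.6 * X[:, 9] + 0.1 * rng.normal(size=n)

Xr, yr, selected = X.copy(), y.copy(), []
for _ in range(k):
    norms = np.linalg.norm(Xr, axis=0)
    norms[norms < 1e-12] = np.inf            # ignore already-used directions
    j = int(np.argmax(np.abs(Xr.T @ yr) / norms))  # best-aligned feature
    selected.append(j)
    u = Xr[:, j] / np.linalg.norm(Xr[:, j])
    Xr -= np.outer(u, u @ Xr)                # Gram-Schmidt: project out u
    yr -= u * (u @ yr)
print(selected)
```

Because each chosen direction is projected out, a gene that merely duplicates an already-selected gene scores near zero on later rounds, which is the redundancy control the abstract describes.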

  6. Accounting for one-channel depletion improves missing value imputation in 2-dye microarray data.

    Science.gov (United States)

    Ritz, Cecilia; Edén, Patrik

    2008-01-19

    For 2-dye microarray platforms, some missing values may arise from an un-measurably low RNA expression in one channel only. Information of such "one-channel depletion" is so far not included in algorithms for imputation of missing values. Calculating the mean deviation between imputed values and duplicate controls in five datasets, we show that KNN-based imputation gives a systematic bias of the imputed expression values of one-channel depleted spots. Evaluating the correction of this bias by cross-validation showed that the mean square deviation between imputed values and duplicates were reduced up to 51%, depending on dataset. By including more information in the imputation step, we more accurately estimate missing expression values.
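Plain KNN imputation, plus a hook for the one-channel-depletion correction the paper argues for, might look like the following sketch. The `depleted`/`offset` handling is a hypothetical placeholder for the paper's bias correction, whose exact form the abstract does not specify.

```python
import numpy as np

def knn_impute(M, k=3, depleted=None, offset=0.0):
    """Fill NaNs in each row from the k most similar complete rows.

    `depleted`/`offset` are a hypothetical hook for a one-channel-depletion
    correction: imputed values flagged in `depleted` are shifted by `offset`.
    """
    M = M.copy()
    complete = M[~np.isnan(M).any(axis=1)]
    for i in np.flatnonzero(np.isnan(M).any(axis=1)):
        obs = ~np.isnan(M[i])                # observed entries of this row
        d = np.sqrt(((complete[:, obs] - M[i, obs]) ** 2).mean(axis=1))
        fill = complete[np.argsort(d)[:k]][:, ~obs].mean(axis=0)
        if depleted is not None:             # bias correction for depleted spots
            fill += offset * depleted[i, ~obs]
        M[i, ~obs] = fill
    return M

M = np.array([[1.0, 1.1, 0.9],
              [1.0, 1.0, np.nan],
              [2.0, 2.1, 1.9],
              [0.0, 0.1, -0.1]])
filled = knn_impute(M, k=1)
print(filled)
```

Without the correction, a spot missing because one channel was un-measurably low is imputed as if it were an ordinary dropout, which is exactly the systematic bias the study measures.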

  7. Microarray-based analysis of IncA/C plasmid-associated genes from multidrug-resistant Salmonella enterica.

    Science.gov (United States)

    Lindsey, Rebecca L; Frye, Jonathan G; Fedorka-Cray, Paula J; Meinersmann, Richard J

    2011-10-01

    In the family Enterobacteriaceae, plasmids have been classified according to 27 incompatibility (Inc) or replicon types that are based on the inability of different plasmids with the same replication mechanism to coexist in the same cell. Certain replicon types such as IncA/C are associated with multidrug resistance (MDR). We developed a microarray that contains 286 unique 70-mer oligonucleotide probes based on sequences from five IncA/C plasmids: pYR1 (Yersinia ruckeri), pPIP1202 (Yersinia pestis), pP99-018 (Photobacterium damselae), pSN254 (Salmonella enterica serovar Newport), and pP91278 (Photobacterium damselae). DNA from 59 Salmonella enterica isolates was hybridized to the microarray and analyzed for the presence or absence of genes. These isolates represented 17 serovars from 14 different animal hosts and from different geographical regions in the United States. Qualitative cluster analysis was performed using CLUSTER 3.0 to group microarray hybridization results. We found that IncA/C plasmids occurred in two lineages distinguished by a major insertion-deletion (indel) region that contains genes encoding mostly hypothetical proteins. The most variable genes were represented by transposon-associated genes as well as four antimicrobial resistance genes (aphA, merP, merA, and aadA). Sixteen mercury resistance genes were identified and highly conserved, suggesting that mercury ion-related exposure is a stronger pressure than anticipated. We used these data to construct a core IncA/C genome and an accessory genome. The results of our studies suggest that the transfer of antimicrobial resistance determinants by transfer of IncA/C plasmids is somewhat less common than exchange within the plasmids orchestrated by transposable elements, such as transposons, integrating and conjugative elements (ICEs), and insertion sequence common regions (ISCRs), and thus pose less opportunity for exchange of antimicrobial resistance.

  8. Significance analysis of lexical bias in microarray data

    Directory of Open Access Journals (Sweden)

    Falkow Stanley

    2003-04-01

    Full Text Available Abstract Background Genes that are determined to be significantly differentially regulated in microarray analyses often appear to have functional commonalities, such as being components of the same biochemical pathway. This results in certain words being under- or overrepresented in the list of genes. Distinguishing between biologically meaningful trends and artifacts of annotation and analysis procedures is of the utmost importance, as only true biological trends are of interest for further experimentation. A number of sophisticated methods for identification of significant lexical trends are currently available, but these methods are generally too cumbersome for practical use by most microarray users. Results We have developed a tool, LACK, for calculating the statistical significance of apparent lexical bias in microarray datasets. The frequency of a user-specified list of search terms in a list of genes which are differentially regulated is assessed for statistical significance by comparison to randomly generated datasets. The simplicity of the input files and user interface targets the average microarray user who wishes to have a statistical measure of apparent lexical trends in analyzed datasets without the need for bioinformatics skills. The software is available as Perl source or a Windows executable. Conclusion We have used LACK in our laboratory to generate biological hypotheses based on our microarray data. We demonstrate the program's utility using an example in which we confirm significant upregulation of SPI-2 pathogenicity island of Salmonella enterica serovar Typhimurium by the cation chelator dipyridyl.
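The significance calculation LACK performs can be mimicked with a short permutation test. The gene names, annotations, and search term below are all made up for illustration; only the resampling logic follows the description above (compare the term's frequency in the regulated list against randomly generated gene lists of the same size).

```python
import random

random.seed(0)
# Hypothetical annotation: 200 genes, the first 40 mention the search term.
universe = [f"gene{i}" for i in range(200)]
has_term = {g: i < 40 for i, g in enumerate(universe)}

regulated = universe[:25] + universe[100:105]    # made-up regulated-gene list
observed = sum(has_term[g] for g in regulated)   # hits among regulated genes

trials = 2000
as_extreme = sum(
    sum(has_term[g] for g in random.sample(universe, len(regulated))) >= observed
    for _ in range(trials)
)
p_value = (as_extreme + 1) / (trials + 1)        # permutation p-value
print(observed, round(p_value, 4))
```

A small p-value says the term is overrepresented beyond what random gene lists of the same size would show, which is the distinction between biological trend and annotation artifact the abstract draws.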

  9. Microarray expression profiling of human dental pulp from single subject.

    Science.gov (United States)

    Tete, Stefano; Mastrangelo, Filiberto; Scioletti, Anna Paola; Tranasi, Michelangelo; Raicu, Florina; Paolantonio, Michele; Stuppia, Liborio; Vinci, Raffaele; Gherlone, Enrico; Ciampoli, Cristian; Sberna, Maria Teresa; Conti, Pio

    2008-01-01

    Microarray technology allows the simultaneous analysis of the expression patterns of thousands of genes. The aim of this research was to evaluate the expression profile of healthy human dental pulp in order to find genes that are activated and encode proteins involved in the physiological processes of human dental pulp. We report data obtained by analyzing expression profiles of human tooth pulp from single subjects, using an approach based on the amplification of total RNA. Experiments were performed on a high-density array able to analyse about 21,000 oligonucleotide sequences of about 70 bases in duplicate, starting from the total RNA of the pulp of a single tooth. The obtained data were analyzed using the S.A.M. system (Significance Analysis of Microarrays), and genes were grouped according to their molecular functions and biological processes by the Onto-Express software. The microarray analysis revealed 362 genes with specific pulp expression. Genes showing significantly high expression were classified into genes involved in tooth development, proto-oncogenes, and genes encoding collagens, DNases, metallopeptidases, and growth factors. We report a microarray analysis carried out by extraction of total RNA from specimens of healthy human dental pulp tissue. This approach represents a powerful tool in the study of human normal and pathological pulp, allowing minimization of the genetic variability due to the pooling of samples from different individuals.

  10. A decision support model for improving a multi-family housing complex based on CO2 emission from electricity consumption.

    Science.gov (United States)

    Hong, Taehoon; Koo, Choongwan; Kim, Hyunjoong

    2012-12-15

    The number of deteriorated multi-family housing complexes in South Korea continues to rise, and consequently their electricity consumption is also increasing. This needs to be addressed as part of the nation's efforts to reduce energy consumption. The objective of this research was to develop a decision support model for determining the need to improve multi-family housing complexes. In this research, 1664 cases located in Seoul were selected for model development. The research team collected the characteristics and electricity energy consumption data of these projects in 2009-2010. The following were carried out in this research: (i) using the Decision Tree, multi-family housing complexes were clustered based on their electricity energy consumption; (ii) using Case-Based Reasoning, similar cases were retrieved from the same cluster; and (iii) using a combination of Multiple Regression Analysis, Artificial Neural Network, and Genetic Algorithm, the prediction performance of the developed model was improved. The results of this research can be used as follows: (i) as basic research data for continuously managing several energy consumption data of multi-family housing complexes; (ii) as advanced research data for predicting energy consumption based on the project characteristics; (iii) as practical research data for selecting the most optimal multi-family housing complex with the most potential in terms of energy savings; and (iv) as consistent and objective criteria for incentives and penalties. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Decision optimization of case-based computer-aided decision systems using genetic algorithms with application to mammography

    International Nuclear Information System (INIS)

    Mazurowski, Maciej A; Habas, Piotr A; Zurada, Jacek M; Tourassi, Georgia D

    2008-01-01

    This paper presents an optimization framework for improving case-based computer-aided decision (CB-CAD) systems. The underlying hypothesis of the study is that each example in the knowledge database of a medical decision support system has different importance in the decision making process. A new decision algorithm incorporating an importance weight for each example is proposed to account for these differences. The search for the best set of importance weights is defined as an optimization problem and a genetic algorithm is employed to solve it. The optimization process is tailored to maximize the system's performance according to clinically relevant evaluation criteria. The study was performed using a CAD system developed for the classification of regions of interests (ROIs) in mammograms as depicting masses or normal tissue. The system was constructed and evaluated using a dataset of ROIs extracted from the Digital Database for Screening Mammography (DDSM). Experimental results show that, according to receiver operator characteristic (ROC) analysis, the proposed method significantly improves the overall performance of the CAD system as well as its average specificity for high breast mass detection rates

  12. Improving Decision Making for Sustainability: A Case Study from New Zealand

    Science.gov (United States)

    Geertshuis, Susan

    2009-01-01

    Purpose: The purpose of this paper is to describe and evidence a means of improving decision making within a sustainable resource management context. Design/methodology/approach: A set of competencies required by effective decision makers is developed. Methods of improving decision making are reviewed and used to develop a continuing education…

  13. DNA microarrays : a molecular cloning manual

    National Research Council Canada - National Science Library

    Sambrook, Joseph; Bowtell, David

    2002-01-01

    DNA Microarrays provides authoritative, detailed instruction on the design, construction, and applications of microarrays, as well as comprehensive descriptions of the software tools and strategies...

  14. Diagnostic and analytical applications of protein microarrays

    DEFF Research Database (Denmark)

    Dufva, Hans Martin; Christensen, C.B.V.

    2005-01-01

    DNA microarrays have changed the field of biomedical sciences over the past 10 years. For several reasons, antibody and other protein microarrays have not developed at the same rate. However, protein and antibody arrays have emerged as a powerful tool to complement DNA microarrays during the post...

  15. PATMA: parser of archival tissue microarray

    Directory of Open Access Journals (Sweden)

    Lukasz Roszkowiak

    2016-12-01

    Full Text Available Tissue microarrays are commonly used in modern pathology for cancer tissue evaluation, as it is a very potent technique. Tissue microarray slides are often scanned to perform computer-aided histopathological analysis of the tissue cores. For processing the image, splitting the whole virtual slide into images of individual cores is required. The only way to distinguish cores corresponding to specimens in the tissue microarray is through their arrangement. Unfortunately, distinguishing the correct order of cores is not a trivial task as they are not labelled directly on the slide. The main aim of this study was to create a procedure capable of automatically finding and extracting cores from archival images of the tissue microarrays. This software supports the work of scientists who want to perform further image processing on single cores. The proposed method is an efficient and fast procedure, working in fully automatic or semi-automatic mode. A total of 89% of punches were correctly extracted with automatic selection. With an addition of manual correction, it is possible to fully prepare the whole slide image for extraction in 2 min per tissue microarray. The proposed technique requires minimum skill and time to parse big array of cores from tissue microarray whole slide image into individual core images.

  16. Feature selection and classification of MAQC-II breast cancer and multiple myeloma microarray gene expression data.

    Directory of Open Access Journals (Sweden)

    Qingzhong Liu

Full Text Available Microarray data have a high dimension of variables, but available datasets usually contain only a small number of samples, making the study of such datasets interesting and challenging. In the task of analyzing microarray data for the purpose of, e.g., predicting gene-disease associations, feature selection is very important because it provides a way to handle the high dimensionality by exploiting information redundancy induced by associations among genetic markers. Judicious feature selection in microarray data analysis can significantly reduce cost while maintaining or improving the classification or prediction accuracy of the learning machines employed to sort out the datasets. In this paper, we propose a gene selection method called Recursive Feature Addition (RFA), which combines supervised learning and statistical similarity measures. We compare our method with the following gene selection methods: Support Vector Machine Recursive Feature Elimination (SVMRFE), Leave-One-Out Calculation Sequential Forward Selection (LOOCSFS), and Gradient-based Leave-One-Out Gene Selection (GLGS). To evaluate the performance of these gene selection methods, we employ several popular learning classifiers on the MicroArray Quality Control phase II on predictive modeling (MAQC-II) breast cancer dataset and the MAQC-II multiple myeloma dataset. Experimental results show that the performance of gene selection is tightly coupled with the learning classifier used. Overall, our approach outperforms the other compared methods. The biological functional analysis based on the MAQC-II breast cancer dataset convinced us to apply our method to phenotype prediction. Additionally, learning classifiers also play important roles in the classification of microarray data, and our experimental results indicate that the Nearest Mean Scale Classifier (NMSC) is a good choice due to its prediction reliability and its stability across the three performance measurements: testing accuracy, MCC values, and
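The abstract does not spell out the RFA algorithm, but the general shape of a greedy forward gene-selection loop paired with a nearest-mean classifier can be sketched as follows. Everything here is illustrative: the scoring rule, the toy data, and the plain nearest-mean rule (a simplified stand-in for the NMSC) are assumptions, not the paper's implementation.

```python
import numpy as np

def nearest_mean_predict(Xtr, ytr, Xte):
    """Assign each test sample to the class whose feature-mean is closest
    (a plain nearest-mean rule; a simplified stand-in for the NMSC)."""
    classes = np.unique(ytr)
    means = np.stack([Xtr[ytr == c].mean(axis=0) for c in classes])
    dist = ((Xte[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(dist, axis=1)]

def forward_feature_addition(X, y, n_features):
    """Greedy forward selection: at each step, add the gene that most
    improves resubstitution accuracy of the nearest-mean classifier."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_features):
        accs = [
            (nearest_mean_predict(X[:, selected + [f]], y,
                                  X[:, selected + [f]]) == y).mean()
            for f in remaining
        ]
        selected.append(remaining.pop(int(np.argmax(accs))))
    return selected

# Toy "microarray": 40 samples x 50 genes; only genes 0 and 1 carry signal.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 20)
X = rng.normal(0.0, 1.0, (40, 50))
X[:, 0] += 2.5 * y   # strong marker gene
X[:, 1] += 1.5 * y   # weaker marker gene
picked = forward_feature_addition(X, y, 2)
print(picked)
```

With a clearly informative gene in the toy data, the first feature added is the strong marker; in real microarray data the loop would instead score candidates by cross-validation to avoid overfitting the selection.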

  17. Prevalence, identification by a DNA microarray-based assay of human and food isolates Listeria spp. from Tunisia.

    Science.gov (United States)

    Hmaïed, F; Helel, S; Le Berre, V; François, J-M; Leclercq, A; Lecuit, M; Smaoui, H; Kechrid, A; Boudabous, A; Barkallah, I

    2014-02-01

We aimed to evaluate the prevalence of Listeria species isolated from food samples and to characterize food and human-case isolates. Between 2005 and 2007, one hundred food samples collected in the markets of Tunis were analysed in our study. Five strains of Listeria monocytogenes responsible for human listeriosis, isolated in a hospital of Tunis, were included. Multiplex PCR serogrouping and pulsed-field gel electrophoresis (PFGE) with the enzymes AscI and ApaI were used for the characterization of the L. monocytogenes isolates. We have developed a rapid microarray-based assay for reliable discrimination of species within the Listeria genus. The prevalence of Listeria spp. in food samples was estimated at 14% using classical biochemical identification. Two samples were assigned to L. monocytogenes and 12 to L. innocua. The DNA microarray allowed unambiguous identification of Listeria species, and our microarray results were in accordance with the biochemical identification. The two food L. monocytogenes isolates were assigned to PCR serogroup IIa (serovar 1/2a), whereas the human L. monocytogenes isolates were of PCR serogroup IVb (serovar 4b). These isolates showed high similarity by PFGE. The food L. monocytogenes isolates were classified into two different pulsotypes, both distinct from those of the five strains responsible for the human cases. We confirmed the presence of Listeria spp. in a variety of food samples in Tunis. Increased food and clinical surveillance should be considered in Tunisia to identify putative infection sources. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  18. Aggregate assessments support improved operational decision making

    International Nuclear Information System (INIS)

    Bauer, R.

    2003-01-01

    At Darlington Nuclear aggregate assessment of plant conditions is carried out in support of Operational Decision Making. This paper discusses how aggregate assessments have been applied to Operator Workarounds leading to improved prioritisation and alignment of work programs in different departments. As well, aggregate assessment of plant and human performance factors has been carried out to identify criteria which support conservative decision making in the main control room during unit transients. (author)

  19. Microarray-based method for the parallel analysis of genotypes and expression profiles of wood-forming tissues in Eucalyptus grandis

    CSIR Research Space (South Africa)

    Barros, E

    2009-05-01

    Full Text Available of Eucalyptus grandis planting stock that exhibit preferred wood qualities is thus a priority of the South African forestry industry. The researchers used microarray-based DNA-amplified fragment length polymorphism (AFLP) analysis in combination with expression...

  20. A Versatile Microarray Platform for Capturing Rare Cells

    Science.gov (United States)

    Brinkmann, Falko; Hirtz, Michael; Haller, Anna; Gorges, Tobias M.; Vellekoop, Michael J.; Riethdorf, Sabine; Müller, Volkmar; Pantel, Klaus; Fuchs, Harald

    2015-10-01

    Analyses of rare events occurring at extremely low frequencies in body fluids are still challenging. We established a versatile microarray-based platform able to capture single target cells from large background populations. As use case we chose the challenging application of detecting circulating tumor cells (CTCs) - about one cell in a billion normal blood cells. After incubation with an antibody cocktail, targeted cells are extracted on a microarray in a microfluidic chip. The accessibility of our platform allows for subsequent recovery of targets for further analysis. The microarray facilitates exclusion of false positive capture events by co-localization allowing for detection without fluorescent labelling. Analyzing blood samples from cancer patients with our platform reached and partly outreached gold standard performance, demonstrating feasibility for clinical application. Clinical researchers free choice of antibody cocktail without need for altered chip manufacturing or incubation protocol, allows virtual arbitrary targeting of capture species and therefore wide spread applications in biomedical sciences.

  1.  DNA microarray-based gene expression profiling in diagnosis, assessing prognosis and predicting response to therapy in colorectal cancer

    Directory of Open Access Journals (Sweden)

    Przemysław Kwiatkowski

    2012-06-01

Full Text Available Colorectal cancer is the most common cancer of the gastrointestinal tract. It is considered a biological model of a certain type of carcinogenesis in which progression from an early- to late-stage adenoma and cancer is accompanied by distinct genetic alterations. Clinical and pathological parameters commonly used in clinical practice are often insufficient to determine groups of patients suitable for personalized treatment. Moreover, reliable molecular markers with high prognostic value have not yet been determined. Molecular studies using DNA-based microarrays have identified numerous genes involved in cell proliferation and differentiation during carcinogenesis. Assessment of the genetic profile of colorectal cancer using the microarray technique might be a useful tool for determining groups of patients with different clinical outcomes who would benefit from additional personalized treatment. The main objective of this study was to present the current state of knowledge on the practical application of gene profiling techniques using microarrays for determining diagnosis, prognosis and response to treatment in colorectal cancer.

  2. Fluorescent labeling of NASBA amplified tmRNA molecules for microarray applications

    Directory of Open Access Journals (Sweden)

    Kaplinski Lauris

    2009-05-01

Full Text Available Abstract Background Here we present a novel and promising microbial diagnostic method that combines the sensitivity of Nucleic Acid Sequence Based Amplification (NASBA) with the high information content of microarray technology for the detection of bacterial tmRNA molecules. The NASBA protocol was modified to include aminoallyl-UTP (aaUTP) molecules that were incorporated into nascent RNA during the NASBA reaction. Post-amplification labeling with fluorescent dye was carried out subsequently, and tmRNA hybridization signal intensities were measured using microarray technology. Significant optimization of the labeled NASBA protocol was required to maintain the sensitivity of the reactions. Results Two different aaUTP salts were evaluated and optimum final concentrations were identified for both. A final 2 mM concentration of the aaUTP Li-salt in the NASBA reaction resulted in the highest microarray signals overall, twice as high as the strongest signals with 1 mM aaUTP Na-salt. Conclusion We have successfully demonstrated an efficient combination of NASBA amplification technology with microarray-based hybridization detection. The method is applicable to many different areas of microbial diagnostics, including environmental monitoring, biothreat detection, industrial process monitoring and clinical microbiology.

  3. Position dependent mismatch discrimination on DNA microarrays – experiments and model

    Directory of Open Access Journals (Sweden)

    Michel Wolfgang

    2008-12-01

Full Text Available Abstract Background The propensity of oligonucleotide strands to form stable duplexes with complementary sequences is fundamental to processes as varied as microRNA signalling, microarray hybridization and PCR. Yet our understanding of oligonucleotide hybridization, in particular in the presence of surfaces, is rather limited. Here we use oligonucleotide microarrays made in-house by optically controlled DNA synthesis to produce probe sets comprising all possible single-base mismatches and base bulges for each of 20 sequence motifs under study. Results We observe that mismatch discrimination is mostly determined by the defect position (relative to the duplex ends) as well as by the sequence context. We investigate the thermodynamics of the oligonucleotide duplexes on the basis of a double-ended molecular zipper. Theoretical predictions of the influence of defect position, as well as of long-range sequence effects, agree well with the experimental results. Conclusion Molecular zipping at thermodynamic equilibrium explains the binding affinity of mismatched DNA duplexes on microarrays well. The position-dependent nearest-neighbor model (PDNN) can be inferred from it. A quantitative understanding of microarray experiments from first principles is within reach.
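The double-ended zipper picture above admits a small numerical illustration. In this toy version (the per-base-pair weight w and the all-or-none contiguous-block states are assumptions for illustration, not the paper's fitted PDNN parameters), a mismatch at position p forbids pairing there, so zipped blocks must lie entirely to its left or right; defects near the duplex center then cost far more binding than defects near the ends, matching the reported position dependence.

```python
def zipper_z(length, w):
    """Partition function of a zipper segment: sum over all contiguous
    zipped blocks [i, j] within `length` base pairs, each pair weight w."""
    z = 0.0
    for l in range(1, length + 1):        # block length
        z += (length - l + 1) * w ** l    # number of placements x weight
    return z

def relative_affinity(n, p, w=2.0):
    """Toy double-ended zipper: a mismatch at position p (1-based) splits
    the n-mer duplex into independent left and right zipping segments."""
    z_mismatch = zipper_z(p - 1, w) + zipper_z(n - p, w)
    z_perfect = zipper_z(n, w)
    return z_mismatch / z_perfect

n = 25
print([round(relative_affinity(n, p), 4) for p in (1, 7, 13)])
```

The printed ratios fall monotonically as the defect moves from the duplex end (p = 1) toward the center (p = 13), i.e., central mismatches are discriminated most strongly.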

  4. 16S rRNA based microarray analysis of ten periodontal bacteria in patients with different forms of periodontitis.

    Science.gov (United States)

    Topcuoglu, Nursen; Kulekci, Guven

    2015-10-01

DNA microarray analysis is a computer-based technology; a reverse-capture assay targeting 10 periodontal bacteria (ParoCheck) is available for rapid, semi-quantitative determination. The aim of this three-year retrospective study was to present the microarray analysis results for subgingival biofilm samples taken from patients diagnosed with different forms of periodontitis. A total of 84 patients with generalized aggressive periodontitis (GAP, n:29), generalized chronic periodontitis (GCP, n:25), peri-implantitis (PI, n:14), localized aggressive periodontitis (LAP, n:8) and refractory chronic periodontitis (RP, n:8) were consecutively selected from the archives of the Oral Microbiological Diagnostic Laboratory. The subgingival biofilm samples were analyzed by microarray-based identification of the 10 selected species. All the tested species were detected in the samples. The red complex bacteria were the most prevalent, at very high levels in all groups. Fusobacterium nucleatum was detected in all samples at high levels. The green and blue complex bacteria were less prevalent than the red and orange complexes, except that Aggregatibacter actinomycetemcomitans was detected in the entire LAP group. Positive correlations were found among all the red complex bacteria and between red and orange complex bacteria, especially in the GCP and GAP groups. ParoCheck enables monitoring of periodontal pathogens in all forms of periodontal disease and can be an alternative to other reliable microbiologic tests. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Microarray labeling extension values: laboratory signatures for Affymetrix GeneChips

    Science.gov (United States)

    Lee, Yun-Shien; Chen, Chun-Houh; Tsai, Chi-Neu; Tsai, Chia-Lung; Chao, Angel; Wang, Tzu-Hao

    2009-01-01

Interlaboratory comparison of microarray data, even when using the same platform, imposes several challenges on scientists. RNA quality, RNA labeling efficiency, hybridization procedures and data-mining tools can each contribute variation in every laboratory. In Affymetrix GeneChips, about 11–20 different 25-mer oligonucleotides are used to measure the level of each transcript. Here, we report that ‘labeling extension values (LEVs)’, which are correlation coefficients between probe intensities and probe positions, are highly correlated with the gene expression values (GEVs) in eukaryotic Affymetrix microarray data. By analyzing LEVs and GEVs in the publicly available 2414 CEL files of 20 Affymetrix microarray types covering 13 species, we found that correlations between LEVs and GEVs exist only in eukaryotic RNAs, not in prokaryotic ones. Surprisingly, Affymetrix results of the same specimens analyzed in different laboratories could be clearly differentiated by LEVs alone, leading to the identification of ‘laboratory signatures’. In the examined dataset, GSE10797, filtering out high-LEV genes did not compromise the discovery of biological processes constructed by differentially expressed genes. In conclusion, LEVs provide a new filtering parameter for microarray analysis of gene expression, and they may improve the inter- and intralaboratory comparability of Affymetrix GeneChips data. PMID:19295132
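As defined above, an LEV is simply the correlation coefficient between a probe set's intensities and its probe positions along the transcript. A minimal sketch (toy numbers, not GeneChip data; the function name is ours):

```python
import numpy as np

def labeling_extension_value(intensities, positions):
    """Pearson correlation between probe intensities and probe positions
    within one probe set, as the LEV is described in the abstract."""
    intensities = np.asarray(intensities, dtype=float)
    positions = np.asarray(positions, dtype=float)
    # np.corrcoef returns the 2x2 correlation matrix; take the off-diagonal.
    return float(np.corrcoef(positions, intensities)[0, 1])

# Toy probe set: intensity rises toward one end, giving a LEV near +1.
positions = np.arange(11)                 # 11 probes along the transcript
intensities = 100 + 20 * positions + np.array(
    [3, -5, 2, 0, -1, 4, -2, 1, 0, -3, 2])   # small measurement noise
print(round(labeling_extension_value(intensities, positions), 3))
```

A strongly positive or negative LEV flags a position-dependent intensity gradient across the probe set, which is what the paper exploits as a laboratory signature.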

  6. A non-parametric meta-analysis approach for combining independent microarray datasets: application using two microarray datasets pertaining to chronic allograft nephropathy

    Directory of Open Access Journals (Sweden)

    Archer Kellie J

    2008-02-01

Full Text Available Abstract Background With the popularity of DNA microarray technology, multiple groups of researchers have studied the gene expression of similar biological conditions. Different methods have been developed to integrate the results from various microarray studies, though most of them rely on distributional assumptions, such as t-statistic based, mixed-effects model, or Bayesian model methods. However, the sample size for each individual microarray experiment is often small. Therefore, in this paper we present a non-parametric meta-analysis approach for combining data from independent microarray studies, and illustrate its application on two independent Affymetrix GeneChip studies that compared the gene expression of biopsies from kidney transplant recipients with chronic allograft nephropathy (CAN) to those with normally functioning allografts. Results A simulation study comparing the non-parametric meta-analysis approach to a commonly used t-statistic based approach shows that the non-parametric approach has better sensitivity and specificity. In the application to the two CAN studies, we identified 309 distinct genes that were differentially expressed in CAN. By applying Fisher's exact test to identify enriched KEGG pathways among the genes called differentially expressed, we found 6 KEGG pathways to be over-represented. We used the expression measurements of the identified genes as predictors to predict the class labels for 6 additional biopsy samples, and the predictions all conformed to their pathologist-diagnosed class labels. Conclusion We present a new approach for combining data from multiple independent microarray studies. This approach is non-parametric and does not rely on any distributional assumptions. The rationale behind the approach is logically intuitive and can be easily understood by researchers without advanced training in statistics. Some of the identified genes and pathways have been
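The pathway-enrichment step mentioned above (Fisher's exact test on the genes called differentially expressed) is straightforward to reproduce in outline. The counts below are invented for illustration; only the figure of 309 differentially expressed genes comes from the abstract.

```python
from scipy.stats import fisher_exact

# Hypothetical counts: of 309 differentially expressed (DE) genes, 25 fall
# in a given KEGG pathway; of 12000 genes on the array, 400 are in it.
de_in_pathway = 25
de_total = 309
pathway_total = 400
array_total = 12000

# 2x2 contingency table: rows = DE / not DE, cols = in pathway / not.
table = [
    [de_in_pathway, de_total - de_in_pathway],
    [pathway_total - de_in_pathway,
     (array_total - de_total) - (pathway_total - de_in_pathway)],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio {odds_ratio:.2f}, p = {p_value:.2e}")
```

With these toy counts, about 10 DE genes would be expected in the pathway by chance, so observing 25 yields a small one-sided p-value; in practice the p-values across all tested pathways would still need multiple-testing correction.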

  7. Improving Intervention Decisions to Prevent Genocide: Less Muddle, More Structure

    Directory of Open Access Journals (Sweden)

    Robin Gregory

    2018-03-01

    Full Text Available Decisions to intervene in a foreign country to prevent genocide and mass atrocities are among the most challenging and controversial choices facing national leaders. Drawing on techniques from decision analysis, psychology, and negotiation analysis, we propose a structured approach to these difficult choices that can provide policy makers with additional insight, consistency, efficiency, and defensibility. We propose the use of a values-based framework to clarify the key elements of these complex choices and to provide a consistent structure for comparison of the likely benefits, risks, and tradeoffs associated with alternative intervention strategies. Results from a workshop involving Ambassadors and experienced policy makers provide a first test of this new method for clarifying intervention choices. A decision-aiding framework is shown to improve the clarity and relevance of intervention deliberations, laying the groundwork for a more comprehensive and clearer understanding of the threats and opportunities associated with various intervention options.

  8. How Feedback Can Improve Managerial Evaluations of Model-based Marketing Decision Support Systems

    NARCIS (Netherlands)

    U. Kayande (Ujwal); A. de Bruyn (Arnoud); G.L. Lilien (Gary); A. Rangaswamy (Arvind); G.H. van Bruggen (Gerrit)

    2006-01-01

    textabstractMarketing managers often provide much poorer evaluations of model-based marketing decision support systems (MDSSs) than are warranted by the objective performance of those systems. We show that a reason for this discrepant evaluation may be that MDSSs are often not designed to help users

  9. Deep brain stimulation of the subthalamic nucleus improves reward-based decision-learning in Parkinson’s disease

    NARCIS (Netherlands)

    Wouwe, N.C. van; Ridderinkhof, K.R.; Wildenberg, W.P.M. van den; Band, G.P.H.; Abisogun, A.; Elias, W.J.; Frysinger, R.; Wylie, S.A.

    2011-01-01

    Recently, the subthalamic nucleus (STN) has been shown to be critically involved in decision-making, action selection, and motor control. Here we investigate the effect of deep brain stimulation (DBS) of the STN on reward-based decision-learning in patients diagnosed with Parkinson’s disease (PD).

  10. DNA Microarray Technology; TOPICAL

    International Nuclear Information System (INIS)

    WERNER-WASHBURNE, MARGARET; DAVIDSON, GEORGE S.

    2002-01-01

    Collaboration between Sandia National Laboratories and the University of New Mexico Biology Department resulted in the capability to train students in microarray techniques and the interpretation of data from microarray experiments. These studies provide for a better understanding of the role of stationary phase and the gene regulation involved in exit from stationary phase, which may eventually have important clinical implications. Importantly, this research trained numerous students and is the basis for three new Ph.D. projects

  11. Data Integration for Microarrays: Enhanced Inference for Gene Regulatory Networks

    Directory of Open Access Journals (Sweden)

    Alina Sîrbu

    2015-05-01

Full Text Available Microarray technologies have been the basis of numerous important findings regarding gene expression in the last few decades. Studies have generated large amounts of data describing various processes, which, due to the existence of public databases, are widely available for further analysis. Given their lower cost and higher maturity compared to newer sequencing technologies, these data continue to be produced, even though data quality has been the subject of some debate. However, given the large volume of data generated, integration can help overcome some issues related, e.g., to noise or reduced time resolution, while providing additional insight on features not directly addressed by sequencing methods. Here, we present an integration test case based on public Drosophila melanogaster datasets (gene expression, binding site affinities, known interactions). Using an evolutionary computation framework, we show how integration can enhance the ability to recover transcriptional gene regulatory networks from these data, as well as indicating which data types are more important for quantitative and qualitative network inference. Our results show a clear improvement in performance when multiple datasets are integrated, indicating that microarray data will remain a valuable and viable resource for some time to come.

  12. Data Integration for Microarrays: Enhanced Inference for Gene Regulatory Networks.

    Science.gov (United States)

    Sîrbu, Alina; Crane, Martin; Ruskin, Heather J

    2015-05-14

Microarray technologies have been the basis of numerous important findings regarding gene expression in the last few decades. Studies have generated large amounts of data describing various processes, which, due to the existence of public databases, are widely available for further analysis. Given their lower cost and higher maturity compared to newer sequencing technologies, these data continue to be produced, even though data quality has been the subject of some debate. However, given the large volume of data generated, integration can help overcome some issues related, e.g., to noise or reduced time resolution, while providing additional insight on features not directly addressed by sequencing methods. Here, we present an integration test case based on public Drosophila melanogaster datasets (gene expression, binding site affinities, known interactions). Using an evolutionary computation framework, we show how integration can enhance the ability to recover transcriptional gene regulatory networks from these data, as well as indicating which data types are more important for quantitative and qualitative network inference. Our results show a clear improvement in performance when multiple datasets are integrated, indicating that microarray data will remain a valuable and viable resource for some time to come.

  13. Deciding on child maltreatment: A literature review on methods that improve decision-making.

    Science.gov (United States)

    Bartelink, Cora; van Yperen, Tom A; ten Berge, Ingrid J

    2015-11-01

Assessment and decision-making in child maltreatment cases are difficult. Practitioners face many uncertainties and obstacles during their assessment and decision-making process, and research has revealed shortcomings in this process. The purpose of this literature review is to identify and discuss methods to overcome these shortcomings. We conducted a systematic review of the published literature on decision-making using PsycINFO and MEDLINE from 2000 through May 2014. We included reviews and quantitative research studies that investigated methods aimed at improving professional decision-making on child abuse and neglect in child welfare and child protection. Although many researchers have published articles on decision-making, including ideas and theories to improve professional decision-making, empirical research on these improvements is scarce. Available studies have shown promising results. Structured decision-making has created a more child-centred and holistic approach that takes the child's family and environment into account, which has made practitioners work more systematically and improved the analysis of complex situations. However, this approach has not improved inter-rater agreement on the decisions made. Shared decision-making may improve the participation of parents and children and the quality of decisions by taking client treatment preferences into account in addition to scientific evidence and clinical experience. A number of interesting developments appear in the recent research literature; however, child welfare and child protection must also draw inspiration from other areas, e.g., mental health services, because research on decision-making processes in child welfare and child protection is still rare. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Normalization for triple-target microarray experiments

    Directory of Open Access Journals (Sweden)

    Magniette Frederic

    2008-04-01

Full Text Available Abstract Background Most microarray studies use labelling with one or two dyes, which allows the hybridization of one or two samples on the same slide. In such experiments, the most frequently used dyes are Cy3 and Cy5. Recent improvements in the technology (dye-labelling, scanners and image analysis) allow hybridization of up to four samples simultaneously; the two additional dyes are Alexa488 and Alexa494. The triple-target or four-target technology is very promising, since it allows more flexibility in the design of experiments, an increase in statistical power when comparing gene expression induced by different conditions, and a scaled-down number of slides. However, few methods have been proposed for the statistical analysis of such data. Moreover, the lowess correction of the global dye effect is available only for two-color experiments, and even if its application can be derived, it does not allow simultaneous correction of the raw data. Results We propose a two-step normalization procedure for triple-target experiments. First, dye bleeding is evaluated and corrected if necessary. Then the signal in each channel is normalized using a generalized lowess procedure to correct a global dye bias. The normalization procedure is validated using triple-self experiments and by comparing the results of triple-target and two-color experiments. Although the focus is on triple-target microarrays, the proposed method can be used to normalize p differently labelled targets co-hybridized on the same array, for any value of p greater than 2. Conclusion The proposed normalization procedure is effective: the technical biases are reduced, the number of false positives is under control in the analysis of differentially expressed genes, and the triple-target experiments are more powerful than the corresponding two-color experiments. There is room to improve microarray experiments further by simultaneously hybridizing more than two samples.
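The lowess idea above generalizes the classic two-color MA-plot correction: fit the intensity-dependent trend of the log-ratio M against the average log-intensity A, then subtract it. The sketch below substitutes a running median for a true lowess fit, on simulated two-color data with a deliberate dye bias; this is a simplification, and the paper's generalized procedure for p > 2 channels is not reproduced here.

```python
import numpy as np

def ma_normalize(red, green, window=101):
    """Correct intensity-dependent dye bias on the MA plot. A running
    median over A-sorted spots stands in for the lowess trend fit."""
    m = np.log2(red) - np.log2(green)           # log-ratio (bias lives here)
    a = 0.5 * (np.log2(red) + np.log2(green))   # average log-intensity
    order = np.argsort(a)
    trend = np.empty_like(m)
    half = window // 2
    for rank, idx in enumerate(order):
        lo, hi = max(0, rank - half), min(len(m), rank + half + 1)
        trend[idx] = np.median(m[order[lo:hi]])
    return m - trend                            # normalized log-ratios

# Simulate 2000 spots with an intensity-dependent dye bias.
rng = np.random.default_rng(0)
a_true = rng.uniform(6, 14, 2000)
bias = 0.4 * (a_true - 10)                      # dye bias grows with A
m_true = rng.normal(0, 0.1, 2000) + bias
green = 2 ** (a_true - m_true / 2)
red = 2 ** (a_true + m_true / 2)
m_norm = ma_normalize(red, green)
print(round(float(np.median(np.abs(m_norm))), 3))
```

After correction the log-ratios scatter around zero regardless of intensity, which is what a "triple-self" validation experiment would check channel by channel.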

  15. Transcription analysis of apple fruit development using cDNA microarrays

    NARCIS (Netherlands)

    Soglio, V.; Costa, F.; Molthoff, J.W.; Weemen-Hendriks, M.; Schouten, H.J.; Gianfranceschi, L.

    2009-01-01

    The knowledge of the molecular mechanisms underlying fruit quality traits is fundamental to devise efficient marker-assisted selection strategies and to improve apple breeding. In this study, cDNA microarray technology was used to identify genes whose expression changes during fruit development and

  16. Application of a New Genetic Deafness Microarray for Detecting Mutations in the Deaf in China.

    Directory of Open Access Journals (Sweden)

    Hong Wu

Full Text Available The aim of this study was to evaluate the GoldenGate microarray as a diagnostic tool and to elucidate the contribution of the genes on this array to the development of both nonsyndromic and syndromic sensorineural hearing loss in China. We developed a microarray to detect 240 mutations underlying syndromic and nonsyndromic sensorineural hearing loss. The microarray was then used for analysis of 382 patients with nonsyndromic sensorineural hearing loss (including 15 patients with enlarged vestibular aqueduct syndrome), 21 patients with Waardenburg syndrome, and 60 unrelated controls. Subsequently, we analyzed the sensitivity, specificity, and reproducibility of this new approach after Sanger sequencing-based verification, and also determined the contribution of the genes on this array to the development of distinct hearing disorders. The sensitivity and specificity of the microarray chip were 98.73% and 98.34%, respectively. Genetic defects were identified in 61.26% of the patients with nonsyndromic sensorineural hearing loss, and 9 causative genes were identified. The molecular etiology was confirmed in 19.05% and 46.67% of the patients with Waardenburg syndrome and enlarged vestibular aqueduct syndrome, respectively. Our new mutation-based microarray constitutes an accurate and comprehensive genetic tool for the detection of sensorineural hearing loss. This microarray-based detection method could serve as a first-pass screen (before next-generation sequencing) for deafness-causing mutations in China.

  17. A Customized DNA Microarray for Microbial Source Tracking ...

    Science.gov (United States)

It is estimated that more than 160,000 miles of rivers and streams in the United States are impaired due to the presence of waterborne pathogens. These pathogens typically originate from human and other animal fecal pollution sources; therefore, a rapid microbial source tracking (MST) method is needed to facilitate water quality assessment and impaired water remediation. We report a novel qualitative DNA microarray technology consisting of 453 probes for the detection of general fecal and host-associated bacteria, viruses, antibiotic resistance, and other environmentally relevant genetic indicators. A novel data normalization and reduction approach is also presented to help alleviate the false positives often associated with high-density microarray applications. To evaluate the performance of the approach, DNA and cDNA were isolated from swine, cattle, duck, goose and gull fecal reference samples, as well as soiled poultry litter and raw municipal sewage. Based on nonmetric multidimensional scaling analysis of results, findings suggest that the novel microarray approach may be useful for pathogen detection and identification of fecal contamination in recreational waters. The ability to simultaneously detect a large collection of environmentally important genetic indicators in a single test has the potential to provide water quality managers with a wide range of information in a short period of time. Future research is warranted to measure microarray performance i

  18. BIOPHYSICAL PROPERTIES OF NUCLEIC ACIDS AT SURFACES RELEVANT TO MICROARRAY PERFORMANCE

    OpenAIRE

    Rao, Archana N.; Grainger, David W.

    2014-01-01

Both clinical and analytical metrics produced by microarray-based assay technology have recognized problems in reproducibility, reliability and analytical sensitivity. These issues are often attributed to poor understanding and control of nucleic acid behaviors and properties at solid-liquid interfaces. Nucleic acid hybridization, central to DNA and RNA microarray formats, depends on the properties and behaviors of single-stranded (ss) nucleic acids (e.g., probe oligomeric DNA) bound to surface...

  19. Microarray-based analysis of plasma cirDNA epigenetic modification profiling in xenografted mice exposed to intermittent hypoxia

    Directory of Open Access Journals (Sweden)

    Rene Cortese

    2015-09-01

    Full Text Available Intermittent hypoxia (IH) during sleep is one of the major abnormalities occurring in patients suffering from obstructive sleep apnea (OSA), a highly prevalent disorder affecting 6–15% of the general population, particularly among obese people. IH has been proposed as a major determinant of oncogenetically-related processes such as tumor growth, invasion and metastasis. During the growth and expansion of tumors, fragmented DNA is released into the bloodstream and enters the circulation. Circulating tumor DNA (cirDNA) conserves the genetic and epigenetic profiles of the tumor of origin and can be isolated from the plasma fraction. Here we report a microarray-based epigenetic profiling of cirDNA isolated from blood samples of mice engrafted with TC1 epithelial lung cancer cells and controls, which were exposed during sleep either to IH (XenoIH group, n = 3) or to control conditions, i.e., room air (RA; XenoRA group, n = 3). To prepare the targets for microarray hybridization, we applied a previously developed method that enriches the modified fraction of the cirDNA without amplification of genomic DNA. Regions of differential cirDNA modification between the two groups were identified by hybridizing the enriched fractions for each sample to Affymetrix GeneChip Human Promoter Arrays 1.0R. Microarray raw and processed data were deposited in NCBI's Gene Expression Omnibus (GEO) database (accession number: GSE61070).

  20. Quantitative miRNA expression analysis: comparing microarrays with next-generation sequencing

    DEFF Research Database (Denmark)

    Willenbrock, Hanni; Salomon, Jesper; Søkilde, Rolf

    2009-01-01

    Recently, next-generation sequencing has been introduced as a promising new platform for assessing the copy number of transcripts, while the existing microarray technology is considered less reliable for absolute, quantitative expression measurements. Nonetheless, so far, results from the two technologies have only been compared based on biological data, leading to the conclusion that, although they are somewhat correlated, expression values differ significantly. Here, we use synthetic RNA samples, resembling human microRNA samples, to find that microarray expression measures actually correlate better with sample RNA content than expression measures obtained from sequencing data. In addition, microarrays appear highly sensitive and perform equivalently to next-generation sequencing in terms of reproducibility and relative ratio quantification.
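A platform comparison of this kind can be sketched with rank correlation against known RNA content. The numbers below are invented stand-ins for the synthetic-sample measurements, not data from the study.

```python
from scipy.stats import spearmanr

# Hypothetical spike-in experiment: known copies of synthetic RNA versus
# the measurements each platform reports for those same transcripts.
known_rna_content = [10, 50, 100, 500, 1000, 5000]    # copies (invented)
microarray_signal = [2.1, 4.8, 6.2, 8.9, 10.1, 12.4]  # log2 intensities
sequencing_counts = [3, 40, 60, 700, 650, 4800]       # read counts

# Spearman rank correlation asks: does the measure rise with RNA content?
rho_array, _ = spearmanr(known_rna_content, microarray_signal)
rho_seq, _ = spearmanr(known_rna_content, sequencing_counts)
print(round(rho_array, 2), round(rho_seq, 2))  # → 1.0 0.94
```

With these invented numbers the microarray signal tracks content perfectly in rank order, while one swapped pair of read counts lowers the sequencing correlation, mirroring the kind of comparison the abstract describes.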

  1. AMDA: an R package for the automated microarray data analysis

    Directory of Open Access Journals (Sweden)

    Foti Maria

    2006-07-01

    Full Text Available Abstract Background Microarrays are routinely used to assess mRNA transcript levels on a genome-wide scale. Large amounts of microarray data are now available in several databases, and new experiments are constantly being performed. In spite of this fact, few and limited tools exist for quickly and easily analyzing the results. Microarray analysis can be challenging for researchers without the necessary training, and it can be time-consuming for service providers with many users. Results To address these problems we have developed the automated microarray data analysis (AMDA) software, which provides scientists with an easy and integrated system for the analysis of Affymetrix microarray experiments. AMDA is free and is available as an R package. It is based on the Bioconductor project, which provides a number of powerful bioinformatics and microarray analysis tools. This automated pipeline integrates different functions available in the R and Bioconductor projects with newly developed functions. AMDA covers all of the steps of a full data analysis, including image analysis, quality control, normalization, selection of differentially expressed genes, clustering, correspondence analysis and functional evaluation. Finally, a LaTeX document is dynamically generated depending on the performed analysis steps. The generated report contains comments and analysis results as well as references to several files for deeper investigation. Conclusion AMDA is freely available as an R package under the GPL license. The package as well as an example analysis report can be downloaded in the Services/Bioinformatics section of the Genopolis website: http://www.genopolis.it/
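AMDA itself is an R/Bioconductor package; purely to illustrate two of the pipeline stages it automates (normalization and selection of differentially expressed genes), here is a minimal Python sketch. The expression matrix, the median-centering normalization, and the significance threshold are all invented for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

# Toy matrix: 6 genes x 6 arrays (3 control, then 3 treated). Gene 0 is
# deliberately up-regulated in the treated arrays; all values are invented.
expr = np.array([
    [8.0,  8.2,  7.9, 12.1, 12.0, 12.2],  # differentially expressed gene
    [5.0,  5.1,  4.9,  5.0,  5.2,  4.8],
    [9.0,  9.1,  8.8,  9.2,  9.0,  8.9],
    [6.5,  6.4,  6.6,  6.5,  6.3,  6.6],
    [7.2,  7.0,  7.3,  7.1,  7.2,  7.0],
    [10.0, 10.2, 9.9, 10.1, 10.0, 10.3],
])

# Normalization step: center each array on its median expression value,
# a simple stand-in for the normalization stage of such pipelines.
norm = expr - np.median(expr, axis=0)

# Selection step: per-gene two-sample t-test, control vs treated arrays.
t, p = ttest_ind(norm[:, :3], norm[:, 3:], axis=1)
de_genes = np.where(p < 0.001)[0]
print(de_genes)  # → [0]
```

Only the deliberately shifted gene passes the threshold; a real pipeline would add quality control, multiple-testing correction, clustering, and reporting on top of these two steps.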

  2. A comparison of web-based versus print-based decision aids for prostate cancer screening: participants' evaluation and utilization.

    Science.gov (United States)

    Tomko, Catherine; Davis, Kimberly M; Luta, George; Krist, Alexander H; Woolf, Steven H; Taylor, Kathryn L

    2015-01-01

    Patient decision aids facilitate informed decision making for medical tests and procedures that have uncertain benefits. To describe participants' evaluation and utilization of print-based and web-based prostate cancer screening decision aids that were found to improve decisional outcomes in a prior randomized controlled trial. Men completed brief telephone interviews at baseline, one month, and 13 months post-randomization. Participants were primary care patients, 45-70 years old, who received the print-based (N = 628) or web-based decision aid (N = 625) and completed the follow-up assessments. We assessed men's baseline preference for web-based or print-based materials, time spent using the decision aids, comprehension of the overall message, and ratings of the content. Decision aid use was self-reported by 64.3 % (web) and 81.8 % (print) of participants. Significant predictors of decision aid use were race (white vs. non-white, OR = 2.43, 95 % CI: 1.77, 3.35), higher education (OR = 1.68, 95 % CI: 1.06, 2.70) and trial arm (print vs. web, OR = 2.78, 95 % CI: 2.03, 3.83). Multivariable analyses indicated that web-arm participants were more likely to use the website when they preferred web-based materials (OR: 1.91, CI: 1.17, 3.12), whereas use of the print materials was not significantly impacted by a preference for print-based materials (OR: 0.69, CI: 0.38, 1.25). Comprehension of the decision aid message (i.e., screening is an individual decision) did not significantly differ between arms in adjusted analyses (print: 61.9 % and web: 68.2 %, p = 0.42). Decision aid use was independently influenced by race, education, and the decision aid medium, findings consistent with the 'digital divide.' These results suggest that when it is not possible to provide this age cohort with their preferred decision aid medium, print materials will be more highly used than web-based materials. 
Although there are many advantages to web-based decision aids, providing an option for

  3. Improving clinical decision support using data mining techniques

    Science.gov (United States)

    Burn-Thornton, Kath E.; Thorpe, Simon I.

    1999-02-01

    Physicians, in their ever-demanding jobs, are looking to decision support systems for aid in clinical diagnosis. However, clinical decision support systems need to be of sufficiently high accuracy that they help, rather than hinder, the physician in his/her diagnosis. Decision support systems that determine patient state with accuracies greater than 80 percent are generally perceived to be sufficiently accurate to fulfill the role of helping the physician. We have previously shown that data mining techniques have the potential to provide the underpinning technology for clinical decision support systems. In this paper, an extension of the work in reference 2, we describe how changes in data mining methodologies for the analysis of 12-lead ECG data improve the accuracy with which data mining algorithms determine which patients are suffering from heart disease. We show that the accuracy of patient state prediction, for all the algorithms we investigated, can be increased by up to 6 percent by using a combination of appropriate test-training ratios and 5-fold cross-validation. Cross-validation greater than 5-fold appears to reduce the improvement in algorithm classification accuracy gained by this validation method. The accuracy of 84 percent in patient state prediction, obtained using the algorithm OCI, suggests that this algorithm will be capable of providing the required accuracy for clinical decision support systems.
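The evaluation protocol described in this record can be sketched as follows. This is a hedged illustration only: the features are synthetic stand-ins for ECG-derived measurements, and a decision tree substitutes for the paper's algorithms (OCI is not available in scikit-learn).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a 12-lead ECG feature table (one row per patient,
# binary patient-state label). Not real clinical data.
X, y = make_classification(n_samples=300, n_features=12, random_state=0)

clf = DecisionTreeClassifier(random_state=0)

# 5-fold cross-validation: each patient is held out exactly once, giving a
# less optimistic accuracy estimate than a single train/test split. The
# abstract reports diminishing returns beyond 5 folds.
scores = cross_val_score(clf, X, y, cv=5)
print(len(scores))  # 5 per-fold accuracy values
```

Averaging the five fold accuracies yields the kind of patient-state prediction accuracy figure (e.g. the 84 percent reported for OCI) that such studies quote.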

  4. Enhancing emotion-based learning in decision-making under uncertainty.

    Science.gov (United States)

    Alarcón, David; Amián, Josué G; Sánchez-Medina, José A

    2015-01-01

    The Iowa Gambling Task (IGT) is widely used to study decision-making differences between several clinical and healthy populations. Unlike the healthy participants, clinical participants have difficulty choosing between advantageous options, which yield long-term benefits, and disadvantageous options, which give high immediate rewards but lead to negative profits. However, recent studies have found that healthy participants avoid the options with a higher frequency of losses regardless of whether or not they are profitable in the long run. The aim of this study was to control for the confounding effect of the frequency of losses between options to improve the performance of healthy participants on the IGT. Eighty healthy participants were randomly assigned to the original IGT or a modified version of the IGT that diminished the gap in the frequency of losses between options. The participants who used the modified IGT version learned to make better decisions based on long-term profit, as indicated by an earlier ability to discriminate good from bad options, and took less time to make their choices. This research represents an advance in the study of decision making under uncertainty by showing that emotion-based learning is improved by controlling for the loss-frequency bias effect.

  5. Advances in the application of decision theory to test-based decision making

    NARCIS (Netherlands)

    van der Linden, Willem J.

    This paper reviews recent research in the Netherlands on the application of decision theory to test-based decision making about personnel selection and student placement. The review is based on an earlier model proposed for the classification of decision problems, and emphasizes an empirical

  6. Tactical decision games - developing scenario-based training for decision-making in distributed teams

    OpenAIRE

    Lauche, K.; Crichton, M.; Bayerl, P.S.

    2009-01-01

    Team training should reflect the increasing complexity of decision-making environments. Guidelines for scenario-based training were adopted for a distributed setting and tested in a pilot training session with a distributed team in the offshore oil industry. Participants valued the scenario as challenging and useful, but also highlighted problems of distributed communication. The findings were used to improve the training as well as current use of the technology in the organisation. Research ...

  7. Controlling Chronic Diseases Through Evidence-Based Decision Making: A Group-Randomized Trial.

    Science.gov (United States)

    Brownson, Ross C; Allen, Peg; Jacob, Rebekah R; deRuyter, Anna; Lakshman, Meenakshi; Reis, Rodrigo S; Yan, Yan

    2017-11-30

    Although practitioners in state health departments are ideally positioned to implement evidence-based interventions, few studies have examined how to build their capacity to do so. The objective of this study was to explore how to increase the use of evidence-based decision-making processes at both the individual and organization levels. We conducted a 2-arm, group-randomized trial with baseline data collection and follow-up at 18 to 24 months. Twelve state health departments were paired and randomly assigned to intervention or control condition. In the 6 intervention states, a multiday training on evidence-based decision making was conducted from March 2014 through March 2015 along with a set of supplemental capacity-building activities. Individual-level outcomes were evidence-based decision making skills of public health practitioners; organization-level outcomes were access to research evidence and participatory decision making. Mixed analysis of covariance models were used to evaluate the intervention effect, accounting for the cluster-randomized trial design. Analysis was performed from March through May 2017. Participation 18 to 24 months after initial training was 73.5%. In mixed models adjusted for participant and state characteristics, the intervention group improved significantly in the overall skill gap (P = .01) and in 6 skill areas. Among the 4 organizational variables, only access to evidence and skilled staff showed an intervention effect (P = .04). Tailored and active strategies are needed to build capacity at the individual and organization levels for evidence-based decision making. Our study suggests several dissemination interventions for consideration by leaders seeking to improve public health practice.

  8. Preoperative overnight parenteral nutrition (TPN) improves skeletal muscle protein metabolism indicated by microarray algorithm analyses in a randomized trial.

    Science.gov (United States)

    Iresjö, Britt-Marie; Engström, Cecilia; Lundholm, Kent

    2016-06-01

    Loss of muscle mass is associated with increased risk of morbidity and mortality in hospitalized patients. Uncertainties about the efficiency of treatment by short-term artificial nutrition remain, specifically regarding improvement of protein balance in skeletal muscles. In this study, algorithmic microarray analysis was applied to map cellular changes related to muscle protein metabolism in human skeletal muscle tissue during provision of overnight preoperative total parenteral nutrition (TPN). Twenty-two patients (11/group) scheduled for upper GI surgery due to malignant or benign disease received a continuous peripheral all-in-one TPN infusion (30 kcal/kg/day, 0.16 gN/kg/day) or saline infusion for 12 h prior to operation. Biopsies from the rectus abdominis muscle were taken at the start of operation for isolation of muscle RNA. RNA expression microarray analyses were performed with Agilent Sureprint G3, 8 × 60K arrays using one-color labeling. 447 mRNAs were differentially expressed between study and control patients (P nutrition; particularly anabolic signaling S6K1 (P parenteral nutrition is effective to promote muscle protein metabolism. © 2016 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.

  9. The laboratory-clinician team: a professional call to action to improve communication and collaboration for optimal patient care in chromosomal microarray testing.

    Science.gov (United States)

    Wain, Karen E; Riggs, Erin; Hanson, Karen; Savage, Melissa; Riethmaier, Darlene; Muirhead, Andrea; Mitchell, Elyse; Packard, Bethanny Smith; Faucett, W Andrew

    2012-10-01

    The International Standards for Cytogenomic Arrays (ISCA) Consortium is a worldwide collaborative effort dedicated to optimizing patient care by improving the quality of chromosomal microarray testing. The primary effort of the ISCA Consortium has been the development of a database of copy number variants (CNVs) identified during the course of clinical microarray testing. This database is a powerful resource for clinicians, laboratories, and researchers, and can be utilized for a variety of applications, such as facilitating standardized interpretations of certain CNVs across laboratories or providing phenotypic information for counseling purposes when published data is sparse. A recognized limitation to the clinical utility of this database, however, is the quality of clinical information available for each patient. Clinical genetic counselors are uniquely suited to facilitate the communication of this information to the laboratory by virtue of their existing clinical responsibilities, case management skills, and appreciation of the evolving nature of scientific knowledge. We intend to highlight the critical role that genetic counselors play in ensuring optimal patient care through contributing to the clinical utility of the ISCA Consortium's database, as well as the quality of individual patient microarray reports provided by contributing laboratories. Current tools, paper and electronic forms, created to maximize this collaboration are shared. In addition to making a professional commitment to providing complete clinical information, genetic counselors are invited to become ISCA members and to become involved in the discussions and initiatives within the Consortium.

  10. Uses of Dendrimers for DNA Microarrays

    Directory of Open Access Journals (Sweden)

    Jean-Pierre Majoral

    2006-08-01

    Full Text Available Biosensors such as DNA microarrays and microchips are gaining an increasing importance in medicinal, forensic, and environmental analyses. Such devices are based on the detection of supramolecular interactions called hybridizations that occur between complementary oligonucleotides, one linked to a solid surface (the probe), and the other one to be analyzed (the target). This paper focuses on the improvements that hyperbranched and perfectly defined nanomolecules called dendrimers can provide to this methodology. Two main uses of dendrimers for such purpose have been described up to now; either the dendrimer is used as a linker between the solid surface and the probe oligonucleotide, or the dendrimer is used as a multilabeled entity linked to the target oligonucleotide. In the first case the dendrimer generally induces a higher loading of probes and an easier hybridization, by moving the probe away from the solid phase. In the second case the high number of localized labels (generally fluorescent) induces an increased sensitivity, allowing the detection of small quantities of biological entities.

  11. Decision support tools in conservation: a workshop to improve user-centred design

    Directory of Open Access Journals (Sweden)

    David Rose

    2017-09-01

    Full Text Available A workshop held at the University of Cambridge in May 2017 brought developers, researchers, knowledge brokers, and users together to discuss user-centred design of decision support tools. Decision support tools are designed to take users through logical decision steps towards an evidence-informed final decision. Although they may exist in different forms, including on paper, decision support tools are generally considered to be computer-based (online, software, or app). Studies have illustrated the potential value of decision support tools for conservation, and there are several papers describing the design of individual tools. Rather less attention, however, has been paid to the desirable characteristics for use, and even less to whether tools are actually being used in practice. This is concerning because if a tool is not used by its intended end user, for example a policy-maker or practitioner, then the resources invested in its design will have been wasted. An analysis of papers on tool use in conservation shows a lack of social science research on improving design, and relatively few examples where users have been incorporated into the design process. Evidence from other disciplines, particularly human-computer interaction research, illustrates that involving users throughout the design of decision support tools increases the relevance, usability, and impact of systems. User-centred design of tools is, however, seldom mentioned in the conservation literature. The workshop started the necessary process of bringing together developers and users to share knowledge about how to conduct good user-centred design of decision support tools. This will help to ensure that tools are usable and make an impact in conservation policy and practice.

  12. Microarray analysis in the archaeon Halobacterium salinarum strain R1.

    Directory of Open Access Journals (Sweden)

    Jens Twellmeyer

    Full Text Available BACKGROUND: Phototrophy of the extremely halophilic archaeon Halobacterium salinarum has been explored for decades. The research was mainly focused on the expression of bacteriorhodopsin and its functional properties. In contrast, less is known about genome-wide transcriptional changes and their impact on the physiological adaptation to phototrophy. The tool of choice to record transcriptional profiles is the DNA microarray technique. However, the technique is still rarely used for transcriptome analysis in archaea. METHODOLOGY/PRINCIPAL FINDINGS: We developed a whole-genome DNA microarray based on our sequence data of the Hbt. salinarum strain R1 genome. The potential of our tool is exemplified by the comparison of cells growing under aerobic and phototrophic conditions, respectively. We processed the raw fluorescence data by several stringent filtering steps and a subsequent MAANOVA analysis. The study revealed many transcriptional differences between the two cell states. We found that the transcriptional changes were relatively weak, though significant. Finally, the DNA microarray data were independently verified by real-time PCR analysis. CONCLUSION/SIGNIFICANCE: This is the first DNA microarray analysis of Hbt. salinarum cells actually grown under phototrophic conditions. By comparing the transcriptomics data with current knowledge we could show that our DNA microarray tool is well suited for transcriptome analysis in the extremely halophilic archaeon Hbt. salinarum. The reliability of our tool is based on both the high-quality array of DNA probes and the stringent data handling including MAANOVA analysis. Among the regulated genes more than 50% had unknown functions. This underlines the fact that haloarchaeal phototrophy is still far from being completely understood. Hence, the data recorded in this study will be subject to future systems biology analysis.

  13. Mining meiosis and gametogenesis with DNA microarrays.

    Science.gov (United States)

    Schlecht, Ulrich; Primig, Michael

    2003-04-01

    Gametogenesis is a key developmental process that involves complex transcriptional regulation of numerous genes including many that are conserved between unicellular eukaryotes and mammals. Recent expression-profiling experiments using microarrays have provided insight into the co-ordinated transcription of several hundred genes during mitotic growth and meiotic development in budding and fission yeast. Furthermore, microarray-based studies have identified numerous loci that are regulated during the cell cycle or expressed in a germ-cell specific manner in eukaryotic model systems like Caenorhabditis elegans, Mus musculus as well as Homo sapiens. The unprecedented amount of information produced by post-genome biology has spawned novel approaches to organizing biological knowledge using currently available information technology. This review outlines experiments that contribute to an emerging comprehensive picture of the molecular machinery governing sexual reproduction in eukaryotes.

  14. Microarray-based genomic surveying of gene polymorphisms in Chlamydia trachomatis

    OpenAIRE

    Brunelle, Brian W; Nicholson, Tracy L; Stephens, Richard S

    2004-01-01

    By comparing two fully sequenced genomes of Chlamydia trachomatis using competitive hybridization on DNA microarrays, a logarithmic correlation was demonstrated between the signal ratio of the arrays and the 75-99% range of nucleotide identities of the genes. Variable genes within 14 uncharacterized strains of C. trachomatis were identified by array analysis and verified by DNA sequencing. These genes may be crucial for understanding chlamydial virulence and pathogenesis.

  15. Maintenance planning support method for nuclear power plants based on collective decision making

    International Nuclear Information System (INIS)

    Shimizu, Shunichi; Sakurai, Shoji; Takaoka, Kazushi; Kanemoto, Shigeru; Fukutomi, Shigeki

    1992-01-01

    Inspection and maintenance planning in nuclear power plants is conducted by decision making based on experts' collective consensus. However, since a great deal of time and effort is required to reach a consensus among expert judgments, the establishment of effective decision making methods is necessary. Therefore, the authors developed a method for supporting collective decision making based on a combination of three decision making methods: the Characteristic Diagram method, the Interpretative Structural Modeling method, and the Analytic Hierarchy Process method. The proposed method enables us to determine the evaluation criteria systematically for collective decision making, and also allows collective decisions to be extracted using simplified questionnaires. The proposed method can effectively support reaching a group consensus through the evaluation of collective decision structural models and their characteristics. In this paper, the effectiveness of the proposed method is demonstrated through its application to the decision making problem of whether or not improved ultrasonic testing equipment should be adopted at nuclear power plants. (author)

  16. Evaluation of toxicity of the mycotoxin citrinin using yeast ORF DNA microarray and Oligo DNA microarray

    Directory of Open Access Journals (Sweden)

    Nobumasa Hitoshi

    2007-04-01

    Full Text Available Abstract Background Mycotoxins are fungal secondary metabolites commonly present in feed and food, and are widely regarded as hazardous contaminants. Citrinin, one of the best known mycotoxins, was first isolated from Penicillium citrinum; it is produced by more than 10 kinds of fungi, and is possibly spread all over the world. However, information on the action mechanism of the toxin is limited. Thus, we investigated the citrinin-induced genomic response to evaluate its toxicity. Results Citrinin inhibited growth of yeast cells at concentrations higher than 100 ppm. We monitored the citrinin-induced mRNA expression profiles in yeast using the ORF DNA microarray and Oligo DNA microarray, and the expression profiles were compared with those of other stress-inducing agents. Results obtained from both microarray experiments clustered together, but were different from those of the mycotoxin patulin. The oxidative stress response genes – AADs, FLR1, OYE3, GRE2, and MET17 – were significantly induced. In the functional category, expression of genes involved in "metabolism", "cell rescue, defense and virulence", and "energy" was significantly activated. In the category of "metabolism", genes involved in the glutathione synthesis pathway were activated, and in the category of "cell rescue, defense and virulence", the ABC transporter genes were induced. To alleviate the induced stress, these cells might pump out the citrinin after modification with glutathione. However, citrinin treatment did not induce genes involved in DNA repair. Conclusion Results from both microarray studies suggest that citrinin treatment induced oxidative stress in yeast cells. The genotoxicity was less severe than that of patulin, suggesting that citrinin is less toxic than patulin. The reproducibility of the expression profiles was much better with the Oligo DNA microarray. However, the Oligo DNA microarray did not completely overcome cross

  17. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development

    Directory of Open Access Journals (Sweden)

    Jennifer A Hipp

    2012-01-01

    Full Text Available Background: Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. Recently developed digital tools, digital core (dCORE) and image microarray maker (iMAM), enable the capture of uniformly sized and resolution-matched images, representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Methods: Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis for carrying out a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Results: Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. 
SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic

  18. Reverse phase protein microarray technology in traumatic brain injury.

    Science.gov (United States)

    Gyorgy, Andrea B; Walker, John; Wingo, Dan; Eidelman, Ofer; Pollard, Harvey B; Molnar, Andras; Agoston, Denes V

    2010-09-30

    Antibody-based, high throughput proteomics technology represents an exciting new approach to understanding the pathobiologies of complex disorders such as cancer, stroke and traumatic brain injury. Reverse phase protein microarray (RPPA) can complement the classical methods based on mass spectrometry as a high throughput validation and quantification method. RPPA technology can address problematic issues, such as sample complexity, sensitivity, quantification, reproducibility and throughput, which are currently associated with mass spectrometry-based approaches. However, there are technical challenges, predominantly associated with the selection and use of antibodies, the preparation and representation of samples, and the analysis and quantification of primary RPPA data. Here we present ways to identify and overcome some of the current issues associated with RPPA. We believe that with stringent quality controls and improved bioinformatics analysis and interpretation of primary RPPA data, this method will contribute significantly to generating a new level of understanding of complex disorders at the level of systems biology. Published by Elsevier B.V.

  19. Comparison of gene coverage of mouse oligonucleotide microarray platforms

    Directory of Open Access Journals (Sweden)

    Medrano Juan F

    2006-03-01

    reveals that the commercial microarray Sentrix, which is based on the MEEBO public oligoset, showed the best mouse genome coverage currently available. We also suggest the creation of guidelines to standardize the minimum set of information that vendors should provide to allow researchers to accurately evaluate the advantages and disadvantages of using a given platform.

  20. Workflows for microarray data processing in the Kepler environment

    Science.gov (United States)

    2012-01-01

    R/BioConductor scripting approaches to pipeline design. Finally, we suggest that microarray data processing task workflows may provide a basis for future example-based comparison of different workflow systems. Conclusions We provide a set of tools and complete workflows for microarray data analysis in the Kepler environment, which has the advantages of offering graphical, clear display of conceptual steps and parameters and the ability to easily integrate other resources such as remote data and web services. PMID:22594911

  1. Workflows for microarray data processing in the Kepler environment

    Directory of Open Access Journals (Sweden)

    Stropp Thomas

    2012-05-01

    traditional shell scripting or R/BioConductor scripting approaches to pipeline design. Finally, we suggest that microarray data processing task workflows may provide a basis for future example-based comparison of different workflow systems. Conclusions We provide a set of tools and complete workflows for microarray data analysis in the Kepler environment, which has the advantages of offering graphical, clear display of conceptual steps and parameters and the ability to easily integrate other resources such as remote data and web services.

  2. Workflows for microarray data processing in the Kepler environment.

    Science.gov (United States)

    Stropp, Thomas; McPhillips, Timothy; Ludäscher, Bertram; Bieda, Mark

    2012-05-17

    /BioConductor scripting approaches to pipeline design. Finally, we suggest that microarray data processing task workflows may provide a basis for future example-based comparison of different workflow systems. We provide a set of tools and complete workflows for microarray data analysis in the Kepler environment, which has the advantages of offering graphical, clear display of conceptual steps and parameters and the ability to easily integrate other resources such as remote data and web services.

  3. Advanced microarray technologies for clinical diagnostics

    NARCIS (Netherlands)

    Pierik, Anke

    2011-01-01

    DNA microarrays become increasingly important in the field of clinical diagnostics. These microarrays, also called DNA chips, are small solid substrates, typically having a maximum surface area of a few cm2, onto which many spots are arrayed in a pre-determined pattern. Each of these spots contains

  4. BioCichlid: central dogma-based 3D visualization system of time-course microarray data on a hierarchical biological network.

    Science.gov (United States)

    Ishiwata, Ryosuke R; Morioka, Masaki S; Ogishima, Soichi; Tanaka, Hiroshi

    2009-02-15

    BioCichlid is a 3D visualization system of time-course microarray data on molecular networks, aiming at interpretation of gene expression data by transcriptional relationships based on the central dogma with physical and genetic interactions. BioCichlid visualizes both physical (protein) and genetic (regulatory) network layers, and provides animation of time-course gene expression data on the genetic network layer. Transcriptional regulations are represented to bridge the physical network (transcription factors) and genetic network (regulated genes) layers, thus integrating promoter analysis into the pathway mapping. BioCichlid enhances the interpretation of microarray data and allows for revealing the underlying mechanisms causing differential gene expressions. BioCichlid is freely available and can be accessed at http://newton.tmd.ac.jp/. Source codes for both biocichlid server and client are also available.

  5. Plant-pathogen interactions: what microarray tells about it?

    Science.gov (United States)

    Lodha, T D; Basak, J

    2012-01-01

    Plant defence responses are mediated by elementary regulatory proteins that affect the expression of thousands of genes. Over the last decade, microarray technology has played a key role in deciphering the underlying networks of gene regulation in plants that lead to a wide variety of defence responses. Microarray is an important tool to quantify and profile the expression of thousands of genes simultaneously, with two main aims: (1) gene discovery and (2) global expression profiling. Several microarray technologies are currently in use; most include a glass slide platform with spotted cDNA or oligonucleotides. To date, microarray technology has been used to identify regulatory and end-point defence genes, and to understand the signal transduction processes underlying disease resistance and its intimate links to other physiological pathways. Microarray technology can be used for in-depth, simultaneous profiling of host/pathogen genes as the disease progresses from infection to resistance/susceptibility at different developmental stages of the host, which can be done in different environments, for a clearer understanding of the processes involved. A thorough knowledge of plant disease resistance, drawing on a successful combination of microarray and other high-throughput techniques as well as biochemical, genetic, and cell biological experiments, is needed for practical application to secure and stabilize the yield of many crop plants. This review starts with a brief introduction to microarray technology, followed by the basics of plant-pathogen interaction, the use of DNA microarrays over the last decade to unravel the mysteries of plant-pathogen interaction, and ends with the future prospects of this technology.

  6. Testing a Microarray to Detect and Monitor Toxic Microalgae in Arcachon Bay in France

    Directory of Open Access Journals (Sweden)

    Linda K. Medlin

    2013-03-01

    Full Text Available Harmful algal blooms (HABs) occur worldwide, causing health problems and economic damage to fisheries and tourism. Monitoring agencies are therefore essential, yet monitoring is based only on time-consuming light microscopy, a level at which a correct identification can be limited by insufficient morphological characters. The project MIDTAL (Microarray Detection of Toxic Algae), an FP7-funded EU project, used rRNA genes (SSU and LSU) as a target on microarrays to identify toxic species. Furthermore, toxins were detected with a newly developed multiplex optical Surface Plasmon Resonance biosensor (Multi SPR) and compared with an enzyme-linked immunosorbent assay (ELISA). In this study, we demonstrate the latest generation of MIDTAL microarrays (version 3) and show the correlation between cell counts, detected toxin and microarray signals from field samples taken in Arcachon Bay in France in 2011. The MIDTAL microarray always detected more potentially toxic species than those detected by microscopic counts. The toxin detection was even more sensitive than both methods. Because of the universal nature of both toxin and species microarrays, they can be used to detect invasive species. Nevertheless, the MIDTAL microarray is not completely universal: first, because not all toxic species are on the chip, and second, because invasive species, such as Ostreopsis, already influence European coasts.

  7. Fuzzy support vector machine for microarray imbalanced data classification

    Science.gov (United States)

    Ladayya, Faroh; Purnami, Santi Wulan; Irhamah

    2017-11-01

    DNA microarrays contain gene expression data with small sample sizes and large numbers of features. Imbalanced classes are, moreover, a common problem in microarray data, occurring when a dataset is dominated by a class with significantly more instances than the minority classes. A classification method is therefore needed that handles both high-dimensional and imbalanced data. The Support Vector Machine (SVM) is a classification method capable of handling large or small samples, nonlinearity, high dimensionality, overfitting and local minima. SVM has been widely applied to DNA microarray data classification, and it has been shown to provide the best performance among machine learning methods. However, imbalanced data remain a problem, because SVM treats all samples with the same importance, so the results are biased against the minority class. To overcome imbalanced data, Fuzzy SVM (FSVM) is proposed. This method applies a fuzzy membership to each input point and reformulates the SVM so that different input points make different contributions to the classifier. The minority classes receive large fuzzy memberships, so FSVM can pay more attention to the samples with larger fuzzy membership. Because DNA microarray data are high dimensional with a very large number of features, feature selection is first performed using the Fast Correlation-Based Filter (FCBF). In this study, SVM and FSVM are evaluated, both with and without FCBF, and their classification performance is compared. Based on the overall results, FSVM on the selected features has the best classification performance.
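
    The membership idea can be sketched in a few lines. This is an illustrative scheme only: memberships inversely proportional to class size, with the helper name `class_balance_memberships` invented here; it is not the membership formulation used in the study, which may also fold in distance-to-centroid terms.

```python
from collections import Counter

def class_balance_memberships(y):
    """Fuzzy membership per sample, inversely proportional to class size,
    so minority-class samples contribute more to the SVM objective.
    (A simple illustrative scheme, not the paper's exact formulation.)"""
    counts = Counter(y)
    smallest = min(counts.values())
    return [smallest / counts[label] for label in y]

y = [0, 0, 0, 0, 1, 1]          # 4 majority samples, 2 minority samples
m = class_balance_memberships(y)
print(m)                        # minority samples get membership 1.0
```

    In practice, such memberships would be supplied to a soft-margin SVM as per-sample weights (for example via the `sample_weight` argument of scikit-learn's `SVC.fit`), so that misclassifying a minority sample is penalized more heavily.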

  8. Genome-scale cluster analysis of replicated microarrays using shrinkage correlation coefficient.

    Science.gov (United States)

    Yao, Jianchao; Chang, Chunqi; Salmi, Mari L; Hung, Yeung Sam; Loraine, Ann; Roux, Stanley J

    2008-06-18

    Currently, clustering with some form of correlation coefficient as the gene similarity metric has become a popular method for profiling genomic data. The Pearson correlation coefficient and the standard deviation (SD)-weighted correlation coefficient are the two most widely used correlations as similarity metrics in clustering microarray data. However, these two correlations are not optimal for analyzing replicated microarray data generated by most laboratories. An effective correlation coefficient is needed to provide statistically sufficient analysis of replicated microarray data. In this study, we describe a novel correlation coefficient, the shrinkage correlation coefficient (SCC), that fully exploits the similarity between the replicated microarray experimental samples. The methodology considers both the number of replicates and the variance within each experimental group in clustering expression data, and provides a robust statistical estimation of the error of replicated microarray data. The value of SCC is revealed by its comparison with the two correlation coefficients that are currently the most widely used (the Pearson correlation coefficient and the SD-weighted correlation coefficient) using statistical measures on both synthetic expression data and real gene expression data from Saccharomyces cerevisiae. Two leading clustering methods, hierarchical and k-means clustering, were applied for the comparison. The comparison indicated that using SCC achieves better clustering performance. Applying SCC-based hierarchical clustering to the replicated microarray data obtained from germinating spores of the fern Ceratopteris richardii, we discovered two clusters of genes with shared expression patterns during spore germination. Functional analysis suggested that some of the genetic mechanisms that control germination in such diverse plant lineages as mosses and angiosperms are also conserved among ferns. This study shows that SCC is an alternative to the Pearson
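
    The general shrinkage idea can be illustrated with a toy estimator: take the Pearson correlation of the replicate means and shrink it toward zero as within-replicate variance grows relative to between-condition variance. The `shrinkage_corr` function below is a hypothetical sketch of that idea under stated assumptions, not the SCC estimator derived in the paper.

```python
import statistics

def pearson(a, b):
    """Plain Pearson correlation between two equal-length profiles."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def shrinkage_corr(reps_a, reps_b):
    """Correlation between two genes measured with replicates.
    reps_a/reps_b: one list of replicate values per condition, e.g.
    [[r1, r2], [r1, r2], ...].  The Pearson correlation of the replicate
    means is shrunk toward 0 as replicate noise grows relative to the
    between-condition signal.  (Illustrative only; not the exact SCC.)"""
    means_a = [statistics.mean(r) for r in reps_a]
    means_b = [statistics.mean(r) for r in reps_b]
    # Average within-condition (replicate) variance = noise.
    noise = statistics.mean([statistics.variance(r) for r in reps_a + reps_b])
    # Variance of the mean profiles = signal.
    signal = statistics.variance(means_a) + statistics.variance(means_b)
    shrink = signal / (signal + noise)      # in (0, 1]: 1 means no noise
    return shrink * pearson(means_a, means_b)

# Two genes with matching mean profiles and tight duplicate spots:
g1 = [[1.0, 1.1], [2.0, 2.1], [3.0, 3.1], [4.0, 4.1]]
g2 = [[1.1, 1.0], [2.1, 2.0], [3.1, 3.0], [4.1, 4.0]]
print(round(shrinkage_corr(g1, g2), 3))     # close to 1: low replicate noise
```

    With noisier replicates the same mean profiles yield a smaller coefficient, which is the qualitative behaviour the abstract describes.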

  9. DNA Microarray Technology

    Science.gov (United States)


  10. Automating dChip: toward reproducible sharing of microarray data analysis

    Directory of Open Access Journals (Sweden)

    Li Cheng

    2008-05-01

    Full Text Available Abstract Background During the past decade, many software packages have been developed for the analysis and visualization of various types of microarrays. We have developed and maintained the widely used dChip as a microarray analysis software package accessible to both biologists and data analysts. However, challenges arise when dChip users want to analyze large numbers of arrays automatically and share data analysis procedures and parameters. Improvement is also needed when the dChip user support team tries to identify the causes of analysis errors or bugs reported by users. Results We report here the implementation and application of the dChip automation module. Through this module, dChip automation files can be created to include menu steps, parameters, and data viewpoints to run automatically. A data-packaging function allows convenient transfer of the dChip software, microarray data, and analysis procedures from one user to another, so that the second user can reproduce the entire analysis session of the first. An analysis report file can also be generated during an automated run, including analysis logs, user comments, and viewpoint screenshots. Conclusion The dChip automation module is a step toward reproducible research, and it can promote a more convenient and reproducible mechanism for sharing microarray software, data, and analysis procedures and results. Automation data packages can also be used as publication supplements. Similar automation mechanisms could be valuable to the research community if implemented in other genomics and bioinformatics software packages.

  11. Bacterial identification and subtyping using DNA microarray and DNA sequencing.

    Science.gov (United States)

    Al-Khaldi, Sufian F; Mossoba, Magdi M; Allard, Marc M; Lienau, E Kurt; Brown, Eric D

    2012-01-01

    The era of fast and accurate discovery of biological sequence motifs in prokaryotic and eukaryotic cells is here. The co-evolution of direct genome sequencing and DNA microarray strategies will not only identify, isotype, and serotype pathogenic bacteria, but will also aid in the discovery of new gene functions by detecting gene expression under different disease and environmental conditions. Microarray bacterial identification has made great advances in working with pure and mixed bacterial samples. The technological advances have moved beyond bacterial gene expression to include bacterial identification and isotyping. Application of new tools such as mid-infrared chemical imaging improves the detection of hybridization in DNA microarrays. The research in this field is promising, and future work will reveal the potential of infrared technology in bacterial identification. On the other hand, DNA sequencing using 454 pyrosequencing is so cost effective that the promise of $1,000 per bacterial genome sequence is becoming a reality. Pyrosequencing is a simple-to-use technique that can produce accurate and quantitative analysis of DNA sequences with great speed. The deposition of massive amounts of bacterial genomic information in databanks is enabling fingerprint phylogenetic analysis that will ultimately replace several technologies such as Pulsed Field Gel Electrophoresis. In this chapter, we review (1) the use of DNA microarrays, with fluorescence and infrared imaging detection, for the identification of pathogenic bacteria, and (2) the use of pyrosequencing in DNA cluster analysis to fingerprint bacterial phylogenetic trees.

  12. An Efficient Ensemble Learning Method for Gene Microarray Classification

    Directory of Open Access Journals (Sweden)

    Alireza Osareh

    2013-01-01

    Full Text Available Gene microarray analysis and classification have proven an effective way to diagnose diseases and cancers. However, it has also been revealed that basic classification techniques have intrinsic drawbacks in achieving accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques, which in turn preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, 5 different feature selection algorithms are considered. To assess the efficiency of RotBoost, other non-ensemble/ensemble techniques, including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging, are also deployed. Experimental results reveal that the combination of the fast correlation-based feature selection method with the ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also the classifiers generated by two widely used conventional ensemble learning methods, that is, Bagging and AdaBoost.
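
    RotBoost couples Rotation Forest's PCA-rotated feature subsets with AdaBoost's sample reweighting. The boosting half is compact enough to sketch: below is a minimal AdaBoost.M1 with decision stumps in pure Python, shown as an illustration of the reweighting mechanics only (the rotation step is omitted, and this is not the authors' implementation).

```python
import math

def stump_train(X, y, w):
    """Best threshold stump (feature, threshold, sign) under weights w; y in {-1, +1}."""
    best = None
    for j in range(len(X[0])):
        for thr in sorted({x[j] for x in X}):
            for sign in (1, -1):
                pred = [sign if x[j] >= thr else -sign for x in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, rounds=10):
    """AdaBoost.M1 with decision stumps: misclassified samples gain weight each round."""
    n = len(y)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, j, thr, sign = stump_train(X, y, w)
        err = max(err, 1e-10)
        if err >= 0.5:           # weak learner no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, j, thr, sign))
        pred = [sign if x[j] >= thr else -sign for x in X]
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, pred)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps."""
    score = sum(a * (s if x[j] >= t else -s) for a, j, t, s in ensemble)
    return 1 if score >= 0 else -1

# Toy 1-D "expression" data, separable at 0.5:
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [-1, -1, -1, 1, 1, 1]
model = adaboost(X, y)
print([predict(model, x) for x in X])    # → [-1, -1, -1, 1, 1, 1]
```

    Rotation Forest would additionally re-express each base learner's training data in a PCA-rotated coordinate system before fitting, which is the source of the ensemble diversity the abstract mentions.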

  13. Effectiveness of a theoretically-based judgment and decision making intervention for adolescents.

    Science.gov (United States)

    Knight, Danica K; Dansereau, Donald F; Becan, Jennifer E; Rowan, Grace A; Flynn, Patrick M

    2015-05-01

    Although adolescents demonstrate capacity for rational decision making, their tendency to be impulsive, place emphasis on peers, and ignore potential consequences of their actions often translates into higher risk-taking including drug use, illegal activity, and physical harm. Problems with judgment and decision making contribute to risky behavior and are core issues for youth in treatment. Based on theoretical and empirical advances in cognitive science, the Treatment Readiness and Induction Program (TRIP) represents a curriculum-based decision making intervention that can be easily inserted into a variety of content-oriented modalities as well as administered as a separate therapeutic course. The current study examined the effectiveness of TRIP for promoting better judgment among 519 adolescents (37 % female; primarily Hispanic and Caucasian) in residential substance abuse treatment. Change over time in decision making and premeditation (i.e., thinking before acting) was compared among youth receiving standard operating practice (n = 281) versus those receiving standard practice plus TRIP (n = 238). Change in TRIP-specific content knowledge was examined among clients receiving TRIP. Premeditation improved among youth in both groups; TRIP clients showed greater improvement in decision making. TRIP clients also reported significant increases over time in self-awareness, positive-focused thinking (e.g., positive self-talk, goal setting), and recognition of the negative effects of drug use. While both genders showed significant improvement, males showed greater gains in metacognitive strategies (i.e., awareness of one's own cognitive process) and recognition of the negative effects of drug use. These results suggest that efforts to teach core thinking strategies and apply/practice them through independent intervention modules may benefit adolescents when used in conjunction with content-based programs designed to change problematic behaviors.

  14. Microarrays: Molecular allergology and nanotechnology for personalised medicine (II).

    Science.gov (United States)

    Lucas, J M

    2010-01-01

    Progress in nanotechnology and DNA recombination techniques has produced tools for the diagnosis and investigation of allergy at the molecular level. The most advanced examples of such progress are microarray techniques, which have expanded not only within research in the field of proteomics but also in application to the clinical setting. Microarrays of allergenic components offer results relating to hundreds of allergenic components in a single test, using a small amount of serum which can be obtained from capillary blood. The availability of new molecules will allow the development of panels including new allergenic components and sources, which will require evaluation for clinical use. Their application opens the door to component-based diagnosis, to the holistic perception of sensitisation as represented by molecular allergy, and to patient-centred medical practice, by allowing great diagnostic accuracy and the definition of individualised immunotherapy for each patient. The present article reviews the application of allergenic component microarrays in allergology for diagnosis, management in the form of specific immunotherapy, and epidemiological studies. A review is also made of the use of protein and gene microarray techniques in basic research and in allergological diseases. Lastly, an evaluation is made of the challenges we face in introducing such techniques into clinical practice, and of the future perspectives of this new technology. Copyright 2010 SEICAP. Published by Elsevier Espana. All rights reserved.

  15. Fabrication of protein microarrays for alpha fetoprotein detection by using a rapid photo-immobilization process

    Directory of Open Access Journals (Sweden)

    Sirasa Yodmongkol

    2016-03-01

    Full Text Available In this study, protein microarrays based on sandwich immunoassays are generated to quantify the amount of alpha fetoprotein (AFP) in blood serum. For chip generation, a mixture of capture antibody and a photoactive copolymer consisting of N,N-dimethylacrylamide (DMAA), methacryloyloxy benzophenone (MaBP), and Na-4-styrenesulfonate (SSNa) was spotted onto unmodified polymethyl methacrylate (PMMA) substrates. Subsequent to printing of the microarray, the polymer and protein were photochemically cross-linked, and the forming biofunctionalized hydrogels were simultaneously bound to the chip surface by short UV irradiation. The obtained biochip was incubated with AFP antigen, followed by biotinylated AFP antibody and streptavidin-Cy5, and the fluorescence signal was read out. The developed microarray biochip covers the range of AFP in serum samples, such as maternal serum, between 5 and 100 ng/ml. The chip production process is based on a fast and simple immobilization process which can be applied to conventional plastic surfaces. This protein microarray production process is therefore a promising method to fabricate biochips for AFP screening. Keywords: Photo-immobilization, Protein microarray, Alpha fetoprotein, Hydrogel, 3D surface, Down syndrome
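
    Quantifying an unknown serum sample across the 5-100 ng/ml range amounts to inverting a standard curve built from calibrators. The sketch below uses entirely hypothetical calibrator values and simple log-linear interpolation; real immunoassay read-outs are usually fit with a 4-parameter logistic model instead.

```python
import math

# Hypothetical calibrators: (AFP concentration in ng/ml, background-corrected
# fluorescence).  These numbers are invented for illustration only.
CALIBRATORS = [(5, 120.0), (10, 230.0), (25, 520.0), (50, 980.0), (100, 1800.0)]

def afp_concentration(signal):
    """Interpolate an AFP concentration from a fluorescence read-out,
    assuming the signal grows monotonically with concentration."""
    pts = CALIBRATORS
    if not pts[0][1] <= signal <= pts[-1][1]:
        raise ValueError("signal outside the 5-100 ng/ml calibration range")
    for (c0, s0), (c1, s1) in zip(pts, pts[1:]):
        if s0 <= signal <= s1:
            frac = (signal - s0) / (s1 - s0)
            # Interpolate on the log-concentration axis.
            return math.exp(math.log(c0) + frac * (math.log(c1) - math.log(c0)))
    raise AssertionError("unreachable")

print(round(afp_concentration(230.0), 1))   # a calibrator signal maps back to 10 ng/ml
print(round(afp_concentration(700.0), 1))   # an intermediate signal
```

    The out-of-range guard matters in practice: signals above the top calibrator require sample dilution rather than extrapolation.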

  16. Ethical analysis to improve decision-making on health technologies.

    Science.gov (United States)

    Saarni, Samuli I; Hofmann, Bjørn; Lampe, Kristian; Lühmann, Dagmar; Mäkelä, Marjukka; Velasco-Garrido, Marcial; Autti-Rämö, Ilona

    2008-08-01

    Health technology assessment (HTA) is the multidisciplinary study of the implications of the development, diffusion and use of health technologies. It supports health-policy decisions by providing a joint knowledge base for decision-makers. To increase its policy relevance, HTA tries to extend beyond effectiveness and costs to also considering the social, organizational and ethical implications of technologies. However, a commonly accepted method for analysing the ethical aspects of health technologies is lacking. This paper describes a model for ethical analysis of health technology that is easy and flexible to use in different organizational settings and cultures. The model is part of the EUnetHTA project, which focuses on the transferability of HTAs between countries. The EUnetHTA ethics model is based on the insight that the whole HTA process is value laden. It is not sufficient to only analyse the ethical consequences of a technology, but also the ethical issues of the whole HTA process must be considered. Selection of assessment topics, methods and outcomes is essentially a value-laden decision. Health technologies may challenge moral or cultural values and beliefs, and their implementation may also have significant impact on people other than the patient. These are essential considerations for health policy. The ethics model is structured around key ethical questions rather than philosophical theories, to be applicable to different cultures and usable by non-philosophers. Integrating ethical considerations into HTA can improve the relevance of technology assessments for health care and health policy in both developed and developing countries.

  17. Discovering biological progression underlying microarray samples.

    Directory of Open Access Journals (Sweden)

    Peng Qiu

    2011-04-01

    Full Text Available In biological systems that undergo processes such as differentiation, a clear concept of progression exists. We present a novel computational approach, called Sample Progression Discovery (SPD), to discover patterns of biological progression underlying microarray gene expression data. SPD assumes that the individual samples of a microarray dataset are related by an unknown biological process (i.e., differentiation, development, cell cycle, disease progression), and that each sample represents one unknown point along the progression of that process. SPD aims to organize the samples in a manner that reveals the underlying progression and to simultaneously identify subsets of genes that are responsible for that progression. We demonstrate the performance of SPD on a variety of microarray datasets that were generated by sampling a biological process at different points along its progression, without providing SPD any information about the underlying process. When applied to a cell cycle time series microarray dataset, SPD was not provided any prior knowledge of the samples' time order or of which genes are cell-cycle regulated, yet SPD recovered the correct time order and identified many genes that have been associated with the cell cycle. When applied to B-cell differentiation data, SPD recovered the correct order of stages of normal B-cell differentiation and the linkage between preB-ALL tumor cells and their cell of origin, preB. When applied to mouse embryonic stem cell differentiation data, SPD uncovered a landscape of ESC differentiation into various lineages and genes that represent both generic and lineage-specific processes. When applied to a prostate cancer microarray dataset, SPD identified gene modules that reflect a progression consistent with disease stages. SPD may be best viewed as a novel tool for synthesizing biological hypotheses because it provides a likely biological progression underlying a microarray dataset and, perhaps more importantly, the

  18. INTERIM REPORT IMPROVED METHODS FOR INCORPORATING RISK IN DECISION MAKING

    Energy Technology Data Exchange (ETDEWEB)

    Clausen, M. J.; Fraley, D. W.; Denning, R. S.

    1980-08-01

    This paper reports observations and preliminary investigations in the first phase of a research program covering methodologies for making safety-related decisions. The objective has been to gain insight into NRC perceptions of the value of formal decision methods, their possible applications, and how risk is, or may be, incorporated in decision making. The perception of formal decision making techniques, held by various decision makers, and what may be done to improve them, were explored through interviews with NRC staff. An initial survey of decision making methods, an assessment of the applicability of formal methods vis-a-vis the available information, and a review of methods of incorporating risk and uncertainty have also been conducted.

  19. Decision-Making Based on Emotional Images

    OpenAIRE

    Katahira, Kentaro; Fujimura, Tomomi; Okanoya, Kazuo; Okada, Masato

    2011-01-01

    The emotional outcome of a choice affects subsequent decision making. While the relationship between decision making and emotion has attracted attention, studies on emotion and decision making have been independently developed. In this study, we investigated how the emotional valence of pictures, which was stochastically contingent on participants’ choices, influenced subsequent decision making. In contrast to traditional value-based decision-making studies that used money or food as a reward...

  20. Decision making based on emotional images

    OpenAIRE

    Katahira, Kentaro; Fujimura, Tomomi; Okanoya, Kazuo; Okada, Masato

    2011-01-01

    The emotional outcome of a choice affects subsequent decision making. While the relationship between decision making and emotion has attracted attention, studies on emotion and decision making have been independently developed. In this study, we investigated how the emotional valence of pictures, which was stochastically contingent on participants’ choices, influenced subsequent decision making. In contrast to traditional value-based decision-making studies that used money or food as a reward...

  1. Principles of gene microarray data analysis.

    Science.gov (United States)

    Mocellin, Simone; Rossi, Carlo Riccardo

    2007-01-01

    The development of several gene expression profiling methods, such as comparative genomic hybridization (CGH), differential display, serial analysis of gene expression (SAGE), and gene microarray, together with the sequencing of the human genome, has provided an opportunity to monitor and investigate the complex cascade of molecular events leading to tumor development and progression. The availability of such large amounts of information has shifted the attention of scientists towards a nonreductionist approach to biological phenomena. High throughput technologies can be used to follow changing patterns of gene expression over time. Among them, gene microarray has become prominent because it is easier to use, does not require large-scale DNA sequencing, and allows for the parallel quantification of thousands of genes from multiple samples. Gene microarray technology is rapidly spreading worldwide and has the potential to drastically change the therapeutic approach to patients affected with tumor. Therefore, it is of paramount importance for both researchers and clinicians to know the principles underlying the analysis of the huge amount of data generated with microarray technology.

  2. Development of a microarray-based assay for efficient testing of new HSP70/DnaK inhibitors.

    Science.gov (United States)

    Mohammadi-Ostad-Kalayeh, Sona; Hrupins, Vjaceslavs; Helmsen, Sabine; Ahlbrecht, Christin; Stahl, Frank; Scheper, Thomas; Preller, Matthias; Surup, Frank; Stadler, Marc; Kirschning, Andreas; Zeilinger, Carsten

    2017-12-15

    A facile method for testing ATP binding in a highly miniaturized microarray environment, using human HSP70 and DnaK from Mycobacterium tuberculosis as biological targets, is reported. Supported by molecular modelling studies, we demonstrate that the position of the fluorescence label on ATP has a strong influence on binding to human HSP70. Importantly, the label has to be positioned on the adenine ring and not on the terminal phosphate group. Unlabelled ATP displaced bound Cy5-ATP from HSP70 in the micromolar range. The affinity of a well-known HSP70 inhibitor, VER155008, for the ATP binding site in HSP70 was determined, with an EC50 in the micromolar range, whereas reblastin, an HSP90 inhibitor, did not compete for ATP in the presence of HSP70. The applicability of the method was demonstrated by screening a small compound library of natural products. This revealed that the terphenyls rickenyl A and D, recently isolated from cultures of the fungus Hypoxylon rickii, are inhibitors of HSP70. They compete with ATP for the chaperone in the range of 29 µM (Rickenyl D) and 49 µM (Rickenyl A). Furthermore, the microarray-based test system enabled protein-protein interaction analysis using full-length HSP70 and HSP90 proteins. Labelled full-length human HSP90 binds to HSP70 with a half-maximal affinity of 5.5 µg/ml (∼40 µM). The data also demonstrate that the microarray test has potential for many applications, from inhibitor screening to target-oriented interaction studies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Decision boxes for clinicians to support evidence-based practice and shared decision making: the user experience

    Directory of Open Access Journals (Sweden)

    Giguere Anik

    2012-08-01

    Full Text Available Abstract Background This project engages patients and physicians in the development of Decision Boxes, short clinical topic summaries covering medical questions that have no single best answer. Decision Boxes aim to prepare the clinician to communicate the risks and benefits of the available options to the patient so they can make an informed decision together. Methods Seven researchers (including four practicing family physicians) selected 10 clinical topics relevant to primary care practice through a Delphi survey. We then developed two one-page prototypes on two of these topics: prostate cancer screening with the prostate-specific antigen test, and prenatal screening for trisomy 21 with the serum integrated test. We presented the prototypes to purposeful samples of family physicians distributed in two focus groups, and patients distributed in four focus groups. We used the User Experience Honeycomb to explore barriers and facilitators to the communication design used in Decision Boxes. All discussions were transcribed, and three researchers proceeded to thematic content analysis of the transcriptions. The coding scheme was first developed from the Honeycomb's seven themes (valuable, usable, credible, useful, desirable, accessible, and findable) and included new themes suggested by the data. Prototypes were modified in light of our findings. Results Three rounds were necessary for a majority of researchers to select the 10 clinical topics. Fifteen physicians and 33 patients participated in the focus groups. Following the analyses, three sections were added to the Decision Boxes: introduction, patient counseling, and references. The information was spread over two pages to make the Decision Boxes less busy and improve users' first impression. To improve credibility, we gave more visibility to the research institutions involved in development. A statement on the boxes' purpose and a flow chart representing the shared decision

  4. An evaluation of two-channel ChIP-on-chip and DNA methylation microarray normalization strategies

    Science.gov (United States)

    2012-01-01

    Background The combination of chromatin immunoprecipitation with two-channel microarray technology enables genome-wide mapping of binding sites of DNA-interacting proteins (ChIP-on-chip) or sites with methylated CpG di-nucleotides (DNA methylation microarray). These powerful tools are the gateway to understanding gene transcription regulation. Since the goals of such studies, the sample preparation procedures, the microarray content and study design are all different from transcriptomics microarrays, the data pre-processing strategies traditionally applied to transcriptomics microarrays may not be appropriate. Particularly, the main challenge of the normalization of "regulation microarrays" is (i) to make the data of individual microarrays quantitatively comparable and (ii) to keep the signals of the enriched probes, representing DNA sequences from the precipitate, as distinguishable as possible from the signals of the un-enriched probes, representing DNA sequences largely absent from the precipitate. Results We compare several widely used normalization approaches (VSN, LOWESS, quantile, T-quantile, Tukey's biweight scaling, Peng's method) applied to a selection of regulation microarray datasets, ranging from DNA methylation to transcription factor binding and histone modification studies. Through comparison of the data distributions of control probes and gene promoter probes before and after normalization, and assessment of the power to identify known enriched genomic regions after normalization, we demonstrate that there are clear differences in performance between normalization procedures. Conclusion T-quantile normalization applied separately on the channels and Tukey's biweight scaling outperform other methods in terms of the conservation of enriched and un-enriched signal separation, as well as in identification of genomic regions known to be enriched. T-quantile normalization is preferable as it additionally improves comparability between microarrays.
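
Of the procedures compared in this record, plain quantile normalization is the simplest to make concrete: every array is forced to share one reference distribution. The NumPy sketch below is illustrative only; it is not the study's implementation (which favoured T-quantile normalization applied separately per channel):

```python
import numpy as np

def quantile_normalize(x):
    """Make every column (microarray) share the same empirical distribution.

    x: 2-D float array, rows = probes, columns = arrays.
    Each value is replaced by the mean, taken across arrays, of the values
    occupying the same rank, so all columns become identical once sorted.
    Ties are broken arbitrarily, as in naive implementations.
    """
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)  # per-array probe ranks
    reference = np.sort(x, axis=0).mean(axis=1)        # shared target distribution
    return reference[ranks]

# Toy signals for 4 probes measured on 3 arrays
signals = np.array([[5.0, 4.0, 3.0],
                    [2.0, 1.0, 4.0],
                    [3.0, 4.0, 6.0],
                    [4.0, 2.0, 8.0]])
normalized = quantile_normalize(signals)
```

After normalization all columns have identical sorted values, which also illustrates the concern raised above for "regulation microarrays": forcing one distribution can blur the separation between enriched and un-enriched probes.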

  5. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Full Text Available Microarray study enables us to obtain hundreds of thousands of expressions of genes or genotypes at once, and it is an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  6. A tribute to Charlie Chaplin: Induced positive affect improves reward-based decision-learning in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    K. Richard eRidderinkhof

    2012-06-01

    Full Text Available Reward-based decision-learning refers to the process of learning to select those actions that lead to rewards while avoiding actions that lead to punishments. This process, known to rely on dopaminergic activity in striatal brain regions, is compromised in Parkinson’s disease (PD). We hypothesized that such decision-learning deficits are alleviated by induced positive affect, which is thought to incur transient boosts in midbrain and striatal dopaminergic activity. Computational measures of probabilistic reward-based decision-learning were determined for 51 patients diagnosed with PD. Previous work has shown these measures to rely on the nucleus caudatus (outcome evaluation) during the early phases of learning and the putamen (reward prediction) during later phases of learning. We observed that induced positive affect facilitated learning, through its effects on reward prediction rather than outcome evaluation. Viewing a few minutes of comedy clips served to remedy dopamine-related problems in putamen-based frontostriatal circuitry and, consequently, in learning to predict which actions will yield reward.

  7. Quantitative multiplex quantum dot in-situ hybridisation based gene expression profiling in tissue microarrays identifies prognostic genes in acute myeloid leukaemia

    Energy Technology Data Exchange (ETDEWEB)

    Tholouli, Eleni [Department of Haematology, Manchester Royal Infirmary, Oxford Road, Manchester, M13 9WL (United Kingdom); MacDermott, Sarah [The Medical School, The University of Manchester, Oxford Road, M13 9PT Manchester (United Kingdom); Hoyland, Judith [School of Biomedicine, Faculty of Medical and Human Sciences, The University of Manchester, Oxford Road, M13 9PT Manchester (United Kingdom); Yin, John Liu [Department of Haematology, Manchester Royal Infirmary, Oxford Road, Manchester, M13 9WL (United Kingdom); Byers, Richard, E-mail: richard.byers@cmft.nhs.uk [School of Cancer and Enabling Sciences, Faculty of Medical and Human Sciences, The University of Manchester, Stopford Building, Oxford Road, M13 9PT Manchester (United Kingdom)

    2012-08-24

    Highlights: • Development of a quantitative high throughput in situ expression profiling method. • Application to a tissue microarray of 242 AML bone marrow samples. • Identification of HOXA4, HOXA9, Meis1 and DNMT3A as prognostic markers in AML. -- Abstract: Measurement and validation of microarray gene signatures in routine clinical samples is problematic and a rate limiting step in translational research. In order to facilitate measurement of microarray identified gene signatures in routine clinical tissue a novel method combining quantum dot based oligonucleotide in situ hybridisation (QD-ISH) and post-hybridisation spectral image analysis was used for multiplex in-situ transcript detection in archival bone marrow trephine samples from patients with acute myeloid leukaemia (AML). Tissue-microarrays were prepared into which white cell pellets were spiked as a standard. Tissue microarrays were made using routinely processed bone marrow trephines from 242 patients with AML. QD-ISH was performed for six candidate prognostic genes using triplex QD-ISH for DNMT1, DNMT3A, DNMT3B, and for HOXA4, HOXA9, Meis1. Scrambled oligonucleotides were used to correct for background staining followed by normalisation of expression against the expression values for the white cell pellet standard. Survival analysis demonstrated that low expression of HOXA4 was associated with poorer overall survival (p = 0.009), whilst high expression of HOXA9 (p < 0.0001), Meis1 (p = 0.005) and DNMT3A (p = 0.04) were associated with early treatment failure. These results demonstrate application of a standardised, quantitative multiplex QD-ISH method for identification of prognostic markers in formalin-fixed paraffin-embedded clinical samples, facilitating measurement of gene expression signatures in routine clinical samples.
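
The two-step normalisation described (background correction with the scrambled-oligonucleotide signal, then scaling against the spiked white-cell-pellet standard) reduces to a few lines. The function and its inputs are a hypothetical sketch of that scheme, not the authors' code:

```python
def normalize_signal(raw, scrambled_background, standard_signal):
    """Express a QD-ISH transcript signal relative to the on-array standard.

    raw:                  spectral signal for the gene probe in the sample
    scrambled_background: signal of the scrambled (non-targeting) oligo,
                          used as the estimate of non-specific staining
    standard_signal:      signal for the same probe in the spiked white
                          cell pellet, making arrays comparable
    """
    corrected = max(raw - scrambled_background, 0.0)        # background correction
    reference = max(standard_signal - scrambled_background, 1e-9)
    return corrected / reference
```

For example, a raw signal of 300 with scrambled background 100 and pellet-standard signal 500 yields a normalised expression of 0.5, i.e. half the standard's background-corrected level.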

  8. Utility of the pooling approach as applied to whole genome association scans with high-density Affymetrix microarrays

    Directory of Open Access Journals (Sweden)

    Gray Joanna

    2010-11-01

    Full Text Available Abstract Background We report an attempt to extend the previously successful approach of combining SNP (single nucleotide polymorphism) microarrays and DNA pooling (SNP-MaP) employing high-density microarrays. Whereas earlier studies employed a range of Affymetrix SNP microarrays comprising from 10 K to 500 K SNPs, this most recent investigation used the 6.0 chip which displays 906,600 SNP probes and 946,000 probes for the interrogation of CNVs (copy number variations). The genotyping assay using the Affymetrix SNP 6.0 array is highly demanding on sample quality due to the small feature size, low redundancy, and lack of mismatch probes. Findings In the first study published so far using this microarray on pooled DNA, we found that pooled cheek swab DNA could not accurately predict real allele frequencies of the samples that comprised the pools. In contrast, the allele frequency estimates using blood DNA pools were reasonable, although inferior compared to those obtained with previously employed Affymetrix microarrays. However, it might be possible to improve performance by developing improved analysis methods. Conclusions Despite the decreasing costs of genome-wide individual genotyping, the pooling approach may have applications in very large-scale case-control association studies. In such cases, our study suggests that high-quality DNA preparations and lower density platforms should be preferred.
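
SNP-MaP estimates a pool's allele frequency from the relative signal of the A- and B-allele probes. A minimal sketch of that estimate follows; the per-SNP correction factor k (conventionally estimated from heterozygous reference individuals to absorb unequal probe affinities) is an assumed input, not a value from this study:

```python
def pooled_allele_frequency(intensity_a, intensity_b, k=1.0):
    """Estimate the frequency of allele A in a DNA pool from probe intensities.

    The raw relative allele signal A / (A + B) is biased when the A and B
    probes hybridise with different affinities; k rescales the A channel
    to remove that bias (k = 1 leaves the raw signal unchanged).
    """
    a = intensity_a * k
    return a / (a + intensity_b)
```

With k = 1 this is just the raw relative allele signal; the poor cheek-swab results reported above correspond to this kind of estimate failing to track the pools' true allele frequencies.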

  9. Improving rural electricity system planning: An agent-based model for stakeholder engagement and decision making

    International Nuclear Information System (INIS)

    Alfaro, Jose F.; Miller, Shelie; Johnson, Jeremiah X.; Riolo, Rick R.

    2017-01-01

    Energy planners in regions with low rates of electrification face complex and high-risk challenges in selecting appropriate generating technologies and grid centralization. To better inform such processes, we present an Agent-Based Model (ABM) that facilitates engagement with stakeholders. This approach evaluates long-term plans using the cost of delivered electricity, resource mix, jobs and economic stimulus created within communities, and decentralized generation mix of the system, with results provided in a spatially-resolved format. This approach complements existing electricity planning methods (e.g., Integrated Resource Planning) by offering novel evaluation criteria based on typical stakeholder preferences. We demonstrate the utility of this approach with a case study based on a “blank-slate” scenario, which begins without generation or transmission infrastructure, for the long-term rural renewable energy plans of Liberia, West Africa. We consider five electrification strategies: prioritizing larger populations, deploying large resources, creating jobs, providing economic stimulus, and step-wise cost minimization. Through the case study we demonstrate how this approach can be used to engage stakeholders, supplement more established energy planning tools, and illustrate the effects of stakeholder decisions and preferences on the performance of the system. - Highlights: • An Agent Based Model, BABSTER, for electrification planning is presented. • BABSTER provides a highly engaging spatially resolved interface. • Allows flexible investigation of decision strategies with real-world incentives. • We show that decision strategies directly impact centralization and resource choice. • It is illustrated through the case study of Liberia, West Africa.

  10. SLIMarray: Lightweight software for microarray facility management

    Directory of Open Access Journals (Sweden)

    Marzolf Bruz

    2006-10-01

    Full Text Available Abstract Background Microarray core facilities are commonplace in biological research organizations, and need systems for accurately tracking various logistical aspects of their operation. Although these different needs could be handled separately, an integrated management system provides benefits in organization, automation and reduction in errors. Results We present SLIMarray (System for Lab Information Management of Microarrays), an open source, modular database web application capable of managing microarray inventories, sample processing and usage charges. The software allows modular configuration and is well suited for further development, providing users the flexibility to adapt it to their needs. SLIMarray Lite, a version of the software that is especially easy to install and run, is also available. Conclusion SLIMarray addresses the previously unmet need for free and open source software for managing the logistics of a microarray core facility.

  11. Creation of antifouling microarrays by photopolymerization of zwitterionic compounds for protein assay and cell patterning.

    Science.gov (United States)

    Sun, Xiuhua; Wang, Huaixin; Wang, Yuanyuan; Gui, Taijiang; Wang, Ke; Gao, Changlu

    2018-04-15

    Nonspecific binding or adsorption of biomolecules presents a major obstacle to higher sensitivity, specificity and reproducibility in microarray technology. We report herein a method to fabricate antifouling microarray via photopolymerization of biomimetic betaine compounds. In brief, carboxybetaine methacrylate was polymerized as arrays for protein sensing, while sulfobetaine methacrylate was polymerized as background. With the abundant carboxyl groups on array surfaces and zwitterionic polymers on the entire surfaces, this microarray allows biomolecular immobilization and recognition with low nonspecific interactions due to its antifouling property. Therefore, low concentration of target molecules can be captured and detected by this microarray. It was proved that a concentration of 10 ng/mL bovine serum albumin in the sample matrix of bovine serum can be detected by the microarray derivatized with anti-bovine serum albumin. Moreover, with proper hydrophilic-hydrophobic designs, this approach can be applied to fabricate surface-tension droplet arrays, which allows surface-directed cell adhesion and growth. These light controllable approaches constitute a clear improvement in the design of antifouling interfaces, which may lead to greater flexibility in the development of interfacial architectures and wider application in blood contact microdevices. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Rationality versus reality: the challenges of evidence-based decision making for health policy makers.

    Science.gov (United States)

    McCaughey, Deirdre; Bruning, Nealia S

    2010-05-26

    Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than on the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. In this era of increasing adoption of evidence-based healthcare models, the rational choice, utility maximizing assumptions in EBDM and EBPM, must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors. In this paper, we identify some of the biases and potential intervention points and provide some initial suggestions about how the EBDM/EBPM process can be

  15. Risk-based emergency decision support

    International Nuclear Information System (INIS)

    Koerte, Jens

    2003-01-01

    In the present paper we discuss how to assist critical decisions taken under complex, contingent circumstances, with a high degree of uncertainty and short time frames. In such sharp-end decision regimes, standard rule-based decision support systems do not capture the complexity of the situation. At the same time, traditional risk analysis is of little use due to variability in the specific circumstances. How, then, can an organisation provide assistance to, e.g. pilots in dealing with such emergencies? A method called 'contingent risk and decision analysis' is presented, to provide decision support for decisions under variable circumstances and short available time scales. The method consists of nine steps of definition, modelling, analysis and criteria definition to be performed 'off-line' by analysts, and procedure generation to transform the analysis result into an operational decision aid. Examples of pilots' decisions in response to sudden vibration in offshore helicopter transport are used to illustrate the approach.

  16. Metric learning for DNA microarray data analysis

    International Nuclear Information System (INIS)

    Takeuchi, Ichiro; Nakagawa, Masao; Seto, Masao

    2009-01-01

    In many microarray studies, gene set selection is an important preliminary step for subsequent main tasks such as tumor classification, cancer subtype identification, etc. In this paper, we investigate the possibility of using metric learning as an alternative to gene set selection. We develop a simple metric learning algorithm intended for microarray data analysis. Exploiting a property of the algorithm, we introduce a novel approach for extending the metric learning to be adaptive. We apply the algorithm to previously studied microarray data on malignant lymphoma subtype identification.
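
The idea of replacing hard gene selection with a learned metric can be made concrete with a toy diagonal metric: informative genes receive large weights in the distance function instead of being kept or discarded outright. The Fisher-score weighting below is purely illustrative and is not the algorithm developed in this paper:

```python
import numpy as np

def fisher_weights(x, y):
    """Per-gene weight: between-class variance over within-class variance.

    x: 2-D array, rows = samples, columns = genes; y: class labels.
    A soft alternative to gene selection: discriminative genes get large
    weights, noisy genes get weights near zero.
    """
    overall = x.mean(axis=0)
    between = sum((x[y == c].mean(axis=0) - overall) ** 2 for c in np.unique(y))
    within = sum(x[y == c].var(axis=0) for c in np.unique(y)) + 1e-12
    return between / within

def weighted_distance(a, b, w):
    """Distance between two expression profiles under the diagonal metric diag(w)."""
    return float(np.sqrt(np.sum(w * (a - b) ** 2)))

# Toy expression data: gene 0 separates the two classes, gene 1 is noise.
x = np.array([[0.0, 5.0], [0.2, 1.0], [5.0, 5.2], [5.2, 1.1]])
y = np.array([0, 0, 1, 1])
w = fisher_weights(x, y)  # gene 0 dominates the learned metric
```

Under such a metric, nearest-neighbour style classifiers effectively ignore uninformative genes without any explicit selection step.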

  17. SSHscreen and SSHdb, generic software for microarray based gene discovery: application to the stress response in cowpea

    Directory of Open Access Journals (Sweden)

    Oelofse Dean

    2010-04-01

    redundant clones together and illustrated that the SSHscreen plots are a useful tool for choosing anonymous clones for sequencing, since redundant clones cluster together on the enrichment ratio plots. Conclusions We developed the SSHscreen-SSHdb software pipeline, which greatly facilitates gene discovery using suppression subtractive hybridization by improving the selection of clones for sequencing after screening the library on a small number of microarrays. Annotation of the sequence information and collaboration was further enhanced through a web-based SSHdb database, and we illustrated this through identification of drought responsive genes from cowpea, which can now be investigated in gene function studies. SSH is a popular and powerful gene discovery tool, and therefore this pipeline will have application for gene discovery in any biological system, particularly non-model organisms. SSHscreen 2.0.1 and a link to SSHdb are available from http://microarray.up.ac.za/SSHscreen.

  18. onlineDeCISion.org: a web-based decision aid for DCIS treatment.

    Science.gov (United States)

    Ozanne, Elissa M; Schneider, Katharine H; Soeteman, Djøra; Stout, Natasha; Schrag, Deborah; Fordis, Michael; Punglia, Rinaa S

    2015-11-01

    Women diagnosed with DCIS face complex treatment decisions and often do so with inaccurate and incomplete understanding of the risks and benefits involved. Our objective was to create a tool to guide these decisions for both providers and patients. We developed a web-based decision aid designed to provide clinicians with tailored information about a patient’s recurrence risks and survival outcomes following different treatment strategies for DCIS. A theoretical framework, microsimulation model (Soeteman et al., J Natl Cancer 105:774–781, 2013) and best practices for web-based decision tools guided the development of the decision aid. The development process used semi-structured interviews and usability testing with key stakeholders, including a diverse group of multidisciplinary clinicians and a patient advocate. We developed onlineDeCISion.org to include the following features that were rated as important by the stakeholders: (1) descriptions of each of the standard treatment options available; (2) visual projections of the likelihood of time-specific (10-year and lifetime) breast-preservation, recurrence, and survival outcomes; and (3) side-by-side comparisons of down-stream effects of each treatment choice. All clinicians reviewing the decision aid in usability testing were interested in using it in their clinical practice. The decision aid is available in a web-based format and is planned to be publicly available. To improve treatment decision making in patients with DCIS, we have developed a web-based decision aid onlineDeCISion.org that conforms to best practices and that clinicians are interested in using in their clinics with patients to better inform treatment decisions.

  19. Using fuzzy-trace theory to understand and improve health judgments, decisions, and behaviors: A literature review.

    Science.gov (United States)

    Blalock, Susan J; Reyna, Valerie F

    2016-08-01

    Fuzzy-trace theory is a dual-process model of memory, reasoning, judgment, and decision making that contrasts with traditional expectancy-value approaches. We review the literature applying fuzzy-trace theory to health with 3 aims: evaluating whether the theory's basic distinctions have been validated empirically in the domain of health; determining whether these distinctions are useful in assessing, explaining, and predicting health-related psychological processes; and determining whether the theory can be used to improve health judgments, decisions, or behaviors, especially compared to other approaches. We conducted a literature review using PubMed, PsycINFO, and Web of Science to identify empirical peer-reviewed papers that applied fuzzy-trace theory, or central constructs of the theory, to investigate health judgments, decisions, or behaviors. Seventy-nine studies (updated total is 94 studies; see Supplemental materials) were identified, over half published since 2012, spanning a wide variety of conditions and populations. Study findings supported the prediction that verbatim and gist representations are distinct constructs that can be retrieved independently using different cues. Although gist-based reasoning was usually associated with improved judgment and decision making, 4 sources of bias that can impair gist reasoning were identified. Finally, promising findings were reported from intervention studies that used fuzzy-trace theory to improve decision making and decrease unhealthy risk taking. Despite large gaps in the literature, most studies supported all 3 aims. By focusing on basic psychological processes that underlie judgment and decision making, fuzzy-trace theory provides insights into how individuals make decisions involving health risks and suggests innovative intervention approaches to improve health outcomes. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. EFFECT OF A TRAINING PROGRAM ON THE IMPROVEMENT OF BASKETBALL PLAYERS' DECISION MAKING

    Directory of Open Access Journals (Sweden)

    Francisco Alarcón

    2009-01-01

    Full Text Available The aim of this study was to analyse whether a tactical training program based on a constructivist model can improve decision making related to keeping control of the ball for a men's senior basketball team composed of ten players. The dependent variables were: player distribution around the ball, and achieving support on both sides of the ball at an effective passing distance. Data collection was made through observational analysis utilizing a previously validated tool. A pretest-posttest design without a control group was used. Results demonstrated an improvement in decision making after the posttest for both the number of support players near the player with the ball, as it increased from 85% in the pretest to 100% in the posttest, and the number of collective or team actions around the player with the ball (from 5% to 76.5%), with highly significant differences. The primary conclusion is that a training program for teaching team tactics based on a constructivist model has a positive influence on players' capability to facilitate the pass to their teammates.

  1. Microarray-Based Gene Expression Analysis for Veterinary Pathologists: A Review.

    Science.gov (United States)

    Raddatz, Barbara B; Spitzbarth, Ingo; Matheis, Katja A; Kalkuhl, Arno; Deschl, Ulrich; Baumgärtner, Wolfgang; Ulrich, Reiner

    2017-09-01

    High-throughput, genome-wide transcriptome analysis is now commonly used in all fields of life science research and is on the cusp of medical and veterinary diagnostic application. Transcriptomic methods such as microarrays and next-generation sequencing generate enormous amounts of data. The pathogenetic expertise acquired from understanding of general pathology provides veterinary pathologists with a profound background, which is essential in translating transcriptomic data into meaningful biological knowledge, thereby leading to a better understanding of underlying disease mechanisms. The scientific literature concerning high-throughput data-mining techniques usually addresses mathematicians or computer scientists as the target audience. In contrast, the present review provides the reader with a clear and systematic basis from a veterinary pathologist's perspective. Therefore, the aims are (1) to introduce the reader to the necessary methodological background; (2) to introduce the sequential steps commonly performed in a microarray analysis including quality control, annotation, normalization, selection of differentially expressed genes, clustering, gene ontology and pathway analysis, analysis of manually selected genes, and biomarker discovery; and (3) to provide references to publicly available and user-friendly software suites. In summary, the data analysis methods presented within this review will enable veterinary pathologists to analyze high-throughput transcriptome data obtained from their own experiments, supplemental data that accompany scientific publications, or public repositories in order to obtain a more in-depth insight into underlying disease mechanisms.
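
Of the sequential steps listed, selection of differentially expressed genes is the one most often re-implemented; a common minimal recipe combines a per-gene Welch t statistic with a fold-change filter. The sketch below assumes log2-scale expression values and fixed cutoffs, both illustrative choices rather than this review's recommendations:

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic per gene (rows = genes, columns = samples)."""
    va = a.var(axis=1, ddof=1) / a.shape[1]
    vb = b.var(axis=1, ddof=1) / b.shape[1]
    return (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(va + vb)

def select_differential(a, b, t_cutoff=3.0, fc_cutoff=1.0):
    """Indices of genes passing both a |t| and a log2 fold-change filter."""
    t = welch_t(a, b)
    log_fc = a.mean(axis=1) - b.mean(axis=1)   # data assumed log2-scaled
    return np.where((np.abs(t) >= t_cutoff) & (np.abs(log_fc) >= fc_cutoff))[0]

# Toy log2 expression: gene 0 is up-regulated in group A, gene 1 is flat.
group_a = np.array([[8.0, 8.2, 7.9], [5.0, 5.1, 4.9]])
group_b = np.array([[6.0, 6.1, 5.9], [5.0, 5.2, 4.8]])
hits = select_differential(group_a, group_b)
```

Real analyses would additionally correct the per-gene p-values for multiple testing (e.g. false discovery rate), a step the dedicated software suites referenced in the review handle automatically.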

  2. Investigation of parameters that affect the success rate of microarray-based allele-specific hybridization assays.

    Directory of Open Access Journals (Sweden)

    Lena Poulsen

    Full Text Available BACKGROUND: The development of microarray-based genetic tests for diseases that are caused by known mutations is becoming increasingly important. The key obstacle to developing functional genotyping assays is that such mutations need to be genotyped regardless of their location in genomic regions. These regions include large variations in G+C content, and structural features like hairpins. METHODS/FINDINGS: We describe a rational, stable method for screening and combining assay conditions for the genetic analysis of 42 Phenylketonuria-associated mutations in the phenylalanine hydroxylase gene. The mutations are located in regions with large variations in G+C content (20-75%). Custom-made microarrays with different lengths of complementary probe sequences and spacers were hybridized with pooled PCR products of 12 exons from each of 38 individual patient DNA samples. The arrays were washed with eight buffers with different stringencies in a custom-made microfluidic system. The data were used to assess which parameters play significant roles in assay development. CONCLUSIONS: Several assay development methods found suitable probes and assay conditions for a functional test for all investigated mutation sites. Probe length, probe spacer length, and assay stringency sufficed as variable parameters in the search for a functional multiplex assay. We discuss the optimal assay development methods for several different scenarios.
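
The screening over probe lengths can be sketched with a helper that enumerates probes of several lengths centred on the mutation site and filters them by G+C content; the function, the length set, and the GC window are hypothetical illustrations, not the study's actual design rules:

```python
def gc_content(probe):
    """Fraction of G+C bases in a probe sequence."""
    probe = probe.upper()
    return (probe.count("G") + probe.count("C")) / len(probe)

def candidate_probes(sequence, lengths=(17, 19, 21, 23), gc_window=(0.35, 0.65)):
    """Enumerate probes of several lengths centred on the variant base
    (assumed to sit at the middle of `sequence`) and keep those with
    moderate GC content, so hybridisation stringency stays comparable
    across a multiplex assay.
    """
    centre = len(sequence) // 2
    probes = []
    for n in lengths:
        half = n // 2
        p = sequence[centre - half: centre - half + n]
        if len(p) == n and gc_window[0] <= gc_content(p) <= gc_window[1]:
            probes.append(p)
    return probes
```

Centring the variant base is a common choice in allele-specific hybridisation because a central mismatch destabilises the duplex more than one near either probe end.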

  3. A Decision Support Framework for Science-Based, Multi-Stakeholder Deliberation: A Coral Reef Example

    Science.gov (United States)

    Rehr, Amanda P.; Small, Mitchell J.; Bradley, Patricia; Fisher, William S.; Vega, Ann; Black, Kelly; Stockton, Tom

    2012-12-01

    We present a decision support framework for science-based assessment and multi-stakeholder deliberation. The framework consists of two parts: a DPSIR (Drivers-Pressures-States-Impacts-Responses) analysis to identify the important causal relationships among anthropogenic environmental stressors, processes, and outcomes; and a Decision Landscape analysis to depict the legal, social, and institutional dimensions of environmental decisions. The Decision Landscape incorporates interactions among government agencies, regulated businesses, non-government organizations, and other stakeholders. It also identifies where scientific information regarding environmental processes is collected and transmitted to improve knowledge about elements of the DPSIR and to improve the scientific basis for decisions. Our application of the decision support framework to coral reef protection and restoration in the Florida Keys, focusing on anthropogenic stressors such as wastewater, proved successful and offered several insights. Using information from a management plan, it was possible to capture the current state of the science with a DPSIR analysis, as well as important decision options, decision makers and applicable laws with the Decision Landscape analysis. A structured elicitation of values and beliefs conducted at a coral reef management workshop held in Key West, Florida provided a diversity of opinion and also indicated a prioritization of several environmental stressors affecting coral reef health. The integrated DPSIR/Decision Landscape framework for the Florida Keys, developed based on the elicited opinion and the DPSIR analysis, can be used to inform management decisions, to reveal the role that further scientific information and research might play in populating the framework, and to facilitate better-informed agreement among participants.

  4. Probe Selection for DNA Microarrays using OligoWiz

    DEFF Research Database (Denmark)

    Wernersson, Rasmus; Juncker, Agnieszka; Nielsen, Henrik Bjørn

    2007-01-01

    Nucleotide abundance measurements using DNA microarray technology are possible only if appropriate probes complementary to the target nucleotides can be identified. Here we present a protocol for selecting DNA probes for microarrays using the OligoWiz application. OligoWiz is a client-server application that offers a detailed graphical interface and real-time user interaction on the client side, and massive computer power and a large collection of species databases (400, summer 2007) on the server side. Probes are selected according to five weighted scores: cross-hybridization, deltaT(m), folding ... computer skills and can be executed from any Internet-connected computer. The probe selection procedure for a standard microarray design targeting all yeast transcripts can be completed in 1 h.
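
Several per-probe quality scores can be collapsed into one ranking by a weighted combination. The sketch below is a generic weighted average with made-up score names (following the abstract) and made-up weights and values — not OligoWiz's actual scoring function:

```python
def combined_score(scores, weights):
    """Weighted average of per-probe quality scores, each assumed in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in weights) / total

# Hypothetical weights and scores for two candidate probes; cross-hybridization
# is weighted most heavily here purely for illustration.
weights = {"cross_hyb": 2.0, "delta_tm": 1.0, "folding": 1.0}
probe_a = {"cross_hyb": 0.9, "delta_tm": 0.8, "folding": 0.7}
probe_b = {"cross_hyb": 0.4, "delta_tm": 0.9, "folding": 0.9}

best = max([("A", probe_a), ("B", probe_b)],
           key=lambda kv: combined_score(kv[1], weights))
print("best probe:", best[0])
```

Re-weighting the scores shifts the ranking, which is why the choice of weights is itself a design parameter of the probe selection.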

  5. Microarray-based IgE detection in tears of patients with vernal keratoconjunctivitis.

    Science.gov (United States)

    Leonardi, Andrea; Borghesan, Franco; Faggian, Diego; Plebani, Mario

    2015-11-01

    A specific allergen sensitization can be demonstrated in approximately half of vernal keratoconjunctivitis (VKC) patients by conventional allergy tests. The measurement of specific IgE in tears using a multiplex allergen microarray may offer advantages in identifying local sensitization to a specific allergen. In spring-summer 2011, serum and tear samples were collected from 10 active VKC patients (three females, seven males) and 10 age-matched normal subjects. Skin prick test, symptom score and full ophthalmological examination were performed. Specific serum and tear IgE were assayed using ImmunoCAP ISAC, a microarray containing 103 components derived from 47 allergens. Normal subjects were negative for the presence of specific IgE both in serum and in tears. Of the 10 VKC patients, six were positive for specific IgE in serum and/or tears. In three of these six patients, specific IgE was positive only in tears. Cross-reactivity between specific markers was found in three patients. Grass, tree, mite and animal, but also food, allergen-specific IgE were found in tears. Conjunctival provocation test performed out of season confirmed the specific local conjunctival reactivity. Multiple specific IgE measurements with single protein allergens using a microarray technique in tear samples are a useful, simple and non-invasive diagnostic tool. ImmunoCAP ISAC detects allergen sensitization at the component level and adds important information by defining both cross- and co-sensitization to a large variety of allergen molecules. The presence of specific IgE only in tears of VKC patients reinforces the concept of possible local sensitization. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  6. Analysis of Temporal-spatial Co-variation within Gene Expression Microarray Data in an Organogenesis Model

    Science.gov (United States)

    Ehler, Martin; Rajapakse, Vinodh; Zeeberg, Barry; Brooks, Brian; Brown, Jacob; Czaja, Wojciech; Bonner, Robert F.

    The gene networks underlying closure of the optic fissure during vertebrate eye development are poorly understood. We used a novel clustering method based on Laplacian Eigenmaps, a nonlinear dimension reduction method, to analyze microarray data from laser capture microdissected (LCM) cells at the site and developmental stages (days 10.5 to 12.5) of optic fissure closure. Our new method provided greater biological specificity than classical clustering algorithms in terms of identifying more biological processes and functions related to eye development as defined by Gene Ontology at lower false discovery rates. This new methodology builds on the advantages of LCM to isolate pure phenotypic populations within complex tissues and allows improved ability to identify critical gene products expressed at lower copy number. The combination of LCM of embryonic organs, gene expression microarrays, and extracting spatial and temporal co-variations appear to be a powerful approach to understanding the gene regulatory networks that specify mammalian organogenesis.
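
At the core of a Laplacian Eigenmaps embedding is the graph Laplacian built from a similarity matrix over samples or genes; the low-dimensional coordinates are then read off the eigenvectors of this matrix. A minimal sketch of the Laplacian construction only, on an invented similarity graph (the eigendecomposition step, e.g. via `scipy.linalg.eigh`, is omitted):

```python
def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W from a symmetric
    similarity (adjacency) matrix W, where D is the diagonal degree matrix."""
    n = len(W)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        d = sum(W[i])  # degree of node i
        for j in range(n):
            L[i][j] = (d if i == j else 0.0) - W[i][j]
    return L

# Tiny similarity graph over 4 genes: two tightly linked pairs, weakly bridged.
W = [[0, 1, 0.1, 0],
     [1, 0, 0, 0.1],
     [0.1, 0, 0, 1],
     [0, 0.1, 1, 0]]
L = graph_laplacian(W)
# Every row of a graph Laplacian sums to (numerically) zero.
print(max(abs(sum(row)) for row in L))
```

Clustering then operates on the eigenvectors belonging to the smallest nontrivial eigenvalues of L, which separate weakly connected groups of nodes.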

  7. Ontology based decision system for breast cancer diagnosis

    Science.gov (United States)

    Trabelsi Ben Ameur, Soumaya; Cloppet, Florence; Wendling, Laurent; Sellami, Dorra

    2018-04-01

    In this paper, we focus on analysis and diagnosis of breast masses inspired by expert concepts and rules. Accordingly, a Bag of Words is built based on the ontology of breast cancer diagnosis, accurately described in the Breast Imaging Reporting and Data System. To fill the gap between low-level knowledge and expert concepts, a semantic annotation is developed using a machine learning tool. Then, breast masses are classified as benign or malignant according to expert rules implicitly modeled with a set of classifiers (KNN, ANN, SVM and Decision Tree). This semantic context of analysis offers a framework in which we can include external factors and other meta-knowledge, such as patient risk factors, and exploit more than one modality. Based on MRI and DECEDM modalities, the developed system achieves a recognition rate of 99.7% with the Decision Tree, an improvement of 24.7% owing to the semantic analysis.

  8. Barriers to and Facilitators of Evidence-Based Decision Making at the Point of Care

    Directory of Open Access Journals (Sweden)

    Ann S. O’Malley MD, MPH

    2016-07-01

    Full Text Available Introduction: Physicians vary widely in how they treat some health conditions, despite strong evidence favoring certain treatments over others. We examined physicians’ perspectives on factors that support or hinder evidence-based decisions and the implications for delivery systems, payers, and policymakers. Methods: We used Choosing Wisely® recommendations to create four clinical vignettes for common types of decisions. We conducted semi-structured interviews with 36 specialists to identify factors that support or hinder evidence-based decisions. We examined these factors using a conceptual framework that includes six levels: patients, physicians, practice sites, organizations, networks and hospital affiliations, and the local market. In this model, population characteristics and payer and regulatory factors interact to influence decisions. Results: Patient openness to behavior modification, together with patient expectations, both facilitated and hindered physicians in making evidence-based recommendations. Physicians’ communication skills were the most commonly mentioned facilitator. Practice site, organization, and hospital system barriers included measures of emergency department throughput, the order in which test options are listed in electronic health records (EHRs), lack of relevant decision support in EHRs, and payment incentives that maximize billing and encourage procedures rather than medical management or counseling patients on behavior change. Factors from different levels interacted to undermine evidence-based care. Most physicians received billing feedback, but quality metrics on evidence-based service use were nonexistent for the four decisions in this study. Conclusions and Implications: Additional research and quality improvement may help to modify delivery systems to overcome barriers at multiple levels. Enhancing provider communication skills, improving decision support in EHRs, modifying workflows, and refining the design and interpretation of

  9. Reasoning in explanation-based decision making.

    Science.gov (United States)

    Pennington, N; Hastie, R

    1993-01-01

    A general theory of explanation-based decision making is outlined and the multiple roles of inference processes in the theory are indicated. A typology of formal and informal inference forms, originally proposed by Collins (1978a, 1978b), is introduced as an appropriate framework to represent inferences that occur in the overarching explanation-based process. Results from the analysis of verbal reports of decision processes are presented to demonstrate the centrality and systematic character of reasoning in a representative legal decision-making task.

  10. Calling Biomarkers in Milk Using a Protein Microarray on Your Smartphone

    Science.gov (United States)

    Ludwig, Susann K. J.; Tokarski, Christian; Lang, Stefan N.; van Ginkel, Leendert A.; Zhu, Hongying; Ozcan, Aydogan; Nielen, Michel W. F.

    2015-01-01

    Here we present the concept of a protein microarray-based fluorescence immunoassay for multiple biomarker detection in milk extracts by an ordinary smartphone. A multiplex immunoassay was designed on a microarray chip, having built-in positive and negative quality controls. After the immunoassay procedure, the 48 microspots were labelled with Quantum Dots (QD) depending on the protein biomarker levels in the sample. QD-fluorescence was subsequently detected by the smartphone camera under UV light excitation from LEDs embedded in a simple 3D-printed opto-mechanical smartphone attachment. The somewhat aberrant images obtained under such conditions, were corrected by newly developed Android-based software on the same smartphone, and protein biomarker profiles were calculated. The indirect detection of recombinant bovine somatotropin (rbST) in milk extracts based on altered biomarker profile of anti-rbST antibodies was selected as a real-life challenge. RbST-treated and untreated cows clearly showed reproducible treatment-dependent biomarker profiles in milk, in excellent agreement with results from a flow cytometer reference method. In a pilot experiment, anti-rbST antibody detection was multiplexed with the detection of another rbST-dependent biomarker, insulin-like growth factor 1 (IGF-1). Milk extract IGF-1 levels were found to be increased after rbST treatment and correlated with the results obtained from the reference method. These data clearly demonstrate the potential of the portable protein microarray concept towards simultaneous detection of multiple biomarkers. We envisage broad application of this ‘protein microarray on a smartphone’-concept for on-site testing, e.g., in food safety, environment and health monitoring. PMID:26308444

  11. Calling Biomarkers in Milk Using a Protein Microarray on Your Smartphone.

    Directory of Open Access Journals (Sweden)

    Susann K J Ludwig

    Full Text Available Here we present the concept of a protein microarray-based fluorescence immunoassay for multiple biomarker detection in milk extracts by an ordinary smartphone. A multiplex immunoassay was designed on a microarray chip, having built-in positive and negative quality controls. After the immunoassay procedure, the 48 microspots were labelled with Quantum Dots (QD) depending on the protein biomarker levels in the sample. QD-fluorescence was subsequently detected by the smartphone camera under UV light excitation from LEDs embedded in a simple 3D-printed opto-mechanical smartphone attachment. The somewhat aberrant images obtained under such conditions were corrected by newly developed Android-based software on the same smartphone, and protein biomarker profiles were calculated. The indirect detection of recombinant bovine somatotropin (rbST) in milk extracts based on the altered biomarker profile of anti-rbST antibodies was selected as a real-life challenge. RbST-treated and untreated cows clearly showed reproducible treatment-dependent biomarker profiles in milk, in excellent agreement with results from a flow cytometer reference method. In a pilot experiment, anti-rbST antibody detection was multiplexed with the detection of another rbST-dependent biomarker, insulin-like growth factor 1 (IGF-1). Milk extract IGF-1 levels were found to be increased after rbST treatment and correlated with the results obtained from the reference method. These data clearly demonstrate the potential of the portable protein microarray concept towards simultaneous detection of multiple biomarkers. We envisage broad application of this 'protein microarray on a smartphone'-concept for on-site testing, e.g., in food safety, environment and health monitoring.

  12. Particle swarm optimization of driving torque demand decision based on fuel economy for plug-in hybrid electric vehicle

    International Nuclear Information System (INIS)

    Shen, Peihong; Zhao, Zhiguo; Zhan, Xiaowen; Li, Jingwei

    2017-01-01

    In this paper, an energy management strategy based on logic thresholds is proposed for a plug-in hybrid electric vehicle. The plug-in hybrid electric vehicle powertrain model is established in MATLAB/Simulink based on experimental tests of the power components and validated by comparison with a verified simulation model built in AVL Cruise. The influence of the driving torque demand decision on the fuel economy of the plug-in hybrid electric vehicle is studied in simulation. An optimization method for the driving torque demand decision, i.e., the relationship between accelerator pedal opening and driving torque demand, is formulated from the perspective of fuel economy. A particle swarm optimization with dynamically changing inertia weight is used to optimize the decision parameters. The simulation results show that the optimized driving torque demand decision can improve the PHEV fuel economy by 15.8% and 14.5% in the New European Driving Cycle and the Worldwide harmonized Light vehicles Test cycle, respectively, using the same rule-based energy management strategy. The proposed optimization method provides a theoretical guide for calibrating the parameters of the driving torque demand decision to improve the fuel economy of real plug-in hybrid electric vehicles. - Highlights: • The influence of the driving torque demand decision on the fuel economy is studied. • The optimization method for the driving torque demand decision is formulated. • An improved particle swarm optimization is utilized to optimize the parameters. • Fuel economy is improved by using the optimized driving torque demand decision.
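
The optimization machinery can be sketched with a minimal particle swarm optimizer. For brevity this sketch uses a fixed inertia weight (the paper's variant changes the inertia weight dynamically) and a toy sphere objective standing in for the fuel-economy evaluation; all parameter values are illustrative:

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=1):
    """Minimal particle swarm optimizer minimizing f over [lo, hi]^dim.
    w is a constant inertia weight; c1/c2 scale the pulls toward the
    personal best and the global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the fuel-consumption objective: a sphere shifted to (2, 2).
best, val = pso(lambda x: sum((xi - 2.0) ** 2 for xi in x), dim=2)
print([round(b, 3) for b in best], round(val, 6))
```

In the paper's setting, each objective evaluation would instead run a drive-cycle simulation of the powertrain model with the candidate pedal-to-torque mapping.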

  13. Fabrication of Biomolecule Microarrays for Cell Immobilization Using Automated Microcontact Printing.

    Science.gov (United States)

    Foncy, Julie; Estève, Aurore; Degache, Amélie; Colin, Camille; Cau, Jean Christophe; Malaquin, Laurent; Vieu, Christophe; Trévisiol, Emmanuelle

    2018-01-01

    Biomolecule microarrays are generally produced by a conventional microarrayer, i.e., by contact or inkjet printing. Microcontact printing represents an alternative way of depositing biomolecules on solid supports, but even though various biomolecules have been successfully microcontact printed, the routine production of biomolecule microarrays by microcontact printing remains a challenging task and needs an effective, fast, robust, and low-cost automation process. Here, we describe the production of biomolecule microarrays composed of extracellular matrix protein for the fabrication of cell microarrays using an automated microcontact printing device. Large-scale cell microarrays can be reproducibly obtained by this method.

  14. CUDT: A CUDA Based Decision Tree Algorithm

    Directory of Open Access Journals (Sweden)

    Win-Tsung Lo

    2014-01-01

    Full Text Available Decision tree is one of the famous classification methods in data mining. Many studies have been proposed that focus on improving the performance of decision tree construction. However, those algorithms were developed to run on traditional distributed systems, and the latency of processing the huge volumes of data generated by ubiquitous sensing nodes cannot be improved without new technology. In order to improve data processing latency in large-scale data mining, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the performance of CUDT and compared it with a traditional CPU version. The results show that CUDT is 5 to 55 times faster than Weka-j48 and 18 times faster than SPRINT for large data sets.
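
The work a decision-tree builder spends most of its time on — and what a GPU version like CUDT parallelizes — is evaluating candidate splits. A CPU-side sketch of that inner step, searching one numeric feature for the threshold minimizing weighted Gini impurity (toy data; not CUDT's actual kernel):

```python
def gini(labels):
    """Gini impurity of a label multiset."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(xs, ys):
    """Exhaustively try each observed value as a threshold on one numeric
    feature and return (threshold, weighted child impurity). A GPU builder
    evaluates these independent candidates in parallel."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = ["a", "a", "a", "b", "b", "b"]
print(best_split(xs, ys))  # splits cleanly at 3.0 with impurity 0.0
```

Because each candidate threshold is scored independently, the loop maps naturally onto one GPU thread per candidate.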

  15. [Evidence-based Risk and Benefit Communication for Shared Decision Making].

    Science.gov (United States)

    Nakayama, Takeo

    2018-01-01

     Evidence-based medicine (EBM) can be defined as "the integration of the best research evidence with clinical expertise and a patient's unique values and circumstances". However, even with the best research evidence, many uncertainties can make clinical decisions difficult. As the social requirement of respecting patient values and preferences has been increasingly recognized, shared decision making (SDM) and consensus development between patients and clinicians have attracted attention. SDM is a process by which patients and clinicians make decisions and arrive at a consensus through interactive conversations and communications. During the process of SDM, patients and clinicians share information with each other on the goals they hope to achieve and responsibilities in meeting those goals. From the clinician's standpoint, information regarding the benefits and risks of potential treatment options based on current evidence and professional experience is provided to patients. From the patient's standpoint, information on personal values, preferences, and social roles is provided to clinicians. SDM is a sort of "wisdom" in the context of making autonomous decisions in uncertain, difficult situations through interactions and cooperation between patients and clinicians. Joint development of EBM and SDM will help facilitate patient-clinician relationships and improve the quality of healthcare.

  16. Exploring the use of internal and external controls for assessing microarray technical performance

    Directory of Open Access Journals (Sweden)

    Game Laurence

    2010-12-01

    Full Text Available Abstract Background The maturing of gene expression microarray technology and interest in the use of microarray-based applications for clinical and diagnostic applications calls for quantitative measures of quality. This manuscript presents a retrospective study characterizing several approaches to assess technical performance of microarray data measured on the Affymetrix GeneChip platform, including whole-array metrics and information from a standard mixture of external spike-in and endogenous internal controls. Spike-in controls were found to carry the same information about technical performance as whole-array metrics and endogenous "housekeeping" genes. These results support the use of spike-in controls as general tools for performance assessment across time, experimenters and array batches, suggesting that they have potential for comparison of microarray data generated across species using different technologies. Results A layered PCA modeling methodology that uses data from a number of classes of controls (spike-in hybridization, spike-in polyA+, internal RNA degradation, endogenous or "housekeeping" genes) was used for the assessment of microarray data quality. The controls provide information on multiple stages of the experimental protocol (e.g., hybridization, RNA amplification). External spike-in, hybridization and RNA labeling controls provide information related to both assay and hybridization performance whereas internal endogenous controls provide quality information on the biological sample. We find that the variance of the data generated from the external and internal controls carries critical information about technical performance; the PCA dissection of this variance is consistent with whole-array quality assessment based on a number of quality assurance/quality control (QA/QC) metrics.
Conclusions These results provide support for the use of both external and internal RNA control data to assess the technical quality of microarray

  17. BIOPHYSICAL PROPERTIES OF NUCLEIC ACIDS AT SURFACES RELEVANT TO MICROARRAY PERFORMANCE.

    Science.gov (United States)

    Rao, Archana N; Grainger, David W

    2014-04-01

    Both clinical and analytical metrics produced by microarray-based assay technology have recognized problems in reproducibility, reliability and analytical sensitivity. These issues are often attributed to poor understanding and control of nucleic acid behaviors and properties at solid-liquid interfaces. Nucleic acid hybridization, central to DNA and RNA microarray formats, depends on the properties and behaviors of single strand (ss) nucleic acids (e.g., probe oligomeric DNA) bound to surfaces. ssDNA's persistence length, radius of gyration, electrostatics, conformations on different surfaces and under various assay conditions, its chain flexibility and curvature, charging effects in ionic solutions, and fluorescent labeling all influence its physical chemistry and hybridization under assay conditions. Nucleic acid (e.g., both RNA and DNA) target interactions with immobilized ssDNA strands are highly impacted by these biophysical states. Furthermore, the kinetics, thermodynamics, and enthalpic and entropic contributions to DNA hybridization reflect global probe/target structures and interaction dynamics. Here we review several biophysical issues relevant to oligomeric nucleic acid molecular behaviors at surfaces and their influences on duplex formation that influence microarray assay performance. Correlation of biophysical aspects of single and double-stranded nucleic acids with their complexes in bulk solution is common. Such analysis at surfaces is not commonly reported, despite its importance to microarray assays. We seek to provide further insight into nucleic acid-surface challenges facing microarray diagnostic formats that have hindered their clinical adoption and compromise their research quality and value as genomics tools.

  18. BIOPHYSICAL PROPERTIES OF NUCLEIC ACIDS AT SURFACES RELEVANT TO MICROARRAY PERFORMANCE

    Science.gov (United States)

    Rao, Archana N.; Grainger, David W.

    2014-01-01

    Both clinical and analytical metrics produced by microarray-based assay technology have recognized problems in reproducibility, reliability and analytical sensitivity. These issues are often attributed to poor understanding and control of nucleic acid behaviors and properties at solid-liquid interfaces. Nucleic acid hybridization, central to DNA and RNA microarray formats, depends on the properties and behaviors of single strand (ss) nucleic acids (e.g., probe oligomeric DNA) bound to surfaces. ssDNA’s persistence length, radius of gyration, electrostatics, conformations on different surfaces and under various assay conditions, its chain flexibility and curvature, charging effects in ionic solutions, and fluorescent labeling all influence its physical chemistry and hybridization under assay conditions. Nucleic acid (e.g., both RNA and DNA) target interactions with immobilized ssDNA strands are highly impacted by these biophysical states. Furthermore, the kinetics, thermodynamics, and enthalpic and entropic contributions to DNA hybridization reflect global probe/target structures and interaction dynamics. Here we review several biophysical issues relevant to oligomeric nucleic acid molecular behaviors at surfaces and their influences on duplex formation that influence microarray assay performance. Correlation of biophysical aspects of single and double-stranded nucleic acids with their complexes in bulk solution is common. Such analysis at surfaces is not commonly reported, despite its importance to microarray assays. We seek to provide further insight into nucleic acid-surface challenges facing microarray diagnostic formats that have hindered their clinical adoption and compromise their research quality and value as genomics tools. PMID:24765522

  19. On the classification techniques in data mining for microarray data classification

    Science.gov (United States)

    Aydadenta, Husna; Adiwijaya

    2018-03-01

    Cancer is one of the deadliest diseases; according to WHO data, in 2015 there were 8.8 million deaths caused by cancer, and this number will increase every year if the disease is not detected earlier. Microarray data have become one of the most popular means of cancer identification in the health field, since microarray data can be used to examine levels of gene expression in particular cell samples, allowing thousands of genes to be analyzed simultaneously. Using data mining techniques, we can classify microarray samples and thus identify whether or not they indicate cancer. In this paper we discuss research applying several data mining techniques to microarray data, namely Support Vector Machine (SVM), Artificial Neural Network (ANN), Naive Bayes, k-Nearest Neighbor (kNN), and C4.5, together with a simulation of the Random Forest algorithm combined with dimensionality reduction using Relief. The results show that the accuracy of the Random Forest algorithm is higher than that of the other classification algorithms (SVM, ANN, Naive Bayes, kNN, and C4.5). It is hoped that this paper provides useful information about the speed, accuracy, performance and computational cost of each data mining classification technique on microarray data.
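
The Relief-based dimension reduction mentioned here weights each feature by how well it separates nearest neighbors of different classes. A basic Relief sketch on an invented two-class toy matrix (gene-expression-like values scaled to [0, 1]; real microarray use would run over thousands of features):

```python
import random

def relief(X, y, n_iter=100, seed=0):
    """Basic Relief feature weighting for a two-class problem: features that
    differ across the nearest miss but not the nearest hit gain weight."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    # Per-feature value range, used to normalize differences
    span = [max(r[j] for r in X) - min(r[j] for r in X) or 1.0 for j in range(p)]
    w = [0.0] * p

    def dist(a, b):
        return sum(((a[j] - b[j]) / span[j]) ** 2 for j in range(p))

    for _ in range(n_iter):
        i = rng.randrange(n)
        hits = [k for k in range(n) if k != i and y[k] == y[i]]
        misses = [k for k in range(n) if y[k] != y[i]]
        h = min(hits, key=lambda k: dist(X[i], X[k]))    # nearest hit
        m = min(misses, key=lambda k: dist(X[i], X[k]))  # nearest miss
        for j in range(p):
            w[j] += (abs(X[i][j] - X[m][j])
                     - abs(X[i][j] - X[h][j])) / (span[j] * n_iter)
    return w

# Toy "expression matrix": feature 0 tracks the class label, feature 1 is noise.
X = [[0.1, 0.5], [0.2, 0.9], [0.15, 0.1], [0.9, 0.6], [0.8, 0.2], [0.95, 0.8]]
y = [0, 0, 0, 1, 1, 1]
w = relief(X, y)
print([round(x, 2) for x in w])
```

Features are then ranked by weight and the top ones kept, shrinking the input to the downstream classifier.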

  20. Decision-making based on emotional images.

    Science.gov (United States)

    Katahira, Kentaro; Fujimura, Tomomi; Okanoya, Kazuo; Okada, Masato

    2011-01-01

    The emotional outcome of a choice affects subsequent decision making. While the relationship between decision making and emotion has attracted attention, studies on emotion and decision making have been independently developed. In this study, we investigated how the emotional valence of pictures, which was stochastically contingent on participants' choices, influenced subsequent decision making. In contrast to traditional value-based decision-making studies that used money or food as a reward, the "reward value" of the decision outcome, which guided the update of value for each choice, is unknown beforehand. To estimate the reward value of emotional pictures from participants' choice data, we used reinforcement learning models that have successfully been used in previous studies for modeling value-based decision making. Consequently, we found that the estimated reward value was asymmetric between positive and negative pictures. The negative reward value of negative pictures (relative to neutral pictures) was larger in magnitude than the positive reward value of positive pictures. This asymmetry was not observed in valence for an individual picture, which was rated by the participants regarding the emotion experienced upon viewing it. These results suggest that there may be a difference between experienced emotion and the effect of the experienced emotion on subsequent behavior. Our experimental and computational paradigm provides a novel way for quantifying how and what aspects of emotional events affect human behavior. The present study is a first step toward relating a large amount of knowledge in emotion science and in taking computational approaches to value-based decision making.
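
The reinforcement learning models referred to in this abstract are typically a prediction-error value update paired with a softmax choice rule. A minimal forward simulation of that model family; the subjective value numbers are invented purely to echo the reported asymmetry (the negative outcome weighing more than the positive one), and the parameters are arbitrary:

```python
import math
import random

def simulate(reward_values, alpha=0.3, beta=3.0, trials=500, seed=0):
    """Q-learning with a softmax choice rule. reward_values holds the
    (hypothetical) subjective value of each option's outcome."""
    rng = random.Random(seed)
    q = [0.0] * len(reward_values)
    picks = [0] * len(reward_values)
    for _ in range(trials):
        # Softmax choice probabilities over current values
        exps = [math.exp(beta * v) for v in q]
        z = sum(exps)
        r, acc = rng.random(), 0.0
        choice = len(exps) - 1
        for i, e in enumerate(exps):
            acc += e / z
            if r <= acc:
                choice = i
                break
        picks[choice] += 1
        # Prediction-error update toward the experienced outcome value
        q[choice] += alpha * (reward_values[choice] - q[choice])
    return q, picks

# Option 0 yields a mildly positive picture, option 1 a strongly negative one.
q, picks = simulate([0.3, -0.6])
print("choice counts:", picks, "learned values:", [round(v, 2) for v in q])
```

Model fitting inverts this simulation: given observed choices, the reward values (and alpha, beta) are estimated by maximum likelihood, which is how asymmetric values for positive and negative pictures can be recovered from behavior.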

  1. Decision making based on emotional images

    Directory of Open Access Journals (Sweden)

    Kentaro eKatahira

    2011-10-01

    Full Text Available The emotional outcome of a choice affects subsequent decision making. While the relationship between decision making and emotion has attracted attention, studies on emotion and decision making have been independently developed. In this study, we investigated how the emotional valence of pictures, which was stochastically contingent on participants’ choices, influenced subsequent decision making. In contrast to traditional value-based decision-making studies that used money or food as a reward, the reward value of the decision outcome, which guided the update of value for each choice, is unknown beforehand. To estimate the reward value of emotional pictures from participants' choice data, we used reinforcement learning models that have successfully been used in previous studies for modeling value-based decision making. Consequently, we found that the estimated reward value was asymmetric between positive and negative pictures. The negative reward value of negative pictures (relative to neutral pictures) was larger in magnitude than the positive reward value of positive pictures. This asymmetry was not observed in valence for an individual picture, which was rated by the participants regarding the emotion experienced upon viewing it. These results suggest that there may be a difference between experienced emotion and the effect of the experienced emotion on subsequent behavior. Our experimental and computational paradigm provides a novel way for quantifying how and what aspects of emotional events affect human behavior. The present study is a first step toward relating a large amount of knowledge in emotion science and in taking computational approaches to value-based decision making.

  2. MAGMA: analysis of two-channel microarrays made easy.

    Science.gov (United States)

    Rehrauer, Hubert; Zoller, Stefan; Schlapbach, Ralph

    2007-07-01

    The web application MAGMA provides a simple and intuitive interface to identify differentially expressed genes from two-channel microarray data. While the underlying algorithms are not superior to those of similar web applications, MAGMA is particularly user friendly and can be used without prior training. The user interface guides the novice user through the most typical microarray analysis workflow consisting of data upload, annotation, normalization and statistical analysis. It automatically generates R-scripts that document MAGMA's entire data processing steps, thereby allowing the user to regenerate all results in his local R installation. The implementation of MAGMA follows the model-view-controller design pattern that strictly separates the R-based statistical data processing, the web-representation and the application logic. This modular design makes the application flexible and easily extendible by experts in one of the fields: statistical microarray analysis, web design or software development. State-of-the-art Java Server Faces technology was used to generate the web interface and to perform user input processing. MAGMA's object-oriented modular framework makes it easily extendible and applicable to other fields and demonstrates that modern Java technology is also suitable for rather small and concise academic projects. MAGMA is freely available at www.magma-fgcz.uzh.ch.
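    MAGMA's own analysis runs in R via its generated scripts; purely as an illustration of the basic two-channel quantities such a tool works with, here is a minimal Python sketch. Global median normalization and a plain fold-change cutoff are deliberate simplifications — a real analysis would use moderated statistics (e.g. limma).

    ```python
    import math
    import statistics

    def normalized_log_ratios(cy3, cy5):
        """Per-gene log2 ratios (M values) for a two-channel array,
        with simple global median normalization (centers M at 0)."""
        m = [math.log2(r / g) for g, r in zip(cy3, cy5)]
        med = statistics.median(m)
        return [v - med for v in m]

    def differential_genes(m_by_array, threshold=1.0):
        """Flag genes whose mean |M| across replicate arrays exceeds
        a log2 fold-change threshold (1.0 -> two-fold change)."""
        n_genes = len(m_by_array[0])
        flags = []
        for i in range(n_genes):
            mean_m = statistics.mean(arr[i] for arr in m_by_array)
            flags.append(abs(mean_m) >= threshold)
        return flags
    ```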

  3. Health decision making: lynchpin of evidence-based practice.

    Science.gov (United States)

    Spring, Bonnie

    2008-01-01

    Health decision making is both the lynchpin and the least developed aspect of evidence-based practice. The evidence-based practice process requires integrating the evidence with consideration of practical resources and patient preferences and doing so via a process that is genuinely collaborative. Yet, the literature is largely silent about how to accomplish integrative, shared decision making. Implications for evidence-based practice are discussed for 2 theories of clinician decision making (expected utility and fuzzy trace) and 2 theories of patient health decision making (the transtheoretical model and reasoned action). Three suggestions are offered. First, it would be advantageous to have theory-based algorithms that weight and integrate the 3 data strands (evidence, resources, preferences) in different decisional contexts. Second, patients, not providers, make the decisions of greatest impact on public health, and those decisions are behavioral. Consequently, theory explicating how provider-patient collaboration can influence patient lifestyle decisions made miles from the provider's office is greatly needed. Third, although the preponderance of data on complex decisions supports a computational approach, such an approach to evidence-based practice is too impractical to be widely applied at present. More troubling, until patients come to trust decisions made computationally more than they trust their providers' intuitions, patient adherence will remain problematic. A good theory of integrative, collaborative health decision making remains needed.

  4. Do educational interventions improve nurses' clinical decision making and judgement? A systematic review.

    Science.gov (United States)

    Thompson, Carl; Stapley, Sally

    2011-07-01

    Despite the growing popularity of decision making in nursing curricula, the effectiveness of educational interventions to improve nursing judgement and decision making is unknown. We sought to synthesise and summarise the comparative evidence for educational interventions to improve nursing judgements and clinical decisions. A systematic review. Electronic databases: Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, CINAHL and PsycINFO, Social Sciences Citation Index, OpenSIGLE conference proceedings and hand searching of nursing journals. Studies published since 1960 that reported any educational intervention aiming to improve nurses' clinical judgements or decision making were included. Studies were assessed for relevance and quality. Data extracted included study design; educational setting; the nature of participants; whether the study was concerned with the clinical application of skills or the application of theory; the type of decision targeted by the intervention (e.g. diagnostic reasoning) and whether the evaluation of the intervention focused on efficacy or effectiveness. A narrative approach to study synthesis was used due to heterogeneity in interventions, study samples, outcomes and settings and incomplete reporting of effect sizes. From 5262 initial citations 24 studies were included in the review. A variety of educational approaches were reported. Study quality and content reporting was generally poor. Pedagogical theories were widely used but use of decision theory (with the exception of subjective expected utility theory implicit in decision analysis) was rare. The effectiveness and efficacy of interventions was mixed. Educational interventions to improve nurses' judgements and decisions are complex and the evidence from comparative studies does little to reduce the uncertainty about 'what works'. Nurse educators need to pay attention to decision theory, as well as pedagogical theory, in the design of interventions. Study design and

  5. Visual Analysis of DNA Microarray Data for Accurate Molecular Identification of Non-albicans Candida Isolates from Patients with Candidemia Episodes

    OpenAIRE

    De Luca Ferrari, Michela; Ribeiro Resende, Mariângela; Sakai, Kanae; Muraosa, Yasunori; Lyra, Luzia; Gonoi, Tohru; Mikami, Yuzuru; Tominaga, Kenichiro; Kamei, Katsuhiko; Zaninelli Schreiber, Angelica; Trabasso, Plinio; Moretti, Maria Luiza

    2013-01-01

    The performance of a visual slide-based DNA microarray for the identification of non-albicans Candida spp. was evaluated. Among 167 isolates that had previously been identified by Vitek 2, the agreement between DNA microarray and sequencing results was 97.6%. This DNA microarray platform showed excellent performance.

  6. Comparing microarrays and next-generation sequencing technologies for microbial ecology research.

    Science.gov (United States)

    Roh, Seong Woon; Abell, Guy C J; Kim, Kyoung-Ho; Nam, Young-Do; Bae, Jin-Woo

    2010-06-01

    Recent advances in molecular biology have resulted in the application of DNA microarrays and next-generation sequencing (NGS) technologies to the field of microbial ecology. This review aims to examine the strengths and weaknesses of each of the methodologies, including depth and ease of analysis, throughput and cost-effectiveness. It also intends to highlight the optimal application of each of the individual technologies toward the study of a particular environment and identify potential synergies between the two main technologies, whereby both sample number and coverage can be maximized. We suggest that the efficient use of microarray and NGS technologies will allow researchers to advance the field of microbial ecology, and importantly, improve our understanding of the role of microorganisms in their various environments.

  7. Parallel constraint satisfaction in memory-based decisions.

    Science.gov (United States)

    Glöckner, Andreas; Hodges, Sara D

    2011-01-01

    Three studies sought to investigate decision strategies in memory-based decisions and to test the predictions of the parallel constraint satisfaction (PCS) model for decision making (Glöckner & Betsch, 2008). Time pressure was manipulated and the model was compared against simple heuristics (take the best and equal weight) and a weighted additive strategy. From PCS we predicted that fast intuitive decision making is based on compensatory information integration and that decision time increases and confidence decreases with increasing inconsistency in the decision task. In line with these predictions we observed a predominant usage of compensatory strategies under all time-pressure conditions and even with decision times as short as 1.7 s. For a substantial number of participants, choices and decision times were best explained by PCS, but there was also evidence for use of simple heuristics. The time-pressure manipulation did not significantly affect decision strategies. Overall, the results highlight intuitive, automatic processes in decision making and support the idea that human information-processing capabilities are less severely bounded than often assumed.

  8. A decision support system improves the interpretation of myocardial perfusion imaging

    DEFF Research Database (Denmark)

    Tagil, K.; Bondouy, M.; Chaborel, J.P.

    2008-01-01

    PURPOSE: The aim of this study was to investigate the influence of a computer-based decision support system (DSS) on performance and inter-observer variability of interpretations regarding ischaemia and infarction in myocardial perfusion scintigraphy (MPS). METHODS: Seven physicians independently...... with the advice of the DSS showed less inter-observer variability than those made without advice. CONCLUSION: This study shows that a DSS can improve performance and reduce the inter-observer variability of interpretations in myocardial perfusion imaging. Both experienced and, especially, inexperienced......

  9. Effects of Video-Based Visual Training on Decision-Making and Reactive Agility in Adolescent Football Players

    Directory of Open Access Journals (Sweden)

    Alfred Nimmerichter

    2015-12-01

    Full Text Available This study investigated the trainability of decision-making and reactive agility via video-based visual training in young athletes. Thirty-four members of a national football academy (age: 14.4 ± 0.1 years) were randomly assigned to a training (VIS; n = 18) or a control group (CON; n = 16). In addition to the football training, the VIS completed a video-based visual training twice a week over a period of six weeks during the competition phase. Using the temporal occlusion technique, the players were instructed to react to one-on-one situations shown in 40 videos. The number of successful decisions and the response time were measured with a video-based test. In addition, the reactive-agility sprint test was used. VIS significantly improved the number of successful decisions (22.2 ± 3.6 vs. 29.8 ± 4.5; p < 0.001), response time (0.41 ± 0.10 s vs. 0.31 ± 0.10 s; p = 0.006) and reactive agility (2.22 ± 0.33 s vs. 1.94 ± 0.11 s; p = 0.001) pre- vs. post-training. No significant differences were found for CON. The results have shown that video-based visual training improves the time to make decisions as well as reactive-agility sprint time, accompanied by an increase in successful decisions. It remains to be shown whether or not such training can improve simulated or actual game performance.

  10. A Lateral Flow Protein Microarray for Rapid and Sensitive Antibody Assays

    Directory of Open Access Journals (Sweden)

    Helene Andersson-Svahn

    2011-11-01

    Full Text Available Protein microarrays are useful tools for highly multiplexed determination of the presence or levels of clinically relevant biomarkers in human tissues and biofluids. However, such tools have thus far been restricted to laboratory environments. Here, we present a novel 384-plexed, easy-to-use lateral flow protein microarray device capable of sensitive (<30 ng/mL) determination of antigen-specific antibodies in ten minutes of total assay time. Results were developed with gold nanobeads and could be recorded by a cell-phone camera or table-top scanner. Excellent accuracy with an area under the curve (AUC) of 98% was achieved in comparison with an established glass microarray assay for 26 antigen-specific antibodies. We propose that the presented framework could find use in convenient and cost-efficient quality control of antibody production, as well as in providing a platform for multiplexed affinity-based assays in low-resource or mobile settings.

  11. Managing health care decisions and improvement through simulation modeling.

    Science.gov (United States)

    Forsberg, Helena Hvitfeldt; Aronsson, Håkan; Keller, Christina; Lindblad, Staffan

    2011-01-01

    Simulation modeling is a way to test changes in a computerized environment to give ideas for improvements before implementation. This article reviews research literature on simulation modeling as support for health care decision making. The aim is to investigate the experience and potential value of such decision support and quality of articles retrieved. A literature search was conducted, and the selection criteria yielded 59 articles derived from diverse applications and methods. Most met the stated research-quality criteria. This review identified how simulation can facilitate decision making and that it may induce learning. Furthermore, simulation offers immediate feedback about proposed changes, allows analysis of scenarios, and promotes communication on building a shared system view and understanding of how a complex system works. However, only 14 of the 59 articles reported on implementation experiences, including how decision making was supported. On the basis of these articles, we proposed steps essential for the success of simulation projects, not just in the computer, but also in clinical reality. We also presented a novel concept combining simulation modeling with the established plan-do-study-act cycle for improvement. Future scientific inquiries concerning implementation, impact, and the value for health care management are needed to realize the full potential of simulation modeling.

  12. Microarray-based analysis of differential gene expression between infective and noninfective larvae of Strongyloides stercoralis.

    Directory of Open Access Journals (Sweden)

    Roshan Ramanathan

    2011-05-01

    Full Text Available Differences between noninfective first-stage (L1) and infective third-stage (L3i) larvae of the parasitic nematode Strongyloides stercoralis at the molecular level are relatively uncharacterized. DNA microarrays were developed and utilized for this purpose. Oligonucleotide hybridization probes for the array were designed to bind 3,571 putative mRNA transcripts predicted by analysis of 11,335 expressed sequence tags (ESTs) obtained as part of the Nematode EST project. RNA obtained from S. stercoralis L3i and L1 was co-hybridized to each array after labeling the individual samples with different fluorescent tags. Bioinformatic predictions of gene function were developed using a novel cDNA Annotation System software. We identified 935 differentially expressed genes (469 L3i-biased; 466 L1-biased) having two-fold expression differences or greater and microarray signals with a p value < 0.01. Based on a functional analysis, L1 larvae have a larger number of genes putatively involved in transcription (p = 0.004), and L3i larvae have biased expression of putative heat shock proteins (such as hsp-90). Genes with products known to be immunoreactive in S. stercoralis-infected humans (such as SsIR and NIE) had L3i-biased expression. Abundantly expressed L3i contigs of interest included S. stercoralis orthologs of cytochrome oxidase ucr 2.1 and hsp-90, which may be potential chemotherapeutic targets. The S. stercoralis ortholog of fatty acid and retinol binding protein-1, successfully used in a vaccine against Ancylostoma ceylanicum, was identified among the 25 most highly expressed L3i genes. The sperm-containing glycoprotein domain, utilized in a vaccine against the nematode Cooperia punctata, was exclusively found in L3i-biased genes and may be a valuable S. stercoralis target of interest. A new DNA microarray tool for the examination of S. stercoralis biology has been developed and provides new and valuable insights regarding differences between infective and noninfective larvae.

  13. Clonidine improved laboratory-measured decision-making performance in abstinent heroin addicts.

    Directory of Open Access Journals (Sweden)

    Xiao-Li Zhang

    Full Text Available BACKGROUND: Impulsivity refers to a wide spectrum of actions characterized by quick and nonplanned reactions to external and internal stimuli, without taking into account the possible negative consequences for the individual or others, and decision-making is one of the biologically dissociated impulsive behaviors. Changes in impulsivity may be associated with norepinephrine. Various populations of drug addicts all perform impulsive decision making, which is a key risk factor in drug dependence and relapse. The present study investigated the effects of clonidine, which decreases norepinephrine release through presynaptic alpha-2 receptor activation, on the impaired decision-making performance of abstinent heroin addicts. METHODOLOGY/PRINCIPAL FINDINGS: Decision-making performance was assessed using the original version of the Iowa Gambling Task (IGT). Both heroin addicts and normal controls were randomly assigned to three groups receiving 0, 75 µg or 150 µg of clonidine orally under double-blind conditions. Psychiatric symptoms, including anxiety, depression and impulsivity, were rated on standardized scales. Heroin addicts reported higher scores on the Barratt Impulsiveness Scale and exhibited impaired decision-making on the IGT. A single high dose of clonidine improved the decision-making performance of heroin addicts. CONCLUSIONS/SIGNIFICANCE: Our results suggest clonidine may have a potential therapeutic role in heroin addiction by improving impaired impulsive decision-making. The current findings have important implications for behavioral and pharmacological interventions targeting decision-making in heroin addiction.

  14. Effort-Based Decision-Making in Schizophrenia.

    Science.gov (United States)

    Culbreth, Adam J; Moran, Erin K; Barch, Deanna M

    2018-08-01

    Motivational impairment has long been associated with schizophrenia but the underlying mechanisms are not clearly understood. Recently, a small but growing literature has suggested that aberrant effort-based decision-making may be a potential contributory mechanism for motivational impairments in psychosis. Specifically, multiple reports have consistently demonstrated that individuals with schizophrenia are less willing than healthy controls to expend effort to obtain rewards. Further, this effort-based decision-making deficit has been shown to correlate with severity of negative symptoms and level of functioning, in many but not all studies. In the current review, we summarize this literature and discuss several factors that may underlie aberrant effort-based decision-making in schizophrenia.

  15. Data-Based Decision Making at the Policy, Research, and Practice Levels

    NARCIS (Netherlands)

    Schildkamp, Kim; Ebbeler, J.

    2015-01-01

    Data-based decision making (DBDM) can lead to school improvement. However, schools struggle with the implementation of DBDM. In this symposium, we will discuss research and the implementation of DBDM at the national and regional policy level and the classroom level. We will discuss policy issues

  16. The use of microarrays in microbial ecology

    Energy Technology Data Exchange (ETDEWEB)

    Andersen, G.L.; He, Z.; DeSantis, T.Z.; Brodie, E.L.; Zhou, J.

    2009-09-15

    Microarrays have proven to be a useful and high-throughput method to provide targeted DNA sequence information for up to many thousands of specific genetic regions in a single test. A microarray consists of multiple DNA oligonucleotide probes that, under high stringency conditions, hybridize only to specific complementary nucleic acid sequences (targets). A fluorescent signal indicates the presence and, in many cases, the abundance of genetic regions of interest. In this chapter we will look at how microarrays are used in microbial ecology, especially with the recent increase in microbial community DNA sequence data. Of particular interest to microbial ecologists, phylogenetic microarrays are used for the analysis of phylotypes in a community and functional gene arrays are used for the analysis of functional genes, and, by inference, phylotypes in environmental samples. A phylogenetic microarray that has been developed by the Andersen laboratory, the PhyloChip, will be discussed as an example of a microarray that targets the known diversity within the 16S rRNA gene to determine microbial community composition. Using multiple, confirmatory probes to increase the confidence of detection and a mismatch probe for every perfect match probe to minimize the effect of cross-hybridization by non-target regions, the PhyloChip is able to simultaneously identify any of thousands of taxa present in an environmental sample. The PhyloChip is shown to reveal greater diversity within a community than rRNA gene sequencing due to the placement of the entire gene product on the microarray compared with the analysis of up to thousands of individual molecules by traditional sequencing methods. A functional gene array that has been developed by the Zhou laboratory, the GeoChip, will be discussed as an example of a microarray that dynamically identifies functional activities of multiple members within a community. The recent version of GeoChip contains more than 24,000 50mer
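    The perfect-match/mismatch probe logic described above can be sketched as a simple detection rule. The thresholds below (intensity difference, PM/MM ratio, fraction of positive probe pairs) are illustrative placeholders, not the published PhyloChip scoring parameters.

    ```python
    def probe_pair_positive(pm, mm, min_diff=130.0, min_ratio=1.3):
        """A probe pair responds if the perfect-match intensity clearly
        exceeds its mismatch control (guards against cross-hybridization)."""
        return (pm - mm) > min_diff and (mm == 0 or pm / mm > min_ratio)

    def taxon_present(probe_pairs, min_fraction=0.9):
        """Call a taxon present when most of its confirmatory probe pairs
        respond (multiple probes increase the confidence of detection)."""
        positive = sum(probe_pair_positive(pm, mm) for pm, mm in probe_pairs)
        return positive / len(probe_pairs) >= min_fraction
    ```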

  17. Tissue microarray immunohistochemical detection of brachyury is not a prognostic indicator in chordoma.

    Science.gov (United States)

    Zhang, Linlin; Guo, Shang; Schwab, Joseph H; Nielsen, G Petur; Choy, Edwin; Ye, Shunan; Zhang, Zhan; Mankin, Henry; Hornicek, Francis J; Duan, Zhenfeng

    2013-01-01

    Brachyury is a marker for notochord-derived tissues and neoplasms, such as chordoma. However, the prognostic relevance of brachyury expression in chordoma is still unknown. The improvement of tissue microarray technology has provided the opportunity to perform analyses of tumor tissues on a large scale in a uniform and consistent manner. This study was designed to use tissue microarray to determine the expression of brachyury. Brachyury expression in tissues from 78 chordoma patients was analyzed by immunohistochemical staining of a tissue microarray. Clinicopathologic parameters, including gender, age, location of tumor and metastatic status, were evaluated. Fifty-nine of 78 (75.64%) tumors showed nuclear staining for brachyury, and among them, 29 tumors (49.15%) showed 1+ staining. However, there was no significant relationship between brachyury expression and other clinical variables. By Kaplan-Meier analysis, brachyury expression failed to show any significant relationship with the overall survival rate. In conclusion, brachyury expression is not a prognostic indicator in chordoma.

  18. Validating evidence based decision making in health care

    DEFF Research Database (Denmark)

    Nüssler, Emil Karl; Eskildsen, Jacob Kjær; Håkonsson, Dorthe Døjbak

    Surgeons who perform prolapse surgeries face the dilemma of choosing to use mesh, with its assumed benefits, and the risks associated with mesh. In this paper, we examine whether decisions to use mesh are evidence based. Based on data of 30,398 patients from the Swedish National Quality Register of Gynecological Surgery, we examine factors related to decisions to use mesh. Our results indicate that decisions to use mesh are not evidence based, and can be explained neither by FDA safety communications nor by medical conditions usually assumed to predict its usage. Instead, decisions to use mesh are highly influenced by the geographical placement of surgeons. Therefore, decisions to use mesh are boundedly rational, rather than rational.

  19. Graph Based Study of Allergen Cross-Reactivity of Plant Lipid Transfer Proteins (LTPs) Using Microarray in a Multicenter Study

    Science.gov (United States)

    Palacín, Arantxa; Gómez-Casado, Cristina; Rivas, Luis A.; Aguirre, Jacobo; Tordesillas, Leticia; Bartra, Joan; Blanco, Carlos; Carrillo, Teresa; Cuesta-Herranz, Javier; de Frutos, Consolación; Álvarez-Eire, Genoveva García; Fernández, Francisco J.; Gamboa, Pedro; Muñoz, Rosa; Sánchez-Monge, Rosa; Sirvent, Sofía; Torres, María J.; Varela-Losada, Susana; Rodríguez, Rosalía; Parro, Victor; Blanca, Miguel; Salcedo, Gabriel; Díaz-Perales, Araceli

    2012-01-01

    The study of cross-reactivity in allergy is key to both understanding the allergic response of many patients and providing them with a rational treatment. In the present study, protein microarrays and a co-sensitization graph approach were used in conjunction with an allergen microarray immunoassay. This enabled us to include a wide number of proteins and a large number of patients, and to study sensitization profiles among members of the LTP family. Fourteen LTPs from the most frequent plant food-induced allergies in the geographical area studied were printed onto a microarray specifically designed for this research. 212 patients with fruit allergy and 117 food-tolerant pollen allergic subjects were recruited from seven regions of Spain with different pollen profiles, and their sera were tested with the allergen microarray. This approach has proven to be a good tool to study cross-reactivity between members of the LTP family, and could become a useful strategy to analyze other families of allergens. PMID:23272072

  20. Graph based study of allergen cross-reactivity of plant lipid transfer proteins (LTPs using microarray in a multicenter study.

    Directory of Open Access Journals (Sweden)

    Arantxa Palacín

    Full Text Available The study of cross-reactivity in allergy is key to both understanding the allergic response of many patients and providing them with a rational treatment. In the present study, protein microarrays and a co-sensitization graph approach were used in conjunction with an allergen microarray immunoassay. This enabled us to include a wide number of proteins and a large number of patients, and to study sensitization profiles among members of the LTP family. Fourteen LTPs from the most frequent plant food-induced allergies in the geographical area studied were printed onto a microarray specifically designed for this research. 212 patients with fruit allergy and 117 food-tolerant pollen allergic subjects were recruited from seven regions of Spain with different pollen profiles, and their sera were tested with the allergen microarray. This approach has proven to be a good tool to study cross-reactivity between members of the LTP family, and could become a useful strategy to analyze other families of allergens.

  1. A multicriteria decision making approach based on fuzzy theory and credibility mechanism for logistics center location selection.

    Science.gov (United States)

    Wang, Bowen; Xiong, Haitao; Jiang, Chengrui

    2014-01-01

    As a hot topic in supply chain management, fuzzy methods have been widely used in logistics center location selection to improve the reliability and suitability of the selection with respect to the impacts of both qualitative and quantitative factors. However, existing approaches do not consider the consistency and the historical assessment accuracy of experts in previous decisions. This paper therefore proposes a multicriteria decision making model based on the credibility of decision makers, introducing a consistency and historical-assessment-accuracy mechanism into the fuzzy multicriteria decision making approach. In this way, only decision makers who pass the credibility check are qualified to perform the further assessment. Finally, a practical example is analyzed to illustrate how to use the model. The result shows that the fuzzy multicriteria decision making model based on a credibility mechanism can improve the reliability and suitability of site selection for the logistics center.
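    A minimal sketch of the selection scheme the abstract describes: a credibility gate on the experts, followed by aggregation of their fuzzy ratings. The triangular fuzzy numbers, the centroid defuzzification, and the 0.7 thresholds are illustrative assumptions, not the paper's actual formulation.

    ```python
    def defuzzify(tfn):
        """Centroid of a triangular fuzzy number (a, b, c)."""
        a, b, c = tfn
        return (a + b + c) / 3.0

    def select_site(experts, min_consistency=0.7, min_accuracy=0.7):
        """Keep only credible experts (consistency and historical accuracy
        above threshold), average their defuzzified ratings per candidate
        site, and return the index of the best-scoring site."""
        qualified = [e for e in experts
                     if e["consistency"] >= min_consistency
                     and e["accuracy"] >= min_accuracy]
        n_sites = len(qualified[0]["ratings"])
        scores = [sum(defuzzify(e["ratings"][i]) for e in qualified) / len(qualified)
                  for i in range(n_sites)]
        return max(range(n_sites), key=scores.__getitem__), scores
    ```

    The gate changes the outcome whenever an unreliable expert would otherwise dominate the average — here, an expert with low consistency is simply excluded before aggregation.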

  2. Computational biology of genome expression and regulation--a review of microarray bioinformatics.

    Science.gov (United States)

    Wang, Junbai

    2008-01-01

    Microarray technology is widely used in various biomedical research areas, and the corresponding microarray data analysis is an essential step toward making the best use of array technologies. Here we review two components of microarray data analysis: low-level analysis, which covers the design, quality control, and preprocessing of microarray experiments, and high-level analysis, which focuses on domain-specific microarray applications such as tumor classification, biomarker prediction, analysis of array CGH experiments, and reverse engineering of gene expression networks. Additionally, we review recent developments in building predictive models in genome expression and regulation studies. This review may help biologists grasp a basic knowledge of microarray bioinformatics as well as its potential impact on the future evolution of biomedical research fields.

  3. Expression microarray reproducibility is improved by optimising purification steps in RNA amplification and labelling

    Directory of Open Access Journals (Sweden)

    Brenton James D

    2004-01-01

    Full Text Available Abstract Background Expression microarrays have evolved into a powerful tool with great potential for clinical application and therefore reliability of data is essential. RNA amplification is used when the amount of starting material is scarce, as is frequently the case with clinical samples. Purification steps are critical in RNA amplification and labelling protocols, and there is a lack of sufficient data to validate and optimise the process. Results Here the purification steps involved in the protocol for indirect labelling of amplified RNA are evaluated and the experimentally determined best method for each step with respect to yield, purity, size distribution of the transcripts, and dye coupling is used to generate targets tested in replicate hybridisations. DNase treatment of diluted total RNA samples followed by phenol extraction is the optimal way to remove genomic DNA contamination. Purification of double-stranded cDNA is best achieved by phenol extraction followed by isopropanol precipitation at room temperature. Extraction with guanidinium-phenol and Lithium Chloride precipitation are the optimal methods for purification of amplified RNA and labelled aRNA respectively. Conclusion This protocol provides targets that generate highly reproducible microarray data with good representation of transcripts across the size spectrum and a coefficient of repeatability significantly better than that reported previously.

  4. The Research of Clinical Decision Support System Based on Three-Layer Knowledge Base Model

    Directory of Open Access Journals (Sweden)

    Yicheng Jiang

    2017-01-01

    Full Text Available In many clinical decision support systems, a two-layer knowledge base model (disease-symptom) of rule reasoning is used. This model often does not express knowledge very well, since it simply infers diseases from the presence of certain symptoms. In this study, we propose a three-layer knowledge base model (disease-symptom-property) to utilize more useful information in inference. The system iteratively calculates the probability that a patient suffers from each disease based on a multisymptom naive Bayes algorithm, in which the specificity of the disease symptoms is weighted by an estimate of their degree of contribution to diagnosing the disease. This significantly reduces the dependencies between attributes, so that the naive Bayes algorithm can be applied more properly. Then, an online learning process for parameter optimization of the inference engine was completed. Finally, our decision support system utilizing the three-layer model was formally evaluated by two experienced doctors. By comparing predictions with clinical results, we show that our system can provide effective clinical recommendations to doctors. Moreover, we found that the three-layer model can improve the accuracy of predictions compared with the two-layer model. In light of some of the limitations of this study, we also identify and discuss several areas that need continued improvement.
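    The weighted naive Bayes scoring the abstract outlines can be approximated as follows. This sketch flattens the three-layer (disease-symptom-property) structure into per-finding likelihoods; the specificity weights and the smoothing constant are illustrative assumptions, not the paper's estimated parameters.

    ```python
    import math

    def rank_diseases(priors, likelihoods, findings, weights=None):
        """Rank diseases by log prior plus specificity-weighted log-likelihoods
        of the observed findings (naive Bayes, conditional independence)."""
        weights = weights or {}
        scores = {}
        for disease, prior in priors.items():
            score = math.log(prior)
            for finding in findings:
                # Small floor avoids log(0) for findings unseen with this disease.
                p = likelihoods.get(disease, {}).get(finding, 1e-6)
                # The weight reflects how specific the finding is for diagnosis.
                score += weights.get(finding, 1.0) * math.log(p)
            scores[disease] = score
        return sorted(scores, key=scores.get, reverse=True)
    ```

    With equal priors, the ranking is driven entirely by how strongly each disease predicts the observed findings, scaled by the specificity weights.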

  5. A comparative analysis of DNA barcode microarray feature size

    Directory of Open Access Journals (Sweden)

    Smith Andrew M

    2009-10-01

    Full Text Available Abstract Background Microarrays are an invaluable tool in many modern genomic studies. It is generally perceived that decreasing the size of microarray features leads to arrays with higher resolution (due to greater feature density), but this increase in resolution can compromise sensitivity. Results We demonstrate that barcode microarrays with smaller features are equally capable of detecting variation in DNA barcode intensity when compared to larger feature sizes within a specific microarray platform. The barcodes used in this study are the well-characterized set derived from the Yeast KnockOut (YKO) collection used for screens of pooled yeast (Saccharomyces cerevisiae) deletion mutants. We treated these pools with the glycosylation inhibitor tunicamycin as a test compound. Three generations of barcode microarrays at 30, 8 and 5 μm feature sizes independently identified the primary target of tunicamycin to be ALG7. Conclusion We show that the data obtained with the 5 μm feature size are of comparable quality to the 30 μm size and propose that further shrinking of features could yield barcode microarrays with equal or greater resolving power and, more importantly, higher density.

  6. Preparation of Biomolecule Microstructures and Microarrays by Thiol–ene Photoimmobilization

    NARCIS (Netherlands)

    Weinrich, Dirk; Köhn, Maja; Jonkheijm, Pascal; Westerlind, Ulrika; Dehmelt, Leif; Engelkamp, Hans; Christianen, Peter C.M.; Kuhlmann, Jürgen; Maan, Jan C.; Nüsse, Dirk; Schröder, Hendrik; Wacker, Ron; Voges, Edgar; Breinbauer, Rolf; Kunz, Horst; Niemeyer, Christof M.; Waldmann, Herbert

    2010-01-01

    A mild, fast and flexible method for photoimmobilization of biomolecules based on the light-initiated thiol–ene reaction has been developed. After investigation and optimization of various surface materials, surface chemistries and reaction parameters, microstructures and microarrays of biotin,

  7. Protein-protein interactions: an application of Tus-Ter mediated protein microarray system.

    Science.gov (United States)

    Sitaraman, Kalavathy; Chatterjee, Deb K

    2011-01-01

    In this chapter, we present a novel, cost-effective microarray strategy that utilizes expression-ready plasmid DNAs to generate protein arrays on-demand and its use to validate protein-protein interactions. These expression plasmids were constructed to serve the dual purpose of synthesizing the protein of interest and capturing the synthesized protein. The microarray system is based on the high affinity binding of Escherichia coli "Tus" protein to "Ter," a 20 bp DNA sequence involved in the regulation of DNA replication. Protein expression is carried out in a cell-free protein synthesis system, with rabbit reticulocyte lysates, and the target proteins are detected with either labeled tag-specific or gene-specific antibodies. This microarray system has been successfully used for the detection of protein-protein interaction because both the target protein and the query protein can be transcribed and translated simultaneously in the microarray slides. The utility of this system for detecting protein-protein interaction is demonstrated by a few well-known examples: Jun/Fos, FRB/FKBP12, p53/MDM2, and CDK4/p16. In all these cases, the presence of protein complexes resulted in the localization of fluorophores at the specific sites of the immobilized target plasmids. Interestingly, during our interaction studies we also detected a previously unknown interaction between CDK2 and p16. Thus, this Tus-Ter based system of protein microarray can be used for the validation of known protein interactions as well as for identifying new protein-protein interactions. In addition, it can be used to examine and identify targets of nucleic acid-protein, ligand-receptor, enzyme-substrate, and drug-protein interactions.

  8. SOFTWARE PROCESS ASSESSMENT AND IMPROVEMENT USING MULTICRITERIA DECISION AIDING - CONSTRUCTIVIST

    Directory of Open Access Journals (Sweden)

    Leonardo Ensslin

    2012-12-01

    Full Text Available Software process improvement and software process assessment have received special attention since the 1980s. Some models have been created, but these models rest on a normative approach, where the decision-maker’s participation in a software organization is limited to understanding which process is more relevant to each organization. The proposal of this work is to present the MCDA-C as a constructivist methodology for software process improvement and assessment. The methodology makes it possible to visualize the criteria that must be taken into account according to the decision-makers’ values in the process improvement actions, making it possible to rank actions in the light of specific organizational needs. This process helped the manager of the company studied to focus on and prioritize process improvement actions. This paper offers an empirical understanding of the application of performance evaluation to software process improvement and identifies complementary tools to the normative models presented today.

  9. Goober: A fully integrated and user-friendly microarray data management and analysis solution for core labs and bench biologists

    Directory of Open Access Journals (Sweden)

    Luo Wen

    2009-03-01

    Full Text Available Despite the large number of software tools developed to address different areas of microarray data analysis, very few offer an all-in-one solution with little learning curve. For microarray core labs, there are even fewer software packages available to help with their routine but critical tasks, such as data quality control (QC and inventory management. We have developed a simple-to-use web portal to allow bench biologists to analyze and query complicated microarray data and related biological pathways without prior training. Both experiment-based and gene-based analysis can be easily performed, even for the first-time user, through the intuitive multi-layer design and interactive graphic links. While being friendly to inexperienced users, most parameters in Goober can be easily adjusted via drop-down menus to allow advanced users to tailor their needs and perform more complicated analysis. Moreover, we have integrated graphic pathway analysis into the website to help users examine microarray data within the relevant biological content. Goober also contains features that cover most of the common tasks in microarray core labs, such as real time array QC, data loading, array usage and inventory tracking. Overall, Goober is a complete microarray solution to help biologists instantly discover valuable information from a microarray experiment and enhance the quality and productivity of microarray core labs. The whole package is freely available at http://sourceforge.net/projects/goober. A demo web server is available at http://www.goober-array.org.

  10. Improvement, extension and integration of operational decision support systems for nuclear emergency management (DSSNET)

    International Nuclear Information System (INIS)

    Ehrhardt, J.

    2005-07-01

    The DSSNET network was established in October 2000 with the overall objective to create an effective and accepted framework for better communication and understanding between the community of institutions involved in operational off-site emergency management and the many and diverse RTD institutes further developing methods and tools in this area, in particular decision support systems (DSS), for making well informed and consistent judgements with respect to practical improvements of emergency response in Europe. 37 institutions from 21 countries of East and West Europe have been members of the network with about half of them responsible for operational emergency management. The objectives of the network have been numerous and the more important ones include: to ensure that future RTD is more responsive to user needs, to inform the user community of new developments and their potential for improving emergency response, to improve operational decision support systems from feedback of operational experience, to identify how information and data exchange between countries can be improved, to promote greater coherence among operational decision support systems and to encourage shared development of new and improved decision support systems features, and to improve the practicability of operational decision support systems. To stimulate the communication and feedback between the operational and the RTD community, problem-oriented emergency exercises were performed, which covered the various time phases of an accident and which extended from the near range to farther distances with frontier crossing transport of radionuclides. The report describes the objectives of the DSSNET, the five emergency exercises performed and the results of their evaluation. 
They provided valuable insight and lessons for operators and users of decision support systems, in particular the need for much more intensive training and exercising with decision support systems and their interaction with

  11. Evaluating clean energy alternatives for Jiangsu, China: An improved multi-criteria decision making method

    International Nuclear Information System (INIS)

    Zhang, Ling; Zhou, Peng; Newton, Sidney; Fang, Jian-xin; Zhou, De-qun; Zhang, Lu-ping

    2015-01-01

    Promoting the utilization of clean energy has been identified as one potential solution to addressing environmental pollution and achieving sustainable development in many countries around the world. Evaluating clean energy alternatives requires balancing multiple conflicting criteria, including technology, environment, economy and society, all of which are incommensurate and interdependent. Traditional MCDM (multi-criteria decision making) methods, such as the weighted average method, often fail to aggregate such criteria consistently. In this paper, an improved MCDM method based on fuzzy measure and integral is developed and applied to evaluate four primary clean energy options for Jiangsu Province, China. The results confirm that the preferred clean energy option for Jiangsu is solar photovoltaic, followed by wind, biomass and finally nuclear. A sensitivity analysis is also conducted to evaluate the values of clean energy resources for Jiangsu. The ordered weighted average method is also applied in our empirical study for comparison with the proposed method. The results show that the improved MCDM method provides better discrimination among the clean energy alternatives. - Highlights: • Interactions among evaluation criteria of clean energy resources are taken into account. • An improved multi-criteria decision making (MCDM) method is proposed based on the entropy weight method, fuzzy measure and integral. • Clean energy resources of Jiangsu are evaluated with the improved MCDM method, and their ranks are identified.
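    The abstract does not give the exact formulation, but a common "fuzzy measure and integral" instantiation is the Choquet integral; the sketch below uses a hypothetical two-criterion capacity to show how interacting criteria can be aggregated non-additively:

```python
# Illustrative Choquet-integral aggregation (one common "fuzzy measure and
# integral" formulation; the paper's exact method may differ).
# mu maps subsets of criteria (frozensets) to capacities in [0, 1].
def choquet(scores, mu):
    """Aggregate criterion scores with a non-additive capacity mu."""
    # Standard Choquet formula: sort criteria by ascending score and
    # weight each increment by the capacity of the remaining coalition.
    items = sorted(scores.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    remaining = set(scores)
    for name, value in items:
        total += (value - prev) * mu[frozenset(remaining)]
        prev = value
        remaining.remove(name)
    return total

# Toy capacity over {tech, env}: the pair is worth less than the sum of
# its singletons, modeling redundancy between the two criteria.
mu = {
    frozenset(): 0.0,
    frozenset({"tech"}): 0.5,
    frozenset({"env"}): 0.6,
    frozenset({"tech", "env"}): 1.0,
}
score = choquet({"tech": 0.4, "env": 0.8}, mu)
```

    Unlike a weighted average, the capacity mu can assign a pair of criteria more or less than the sum of its parts, which is how interactions among evaluation criteria are taken into account.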

  12. A Costing Analysis for Decision Making Grid Model in Failure-Based Maintenance

    Directory of Open Access Journals (Sweden)

    Burhanuddin M. A.

    2011-01-01

    Full Text Available Background. In the current economic downturn, industries have to keep tight control of production cost to maintain their profit margins. The maintenance department, as an imperative unit in industry, should attain all maintenance data, process the information instantaneously, transform it into useful decisions, and then act on the alternatives to reduce production cost. The Decision Making Grid model is used to identify strategies for maintenance decisions. However, the model has a limitation, as it considers only two factors: downtime and frequency of failures. In this study we consider a third factor, cost, for failure-based maintenance. The objective of this paper is to introduce formulae to estimate maintenance cost. Methods. Fishbone analysis conducted with the Ishikawa model and the Decision Making Grid method are used in this study to reveal underlying risk factors that delay failure-based maintenance. The goal of the study is to estimate the risk factor, that is, repair cost, so that it fits into the Decision Making Grid model, which considers two variables, frequency of failure and downtime, in its analysis. This paper introduces repair cost as a third variable for the Decision Making Grid model. This approach gives better results in categorizing the machines, reducing cost, and boosting the earnings of the manufacturing plant. Results. We collected data from one of the food processing factories in Malaysia. From our empirical results, Machine C, Machine D, Machine F, and Machine I must be in the Decision Making Grid model even though their frequencies of failure and downtime are less than those of Machine B and Machine N, based on the costing analysis. The case study and experimental results show that the cost analysis in the Decision Making Grid model gives more promising strategies in failure-based maintenance. Conclusions. The improvement of the Decision Making Grid model for decision analysis with costing analysis is our contribution in this paper for

  13. Risk-based decision analysis for groundwater operable units

    International Nuclear Information System (INIS)

    Chiaramonte, G.R.

    1995-01-01

    This document proposes a streamlined approach and methodology for performing risk assessment in support of interim remedial measure (IRM) decisions involving the remediation of contaminated groundwater on the Hanford Site. This methodology, referred to as "risk-based decision analysis," also supports the specification of target cleanup volumes and provides a basis for design and operation of the groundwater remedies. The risk-based decision analysis can be completed within a short time frame and concisely documented. The risk-based decision analysis is more versatile than the qualitative risk assessment (QRA), because it not only supports the need for IRMs, but also provides criteria for defining the success of the IRMs and provides the risk-basis for decisions on final remedies. For these reasons, it is proposed that, for groundwater operable units, the risk-based decision analysis should replace the more elaborate, costly, and time-consuming QRA

  14. Improved Lower Mekong River Basin Hydrological Decision Making Using NASA Satellite-based Earth Observation Systems

    Science.gov (United States)

    Bolten, J. D.; Mohammed, I. N.; Srinivasan, R.; Lakshmi, V.

    2017-12-01

    The objective of this work is to better understand the hydrological cycle of the Lower Mekong River Basin (LMRB) and to address the value-added information of using remote sensing data on the spatial variability of soil moisture over the Mekong Basin. We present the development and assessment of the LMRB (drainage area of 495,000 km2) Soil and Water Assessment Tool (SWAT). The coupled model framework presented is part of SERVIR, a joint capacity building venture between NASA and the U.S. Agency for International Development, providing state-of-the-art, satellite-based earth monitoring, imaging and mapping data, geospatial information, predictive models, and science applications to improve environmental decision-making among multiple developing nations. The developed LMRB SWAT model enables the integration of satellite-based daily gridded precipitation, air temperature, digital elevation model, soil texture, and land cover and land use data to drive SWAT model simulations over the Lower Mekong River Basin. The LMRB SWAT model driven by remote sensing climate data was calibrated and verified with observed runoff data at the watershed outlet as well as at multiple sites along the main river course. Another LMRB SWAT model set driven by in-situ climate observations was also calibrated and verified against streamflow data. Simulated soil moisture estimates from the two models were then examined and compared to a downscaled Soil Moisture Active Passive (SMAP) 36 km radiometer product. Results from this work present a framework for improving SWAT performance by utilizing downscaled SMAP soil moisture products for model calibration and validation. Index Terms: 1622: Earth system modeling; 1631: Land/atmosphere interactions; 1800: Hydrology; 1836 Hydrological cycles and budgets; 1840 Hydrometeorology; 1855: Remote sensing; 1866: Soil moisture; 6334: Regional Planning

  15. Robust gene selection methods using weighting schemes for microarray data analysis.

    Science.gov (United States)

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
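    For readers unfamiliar with SAM, the baseline statistic that the proposed weighting schemes modify can be sketched as follows; the expression values are toy data and the paper's actual weighting scheme is not reproduced here:

```python
import statistics

# Illustrative SAM-style relative-difference statistic. s0 is the usual
# "fudge factor" that stabilizes genes with very small variance.
def sam_d(group1, group2, s0=0.5):
    """Per-gene d = (mean1 - mean2) / (pooled standard error + s0)."""
    d = []
    for xi, yi in zip(group1, group2):
        nx, ny = len(xi), len(yi)
        diff = statistics.fmean(xi) - statistics.fmean(yi)
        pooled = ((nx - 1) * statistics.variance(xi)
                  + (ny - 1) * statistics.variance(yi)) / (nx + ny - 2)
        se = (pooled * (1 / nx + 1 / ny)) ** 0.5
        d.append(diff / (se + s0))
    return d

# Toy expression values: gene 0 is differential, gene 1 is not.
state_a = [[5.0, 5.1, 4.9], [1.0, 1.1, 0.9]]
state_b = [[1.0, 1.1, 0.9], [1.0, 0.9, 1.1]]
d = sam_d(state_a, state_b)
```

    Genes with large |d| are called significant; the weighting schemes in the paper adjust how each replicate contributes to the mean and variance terms, which is what makes the statistic robust to noise and small sample sizes.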

  16. Improving life cycle assessment methodology for the application of decision support

    DEFF Research Database (Denmark)

    Herrmann, Ivan Tengbjerg

    for the application of decision support and evaluation of uncertainty in LCA. From a decision maker’s (DM’s) point of view there are at least three main “illness” factors influencing the quality of the information that the DM uses for making decisions. The factors are not independent of each other, but it seems......) refrain from making a decision based on an LCA and thus support a decision on other parameters than the LCA environmental parameters. Conversely, it may in some decision support contexts be acceptable to base a decision on highly uncertain information. This all depends on the specific decision support...... the different steps. A deterioration of the quality in each step is likely to accumulate through the statistical value chain in terms of increased uncertainty and bias. Ultimately this can make final decision support problematic. The "Law of large numbers" (LLN) is the methodological tool/probability theory...

  17. Detecting Outlier Microarray Arrays by Correlation and Percentage of Outliers Spots

    Directory of Open Access Journals (Sweden)

    Song Yang

    2006-01-01

    Full Text Available We developed a quality assurance (QA) tool, namely the microarray outlier filter (MOF), and have applied it to our microarray datasets for the identification of problematic arrays. Our approach is based on the comparison of the arrays using the correlation coefficient and the number of outlier spots generated on each array to reveal outlier arrays. For a human universal reference (HUR) dataset, which is used as a technical control in our standard hybridization procedure, 3 outlier arrays were identified out of 35 experiments. For a human blood dataset, 12 outlier arrays were identified from 185 experiments. In general, arrays from human blood samples displayed greater variation in their gene expression profiles than arrays from HUR samples. As a result, MOF identified two distinct patterns in the occurrence of outlier arrays. These results demonstrate that this methodology is a valuable QA practice to identify questionable microarray data prior to downstream analysis.
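    The correlation-based half of the MOF idea can be illustrated with a small sketch; the robust summary (median of pairwise correlations) and the flagging threshold are assumptions for illustration, not the tool's published parameters:

```python
import statistics

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length intensity profiles."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def outlier_arrays(arrays, threshold=0.9):
    """Flag arrays whose median correlation with the other arrays is low."""
    flagged = []
    for i, a in enumerate(arrays):
        r = statistics.median(
            pearson(a, b) for j, b in enumerate(arrays) if j != i)
        if r < threshold:
            flagged.append(i)
    return flagged

# Three concordant technical replicates and one discordant array.
arrays = [[1.0, 2.0, 3.0, 4.0],
          [1.1, 2.0, 3.1, 3.9],
          [0.9, 2.1, 2.9, 4.1],
          [4.0, 1.0, 3.0, 2.0]]
flagged = outlier_arrays(arrays)
```

    The full MOF tool additionally counts outlier spots per array; only the correlation criterion is shown here.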

  18. Evaluation of an expanded microarray for detecting antibiotic resistance genes in a broad range of gram-negative bacterial pathogens.

    Science.gov (United States)

    Card, Roderick; Zhang, Jiancheng; Das, Priya; Cook, Charlotte; Woodford, Neil; Anjum, Muna F

    2013-01-01

    A microarray capable of detecting genes for resistance to 75 clinically relevant antibiotics encompassing 19 different antimicrobial classes was tested on 132 Gram-negative bacteria. Microarray-positive results correlated >91% with antimicrobial resistance phenotypes, assessed using British Society for Antimicrobial Chemotherapy clinical breakpoints; the overall test specificity was >83%. Microarray-positive results without a corresponding resistance phenotype matched 94% with PCR results, indicating accurate detection of genes present in the respective bacteria by microarray when expression was low or absent and, hence, undetectable by susceptibility testing. The low sensitivity and negative predictive values of the microarray results for identifying resistance to some antimicrobial resistance classes are likely due to the limited number of resistance genes present on the current microarray for those antimicrobial agents or to mutation-based resistance mechanisms. With regular updates, this microarray can be used for clinical diagnostics to help ensure that accurate therapeutic options are chosen following infection with multiple-antibiotic-resistant Gram-negative bacteria and to prevent treatment failure.

  19. Research-based-decision-making in Canadian health organizations: a behavioural approach.

    Science.gov (United States)

    Jbilou, Jalila; Amara, Nabil; Landry, Réjean

    2007-06-01

    Decision making in the health sector is affected by several elements such as economic constraints, political agendas, epidemiologic events, managers' values and environment. These competing elements create a complex environment for decision making. Research-Based-Decision-Making (RBDM) offers an opportunity to reduce the generated uncertainty and to ensure efficacy and efficiency in health administrations. We assume that RBDM is dependent on decision makers' behaviour and that identifying the determinants of this behaviour can help to enhance the use of research results in health sector decision making. This paper explores the determinants of RBDM as a personal behaviour among managers and professionals in health administrations in Canada. From behavioural theories and the existing literature, we build a model measuring RBDM as an index based on five items. These items refer to the steps accomplished by a decision maker while developing a decision based on evidence. The determinants of RBDM behaviour are identified using data collected from 942 health care decision makers in Canadian health organizations. Linear regression is used to model the RBDM behaviour. Determinants of this behaviour are derived from Triandis' theory and Bandura's construct of "self-efficacy." The results suggest that to improve research use among managers in Canadian governmental health organizations, strategies should focus on enhancing exposure to evidence through facilitating communication networks, partnerships and links between researchers and decision makers, with the key long-term objective of developing a culture that supports and values the contribution that research can make to decision making in governmental health organizations. Nevertheless, depending on the organizational level, the determinants of RBDM differ. This difference has to be taken into account if RBDM adoption is desired.
Decision makers in Canadian health organizations (CHO) can help to build

  20. DNA microarray technique for detecting food-borne pathogens

    Directory of Open Access Journals (Sweden)

    Xing GAO

    2012-08-01

    Full Text Available Objective To study the application of DNA microarray technique for screening and identifying multiple food-borne pathogens. Methods The oligonucleotide probes were designed by Clustal X and Oligo 6.0 at the conserved regions of specific genes of multiple food-borne pathogens, and then were validated by bioinformatic analyses. The 5' end of each probe was modified with an amino group and 10 poly-T bases, and the optimized probes were synthesized and spotted on aldehyde-coated slides. The bacterial DNA template incubated with Klenow enzyme was amplified by arbitrarily primed PCR, and PCR products incorporating Aminoallyl-dUTP were coupled with fluorescent dye. After hybridization of the purified PCR products with the DNA microarray, the hybridization image and fluorescence intensity analysis were acquired by ScanArray and GenePix Pro 5.1 software. A series of detection conditions such as arbitrarily primed PCR and microarray hybridization were optimized. The specificity of this approach was evaluated with DNA from 16 different bacteria, and the sensitivity and reproducibility were verified with DNA from 4 food-borne pathogens. Samples of multiple bacterial DNA and simulated water samples of Shigella dysenteriae were detected. Results Nine different food-borne bacteria were successfully discriminated under the same condition. The sensitivity for genomic DNA was 10²-10³ pg/μl, and the coefficient of variation (CV) of the reproducibility of the assay was less than 15%. The corresponding specific hybridization maps of the multiple bacterial DNA samples were obtained, and the detection limit for the simulated water sample of Shigella dysenteriae was 3.54×10⁵ cfu/ml. Conclusions The DNA microarray detection system based on arbitrarily primed PCR can be employed for effective detection of multiple food-borne pathogens, and this assay may offer a new high-throughput platform for detecting bacteria.

  1. A tiling microarray for global analysis of chloroplast genome expression in cucumber and other plants

    Directory of Open Access Journals (Sweden)

    Pląder Wojciech

    2011-09-01

    Full Text Available Abstract Plastids are small organelles equipped with their own genomes (plastomes). Although these organelles are involved in numerous plant metabolic pathways, current knowledge about the transcriptional activity of plastomes is limited. To solve this problem, we constructed a plastid tiling microarray (PlasTi-microarray) consisting of 1629 oligonucleotide probes. The oligonucleotides were designed based on the cucumber chloroplast genomic sequence and targeted both strands of the plastome in a non-contiguous arrangement. Up to 4 specific probes were designed for each gene/exon, and the intergenic regions were covered regularly, with 70-nt intervals. We also developed a protocol for direct chemical labeling and hybridization of as little as 2 micrograms of chloroplast RNA. We used this protocol for profiling the expression of the cucumber chloroplast plastome on the PlasTi-microarray. Owing to the high sequence similarity of plant plastomes, the newly constructed microarray can be used to study plants other than cucumber. Comparative hybridization of chloroplast transcriptomes from cucumber, Arabidopsis, tomato and spinach showed that the PlasTi-microarray is highly versatile.

  2. Implementation of mutual information and bayes theorem for classification microarray data

    Science.gov (United States)

    Dwifebri Purbolaksono, Mahendra; Widiastuti, Kurnia C.; Syahrul Mubarok, Mohamad; Adiwijaya; Aminy Ma’ruf, Firda

    2018-03-01

    Microarray technology is able to read the structure of genes, and analysis of the resulting data is essential for deciding which attributes are more important than others. Microarray data can provide cancer information for diagnosis. However, preparation of microarray data is a major problem and takes a long time, because microarray data contain a large number of insignificant and irrelevant attributes. A method is therefore needed to reduce the dimensionality of microarray data without eliminating the important information in each attribute. This research uses Mutual Information for dimensionality reduction. The system is built with a Machine Learning approach, specifically Bayes' theorem, which takes a statistical and probabilistic approach. Combining both methods yields a powerful classifier for microarray data. The experimental results show that the system classifies microarray data well, with the highest F1-scores of 91.06% using a Bayesian Network and 88.85% using Naïve Bayes.
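    The Mutual Information filter can be sketched for a single discretized attribute as follows; the toy data and the ranking step are illustrative assumptions, not the paper's pipeline details:

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    """I(X;Y) in bits for a discretized attribute X and class label Y."""
    n = len(labels)
    joint = Counter(zip(feature, labels))
    pf = Counter(feature)
    pl = Counter(labels)
    mi = 0.0
    for (f, l), c in joint.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((pf[f] / n) * (pl[l] / n)))
    return mi

labels      = ["tumor", "tumor", "normal", "normal"]
informative = [1, 1, 0, 0]   # tracks the class perfectly
irrelevant  = [0, 1, 0, 1]   # independent of the class
mi_hi = mutual_information(informative, labels)
mi_lo = mutual_information(irrelevant, labels)
```

    In a pipeline like the one described, attributes would be ranked by MI and only the top-scoring ones kept before applying the Bayes classifier, discarding the irrelevant attributes that inflate the data's dimensionality.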

  3. Universal Reference RNA as a standard for microarray experiments

    Directory of Open Access Journals (Sweden)

    Fero Michael

    2004-03-01

    Full Text Available Abstract Background Obtaining reliable and reproducible two-color microarray gene expression data is critically important for understanding the biological significance of perturbations made on a cellular system. Microarray design, RNA preparation and labeling, hybridization conditions and data acquisition and analysis are variables difficult to simultaneously control. A useful tool for monitoring and controlling intra- and inter-experimental variation is Universal Reference RNA (URR), developed with the goal of providing hybridization signal at each microarray probe location (spot). Measuring signal at each spot as the ratio of experimental RNA to reference RNA targets, rather than relying on absolute signal intensity, decreases variability by normalizing signal output in any two-color hybridization experiment. Results Human, mouse and rat URR (UHRR, UMRR and URRR, respectively) were prepared from pools of RNA derived from individual cell lines representing different tissues. A variety of microarrays were used to determine the percentage of spots hybridizing with URR and producing signal above a user-defined threshold (microarray coverage). Microarray coverage was consistently greater than 80% for all arrays tested. We confirmed that individual cell lines contribute their own unique set of genes to URR, arguing for a pool of RNA from several cell lines as a better configuration for URR as opposed to a single cell line source for URR. Microarray coverage comparing two separately prepared batches each of UHRR, UMRR and URRR was highly correlated (Pearson's correlation coefficients of 0.97). Conclusion Results of this study demonstrate that large quantities of pooled RNA from individual cell lines are reproducibly prepared and possess diverse gene representation. This type of reference provides a standard for reducing variation in microarray experiments and allows more reliable comparison of gene expression data within and between experiments and

  4. Layered signaling regulatory networks analysis of gene expression involved in malignant tumorigenesis of non-resolving ulcerative colitis via integration of cross-study microarray profiles.

    Science.gov (United States)

    Fan, Shengjun; Pan, Zhenyu; Geng, Qiang; Li, Xin; Wang, Yefan; An, Yu; Xu, Yan; Tie, Lu; Pan, Yan; Li, Xuejun

    2013-01-01

    Ulcerative colitis (UC) is the most frequently diagnosed inflammatory bowel disease (IBD) and is closely linked to colorectal carcinogenesis. So far, the underlying mechanisms associated with the disease are still unclear. With the increasing accumulation of microarray gene expression profiles, it is profitable to gain a systematic perspective based on gene regulatory networks to better elucidate the roles of genes associated with disorders. However, a major challenge for microarray data analysis is the integration of multiple studies generated by different groups. In this study, firstly, we modeled a signaling regulatory network associated with colorectal cancer (CRC) initiation via integration of cross-study microarray expression data sets using the Empirical Bayes (EB) algorithm. Secondly, a manually curated human cancer signaling map was established via comprehensive retrieval of the publicly available repositories. Finally, the co-differently-expressed genes were manually curated to portray the layered signaling regulatory networks. Overall, the remodeled signaling regulatory networks were separated into four major layers including extracellular, membrane, cytoplasm and nucleus, which led to the identification of five core biological processes and four signaling pathways associated with colorectal carcinogenesis. As a result, our biological interpretation highlighted the importance of the EGF/EGFR signaling pathway, the EPO signaling pathway, T cell signal transduction and members of the BCR signaling pathway, which were responsible for the malignant transition of CRC from the benign UC to the aggressive one. The present study illustrated a standardized normalization approach for cross-study microarray expression data sets. Our model for signaling network construction was based on experimentally-supported interactions and microarray co-expression modeling. Pathway-based signaling regulatory network analysis provided a direct insight into colorectal carcinogenesis

  5. Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    Science.gov (United States)

    Liu, Rong

    2017-01-01

    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. We then applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than those with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, compared with 82.3% for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
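    The sequential rule described above can be sketched with Wald's classic SPRT. This is a minimal illustration, not the authors' EEG pipeline: the log-likelihood-ratio stream, the target error rates, and the threshold approximations A = ln((1-β)/α) and B = ln(β/(1-α)) are the textbook choices.

```python
import math

def sprt(llr_stream, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test.

    Accumulates log-likelihood ratios ln[p(x|H1)/p(x|H0)] until one of
    the two thresholds derived from the target error rates is crossed.
    Returns (decision, stopping_time); decision is None if the stream
    runs out before either threshold is reached.
    """
    upper = math.log((1 - beta) / alpha)   # cross upward -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross downward -> accept H0
    s, t = 0.0, 0
    for t, llr in enumerate(llr_stream, start=1):
        s += llr
        if s >= upper:
            return 'H1', t
        if s <= lower:
            return 'H0', t
    return None, t
```

    Tightening alpha and beta pushes the thresholds apart and lengthens the expected stopping time, which is exactly the time-accuracy trade-off the abstract refers to.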

  6. Protocol-based care: the standardisation of decision-making?

    Science.gov (United States)

    Rycroft-Malone, Jo; Fontenla, Marina; Seers, Kate; Bick, Debra

    2009-05-01

    To explore how protocol-based care affects clinical decision-making. In the context of evidence-based practice, protocol-based care is a mechanism for facilitating the standardisation of care and streamlining decision-making through rationalising the information with which to make judgements and ultimately decisions. However, whether protocol-based care does, in the reality of practice, standardise decision-making is unknown. This paper reports on a study that explored the impact of protocol-based care on nurses' decision-making. Theoretically informed by realistic evaluation and the promoting action on research implementation in health services framework, a case study design using ethnographic methods was used. Two sites were purposively sampled; a diabetic and endocrine unit and a cardiac medical unit. Within each site, data collection included observation, post-observation semi-structured interviews with staff and patients, field notes, feedback sessions and document review. Data were inductively and thematically analysed. Decisions made by nurses in both sites varied according to many different and interacting factors. While several standardised care approaches were available for use, in reality, a variety of information sources informed decision-making. The primary approach to knowledge exchange and acquisition was person-to-person; decision-making was a social activity. Rarely were standardised care approaches obviously referred to; nurses described following a mental flowchart, not necessarily linked to a particular guideline or protocol. When standardised care approaches were used, it was reported that they were used flexibly and particularised. While the logic of protocol-based care is algorithmic, in the reality of clinical practice, other sources of information supported nurses' decision-making process. This has significant implications for the political goal of standardisation. The successful implementation and judicious use of tools such as

  7. NMD Microarray Analysis for Rapid Genome-Wide Screen of Mutated Genes in Cancer

    Directory of Open Access Journals (Sweden)

    Maija Wolf

    2005-01-01

    Full Text Available Gene mutations play a critical role in cancer development and progression, and their identification offers possibilities for accurate diagnostics and therapeutic targeting. Finding genes undergoing mutations is challenging and slow, even in the post-genomic era. A new approach was recently developed by Noensie and Dietz to prioritize and focus the search, making use of nonsense-mediated mRNA decay (NMD) inhibition and microarray analysis (NMD microarrays) in the identification of transcripts containing nonsense mutations. We combined NMD microarrays with array-based CGH (comparative genomic hybridization) in order to identify inactivation of tumor suppressor genes in cancer. Such a “mutatomics” screening of prostate cancer cell lines led to the identification of inactivating mutations in the EPHB2 gene. Up to 8% of metastatic uncultured prostate cancers also showed mutations of this gene whose loss of function may confer loss of tissue architecture. NMD microarray analysis could turn out to be a powerful research method to identify novel mutated genes in cancer cell lines, providing targets that could then be further investigated for their clinical relevance and therapeutic potential.

  8. Can subtle changes in gene expression be consistently detected with different microarray platforms?

    Directory of Open Access Journals (Sweden)

    Kuiper Rowan

    2008-03-01

    Full Text Available Abstract Background The comparability of gene expression data generated with different microarray platforms is still a matter of concern. Here we address the performance and the overlap in the detection of differentially expressed genes for five different microarray platforms in a challenging biological context where differences in gene expression are few and subtle. Results Gene expression profiles in the hippocampus of five wild-type and five transgenic δC-doublecortin-like kinase mice were evaluated with five microarray platforms: Applied Biosystems, Affymetrix, Agilent, Illumina and LGTC home-spotted arrays. Using a fixed false discovery rate of 10% we detected surprising differences in the number of differentially expressed genes per platform. Four genes were selected by ABI, 130 by Affymetrix, 3,051 by Agilent, 54 by Illumina, and 13 by LGTC. Two genes were found significantly differentially expressed by all platforms, and the four genes identified by the ABI platform were found by at least three other platforms. Quantitative RT-PCR analysis confirmed 20 out of 28 of the genes detected by two or more platforms and 8 out of 15 of the genes detected by Agilent only. We observed improved correlations between platforms when ranking the genes by significance level rather than applying a fixed statistical cut-off. We demonstrate significant overlap in the affected gene sets identified by the different platforms, although biological processes were represented by only partially overlapping sets of genes. Aberrances in GABA-ergic signalling in the transgenic mice were consistently found by all platforms. Conclusion The different microarray platforms give partially complementary views on biological processes affected. Our data indicate that when analyzing samples with only subtle differences in gene expression the use of two different platforms might be more attractive than increasing the number of replicates.
Commercial two-color platforms seem to
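    A fixed false discovery rate such as the 10% used above is commonly enforced with the Benjamini-Hochberg step-up procedure. The sketch below assumes that standard procedure, not any platform-specific variant:

```python
def benjamini_hochberg(pvalues, fdr=0.10):
    """Return the (sorted) indices of hypotheses rejected at the given
    false discovery rate using the Benjamini-Hochberg step-up rule:
    reject the k smallest p-values, where k is the largest rank i with
    p_(i) <= (i / m) * fdr."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * fdr:
            k = rank  # step-up: keep the largest passing rank
    return sorted(order[:k])
```

    Ranking genes by p-value, as the abstract suggests, side-steps the sensitivity of this fixed cut-off to per-platform variance estimates.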

  9. Scaling up success to improve health: Towards a rapid assessment guide for decision makers

    Directory of Open Access Journals (Sweden)

    Jason Paltzer

    2015-01-01

    Full Text Available Introduction Evidence-based health interventions exist and are effectively implemented throughout resource-limited settings. The literature regarding scale-up strategies and frameworks is growing. The purpose of this paper is to identify and systematically document the variation in scale-up strategies to develop a rapid assessment tool for decision-makers looking to identify the most appropriate strategy for their organizational and environmental contexts. Methods A list of scale-up strategies and frameworks was identified through an in-depth literature review and conversations with scale-up and quality improvement leaders. The literature search included a broad range of terms that might be used interchangeably with scale-up of best practices. Terms included: implementation research, knowledge translation, translational research, quality improvement research, health systems improvement, scale-up, best practices, improvement collaborative, and community based research. Based on this research, 18 strategies and frameworks were identified, and nine met our inclusion criteria for scale-up of health-related strategies. We interviewed the key contact for four of the nine strategies to obtain additional information regarding the strategy’s scale-up components, targets, underlying theories, evaluation efforts, facilitating factors, and barriers. A comparative analysis of common elements and strategy characteristics was completed by two of the authors on the nine selected strategies. Key strategy characteristics and common factors that facilitate or hinder the strategy’s success in scaling up health-related interventions were identified. Results Common features of scale-up strategies include: (1) the development of context-specific evidence; (2) collaborative partnerships; (3) iterative processes; and (4) shared decision-making. Facilitating factors include strong leadership, community engagement, communication, government collaboration, and a focus on

  10. Difference-based clustering of short time-course microarray data with replicates

    Directory of Open Access Journals (Sweden)

    Kim Jihoon

    2007-07-01

    Full Text Available Abstract Background There are some limitations associated with conventional clustering methods for short time-course gene expression data. The current algorithms require prior domain knowledge and do not incorporate information from replicates. Moreover, the results are not always easy to interpret biologically. Results We propose a novel algorithm for identifying a subset of genes sharing a significant temporal expression pattern when replicates are used. Our algorithm requires no prior knowledge, instead relying on an observed statistic based on the first and second order differences between adjacent time-points. Here, a pattern is predefined as the sequence of symbols indicating the direction and the rate of change between time-points, and each gene is assigned to a cluster whose members share a similar pattern. We compared the performance of our algorithm with those of the K-means, Self-Organizing Map and Short Time-series Expression Miner methods. Conclusions Assessments using simulated and real data show that our method outperformed the aforementioned algorithms. Our approach is an appropriate solution for clustering short time-course microarray data with replicates.
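    The core idea of difference-based symbolic clustering can be sketched as follows. This is a simplified illustration of the approach, not the authors' exact statistic: replicates are averaged per time-point, only first-order differences are symbolized, and the steadiness threshold `eps` is an assumed parameter.

```python
from collections import defaultdict

def symbolize(profile, eps=0.5):
    """Map first-order differences between adjacent time-points to
    symbols: 'U' (up), 'D' (down), 'S' (steady within +/- eps)."""
    syms = []
    for a, b in zip(profile, profile[1:]):
        d = b - a
        syms.append('U' if d > eps else 'D' if d < -eps else 'S')
    return ''.join(syms)

def cluster_by_pattern(genes, eps=0.5):
    """genes: dict name -> list of replicate profiles of equal length.
    Replicates are averaged per time-point, then genes sharing the
    same symbolic pattern fall into the same cluster."""
    clusters = defaultdict(list)
    for name, reps in genes.items():
        mean = [sum(v) / len(v) for v in zip(*reps)]
        clusters[symbolize(mean, eps)].append(name)
    return dict(clusters)
```

    Because clusters are keyed by a short pattern string such as 'UUD', the result is directly readable biologically, which is one motivation the abstract gives.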

  11. Application of Bayesian Decision Theory Based on Prior Information in the Multi-Objective Optimization Problem

    Directory of Open Access Journals (Sweden)

    Xia Lei

    2010-12-01

    Full Text Available It is hard for general multi-objective optimization methods to obtain prior information, and how to utilize prior information has been a challenge. This paper analyzes the characteristics of Bayesian decision-making based on the maximum entropy principle and prior information, in particular how to effectively improve decision-making reliability when reference samples are deficient. The paper demonstrates the effectiveness of the proposed method on a real application, multi-frequency offset estimation in a distributed multiple-input multiple-output system. The simulation results demonstrate that Bayesian decision-making based on prior information has better global searching capability when sampled data are deficient.

  12. Estimation of power lithium-ion battery SOC based on fuzzy optimal decision

    Science.gov (United States)

    He, Dongmei; Hou, Enguang; Qiao, Xin; Liu, Guangmin

    2018-06-01

    To improve vehicle performance and safety, the state of charge (SOC) of the power lithium battery needs to be estimated accurately. After analyzing common SOC estimation methods, and based on open circuit voltage characteristics and the Kalman filter algorithm, we established a lithium battery SOC estimation method based on fuzzy optimal decision using a T-S fuzzy model. Simulation results show that the accuracy of the battery model can be improved.
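    One common way a T-S (Takagi-Sugeno) fuzzy rule base fuses SOC estimators is sketched below. This is purely illustrative and not the paper's model: the two input estimates, the current-magnitude membership breakpoints `i_small` and `i_large`, and the linear membership function are all assumptions.

```python
def fuzzy_soc(soc_ocv, soc_coulomb, current_a, i_small=0.5, i_large=5.0):
    """Takagi-Sugeno style blend of two SOC estimates.

    The membership of |current| in the fuzzy set 'small' decides how
    much to trust the open-circuit-voltage estimate (reliable near
    rest) versus the coulomb-counting estimate (reliable under load).
    Breakpoints are illustrative, not taken from the paper.
    """
    i = abs(current_a)
    if i <= i_small:
        w_ocv = 1.0
    elif i >= i_large:
        w_ocv = 0.0
    else:  # linear membership between the two breakpoints
        w_ocv = (i_large - i) / (i_large - i_small)
    return w_ocv * soc_ocv + (1.0 - w_ocv) * soc_coulomb
```

    In a full design the coulomb-counting branch would itself be corrected by a Kalman filter, as the abstract indicates.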

  13. Protein microarray: sensitive and effective immunodetection for drug residues

    Directory of Open Access Journals (Sweden)

    Zer Cindy

    2010-02-01

    Full Text Available Abstract Background Veterinary drugs such as clenbuterol (CL) and sulfamethazine (SM2) are low molecular weight compounds. Results The artificial antigens were spotted on microarray slides. Standard concentrations of the compounds were added to compete with the spotted antigens for binding to the antisera to determine the IC50. Our microarray assay showed IC50 values of 39.6 ng/ml for CL and 48.8 ng/ml for SM2, while the traditional competitive indirect ELISA (ci-ELISA) showed IC50 values of 190.7 ng/ml for CL and 156.7 ng/ml for SM2. We further validated the two methods with CL-fortified chicken muscle tissues, and the protein microarray assay showed 90% recovery while the ci-ELISA had a 76% recovery rate. When tested with CL-fed chicken muscle tissues, the protein microarray assay had higher sensitivity (0.9 ng/g) than the ci-ELISA (0.1 ng/g) for detection of CL residues. Conclusions The protein microarrays showed 4.5 and 3.5 times lower IC50 than the ci-ELISA detection for CL and SM2, respectively, suggesting that immunodetection of small molecules with protein microarrays is a better approach than the traditional ELISA technique.
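    The IC50 values reported above come from a competitive standard curve. A minimal way to extract an IC50 from such a curve is log-linear interpolation at the 50% binding point; the sketch below uses hypothetical standard concentrations and signals, not the paper's data, and omits the four-parameter logistic fit a production assay would use.

```python
import math

def ic50(concs, signals):
    """Estimate IC50 by linear interpolation of signal versus
    log10(concentration), at the point where the signal falls to 50%
    of the zero-dose (B0) signal. Inputs are sorted by increasing
    dose; concs[0] may be 0 for the B0 well."""
    half = 0.5 * signals[0]
    pts = [(math.log10(c), s) for c, s in zip(concs, signals) if c > 0]
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        if y1 >= half >= y2:  # this segment brackets the 50% crossing
            x = x1 + (x2 - x1) * (y1 - half) / (y1 - y2)
            return 10 ** x
    raise ValueError("50% inhibition not bracketed by the standards")
```

    A lower IC50 means less competitor is needed to halve the signal, which is why the abstract reads the 4.5- and 3.5-fold lower microarray IC50 as higher assay sensitivity.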

  14. Design and evaluation of Actichip, a thematic microarray for the study of the actin cytoskeleton

    Science.gov (United States)

    Muller, Jean; Mehlen, André; Vetter, Guillaume; Yatskou, Mikalai; Muller, Arnaud; Chalmel, Frédéric; Poch, Olivier; Friederich, Evelyne; Vallar, Laurent

    2007-01-01

    Background The actin cytoskeleton plays a crucial role in supporting and regulating numerous cellular processes. Mutations or alterations in the expression levels affecting the actin cytoskeleton system or related regulatory mechanisms are often associated with complex diseases such as cancer. Understanding how qualitative or quantitative changes in expression of the set of actin cytoskeleton genes are integrated to control actin dynamics and organisation is currently a challenge and should provide insights in identifying potential targets for drug discovery. Here we report the development of a dedicated microarray, the Actichip, containing 60-mer oligonucleotide probes for 327 genes selected for transcriptome analysis of the human actin cytoskeleton. Results Genomic data and sequence analysis features were retrieved from GenBank and stored in an integrative database called Actinome. From these data, probes were designed using a home-made program (CADO4MI) allowing sequence refinement and improved probe specificity by combining the complementary information recovered from the UniGene and RefSeq databases. Actichip performance was analysed by hybridisation with RNAs extracted from epithelial MCF-7 cells and human skeletal muscle. Using thoroughly standardised procedures, we obtained microarray images with excellent quality resulting in high data reproducibility. Actichip displayed a large dynamic range extending over three logs with a limit of sensitivity between one and ten copies of transcript per cell. The array allowed accurate detection of small changes in gene expression and reliable classification of samples based on the expression profiles of tissue-specific genes. When compared to two other oligonucleotide microarray platforms, Actichip showed similar sensitivity and concordant expression ratios. Moreover, Actichip was able to discriminate the highly similar actin isoforms whereas the two other platforms did not. Conclusion Our data demonstrate that

  15. Efficacy of a novel PCR- and microarray-based method in diagnosis of a prosthetic joint infection

    Science.gov (United States)

    2014-01-01

    Background and purpose Polymerase chain reaction (PCR) methods enable detection and species identification of many pathogens. We assessed the efficacy of a new PCR and microarray-based platform for detection of bacteria in prosthetic joint infections (PJIs). Methods This prospective study involved 61 suspected PJIs in hip and knee prostheses and 20 negative controls. 142 samples were analyzed by Prove-it Bone and Joint assay. The laboratory staff conducting the Prove-it analysis were not aware of the results of microbiological culture and clinical findings. The results of the analysis were compared with diagnosis of PJIs defined according to the Musculoskeletal Infection Society (MSIS) criteria and with the results of microbiological culture. Results 38 of 61 suspected PJIs met the definition of PJI according to the MSIS criteria. Of the 38 patients, the PCR detected bacteria in 31 whereas bacterial culture was positive in 28 patients. 15 of the PJI patients were undergoing antimicrobial treatment as the samples for analysis were obtained. When antimicrobial treatment had lasted 4 days or more, PCR detected bacteria in 6 of the 9 patients, but positive cultures were noted in only 2 of the 9 patients. All PCR results for the controls were negative. Of the 61 suspected PJIs, there were false-positive PCR results in 6 cases. Interpretation The Prove-it assay was helpful in PJI diagnostics during ongoing antimicrobial treatment. Without preceding treatment with antimicrobials, PCR and microarray-based assay did not appear to give any additional information over culture. PMID:24564748

  16. Microarrays in ecological research: A case study of a cDNA microarray for plant-herbivore interactions

    Directory of Open Access Journals (Sweden)

    Gase Klaus

    2004-09-01

    Full Text Available Abstract Background Microarray technology allows researchers to simultaneously monitor changes in the expression ratios (ERs) of hundreds of genes and has thereby revolutionized most of biology. Although this technique has the potential of elucidating early stages in an organism's phenotypic response to complex ecological interactions, to date, it has not been fully incorporated into ecological research. This is partially due to a lack of simple procedures for handling and analyzing the expression ratio (ER) data produced from microarrays. Results We describe an analysis of the sources of variation in ERs from 73 hybridized cDNA microarrays, each with 234 herbivory-elicited genes from the model ecological expression system, Nicotiana attenuata, using procedures that are commonly used in ecological research. Each gene is represented by two independently labeled PCR products and each product was arrayed in quadruplicate. We present a robust method of normalizing and analyzing ERs based on arbitrary thresholds and statistical criteria, and characterize a "norm of reaction" of ERs for 6 genes (4 of known function, 2 of unknown) with different ERs as determined across all analyzed arrays to provide a biologically-informed alternative to the use of arbitrary expression ratios in determining significance of expression. These gene-specific ERs and their variance (gene CV) were used to calculate array-based variances (array CV), which, in turn, were used to study the effects of array age, probe cDNA quantity and quality, and quality of spotted PCR products as estimates of technical variation. Cluster analysis and a Principal Component Analysis (PCA) were used to reveal associations among the transcriptional "imprints" of arrays hybridized with cDNA probes derived from mRNA from N. attenuata plants variously elicited and attacked by different herbivore species and from three congeners: N. quadrivalis, N. longiflora and N. clevelandii. Additionally, the PCA

  17. Maintenance Decision Based on Data Fusion of Aero Engines

    Directory of Open Access Journals (Sweden)

    Huawei Wang

    2013-01-01

    Full Text Available Maintenance has gained great importance as a support function for ensuring aero engine reliability and availability. Cost-effectiveness and risk control are two basic criteria for accurate maintenance. Given that aero engines have much condition monitoring data, this paper presents a new condition-based maintenance decision system that employs data fusion for improving the accuracy of reliability evaluation. A Bayesian linear model has been applied so that performance degradation evaluation of aero engines can be realized. A reliability evaluation model has been presented based on the gamma process, which achieves accurate evaluation by information fusion. In the reliability evaluation model, the shape parameter is estimated from the performance degradation evaluation result, and the scale parameter is estimated from failure, inspection, and repair information. What is more, with such reliability evaluation as input variables and by using particle swarm optimization (PSO), a stochastic optimization of maintenance decisions for aircraft engines has been presented, in which the effectiveness and the accuracy are demonstrated by a numerical example.
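    A gamma-process degradation model like the one named above can be evaluated by Monte Carlo simulation: degradation at time t is Gamma-distributed with a shape parameter growing linearly in t, and reliability is the probability that degradation stays below a failure threshold. All parameter values here are illustrative assumptions, not estimates from engine data.

```python
import random

def gamma_reliability(t, shape_rate, scale, threshold, n=20000, seed=42):
    """Monte Carlo estimate of R(t) = P(X(t) < threshold), where the
    cumulative degradation X(t) ~ Gamma(shape_rate * t, scale)."""
    rng = random.Random(seed)
    hits = sum(rng.gammavariate(shape_rate * t, scale) < threshold
               for _ in range(n))
    return hits / n

# Reliability decays as degradation accumulates toward the threshold
curve = [gamma_reliability(t, shape_rate=1.0, scale=1.0, threshold=10.0)
         for t in (2, 6, 12)]
```

    In the paper's setup the shape parameter would come from the degradation evaluation and the scale parameter from failure, inspection, and repair records; the resulting R(t) curve then feeds the PSO maintenance optimization.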

  18. Evidence-based surgery: Dissemination, communication, decision aids

    NARCIS (Netherlands)

    Knops, A.M.

    2013-01-01

    Surgeons are expected to make treatment decisions that are based on the best available evidence. Moreover, they are called to recognise that important decisions should also be shared with patients. While dissemination of evidence-based surgery and communication of evidence to patients have been

  19. Semantics-based plausible reasoning to extend the knowledge coverage of medical knowledge bases for improved clinical decision support

    OpenAIRE

    Mohammadhassanzadeh, Hossein; Van Woensel, William; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2017-01-01

    Background Capturing complete medical knowledge is challenging, often due to incomplete patient Electronic Health Records (EHR), but also because of valuable, tacit medical knowledge hidden away in physicians' experiences. To extend the coverage of incomplete medical knowledge-based systems beyond their deductive closure, and thus enhance their decision-support capabilities, we argue that innovative, multi-strategy reasoning approaches should be applied. In particular, plausible reasoning mech...

  20. Identification and optimization of classifier genes from multi-class earthworm microarray dataset.

    Directory of Open Access Journals (Sweden)

    Ying Li

    Full Text Available Monitoring, assessment and prediction of environmental risks that chemicals pose demand rapid and accurate diagnostic assays. A variety of toxicological effects have been associated with the explosive compounds TNT and RDX. One important goal of microarray experiments is to discover novel biomarkers for toxicity evaluation. We have developed an earthworm microarray containing 15,208 unique oligo probes and have used it to profile gene expression in 248 earthworms exposed to TNT, RDX or neither. We assembled a new machine learning pipeline consisting of several well-established feature filtering/selection and classification techniques to analyze the 248-array dataset in order to construct classifier models that can separate earthworm samples into three groups: control, TNT-treated, and RDX-treated. First, a total of 869 genes differentially expressed in response to TNT or RDX exposure were identified using a univariate statistical algorithm of class comparison. Then, decision tree-based algorithms were applied to select a subset of 354 classifier genes, which were ranked by their overall weight of significance. A multiclass support vector machine (MC-SVM) method and an unsupervised K-means clustering method were applied to independently refine the classifier, producing smaller subsets of 39 and 30 classifier genes, respectively, with 11 common genes being potential biomarkers. The combined 58 genes were considered the refined subset and used to build MC-SVM and clustering models with classification accuracies of 83.5% and 56.9%, respectively. This study demonstrates that the machine learning approach can be used to identify and optimize a small subset of classifier/biomarker genes from high-dimensional datasets and generate classification models of acceptable precision for multiple classes.

  1. Hesitant Fuzzy Thermodynamic Method for Emergency Decision Making Based on Prospect Theory.

    Science.gov (United States)

    Ren, Peijia; Xu, Zeshui; Hao, Zhinan

    2017-09-01

    Due to the timeliness of emergency response and much unknown information in emergency situations, this paper proposes a method for emergency decision making which can comprehensively reflect the emergency decision making process. By utilizing hesitant fuzzy elements to represent the fuzziness of the objects and the hesitant thought of the experts, this paper introduces the negative exponential function into prospect theory so as to portray the psychological behaviors of the experts, which transforms the hesitant fuzzy decision matrix into the hesitant fuzzy prospect decision matrix (HFPDM) according to the expectation-levels. Then, this paper applies the energy and the entropy in thermodynamics to take the quantity and the quality of the decision values into account, and defines the thermodynamic decision making parameters based on the HFPDM. Accordingly, a whole procedure for emergency decision making is conducted. What is more, some experiments are designed to demonstrate and validate the emergency decision making procedure. Last but not least, this paper makes a case study about the emergency decision making in the fire and explosion at the Port Group in Tianjin Binhai New Area, which manifests the effectiveness and practicability of the proposed method.
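    The prospect-theory step above maps raw deviations from an expectation level to subjective values. One common negative-exponential form is sketched below; the exact functional form and parameters in the paper may differ, and the values chosen here (including the Tversky-Kahneman loss-aversion coefficient 2.25) are illustrative.

```python
import math

def prospect_value(x, a=0.5, b=0.5, lam=2.25):
    """Negative-exponential prospect-theory value function.

    x is the deviation from the expectation level (the reference
    point). Gains are valued concavely (risk-averse); losses are
    valued convexly and amplified by the loss-aversion coefficient
    lam > 1. Parameter values are illustrative.
    """
    if x >= 0:
        return 1.0 - math.exp(-a * x)
    return -lam * (1.0 - math.exp(b * x))
```

    Applying such a function element-wise to the hesitant fuzzy decision matrix, relative to the expectation levels, is what turns it into a prospect decision matrix (HFPDM-style), since equal-sized losses then weigh more than gains.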

  2. A hybrid gene selection approach for microarray data classification using cellular learning automata and ant colony optimization.

    Science.gov (United States)

    Vafaee Sharbaf, Fatemeh; Mosafer, Sara; Moattar, Mohammad Hossein

    2016-06-01

    This paper proposes an approach for gene selection in microarray data. The proposed approach consists of a primary filter approach using the Fisher criterion, which reduces the initial genes and hence the search space and time complexity. Then, a wrapper approach based on cellular learning automata (CLA) optimized with the ant colony method (ACO) is used to find the set of features which improve the classification accuracy. CLA is applied due to its capability to learn and model complicated relationships. The selected features from the last phase are evaluated using the ROC curve and the most effective while smallest feature subset is determined. The classifiers evaluated in the proposed framework are K-nearest neighbor, support vector machine and naïve Bayes. The proposed approach is evaluated on 4 microarray datasets. The evaluations confirm that the proposed approach can find the smallest subset of genes while approaching the maximum accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
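    The primary Fisher-criterion filter stage can be sketched as follows: score each gene by between-class separation over within-class scatter and keep the top-k before handing off to the CLA/ACO wrapper. This is a generic two-class sketch, not the paper's implementation; the tie-breaking and the small epsilon guard are assumptions.

```python
def fisher_score(xs, ys):
    """Fisher criterion for one gene over two classes:
    (difference of class means)^2 / (sum of class variances)."""
    def stats(v):
        m = sum(v) / len(v)
        return m, sum((x - m) ** 2 for x in v) / len(v)
    m0, v0 = stats(xs)
    m1, v1 = stats(ys)
    return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)  # epsilon avoids /0

def filter_genes(expr, labels, k):
    """expr: list of per-gene expression vectors across samples;
    labels: 0/1 class label per sample. Returns indices of the
    top-k genes by Fisher score."""
    scores = []
    for gi, row in enumerate(expr):
        xs = [v for v, y in zip(row, labels) if y == 0]
        ys = [v for v, y in zip(row, labels) if y == 1]
        scores.append((fisher_score(xs, ys), gi))
    return [gi for _, gi in sorted(scores, reverse=True)[:k]]
```

    Filtering first in this way is what keeps the wrapper search tractable: the ACO-optimized CLA then explores subsets of only the k surviving genes.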

  3. Temperature Gradient Effect on Gas Discrimination Power of a Metal-Oxide Thin-Film Sensor Microarray

    Directory of Open Access Journals (Sweden)

    Joachim Goschnick

    2004-05-01

    Full Text Available Abstract: The paper presents results concerning the effect of spatially inhomogeneous operating temperature on the gas discrimination power of a gas-sensor microarray, with the latter based on a thin SnO2 film employed in the KAMINA electronic nose. Three different temperature distributions over the substrate are discussed: a nearly homogeneous one and two temperature gradients, equal to approx. 3.3 °C/mm and 6.7 °C/mm, applied across the sensor elements (segments) of the array. The gas discrimination power of the microarray is judged by using the Mahalanobis distance, in the LDA (Linear Discrimination Analysis) coordinate system, between the data clusters obtained from the response of the microarray to four target vapors: ethanol, acetone, propanol and ammonia. It is shown that the application of a temperature gradient increases the gas discrimination power of the microarray by up to 35%.
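    The separation measure used above, the Mahalanobis distance between class clusters, can be sketched in two dimensions (e.g. the first two LDA coordinates) with an explicit 2x2 pooled-covariance inverse. This is a generic sketch, not the KAMINA analysis pipeline.

```python
def mahalanobis2(mu_a, mu_b, pooled_cov):
    """Mahalanobis distance between two 2-D cluster means, given a
    pooled 2x2 covariance matrix ((s00, s01), (s10, s11))."""
    d0, d1 = mu_a[0] - mu_b[0], mu_a[1] - mu_b[1]
    (a, b), (c, d) = pooled_cov
    det = a * d - b * c
    # closed-form inverse of a 2x2 matrix
    inv = ((d / det, -b / det), (-c / det, a / det))
    q = (d0 * (inv[0][0] * d0 + inv[0][1] * d1)
         + d1 * (inv[1][0] * d0 + inv[1][1] * d1))
    return q ** 0.5
```

    With an identity covariance this reduces to the Euclidean distance; with anisotropic cluster scatter it down-weights directions in which the vapor clusters are already spread out, which is why it is the natural yardstick for discrimination power in LDA space.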

  4. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

    Directory of Open Access Journals (Sweden)

    Wang Jelai

    2006-02-01

    Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed the Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.
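    The single-hypothesis power calculation the abstract mentions can be sketched with the standard normal-approximation formula for a two-sample comparison of means. This is the textbook calculation, not the PowerAtlas's empirical-Bayes machinery; using a stringent per-gene alpha as a stand-in for multiple-testing correction is an assumption.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.001, power=0.9):
    """Normal-approximation sample size per group for detecting a mean
    difference delta with common standard deviation sigma:
        n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2
    Rounded up to the next whole replicate."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    return math.ceil(2 * (sigma * (za + zb) / delta) ** 2)
```

    Halving the detectable effect size quadruples the required replicates, which is why pilot-based variance estimates (or a resource like the PowerAtlas when pilot data are missing) matter so much at the planning stage.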

  5. GOBO: gene expression-based outcome for breast cancer online.

    Directory of Open Access Journals (Sweden)

    Markus Ringnér

    Full Text Available Microarray-based gene expression analysis holds promise of improving prognostication and treatment decisions for breast cancer patients. However, the heterogeneity of breast cancer emphasizes the need for validation of prognostic gene signatures in larger sample sets stratified into relevant subgroups. Here, we describe a multifunctional user-friendly online tool, GOBO (http://co.bmc.lu.se/gobo), allowing a range of different analyses to be performed in an 1881-sample breast tumor data set, and a 51-sample breast cancer cell line set, both generated on Affymetrix U133A microarrays. GOBO supports a wide range of applications including: (1) rapid assessment of gene expression levels in subgroups of breast tumors and cell lines, (2) identification of co-expressed genes for creation of potential metagenes, (3) association with outcome for gene expression levels of single genes, sets of genes, or gene signatures in multiple subgroups of the 1881-sample breast cancer data set. The design and implementation of GOBO facilitate easy incorporation of additional query functions and applications, as well as additional data sets irrespective of tumor type and array platform.

  6. Comparing transformation methods for DNA microarray data

    Directory of Open Access Journals (Sweden)

    Zwinderman Aeilko H

    2004-06-01

    Full Text Available Abstract Background When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification, the signal intensities are usually transformed and normalized in several steps in order to improve comparability and signal/noise ratio. These steps may include subtraction of an estimated background signal, subtracting the reference signal, smoothing (to account for nonlinear measurement effects), and more. Different authors use different approaches, and it is generally not clear to users which method they should prefer. Results We used the ratio between biological variance and measurement variance (which is an F-like statistic) as a quality measure for transformation methods, and we demonstrate a method for maximizing that variance ratio on real data. We explore a number of transformation issues, including Box-Cox transformation, baseline shift, partial subtraction of the log-reference signal and smoothing. It appears that the optimal choice of parameters for the transformation methods depends on the data. Further, the behavior of the variance ratio, under the null hypothesis of zero biological variance, appears to depend on the choice of parameters. Conclusions The use of replicates in microarray experiments is important. Adjustment for the null-hypothesis behavior of the variance ratio is critical to the selection of transformation method.
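    The F-like quality measure described above can be sketched per gene as the variance of the per-sample means (biological variance) divided by the mean within-sample variance of the technical replicates (measurement variance). This is a simplified one-gene sketch under the assumption of equal replicate counts, not the authors' full estimation procedure.

```python
def variance_ratio(groups):
    """F-like quality measure for one gene.

    groups: one list of technical-replicate intensities per biological
    sample. Returns (variance of the per-sample means) divided by
    (mean within-sample replicate variance); higher is better, since a
    good transformation preserves biological signal while shrinking
    measurement noise.
    """
    means = [sum(g) / len(g) for g in groups]
    grand = sum(means) / len(means)
    biological = sum((m - grand) ** 2 for m in means) / (len(means) - 1)
    measurement = sum(
        sum((x - m) ** 2 for x in g) / (len(g) - 1)
        for g, m in zip(groups, means)) / len(groups)
    return biological / measurement
```

    Maximizing this ratio over transformation parameters (e.g. the Box-Cox exponent or a baseline shift) is the selection criterion the abstract proposes, subject to the null-hypothesis adjustment it warns about.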

  7. Development and validation of a flax (Linum usitatissimum L.) gene expression oligo microarray.

    Science.gov (United States)

    Fenart, Stéphane; Ndong, Yves-Placide Assoumou; Duarte, Jorge; Rivière, Nathalie; Wilmer, Jeroen; van Wuytswinkel, Olivier; Lucau, Anca; Cariou, Emmanuelle; Neutelings, Godfrey; Gutierrez, Laurent; Chabbert, Brigitte; Guillot, Xavier; Tavernier, Reynald; Hawkins, Simon; Thomasset, Brigitte

    2010-10-21

    Flax (Linum usitatissimum L.) has been cultivated for around 9,000 years and is therefore one of the oldest cultivated species. Today, flax is still grown for its oil (oil-flax or linseed cultivars) and its cellulose-rich fibres (fibre-flax cultivars) used for high-value linen garments and composite materials. Despite the wide industrial use of flax-derived products and our current understanding of the regulation of both wood fibre production and oil biosynthesis, more information must be acquired in both domains. Recent advances in genomics are now providing opportunities to improve our fundamental knowledge of these complex processes. In this paper we report the development and validation of a high-density oligo microarray platform dedicated to gene expression analyses in flax. Nine different RNA samples obtained from flax inner- and outer-stems, seeds, leaves and roots were used to generate a collection of 1,066,481 ESTs by massive parallel pyrosequencing. Sequences were assembled into 59,626 unigenes, and 48,021 sequences were selected for oligo design and high-density microarray (Nimblegen 385K) fabrication with eight non-overlapping 25-mer oligos per unigene. Eighteen independent experiments were used to evaluate the hybridization quality, precision, specificity and accuracy, and all results confirmed the high technical quality of our microarray platform. Cross-validation of microarray data was carried out using quantitative RT-PCR (qRT-PCR). Nine target genes were selected on the basis of microarray results and reflected the whole range of fold change (both up-regulated and down-regulated genes in different samples). A statistically significant positive correlation was obtained comparing expression levels for each target gene across all biological replicates in both the qRT-PCR and microarray results. Further experiments illustrated the capacity of our arrays to detect differential gene expression in a variety of flax tissues as well as between two contrasting flax varieties.

  8. Development and validation of a flax (Linum usitatissimum L.) gene expression oligo microarray

    Directory of Open Access Journals (Sweden)

    Gutierrez Laurent

    2010-10-01

    Full Text Available Abstract Background Flax (Linum usitatissimum L.) has been cultivated for around 9,000 years and is therefore one of the oldest cultivated species. Today, flax is still grown for its oil (oil-flax or linseed cultivars) and its cellulose-rich fibres (fibre-flax cultivars) used for high-value linen garments and composite materials. Despite the wide industrial use of flax-derived products and our current understanding of the regulation of both wood fibre production and oil biosynthesis, more information must be acquired in both domains. Recent advances in genomics are now providing opportunities to improve our fundamental knowledge of these complex processes. In this paper we report the development and validation of a high-density oligo microarray platform dedicated to gene expression analyses in flax. Results Nine different RNA samples obtained from flax inner- and outer-stems, seeds, leaves and roots were used to generate a collection of 1,066,481 ESTs by massive parallel pyrosequencing. Sequences were assembled into 59,626 unigenes, and 48,021 sequences were selected for oligo design and high-density microarray (Nimblegen 385K) fabrication with eight non-overlapping 25-mer oligos per unigene. Eighteen independent experiments were used to evaluate the hybridization quality, precision, specificity and accuracy, and all results confirmed the high technical quality of our microarray platform. Cross-validation of microarray data was carried out using quantitative RT-PCR (qRT-PCR). Nine target genes were selected on the basis of microarray results and reflected the whole range of fold change (both up-regulated and down-regulated genes in different samples). A statistically significant positive correlation was obtained comparing expression levels for each target gene across all biological replicates in both the qRT-PCR and microarray results. Further experiments illustrated the capacity of our arrays to detect differential gene expression in a variety of flax tissues as well

  9. Experiential Knowledge Complements an LCA-Based Decision Support Framework

    Directory of Open Access Journals (Sweden)

    Heng Yi Teah

    2015-09-01

    Full Text Available A shrimp farmer in Taiwan practices innovation through trial-and-error for better income and a better environment, but such farmer-based innovation sometimes fails because the biological mechanism is unclear. Systematic field experimentation and laboratory research are often too costly, and simulating ground conditions is often too challenging. To solve this dilemma, we propose a decision support framework that explicitly utilizes farmer experiential knowledge through a participatory approach to alternatively estimate prospective change in shrimp farming productivity, and to co-design options for improvement. Data obtained from the farmer enable us to quantitatively analyze the production cost and greenhouse gas (GHG) emission with a life cycle assessment (LCA) methodology. We used semi-quantitative graphical representations of indifference curves and mixing triangles to compare and show better options for the farmer. Our results empower the farmer to make decisions more systematically and reliably based on the frequency of heterotrophic bacteria application and the revision of feed input. We argue that experiential knowledge may be less accurate due to its dependence on varying levels of farmer experience, but this knowledge is a reasonable alternative for immediate decision-making. More importantly, our developed framework advances the scope of LCA application to support practically important yet scientifically uncertain cases.

  10. Normalization and gene p-value estimation: issues in microarray data processing.

    Science.gov (United States)

    Fundel, Katrin; Küffner, Robert; Aigner, Thomas; Zimmer, Ralf

    2008-05-28

    Numerous methods exist for basic processing, e.g. normalization, of microarray gene expression data. These methods have an important effect on the final analysis outcome. Therefore, it is crucial to select methods appropriate for a given dataset in order to assure the validity and reliability of expression data analysis. Furthermore, biological interpretation requires expression values for genes, which are often represented by several spots or probe sets on a microarray. How to best integrate spot/probe set values into gene values has so far been a somewhat neglected problem. We present a case study comparing different between-array normalization methods with respect to the identification of differentially expressed genes. Our results show that it is feasible and necessary to use prior knowledge on gene expression measurements to select an adequate normalization method for the given data. Furthermore, we provide evidence that combining spot/probe set p-values into gene p-values for detecting differentially expressed genes has advantages compared to combining expression values for spots/probe sets into gene expression values. The comparison of different methods suggests using Stouffer's method for this purpose. The study has been conducted on gene expression experiments investigating human joint cartilage samples of osteoarthritis related groups: a cDNA microarray (83 samples, four groups) and an Affymetrix (26 samples, two groups) data set. The apparently straightforward steps of gene expression data analysis, e.g. between-array normalization and detection of differentially regulated genes, can be accomplished by numerous different methods. We analyzed multiple methods and the possible effects and thereby demonstrate the importance of the individual decisions taken during data processing. We give guidelines for evaluating normalization outcomes. 
An overview of these effects via appropriate measures and plots compared to prior knowledge is essential for the biological
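
    Stouffer's method, which the abstract recommends for combining spot/probe-set p-values into gene-level p-values, is straightforward to sketch. Below is a minimal one-sided version using only the standard library; the probe-set p-values are made up for illustration.

```python
from math import sqrt
from statistics import NormalDist

def stouffer(pvalues):
    """Combine per-spot/probe-set p-values into one gene-level p-value
    using the one-sided Stouffer z-score method."""
    nd = NormalDist()
    # Convert each p-value to a z-score, sum, and renormalize
    z = sum(nd.inv_cdf(1.0 - p) for p in pvalues) / sqrt(len(pvalues))
    return 1.0 - nd.cdf(z)

# Three probe sets mapping to the same gene (made-up p-values)
combined = stouffer([0.04, 0.01, 0.10])
```

    When several probe sets show consistent moderate evidence, the combined gene-level p-value is smaller than any individual one, which is the advantage over picking a single representative probe set.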

  11. Activity Based Costing and Product Pricing Decision: the Nigerian Case

    Directory of Open Access Journals (Sweden)

    Ebipanipre Gabriel Mieseigha

    2014-06-01

    Full Text Available This paper examined activity based costing and product pricing decisions in Nigeria so as to ascertain whether activity based costing has the ability to enhance profitability and control costs of manufacturing firms. Towards this end, a multiple correlation and regression estimation technique was used in analyzing the data obtained in the study. The study found that activity based costing affects product costing and pricing decisions. In addition, the results showed that improved profitability and cost control can be achieved by implementing an activity based costing approach in manufacturing firms. The implication is that the traditional costing approach fails in many pricing situations by arbitrarily allocating indirect costs, whereas activity based costing helps in allocating indirect costs accurately. Thus, it was recommended amongst others that activity based costing needs to be practiced, maintained and implemented by manufacturing firms since it has a broad range of uses for a wide variety of company functions and operations in the areas of process analysis, strategy support, time-based accounting, monitoring wastage, as well as quality and productivity management.

  12. Decision Aggregation in Distributed Classification by a Transductive Extension of Maximum Entropy/Improved Iterative Scaling

    Directory of Open Access Journals (Sweden)

    George Kesidis

    2008-06-01

    Full Text Available In many ensemble classification paradigms, the function which combines local/base classifier decisions is learned in a supervised fashion. Such methods require common labeled training examples across the classifier ensemble. However, in some scenarios where an ensemble solution is necessitated, common labeled data may not exist: (i) legacy/proprietary classifiers, and (ii) spatially distributed and/or multiple-modality sensors. In such cases, it is standard to apply fixed (untrained) decision aggregation such as voting, averaging, or naive Bayes rules. In recent work, an alternative transductive learning strategy was proposed. There, decisions on test samples were chosen aiming to satisfy constraints measured by each local classifier. This approach was shown to reliably correct for class prior mismatch and to robustly account for classifier dependencies. Significant gains in accuracy over fixed aggregation rules were demonstrated. There are two main limitations of that work. First, feasibility of the constraints was not guaranteed. Second, heuristic learning was applied. Here, we overcome these problems via a transductive extension of maximum entropy/improved iterative scaling for aggregation in distributed classification. This method is shown to achieve improved decision accuracy over the earlier transductive approach and fixed rules on a number of UC Irvine datasets.
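
    The fixed (untrained) aggregation rules that the transductive method is compared against are easy to sketch. A minimal illustration of majority voting and posterior averaging, with hypothetical classifier outputs:

```python
import numpy as np

def majority_vote(label_matrix):
    """Fixed aggregation by voting: each row holds one local classifier's
    predicted labels for the test samples."""
    labels = np.asarray(label_matrix)
    return np.array([np.bincount(col).argmax() for col in labels.T])

def average_posteriors(prob_stack):
    """Fixed aggregation by averaging: mean of the per-classifier
    class-probability matrices, then argmax per sample."""
    return np.mean(prob_stack, axis=0).argmax(axis=1)

# Three local classifiers, four test samples (hypothetical outputs)
votes = [[0, 1, 1, 0],
         [0, 1, 0, 0],
         [1, 1, 1, 0]]
fused = majority_vote(votes)   # -> [0, 1, 1, 0]

# Two classifiers, two test samples, two classes (hypothetical posteriors)
probs = [[[0.9, 0.1], [0.2, 0.8]],
         [[0.6, 0.4], [0.4, 0.6]]]
avg = average_posteriors(probs)  # -> [0, 1]
```

    Both rules need no common labeled data, which is why they are the standard fallback in the distributed settings the abstract describes; the transductive approach improves on them by additionally enforcing per-classifier constraints on the test set.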

  13. Development of an evidence-based decision pathway for vestibular schwannoma treatment options.

    Science.gov (United States)

    Linkov, Faina; Valappil, Benita; McAfee, Jacob; Goughnour, Sharon L; Hildrew, Douglas M; McCall, Andrew A; Linkov, Igor; Hirsch, Barry; Snyderman, Carl

    To integrate multiple sources of clinical information with patient feedback to build an evidence-based decision support model that facilitates treatment selection for patients suffering from vestibular schwannomas (VS). This was a mixed methods study utilizing focus group and survey methodology to solicit feedback on factors important for making treatment decisions among patients. Two 90-minute focus groups were conducted by an experienced facilitator. Previously diagnosed VS patients were recruited by clinical investigators at the University of Pittsburgh Medical Center (UPMC). Classical content analysis was used for focus group data analysis. Providers were recruited from practices within the UPMC system and were surveyed using Delphi methods. This information can provide a basis for a multi-criteria decision analysis (MCDA) framework to develop a treatment decision support system for patients with VS. Eight themes were derived from these data (focus group + surveys): doctor/health care system, side effects, effectiveness of treatment, anxiety, mortality, family/other people, quality of life, and post-operative symptoms. These data, as well as feedback from physicians, were utilized in building a multi-criteria decision model. The study illustrated steps involved in the development of a decision support model that integrates evidence-based data and patient values to select treatment alternatives. Studies focusing on the actual development of the decision support technology for this group of patients are needed, as decisions are highly multifactorial. Such tools have the potential to improve decision making for complex medical problems with alternate treatment pathways. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. An Integrated Web-based Decision Support System in Disaster Risk Management

    Science.gov (United States)

    Aye, Z. C.; Jaboyedoff, M.; Derron, M. H.

    2012-04-01

    Nowadays, web-based decision support systems (DSS) play an essential role in disaster risk management because they help decision makers improve their performance and make better decisions without having to solve complex problems themselves, while reducing demands on human resources and time. Since the decision-making process is one of the main factors that highly influence the damages and losses of society, it is extremely important to make the right decisions at the right time by combining available risk information with advanced web technology for Geographic Information Systems (GIS) and Decision Support Systems (DSS). This paper presents an integrated web-based decision support system (DSS) that shows how to use risk information in risk management efficiently and effectively, while highlighting the importance of a decision support system in the field of risk reduction. Beyond conventional systems, it allows users to define their own strategies, from risk identification to risk reduction, which leads to an integrated approach in risk management. In addition, it also considers the complexity of a changing environment from different perspectives and sectors, with diverse stakeholders involved in the development process. The aim of this platform is to contribute towards the natural hazards and geosciences community by developing an open-source web platform where users can analyze risk profiles and make decisions by performing cost-benefit analysis, Environmental Impact Assessment (EIA) and Strategic Environmental Assessment (SEA) with the support of other tools and resources provided. There are different access rights to the system depending on the user profiles and their responsibilities. The system is still under development, and the current version provides map viewing, basic GIS functionality, assessment of important infrastructures (e.g. bridges, hospitals, etc.) affected by landslides and visualization of the impact

  15. Shared Decisions That Count.

    Science.gov (United States)

    Schlechty, Phillip C.

    1993-01-01

    Advocates of participatory leadership, site-based management, and decentralization often assume that changing decision-making group composition will automatically improve the quality of decisions being made. Stakeholder satisfaction does not guarantee quality results. This article offers a framework for moving the decision-making discussion from…

  16. Bayesian meta-analysis models for microarray data: a comparative study

    Directory of Open Access Journals (Sweden)

    Song Joon J

    2007-03-01

    Full Text Available Abstract Background With the growing abundance of microarray data, statistical methods are increasingly needed to integrate results across studies. Two common approaches for meta-analysis of microarrays include either combining gene expression measures across studies or combining summaries such as p-values, probabilities or ranks. Here, we compare two Bayesian meta-analysis models that are analogous to these methods. Results Two Bayesian meta-analysis models for microarray data have recently been introduced. The first model combines standardized gene expression measures across studies into an overall mean, accounting for inter-study variability, while the second combines probabilities of differential expression without combining expression values. Both models produce the gene-specific posterior probability of differential expression, which is the basis for inference. Since the standardized expression integration model includes inter-study variability, it may improve accuracy of results versus the probability integration model. However, due to the small number of studies typical in microarray meta-analyses, the variability between studies is challenging to estimate. The probability integration model eliminates the need to model variability between studies, and thus its implementation is more straightforward. We found in simulations of two and five studies that combining probabilities outperformed combining standardized gene expression measures for three comparison values: the percent of true discovered genes in meta-analysis versus individual studies; the percent of true genes omitted in meta-analysis versus separate studies; and the number of true discovered genes for fixed levels of Bayesian false discovery. We identified similar results when pooling two independent studies of Bacillus subtilis. We assumed that each study was produced from the same microarray platform with only two conditions: a treatment and control, and that the data sets
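
    The expression-integration idea (pooling standardized per-study effects into an overall mean) can be illustrated in a simplified, non-Bayesian form. The sketch below is an inverse-variance weighted mean that ignores the inter-study variance term of the actual hierarchical model; the study effects and variances are made up.

```python
import numpy as np

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean of per-study standardized effects.
    A non-Bayesian simplification of the expression-integration model,
    ignoring its inter-study variance term."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    return float((weights * effects).sum() / weights.sum())

# Two studies reporting a standardized effect for the same gene (made up)
est = pooled_effect([1.2, 0.8], [0.04, 0.09])
# The pooled estimate lies between the study effects, closer to the
# more precise (lower-variance) study
```

    Estimating the extra between-study variance component on top of this, with only two to five studies, is exactly the difficulty the abstract cites as motivation for the probability-integration alternative.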

  17. Optimizing perioperative decision making: improved information for clinical workflow planning.

    Science.gov (United States)

    Doebbeling, Bradley N; Burton, Matthew M; Wiebke, Eric A; Miller, Spencer; Baxter, Laurence; Miller, Donald; Alvarez, Jorge; Pekny, Joseph

    2012-01-01

    Perioperative care is complex and involves multiple interconnected subsystems. Delayed starts, prolonged cases and overtime are common. Surgical procedures account for 40-70% of hospital revenues and 30-40% of total costs. Most planning and scheduling in healthcare is done without modern planning tools, which have potential for improving access by assisting in operations planning support. We identified key planning scenarios of interest to perioperative leaders in order to examine the feasibility of applying combinatorial optimization software to solve some of those planning issues in the operative setting. Perioperative leaders desire a broad range of tools for planning and assessing alternate solutions. Our models generated feasible solutions that varied as expected based on resource and policy assumptions, and found better utilization of scarce resources. Combinatorial optimization modeling can effectively evaluate alternatives to support key decisions for planning clinical workflow and improving care efficiency and satisfaction.

  18. Bioinformatics and Microarray Data Analysis on the Cloud.

    Science.gov (United States)

    Calabrese, Barbara; Cannataro, Mario

    2016-01-01

    High-throughput platforms such as microarray, mass spectrometry, and next-generation sequencing are producing an increasing volume of omics data that needs large data storage and computing power. Cloud computing offers massively scalable computing and storage, data sharing, and on-demand anytime and anywhere access to resources and applications, and thus it may represent the key technology for facing those issues. In fact, in recent years it has been adopted for the deployment of different bioinformatics solutions and services both in academia and in industry. Despite this, cloud computing presents several issues regarding the security and privacy of data, which are particularly important when analyzing patients' data, such as in personalized medicine. This chapter reviews the main academic and industrial cloud-based bioinformatics solutions, with a special focus on microarray data analysis, and underlines the main issues and problems related to the use of such platforms for the storage and analysis of patients' data.

  19. An Overview of DNA Microarray Grid Alignment and Foreground Separation Approaches

    Directory of Open Access Journals (Sweden)

    Bajcsy Peter

    2006-01-01

    Full Text Available This paper overviews DNA microarray grid alignment and foreground separation approaches. Microarray grid alignment and foreground separation are the basic processing steps of DNA microarray images that affect the quality of gene expression information, and hence impact our confidence in any data-derived biological conclusions. Thus, understanding microarray data processing steps becomes critical for performing optimal microarray data analysis. In the past, the grid alignment and foreground separation steps have not been covered extensively in the survey literature. We present several classifications of existing algorithms, and describe the fundamental principles of these algorithms. Challenges related to automation and reliability of processed image data are outlined at the end of this overview paper.

  20. Global view of the mechanisms of improved learning and memory capability in mice with music-exposure by microarray.

    Science.gov (United States)

    Meng, Bo; Zhu, Shujia; Li, Shijia; Zeng, Qingwen; Mei, Bing

    2009-08-28

    Previous research has shown music to be beneficial for improving learning and memory in many species, including humans. Although some genes have been identified as contributing to the mechanisms, the effect of music is believed to be manifold and to involve a complex regulatory network. To further understand the mechanisms, we exposed mice to classical music for one month. The subsequent behavioral experiments showed improvement of spatial learning capability and elevation of fear-motivated memory in the music-exposed mice as compared to naïve mice. Meanwhile, we applied microarrays to compare the gene expression profiles of the hippocampus and cortex between music-exposed and naïve mice. The results showed that approximately 454 genes in cortex (200 up-regulated and 254 down-regulated) and 437 genes in hippocampus (256 up-regulated and 181 down-regulated) were significantly affected in music-exposed mice; these genes were mainly involved in ion channel activity and/or synaptic transmission, cytoskeleton, development, transcription, and hormone activity. Our work may provide some hints for better understanding the effects of music on learning and memory.

  1. Genes involved in immunity and apoptosis are associated with human presbycusis based on microarray analysis.

    Science.gov (United States)

    Dong, Yang; Li, Ming; Liu, Puzhao; Song, Haiyan; Zhao, Yuping; Shi, Jianrong

    2014-06-01

    Genes involved in immunity and apoptosis were associated with human presbycusis. CCR3 and GILZ played an important role in the pathogenesis of presbycusis, probably through regulating chemokine receptor, T-cell apoptosis, or T-cell activation pathways. To identify genes associated with human presbycusis and explore the molecular mechanism of presbycusis. Hearing function was tested by pure-tone audiometry. Microarray analysis was performed to identify presbycusis-correlated genes by Illumina Human-6 BeadChip using the peripheral blood samples of subjects. To identify biological process categories and pathways associated with presbycusis-correlated genes, bioinformatics analysis was carried out by Gene Ontology Tree Machine (GOTM) and database for annotation, visualization, and integrated discovery (DAVID). Quantitative RT-PCR (qRT-PCR) was used to validate the microarray data. Microarray analysis identified 469 up-regulated genes and 323 down-regulated genes. Both the dominant biological processes by Gene Ontology (GO) analysis and the enriched pathways by Kyoto encyclopedia of genes and genomes (KEGG) and BIOCARTA showed that genes involved in immunity and apoptosis were associated with presbycusis. In addition, CCR3, GILZ, CXCL10, and CX3CR1 genes showed consistent difference between groups for both the gene chip and qRT-PCR data. The differences of CCR3 and GILZ between presbycusis patients and controls were statistically significant (p < 0.05).

  2. A comprehensive comparison of random forests and support vector machines for microarray-based cancer classification

    Directory of Open Access Journals (Sweden)

    Wang Lily

    2008-07-01

    Full Text Available Abstract Background Cancer diagnosis and clinical outcome prediction are among the most important emerging applications of gene expression microarray technology with several molecular signatures on their way toward clinical deployment. Use of the most accurate classification algorithms available for microarray gene expression data is a critical ingredient in order to develop the best possible molecular signatures for patient care. As suggested by a large body of literature to date, support vector machines can be considered "best of class" algorithms for classification of such data. Recent work, however, suggests that random forest classifiers may outperform support vector machines in this domain. Results In the present paper we identify methodological biases of prior work comparing random forests and support vector machines and conduct a new rigorous evaluation of the two algorithms that corrects these limitations. Our experiments use 22 diagnostic and prognostic datasets and show that support vector machines outperform random forests, often by a large margin. Our data also underlines the importance of sound research design in benchmarking and comparison of bioinformatics algorithms. Conclusion We found that both on average and in the majority of microarray datasets, random forests are outperformed by support vector machines both in the settings when no gene selection is performed and when several popular gene selection methods are used.
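
    A minimal version of such a benchmark can be run with scikit-learn. This is an illustrative sketch, not the authors' evaluation protocol: the synthetic data stands in for a microarray set with many more features than samples, and the cross-validation setup is simplified.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a microarray set: far more features than samples
X, y = make_classification(n_samples=100, n_features=500, n_informative=20,
                           random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="linear"))
rf = RandomForestClassifier(n_estimators=200, random_state=0)

# 5-fold cross-validated accuracy for each classifier
svm_acc = cross_val_score(svm, X, y, cv=5).mean()
rf_acc = cross_val_score(rf, X, y, cv=5).mean()
```

    The paper's point is that such comparisons are only meaningful with sound design across many datasets; a single synthetic run like this says nothing about which algorithm is better in general.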

  3. Improving Land Use/Land Cover Classification by Integrating Pixel Unmixing and Decision Tree Methods

    Directory of Open Access Journals (Sweden)

    Chao Yang

    2017-11-01

    Full Text Available Decision tree classification is one of the most efficient methods for obtaining land use/land cover (LULC) information from remotely sensed imagery. However, traditional decision tree classification methods cannot effectively eliminate the influence of mixed pixels. This study aimed to integrate pixel unmixing and decision tree methods to improve LULC classification by removing mixed pixel influence. The abundance and minimum noise fraction (MNF) results obtained from mixed pixel decomposition were added to the decision tree multi-features using a three-dimensional (3D) terrain model, which was created using an image-fused digital elevation model (DEM), to select training samples (ROIs) and improve ROI separability. A Landsat-8 OLI image of the Yunlong Reservoir Basin in Kunming was used to test this proposed method. Study results showed that the Kappa coefficient and the overall accuracy of the integrated pixel unmixing and decision tree method increased by 0.093 and 10%, respectively, as compared with the original decision tree method. This proposed method could effectively eliminate the influence of mixed pixels and improve the accuracy in complex LULC classifications.

  4. An economic decision framework using modeling for improving aquifer remediation design

    International Nuclear Information System (INIS)

    James, B.R.; Gwo, J.P.; Toran, L.E.

    1995-11-01

    Reducing cost is a critical challenge facing environmental remediation today. One of the most effective ways of reducing costs is to improve decision-making. This can range from choosing more cost-effective remediation alternatives (for example, determining whether a groundwater contamination plume should be remediated or not) to improving data collection (for example, determining when data collection should stop). Uncertainty in site conditions presents a major challenge for effective decision-making. We present a framework for increasing the effectiveness of remedial design decision-making at groundwater contamination sites where there is uncertainty in many parameters that affect remediation design. The objective is to provide an easy-to-use economic framework for making remediation decisions. The presented framework is used to 1) select the best remedial design from a suite of possible ones, 2) estimate if additional data collection is cost-effective, and 3) determine the most important parameters to be sampled. The framework is developed by combining elements from Latin-Hypercube simulation of contaminant transport, economic risk-cost-benefit analysis, and Regional Sensitivity Analysis (RSA).

  5. Improving shared decision-making in chronic lymphocytic leukemia through multidisciplinary education.

    Science.gov (United States)

    Rocque, Gabrielle B; Williams, Courtney P; Halilova, Karina I; Borate, Uma; Jackson, Bradford E; Van Laar, Emily S; Pisu, Maria; Butler, Thomas W; Davis, Randall S; Mehta, Amitkumar; Knight, Sara J; Safford, Monika M

    2018-03-01

    New treatments for chronic lymphocytic leukemia (CLL) with excellent response rates and varying toxicity profiles have emerged in recent years, creating an opportunity for a patient's personal preferences to contribute to treatment decisions. We conducted a prospective, quasi-experimental pre- and post-evaluation of a multilevel educational program and its impact on knowledge of CLL and shared decision-making (SDM). We educated patients, lay navigators, nurses/advanced practice providers (APPs), and physicians. Patients were evaluated for change in patient activation, distress, desired role in decision-making, perception of decision-making, satisfaction with oncologist explanation of treatment choice, and knowledge of CLL. Lay navigators, nurses/APPs, and physicians were evaluated for change in CLL knowledge and perception of decision-making. Forty-four patients, 33 lay navigators, 27 nurses/APPs, and 27 physicians participated in the educational program. We observed trends toward improved patient activation, with 68% before education versus 76% after education reporting a Patient Activation Measure (PAM) score of 3 or 4. The percentage of patients desiring and perceiving SDM trended upward from 47% to 67% and from 35% to 49%, respectively. The percentage of patients understanding that CLL is incurable increased from 80% to 90%, as did reporting awareness of signs of progression (64% to 76%). Patients' satisfaction with their oncologists' explanations of therapy increased significantly from 83% to 95% (p = .03). CLL knowledge increased after education for lay navigators (36% vs 63%) and nurses/APPs (35% vs 69%), and remained high for physicians (85% vs 87%). Nurses/APPs and physicians perceived at least some patient involvement in decision-making at baseline, whereas 12% of patients and 23% of lay navigators perceived that physicians made decisions independently. This project demonstrated trends toward improvements in patient engagement, prognostic awareness

  6. The image recognition based on neural network and Bayesian decision

    Science.gov (United States)

    Wang, Chugege

    2018-04-01

    Artificial neural networks, an important part of artificial intelligence, date back to the 1940s. At present, they are a hot topic in neuroscience, computer science, brain science, mathematics, and psychology. Thomas Bayes first reported the Bayesian theory in 1763. After development over the twentieth century, it has become widespread in all areas of statistics. In recent years, owing to the solution of the problem of high-dimensional integral calculation, Bayesian statistics has been improved theoretically, solving many problems that classical statistics cannot and finding application in interdisciplinary fields. This paper introduces the related concepts and principles of artificial neural networks, summarizes the basic content and principles of Bayesian statistics, and combines artificial neural network technology with Bayesian decision theory across several aspects of image recognition, such as an enhanced face detection method based on a neural network and Bayesian decision, as well as image classification based on Bayesian decision. It can be seen that the combination of artificial intelligence and statistical algorithms remains a hot research topic.

  7. Recursive SVM feature selection and sample classification for mass-spectrometry and microarray data

    Directory of Open Access Journals (Sweden)

    Harris Lyndsay N

    2006-04-01

    Full Text Available Abstract Background Like microarray-based investigations, high-throughput proteomics techniques require machine learning algorithms to identify biomarkers that are informative for biological classification problems. Feature selection and classification algorithms need to be robust to noise and outliers in the data. Results We developed a recursive support vector machine (R-SVM) algorithm to select important genes/biomarkers for the classification of noisy data. We compared its performance to a similar, state-of-the-art method (SVM recursive feature elimination, or SVM-RFE), paying special attention to the ability to recover the true informative genes/biomarkers and the robustness to outliers in the data. Simulation experiments show that a 5%-20% improvement over SVM-RFE can be achieved with regard to these properties. The SVM-based methods are also compared with a conventional univariate method, and their respective strengths and weaknesses are discussed. R-SVM was applied to two sets of SELDI-TOF-MS proteomics data, one from a human breast cancer study and the other from a study on rat liver cirrhosis. Important biomarkers found by the algorithm were validated by follow-up biological experiments. Conclusion The proposed R-SVM method is suitable for analyzing noisy high-throughput proteomics and microarray data, and it outperforms SVM-RFE in robustness to noise and in the ability to recover informative features. The multivariate SVM-based method outperforms the univariate method in classification performance, but univariate methods can reveal more of the differentially expressed features, especially when there are correlations between the features.
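The recursive elimination loop that both R-SVM and SVM-RFE share can be sketched with scikit-learn's `RFE` wrapper around a linear SVM. The data below are a synthetic stand-in for microarray samples, and R-SVM's particular ranking criterion is not reproduced; this only illustrates the shared recursive-selection idea.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
# Synthetic stand-in for microarray data: 40 samples x 200 genes,
# with the first 5 genes shifted in class 1 (i.e. truly informative).
X = rng.normal(size=(40, 200))
y = np.repeat([0, 1], 20)
X[y == 1, :5] += 1.5

# Recursive elimination with a linear SVM, dropping 10% of features per step.
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=10, step=0.1)
selector.fit(X, y)
selected = np.flatnonzero(selector.support_)
print(selected)
```

With a strong simulated signal, the surviving feature set typically includes most of the five truly informative genes.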

  8. A microarray-based analysis of gametogenesis in two Portuguese populations of the European clam Ruditapes decussatus.

    Directory of Open Access Journals (Sweden)

    Joana Teixeira de Sousa

    Full Text Available The European clam Ruditapes decussatus is a species of high commercial importance in Portugal and other southern European countries. Its production is almost exclusively based on natural recruitment, which is subject to high annual fluctuations. Increased knowledge of the natural reproductive cycle of R. decussatus and its molecular mechanisms would be particularly important in providing new, highly valuable genomic information for better understanding the regulation of reproduction in this economically important aquaculture species. In this study, the transcriptomic basis of R. decussatus reproduction has been analysed using a custom oligonucleotide microarray representing 51,678 assembled contigs. Microarray analyses were performed on four gonadal maturation stages from two different Portuguese wild populations, characterized by different responses to spawning induction when used as progenitors in the hatchery. A comparison between the two populations elucidated a specific pathway involved in the recognition signals and binding between the oocyte and components of the sperm plasma membrane. We suggest that this pathway can explain part of the differences in spawning induction success between the two populations. In addition, sexes and reproductive stages were compared and a correlation between mRNA levels and gonadal area was investigated. The lists of differentially expressed genes revealed that sex explains most of the variance in gonadal gene expression. Additionally, genes such as Foxl2, vitellogenin, condensin 2, mitotic apparatus protein p62, Cep57, sperm-associated antigens 6, 16 and 17, motile sperm domain-containing protein 2, sperm surface protein Sp17, sperm flagellar proteins 1 and 2 and dpy-30 were identified as being correlated with the gonad area and therefore presumably with the number and/or the size of the gametes produced.

  9. Analyzing Multiple-Probe Microarray: Estimation and Application of Gene Expression Indexes

    KAUST Repository

    Maadooliat, Mehdi

    2012-07-26

    Gene expression index estimation is an essential step in analyzing multiple-probe microarray data. Various modeling methods have been proposed in this area. Among them, a popular method proposed in Li and Wong (2001) is based on a multiplicative model, which is similar to the additive model discussed in Irizarry et al. (2003a) on the logarithm scale. Along this line, Hu et al. (2006) proposed data transformation to improve expression index estimation based on an ad hoc entropy criterion and a naive grid search approach. In this work, we re-examine this problem using a new profile likelihood-based transformation estimation approach that is more statistically elegant and computationally efficient. We demonstrate the applicability of the proposed method using a benchmark Affymetrix U95A spiked-in experiment. Moreover, we introduce a new multivariate expression index and use an empirical study to show its promise in terms of improving model fitting and power of detecting differential expression over the commonly used univariate expression index. We also discuss two practical issues generally encountered in the application of gene expression indexes: normalization and the summary statistic used for detecting differential expression. Our empirical study shows somewhat different findings from those of the MAQC project (MAQC, 2006).
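The Li-Wong multiplicative model underlying this line of work expresses the probe-level signal as y_ij = theta_i * phi_j, with theta_i the expression index of array i and phi_j the affinity of probe j. A minimal alternating-least-squares sketch on simulated probe-level data follows; this is not the authors' profile-likelihood method, and the dimensions and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated probe-level data for one probe set: 6 arrays x 8 probes,
# generated from the multiplicative model y_ij = theta_i * phi_j + noise.
theta_true = rng.uniform(1.0, 5.0, size=6)      # expression index per array
phi_true = rng.uniform(0.5, 2.0, size=8)        # probe affinities
Y = np.outer(theta_true, phi_true) + rng.normal(scale=0.05, size=(6, 8))

# Alternating least squares; the scale indeterminacy is fixed by setting
# ||phi||^2 equal to the number of probes.
phi = np.ones(8)
for _ in range(100):
    theta = Y @ phi / (phi @ phi)               # update expression indexes
    phi = theta @ Y / (theta @ theta)           # update probe affinities
    phi *= np.sqrt(len(phi)) / np.linalg.norm(phi)
print(np.round(theta, 2))
```

Because the model is only identifiable up to a scale, the recovered theta agrees with the simulated one up to a constant factor.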

  10. Universal ligation-detection-reaction microarray applied for compost microbes

    Directory of Open Access Journals (Sweden)

    Romantschuk Martin

    2008-12-01

    Full Text Available Abstract Background Composting is one of the methods utilised in recycling organic communal waste. The composting process is dependent on aerobic microbial activity and proceeds through a succession of different phases, each dominated by certain microorganisms. In this study, a ligation-detection-reaction (LDR) based microarray method was adapted for species-level detection of compost microbes characteristic of each stage of the composting process. LDR utilises the specificity of the ligase enzyme to covalently join two adjacently hybridised probes. A zip-oligo is attached to the 3'-end of one probe and a fluorescent label to the 5'-end of the other probe. Upon ligation, the probes are combined in the same molecule and can be detected at a specific location on a universal microarray with complementary zip-oligos, enabling equivalent hybridisation conditions for all probes. The method was applied to samples from Nordic composting facilities after testing and optimisation with fungal pure cultures and environmental clones. Results Probes targeted for fungi were able to detect 0.1 fmol of target ribosomal PCR product in an artificial reaction mixture containing 100 ng of competing fungal ribosomal internal transcribed spacer (ITS) area or herring sperm DNA. The detection level was therefore approximately 0.04% of total DNA. Clone libraries were constructed from eight compost samples. The LDR microarray results were in concordance with the clone library sequencing results. In addition, a control probe was used to monitor the per-spot hybridisation efficiency on the array. Conclusion This study demonstrates that the LDR microarray method is capable of sensitive and accurate species-level detection from a complex microbial community. The method can detect key species from compost samples, making it a basis for a tool for compost process monitoring in industrial facilities.

  11. Microarrays in brain research: the good, the bad and the ugly.

    Science.gov (United States)

    Mirnics, K

    2001-06-01

    Making sense of microarray data is a complex process, in which the interpretation of findings will depend on the overall experimental design and the judgement of the investigator performing the analysis. As a result, differences in tissue harvesting, microarray types, sample labelling and data analysis procedures make post hoc sharing of microarray data a great challenge. To ensure rapid and meaningful data exchange, we need to create some order out of the existing chaos. In these ground-breaking microarray standardization and data sharing efforts, NIH agencies should take a leading role.

  12. Performance improvement of 64-QAM coherent optical communication system by optimizing symbol decision boundary based on support vector machine

    Science.gov (United States)

    Chen, Wei; Zhang, Junfeng; Gao, Mingyi; Shen, Gangxiang

    2018-03-01

    High-order modulation signals are suited for high-capacity communication systems because of their high spectral efficiency, but they are more vulnerable to various impairments. For signals that experience degradation, when symbol points overlap on the constellation diagram, the original linear decision boundary can no longer distinguish the symbol classes. Therefore, it is advantageous to create an optimum symbol decision boundary for the degraded signals. In this work, we experimentally demonstrated a 64-quadrature-amplitude-modulation (64-QAM) coherent optical communication system using a support vector machine (SVM) decision boundary algorithm to create the optimum symbol decision boundary and improve system performance. We investigated the influence of various impairments on 64-QAM coherent optical communication systems, such as those caused by modulator nonlinearity, phase skew between the in-phase (I) and quadrature-phase (Q) arms of the modulator, fiber Kerr nonlinearity and amplified spontaneous emission (ASE) noise. We measured the bit-error-ratio (BER) performance of 75-Gb/s 64-QAM signals in back-to-back and 50-km transmission configurations. By using the SVM to optimize the symbol decision boundary, the impairments caused by I/Q phase skew of the modulator, fiber Kerr nonlinearity and ASE noise are greatly mitigated.
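The core idea, learning a nonlinear symbol decision boundary from received points rather than slicing on the ideal grid, can be illustrated with a toy skewed constellation. The skew and noise parameters below are illustrative, not the paper's 75-Gb/s 64-QAM testbed, and a four-point alphabet keeps the sketch short.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Four-point constellation with an I/Q-skew-like distortion plus noise.
ideal = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
labels = np.tile(np.arange(4), 500)                  # 2000 interleaved symbols
pts = ideal[labels] + rng.normal(scale=0.4, size=(2000, 2))
pts[:, 0] += 0.5 * pts[:, 1]                         # Q leaks into I

# Baseline: nearest ideal constellation point (the fixed "linear" grid decision).
grid_pred = np.argmin(((pts[:, None, :] - ideal) ** 2).sum(-1), axis=1)
grid_acc = (grid_pred[1000:] == labels[1000:]).mean()

# SVM-learned boundary: train on the first half, test on the second.
svm = SVC(kernel="rbf", gamma=1.0).fit(pts[:1000], labels[:1000])
svm_acc = (svm.predict(pts[1000:]) == labels[1000:]).mean()
print(grid_acc, svm_acc)
```

Because the distortion shifts the received clusters away from the ideal grid, the learned boundary recovers accuracy that the fixed grid decision loses.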

  13. Decision Support Systems and the Conflict Model of Decision Making: A Stimulus for New Computer-Assisted Careers Guidance Systems.

    Science.gov (United States)

    Ballantine, R. Malcolm

    Decision Support Systems (DSSs) are computer-based decision aids to use when making decisions which are partially amenable to rational decision-making procedures but contain elements where intuitive judgment is an essential component. In such situations, DSSs are used to improve the quality of decision-making. The DSS approach is based on Simon's…

  14. High-Dimensional Additive Hazards Regression for Oral Squamous Cell Carcinoma Using Microarray Data: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Omid Hamidi

    2014-01-01

    Full Text Available Microarray technology results in high-dimensional and low-sample-size data sets. Therefore, fitting sparse models is essential because only a small number of influential genes can reliably be identified. A number of variable selection approaches have been proposed for high-dimensional time-to-event data based on Cox proportional hazards, where censoring is present. The present study applied three sparse variable selection techniques, the lasso, smoothly clipped absolute deviation, and the smooth integration of counting and absolute deviation, to gene expression survival time data using the additive risk model, which is adopted when the absolute effects of multiple predictors on the hazard function are of interest. The performances of the techniques were evaluated by time-dependent ROC curves and bootstrap .632+ prediction error curves. The genes selected by all methods were highly significant (P < 0.001). The lasso showed the maximum median area under the ROC curve over time (0.95) and smoothly clipped absolute deviation showed the lowest prediction error (0.105). It was observed that the genes selected by all methods improved the prediction of a purely clinical model, indicating the valuable information contained in the microarray features. It was therefore concluded that these approaches can satisfactorily predict survival based on selected gene expression measurements.
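As a generic illustration of sparse selection in the p >> n regime these methods target, a plain lasso fit on synthetic data shows how only a few nonzero coefficients survive the penalty. This is not the additive hazards model itself (which handles censored survival times), and the dimensions, coefficients and penalty below are invented.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
# p >> n, as in microarray studies: 50 samples, 500 genes, 4 carrying signal.
X = rng.normal(size=(50, 500))
beta = np.zeros(500)
beta[:4] = [2.0, -2.0, 1.5, 2.5]
y = X @ beta + rng.normal(scale=0.3, size=50)

# L1 penalty zeroes out most coefficients, leaving a sparse gene set.
lasso = Lasso(alpha=0.2).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print(len(selected))
```

The four simulated signal genes are retained, while the vast majority of the 500 coefficients are shrunk exactly to zero.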

  15. MiMiR: a comprehensive solution for storage, annotation and exchange of microarray data

    Directory of Open Access Journals (Sweden)

    Rahman Fatimah

    2005-11-01

    Full Text Available Abstract Background The generation of large amounts of microarray data presents challenges for data collection, annotation, exchange and analysis. Although there are now widely accepted formats, minimum standards for data content and ontologies for microarray data, only a few groups are using them together to build and populate large-scale databases. Structured environments for data management are crucial for making full use of these data. Description The MiMiR database provides a comprehensive infrastructure for microarray data annotation, storage and exchange and is based on the MAGE format. MiMiR is MIAME-supportive, customised for use with data generated on the Affymetrix platform and includes a tool for data annotation using ontologies. Detailed information on the experiment, methods, reagents and signal intensity data can be captured in a systematic format. Report screens permit the user to query the database, to view annotation on individual experiments and to obtain summary statistics. MiMiR has tools for automatic upload of the data from the microarray scanner and export to databases using MAGE-ML. Conclusion MiMiR facilitates microarray data management, annotation and exchange, in line with international guidelines. The database is valuable for underpinning research activities and promotes a systematic approach to data handling. Copies of MiMiR are freely available to academic groups under licence.

  16. Risk-Based Decision Making for Deterioration Processes Using POMDP

    DEFF Research Database (Denmark)

    Nielsen, Jannie Sønderkær; Sørensen, John Dalsgaard

    2015-01-01

    This paper proposes a method for risk-based decision making for maintenance of deteriorating components, based on the partially observable Markov decision process (POMDP). Unlike most methods, the decision polices do not need to be stationary and can vary according to seasons and near the end...
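The POMDP ingredient that distinguishes this setting from a plain Markov decision process is the belief update after an imperfect inspection: the decision maker never observes the deterioration state directly, only noisy inspection outcomes. A minimal sketch with invented transition and observation matrices for a three-state deterioration model:

```python
import numpy as np

# Hidden deterioration states and illustrative (invented) annual transitions.
states = ["intact", "damaged", "failed"]
T = np.array([[0.90, 0.09, 0.01],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
# P(observation | state) for an imperfect inspection: columns = no alarm, alarm.
O = np.array([[0.95, 0.05],
              [0.30, 0.70],
              [0.05, 0.95]])

def belief_update(b, obs):
    """Predict one step with T, then condition on the inspection outcome."""
    b_pred = b @ T                  # prior after one year of deterioration
    b_post = b_pred * O[:, obs]     # Bayes: weight by observation likelihood
    return b_post / b_post.sum()

b = np.array([1.0, 0.0, 0.0])       # new component: known to be intact
for year in range(5):
    b = belief_update(b, obs=0)     # five years of "no alarm" inspections
print(np.round(b, 3))
```

A POMDP maintenance policy then maps this belief vector (rather than an observed state) to actions such as do-nothing, repair or replace, which is what allows the policy to vary with the inspection history.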

  17. Immunohistochemistry - Microarray Analysis of Patients with Peritoneal Metastases of Appendiceal or Colorectal Origin

    Directory of Open Access Journals (Sweden)

    Danielle E Green

    2015-01-01

    Full Text Available Background The value of immunohistochemistry (IHC)-microarray analysis of pathological specimens in the management of patients is controversial, although preliminary data suggest potential benefit. We describe the characteristics of patients with peritoneal metastases (PM) undergoing a commercially available IHC-microarray method and the feasibility of this technique in this population. Methods We retrospectively analyzed consecutive patients with pathologically confirmed PM from an appendiceal or colorectal primary who underwent Caris Molecular Intelligence™ testing. IHC, microarray, FISH and mutational analysis were included and stratified by PCI score, histology and treatment characteristics. Statistical analysis was performed using non-parametric tests. Results Our study included 5 patients with appendiceal and 11 with colorectal PM. The median age of the patients was 51 (IQR 39-65) years, with 11 (68%) female. The median PCI score of the patients was 17 (IQR 10-25). Hyperthermic intraperitoneal chemoperfusion (HIPEC) was performed in 4 (80%) patients with an appendiceal primary tumor and 4 (36%) with a colorectal primary. KRAS mutations were encountered in 40% of appendiceal vs. 30% of colorectal tumors, while BRAF mutations were seen in 40% of colorectal PM and none of the patients with appendiceal PM (p = 0.06). IHC biomarker expression was not significantly different between the two primaries. Sufficient tumor for microarray analysis was found in 44% (n = 7) of patients, which was not associated with previous use of chemotherapy (p > 0.20 for 5-FU/LV, irinotecan and oxaliplatin). Conclusions In a small sample of patients with peritoneal metastases, the feasibility and results of IHC-microarray staining based on a commercially available test are reported. The apparent high incidence of the BRAF mutation in patients with PM may offer opportunities for novel therapeutics and suggests that IHC-microarray is a method that can be used in this population.

  18. Evaluation of artificial time series microarray data for dynamic gene regulatory network inference.

    Science.gov (United States)

    Xenitidis, P; Seimenis, I; Kakolyris, S; Adamopoulos, A

    2017-08-07

    High-throughput technology like microarrays is widely used in the inference of gene regulatory networks (GRNs). We focused on time series data since we are interested in the dynamics of GRNs and the identification of dynamic networks. We evaluated the amount of information that exists in artificial time series microarray data and the ability of an inference process to produce accurate models based on them. We used dynamic artificial gene regulatory networks in order to create artificial microarray data. Key features that characterize microarray data such as the time separation of directly triggered genes, the percentage of directly triggered genes and the triggering function type were altered in order to reveal the limits that are imposed by the nature of microarray data on the inference process. We examined the effect of various factors on the inference performance such as the network size, the presence of noise in microarray data, and the network sparseness. We used a system theory approach and examined the relationship between the pole placement of the inferred system and the inference performance. We examined the relationship between the inference performance in the time domain and the true system parameter identification. Simulation results indicated that time separation and the percentage of directly triggered genes are crucial factors. Also, network sparseness, the triggering function type and noise in input data affect the inference performance. When two factors were simultaneously varied, it was found that variation of one parameter significantly affects the dynamic response of the other. Crucial factors were also examined using a real GRN and acquired results confirmed simulation findings with artificial data. Different initial conditions were also used as an alternative triggering approach. Relevant results confirmed that the number of datasets constitutes the most significant parameter with regard to the inference performance. 
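The system-theory view of inference can be illustrated in miniature: simulate a linear dynamic network x_{t+1} = A x_t + noise, then recover the connectivity matrix A by least squares from the time series. The network size, link weights and noise level below are arbitrary choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5                                         # genes
A = np.diag([0.8] * n)                        # self-decay on the diagonal
A[0, 1], A[1, 2], A[3, 0] = 0.5, -0.4, 0.3   # a few regulatory links
X = np.zeros((500, n))                        # artificial time series
X[0] = rng.normal(size=n)
for t in range(499):                          # x_{t+1} = A x_t + process noise
    X[t + 1] = A @ X[t] + rng.normal(scale=0.1, size=n)

# Least squares: solve X[1:] ≈ X[:-1] @ A_hat.T for the connectivity matrix.
M, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = M.T
print(np.abs(A - A_hat).max())
```

With enough excitation (here, process noise driving every gene), the least-squares estimate recovers both the self-decay terms and the off-diagonal regulatory links; shortening the series or removing the noise degrades identifiability, mirroring the factors studied in the abstract.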

  19. Identification of potential biomarkers from microarray experiments using multiple criteria optimization

    International Nuclear Information System (INIS)

    Sánchez-Peña, Matilde L; Isaza, Clara E; Pérez-Morales, Jaileene; Rodríguez-Padilla, Cristina; Castro, José M; Cabrera-Ríos, Mauricio

    2013-01-01

    Microarray experiments are capable of determining the relative expression of tens of thousands of genes simultaneously, thus resulting in very large databases. The analysis of these databases and the extraction of biologically relevant knowledge from them are challenging tasks. The identification of potential cancer biomarker genes is one of the most important aims of microarray analysis and, as such, has been widely targeted in the literature. However, identifying a set of these genes consistently across different experiments, research groups, microarray platforms, or cancer types is still an elusive endeavor. Besides the inherent difficulty of the large and nonconstant variability in these experiments and the incommensurability between different microarray technologies, there is the issue of the users having to adjust a series of parameters that significantly affect the outcome of the analyses and that do not have a biological or medical meaning. In this study, the identification of potential cancer biomarkers from microarray data is cast as a multiple criteria optimization (MCO) problem. The efficient solutions to this problem, found here through data envelopment analysis (DEA), are associated with genes that are proposed as potential cancer biomarkers. The method does not require any parameter adjustment by the user, and thus fosters repeatability. The approach also allows the analysis of different microarray experiments, microarray platforms, and cancer types simultaneously. The results include the analysis of three publicly available microarray databases related to cervix cancer. This study points to the feasibility of modeling the selection of potential cancer biomarkers from microarray data as an MCO problem and solving it using DEA. Using MCO offers a new perspective on the identification of potential cancer biomarkers, as it does not require the definition of a threshold value to establish significance for a particular gene and the selection of a normalization
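The efficient solutions of a two-criterion MCO problem are the Pareto-nondominated genes: no other gene is at least as good on both criteria and strictly better on one. The paper finds them via DEA; the sketch below computes the same frontier by direct dominance checks on synthetic scores (both criteria oriented so that larger is better, e.g. |log fold change| and -log10 p-value).

```python
import numpy as np

rng = np.random.default_rng(5)
# Each "gene" scored on two criteria to be maximized; values are synthetic.
scores = rng.uniform(size=(200, 2))

def pareto_efficient(S):
    """Indices of points not dominated by any other point (maximization)."""
    eff = np.ones(len(S), bool)
    for i, s in enumerate(S):
        if eff[i]:
            # A point is dominated if <= s on all criteria and < s on one.
            dominated = np.all(S <= s, axis=1) & np.any(S < s, axis=1)
            eff[dominated] = False
    return np.flatnonzero(eff)

frontier = pareto_efficient(scores)
print(len(frontier))
```

No threshold parameter appears anywhere: membership in the frontier is determined purely by dominance, which is the property the abstract credits for the method's repeatability.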

  20. Microarray-based whole-genome hybridization as a tool for determining procaryotic species relatedness

    Energy Technology Data Exchange (ETDEWEB)

    Wu, L.; Liu, X.; Fields, M.W.; Thompson, D.K.; Bagwell, C.E.; Tiedje, J. M.; Hazen, T.C.; Zhou, J.

    2008-01-15

    The definition and delineation of microbial species are of great importance and challenge due to the extent of evolution and diversity. Whole-genome DNA-DNA hybridization is the cornerstone for defining procaryotic species relatedness, but obtaining pairwise DNA-DNA reassociation values for a comprehensive phylogenetic analysis of procaryotes is tedious and time consuming. A previously described microarray format containing whole-genomic DNA (the community genome array or CGA) was rigorously evaluated as a high-throughput alternative to the traditional DNA-DNA reassociation approach for delineating procaryotic species relationships. DNA similarities for multiple bacterial strains obtained with the CGA-based hybridization were comparable to those obtained with various traditional whole-genome hybridization methods (r=0.87, P<0.01). Significant linear relationships were also observed between the CGA-based genome similarities and those derived from small subunit (SSU) rRNA gene sequences (r=0.79, P<0.0001), gyrB sequences (r=0.95, P<0.0001) or REP- and BOX-PCR fingerprinting profiles (r=0.82, P<0.0001). The CGA hybridization-revealed species relationships in several representative genera, including Pseudomonas, Azoarcus and Shewanella, were largely congruent with previous classifications based on various conventional whole-genome DNA-DNA reassociation, SSU rRNA and/or gyrB analyses. These results suggest that CGA-based DNA-DNA hybridization could serve as a powerful, high-throughput format for determining species relatedness among microorganisms.

  1. Improving "At-Action" Decision-Making in Team Sports through a Holistic Coaching Approach

    Science.gov (United States)

    Light, Richard L.; Harvey, Stephen; Mouchet, Alain

    2014-01-01

    This article draws on Game Sense pedagogy and complex learning theory (CLT) to make suggestions for improving decision-making ability in team sports by adopting a holistic approach to coaching with a focus on decision-making "at-action". It emphasizes the complexity of decision-making and the need to focus on the game as a whole entity,…

  2. Reproducibility of gene expression across generations of Affymetrix microarrays

    Directory of Open Access Journals (Sweden)

    Haslett Judith N

    2003-06-01

    Full Text Available Abstract Background The development of large-scale gene expression profiling technologies is rapidly changing the norms of biological investigation. But the rapid pace of change itself presents challenges. Commercial microarrays are regularly modified to incorporate new genes and improved target sequences. Although the ability to compare datasets across generations is crucial for any long-term research project, to date no means to allow such comparisons have been developed. In this study the reproducibility of gene expression levels across two generations of Affymetrix GeneChips® (HuGeneFL and HG-U95A) was measured. Results Correlation coefficients were computed for gene expression values across chip generations based on different measures of similarity. Comparing the absolute calls assigned to the individual probe sets across the generations found them to be largely unchanged. Conclusion We show that experimental replicates are highly reproducible, but that reproducibility across generations depends on the degree of similarity of the probe sets and the expression level of the corresponding transcript.

  3. Information integration in perceptual and value-based decisions

    OpenAIRE

    Tsetsos, K.

    2012-01-01

    Research on the psychology and neuroscience of simple, evidence-based choices has led to an impressive progress in capturing the underlying mental processes as optimal mechanisms that make the fastest decision for a specified accuracy. The idea that decision-making is an optimal process stands in contrast with findings in more complex, motivation-based decisions, focussed on multiple goals with trade-offs. Here, a number of paradoxical and puzzling choice behaviours have been r...

  4. The Use of Atomic Force Microscopy for 3D Analysis of Nucleic Acid Hybridization on Microarrays.

    Science.gov (United States)

    Dubrovin, E V; Presnova, G V; Rubtsova, M Yu; Egorov, A M; Grigorenko, V G; Yaminsky, I V

    2015-01-01

    Oligonucleotide microarrays are considered today to be one of the most efficient methods of gene diagnostics. The capability of atomic force microscopy (AFM) to characterize the three-dimensional morphology of single molecules on a surface allows one to use it as an effective tool for the 3D analysis of a microarray for the detection of nucleic acids. The high resolution of AFM offers ways to decrease the detection threshold of target DNA and increase the signal-to-noise ratio. In this work, we suggest an approach to the evaluation of the results of hybridization of gold nanoparticle-labeled nucleic acids on silicon microarrays based on an AFM analysis of the surface, both in air and in liquid, which takes into account their three-dimensional structure. We suggest a quantitative measure of the hybridization results based on the fraction of the surface area occupied by the nanoparticles.
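The proposed quantitative measure, the fraction of the surface area occupied by nanoparticles, reduces computationally to thresholding an AFM height map. A toy version on a simulated image; the particle size, count and height cutoff are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
# Simulated AFM height map (nm): rough flat background plus gold beads.
img = rng.normal(0.3, 0.1, size=(256, 256))
yy, xx = np.mgrid[:256, :256]
for cy, cx in rng.integers(20, 236, size=(30, 2)):
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 25] += 15.0   # ~15 nm particles

cutoff = 5.0        # nm; anything this tall must be a particle, not background
area_fraction = (img > cutoff).mean()
print(round(area_fraction, 4))
```

Because the label height (~15 nm here) is far above the background roughness, the threshold cleanly separates the two populations, and the area fraction scales with the number of hybridized, nanoparticle-labeled targets.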

  5. Frame-based safety analysis approach for decision-based errors

    International Nuclear Information System (INIS)

    Fan, Chin-Feng; Yihb, Swu

    1997-01-01

    A frame-based approach is proposed to analyze decision-based errors made by automatic controllers or human operators due to erroneous reference frames. An integrated framework, the Two Frame Model (TFM), is first proposed to model the dynamic interaction between the physical process and the decision-making process. Two important issues, consistency and competing processes, are raised. Consistency between the physical and logic frames makes a TFM-based system work properly. Loss of consistency refers to the failure mode in which the logic frame does not accurately reflect the state of the controlled processes. Once such a failure occurs, hazards may arise. Among potential hazards, the competing effect between the controller and the controlled process is the most severe one, which may jeopardize a defense-in-depth design. When the logic and physical frames are inconsistent, conventional safety analysis techniques are inadequate. We propose Frame-based Fault Tree Analysis (FFTA) and Frame-based Event Tree Analysis (FETA) under TFM to deduce the context for decision errors and to separately generate the evolution of the logical frame as opposed to that of the physical frame. This multi-dimensional analysis approach, different from the conventional correctness-centred approach, provides a panoramic view in scenario generation. Case studies using the proposed techniques are also given to demonstrate their usage and feasibility.

  6. Versatile High Resolution Oligosaccharide Microarrays for Plant Glycobiology and Cell Wall Research

    DEFF Research Database (Denmark)

    Pedersen, Henriette Lodberg; Fangel, Jonatan Ulrik; McCleary, Barry

    2012-01-01

    Microarrays are powerful tools for high throughput analysis, and hundreds or thousands of molecular interactions can be assessed simultaneously using very small amounts of analytes. Nucleotide microarrays are well established in plant research, but carbohydrate microarrays are much less established...

  7. MULTI-K: accurate classification of microarray subtypes using ensemble k-means clustering

    Directory of Open Access Journals (Sweden)

    Ashlock Daniel

    2009-08-01

    Full Text Available Abstract Background Uncovering subtypes of disease from microarray samples has important clinical implications such as survival time and sensitivity of individual patients to specific therapies. Unsupervised clustering methods have been used to classify this type of data. However, most existing methods focus on clusters with compact shapes and do not reflect the geometric complexity of the high dimensional microarray clusters, which limits their performance. Results We present a cluster-number-based ensemble clustering algorithm, called MULTI-K, for microarray sample classification, which demonstrates remarkable accuracy. The method amalgamates multiple k-means runs by varying the number of clusters and identifies clusters that manifest the most robust co-memberships of elements. In addition to the original algorithm, we newly devised the entropy-plot to control the separation of singletons or small clusters. MULTI-K, unlike the simple k-means or other widely used methods, was able to capture clusters with complex and high-dimensional structures accurately. MULTI-K outperformed other methods including a recently developed ensemble clustering algorithm in tests with five simulated and eight real gene-expression data sets. Conclusion The geometric complexity of clusters should be taken into account for accurate classification of microarray data, and ensemble clustering applied to the number of clusters tackles the problem very well. The C++ code and the data sets tested are available from the authors.
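The MULTI-K recipe, accumulating co-memberships over k-means runs with varying k and then cutting the consensus matrix, can be sketched as follows. The entropy-plot step is omitted, the data are two well-separated synthetic groups rather than real expression profiles, and the ranges of k and random restarts are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(7)
# Two well-separated synthetic "sample" groups (e.g. tumour subtypes).
X = np.vstack([rng.normal(0, 1, (20, 30)), rng.normal(4, 1, (20, 30))])

# Ensemble: run k-means for several k, accumulate a co-membership matrix.
n = len(X)
co = np.zeros((n, n))
runs = 0
for k in range(2, 6):
    for seed in range(10):
        lab = KMeans(n_clusters=k, n_init=5, random_state=seed).fit_predict(X)
        co += (lab[:, None] == lab[None, :])   # 1 when samples co-cluster
        runs += 1
co /= runs

# Cut the consensus (1 - co-membership as a distance) into two final groups.
dist = squareform(1.0 - co, checks=False)
final = fcluster(linkage(dist, "average"), t=2, criterion="maxclust")
print(final)
```

Pairs of samples that co-cluster robustly across many k end up with high consensus values, so the final cut recovers the two planted groups regardless of how any single k-means run fragments them.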

  8. MULTI-K: accurate classification of microarray subtypes using ensemble k-means clustering.

    Science.gov (United States)

    Kim, Eun-Youn; Kim, Seon-Young; Ashlock, Daniel; Nam, Dougu

    2009-08-22

    Uncovering subtypes of disease from microarray samples has important clinical implications such as survival time and sensitivity of individual patients to specific therapies. Unsupervised clustering methods have been used to classify this type of data. However, most existing methods focus on clusters with compact shapes and do not reflect the geometric complexity of the high dimensional microarray clusters, which limits their performance. We present a cluster-number-based ensemble clustering algorithm, called MULTI-K, for microarray sample classification, which demonstrates remarkable accuracy. The method amalgamates multiple k-means runs by varying the number of clusters and identifies clusters that manifest the most robust co-memberships of elements. In addition to the original algorithm, we newly devised the entropy-plot to control the separation of singletons or small clusters. MULTI-K, unlike the simple k-means or other widely used methods, was able to capture clusters with complex and high-dimensional structures accurately. MULTI-K outperformed other methods including a recently developed ensemble clustering algorithm in tests with five simulated and eight real gene-expression data sets. The geometric complexity of clusters should be taken into account for accurate classification of microarray data, and ensemble clustering applied to the number of clusters tackles the problem very well. The C++ code and the data sets tested are available from the authors.

  9. Tissue microarrays for testing basal biomarkers in familial breast cancer cases

    Directory of Open Access Journals (Sweden)

    Rozany Mucha Dufloth

Full Text Available CONTEXT AND OBJECTIVE: The proteins p63, p-cadherin and CK5 are consistently expressed by the basal and myoepithelial cells of the breast, although their expression in sporadic and familial breast cancer cases has yet to be fully defined. The aim here was to study the basal immunoprofile of a breast cancer case series using tissue microarray technology. DESIGN AND SETTING: This was a cross-sectional study at Universidade Estadual de Campinas, Brazil, and the Institute of Pathology and Molecular Immunology, Porto, Portugal. METHODS: Immunohistochemistry using the antibodies p63, CK5 and p-cadherin, and also estrogen receptor (ER) and human epidermal growth factor receptor 2 (HER2), was performed on 168 samples from a breast cancer case series. The criteria for identifying women at high risk were based on those of the Breast Cancer Linkage Consortium. RESULTS: Familial tumors were more frequently positive for p-cadherin (p = 0.0004), p63 (p < 0.0001) and CK5 (p < 0.0001) than sporadic cancers. Moreover, familial tumors showed coexpression of the basal biomarkers CK5+/p63+, grouped two by two (OR = 34.34), while absence of coexpression (OR = 0.13) was associated with the sporadic cancer phenotype. CONCLUSION: Familial breast cancer was found to be associated with basal biomarkers, using tissue microarray technology. Therefore, characterization of the familial breast cancer phenotype will improve the understanding of breast carcinogenesis.

  10. Focused Screening of ECM-Selective Adhesion Peptides on Cellulose-Bound Peptide Microarrays.

    Science.gov (United States)

    Kanie, Kei; Kondo, Yuto; Owaki, Junki; Ikeda, Yurika; Narita, Yuji; Kato, Ryuji; Honda, Hiroyuki

    2016-11-19

    The coating of surfaces with bio-functional proteins is a promising strategy for the creation of highly biocompatible medical implants. Bio-functional proteins from the extracellular matrix (ECM) provide effective surface functions for controlling cellular behavior. We have previously screened bio-functional tripeptides for feasibility of mass production with the aim of identifying those that are medically useful, such as cell-selective peptides. In this work, we focused on the screening of tripeptides that selectively accumulate collagen type IV (Col IV), an ECM protein that accelerates the re-endothelialization of medical implants. A SPOT peptide microarray was selected for screening owing to its unique cellulose membrane platform, which can mimic fibrous scaffolds used in regenerative medicine. However, since the library size on the SPOT microarray was limited, physicochemical clustering was used to provide broader variation than that of random peptide selection. Using the custom focused microarray of 500 selected peptides, we assayed the relative binding rates of tripeptides to Col IV, collagen type I (Col I), and albumin. We discovered a cluster of Col IV-selective adhesion peptides that exhibit bio-safety with endothelial cells. The results from this study can be used to improve the screening of regeneration-enhancing peptides.

  11. Focused Screening of ECM-Selective Adhesion Peptides on Cellulose-Bound Peptide Microarrays

    Directory of Open Access Journals (Sweden)

    Kei Kanie

    2016-11-01

Full Text Available The coating of surfaces with bio-functional proteins is a promising strategy for the creation of highly biocompatible medical implants. Bio-functional proteins from the extracellular matrix (ECM) provide effective surface functions for controlling cellular behavior. We have previously screened bio-functional tripeptides for feasibility of mass production with the aim of identifying those that are medically useful, such as cell-selective peptides. In this work, we focused on the screening of tripeptides that selectively accumulate collagen type IV (Col IV), an ECM protein that accelerates the re-endothelialization of medical implants. A SPOT peptide microarray was selected for screening owing to its unique cellulose membrane platform, which can mimic fibrous scaffolds used in regenerative medicine. However, since the library size on the SPOT microarray was limited, physicochemical clustering was used to provide broader variation than that of random peptide selection. Using the custom focused microarray of 500 selected peptides, we assayed the relative binding rates of tripeptides to Col IV, collagen type I (Col I), and albumin. We discovered a cluster of Col IV-selective adhesion peptides that exhibit bio-safety with endothelial cells. The results from this study can be used to improve the screening of regeneration-enhancing peptides.
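The physicochemical clustering step described above, selecting a diverse subset of tripeptides rather than a random one, can be sketched as follows; the hydropathy descriptor, residue subset and cluster count are illustrative stand-ins, not the authors' actual descriptor set:

```python
import itertools
import numpy as np
from sklearn.cluster import KMeans

# Kyte-Doolittle hydropathy for a small residue subset (one illustrative descriptor)
HYDRO = {"A": 1.8, "G": -0.4, "I": 4.5, "K": -3.9, "S": -0.8, "E": -3.5}

peptides = ["".join(p) for p in itertools.product(HYDRO, repeat=3)]  # 6^3 tripeptides
X = np.array([[HYDRO[a] for a in pep] for pep in peptides])

# cluster the descriptor space and keep one representative peptide per cluster,
# giving broader coverage than picking the same number of peptides at random
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X)
reps = [peptides[int(np.argmin(np.linalg.norm(X - c, axis=1)))]
        for c in km.cluster_centers_]
```

A real screen would use several physicochemical descriptors per residue, but the principle is the same: representatives of descriptor-space clusters span more of the peptide space than a random draw of the same size.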

  12. SOCOM Training and Rehearsal System (STRS) Process Improvement and Decision Support System (DSS) Development

    National Research Council Canada - National Science Library

    Crossland, Neal; Broussard, Steve

    2005-01-01

    ...) Process Improvement and Decision Support System (DSS) Development. Discussion sequence is: Why the study? Objectives; Areas of inquiry; Study products; Observations; Recommendations; Decision Support System.

  13. ArrayPitope: Automated Analysis of Amino Acid Substitutions for Peptide Microarray-Based Antibody Epitope Mapping

    DEFF Research Database (Denmark)

    Hansen, Christian Skjødt; Østerbye, Thomas; Marcatili, Paolo

    2017-01-01

    -reactivity. B cell epitopes are typically classified as either linear epitopes, i.e. short consecutive segments from the protein sequence or conformational epitopes adapted through native protein folding. Recent advances in high-density peptide microarrays enable high-throughput, high-resolution identification...

  14. Presenting quantitative information about decision outcomes: a risk communication primer for patient decision aid developers

    NARCIS (Netherlands)

    Trevena, L.J.; Zikmund-Fisher, B.J.; Edwards, A.; Gaissmaier, W.; Galesic, M.; Han, P.K.J.; King, J.; Lawson, M.L.; Linder, S.K.; Lipkus, I.; Ozanne, E.; Peters, E.; Timmermans, D.R.M.; Woloshin, S.

    2013-01-01

    Background: Making evidence-based decisions often requires comparison of two or more options. Research-based evidence may exist which quantifies how likely the outcomes are for each option. Understanding these numeric estimates improves patients' risk perception and leads to better informed decision

  15. Design issues in toxicogenomics using DNA microarray experiment

    International Nuclear Information System (INIS)

    Lee, Kyoung-Mu; Kim, Ju-Han; Kang, Daehee

    2005-01-01

The methods of toxicogenomics might be classified into omics studies (e.g., genomics, proteomics, and metabolomics) and population studies focusing on risk assessment and gene-environment interaction. In omics studies, the microarray is the most popular approach. Up to 20,000 genes falling into several categories (e.g., xenobiotic metabolism, cell cycle control, DNA repair, etc.) can be selected according to an a priori hypothesis. The appropriate type of samples and species should be selected in advance. Multiple doses and varied exposure durations are suggested to identify those genes clearly linked to toxic response. Microarray experiments can be affected by numerous nuisance variables including experimental design, sample extraction, type of scanner, etc. The number of slides might be determined from the magnitude and variance of expression change, the false-positive rate, and the desired power. Alternatively, samples may be pooled. Online databases on chemicals with known exposure-disease outcomes and genetic information can aid the interpretation of the normalized results. Gene function can be inferred from microarray data analyzed by bioinformatics methods such as cluster analysis. The population study often adopts a hospital-based or nested case-control design. Biases in subject selection and exposure assessment should be minimized, and confounding should be controlled for in stratified or multiple regression analysis. Optimal sample sizes depend on the statistical test for gene-environment or gene-gene interaction. The design issues addressed in this mini-review are crucial in conducting toxicogenomics studies. In addition, an integrative approach combining exposure assessment, epidemiology, and clinical trials is required.

  16. Xylella fastidiosa gene expression analysis by DNA microarrays

    OpenAIRE

    Travensolo,Regiane F.; Carareto-Alves,Lucia M.; Costa,Maria V.C.G.; Lopes,Tiago J.S.; Carrilho,Emanuel; Lemos,Eliana G.M.

    2009-01-01

    Xylella fastidiosa genome sequencing has generated valuable data by identifying genes acting either on metabolic pathways or in associated pathogenicity and virulence. Based on available information on these genes, new strategies for studying their expression patterns, such as microarray technology, were employed. A total of 2,600 primer pairs were synthesized and then used to generate fragments using the PCR technique. The arrays were hybridized against cDNAs labeled during reverse transcrip...

  17. Decision algorithms in fire detection systems

    Directory of Open Access Journals (Sweden)

    Ristić Jovan D.

    2011-01-01

Full Text Available Analogue (and addressable) fire detection systems enable a new quality in improving sensitivity to real fires and reducing susceptibility to nuisance alarm sources. Different types of decision algorithms were developed with the intention of improving sensitivity and reducing false alarm occurrence. At the beginning, this was free alarm level adjustment based on a preset level. Most multi-criteria decision work has been based on multi-sensor (multi-signature) decision algorithms - using different types of sensors in the same location or, rather, using different aspects (level and rise) of one sensor's measured value. Our idea is to improve sensitivity and reduce false alarm occurrence by forming groups of sensors that work in similar conditions (same side of the building, same or similar technology, or working time). Original multi-criteria decision algorithms based on level, rise, and the difference of level and rise from the group average are discussed in this paper.
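The level/rise/group-average criteria described above can be sketched as a simple voting rule; the threshold values and vote count here are arbitrary placeholders, not the paper's tuned parameters:

```python
def alarm_decision(level, prev_level, group_levels,
                   level_thr=50.0, rise_thr=5.0, dev_thr=15.0, votes_needed=2):
    """Vote over three criteria: absolute level, rise since the last sample,
    and deviation from the average of sensors working in similar conditions."""
    rise = level - prev_level
    deviation = level - sum(group_levels) / len(group_levels)
    votes = (level > level_thr) + (rise > rise_thr) + (deviation > dev_thr)
    return votes >= votes_needed

# a sensor well above its group average and rising quickly trips the alarm
print(alarm_decision(60.0, 40.0, [20.0, 22.0, 21.0]))   # True
```

Requiring agreement between criteria is what suppresses nuisance alarms: a single noisy level spike without a sustained rise, or a rise shared by the whole group (e.g. sun on one side of the building), does not reach the vote threshold.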

  18. A novel computer based expert decision making model for prostate cancer disease management.

    Science.gov (United States)

    Richman, Martin B; Forman, Ernest H; Bayazit, Yildirim; Einstein, Douglas B; Resnick, Martin I; Stovsky, Mark D

    2005-12-01

    We propose a strategic, computer based, prostate cancer decision making model based on the analytic hierarchy process. We developed a model that improves physician-patient joint decision making and enhances the treatment selection process by making this critical decision rational and evidence based. Two groups (patient and physician-expert) completed a clinical study comparing an initial disease management choice with the highest ranked option generated by the computer model. Participants made pairwise comparisons to derive priorities for the objectives and subobjectives related to the disease management decision. The weighted comparisons were then applied to treatment options to yield prioritized rank lists that reflect the likelihood that a given alternative will achieve the participant treatment goal. Aggregate data were evaluated by inconsistency ratio analysis and sensitivity analysis, which assessed the influence of individual objectives and subobjectives on the final rank list of treatment options. Inconsistency ratios less than 0.05 were reliably generated, indicating that judgments made within the model were mathematically rational. The aggregate prioritized list of treatment options was tabulated for the patient and physician groups with similar outcomes for the 2 groups. Analysis of the major defining objectives in the treatment selection decision demonstrated the same rank order for the patient and physician groups with cure, survival and quality of life being more important than controlling cancer, preventing major complications of treatment, preventing blood transfusion complications and limiting treatment cost. Analysis of subobjectives, including quality of life and sexual dysfunction, produced similar priority rankings for the patient and physician groups. 
Concordance between initial treatment choice and the highest weighted model option differed between the groups with the patient group having 59% concordance and the physician group having only 42
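The analytic hierarchy process steps in the abstract (pairwise comparisons, a derived priority vector, and an inconsistency check) can be sketched as follows; the example comparison matrix is made up, while the random-index table follows Saaty's standard values:

```python
import numpy as np

def ahp_priorities(M):
    """Priorities (principal eigenvector) and consistency ratio for a
    pairwise comparison matrix, as in the analytic hierarchy process."""
    vals, vecs = np.linalg.eig(M)
    i = int(np.argmax(vals.real))
    w = np.abs(vecs[:, i].real)
    w /= w.sum()                          # normalized priority vector
    n = M.shape[0]
    ci = (vals[i].real - n) / (n - 1)     # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
    return w, ci / ri                     # priorities, consistency ratio

# made-up example: objective 1 (e.g. cure) judged 3x objective 2, 5x objective 3
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_priorities(M)
```

A consistency ratio below roughly 0.05-0.1 indicates the pairwise judgments are mutually rational, which is the check the study applied to participants' responses.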

  19. Microarray of DNA probes on carboxylate functional beads surface

    Institute of Scientific and Technical Information of China (English)

    黄承志; 李原芳; 黄新华; 范美坤

    2000-01-01

The microarray of DNA probes with 5′-NH2 and 5′-Tex/3′-NH2 modified terminus on 10 μm carboxylate functional beads surface in the presence of 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide (EDC) is characterized in the present paper. It was found that the microarray capacity of DNA probes on the beads surface depends on the pH of the aqueous solution, the concentration of DNA probe and the total surface area of the beads. Under optimal conditions, the minimum distance of a 20-mer single-stranded DNA probe microarrayed on the beads surface is about 14 nm, while that of 20-mer double-stranded DNA probes is about 27 nm. If the probe length increases from 20-mer to 35-mer, its microarray density decreases correspondingly. Mechanism study shows that the binding mode of DNA probes on the beads surface is nearly parallel to the beads surface.

  20. Microarray of DNA probes on carboxylate functional beads surface

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

The microarray of DNA probes with 5′-NH2 and 5′-Tex/3′-NH2 modified terminus on 10 μm carboxylate functional beads surface in the presence of 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide (EDC) is characterized in the present paper. It was found that the microarray capacity of DNA probes on the beads surface depends on the pH of the aqueous solution, the concentration of DNA probe and the total surface area of the beads. Under optimal conditions, the minimum distance of a 20-mer single-stranded DNA probe microarrayed on the beads surface is about 14 nm, while that of 20-mer double-stranded DNA probes is about 27 nm. If the probe length increases from 20-mer to 35-mer, its microarray density decreases correspondingly. Mechanism study shows that the binding mode of DNA probes on the beads surface is nearly parallel to the beads surface.
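As a rough plausibility check, the reported 14 nm minimum spacing on a 10 μm bead implies a probe capacity that can be estimated by dividing the sphere's surface area by the footprint of one probe (a simplification that ignores packing geometry):

```python
import math

bead_diameter_um = 10.0
spacing_nm = 14.0                                   # 20-mer single-stranded probes

bead_area_nm2 = 4 * math.pi * (bead_diameter_um * 1000 / 2) ** 2   # sphere area
probe_footprint_nm2 = math.pi * (spacing_nm / 2) ** 2              # one probe
max_probes = bead_area_nm2 / probe_footprint_nm2
print(f"{max_probes:.2e}")   # roughly 2e6 probes per 10 um bead
```

The same arithmetic with the 27 nm double-stranded spacing gives a capacity about four times smaller, consistent with density scaling as the inverse square of the minimum distance.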

  1. permGPU: Using graphics processing units in RNA microarray association studies

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2010-06-01

Full Text Available Abstract Background Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, which are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. Results We have developed a CUDA-based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. Conclusions permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.
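permGPU itself is CUDA-based; the permutation resampling it accelerates can be sketched on the CPU with NumPy, here using a two-group mean-difference statistic as a stand-in for the test statistics the package offers:

```python
import numpy as np

def permutation_pvalues(X, y, n_perm=2000, seed=0):
    """Per-gene permutation p-values for a two-group mean-difference
    statistic. X is a genes x samples matrix, y holds 0/1 group labels."""
    rng = np.random.default_rng(seed)
    obs = np.abs(X[:, y == 1].mean(axis=1) - X[:, y == 0].mean(axis=1))
    exceed = np.zeros(X.shape[0])
    for _ in range(n_perm):
        yp = rng.permutation(y)               # shuffle labels under the null
        stat = np.abs(X[:, yp == 1].mean(axis=1) - X[:, yp == 0].mean(axis=1))
        exceed += stat >= obs
    return (exceed + 1) / (n_perm + 1)        # add-one correction avoids p = 0

# toy data: 50 genes, 20 samples, only the first gene truly differential
gen = np.random.default_rng(1)
X = gen.normal(size=(50, 20))
y = np.array([0] * 10 + [1] * 10)
X[0, y == 1] += 3.0
p = permutation_pvalues(X, y)
```

Each permutation is independent of the others, which is what makes the problem "embarrassingly parallel": a GPU can evaluate thousands of shuffled label vectors concurrently.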

  2. Microarray Dot Electrodes Utilizing Dielectrophoresis for Cell Characterization

    Directory of Open Access Journals (Sweden)

    Fatimah Ibrahim

    2013-07-01

Full Text Available During the last three decades, dielectrophoresis (DEP) has become a vital tool for cell manipulation and characterization due to its non-invasiveness. It is very useful in the trend towards point-of-care systems. Currently, most efforts are focused on using DEP in biomedical applications, such as the spatial manipulation of cells, the selective separation or enrichment of target cells, high-throughput molecular screening, biosensors and immunoassays. A significant amount of research on DEP has produced a wide range of microelectrode configurations. In this paper, we describe the microarray dot electrode, a promising electrode geometry for characterizing and manipulating cells via DEP. The advantages offered by this type of microelectrode are also reviewed. The protocol for fabricating planar microelectrodes using photolithography is documented to demonstrate the fast and cost-effective fabrication process. Additionally, different state-of-the-art Lab-on-a-Chip (LOC) devices that have been proposed for DEP applications in the literature are reviewed. We also present our recently designed LOC device, which uses an improved microarray dot electrode configuration to address the challenges facing other devices. This type of LOC system has the capability to boost the implementation of DEP technology in practical settings such as clinical cell sorting, infection diagnosis, and enrichment of particle populations for drug development.

  3. Improving the Slum Planning Through Geospatial Decision Support System

    Science.gov (United States)

    Shekhar, S.

    2014-11-01

In India, a number of schemes and programmes have been launched from time to time in order to promote integrated city development and to enable slum dwellers to gain access to basic services. Despite the use of geospatial technologies in planning, the local, state and central governments have only been partially successful in dealing with these problems. The study on existing policies and programmes also showed that when the government is the sole provider or mediator, GIS can become a tool of coercion rather than participatory decision-making. It has also been observed that local level administrators who have adopted geospatial technology for local planning continue to base decision-making on existing political processes. At this juncture, a geospatial decision support system (GSDSS) can provide a framework for integrating database management systems with analytical models, graphical display, tabular reporting capabilities and the expert knowledge of decision makers. This assists decision-makers in generating and evaluating alternative solutions to spatial problems. During this process, decision-makers undertake a process of decision research - producing a large number of possible decision alternatives - and provide opportunities to involve the community in decision making. The objective is to help decision makers and planners find solutions through a quantitative spatial evaluation and verification process. The study investigates the options for slum development within the formal framework of RAY (Rajiv Awas Yojana), an ambitious programme of the Indian Government for slum development. The software modules for realizing the GSDSS were developed using the ArcGIS and CommunityViz software for Gulbarga city.

  4. Identifying significant genetic regulatory networks in the prostate cancer from microarray data based on transcription factor analysis and conditional independency

    Directory of Open Access Journals (Sweden)

    Yeh Cheng-Yu

    2009-12-01

Conclusions We provide a computational framework to reconstruct the genetic regulatory network from the microarray data using biological knowledge and constraint-based inferences. Our method is helpful in verifying possible interaction relations in gene regulatory networks and filtering out incorrect relations inferred by imperfect methods. We predicted not only individual genes related to cancer but also discovered significant gene regulatory networks. Our method is also validated against several published papers and databases, and the significant gene regulatory networks perform critical biological functions and processes including cell adhesion molecules, androgen and estrogen metabolism, smooth muscle contraction, and GO-annotated processes. These significant gene regulations and the critical concept of tumor progression are useful for understanding cancer biology and disease treatment.

  5. Transcriptomic identification of candidate genes involved in sunflower responses to chilling and salt stresses based on cDNA microarray analysis

    Directory of Open Access Journals (Sweden)

    Paniego Norma

    2008-01-01

Full Text Available Abstract Background Considering that sunflower production is expanding to arid regions, tolerance to abiotic stresses such as drought, low temperatures and salinity arises as one of the main constraints nowadays. Differential organ-specific sunflower ESTs (expressed sequence tags) were previously generated by a subtractive hybridization method that included a considerable number of putative abiotic-stress-associated sequences. The objective of this work is to analyze concerted gene expression profiles of organ-specific ESTs by fluorescence microarray assay, in response to high sodium chloride concentration and chilling treatments, with the aim of identifying and following up candidate genes for early responses to abiotic stress in sunflower. Results Abiotic-stress-related expressed genes were the target of this characterization through a gene expression analysis using an organ-specific cDNA fluorescence microarray approach in response to high salinity and low temperatures. The experiment included three independent replicates from leaf samples. We analyzed 317 unigenes previously isolated from differential organ-specific cDNA libraries from leaf, stem and flower at the R1 and R4 developmental stages. A statistical analysis based on mean comparison by ANOVA and ordination by Principal Component Analysis allowed the detection of 80 candidate genes for salinity and/or chilling stresses. Of these, 50 genes were up- or down-regulated under both stresses, supporting common regulatory mechanisms and general responses to chilling and salinity. Interestingly, 15 sequences were up-regulated and 12 down-regulated specifically in one stress but not in the other. These genes are potentially involved in different regulatory mechanisms including transcription/translation/protein degradation/protein folding/ROS production or ROS-scavenging. Differential gene expression patterns were confirmed by qRT-PCR for 12.5% of the microarray candidate sequences.
Conclusion
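The screening strategy described above (per-gene ANOVA across treatments plus ordination by principal component analysis) can be sketched on synthetic data; the matrix sizes, effect size and p-value cutoff are illustrative:

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

gen = np.random.default_rng(0)
# toy expression matrix: 300 genes x 9 arrays (3 replicates x 3 conditions)
X = gen.normal(size=(300, 9))
condition = np.array([0] * 3 + [1] * 3 + [2] * 3)   # control / salt / chilling
X[:40, 3:6] += 4.0          # the first 40 genes respond to the salt treatment

# per-gene one-way ANOVA across the three conditions
pvals = np.array([stats.f_oneway(*(g[condition == c] for c in range(3))).pvalue
                  for g in X])
candidates = np.flatnonzero(pvals < 0.01)

# ordination: project the 9 arrays onto the first two principal components
scores = PCA(n_components=2).fit_transform(X.T)
```

The ANOVA pass flags genes whose means differ between treatments, while the PCA ordination gives a global view of whether replicate arrays group by treatment, the two roles the abstract describes.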

  6. Identifying significant genetic regulatory networks in the prostate cancer from microarray data based on transcription factor analysis and conditional independency.

    Science.gov (United States)

    Yeh, Hsiang-Yuan; Cheng, Shih-Wu; Lin, Yu-Chun; Yeh, Cheng-Yu; Lin, Shih-Fang; Soo, Von-Wun

    2009-12-21

the genetic regulatory network from the microarray data using biological knowledge and constraint-based inferences. Our method is helpful in verifying possible interaction relations in gene regulatory networks and filtering out incorrect relations inferred by imperfect methods. We predicted not only individual genes related to cancer but also discovered significant gene regulatory networks. Our method is also validated against several published papers and databases, and the significant gene regulatory networks perform critical biological functions and processes including cell adhesion molecules, androgen and estrogen metabolism, smooth muscle contraction, and GO-annotated processes. These significant gene regulations and the critical concept of tumor progression are useful for understanding cancer biology and disease treatment.

  7. Key stakeholders' perceptions of the acceptability and usefulness of a tablet-based tool to improve communication and shared decision making in ICUs.

    Science.gov (United States)

    Ernecoff, Natalie C; Witteman, Holly O; Chon, Kristen; Chen, Yanquan Iris; Buddadhumaruk, Praewpannarai; Chiarchiaro, Jared; Shotsberger, Kaitlin J; Shields, Anne-Marie; Myers, Brad A; Hough, Catherine L; Carson, Shannon S; Lo, Bernard; Matthay, Michael A; Anderson, Wendy G; Peterson, Michael W; Steingrub, Jay S; Arnold, Robert M; White, Douglas B

    2016-06-01

broad support among stakeholders for the use of a tablet-based tool to improve communication and shared decision making in ICUs. Eliciting the perspectives of key stakeholders early in the design process yielded important insights to create a tool tailored to the needs of surrogates and care providers in ICUs.

  8. Patient Decision Aids Improve Decision Quality and Patient Experience and Reduce Surgical Rates in Routine Orthopaedic Care: A Prospective Cohort Study.

    Science.gov (United States)

    Sepucha, Karen; Atlas, Steven J; Chang, Yuchiao; Dorrwachter, Janet; Freiberg, Andrew; Mangla, Mahima; Rubash, Harry E; Simmons, Leigh H; Cha, Thomas

    2017-08-02

    Patient decision aids are effective in randomized controlled trials, yet little is known about their impact in routine care. The purpose of this study was to examine whether decision aids increase shared decision-making when used in routine care. A prospective study was designed to evaluate the impact of a quality improvement project to increase the use of decision aids for patients with hip or knee osteoarthritis, lumbar disc herniation, or lumbar spinal stenosis. A usual care cohort was enrolled before the quality improvement project and an intervention cohort was enrolled after the project. Participants were surveyed 1 week after a specialist visit, and surgical status was collected at 6 months. Regression analyses adjusted for clustering of patients within clinicians and examined the impact on knowledge, patient reports of shared decision-making in the visit, and surgical rates. With 550 surveys, the study had 80% to 90% power to detect a difference in these key outcomes. The response rates to the 1-week survey were 70.6% (324 of 459) for the usual care cohort and 70.2% (328 of 467) for the intervention cohort. There was no significant difference (p > 0.05) in any patient characteristic between the 2 cohorts. More patients received decision aids in the intervention cohort at 63.6% compared with the usual care cohort at 27.3% (p = 0.007). Decision aid use was associated with higher knowledge scores, with a mean difference of 18.7 points (95% confidence interval [CI], 11.4 to 26.1 points; p < 0.001) for the usual care cohort and 15.3 points (95% CI, 7.5 to 23.0 points; p = 0.002) for the intervention cohort. Patients reported more shared decision-making (p = 0.009) in the visit with their surgeon in the intervention cohort, with a mean Shared Decision-Making Process score (and standard deviation) of 66.9 ± 27.5 points, compared with the usual care cohort at 62.5 ± 28.6 points. The majority of patients received their preferred treatment, and this did not differ

  9. An Introduction to MAMA (Meta-Analysis of MicroArray data) System.

    Science.gov (United States)

    Zhang, Zhe; Fenstermacher, David

    2005-01-01

    Analyzing microarray data across multiple experiments has been proven advantageous. To support this kind of analysis, we are developing a software system called MAMA (Meta-Analysis of MicroArray data). MAMA utilizes a client-server architecture with a relational database on the server-side for the storage of microarray datasets collected from various resources. The client-side is an application running on the end user's computer that allows the user to manipulate microarray data and analytical results locally. MAMA implementation will integrate several analytical methods, including meta-analysis within an open-source framework offering other developers the flexibility to plug in additional statistical algorithms.

  10. Causal knowledge and reasoning in decision making

    NARCIS (Netherlands)

    Hagmayer, Y.; Witteman, C.L.M.

    2017-01-01

    Normative causal decision theories argue that people should use their causal knowledge in decision making. Based on these ideas, we argue that causal knowledge and reasoning may support and thereby potentially improve decision making based on expected outcomes, narratives, and even cues. We will

  11. Seminal Quality Prediction Using Clustering-Based Decision Forests

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2014-08-01

Full Text Available Prediction of seminal quality with statistical learning tools is an emerging methodology in decision support systems in biomedical engineering and is very useful in the early diagnosis of seminal patients and the selection of semen donor candidates. However, as is common in medical diagnosis, seminal quality prediction faces the class imbalance problem. In this paper, we propose a novel supervised ensemble learning approach, namely Clustering-Based Decision Forests, to tackle the unbalanced class learning problem in seminal quality prediction. Experimental results on a real fertility diagnosis dataset show that Clustering-Based Decision Forests outperforms decision trees, Support Vector Machines, random forests, multilayer perceptron neural networks and logistic regression by a noticeable margin. Clustering-Based Decision Forests can also be used to evaluate variables' importance, and the top five important factors that may affect semen concentration obtained in this study are age, serious trauma, sitting time, the season when the semen sample is produced, and high fevers in the last year. The findings could be helpful in explaining seminal concentration problems in infertile males or pre-screening semen donor candidates.
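The abstract does not spell out the ensemble's construction; one common way to build a cluster-based decision forest for imbalanced data, sketched here under that assumption, is to cluster the majority class and pair each cluster with the full minority class so every tree trains on balanced data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def cluster_decision_forest(X, y, n_trees=15, seed=0):
    """Cluster the majority class and train one decision tree per cluster,
    pairing each cluster with all minority examples to balance the classes."""
    maj, mino = X[y == 0], X[y == 1]
    clusters = KMeans(n_clusters=n_trees, n_init=10,
                      random_state=seed).fit_predict(maj)
    forest = []
    for c in range(n_trees):
        Xc = np.vstack([maj[clusters == c], mino])
        yc = np.r_[np.zeros((clusters == c).sum()), np.ones(len(mino))]
        forest.append(DecisionTreeClassifier(random_state=seed).fit(Xc, yc))
    return forest

def forest_predict(forest, X):
    votes = np.mean([t.predict(X) for t in forest], axis=0)   # majority vote
    return (votes >= 0.5).astype(int)

# toy imbalanced problem: 300 majority vs 20 minority examples
gen = np.random.default_rng(2)
X = np.vstack([gen.normal(size=(300, 2)), gen.normal(size=(20, 2)) + 5])
y = np.r_[np.zeros(300), np.ones(20)].astype(int)
forest = cluster_decision_forest(X, y)
pred = forest_predict(forest, np.array([[5.0, 5.0], [0.0, 0.0]]))
```

Averaging the trees' `feature_importances_` would give the kind of variable-importance ranking the study reports.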

  12. Improving Decision Making for Feeding Options in Advanced Dementia: A Randomized, Controlled Trial

    Science.gov (United States)

    Hanson, Laura C.; Carey, Timothy S.; Caprio, Anthony J.; Lee, Tae Joon; Ersek, Mary; Garrett, Joanne; Jackman, Anne; Gilliam, Robin; Wessell, Kathryn; Mitchell, Susan L.

    2011-01-01

Background Feeding problems are common in dementia, and decision-makers have limited understanding of treatment options. Objectives To test whether a decision aid improves quality of decision-making about feeding options in advanced dementia. Design Cluster randomized controlled trial. Setting 24 nursing homes in North Carolina. Participants Residents with advanced dementia and feeding problems and their surrogates. Intervention Intervention surrogates received an audio or print decision aid on feeding options in advanced dementia. Controls received usual care. Measurements Primary outcome was the Decisional Conflict Scale (range 1–5) measured at 3 months; other main outcomes were surrogate knowledge, frequency of communication with providers, and feeding treatment use. Results 256 residents and surrogate decision-makers were recruited. Residents' average age was 85; 67% were Caucasian and 79% were women. Surrogates' average age was 59; 67% were Caucasian, and 70% were residents' children. The intervention improved knowledge scores (16.8 vs 15.1). A decision aid about feeding options in advanced dementia reduced decisional conflict for surrogates and increased their knowledge and communication about feeding options with providers. PMID:22091750

  13. EEG feature selection method based on decision tree.

    Science.gov (United States)

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on a decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier named the support vector machine (SVM) was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on a decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
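The described pipeline (PCA for extraction, a decision tree to rank features, an SVM to classify) can be sketched with scikit-learn; since the BCI Competition II data are not bundled here, a synthetic stand-in with a known signal is used:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

gen = np.random.default_rng(0)
# synthetic stand-in for epoched EEG: 200 trials x 64 features, 2 classes
y = gen.integers(0, 2, 200)
X = gen.normal(size=(200, 64))
X[:, :5] += y[:, None] * 1.5          # only five features carry class signal

feats = PCA(n_components=20, random_state=0).fit_transform(X)   # extraction
tree = DecisionTreeClassifier(random_state=0).fit(feats, y)
keep = np.argsort(tree.feature_importances_)[-5:]               # selection
Xtr, Xte, ytr, yte = train_test_split(feats[:, keep], y, random_state=0)
acc = SVC().fit(Xtr, ytr).score(Xte, yte)                       # classification
```

The decision tree serves purely as a ranking device: its impurity-based importances pick out the few principal components that actually separate the classes before the SVM is trained.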

  14. Reconciliation of Decision-Making Heuristics Based on Decision Trees Topologies and Incomplete Fuzzy Probabilities Sets.

    Science.gov (United States)

    Doubravsky, Karel; Dohnal, Mirko

    2015-01-01

    Complex decision-making tasks of different natures, e.g. in economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest much time in the study of complex formal theories. They require decisions that can be (re)checked by human-like common-sense reasoning. One important problem related to realistic decision-making tasks is the incomplete data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm by which some missing III (input information items) can be generated, using mainly decision tree topologies, and integrated into incomplete data sets. The algorithm is based on an easy-to-understand heuristic, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (e.g. fuzzy probabilities), are usually available, which means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics with additional fuzzy sets using fuzzy linear programming. The case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail.
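    The "longer sub-path is less probable" heuristic has a simple baseline form under total ignorance: if each decision node splits its probability mass equally among its branches, a path's probability is the product of those branch fractions, so longer paths necessarily carry less mass. The sketch below shows only that baseline, not the paper's fuzzy-linear-programming reconciliation with known fuzzy probabilities.

    ```python
    from fractions import Fraction

    def path_probability(branching):
        """Probability of one root-to-leaf path under total ignorance.

        `branching` lists the branch count at each node along the path;
        each node splits its mass equally, so the path gets 1/(b1*b2*...).
        """
        p = Fraction(1)
        for b in branching:
            p /= b
        return p

    short = path_probability([2])        # one binary node on the path
    long_ = path_probability([2, 2, 2])  # three binary nodes
    print(short, long_)                  # 1/2 1/8
    assert long_ < short                 # the longer path is less probable
    ```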

  15. Reconciliation of Decision-Making Heuristics Based on Decision Trees Topologies and Incomplete Fuzzy Probabilities Sets.

    Directory of Open Access Journals (Sweden)

    Karel Doubravsky

    Full Text Available Complex decision-making tasks of different natures, e.g. in economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest much time in the study of complex formal theories. They require decisions that can be (re)checked by human-like common-sense reasoning. One important problem related to realistic decision-making tasks is the incomplete data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm by which some missing III (input information items) can be generated, using mainly decision tree topologies, and integrated into incomplete data sets. The algorithm is based on an easy-to-understand heuristic, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (e.g. fuzzy probabilities), are usually available, which means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics with additional fuzzy sets using fuzzy linear programming. The case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail.

  16. Feature Genes Selection Using Supervised Locally Linear Embedding and Correlation Coefficient for Microarray Classification.

    Science.gov (United States)

    Xu, Jiucheng; Mu, Huiyu; Wang, Yun; Huang, Fangzhou

    2018-01-01

    The selection of feature genes with high recognition ability from gene expression profiles has gained great significance in biology. However, most of the existing methods have a high time complexity and poor classification performance. Motivated by this, an effective feature selection method, called supervised locally linear embedding and Spearman's rank correlation coefficient (SLLE-SC2), is proposed, based on the concepts of locally linear embedding and correlation coefficient algorithms. Supervised locally linear embedding takes class label information into account and improves the classification performance. Furthermore, Spearman's rank correlation coefficient is used to remove the coexpressed genes. The experimental results obtained on four public tumor microarray datasets illustrate that our method is valid and feasible.
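    The Spearman filtering step for co-expressed genes can be illustrated with a minimal sketch. The greedy keep/drop policy and the 0.9 cut-off below are both assumptions for illustration; the paper's exact rule and threshold are not given in this abstract.

    ```python
    import numpy as np

    def spearman(x, y):
        """Spearman rank correlation via Pearson on ranks (no tie correction)."""
        rx = np.argsort(np.argsort(x)).astype(float)
        ry = np.argsort(np.argsort(y)).astype(float)
        rx -= rx.mean(); ry -= ry.mean()
        return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

    def drop_coexpressed(genes, threshold=0.9):
        """Keep a gene only if its |rho| with every already-kept gene
        stays below the threshold (one simple reading of the filter)."""
        kept = []
        for name, profile in genes:
            if all(abs(spearman(profile, p)) < threshold for _, p in kept):
                kept.append((name, profile))
        return [name for name, _ in kept]

    genes = [
        ("g1", np.array([1.0, 2.0, 3.0, 4.0])),
        ("g2", np.array([2.0, 4.0, 6.0, 8.0])),   # perfectly co-expressed with g1
        ("g3", np.array([4.0, 1.0, 3.0, 2.0])),
    ]
    print(drop_coexpressed(genes))                 # ['g1', 'g3']
    ```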

  17. Continuous quality improvement for the clinical decision unit.

    Science.gov (United States)

    Mace, Sharon E

    2004-01-01

    Clinical decision units (CDUs) are a relatively new and growing area of medicine in which patients undergo rapid evaluation and treatment. Continuous quality improvement (CQI) is important for the establishment and functioning of CDUs. CQI in CDUs has many advantages: better CDU functioning, fulfillment of Joint Commission on Accreditation of Healthcare Organizations mandates, greater efficiency/productivity, increased job satisfaction, better performance improvement, data availability, and benchmarking. Key elements include a database with volume indicators, operational policies, clinical practice protocols (diagnosis specific/condition specific), monitors, benchmarks, and clinical pathways. Examples of these important parameters are given. The CQI process should be individualized for each CDU and hospital.

  18. ICT-based reforms in local government decision-making in the gram panchayats of Kerala

    Directory of Open Access Journals (Sweden)

    Alex K Thottunkel

    2015-06-01

    Full Text Available The beneficial impact of computerisation can be felt in all elements that contribute to decision-making in panchayats in the state of Kerala. However, even though computerisation has brought immense improvements over traditional administrative practices, scope still remains for further improvement. Instead of the 'as it is' computerisation that is mostly carried out, a process-based approach is needed.

  19. Design of a covalently bonded glycosphingolipid microarray

    DEFF Research Database (Denmark)

    Arigi, Emma; Blixt, Klas Ola; Buschard, Karsten

    2012-01-01

    … the major classes of plant and fungal GSLs. In this work, a prototype "universal" GSL-based covalent microarray has been designed, and preliminary evaluation of its potential utility in assaying protein-GSL binding interactions investigated. An essential step in development involved the enzymatic release of the fatty acyl moiety of the ceramide aglycone of selected mammalian GSLs with sphingolipid N-deacylase (SCDase). Derivatization of the free amino group of a typical lyso-GSL, lyso-G(M1), with a prototype linker assembled from succinimidyl-[(N-maleimidopropionamido)-diethyleneglycol] ester and 2…

  20. Does Patient Preference Measurement in Decision Aids Improve Decisional Conflict? A Randomized Trial in Men with Prostate Cancer.

    Science.gov (United States)

    Shirk, Joseph D; Crespi, Catherine M; Saucedo, Josemanuel D; Lambrechts, Sylvia; Dahan, Ely; Kaplan, Robert; Saigal, Christopher

    2017-12-01

    Shared decision making (SDM) has been advocated as an approach to medical decision making that can improve decisional quality. Decision aids are tools that facilitate SDM in the context of limited physician time; however, many decision aids do not incorporate preference measurement. We aim to understand whether adding preference measurement to a standard patient educational intervention improves decisional quality and is feasible in a busy clinical setting. Men with incident localized prostate cancer (n = 122) were recruited from the Greater Los Angeles Veterans Affairs (VA) Medical Center urology clinic, Olive View UCLA Medical Center, and Harbor UCLA Medical Center from January 2011 to May 2015 and randomized to education with a brochure about prostate cancer treatment or software-based preference assessment in addition to the brochure. Men undergoing preference assessment received a report detailing the relative strength of their preferences for treatment outcomes, for use in review with their doctor. Participants completed instruments measuring decisional conflict, knowledge, SDM, and patient satisfaction with care before and/or after their cancer consultation. Baseline knowledge scores were low (mean 62%). The baseline mean total score on the Decisional Conflict Scale was 2.3 (±0.9), signifying moderate decisional conflict. Men undergoing preference assessment had a significantly larger decrease in decisional conflict total score (p = 0.023) and the Perceived Effective Decision Making subscale (p = 0.003) post consult compared with those receiving education only. Improvements in satisfaction with care, SDM, and knowledge were similar between groups. Individual-level preference assessment is feasible in the clinic setting. Patients with prostate cancer who undergo preference assessment are more certain about their treatment decisions and report decreased levels of decisional conflict when making these decisions.

  1. Multi-gene detection and identification of mosquito-borne RNA viruses using an oligonucleotide microarray.

    Directory of Open Access Journals (Sweden)

    Nathan D Grubaugh

    Full Text Available BACKGROUND: Arthropod-borne viruses are important emerging pathogens world-wide. Viruses transmitted by mosquitoes, such as dengue, yellow fever, and Japanese encephalitis viruses, infect hundreds of millions of people and animals each year. Global surveillance of these viruses in mosquito vectors using molecular based assays is critical for prevention and control of the associated diseases. Here, we report an oligonucleotide DNA microarray design, termed ArboChip5.1, for multi-gene detection and identification of mosquito-borne RNA viruses from the genera Flavivirus (family Flaviviridae), Alphavirus (Togaviridae), Orthobunyavirus (Bunyaviridae), and Phlebovirus (Bunyaviridae). METHODOLOGY/PRINCIPAL FINDINGS: The assay utilizes targeted PCR amplification of three genes from each virus genus for electrochemical detection on a portable, field-tested microarray platform. Fifty-two viruses propagated in cell-culture were used to evaluate the specificity of the PCR primer sets and the ArboChip5.1 microarray capture probes. The microarray detected all of the tested viruses and differentiated between many closely related viruses such as members of the dengue, Japanese encephalitis, and Semliki Forest virus clades. Laboratory infected mosquitoes were used to simulate field samples and to determine the limits of detection. Additionally, we identified dengue virus type 3, Japanese encephalitis virus, Tembusu virus, Culex flavivirus, and a Quang Binh-like virus from mosquitoes collected in Thailand in 2011 and 2012. CONCLUSIONS/SIGNIFICANCE: We demonstrated that the described assay can be utilized in a comprehensive field surveillance program by the broad-range amplification and specific identification of arboviruses from infected mosquitoes. Furthermore, the microarray platform can be deployed in the field and viral RNA extraction to data analysis can occur in as little as 12 h. The information derived from the ArboChip5.1 microarray can help to establish

  2. Factorial microarray analysis of zebra mussel (Dreissena polymorpha: Dreissenidae, Bivalvia) adhesion

    Directory of Open Access Journals (Sweden)

    Faisal Mohamed

    2010-05-01

    Full Text Available Abstract Background The zebra mussel (Dreissena polymorpha) has been well known for its expertise in attaching to substances under the water. Studies in past decades on this underwater adhesion focused on the adhesive protein isolated from the byssogenesis apparatus of the zebra mussel. However, the mechanism of the initiation, maintenance, and determination of the attachment process remains largely unknown. Results In this study, we used a zebra mussel cDNA microarray previously developed in our lab and a factorial analysis to identify the genes that were involved in response to the changes of four factors: temperature (Factor A), current velocity (Factor B), dissolved oxygen (Factor C), and byssogenesis status (Factor D). Twenty probes in the microarray were found to be modified by one of the factors. The transcription products of four selected genes, DPFP-BG20_A01, EGP-BG97/192_B06, EGP-BG13_G05, and NH-BG17_C09 were unique to the zebra mussel foot based on the results of quantitative reverse transcription PCR (qRT-PCR). The expression profiles of these four genes under the attachment and non-attachment were also confirmed by qRT-PCR and the result is accordant to that from microarray assay. The in situ hybridization with the RNA probes of two identified genes DPFP-BG20_A01 and EGP-BG97/192_B06 indicated that both of them were expressed by a type of exocrine gland cell located in the middle part of the zebra mussel foot. Conclusions The results of this study suggested that the changes of D. polymorpha byssogenesis status and the environmental factors can dramatically affect the expression profiles of the genes unique to the foot. It turns out that the factorial design and analysis of the microarray experiment is a reliable method to identify the influence of multiple factors on the expression profiles of the probesets in the microarray; therein it provides a powerful tool to reveal the mechanism of zebra mussel underwater attachment.

  3. Factorial microarray analysis of zebra mussel (Dreissena polymorpha: Dreissenidae, Bivalvia) adhesion.

    Science.gov (United States)

    Xu, Wei; Faisal, Mohamed

    2010-05-28

    The zebra mussel (Dreissena polymorpha) has been well known for its expertise in attaching to substances under the water. Studies in past decades on this underwater adhesion focused on the adhesive protein isolated from the byssogenesis apparatus of the zebra mussel. However, the mechanism of the initiation, maintenance, and determination of the attachment process remains largely unknown. In this study, we used a zebra mussel cDNA microarray previously developed in our lab and a factorial analysis to identify the genes that were involved in response to the changes of four factors: temperature (Factor A), current velocity (Factor B), dissolved oxygen (Factor C), and byssogenesis status (Factor D). Twenty probes in the microarray were found to be modified by one of the factors. The transcription products of four selected genes, DPFP-BG20_A01, EGP-BG97/192_B06, EGP-BG13_G05, and NH-BG17_C09 were unique to the zebra mussel foot based on the results of quantitative reverse transcription PCR (qRT-PCR). The expression profiles of these four genes under the attachment and non-attachment were also confirmed by qRT-PCR and the result is accordant to that from microarray assay. The in situ hybridization with the RNA probes of two identified genes DPFP-BG20_A01 and EGP-BG97/192_B06 indicated that both of them were expressed by a type of exocrine gland cell located in the middle part of the zebra mussel foot. The results of this study suggested that the changes of D. polymorpha byssogenesis status and the environmental factors can dramatically affect the expression profiles of the genes unique to the foot. It turns out that the factorial design and analysis of the microarray experiment is a reliable method to identify the influence of multiple factors on the expression profiles of the probesets in the microarray; therein it provides a powerful tool to reveal the mechanism of zebra mussel underwater attachment.
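    The factorial analysis used in these two records rests on standard main-effect estimation from a multi-level design. A toy two-level (2^2) example with hypothetical expression values (the factor codings and numbers below are illustrative only, not the paper's data):

    ```python
    def main_effect(responses, levels):
        """Main effect of a factor in a two-level factorial design:
        mean response at the high level minus mean at the low level."""
        hi = [r for r, l in zip(responses, levels) if l == +1]
        lo = [r for r, l in zip(responses, levels) if l == -1]
        return sum(hi) / len(hi) - sum(lo) / len(lo)

    # hypothetical expression values for one probe in a 2^2 design
    # runs: (A, B) = (-,-), (+,-), (-,+), (+,+)
    y = [1.0, 3.0, 1.2, 3.4]
    A = [-1, +1, -1, +1]
    B = [-1, -1, +1, +1]
    print(round(main_effect(y, A), 6), round(main_effect(y, B), 6))  # 2.1 0.3
    ```

    A probe whose main effect for a factor is large relative to noise would be flagged as "modified by" that factor.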

  4. 3D Biomaterial Microarrays for Regenerative Medicine

    DEFF Research Database (Denmark)

    Gaharwar, Akhilesh K.; Arpanaei, Ayyoob; Andresen, Thomas Lars

    2015-01-01

    Three dimensional (3D) biomaterial microarrays hold enormous promise for regenerative medicine because of their ability to accelerate the design and fabrication of biomimetic materials. Such tissue-like biomaterials can provide an appropriate microenvironment for stimulating and controlling stem cell differentiation into tissue-specific lineages. The use of 3D biomaterial microarrays can, if optimized correctly, result in a more than 1000-fold reduction in biomaterials and cells consumption when engineering optimal materials combinations, which makes these miniaturized systems very attractive for tissue engineering and drug screening applications.

  5. Systematic spatial bias in DNA microarray hybridization is caused by probe spot position-dependent variability in lateral diffusion.

    Science.gov (United States)

    Steger, Doris; Berry, David; Haider, Susanne; Horn, Matthias; Wagner, Michael; Stocker, Roman; Loy, Alexander

    2011-01-01

    The hybridization of nucleic acid targets with surface-immobilized probes is a widely used assay for the parallel detection of multiple targets in medical and biological research. Despite its widespread application, DNA microarray technology still suffers from several biases and lack of reproducibility, stemming in part from an incomplete understanding of the processes governing surface hybridization. In particular, non-random spatial variations within individual microarray hybridizations are often observed, but the mechanisms underpinning this positional bias remain incompletely explained. This study identifies and rationalizes a systematic spatial bias in the intensity of surface hybridization, characterized by markedly increased signal intensity of spots located at the boundaries of the spotted areas of the microarray slide. Combining observations from a simplified single-probe block array format with predictions from a mathematical model, the mechanism responsible for this bias is found to be a position-dependent variation in lateral diffusion of target molecules. Numerical simulations reveal a strong influence of microarray well geometry on the spatial bias. Reciprocal adjustment of the size of the microarray hybridization chamber to the area of surface-bound probes is a simple and effective measure to minimize or eliminate the diffusion-based bias, resulting in increased uniformity and accuracy of quantitative DNA microarray hybridization.

  6. Optimization of Aeroengine Shop Visit Decisions Based on Remaining Useful Life and Stochastic Repair Time

    Directory of Open Access Journals (Sweden)

    Jing Cai

    2016-01-01

    Full Text Available Considering the wide application of condition-based maintenance in aeroengine maintenance practice, it becomes possible for aeroengines to carry out their preventive maintenance in a just-in-time (JIT) manner by reasonably planning their shop visits (SVs). In this study, an approach is proposed to make aeroengine SV decisions following the concept of JIT. Firstly, a state space model (SSM) for aeroengines based on exhaust gas temperature margin is developed to predict the remaining useful life (RUL) of an aeroengine. Secondly, the effect of SV decisions on risk and service level (SL) is analyzed, and an optimization of the aeroengine SV decisions based on RUL and stochastic repair time is performed to carry out the JIT manner with the requirement of safety and SL. Finally, a case study considering two CFM-56 aeroengines is presented to demonstrate the proposed approach. The results show that the predictive accuracy of RUL with SSM is higher than with linear regression, and that the process of SV decisions is simple and feasible for airlines to improve the inventory management level of their aeroengines.

  7. Microarray analysis of gene expression profiles in ripening pineapple fruits.

    Science.gov (United States)

    Koia, Jonni H; Moyle, Richard L; Botella, Jose R

    2012-12-18

    Pineapple (Ananas comosus) is a tropical fruit crop of significant commercial importance. Although the physiological changes that occur during pineapple fruit development have been well characterized, little is known about the molecular events that occur during the fruit ripening process. Understanding the molecular basis of pineapple fruit ripening will aid the development of new varieties via molecular breeding or genetic modification. In this study we developed a 9277 element pineapple microarray and used it to profile gene expression changes that occur during pineapple fruit ripening. Microarray analyses identified 271 unique cDNAs differentially expressed at least 1.5-fold between the mature green and mature yellow stages of pineapple fruit ripening. Among these 271 sequences, 184 share significant homology with genes encoding proteins of known function, 53 share homology with genes encoding proteins of unknown function and 34 share no significant homology with any database accession. Of the 237 pineapple sequences with homologs, 160 were up-regulated and 77 were down-regulated during pineapple fruit ripening. DAVID Functional Annotation Cluster (FAC) analysis of all 237 sequences with homologs revealed confident enrichment scores for redox activity, organic acid metabolism, metalloenzyme activity, glycolysis, vitamin C biosynthesis, antioxidant activity and cysteine peptidase activity, indicating the functional significance and importance of these processes and pathways during pineapple fruit development. Quantitative real-time PCR analysis validated the microarray expression results for nine out of ten genes tested. This is the first report of a microarray based gene expression study undertaken in pineapple. Our bioinformatic analyses of the transcript profiles have identified a number of genes, processes and pathways with putative involvement in the pineapple fruit ripening process. 
This study extends our knowledge of the molecular basis of pineapple fruit

  8. Integrated olfactory receptor and microarray gene expression databases

    Directory of Open Access Journals (Sweden)

    Crasto Chiquito J

    2007-06-01

    Full Text Available Abstract Background Gene expression patterns of olfactory receptors (ORs) are an important component of the signal encoding mechanism in the olfactory system since they determine the interactions between odorant ligands and sensory neurons. We have developed the Olfactory Receptor Microarray Database (ORMD) to house OR gene expression data. ORMD is integrated with the Olfactory Receptor Database (ORDB), which is a key repository of OR gene information. Both databases aim to aid experimental research related to olfaction. Description ORMD is a Web-accessible database that provides a secure data repository for OR microarray experiments. It contains both publicly available and private data; accessing the latter requires authenticated login. The ORMD is designed to allow users to not only deposit gene expression data but also manage their projects/experiments. For example, contributors can choose whether to make their datasets public. For each experiment, users can download the raw data files and view and export the gene expression data. For each OR gene being probed in a microarray experiment, a hyperlink to that gene in ORDB provides access to genomic and proteomic information related to the corresponding olfactory receptor. Individual ORs archived in ORDB are also linked to ORMD, allowing users access to the related microarray gene expression data. Conclusion ORMD serves as a data repository and project management system. It facilitates the study of microarray experiments of gene expression in the olfactory system. In conjunction with ORDB, ORMD integrates gene expression data with the genomic and functional data of ORs, and is thus a useful resource for both olfactory researchers and the public.

  9. A new modified histogram matching normalization for time series microarray analysis

    NARCIS (Netherlands)

    Astola, L.J.; Molenaar, J.

    2014-01-01

    Microarray data is often utilized in inferring regulatory networks. Quantile normalization (QN) is a popular method to reduce array-to-array variation. We show that, in the context of time series measurements, QN may not be the best choice for this task, especially not if the inference is based on
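    Quantile normalization, the method this note cautions against for time series, forces every array onto a common reference distribution. A minimal numpy sketch of the standard rank-based variant (no tie handling), which makes the distribution-flattening effect explicit:

    ```python
    import numpy as np

    def quantile_normalize(X):
        """Quantile-normalize the columns (arrays) of X.

        Each column is replaced by the mean quantile profile, so all arrays
        end up with identical value distributions - exactly the property
        that can erase genuine between-timepoint differences.
        """
        ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # per-column ranks
        mean_quantiles = np.sort(X, axis=0).mean(axis=1)   # reference distribution
        return mean_quantiles[ranks]

    X = np.array([[5.0, 4.0, 3.0],
                  [2.0, 1.0, 4.0],
                  [3.0, 4.0, 6.0],
                  [4.0, 2.0, 8.0]])
    Xn = quantile_normalize(X)
    # after QN every column has the same sorted values
    print(np.allclose(np.sort(Xn, axis=0), np.sort(Xn[:, :1], axis=0)))  # True
    ```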

  10. A Framework of a Computerized Decision Aid to Improve Group Judgments

    Directory of Open Access Journals (Sweden)

    Utpal Bose

    2009-09-01

    Full Text Available In organizations, groups of decision makers often meet to make judgments as a group on issues and tasks such as hiring the person who best fits an open position. In such tasks, called cognitive conflict tasks, where there is no conflict of interest, group members attempting to reach a common solution often differ in their perspectives on the problem. Cognitive conflicts have been studied in the context of Social Judgment Theory, which posits that persons, or judges, make a set of judgments about a set of events based on observation of a set of cues related to the events. Disagreement arises because the judges fail to understand each other’s judgment-making policies. In order to reduce disagreement and move the group towards a group judgment policy that has the consensus of the group members and is applied consistently, a computerized decision aid is proposed. It can be built around a Group Support System, using cognitive mapping as a method of providing cognitive feedback and the Analytic Hierarchy Process to process the conflicting criteria, to help an individual formulate a judgment policy and to aggregate the individual policies into a group judgment policy. Such a decision aid supports every decision maker in the group in effectively using information about the task, in understanding the judgment policy they form, and in communicating their evaluation policies accurately to other members; by also providing an iterative mechanism through which members can arrive at a compromise solution to the task, it is expected to improve the quality of group judgments.

  11. Dimension reduction methods for microarray data: a review

    Directory of Open Access Journals (Sweden)

    Rabia Aziz

    2017-03-01

    Full Text Available Dimension reduction has become inevitable for pre-processing of high dimensional data. “Gene expression microarray data” is an instance of such high dimensional data. Gene expression microarray data displays the maximum number of genes (features) simultaneously at a molecular level with a very small number of samples. The copious numbers of genes are usually provided to a learning algorithm for producing a complete characterization of the classification task. However, most of the time the majority of the genes are irrelevant or redundant to the learning task. This deteriorates the learning accuracy and training speed and leads to the problem of overfitting. Thus, dimension reduction of microarray data is a crucial preprocessing step for prediction and classification of disease. Various feature selection and feature extraction techniques have been proposed in the literature to identify the genes that have a direct impact on the various machine learning algorithms for classification and to eliminate the remaining ones. This paper describes a taxonomy of dimension reduction methods with their characteristics, evaluation criteria, advantages and disadvantages. It also presents a review of numerous dimension reduction approaches for microarray data, mainly those methods that have been proposed over the past few years.

  12. The detection and differentiation of canine respiratory pathogens using oligonucleotide microarrays.

    Science.gov (United States)

    Wang, Lih-Chiann; Kuo, Ya-Ting; Chueh, Ling-Ling; Huang, Dean; Lin, Jiunn-Horng

    2017-05-01

    Canine respiratory diseases are commonly seen in dogs along with co-infections with multiple respiratory pathogens, including viruses and bacteria. Virus infections in even vaccinated dogs were also reported. The clinical signs caused by different respiratory etiological agents are similar, which makes differential diagnosis imperative. An oligonucleotide microarray system was developed in this study. The wild type and vaccine strains of canine distemper virus (CDV), influenza virus, canine herpesvirus (CHV), Bordetella bronchiseptica and Mycoplasma cynos were detected and differentiated simultaneously on a microarray chip. The detection limit is 10, 10, 100, 50 and 50 copy numbers for CDV, influenza virus, CHV, B. bronchiseptica and M. cynos, respectively. The clinical test results of nasal swab samples showed that the microarray had remarkably better efficacy than the multiplex PCR-agarose gel method. The positive detection rate of microarray and agarose gel was 59.0% (n=33) and 41.1% (n=23) among the 56 samples, respectively. CDV vaccine strain and pathogen co-infections were further demonstrated by the microarray but not by the multiplex PCR-agarose gel. The oligonucleotide microarray provides a highly efficient diagnosis alternative that could be applied to clinical usage, greatly assisting in disease therapy and control. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Washing scaling of GeneChip microarray expression

    Directory of Open Access Journals (Sweden)

    Krohn Knut

    2010-05-01

    Full Text Available Abstract Background Post-hybridization washing is an essential part of microarray experiments. Both the quality of the experimental washing protocol and adequate consideration of washing in intensity calibration ultimately affect the quality of the expression estimates extracted from the microarray intensities. Results We conducted experiments on GeneChip microarrays with altered protocols for washing, scanning and staining to study the probe-level intensity changes as a function of the number of washing cycles. For calibration and analysis of the intensity data we make use of the 'hook' method, which allows intensity contributions due to non-specific and specific hybridization of perfect match (PM) and mismatch (MM) probes to be disentangled in a sequence specific manner. On average, washing according to the standard protocol removes about 90% of the non-specific background and about 30-50% and less than 10% of the specific targets from the MM and PM probes, respectively. Analysis of the washing kinetics shows that the signal-to-noise ratio doubles roughly every ten stringent washing cycles. Washing can be characterized by time-dependent rate constants which reflect the heterogeneous character of target binding to microarray probes. We propose an empirical washing function which estimates the survival of probe bound targets. It depends on the intensity contribution due to specific and non-specific hybridization per probe, which can be estimated for each probe using existing methods. The washing function allows probe intensities to be calibrated for the effect of washing. On a relative scale, proper calibration for washing markedly increases expression measures, especially in the limit of small and large values. Conclusions Washing is among the factors which potentially distort expression measures. The proposed first-order correction method allows direct implementation in existing calibration algorithms for microarray data. We provide an experimental
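    The reported washing kinetics (signal-to-noise ratio roughly doubling every ten stringent cycles) amount to exponential growth of SNR with cycle count. A one-line model of that rule of thumb; the doubling interval of 10 is the abstract's rough figure, not a fitted rate constant:

    ```python
    def snr_after_washing(snr0, cycles, doubling_every=10):
        """Signal-to-noise ratio after a number of stringent washing cycles,
        assuming SNR doubles every `doubling_every` cycles. This is the
        first-order picture: non-specific background decays faster than
        specifically bound target, so their ratio grows exponentially."""
        return snr0 * 2.0 ** (cycles / doubling_every)

    print(snr_after_washing(3.0, 20))  # 12.0 : two doublings over 20 cycles
    ```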

  14. The Importance of Normalization on Large and Heterogeneous Microarray Datasets

    Science.gov (United States)

    DNA microarray technology is a powerful functional genomics tool increasingly used for investigating global gene expression in environmental studies. Microarrays can also be used in identifying biological networks, as they give insight on the complex gene-to-gene interactions, ne...

  15. An Intuitionistic Fuzzy Stochastic Decision-Making Method Based on Case-Based Reasoning and Prospect Theory

    Directory of Open Access Journals (Sweden)

    Peng Li

    2017-01-01

    Full Text Available According to the case-based reasoning method and prospect theory, this paper mainly focuses on finding a way to obtain decision-makers’ preferences and the criterion weights for stochastic multicriteria decision-making problems and classify alternatives. Firstly, we construct a new score function for an intuitionistic fuzzy number (IFN) considering the decision-making environment. Then, we aggregate the decision-making information in different natural states according to the prospect theory and test decision-making matrices. A mathematical programming model based on a case-based reasoning method is presented to obtain the criterion weights. Moreover, in the original decision-making problem, we integrate all the intuitionistic fuzzy decision-making matrices into an expectation matrix using the expected utility theory and classify or rank the alternatives by the case-based reasoning method. Finally, two illustrative examples are provided to illustrate the implementation process and applicability of the developed method.

  16. Probability or Reasoning: Current Thinking and Realistic Strategies for Improved Medical Decisions.

    Science.gov (United States)

    Nantha, Yogarabindranath Swarna

    2017-11-01

    A prescriptive model approach in decision making could help achieve better diagnostic accuracy in clinical practice through methods that are less reliant on probabilistic assessments. Various prescriptive measures aimed at regulating factors that influence heuristics and clinical reasoning could support the clinical decision-making process. Clinicians could avoid time-consuming decision-making methods that require probabilistic calculations. Intuitively, they could rely on heuristics to obtain an accurate diagnosis in a given clinical setting. An extensive literature review of cognitive psychology and medical decision-making theory was performed to illustrate how heuristics could be effectively utilized in daily practice. Since physicians often rely on heuristics in realistic situations, probabilistic estimation might not be a useful tool in everyday clinical practice. Improvements in the descriptive model of decision making (heuristics) may allow for greater diagnostic accuracy.

  17. A NASBA on microgel-tethered molecular-beacon microarray for real-time microbial molecular diagnostics.

    Science.gov (United States)

    Ma, Y; Dai, X; Hong, T; Munk, G B; Libera, M

    2016-12-19

    Despite their many advantages and successes, molecular beacon (MB) hybridization probes have not been extensively used in microarray formats because of the complicating probe-substrate interactions that increase the background intensity. We have previously shown that tethering to surface-patterned microgels is an effective means for localizing MB probes to specific surface locations in a microarray format while simultaneously maintaining them in as water-like an environment as possible and minimizing probe-surface interactions. Here we extend this approach to include both real-time detection together with integrated NASBA amplification. We fabricate small (∼250 μm × 250 μm) simplex, duplex, and five-plex assays with microarray spots of controllable size (∼20 μm diameter), position, and shape to detect bacteria and fungi in a bloodstream-infection model. The targets, primers, and microgel-tethered probes can be combined in a single isothermal reaction chamber with no post-amplification labelling. We extract total RNA from clinical blood samples and differentiate between Gram-positive and Gram-negative bloodstream infection in a duplex assay to detect RNA amplicons. The sensitivity based on our current protocols in a simplex assay to detect specific ribosomal RNA sequences within total RNA extracted from S. aureus and E. coli cultures corresponds to tens of bacteria per ml. We furthermore show that the platform can detect RNA amplicons from synthetic target DNA with 1 fM sensitivity in sample volumes that contain about 12 000 DNA molecules. These experiments demonstrate an alternative approach that can enable rapid and real-time microarray-based molecular diagnostics.
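
    As a quick sanity check on the quoted sensitivity, a 1 fM concentration containing about 12 000 molecules implies a sample volume on the order of 20 μL; the volume itself is inferred here, not stated in the abstract.

```python
AVOGADRO = 6.022e23  # molecules per mole

def molecules_at_concentration(conc_molar, volume_liters):
    """Number of target molecules present at a given molar concentration."""
    return conc_molar * AVOGADRO * volume_liters

# 1 fM (1e-15 mol/L) in an assumed ~20 uL reaction volume gives on the
# order of 12,000 molecules, consistent with the figure in the abstract.
n = molecules_at_concentration(1e-15, 20e-6)
```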

  18. Assessing probe-specific dye and slide biases in two-color microarray data

    Directory of Open Access Journals (Sweden)

    Goldberg Zelanna

    2008-07-01

    Full Text Available Abstract Background A primary reason for using two-color microarrays is that the use of two samples labeled with different dyes on the same slide, that bind to probes on the same spot, is supposed to adjust for many factors that introduce noise and errors into the analysis. Most users assume that any differences between the dyes can be adjusted out by standard methods of normalization, so that measures such as log ratios on the same slide are reliable measures of comparative expression. However, even after normalization, probe-specific dye and slide variation still remains in the data. We define a method to quantify the amount of the dye-by-probe and slide-by-probe interaction. This serves as a diagnostic, both visual and numeric, of the existence of probe-specific dye bias. We show how this improved the performance of two-color array analysis for genomic analysis of biological samples ranging from rice to human tissue. Results We develop a procedure for quantifying the extent of probe-specific dye and slide bias in two-color microarrays. The primary output is a graphical diagnostic of the extent of the bias, called the ECDF (Empirical Cumulative Distribution Function), though numerical results are also obtained. Conclusion We show that the dye and slide biases were high for human and rice genomic arrays in two gene expression facilities, even after the standard intensity-based normalization, and describe how this diagnostic allowed the problems causing the probe-specific bias to be addressed, resulting in important improvements in performance. The R package LMGene, which contains the method described in this paper, is available for download from Bioconductor.
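
    The idea behind the ECDF diagnostic can be sketched as follows. This is a simplified stand-in for the LMGene procedure, not its actual implementation: per-probe mean residuals are computed across slides after removing slide effects, and their empirical CDF is inspected; a spread far from zero signals probe-specific bias that survived normalization.

```python
import numpy as np

def probe_dye_bias_ecdf(log_ratios):
    """Given a (probes x slides) matrix of normalized log-ratios from
    dye-swapped hybridizations, return the sorted per-probe mean residuals
    and their empirical CDF. A wide spread away from zero indicates
    probe-specific dye bias remaining after normalization."""
    residuals = log_ratios - log_ratios.mean(axis=0, keepdims=True)  # remove slide effects
    probe_bias = residuals.mean(axis=1)                              # per-probe effect
    x = np.sort(probe_bias)
    ecdf = np.arange(1, x.size + 1) / x.size
    return x, ecdf

# Simulated example: 500 probes on 6 slides, with and without a
# probe-specific offset that normalization cannot remove.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 0.1, size=(500, 6))
biased = clean + rng.normal(0.0, 0.5, size=(500, 1))  # constant per-probe bias
x_clean, ecdf_clean = probe_dye_bias_ecdf(clean)
x_biased, _ = probe_dye_bias_ecdf(biased)
```

    Plotting `x_clean` and `x_biased` against their ECDFs makes the contrast visual: the biased data's curve is far shallower, which is the graphical signature the diagnostic looks for.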

  19. Towards High-throughput Immunomics for Infectious Diseases: Use of Next-generation Peptide Microarrays for Rapid Discovery and Mapping of Antigenic Determinants

    DEFF Research Database (Denmark)

    J. Carmona, Santiago; Nielsen, Morten; Schafer-Nielsen, Claus

    2015-01-01

    , we developed a highly-multiplexed platform based on next-generation high-density peptide microarrays to map these specificities in Chagas Disease, an exemplar of a human infectious disease caused by the protozoan Trypanosoma cruzi. We designed a high-density peptide microarray containing more than...

  20. Utilizing Ultrasound Technology to Improve Livestock Marketing Decisions

    OpenAIRE

    Jayson L. Lusk; Randall Little; Allen Williams; John Anderson; Blair McKinley

    2003-01-01

    This study estimates the value of using ultrasound technology to improve cattle marketing decisions by optimally choosing a particular marketing method. For the particular group of cattle analyzed, results indicate that using ultrasound information to selectively market cattle could have increased revenue by $25.53/head, $4.98/head, or $32.90/head, compared with simply marketing all animals on a live weight, dressed weight, or grid basis, respectively. Even if producers incorporate such infor...
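
    The economic logic of the study can be sketched with entirely hypothetical numbers: given per-animal revenue predictions under each marketing method (as ultrasound carcass measurements would supply), selective marketing picks the best method per head, and the value of the information is the gain over marketing every animal uniformly.

```python
import numpy as np

# Hypothetical per-head revenues ($) under three marketing methods, as
# they might be predicted from ultrasound carcass measurements. All
# distributions and figures here are illustrative, not the study's data.
rng = np.random.default_rng(1)
n_head = 200
revenue = np.column_stack([
    rng.normal(950, 40, n_head),   # live weight
    rng.normal(960, 60, n_head),   # dressed weight
    rng.normal(955, 90, n_head),   # grid / carcass merit
])

# Marketing every animal by a single method:
uniform = revenue.mean(axis=0)
# Selective marketing: choose the best method for each animal.
selective = revenue.max(axis=1).mean()
gain_per_head = selective - uniform  # value of the ultrasound information
```

    By construction the per-head gain is non-negative against every uniform strategy, which mirrors the study's finding that selective marketing dominated each single-method baseline.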