WorldWideScience

Sample records for microarray approach approximately

  1. A unified approach to the Darwin approximation

    International Nuclear Information System (INIS)

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-01-01

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting

  2. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
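As a point of reference for what the analytical method replaces, here is a minimal Monte-Carlo resampling average in Python. This is illustrative only: the record's method computes such averages analytically, without the resampling loop sketched below.

```python
import random
import statistics

def bootstrap_average(data, statistic, n_resamples=2000, seed=0):
    """Plain Monte-Carlo resampling average: the brute-force baseline
    that an analytical replica/TAP approach avoids, since it requires
    re-evaluating the statistic on every resample."""
    rng = random.Random(seed)
    n = len(data)
    estimates = []
    for _ in range(n_resamples):
        # Draw a bootstrap resample: n points with replacement.
        resample = [data[rng.randrange(n)] for _ in range(n)]
        estimates.append(statistic(resample))
    return statistics.mean(estimates)

# Toy statistic: the sample mean of a handful of observations.
data = [1.0, 2.0, 3.0, 4.0, 5.0]
avg = bootstrap_average(data, statistics.mean)
```

For a model such as a Gaussian process, `statistic` would involve retraining on each resample, which is exactly the cost the analytical approach is designed to avoid.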

  3. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
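The core idea of approximating a kernel matrix by a circulant one can be sketched as follows. This toy uses a plain single-level circulant projection (mean of each wrapped diagonal, the Frobenius-norm-nearest circulant matrix); the paper uses multilevel circulant matrices and quasi-linear algorithms, which this sketch does not reproduce.

```python
def nearest_circulant(K):
    """Project a kernel matrix onto circulant matrices in Frobenius norm:
    replace each wrapped diagonal by its mean.  A circulant matrix is
    fully determined by its first row, which enables fast (FFT-based)
    computations in real implementations."""
    n = len(K)
    # c[k] = mean of entries K[i][(i+k) mod n] along the k-th wrapped diagonal.
    c = [sum(K[i][(i + k) % n] for i in range(n)) / n for k in range(n)]
    return [[c[(j - i) % n] for j in range(n)] for i in range(n)]

K = [[4.0, 2.0, 1.0],
     [2.0, 4.0, 2.0],
     [1.0, 2.0, 4.0]]
C = nearest_circulant(K)
```

Here the diagonal of `C` stays 4.0 while the off-diagonals of `K` are averaged along wrapped diagonals, giving a structured matrix that admits fast eigendecomposition.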

  4. Two heuristic approaches to describe periodicities in genomic microarrays

    Directory of Open Access Journals (Sweden)

    Jörg Aßmus

    2009-09-01

In the first part we discuss the filtering of panels of time series based on singular value decomposition. The discussion is based on an approach where this filtering is used to normalize microarray data. We point out effects on the periodicity and phases for time series panels. In the second part we investigate time dependent periodic panels with different phases. We align the time series in the panel and discuss the periodogram of the aligned time series with the purpose of describing the periodic structure of the panel. The method is quite powerful assuming known phases in the model, but it deteriorates rapidly for noisy data.
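A minimal sketch of SVD-based filtering of a time-series panel: remove the leading singular component, i.e. the dominant trend shared by all series. Pure-Python power iteration stands in for a full SVD here; the normalization procedure the authors analyze is more involved.

```python
import math

def matvec(M, x):
    return [sum(r[j] * x[j] for j in range(len(x))) for r in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def leading_component(A, iters=200):
    """Power iteration on A^T A for the leading singular triple (sigma, u, v)."""
    At = transpose(A)
    v = [1.0] * len(A[0])
    for _ in range(iters):
        w = matvec(At, matvec(A, v))
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    Av = matvec(A, v)
    sigma = math.sqrt(sum(x * x for x in Av))
    u = [x / sigma for x in Av]
    return sigma, u, v

def filter_leading(A):
    """Subtract the rank-1 leading SVD component from the panel A
    (rows = series, columns = time points)."""
    sigma, u, v = leading_component(A)
    return [[A[i][j] - sigma * u[i] * v[j] for j in range(len(v))]
            for i in range(len(u))]

A = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]   # rank-1 panel: one common trend
F = filter_leading(A)                       # residual after removing it
```

On this rank-1 toy the residual is zero; on real microarray panels the residual is what remains after removing the dominant common-mode variation, which is where effects on periodicity and phase can appear.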

  5. DNA Microarray Data Analysis: A Novel Biclustering Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Tewfik Ahmed H

    2006-01-01

Biclustering algorithms refer to a distinct class of clustering algorithms that perform simultaneous row-column clustering. Biclustering problems arise in DNA microarray data analysis, collaborative filtering, market research, information retrieval, text mining, electoral trends, exchange analysis, and so forth. When dealing with DNA microarray experimental data for example, the goal of biclustering algorithms is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this study, we develop novel biclustering algorithms using basic linear algebra and arithmetic tools. The proposed biclustering algorithms can be used to search for all biclusters with constant values, biclusters with constant values on rows, biclusters with constant values on columns, and biclusters with coherent values from a set of data in a timely manner and without solving any optimization problem. We also show how one of the proposed biclustering algorithms can be adapted to identify biclusters with coherent evolution. The algorithms developed in this study discover all valid biclusters of each type, while almost all previous biclustering approaches will miss some.
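The "constant values" target the abstract describes can be made concrete with a brute-force toy: for each column subset, group rows by their value tuple and keep groups whose tuple is a single repeated value. This exhaustive enumeration is exponential in the number of columns; the paper's algorithms reach the same biclusters without it.

```python
from itertools import combinations

def constant_biclusters(M, min_rows=2, min_cols=2):
    """Enumerate biclusters with constant values by brute force.
    Returns (row indices, column indices, value) triples."""
    n_cols = len(M[0])
    found = []
    for size in range(min_cols, n_cols + 1):
        for cols in combinations(range(n_cols), size):
            groups = {}
            for r, row in enumerate(M):
                key = tuple(row[c] for c in cols)
                groups.setdefault(key, []).append(r)
            for key, rows in groups.items():
                # A constant bicluster: enough rows, and one repeated value.
                if len(rows) >= min_rows and len(set(key)) == 1:
                    found.append((tuple(rows), cols, key[0]))
    return found

M = [[5, 5, 1],
     [5, 5, 2],
     [0, 3, 9]]
biclusters = constant_biclusters(M)   # rows 0-1 x columns 0-1, value 5
```

Constant-on-rows and constant-on-columns biclusters relax the `len(set(key)) == 1` test to per-row or per-column constancy, which is the direction the paper's linear-algebra constructions generalize.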

  6. Random phase approximation in relativistic approach

    International Nuclear Information System (INIS)

    Ma Zhongyu; Yang Ding; Tian Yuan; Cao Ligang

    2009-01-01

Some special issues of the random phase approximation (RPA) in the relativistic approach are reviewed. Full consistency and a proper treatment of the coupling to the continuum are responsible for the successful application of the RPA to the description of dynamical properties of finite nuclei. The fully consistent relativistic RPA (RRPA) requires that the relativistic mean field (RMF) wave function of the nucleus and the RRPA correlations be calculated with the same effective Lagrangian, together with a consistent treatment of the Dirac sea of negative-energy states. The proper treatment of the single-particle continuum with scattering asymptotic conditions in the RMF and RRPA is discussed. The full continuum spectrum can be described by the single-particle Green's function, and the relativistic continuum RPA is thereby established. A separable form of the pairing force is introduced in the relativistic quasi-particle RPA. (authors)

  7. An Overview of DNA Microarray Grid Alignment and Foreground Separation Approaches

    Directory of Open Access Journals (Sweden)

    Bajcsy Peter

    2006-01-01

This paper overviews DNA microarray grid alignment and foreground separation approaches. Microarray grid alignment and foreground separation are the basic processing steps of DNA microarray images that affect the quality of gene expression information, and hence impact our confidence in any data-derived biological conclusions. Thus, understanding microarray data processing steps becomes critical for performing optimal microarray data analysis. In the past, the grid alignment and foreground separation steps have not been covered extensively in the survey literature. We present several classifications of existing algorithms, and describe the fundamental principles of these algorithms. Challenges related to automation and reliability of processed image data are outlined at the end of this overview paper.

  8. DNA Microarray Technologies: A Novel Approach to Genomic Research

    Energy Technology Data Exchange (ETDEWEB)

    Hinman, R.; Thrall, B.; Wong, K.

    2002-01-01

A cDNA microarray allows biologists to examine the expression of thousands of genes simultaneously. Researchers may analyze the complete transcriptional program of an organism in response to specific physiological or developmental conditions. By design, a cDNA microarray is an experiment with many variables and few controls. One question that inevitably arises when working with a cDNA microarray is data reproducibility: how easy is it to confirm mRNA expression patterns? In this paper, a case study involving the treatment of the murine macrophage RAW 264.7 cell line with tumor necrosis factor alpha (TNF) was used to obtain a rough estimate of data reproducibility. Two trials were examined and a list of genes displaying either a > 2-fold or > 4-fold increase in gene expression was compiled. Variations in signal mean ratios between the two slides were observed; lapses in reproducibility for a single gene may be compensated by comparable induction of similar genes. Steps taken to obtain reliable results included serum starvation of cells before treatment, tests of mRNA quality and consistency, and data normalization.
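The reproducibility check described, comparing >2-fold gene lists between two trials, amounts to something like the following sketch. The gene names and signal-mean ratios here are invented for illustration, not taken from the study.

```python
def fold_change_genes(trial1, trial2, threshold=2.0):
    """Compare two replicate microarray trials: return the genes whose
    treated/control signal ratio exceeds the threshold in each trial,
    plus the overlap -- a rough reproducibility check."""
    up1 = {g for g, r in trial1.items() if r > threshold}
    up2 = {g for g, r in trial2.items() if r > threshold}
    return up1, up2, up1 & up2

# Hypothetical signal-mean ratios (treated / control) for a few genes.
trial1 = {"Tnf": 6.1, "Il1b": 3.4, "Ccl2": 2.2, "Actb": 1.1}
trial2 = {"Tnf": 5.2, "Il1b": 2.8, "Ccl2": 1.7, "Actb": 0.9}
up1, up2, reproducible = fold_change_genes(trial1, trial2)
```

Genes like the hypothetical `Ccl2` above, which clear the threshold in one trial but not the other, are exactly the cases where slide-to-slide variation undermines a single-replicate call.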

  9. Uropathogenic Escherichia coli virulence genes: invaluable approaches for designing DNA microarray probes.

    Science.gov (United States)

    Jahandeh, Nadia; Ranjbar, Reza; Behzadi, Payam; Behzadi, Elham

    2015-01-01

The pathotypes of uropathogenic Escherichia coli (UPEC) cause different types of urinary tract infections (UTIs). The presence of a wide range of virulence genes in UPEC enables us to design appropriate DNA microarray probes. These probes, which are used in DNA microarray technology, provide an accurate and rapid diagnosis and definitive treatment of UTIs caused by UPEC pathotypes. The main goal of this article is to introduce the UPEC virulence genes as invaluable resources for designing DNA microarray probes. Main search engines such as Google Scholar and databases like NCBI were searched to find and study several original articles, review articles, and DNA gene sequences. In parallel with these in silico studies, the experience of the authors guided the selection of appropriate sources for this review article. There is significant variety in virulence genes among UPEC strains. The DNA sequences of virulence genes are excellent templates for designing microarray probes. The location of virulence genes and their sequence lengths influence the quality of probes. The use of selected virulence genes for designing microarray probes gives us a wide range of choices from which the best probe candidates can be chosen. DNA microarray technology provides an accurate, rapid, cost-effective, sensitive, and specific molecular diagnostic method, facilitated by well-designed probes. With these tools, we can achieve accurate diagnosis and definitive treatment of UTIs caused by UPEC pathotypes.

  10. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This paper is devoted to the study of an extension of dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number

  11. Serious limitations of the QTL/Microarray approach for QTL gene discovery

    Directory of Open Access Journals (Sweden)

    Warden Craig H

    2010-07-01

Background: It has been proposed that the use of gene expression microarrays in nonrecombinant parental or congenic strains can accelerate the process of isolating individual genes underlying quantitative trait loci (QTL). However, the effectiveness of this approach has not been assessed. Results: Thirty-seven studies that have implemented the QTL/microarray approach in rodents were reviewed. About 30% of the studies showed enrichment for QTL candidates, mostly in comparisons between congenic and background strains. Three studies led to the identification of an underlying QTL gene. To complement the literature results, a microarray experiment was performed using three mouse congenic strains isolating the effects of at least 25 biometric QTL. The results show that genes in the congenic donor regions were preferentially selected. However, within donor regions, the distribution of differentially expressed genes was homogeneous once gene density was accounted for. Genes within identical-by-descent (IBD) regions were less likely to be differentially expressed on chromosome 2, but not on chromosomes 11 and 17. Furthermore, QTL regulated in cis (cis-eQTL) showed higher expression in the background genotype, which was partially explained by the presence of single nucleotide polymorphisms (SNPs). Conclusions: The literature shows limited success of the QTL/microarray approach in identifying QTL genes. Our own results from microarray profiling of three congenic strains revealed a strong tendency to select cis-eQTL over trans-eQTL. IBD regions had little effect on the rate of differential expression, and we provide several reasons why IBD should not be used to discard eQTL candidates. In addition, mismatch probes produced false cis-eQTL that could not be completely removed with the current strain genotypes and low-probe-density microarrays. The reviewed studies did not account for lack of coverage from the platforms used and therefore removed genes

  12. A non-parametric meta-analysis approach for combining independent microarray datasets: application using two microarray datasets pertaining to chronic allograft nephropathy

    Directory of Open Access Journals (Sweden)

    Archer Kellie J

    2008-02-01

Background: With the popularity of DNA microarray technology, multiple groups of researchers have studied the gene expression of similar biological conditions. Different methods have been developed to integrate the results from various microarray studies, though most of them rely on distributional assumptions, such as the t-statistic based, mixed-effects model, or Bayesian model methods. However, the sample size for each individual microarray experiment is often small. Therefore, in this paper we present a non-parametric meta-analysis approach for combining data from independent microarray studies, and illustrate its application on two independent Affymetrix GeneChip studies that compared the gene expression of biopsies from kidney transplant recipients with chronic allograft nephropathy (CAN) to those with normally functioning allografts. Results: A simulation study comparing the non-parametric meta-analysis approach to a commonly used t-statistic based approach shows that the non-parametric approach has better sensitivity and specificity. In the application to the two CAN studies, we identified 309 distinct genes that were differentially expressed in CAN. By applying Fisher's exact test to identify enriched KEGG pathways among the genes called differentially expressed, we found 6 KEGG pathways to be over-represented among the identified genes. We used the expression measurements of the identified genes as predictors to predict the class labels for 6 additional biopsy samples, and the predictions all conformed to the pathologist-diagnosed class labels. Conclusion: We present a new approach for combining data from multiple independent microarray studies. This approach is non-parametric and does not rely on any distributional assumptions. The rationale behind the approach is logically intuitive and can be easily understood by researchers not having advanced training in statistics. Some of the identified genes and pathways have been
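One distribution-free way to combine two studies, in the spirit of (but not necessarily identical to) the paper's approach, is to work with within-study ranks rather than raw statistics, so no distributional assumption is needed:

```python
def rank_map(scores):
    """Map gene -> rank within one study (1 = largest effect score)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {g: i + 1 for i, g in enumerate(ordered)}

def combine_studies(study_a, study_b, top=2):
    """Distribution-free meta-analysis sketch: rank genes within each
    study, sum the ranks across studies, and call the genes with the
    smallest rank sums.  Illustrates the rank-based idea only."""
    ra, rb = rank_map(study_a), rank_map(study_b)
    rank_sum = {g: ra[g] + rb[g] for g in study_a}
    return sorted(rank_sum, key=rank_sum.get)[:top]

# Invented per-gene effect scores from two hypothetical studies.
study_a = {"g1": 3.2, "g2": 0.4, "g3": 2.9, "g4": 0.1}
study_b = {"g1": 2.7, "g2": 0.2, "g3": 3.1, "g4": 0.5}
top_genes = combine_studies(study_a, study_b)
```

Because only ranks enter the combination, a study with an unusual scale or heavy-tailed noise cannot dominate the pooled result, which is the practical appeal of non-parametric integration for small-sample microarray studies.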

  13. Microarray-based cancer prediction using soft computing approach.

    Science.gov (United States)

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models based on single genes or gene pairs, using a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Moreover, our models are interpretable because they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular cancer prediction, and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.

  14. A novel approach for dimension reduction of microarray.

    Science.gov (United States)

    Aziz, Rabia; Verma, C K; Srivastava, Namita

    2017-12-01

This paper proposes a new hybrid search technique for feature (gene) selection (FS) using Independent Component Analysis (ICA) and the Artificial Bee Colony (ABC) algorithm, called ICA+ABC, to select informative genes for a Naïve Bayes (NB) classifier. An important trait of this technique is the optimization of the ICA feature vector using ABC. ICA+ABC is a hybrid search algorithm that combines the benefits of an extraction approach, to reduce the size of the data, with a wrapper approach, to optimize the reduced feature vectors. The technique is evaluated on six standard gene expression classification datasets. Extensive experiments were conducted to compare the performance of ICA+ABC with the results of the recently published Minimum Redundancy Maximum Relevance (mRMR)+ABC algorithm for the NB classifier. To further assess how ICA+ABC performs as a feature selector with the NB classifier, we also compared the combination of ICA with popular filter techniques and with similar bio-inspired algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The results show that ICA+ABC has a significant ability to generate small subsets of genes from the ICA feature vector that significantly improve the classification accuracy of the NB classifier compared to other previously suggested methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Clustering approaches to identifying gene expression patterns from DNA microarray data.

    Science.gov (United States)

    Do, Jin Hwan; Choi, Dong-Kug

    2008-04-30

Clustering is essential for making sense of the large amounts of gene expression data that microarrays produce, and in this review we focus on clustering techniques. The biological rationale for this approach is that many co-expressed genes are co-regulated; identifying co-expressed genes could therefore aid in the functional annotation of novel genes, de novo identification of transcription factor binding sites, and elucidation of complex biological pathways. Co-expressed genes are usually identified in microarray experiments by clustering techniques. There are many such methods, and the results obtained even for the same dataset may vary considerably depending on the algorithms and dissimilarity metrics used, as well as on user-selectable parameters such as the desired number of clusters and initial values. Therefore, biologists who want to interpret microarray data should be aware of the weaknesses and strengths of the clustering methods used. In this review, we survey the basic principles of clustering of DNA microarray data, from crisp clustering algorithms such as hierarchical clustering, K-means and self-organizing maps, to complex clustering algorithms like fuzzy clustering.
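Of the crisp algorithms surveyed, K-means is the simplest to sketch. This toy uses deterministic initialization from the first k profiles for reproducibility; as the review notes, results depend on initial values, so production code would use random restarts.

```python
def kmeans(profiles, k=2, iters=50):
    """Minimal K-means on gene expression profiles (lists of
    per-sample values).  Returns a cluster label for each profile."""
    centers = [list(profiles[i]) for i in range(k)]
    assign = [0] * len(profiles)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        assign = [min(range(k),
                      key=lambda c: sum((x - y) ** 2
                                        for x, y in zip(p, centers[c])))
                  for p in profiles]
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [p for p, a in zip(profiles, assign) if a == c]
            if members:
                centers[c] = [sum(col) / len(col) for col in zip(*members)]
    return assign

# Two obvious co-expression groups (values are illustrative).
profiles = [[1.0, 1.1, 0.9], [1.2, 0.9, 1.0],
            [5.0, 5.2, 4.9], [5.1, 4.8, 5.0]]
labels = kmeans(profiles)
```

Even on this clean toy, note that the choice of distance (squared Euclidean here) is one of the user-selectable decisions the review warns can change the resulting clusters.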

  16. Holey carbon micro-arrays for transmission electron microscopy: A microcontact printing approach

    International Nuclear Information System (INIS)

    Chester, David W.; Klemic, James F.; Stern, Eric; Sigworth, Fred J.; Klemic, Kathryn G.

    2007-01-01

    We have used a microcontact printing approach to produce high quality and inexpensive holey carbon micro-arrays. Fabrication involves: (1) micromolding a poly(dimethylsiloxane) (PDMS) elastomer stamp from a microfabricated master that contains the desired array pattern; (2) using the PDMS stamp for microcontact printing a thin sacrificial plastic film that contains an array of holes; (3) floating the plastic film onto TEM grids; (4) evaporating carbon onto the plastic film and (5) removing the sacrificial plastic film. The final holey carbon micro-arrays are ready for use as support films in TEM applications with the fidelity of the original microfabricated pattern. This approach is cost effective as both the master and the stamps have long-term reusability. Arbitrary array patterns can be made with microfabricated masters made through a single-step photolithographic process

  17. Sinc-Approximations of Fractional Operators: A Computing Approach

    Directory of Open Access Journals (Sweden)

    Gerd Baumann

    2015-06-01

We discuss a new approach to represent fractional operators by Sinc approximation using convolution integrals. A spin off of the convolution representation is an effective inverse Laplace transform. Several examples demonstrate the application of the method to different practical problems.
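The cardinal (Sinc) series underlying such methods is easy to state; below is a sketch of plain Sinc interpolation on a uniform grid. The fractional-operator convolution machinery of the paper is not reproduced here.

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def sinc_approx(f, h, N, x):
    """Truncated cardinal series on the grid {k*h : k = -N..N}:
        C(f, h)(x) = sum_k f(k h) sinc((x - k h)/h).
    This is the basic building block of Sinc methods; the paper builds
    fractional-operator representations on top of such approximations."""
    return sum(f(k * h) * sinc((x - k * h) / h) for k in range(-N, N + 1))

# Approximate a rapidly decaying function between grid points; for
# suitable f the error shrinks roughly exponentially as h decreases.
f = lambda t: math.exp(-t * t)
value = sinc_approx(f, h=0.25, N=40, x=0.3)
```

For this Gaussian the truncated series already matches exp(-0.09) to many digits, which illustrates why Sinc bases are attractive for quadrature and operator approximation.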

  18. Approximate Approaches to the One-Dimensional Finite Potential Well

    Science.gov (United States)

    Singh, Shilpi; Pathak, Praveen; Singh, Vijay A.

    2011-01-01

    The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures where the carrier mass inside the well (m[subscript i]) is taken to be distinct from mass outside (m[subscript o]). A relevant parameter is the mass…

  19. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  20. Hybrid Feature Selection Approach Based on GRASP for Cancer Microarray Data

    Directory of Open Access Journals (Sweden)

    Arpita Nagpal

    2017-01-01

Microarray data usually contain a large number of genes but a small number of samples. Feature subset selection for microarray data aims at reducing the number of genes so that useful information can be extracted from the samples. Reducing the dimension of the data sets further helps in improving the computational efficiency of the learning model. In this paper, we propose a modified algorithm that uses tabu search as the local search procedure in a Greedy Randomized Adaptive Search Procedure (GRASP) for high-dimensional microarray data sets. The proposed tabu-based Greedy Randomized Adaptive Search Procedure algorithm is named TGRASP. In TGRASP, a new parameter named tabu tenure has been introduced, and the existing parameters NumIter and size have been modified. We observed that different parameter settings affect the quality of the optimum. The second proposed algorithm, FFGRASP (Firefly Greedy Randomized Adaptive Search Procedure), uses a firefly optimization algorithm in the local search phase of GRASP. The firefly algorithm is one of the powerful algorithms for the optimization of multimodal applications. Experimental results show that the proposed TGRASP and FFGRASP algorithms are much better than an existing algorithm with respect to three performance measures: accuracy, run time, and the size of the selected feature subset. We have also compared both approaches with a unified metric (Extended Adjusted Ratio of Ratios), which shows that TGRASP outperforms the existing approach on six out of nine cancer microarray datasets and FFGRASP performs better on seven out of nine.
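The GRASP skeleton shared by TGRASP and FFGRASP looks roughly like this: greedy randomized construction from a restricted candidate list (RCL), followed by local search. Here a plain 1-swap local search and an invented additive objective stand in for the tabu/firefly phases and the classifier-based evaluation.

```python
import random

def grasp_select(scores, k, objective, iters=20, alpha=0.3, seed=0):
    """Bare-bones GRASP for selecting k features."""
    rng = random.Random(seed)
    features = list(scores)
    best, best_val = None, float("-inf")
    for _ in range(iters):
        # Construction: repeatedly pick at random from the top-alpha RCL.
        chosen, pool = set(), set(features)
        while len(chosen) < k:
            ranked = sorted(pool, key=scores.get, reverse=True)
            rcl = ranked[:max(1, int(alpha * len(ranked)))]
            pick = rng.choice(rcl)
            chosen.add(pick)
            pool.discard(pick)
        # Local search: accept single swaps that improve the objective.
        improved = True
        while improved:
            improved = False
            for f_in in set(features) - chosen:
                for f_out in set(chosen):
                    cand = (chosen - {f_out}) | {f_in}
                    if objective(cand) > objective(chosen):
                        chosen, improved = cand, True
                        break
                if improved:
                    break
        val = objective(chosen)
        if val > best_val:
            best, best_val = chosen, val
    return best

# Toy setting: the objective just sums per-feature scores, so the
# optimum is the k highest-scoring features.
scores = {"g1": 0.9, "g2": 0.1, "g3": 0.8, "g4": 0.2, "g5": 0.7}
best = grasp_select(scores, k=2,
                    objective=lambda s: sum(scores[g] for g in s))
```

In the paper's setting the objective would be classifier accuracy on the selected genes, and the local-search loop is where tabu tenure (TGRASP) or firefly moves (FFGRASP) would replace the plain swap.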

  1. Monoenergetic approximation of a polyenergetic beam: a theoretical approach

    International Nuclear Information System (INIS)

    Robinson, D.M.; Scrimger, J.W.

    1991-01-01

    There exist numerous occasions in which it is desirable to approximate the polyenergetic beams employed in radiation therapy by a beam of photons of a single energy. In some instances, commonly used rules of thumb for the selection of an appropriate energy may be valid. A more accurate approximate energy, however, may be determined by an analysis which takes into account both the spectral qualities of the beam and the material through which it passes. The theoretical basis of this method of analysis is presented in this paper. Experimental agreement with theory for a range of materials and beam qualities is also presented and demonstrates the validity of the theoretical approach taken. (author)

  2. Approximate approaches to the one-dimensional finite potential well

    International Nuclear Information System (INIS)

    Singh, Shilpi; Pathak, Praveen; Singh, Vijay A

    2011-01-01

The one-dimensional finite well is a textbook problem. We propose approximate approaches to obtain the energy levels of the well. The finite well is also encountered in semiconductor heterostructures, where the carrier mass inside the well (m_i) is taken to be distinct from the mass outside (m_o). A relevant parameter is the mass discontinuity ratio β = m_i/m_o. To correctly account for the mass discontinuity, we apply the BenDaniel-Duke boundary condition. We obtain approximate solutions for two cases: when the well is shallow and when the well is deep. We compare the approximate results with the exact results and find that higher-order approximations are quite robust. For the shallow case, the approximate solution can be expressed in terms of a dimensionless parameter σ_l = 2m_oV_0L²/ħ² (or σ = β²σ_l for the deep case). We show that the lowest-order results are related by a duality transform. We also discuss how the energy scales with L (E ~ 1/L^γ) and obtain the exponent γ. The exponent γ → 2 when the well is sufficiently deep and β → 1. The ratio of the masses dictates the physics. Our presentation is pedagogical and should be useful to students in a first course on elementary quantum mechanics or low-dimensional semiconductors.
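For reference, the matching conditions with the BenDaniel-Duke boundary condition (continuity of ψ and of (1/m)dψ/dx at the well edges) take the standard textbook form below. This is written to be consistent with the abstract's notation, not transcribed from the paper itself:

```latex
% Bound states of a well of depth V_0 and width L, with k and kappa
% the interior and exterior wavenumbers:
k\tan\!\Big(\frac{kL}{2}\Big) = \beta\,\kappa \quad(\text{even states}),
\qquad
-k\cot\!\Big(\frac{kL}{2}\Big) = \beta\,\kappa \quad(\text{odd states}),
\qquad\text{where}\quad
k^2 = \frac{2m_i E}{\hbar^2},\quad
\kappa^2 = \frac{2m_o (V_0 - E)}{\hbar^2},\quad
\beta = \frac{m_i}{m_o},\quad
\sigma_l = \frac{2m_o V_0 L^2}{\hbar^2}.
```

Setting β = 1 recovers the ordinary finite-well transcendental equations, which is why the mass ratio is the parameter that "dictates the physics" in the abstract's phrasing.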

  3. Leptospiral outer membrane protein microarray, a novel approach to identification of host ligand-binding proteins.

    Science.gov (United States)

    Pinne, Marija; Matsunaga, James; Haake, David A

    2012-11-01

Leptospirosis is a zoonosis with worldwide distribution caused by pathogenic spirochetes belonging to the genus Leptospira. The leptospiral life cycle involves transmission via freshwater and colonization of the renal tubules of their reservoir hosts. Infection requires adherence to cell surfaces and extracellular matrix components of host tissues. These host-pathogen interactions involve outer membrane proteins (OMPs) expressed on the bacterial surface. In this study, we developed a Leptospira interrogans serovar Copenhageni strain Fiocruz L1-130 OMP microarray containing all predicted lipoproteins and transmembrane OMPs. A total of 401 leptospiral genes or their fragments were transcribed and translated in vitro and printed on nitrocellulose-coated glass slides. We investigated the potential of this protein microarray to screen for interactions between leptospiral OMPs and fibronectin (Fn). This approach resulted in the identification of the recently described fibronectin-binding protein, LIC10258 (MFn8, Lsa66), and 14 novel Fn-binding proteins, denoted Microarray Fn-binding proteins (MFns). We confirmed Fn binding of purified recombinant LIC11612 (MFn1), LIC10714 (MFn2), LIC11051 (MFn6), LIC11436 (MFn7), LIC10258 (MFn8, Lsa66), and LIC10537 (MFn9) by far-Western blot assays. Moreover, we obtained specific antibodies to MFn1, MFn7, MFn8 (Lsa66), and MFn9 and demonstrated that MFn1, MFn7, and MFn9 are expressed and surface exposed under in vitro growth conditions. Further, we demonstrated that MFn1, MFn4 (LIC12631, Sph2), and MFn7 enable leptospires to bind fibronectin when expressed in the saprophyte, Leptospira biflexa. Protein microarrays are valuable tools for high-throughput identification of novel host ligand-binding proteins that have the potential to play key roles in the virulence mechanisms of pathogens.

  4. Utility of the pooling approach as applied to whole genome association scans with high-density Affymetrix microarrays

    Directory of Open Access Journals (Sweden)

    Gray Joanna

    2010-11-01

Background: We report an attempt to extend the previously successful approach of combining SNP (single nucleotide polymorphism) microarrays and DNA pooling (SNP-MaP) employing high-density microarrays. Whereas earlier studies employed a range of Affymetrix SNP microarrays comprising from 10 K to 500 K SNPs, this investigation used the 6.0 chip, which displays 906,600 SNP probes and 946,000 probes for the interrogation of CNVs (copy number variations). The genotyping assay using the Affymetrix SNP 6.0 array is highly demanding on sample quality due to the small feature size, low redundancy, and lack of mismatch probes. Findings: In the first study published so far using this microarray on pooled DNA, we found that pooled cheek-swab DNA could not accurately predict the real allele frequencies of the samples that comprised the pools. In contrast, the allele frequency estimates using blood DNA pools were reasonable, although inferior to those obtained with previously employed Affymetrix microarrays. However, it might be possible to improve performance by developing improved analysis methods. Conclusions: Despite the decreasing costs of genome-wide individual genotyping, the pooling approach may have applications in very large-scale case-control association studies. In such cases, our study suggests that high-quality DNA preparations and lower-density platforms should be preferred.

  5. A hybrid gene selection approach for microarray data classification using cellular learning automata and ant colony optimization.

    Science.gov (United States)

    Vafaee Sharbaf, Fatemeh; Mosafer, Sara; Moattar, Mohammad Hossein

    2016-06-01

This paper proposes an approach for gene selection in microarray data. The proposed approach consists of a primary filter stage using the Fisher criterion, which reduces the initial genes and hence the search space and time complexity. Then, a wrapper approach based on cellular learning automata (CLA) optimized with the ant colony method (ACO) is used to find the set of features that improve the classification accuracy. CLA is applied due to its capability to learn and model complicated relationships. The features selected in the last phase are evaluated using the ROC curve, and the smallest, most effective feature subset is determined. The classifiers evaluated in the proposed framework are K-nearest neighbor, support vector machine, and naïve Bayes. The proposed approach is evaluated on 4 microarray datasets. The evaluations confirm that the proposed approach can find the smallest subset of genes while approaching the maximum accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
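The Fisher-criterion filter stage can be sketched directly: score each gene by (μ₁ − μ₂)² / (σ₁² + σ₂²) across the two classes and keep the top-ranked genes. The CLA/ACO wrapper phase is not reproduced, and the expression values below are invented.

```python
def fisher_score(values, labels):
    """Fisher criterion for one gene across two classes (labels 0/1)."""
    g1 = [v for v, y in zip(values, labels) if y == 1]
    g2 = [v for v, y in zip(values, labels) if y == 0]
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / len(xs)
    # Small epsilon guards against zero variance in both classes.
    return (mean(g1) - mean(g2)) ** 2 / (var(g1) + var(g2) + 1e-12)

def filter_genes(expr, labels, keep=2):
    """Rank genes by Fisher score and keep the top `keep` of them."""
    ranked = sorted(expr, key=lambda g: fisher_score(expr[g], labels),
                    reverse=True)
    return ranked[:keep]

labels = [1, 1, 1, 0, 0, 0]
expr = {"gA": [5.0, 5.2, 4.9, 1.0, 1.1, 0.8],   # separates classes well
        "gB": [2.0, 2.1, 1.9, 2.0, 2.2, 1.8],   # uninformative
        "gC": [3.0, 3.5, 2.9, 1.0, 1.4, 0.9]}   # separates moderately
selected = filter_genes(expr, labels, keep=2)
```

The surviving genes would then feed the wrapper search, which evaluates candidate subsets with an actual classifier rather than a per-gene score.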

  6. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha

    2013-02-01

This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure R(T), which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". The algorithm stops partitioning a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and then, among these rules, describe all rules with maximum coverage. We can also change the order of optimization. The consideration of irredundant rules only does not change the results of optimization. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
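The uncertainty measure R(T) and the "attribute = value" partitioning are simple to state in code. This is a sketch of the definitions only; the construction of the DAG Δβ(T) and the rule optimization are not shown.

```python
from itertools import combinations

def uncertainty(rows):
    """R(T): the number of unordered pairs of rows with different
    decisions.  Each row is (attribute_values, decision)."""
    return sum(1 for (_, d1), (_, d2) in combinations(rows, 2)
               if d1 != d2)

def split(rows, attr, value):
    """Subtable of T given by the equation 'attr = value'."""
    return [r for r in rows if r[0][attr] == value]

# Toy decision table: two attributes, binary decision.
T = [((0, 0), "yes"), ((0, 1), "yes"), ((1, 0), "no"), ((1, 1), "yes")]
r_full = uncertainty(T)          # pairs of rows with differing decisions
sub = split(T, attr=0, value=0)  # rows satisfying 'attribute 0 = 0'
r_sub = uncertainty(sub)
```

Here the subtable selected by "attribute 0 = 0" has uncertainty 0, so for β = 0 the partitioning would stop there, exactly the stopping criterion the abstract describes.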

  7. An approximate approach to quantum mechanical study of biomacromolecules

    Science.gov (United States)

    Chen, Xihua

    method/basis-set levels of the quantum chemical calculation on the MFCC-downhill simplex optimization are also discussed. Finally, the MFCC-downhill simplex method is tested, as a general multiatomic case study, on a molecular system of cyclo-AAGAGG·H2O to optimize the binding structure of the water molecule to the fixed cyclohexapeptide. The MFCC-downhill simplex optimization results are in good agreement with the crystal structure. The MFCC-downhill simplex method should be applicable to optimizing the structures of ligands that bind to biomacromolecules such as proteins and DNAs. In Chapter 4, we propose a new approximate method for efficient calculation of biomacromolecular electronic properties, using a Density Matrix (DM) scheme integrated with the MFCC approach. In this MFCC-DM method, a biomacromolecule such as a protein is partitioned by an MFCC scheme into properly capped fragments and concaps whose density matrices are calculated by conventional ab initio methods. These sub-system density matrices are then assembled to construct the full-system density matrix, which is finally employed to calculate the electronic energy, dipole moment, electronic density, electrostatic potential, etc., of the protein using Hartree-Fock or Density Functional Theory methods. By this MFCC-DM method, the self-consistent field (SCF) procedure for solving the full Hamiltonian problem is circumvented. Two implementations of this approach, MFCC-SDM and MFCC-GDM, are discussed. Systematic numerical studies are carried out on a series of extended polyglycines CH3CO-(GLY)n-NHCH3 (n=3-25) and excellent results are obtained. In Chapter 5, we present an improvement of the MFCC-DM method and introduce a pairwise interaction correction (PIC) with which the MFCC-DM method is applicable to the study of a real-world protein with short-range structural complexity such as hydrogen bonding and close contacts.
In this MFCC-DM-PIC method, a protein molecule is partitioned into properly capped fragments and

  8. Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach

    KAUST Repository

    Collier, Nathan; Radwan, Hany; Dalcin, Lisandro; Calo, Victor M.

    2011-01-01

    We discuss the use of time adaptivity applied to the one dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity

  9. Carbohydrate microarrays

    DEFF Research Database (Denmark)

    Park, Sungjin; Gildersleeve, Jeffrey C; Blixt, Klas Ola

    2012-01-01

    In the last decade, carbohydrate microarrays have been core technologies for analyzing carbohydrate-mediated recognition events in a high-throughput fashion. A number of methods have been exploited for immobilizing glycans on the solid surface in a microarray format. This microarray...... of substrate specificities of glycosyltransferases. This review covers the construction of carbohydrate microarrays, detection methods of carbohydrate microarrays and their applications in biological and biomedical research....

  10. Controllability distributions and systems approximations: a geometric approach

    NARCIS (Netherlands)

    Ruiz, A.C.; Nijmeijer, Henk

    1994-01-01

    Given a nonlinear system we determine a relation at an equilibrium between controllability distributions defined for a nonlinear system and a Taylor series approximation of it. The value of such a relation is appreciated if we recall that the solvability conditions as well as the solutions to some

  11. Controllability distributions and systems approximations: a geometric approach

    NARCIS (Netherlands)

    Ruiz, A.C.; Nijmeijer, Henk

    1992-01-01

    Given a nonlinear system, a relation between controllability distributions defined for a nonlinear system and a Taylor series approximation of it is determined. Special attention is given to this relation at the equilibrium. It is known from nonlinear control theory that the solvability conditions

  12. A mixture model-based approach to the clustering of microarray expression data.

    Science.gov (United States)

    McLachlan, G J; Bean, R W; Peel, D

    2002-03-01

    This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic, used in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to effectively reduce the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/

  13. An Approximation Approach for Solving the Subpath Planning Problem

    OpenAIRE

    Safilian, Masoud; Tashakkori, S. Mehdi; Eghbali, Sepehr; Safilian, Aliakbar

    2016-01-01

    The subpath planning problem is a branch of the path planning problem, with widespread applications in automated manufacturing processes as well as vehicle and robot navigation. The problem is to find the shortest path or tour that traverses a set of given subpaths. The current approaches for dealing with the subpath planning problem are all based on meta-heuristics. It is well-known that meta-heuristic based approaches have several deficiencies. To address them, we prop...

  14. Earth's core convection: Boussinesq approximation or incompressible approach?

    Czech Academy of Sciences Publication Activity Database

    Anufriev, A. P.; Hejda, Pavel

    2010-01-01

    Roč. 104, č. 1 (2010), s. 65-83 ISSN 0309-1929 R&D Projects: GA AV ČR IAA300120704 Grant - others:INTAS(XE) 03-51-5807 Institutional research plan: CEZ:AV0Z30120515 Keywords : geodynamic models * core convection * Boussinesq approximation Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.831, year: 2010

  15. An approximate methods approach to probabilistic structural analysis

    Science.gov (United States)

    Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.

    1989-01-01

    A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.

  16. Bessel collocation approach for approximate solutions of Hantavirus infection model

    Directory of Open Access Journals (Sweden)

    Suayip Yuzbasi

    2017-11-01

    Full Text Available In this study, a collocation method is introduced to find approximate solutions of the Hantavirus infection model, which is a system of nonlinear ordinary differential equations. The method is based on the Bessel functions of the first kind, matrix operations and collocation points. It converts the Hantavirus infection model into a matrix equation in terms of the Bessel functions of the first kind, matrix operations and collocation points. The matrix equation corresponds to a system of nonlinear equations with unknown Bessel coefficients. The reliability and efficiency of the suggested scheme are demonstrated by numerical applications, and all numerical calculations have been done using a program written in Maple.

  17. Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach

    KAUST Repository

    Collier, Nathan

    2011-05-14

    We discuss the use of time adaptivity applied to the one dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity. This robust adaptive time discretization corrects the initial time step size to achieve a user specified bound on the discretization error and allows time step size variations of several orders of magnitude. In particular, the one dimensional results presented in this work feature a change of four orders of magnitude in the time step over the entire simulation.
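
    The abstract does not spell out the estimator, so the sketch below uses a generic step-doubling controller to illustrate time-step size adaptivity: estimate the local error by comparing one full step with two half steps, halve the step on rejection, and grow it when the error is comfortably below the user-specified bound. Explicit Euler stands in for the actual discretization; none of this is the paper's scheme:

```python
def adaptive_euler(f, y0, t_end, dt0, tol):
    """Explicit Euler with step-doubling error control (generic sketch).

    The local error is estimated by comparing one full step against two
    half steps; the step is rejected and halved when the estimate
    exceeds tol, and grown when it is comfortably below tol.
    """
    t, y, dt = 0.0, y0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)                  # do not step past t_end
        full = y + dt * f(t, y)
        half = y + 0.5 * dt * f(t, y)
        two_half = half + 0.5 * dt * f(t + 0.5 * dt, half)
        err = abs(two_half - full)               # local error estimate
        if err <= tol:
            t, y = t + dt, two_half              # accept the step
            if err < tol / 4:
                dt *= 2                          # grow when clearly accurate
        else:
            dt *= 0.5                            # reject, retry smaller
    return y
```

On a stiff transient the controller shrinks the step automatically and relaxes it again once the solution smooths out, which is the behavior the abstract reports at a much larger scale.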

  18. Approximate dynamic programming approaches for appointment scheduling with patient preferences.

    Science.gov (United States)

    Li, Xin; Wang, Jin; Fung, Richard Y K

    2018-04-01

    During the appointment booking process in out-patient departments, the level of patient satisfaction can be affected by whether or not their preferences can be met, including the choice of physicians and preferred time slot. In addition, because the appointments are sequential, considering future possible requests is also necessary for a successful appointment system. This paper proposes a Markov decision process model for optimizing the scheduling of sequential appointments with patient preferences. In contrast to existing models, the evaluation of a booking decision in this model focuses on the extent to which preferences are satisfied. Characteristics of the model are analysed to develop a system for formulating booking policies. Based on these characteristics, two types of approximate dynamic programming algorithms are developed to avoid the curse of dimensionality. Experimental results suggest directions for further fine-tuning of the model, as well as improving the efficiency of the two proposed algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Vaccine-induced modulation of gene expression in turbot peritoneal cells. A microarray approach.

    Science.gov (United States)

    Fontenla, Francisco; Blanco-Abad, Verónica; Pardo, Belén G; Folgueira, Iria; Noia, Manuel; Gómez-Tato, Antonio; Martínez, Paulino; Leiro, José M; Lamas, Jesús

    2016-07-01

    We used a microarray approach to examine changes in gene expression in turbot peritoneal cells after injection of the fish with vaccines containing the ciliate parasite Philasterides dicentrarchi as antigen and one of the following adjuvants: chitosan-PVMMA microspheres, Freund's complete adjuvant, aluminium hydroxide gel or Matrix-Q (Isconova, Sweden). We identified 374 genes that were differentially expressed in all groups of fish. Forty-two genes related to tight junctions and focal adhesions and/or actin cytoskeleton were differentially expressed in free peritoneal cells. The profound changes in gene expression related to cell adherence and cytoskeleton may be associated with cell migration and also with the formation of cell-vaccine masses and their attachment to the peritoneal wall. Thirty-five genes related to apoptosis were differentially expressed. Although most of the proteins coded by these genes have a proapoptotic effect, others are antiapoptotic, indicating that both types of signals occur in peritoneal leukocytes of vaccinated fish. Interestingly, many of the genes related to lymphocytes and lymphocyte activity were downregulated in the groups injected with vaccine. We also observed decreased expression of genes related to antigen presentation, suggesting that macrophages (which were abundant in the peritoneal cavity after vaccination) did not express these during the early inflammatory response in the peritoneal cavity. Finally, several genes that participate in the inflammatory response were differentially expressed, and most participated in resolution of inflammation, indicating that an M2 macrophage response is generated in the peritoneal cavity of fish one day post vaccination. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Optimal and Approximate Approaches for Deployment of Heterogeneous Sensing Devices

    Directory of Open Access Journals (Sweden)

    Rabie Ramadan

    2007-04-01

    Full Text Available A modeling framework for the problem of deploying a set of heterogeneous sensors in a field with time-varying differential surveillance requirements is presented. The problem is formulated as a mixed integer mathematical program with the objective of maximizing coverage of a given field. Two metaheuristics are used to solve this problem. The first heuristic adopts a genetic algorithm (GA) approach while the second heuristic implements a simulated annealing (SA) algorithm. A set of experiments is used to illustrate the capabilities of the developed models and to compare their performance. The experiments investigate the effect of parameters related to the size of the sensor deployment problem including number of deployed sensors, size of the monitored field, and length of the monitoring horizon. They also examine several endogenous parameters related to the developed GA and SA algorithms.
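
    As a toy illustration of the SA metaheuristic applied to coverage maximization (the GA variant is analogous), the sketch below places heterogeneous sensors, i.e. sensors with distinct sensing radii, on a discrete field and accepts worse deployments with Boltzmann probability under geometric cooling. All names and parameter values are invented for the example and are not taken from the paper:

```python
import math
import random

def coverage(field, sensors):
    """Count field points within range of at least one sensor.
    sensors: list of (x, y, radius) tuples."""
    return sum(
        1 for (px, py) in field
        if any((px - x) ** 2 + (py - y) ** 2 <= r * r for (x, y, r) in sensors)
    )

def anneal_deployment(field, radii, steps=2000, t0=5.0, seed=0):
    """Simulated-annealing placement of heterogeneous sensors (toy version).

    State: one (x, y) position per sensor radius.  A move relocates one
    sensor uniformly at random; worse states are accepted with Boltzmann
    probability under a geometrically cooled temperature.
    """
    rng = random.Random(seed)
    xs = [p[0] for p in field]
    ys = [p[1] for p in field]

    def rand_pos():
        return (rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))

    state = [rand_pos() + (r,) for r in radii]
    cov = coverage(field, state)
    best, best_cov = list(state), cov
    t = t0
    for _ in range(steps):
        cand = list(state)
        i = rng.randrange(len(cand))
        cand[i] = rand_pos() + (radii[i],)
        c = coverage(field, cand)
        # accept improvements always, deteriorations with probability e^(delta/t)
        if c >= cov or rng.random() < math.exp((c - cov) / t):
            state, cov = cand, c
            if cov > best_cov:
                best, best_cov = list(state), cov
        t *= 0.995
    return best, best_cov
```

The real formulation in the paper adds time-varying surveillance requirements and a monitoring horizon; this sketch keeps only the static coverage objective to show the acceptance and cooling mechanics.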

  1. IsoGeneGUI: Multiple approaches for dose-response analysis of microarray data using R

    NARCIS (Netherlands)

    Otava, Martin; Sengupta, Rudradev; Shkedy, Ziv; Lin, Dan; Pramana, Setia; Verbeke, Tobias; Haldermans, Philippe; Hothorn, Ludwig A.; Gerhard, Daniel; Kuiper, Rebecca M.; Klinglmueller, Florian; Kasim, Adetayo

    2017-01-01

    The analysis of transcriptomic experiments with ordered covariates, such as dose-response data, has become a central topic in bioinformatics, in particular in omics studies. Consequently, multiple R packages on CRAN and Bioconductor are designed to analyse microarray data from various perspectives

  2. A Combinatory Approach for Selecting Prognostic Genes in Microarray Studies of Tumour Survivals

    Directory of Open Access Journals (Sweden)

    Qihua Tan

    2009-01-01

    Full Text Available Unlike significant gene expression analysis, which looks for genes that are differentially regulated, feature selection in microarray-based prognostic gene expression analysis aims at finding a subset of marker genes that are not only differentially expressed but also informative for prediction. Unfortunately, feature selection in the microarray literature is dominated by the simple heuristic univariate gene filter paradigm that selects differentially expressed genes according to their statistical significance. We introduce a combinatory feature selection strategy that integrates differential gene expression analysis with the Gram-Schmidt process to identify prognostic genes that are both statistically significant and highly informative for predicting tumour survival outcomes. Empirical application to leukemia and ovarian cancer survival data through within-study and cross-study validations shows that the feature space can be largely reduced while achieving improved testing performance.
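
    The Gram-Schmidt stage of such a combinatory strategy can be illustrated as a greedy forward selection: at each step pick the gene whose direction explains the most remaining target variance, then orthogonalize the target and the unpicked genes against it, so that redundant (correlated) genes are not selected twice. The following is a stdlib-only sketch under these assumptions, not the authors' implementation:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_out(v, q):
    """Remove from v its component along the unit vector q."""
    c = dot(v, q)
    return [a - c * b for a, b in zip(v, q)]

def gram_schmidt_select(features, target, k):
    """Greedy Gram-Schmidt gene selection (illustrative sketch).

    At each step, pick the feature whose direction explains the most
    remaining target variance, then orthogonalize the target and the
    unpicked features against it, so correlated (redundant) genes are
    not picked twice.
    """
    feats = {i: list(f) for i, f in enumerate(features)}
    resid = list(target)
    chosen = []
    for _ in range(k):
        # squared projection of the residual target on each candidate
        best = max(feats, key=lambda i: dot(feats[i], resid) ** 2
                   / (dot(feats[i], feats[i]) or 1e-12))
        norm = dot(feats[best], feats[best]) ** 0.5
        q = [a / norm for a in feats[best]]
        chosen.append(best)
        del feats[best]
        resid = project_out(resid, q)
        feats = {i: project_out(f, q) for i, f in feats.items()}
    return chosen
```

Because every remaining feature is orthogonalized against each chosen one, a gene that merely duplicates an already-selected gene contributes nothing to the residual and will not be picked again.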

  3. A Rational Approach for Discovering and Validating Cancer Markers in Very Small Samples Using Mass Spectrometry and ELISA Microarrays

    Directory of Open Access Journals (Sweden)

    Richard C. Zangar

    2004-01-01

    Full Text Available Identifying useful markers of cancer can be problematic due to limited amounts of sample. Some samples such as nipple aspirate fluid (NAF) or early-stage tumors are inherently small. Other samples such as serum are collected in larger volumes, but archives of these samples are very valuable and only small amounts of each sample may be available for a single study. Also, given the diverse nature of cancer and the inherent variability in individual protein levels, it seems likely that the best approach to screen for cancer will be to determine the profile of a battery of proteins. As a result, a major challenge in identifying protein markers of disease is the ability to screen many proteins using very small amounts of sample. In this review, we outline some technological advances in proteomics that greatly advance this capability. Specifically, we propose a strategy for identifying markers of breast cancer in NAF that utilizes mass spectrometry (MS) to simultaneously screen hundreds or thousands of proteins in each sample. The best potential markers identified by the MS analysis can then be extensively characterized using an ELISA microarray assay. Because the microarray analysis is quantitative and large numbers of samples can be efficiently analyzed, this approach offers the ability to rapidly assess a battery of selected proteins in a manner that is directly relevant to traditional clinical assays.

  4. A general approach for cache-oblivious range reporting and approximate range counting

    DEFF Research Database (Denmark)

    Afshani, Peyman; Hamilton, Chris; Zeh, Norbert

    2010-01-01

    We present cache-oblivious solutions to two important variants of range searching: range reporting and approximate range counting. Our main contribution is a general approach for constructing cache-oblivious data structures that provide relative (1+ε)-approximations for a general class of range c...

  5. Piecewise-linear and bilinear approaches to nonlinear differential equations approximation problem of computational structural mechanics

    OpenAIRE

    Leibov Roman

    2017-01-01

    This paper presents a bilinear approach to the nonlinear differential equation system approximation problem. Sometimes linearization of the right-hand sides of nonlinear differential equations is extremely difficult or even impossible. Then piecewise-linear approximation of the nonlinear differential equations can be used. Bilinear differential equations make it possible to improve the behavior of piecewise-linear differential equations and reduce errors on the border of different linear differential equation systems ...

  6. Removing Batch Effects from Longitudinal Gene Expression - Quantile Normalization Plus ComBat as Best Approach for Microarray Transcriptome Data.

    Directory of Open Access Journals (Sweden)

    Christian Müller

    Full Text Available Technical variation plays an important role in microarray-based gene expression studies, and batch effects explain a large proportion of this noise. It is therefore mandatory to eliminate technical variation while maintaining biological variability. Several strategies have been proposed for the removal of batch effects, although they have not been evaluated in large-scale longitudinal gene expression data. In this study, we aimed at identifying a suitable method for batch effect removal in a large study of microarray-based longitudinal gene expression. Monocytic gene expression was measured in 1092 participants of the Gutenberg Health Study at baseline and 5-year follow up. Replicates of selected samples were measured at both time points to identify technical variability. Deming regression, Passing-Bablok regression, linear mixed models, non-linear models as well as ReplicateRUV and ComBat were applied to eliminate batch effects between replicates. In a second step, quantile normalization prior to batch effect correction was performed for each method. Technical variation between batches was evaluated by principal component analysis. Associations between body mass index and transcriptomes were calculated before and after batch removal. Results from association analyses were compared to evaluate maintenance of biological variability. Quantile normalization, separately performed in each batch, combined with ComBat successfully reduced batch effects and maintained biological variability. ReplicateRUV performed perfectly in the replicate data subset of the study, but failed when applied to all samples. All other methods did not substantially reduce batch effects in the replicate data subset. Quantile normalization plus ComBat appears to be a valuable approach for batch correction in longitudinal gene expression data.
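
    Quantile normalization, the first step of the pipeline recommended above, forces every sample to share one reference distribution (the rank-wise mean over samples); the abstract applies it separately within each batch before the empirical-Bayes batch correction (ComBat), which is not reproduced here. A naive stdlib-only sketch:

```python
def quantile_normalize(samples):
    """Quantile-normalize a list of equal-length expression profiles.

    Each sample's sorted values are replaced by the rank-wise mean over
    all samples, so every sample ends up with an identical distribution.
    Ties are broken by sort order, which is enough for a sketch.
    """
    n = len(samples[0])
    sorted_samples = [sorted(s) for s in samples]
    # reference distribution: mean of the k-th smallest value over samples
    ref = [sum(ss[i] for ss in sorted_samples) / len(samples) for i in range(n)]
    out = []
    for s in samples:
        order = sorted(range(n), key=lambda i: s[i])  # ranks of each entry
        normalized = [0.0] * n
        for rank, idx in enumerate(order):
            normalized[idx] = ref[rank]
        out.append(normalized)
    return out
```

After normalization all profiles have identical value distributions, so remaining between-batch differences are location/scale effects of the kind ComBat is designed to remove.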

  7. Merging Belief Propagation and the Mean Field Approximation: A Free Energy Approach

    DEFF Research Database (Denmark)

    Riegler, Erwin; Kirkelund, Gunvor Elisabeth; Manchón, Carles Navarro

    2013-01-01

    We present a joint message passing approach that combines belief propagation and the mean field approximation. Our analysis is based on the region-based free energy approximation method proposed by Yedidia et al. We show that the message passing fixed-point equations obtained with this combination...... correspond to stationary points of a constrained region-based free energy approximation. Moreover, we present a convergent implementation of these message passing fixed-point equations provided that the underlying factor graph fulfills certain technical conditions. In addition, we show how to include hard...

  8. A variational approach to moment-closure approximations for the kinetics of biomolecular reaction networks

    Science.gov (United States)

    Bronstein, Leo; Koeppl, Heinz

    2018-01-01

    Approximate solutions of the chemical master equation and the chemical Fokker-Planck equation are an important tool in the analysis of biomolecular reaction networks. Previous studies have highlighted a number of problems with the moment-closure approach used to obtain such approximations, calling it an ad hoc method. In this article, we give a new variational derivation of moment-closure equations which provides us with an intuitive understanding of their properties and failure modes and allows us to correct some of these problems. We use mixtures of product-Poisson distributions to obtain a flexible parametric family which solves the commonly observed problem of divergences at low system sizes. We also extend the recently introduced entropic matching approach to arbitrary ansatz distributions and Markov processes, demonstrating that it is a special case of variational moment closure. This provides us with a particularly principled approximation method. Finally, we extend the above approaches to cover the approximation of multi-time joint distributions, resulting in a viable alternative to process-level approximations which are often intractable.

  9. On the functional integral approach in quantum statistics. 1. Some approximations

    International Nuclear Information System (INIS)

    Dai Xianxi.

    1990-08-01

    In this paper the susceptibility of a Kondo system in a fairly wide temperature region is calculated in the first harmonic approximation in a functional integral approach. The comparison with that of the renormalization group theory shows that in this region the two results agree quite well. The expansion of the partition function with infinite independent harmonics for the Anderson model is studied. Some symmetry relations are generalized. It is a challenging problem to develop a functional integral approach including diagram analysis, mixed mode effects and some exact relations in the Anderson system proved in the functional integral approach. These topics will be discussed in the next paper. (author). 22 refs, 1 fig

  10. A texture based pattern recognition approach to distinguish melanoma from non-melanoma cells in histopathological tissue microarray sections.

    Directory of Open Access Journals (Sweden)

    Elton Rexhepaj

    Full Text Available AIMS: Immunohistochemistry is a routine practice in clinical cancer diagnostics and also an established technology for tissue-based research regarding biomarker discovery efforts. Tedious manual assessment of immunohistochemically stained tissue needs to be fully automated to take full advantage of the potential for high throughput analyses enabled by tissue microarrays and digital pathology. Such automated tools also need to be reproducible for different experimental conditions and biomarker targets. In this study we present a novel supervised melanoma specific pattern recognition approach that is fully automated and quantitative. METHODS AND RESULTS: Melanoma samples were immunostained for the melanocyte specific target, Melan-A. Images representing immunostained melanoma tissue were then digitally processed to segment regions of interest, highlighting Melan-A positive and negative areas. Color deconvolution was applied to each region of interest to separate the channel containing the immunohistochemistry signal from the hematoxylin counterstaining channel. A support vector machine melanoma classification model was learned from a discovery melanoma patient cohort (n = 264) and subsequently validated on an independent cohort of melanoma patient tissue sample images (n = 157). CONCLUSION: Here we propose a novel method that takes advantage of an immunohistochemical marker highlighting melanocytes to fully automate the learning of a general melanoma cell classification model. The presented method can be applied to any protein of interest and thus provides a tool for quantification of immunohistochemistry-based protein expression in melanoma.

  11. Tractable approximations for probabilistic models: The adaptive Thouless-Anderson-Palmer mean field approach

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2001-01-01

    We develop an advanced mean field method for approximating averages in probabilistic data models that is based on the Thouless-Anderson-Palmer (TAP) approach of disorder physics. In contrast to conventional TAP, where knowledge of the distribution of couplings between the random variables is required, our method adapts to the concrete couplings. We demonstrate the validity of our approach, which is so far restricted to models with nonglassy behavior, by replica calculations for a wide class of models as well as by simulations for a real data set.

  12. A microarray-based genotyping and genetic mapping approach for highly heterozygous outcrossing species enables localization of a large fraction of the unassembled Populus trichocarpa genome sequence.

    Science.gov (United States)

    Drost, Derek R; Novaes, Evandro; Boaventura-Novaes, Carolina; Benedict, Catherine I; Brown, Ryan S; Yin, Tongming; Tuskan, Gerald A; Kirst, Matias

    2009-06-01

    Microarrays have demonstrated significant power for genome-wide analyses of gene expression, and recently have also revolutionized the genetic analysis of segregating populations by genotyping thousands of loci in a single assay. Although microarray-based genotyping approaches have been successfully applied in yeast and several inbred plant species, their power has not been proven in an outcrossing species with extensive genetic diversity. Here we have developed methods for high-throughput microarray-based genotyping in such species using a pseudo-backcross progeny of 154 individuals of Populus trichocarpa and P. deltoides analyzed with long-oligonucleotide in situ-synthesized microarray probes. Our analysis resulted in high-confidence genotypes for 719 single-feature polymorphism (SFP) and 1014 gene expression marker (GEM) candidates. Using these genotypes and an established microsatellite (SSR) framework map, we produced a high-density genetic map comprising over 600 SFPs, GEMs and SSRs. The abundance of gene-based markers allowed us to localize over 35 million base pairs of previously unplaced whole-genome shotgun (WGS) scaffold sequence to putative locations in the genome of P. trichocarpa. A high proportion of sampled scaffolds could be verified for their placement with independently mapped SSRs, demonstrating the previously un-utilized power that high-density genotyping can provide in the context of map-based WGS sequence reassembly. Our results provide a substantial contribution to the continued improvement of the Populus genome assembly, while demonstrating the feasibility of microarray-based genotyping in a highly heterozygous population. The strategies presented are applicable to genetic mapping efforts in all plant species with similarly high levels of genetic diversity.

  13. Hermite-Pade approximation approach to hydromagnetic flows in convergent-divergent channels

    International Nuclear Information System (INIS)

    Makinde, O.D.

    2005-10-01

    The problem of two-dimensional, steady, nonlinear flow of an incompressible conducting viscous fluid in convergent-divergent channels under the influence of an externally applied homogeneous magnetic field is studied using a special type of Hermite-Pade approximation approach. This semi-numerical scheme offers some advantages over solutions obtained by traditional methods such as finite differences, spectral methods or shooting methods. It reveals the analytical structure of the solution function, and important properties of the overall flow structure, including the velocity field, flow reversal control and bifurcations, are discussed. (author)

  14. Magnetocaloric effect (MCE): Microscopic approach within Tyablikov approximation for anisotropic ferromagnets

    Energy Technology Data Exchange (ETDEWEB)

    Kotelnikova, O.A.; Prudnikov, V.N. [Physical Faculty, Lomonosov State University, Department of Magnetism, Moscow (Russian Federation); Rudoy, Yu.G., E-mail: rudikar@mail.ru [People' s Friendship University of Russia, Department of Theoretical Physics, Moscow (Russian Federation)

    2015-06-01

    The aim of this paper is to generalize the microscopic approach to the description of the magnetocaloric effect (MCE) started by Kokorina and Medvedev (E.E. Kokorina, M.V. Medvedev, Physica B 416 (2013) 29) by applying it to an anisotropic ferromagnet of the "easy axis" type in two settings: with the external magnetic field parallel or perpendicular to the axis of easy magnetization. In the latter case a field-induced (or spin-reorientation) phase transition appears at a critical value of the external magnetic field. This value is proportional to the exchange anisotropy constant at low temperatures, but with rising temperature it may be renormalized (as a rule, proportionally to the magnetization). We use the explicit form of the Hamiltonian of the anisotropic ferromagnet and apply the widely used random phase approximation (RPA) (also known as the Tyablikov approximation in the Green function method), which is more accurate than the well known molecular field approximation (MFA). It is shown that in the first case the magnitude of the MCE is raised, whereas in the second the MCE disappears due to compensation of the critical field renormalized with the magnetization.

  15. Zeroth order regular approximation approach to electric dipole moment interactions of the electron

    Science.gov (United States)

    Gaul, Konstantin; Berger, Robert

    2017-07-01

    A quasi-relativistic two-component approach for an efficient calculation of P ,T -odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.

  16. An intrinsic robust rank-one-approximation approach for currency portfolio optimization

    Directory of Open Access Journals (Sweden)

    Hongxuan Huang

    2018-03-01

    Full Text Available A currency portfolio is a special kind of wealth whose value fluctuates with foreign exchange rates over time, and which possesses the 3Vs (volume, variety and velocity) properties of big data in the currency market. In this paper, an intrinsic robust rank one approximation (ROA) approach is proposed to maximize the value of currency portfolios over time. The main results of the paper comprise four parts. Firstly, under the assumptions about the currency market, the currency portfolio optimization problem is formulated as the basic model, in which there are two types of variables, describing currency amounts in portfolios and the amount of each currency exchanged into another, respectively. Secondly, the rank one approximation problem and its variants are formulated to approximate a foreign exchange rate matrix, with performance measured by the Frobenius norm or the 2-norm of a residual matrix. The intrinsic robustness of the rank one approximation is proved, the properties of the basic ROA problem are summarized, and a modified power method is designed to search for the virtual exchange rates hidden in a foreign exchange rate matrix. Thirdly, a technique for reducing the number of decision variables is presented to attack the currency portfolio optimization problem. The reduced formulation is referred to as the ROA model; it keeps only the variables describing currency amounts in portfolios. The optimal solution to the ROA model also induces a feasible solution to the basic model of the currency portfolio problem by integrating forex operations from the ROA model with practical forex rates. Finally, numerical examples are presented to verify the feasibility and efficiency of the intrinsic robust rank one approximation approach. They also indicate that there exists an objective measure for evaluating and optimizing currency portfolios over time, which is related to the virtual standard currency and independent of any real currency selected specially for measurement.
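    The modified power method itself is not spelled out in this record, but the core idea of a rank-one approximation of an exchange rate matrix can be sketched with ordinary power iteration; the matrix and "currency values" below are hypothetical illustrations, not the paper's data.

```python
import numpy as np

def rank_one_approx(R, iters=100):
    """Approximate matrix R by sigma * u v^T via power iteration on R^T R."""
    v = np.ones(R.shape[1]) / np.sqrt(R.shape[1])
    for _ in range(iters):
        v = R.T @ (R @ v)          # one step of power iteration
        v /= np.linalg.norm(v)
    u = R @ v
    sigma = np.linalg.norm(u)
    u /= sigma
    return sigma, u, v

# A consistent exchange-rate matrix R[i, j] = p[i] / p[j] has exactly rank one,
# so its best rank-one approximation reproduces it (p plays the role of the
# hidden "virtual" exchange rates discussed in the abstract).
p = np.array([1.0, 0.85, 110.0, 1.3])   # hypothetical currency values
R = np.outer(p, 1.0 / p)
sigma, u, v = rank_one_approx(R)
residual = np.linalg.norm(R - sigma * np.outer(u, v))
```

    For a real (noisy, arbitrage-containing) rate matrix the residual measures exactly the deviation from consistency that the paper's robustness analysis addresses.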

  17. Advanced spot quality analysis in two-colour microarray experiments

    Directory of Open Access Journals (Sweden)

    Vetter Guillaume

    2008-09-01

    Full Text Available Abstract Background Image analysis of microarrays and, in particular, spot quantification and spot quality control, is one of the most important steps in the statistical analysis of microarray data. Recent methods of spot quality control are still at an early stage of development, often leading to underestimation of true positive microarray features and, consequently, to the loss of important biological information. Therefore, improving and standardizing the statistical approaches of spot quality control are essential to facilitate the overall analysis of microarray data and the subsequent extraction of biological information. Findings We evaluated the performance of two image analysis packages, MAIA and GenePix (GP), using two complementary experimental approaches with a focus on the statistical analysis of spot quality factors. First, we developed control microarrays with a priori known fluorescence ratios to verify the accuracy and precision of the ratio estimation of signal intensities. Next, we developed advanced semi-automatic protocols of spot quality evaluation in MAIA and GP and compared their performance with the available spot quantitative filtering facilities in GP. We evaluated these algorithms for standardised spot quality analysis in a whole-genome microarray experiment assessing well-characterised transcriptional modifications induced by the transcription regulator SNAI1. Using a set of RT-PCR or qRT-PCR validated microarray data, we found that the semi-automatic protocol of spot quality control we developed with MAIA allowed recovering approximately 13% more spots and 38% more differentially expressed genes (at FDR = 5%) than GP with default spot filtering conditions. Conclusion Careful control of spot quality characteristics with advanced spot quality evaluation can significantly increase the amount of confident and accurate data, resulting in more meaningful biological conclusions.

  18. A systems biology approach to construct the gene regulatory network of systemic inflammation via microarray and databases mining

    Directory of Open Access Journals (Sweden)

    Lan Chung-Yu

    2008-09-01

    Full Text Available Abstract Background Inflammation is a hallmark of many human diseases. Elucidating the mechanisms underlying systemic inflammation has long been an important topic in basic and clinical research. When the primary pathogenetic events remain unclear due to their immense complexity, construction and analysis of the gene regulatory network of inflammation at times becomes the best way to understand the detrimental effects of disease. However, it is difficult to recognize and evaluate relevant biological processes from the huge quantities of experimental data. It is hence appealing to find an algorithm which can generate a gene regulatory network of systemic inflammation from high-throughput genomic studies of human diseases. Such a network would be essential for extracting valuable information from the complex and chaotic network under diseased conditions. Results In this study, we construct a gene regulatory network of inflammation using data extracted from the Ensembl and JASPAR databases. We also integrate and apply a number of systematic algorithms, such as cross-correlation thresholding, maximum likelihood estimation and the Akaike Information Criterion (AIC), on time-lapsed microarray data to refine the genome-wide transcriptional regulatory network in response to bacterial endotoxins in the context of dynamically activated genes, which are regulated by transcription factors (TFs) such as NF-κB. This systematic approach is used to investigate the stochastic interactions represented by the dynamic leukocyte gene expression profiles of human subjects exposed to an inflammatory stimulus (bacterial endotoxin). Based on the kinetic parameters of the dynamic gene regulatory network, we identify important properties (such as susceptibility to infection) of the immune system, which may be useful for translational research. Finally, the robustness of the inflammatory gene network is also inferred by analyzing the hubs and "weak ties" structures of the gene network.

  19. Single-gene testing combined with single nucleotide polymorphism microarray preimplantation genetic diagnosis for aneuploidy: a novel approach in optimizing pregnancy outcome.

    Science.gov (United States)

    Brezina, Paul R; Benner, Andrew; Rechitsky, Svetlana; Kuliev, Anver; Pomerantseva, Ekaterina; Pauling, Dana; Kearns, William G

    2011-04-01

    To describe a method of amplifying DNA from blastocyst trophectoderm cells (two or three cells) and simultaneously performing 23-chromosome single nucleotide polymorphism microarrays and single-gene preimplantation genetic diagnosis. Case report. IVF clinic and preimplantation genetic diagnostic centers. A 36-year-old woman, gravida 2, para 1011, and her husband, who both were carriers of GM(1) gangliosidosis. The couple wished to proceed with microarray analysis for aneuploidy detection coupled with DNA sequencing for GM(1) gangliosidosis. An IVF cycle was performed. Ten blastocyst-stage embryos underwent trophectoderm biopsy. Twenty-three-chromosome microarray analysis for aneuploidy and specific DNA sequencing for GM(1) gangliosidosis mutations were performed. Viable pregnancy. After testing, elective single embryo transfer was performed, followed by an intrauterine pregnancy with documented fetal cardiac activity on ultrasound. Twenty-three-chromosome microarray analysis for aneuploidy detection and single-gene evaluation via specific DNA sequencing and linkage analysis are used for preimplantation diagnosis of single-gene disorders and aneuploidy. Because of the minimal amount of genetic material obtained from the day 3 to 5 embryos (up to 6 pg), these modalities have been used in isolation from each other. The use of preimplantation genetic diagnosis for aneuploidy coupled with testing for single-gene disorders via trophectoderm biopsy is a novel approach to maximize pregnancy outcomes. Although further investigation is warranted, preimplantation genetic diagnosis for aneuploidy and single-gene testing seem destined to be used increasingly to optimize ultimate pregnancy success. Copyright © 2011 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  20. Self-energy-modified Poisson-Nernst-Planck equations: WKB approximation and finite-difference approaches.

    Science.gov (United States)

    Xu, Zhenli; Ma, Manman; Liu, Pei

    2014-07-01

    We propose a modified Poisson-Nernst-Planck (PNP) model to investigate charge transport in electrolytes with an inhomogeneous dielectric environment. The model includes the ionic polarization due to the dielectric inhomogeneity and the ion-ion correlation. This is achieved through the self-energy of test ions, obtained by solving a generalized Debye-Hückel (DH) equation. We develop numerical methods for the system composed of the PNP and DH equations. In particular, to address the numerical challenge of solving the high-dimensional DH equation, we develop an analytical WKB approximation and a numerical approach based on the selective inversion of sparse matrices. The model and numerical methods are validated by simulating charge diffusion in electrolytes between two electrodes, for which the effects of dielectrics and correlation are investigated by comparing the results with the predictions of classical PNP theory. We find that, at interface separations comparable to the Bjerrum length, the results of the modified equations differ significantly from the classical PNP predictions, mostly due to the dielectric effect. It is also shown that when the ion self-energy is of weak or moderate strength, the WKB approximation attains high accuracy compared to precise finite-difference results.
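    As a toy illustration of the finite-difference side of such a scheme (not the paper's high-dimensional solver), the linearized Debye-Hückel-type equation u'' = κ²u in one dimension can be discretized and checked against its analytic solution; the domain, boundary values and κ below are assumptions made for the sketch.

```python
import numpy as np

# Linearized Debye-Hueckel-type equation u'' = kappa^2 * u on (0, 1)
# with u(0) = 1, u(1) = 0; analytic solution sinh(kappa*(1-x))/sinh(kappa).
kappa, n = 5.0, 200
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]

# Second-order central differences give a tridiagonal system for the
# interior unknowns u_1 .. u_{n-1}.
main = np.full(n - 1, -2.0 / h**2 - kappa**2)
off = np.full(n - 2, 1.0 / h**2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
b = np.zeros(n - 1)
b[0] -= 1.0 / h**2          # known left boundary value u(0) = 1 moved to RHS

u = np.zeros(n + 1)
u[0] = 1.0
u[1:-1] = np.linalg.solve(A, b)

exact = np.sinh(kappa * (1.0 - x)) / np.sinh(kappa)
err = np.max(np.abs(u - exact))
```

    The O(h²) convergence of this stencil is what a WKB approximation would be benchmarked against in a production solver.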

  1. Big Data Meets Quantum Chemistry Approximations: The Δ-Machine Learning Approach.

    Science.gov (United States)

    Ramakrishnan, Raghunathan; Dral, Pavlo O; Rupp, Matthias; von Lilienfeld, O Anatole

    2015-05-12

    Chemically accurate and comprehensive studies of the virtual space of all possible molecules are severely limited by the computational cost of quantum chemistry. We introduce a composite strategy that adds machine learning corrections to computationally inexpensive approximate legacy quantum methods. After training, highly accurate predictions of enthalpies, free energies, entropies, and electron correlation energies are possible, for significantly larger molecular sets than used for training. For thermochemical properties of up to 16k isomers of C7H10O2, we present numerical evidence that chemical accuracy can be reached. We also predict electron correlation energies in post-Hartree-Fock methods, at the computational cost of Hartree-Fock, and we establish a qualitative relationship between molecular entropy and electron correlation. The transferability of our approach is demonstrated, using semiempirical quantum chemistry and machine learning models trained on 1 and 10% of 134k organic molecules, to reproduce enthalpies of all remaining molecules at the density functional theory level of accuracy.
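    A minimal sketch of the Δ-learning idea described above, with hypothetical one-dimensional "cheap" and "expensive" property functions standing in for quantum methods, and a polynomial least-squares fit standing in for the machine learning model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for an expensive reference method and a cheap approximate one
# (hypothetical 1-D "property" curves, not real quantum chemistry).
def expensive(x):
    return np.sin(3 * x) + 0.1 * x**2

def cheap(x):
    return np.sin(3 * x)          # misses the smooth 0.1*x^2 correction

# Delta-learning: fit the *difference* expensive - cheap on a small
# training set, here with a simple cubic least-squares model.
x_train = rng.uniform(-2, 2, 50)
delta = expensive(x_train) - cheap(x_train)
coeffs = np.polyfit(x_train, delta, deg=3)

# Corrected prediction = cheap baseline + learned correction.
x_test = np.linspace(-2, 2, 101)
pred = cheap(x_test) + np.polyval(coeffs, x_test)
err_delta = np.max(np.abs(pred - expensive(x_test)))
err_cheap = np.max(np.abs(cheap(x_test) - expensive(x_test)))
```

    The key property, as in the paper, is that the correction is smoother and easier to learn than the target quantity itself.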

  2. Statistical comparison of a hybrid approach with approximate and exact inference models for Fusion 2+

    Science.gov (United States)

    Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew

    2007-04-01

    One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian Networks, and Possibility Theory, in the form of Fuzzy Logic systems, has recently been introduced to provide a rigorous framework for high-level inference. In previous research, the theoretical basis and benefits of the hybrid approach have been developed. However, lacking is a concrete experimental comparison of the hybrid framework with traditional fusion methods, to demonstrate and quantify this benefit. The goal of this research, therefore, is to provide a statistical analysis on the comparison of the accuracy and performance of hybrid network theory, with pure Bayesian and Fuzzy systems and an inexact Bayesian system approximated using Particle Filtering. To accomplish this task, domain specific models will be developed under these different theoretical approaches and then evaluated, via Monte Carlo Simulation, in comparison to situational ground truth to measure accuracy and fidelity. Following this, a rigorous statistical analysis of the performance results will be performed, to quantify the benefit of hybrid inference to other fusion tools.

  3. A New Approach for Obtaining Cosmological Constraints from Type Ia Supernovae using Approximate Bayesian Computation

    Energy Technology Data Exchange (ETDEWEB)

    Jennings, Elise; Wolf, Rachel; Sako, Masao

    2016-11-09

    Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the `Tripp' and `Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of $\sim$1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying $\Omega_m, w_0, \alpha$ and $\beta$ and a magnitude offset parameter, with no systematics we obtain $\Delta(w_0) = w_0^{\rm true} - w_0^{\rm best \, fit} = -0.036\pm0.109$ (a $\sim$11% 1$\sigma$ uncertainty) using the Tripp metric and $\Delta(w_0) = -0.055\pm0.068$ (a $\sim$7% 1$\sigma$ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain $\Delta(w_0) = -0.062\pm0.132$ (a $\sim$14% 1$\sigma$ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on $w_0$ with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
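    The record describes ABC in general terms; a minimal rejection-sampling ABC sketch on a toy Gaussian model (not superABC, and with an assumed flat prior, tolerance and summary statistic) looks like this:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "observed" data set, generated with a true mean of 1.5.
obs = rng.normal(1.5, 1.0, size=200)
s_obs = obs.mean()                      # summary statistic of the observations

# ABC rejection: draw theta from the prior, forward-simulate a data set,
# and keep theta whenever the simulated summary is close to the observed one.
accepted = []
for _ in range(20000):
    theta = rng.uniform(-5, 5)          # flat prior over a wide range
    sim = rng.normal(theta, 1.0, size=200)
    if abs(sim.mean() - s_obs) < 0.05:  # tolerance epsilon
        accepted.append(theta)

posterior_mean = float(np.mean(accepted))
```

    The forward simulation is the place where, as in superABC, systematic effects would be injected and thereby marginalized over.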

  4. A variational approach to operator and matrix Padé approximation. Applications to potential scattering and field theory

    International Nuclear Information System (INIS)

    Mery, P.

    1977-01-01

    The operator and matrix Padé approximations are defined. The fact that these approximants can be derived from the Schwinger variational principle is emphasized. In potential theory, using this variational aspect, it is shown that the matrix Padé approximation allows one to reproduce the exact solution of the Lippmann-Schwinger equation to any required accuracy, taking into account only the first two coefficients of the Born expansion. The deep analytic structure of this variational matrix Padé approximation (hyper Padé approximation) is discussed.

  5. A novel approach for choosing summary statistics in approximate Bayesian computation.

    Science.gov (United States)

    Aeschbacher, Simon; Beaumont, Mark A; Futschik, Andreas

    2012-11-01

    The choice of summary statistics is a crucial step in approximate Bayesian computation (ABC). Since statistics are often not sufficient, this choice involves a trade-off between loss of information and reduction of dimensionality. The latter may increase the efficiency of ABC. Here, we propose an approach for choosing summary statistics based on boosting, a technique from the machine-learning literature. We consider different types of boosting and compare them to partial least-squares regression as an alternative. To mitigate the lack of sufficiency, we also propose an approach for choosing summary statistics locally, in the putative neighborhood of the true parameter value. We study a demographic model motivated by the reintroduction of Alpine ibex (Capra ibex) into the Swiss Alps. The parameters of interest are the mean and standard deviation across microsatellites of the scaled ancestral mutation rate (θ_anc = 4N_e u) and the proportion of males obtaining access to matings per breeding season (ω). By simulation, we assess the properties of the posterior distribution obtained with the various methods. According to our criteria, ABC with summary statistics chosen locally via boosting with the L_2-loss performs best. Applying that method to the ibex data, we estimate θ_anc ≈ 1.288 and find that most of the variation across loci of the ancestral mutation rate u lies between 7.7 × 10^-4 and 3.5 × 10^-3 per locus per generation. The proportion of males with access to matings is estimated as ω ≈ 0.21, which is in good agreement with recent independent estimates.

  6. Analytical approaches to the determination of spin-dependent parton distribution functions at NNLO approximation

    Science.gov (United States)

    Salajegheh, Maral; Nejad, S. Mohammad Moosavi; Khanpour, Hamzeh; Tehrani, S. Atashbar

    2018-05-01

    In this paper, we present the SMKA18 analysis, which is a first attempt to extract a set of next-to-next-to-leading-order (NNLO) spin-dependent parton distribution functions (spin-dependent PDFs) and their uncertainties, determined through the Laplace transform technique and the Jacobi polynomial approach. Using the Laplace transformation, we present an analytical solution of the spin-dependent Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution equations at the NNLO approximation. The results are extracted using a wide range of proton g_1^p(x, Q^2), neutron g_1^n(x, Q^2), and deuteron g_1^d(x, Q^2) spin-dependent structure function data, including the most recent high-precision measurements from the COMPASS16 experiments at CERN, which are playing an increasingly important role in global spin-dependent fits. Careful estimation of the uncertainties has been carried out using standard Hessian error propagation. We compare our results with the available spin-dependent inclusive deep inelastic scattering data set and with other results for spin-dependent PDFs in the literature. The results obtained for the spin-dependent PDFs, as well as for the spin-dependent structure functions, are clearly explained at both small and large values of x.

  7. Approximate likelihood approaches for detecting the influence of primordial gravitational waves in cosmic microwave background polarization

    Science.gov (United States)

    Pan, Zhen; Anderes, Ethan; Knox, Lloyd

    2018-05-01

    One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant sources of uncertainty in estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel-space, all-order likelihood analysis of the quadratically delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all-order lensing and pixel-space anomalies. Its tractability relies on a crucial factorization of the pixel-space covariance matrix of the polarization observations, which allows one to compute the full Gaussian approximate likelihood profile, as a function of r, at the same computational cost as a single likelihood evaluation.

  8. A conceptual approach to approximate tree root architecture in infinite slope models

    Science.gov (United States)

    Schmaltz, Elmar; Glade, Thomas

    2016-04-01

    Vegetation-related properties - particularly tree root distribution and the coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear difficult to reproduce reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties such as species and age, and extrinsic factors like topography, availability of nutrients, climate and soil type. These parameters control four main aspects of the tree root architecture: 1) type of rooting; 2) maximum growing distance from the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems and the connected hydrological and mechanical attributes sufficiently in a 3-dimensional slope stability model. Such a spatio-dynamic vegetation module should cope with the demands of performance, computation time and significance. In this presentation, however, we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that depends on age and species. We classified three main types of tree root systems and reproduced the species- and age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope setting. Thus, two solids in a Euclidean space were distinguished to represent the three root systems: i) cylinders with radius r and height h, whilst the dimensions of the latter define the shape of a taproot system or a shallow-root system respectively; ii) elliptic
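    The geometric solids proposed above determine the rooted soil volume directly; a small sketch compares a cylindrical root envelope with a half-ellipsoid of the same radius and depth (the tree dimensions below are hypothetical):

```python
import math

def cylinder_volume(r, h):
    """Rooted soil volume for a cylindrical root envelope (tap/shallow roots)."""
    return math.pi * r**2 * h

def half_ellipsoid_volume(r, h):
    """Rooted soil volume for a half-ellipsoid envelope of the same r and h."""
    return (2.0 / 3.0) * math.pi * r**2 * h

# Hypothetical tree: lateral root radius 3 m, rooting depth 1.5 m.
r, h = 3.0, 1.5
v_cyl = cylinder_volume(r, h)
v_ell = half_ellipsoid_volume(r, h)
ratio = v_ell / v_cyl   # a half-ellipsoid always encloses 2/3 of the cylinder
```

    The constant 2/3 ratio shows why the choice of solid matters for any volume-weighted root reinforcement term in a slope stability model.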

  9. Approximate Receding Horizon Approach for Markov Decision Processes: Average Reward Case

    National Research Council Canada - National Science Library

    Chang, Hyeong S; Marcus, Steven I

    2002-01-01

    ...) with countable state space, finite action space, and bounded rewards that uses an approximate solution of a fixed finite-horizon sub-MDP of a given infinite-horizon MDP to create a stationary policy...

  10. Operator approach to effective medium theory to overcome a breakdown of Maxwell Garnett approximation

    DEFF Research Database (Denmark)

    Popov, Vladislav; Lavrinenko, Andrei; Novitsky, Andrey

    2016-01-01

    that the zeroth-, first-, and second-order approximations of the operator effective medium theory correspond to electric dipoles, chirality, and magnetic dipoles plus electric quadrupoles, respectively. We discover that the spatially dispersive bianisotropic effective medium obtained in the second...

  11. Polyadenylation state microarray (PASTA) analysis.

    Science.gov (United States)

    Beilharz, Traude H; Preiss, Thomas

    2011-01-01

    Nearly all eukaryotic mRNAs terminate in a poly(A) tail that serves important roles in mRNA utilization. In the cytoplasm, the poly(A) tail promotes both mRNA stability and translation, and these functions are frequently regulated through changes in tail length. To identify the scope of poly(A) tail length control in a transcriptome, we developed the polyadenylation state microarray (PASTA) method. It involves the purification of mRNA based on poly(A) tail length using thermal elution from poly(U) sepharose, followed by microarray analysis of the resulting fractions. In this chapter we detail our PASTA approach and describe some methods for bulk and mRNA-specific poly(A) tail length measurements of use to monitor the procedure and independently verify the microarray data.

  12. A reductionist approach to extract robust molecular markers from microarray data series - Isolating markers to track osseointegration.

    Science.gov (United States)

    Barik, Anwesha; Banerjee, Satarupa; Dhara, Santanu; Chakravorty, Nishant

    2017-04-01

    Complexities in full genome expression studies hinder the extraction of tracker genes to analyze the course of biological events. In this study, we demonstrate the application of supervised machine learning methods to reduce the irrelevance in microarray data series and thereby extract robust molecular markers to track biological processes. The methodology has been illustrated by analyzing whole genome expression studies on bone-implant integration (osseointegration). Being a biological process, osseointegration is known to leave a trail of genetic footprints during its course. In spite of the existence of enormous amounts of raw data in public repositories, researchers still do not have access to a panel of genes that can definitively track osseointegration. The results from our study revealed that panels comprising matrix metalloproteinase and collagen genes were able to track osseointegration on implant surfaces (MMP9 and COL1A2 on micro-textured; MMP12 and COL6A3 on superimposed nano-textured surfaces) with 100% classification accuracy, specificity and sensitivity. Further, our analysis showed the importance of the progression of time in the establishment of the mechanical connection at the bone-implant surface. The findings from this study are expected to be useful to researchers investigating the osseointegration of novel implant materials, especially at the early stage. The methodology demonstrated can be easily adapted by scientists in different fields to analyze large databases for other biological processes. Copyright © 2017 Elsevier Inc. All rights reserved.
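    The record does not specify the classifier used; as a simplified stand-in for the supervised marker-selection step, genes can be ranked by a two-group Welch-type t statistic on a synthetic expression matrix with one planted marker gene:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic expression matrix: 8 control and 8 "osseointegrating" samples,
# 100 genes; gene 0 is planted as a strongly differential marker.
n_genes, n_per_group = 100, 8
control = rng.normal(0.0, 1.0, size=(n_per_group, n_genes))
treated = rng.normal(0.0, 1.0, size=(n_per_group, n_genes))
treated[:, 0] += 5.0                   # strong planted signal in gene 0

def t_scores(a, b):
    """Welch-style t statistic per gene (larger magnitude = better marker)."""
    va = a.var(axis=0, ddof=1) / a.shape[0]
    vb = b.var(axis=0, ddof=1) / b.shape[0]
    return (b.mean(axis=0) - a.mean(axis=0)) / np.sqrt(va + vb)

scores = np.abs(t_scores(control, treated))
best_gene = int(np.argmax(scores))     # index of the top-ranked marker
```

    A real pipeline would then wrap such a ranking in cross-validated classification to report accuracy, specificity and sensitivity as the paper does.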

  13. Microarray-based approach identifies microRNAs and their target functional patterns in polycystic kidney disease

    Directory of Open Access Journals (Sweden)

    Boehn Susanne NE

    2008-12-01

    Full Text Available Abstract Background MicroRNAs (miRNAs) play key roles in mammalian gene expression and several cellular processes, including differentiation, development, apoptosis and cancer pathomechanisms. Recently, the biological importance of primary cilia has been recognized in a number of human genetic diseases. Numerous disorders are related to cilia dysfunction, including polycystic kidney disease (PKD). Although the involvement of certain genes and transcriptional networks in PKD development has been shown, not much is known about how they are regulated molecularly. Results Given the emerging role of miRNAs in gene expression, we explored the possibilities of miRNA-based regulation in PKD. Here, we analyzed the simultaneous expression changes of miRNAs and mRNAs by microarrays. 935 genes, classified into 24 functional categories, were differentially regulated between PKD and control animals. In parallel, 30 miRNAs were differentially regulated in PKD rats: our results suggest that several miRNAs might be involved in regulating genetic switches in PKD. Furthermore, we describe some newly detected miRNAs, miR-31 and miR-217, in the kidney which have not been reported previously. We determined functionally related gene sets, or pathways, to reveal the functional correlation between differentially expressed mRNAs and miRNAs. Conclusion We find that the functional patterns of predicted miRNA targets and differentially expressed mRNAs are similar. Our results suggest an important role of miRNAs in specific pathways underlying PKD.

  14. Analytical approaches for the approximate solution of a nonlinear fractional ordinary differential equation

    International Nuclear Information System (INIS)

    Basak, K C; Ray, P C; Bera, R K

    2009-01-01

    The aim of the present analysis is to apply the Adomian decomposition method and He's variational method for the approximate analytical solution of a nonlinear ordinary fractional differential equation. The solutions obtained by the above two methods have been numerically evaluated and presented in the form of tables and also compared with the exact solution. It was found that the results obtained by the above two methods are in excellent agreement with the exact solution. Finally, a surface plot of the approximate solutions of the fractional differential equation by the above two methods is drawn for 0≤t≤2 and 1<α≤2.
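    For the integer-order special case y' = -y, y(0) = 1, the decomposition generates the familiar exponential series; a toy sketch sums successive terms (the fractional-order machinery of the paper is not reproduced here):

```python
import math

def decomposition_partial_sum(t, n_terms):
    """Partial sum of the series solution y(t) = sum_k (-t)^k / k! for
    y' = -y, y(0) = 1: the terms an Adomian-style decomposition generates
    in this linear special case. The exact solution is exp(-t)."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= -t / (k + 1)   # each new term from the previous one
    return total

approx = decomposition_partial_sum(1.0, 12)
exact = math.exp(-1.0)
err = abs(approx - exact)
```

    The rapid decay of the terms mirrors the excellent agreement with the exact solution that the abstract reports for the fractional case.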

  15. A Volterra series approach to the approximation of stochastic nonlinear dynamics

    NARCIS (Netherlands)

    Wouw, van de N.; Nijmeijer, H.; Campen, van D.H.

    2002-01-01

    A response approximation method for stochastically excited, nonlinear, dynamic systems is presented. Herein, the output of the nonlinear system is approximated by a finite-order Volterra series. The original nonlinear system is replaced by a bilinear system in order to determine the kernels of this

  16. Novel approach to select genes from RMA normalized microarray data using functional hearing tests in aging mice

    Science.gov (United States)

    D'Souza, Mary; Zhu, Xiaoxia; Frisina, Robert D.

    2008-01-01

    Presbycusis – age-related hearing loss – is the number one communicative disorder and one of the top three chronic medical conditions of our aged population. High-throughput technologies can potentially be used to identify differentially expressed genes that may be better diagnostic and therapeutic targets for sensory and neural disorders. Here we analyzed gene expression for a set of GABA receptors in the cochlea of aging CBA mice using the Affymetrix GeneChip MOE430A. Functional phenotypic hearing measures were made, including auditory brainstem response (ABR) thresholds and distortion-product otoacoustic emission (DPOAE) amplitudes (four age groups). Four specific criteria were used to assess gene expression changes from RMA normalized microarray data (40 replicates). Linear regression models were used to fit the neurophysiological hearing measurements to probe-set expression profiles. These data were first subjected to one-way ANOVA, and then linear regression was performed. In addition, the log signal ratio was converted to fold change, and selected gene expression changes were confirmed by relative real-time PCR. Major findings: expression of GABA-A receptor subunit α6 was upregulated with age and hearing loss, whereas subunit α1 was repressed. In addition, GABA-A receptor associated protein like-1 and GABA-A receptor associated protein like-2 were strongly downregulated with age and hearing impairment. Lastly, gene expression measures were correlated with pathway/network relationships relevant to the inner ear using Pathway Architect, to identify key pathways consistent with the observed gene expression changes. PMID:18455804
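    The regression step described above can be sketched on synthetic data: fit one probe set's expression profile against an assumed hearing measure (ABR threshold) by ordinary least squares. All numbers below are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical phenotype: ABR thresholds (dB SPL) for 40 mice across ages,
# and one probe set whose expression rises with hearing loss plus noise.
abr = rng.uniform(20, 80, size=40)
expression = 0.05 * abr + 2.0 + rng.normal(0, 0.1, size=40)

# Ordinary least squares: expression ~ a * abr + b.
X = np.column_stack([abr, np.ones_like(abr)])
(a, b), *_ = np.linalg.lstsq(X, expression, rcond=None)

# Coefficient of determination for the fit.
resid = expression - (a * abr + b)
r2 = 1.0 - resid.var() / expression.var()
```

    In the study, such fits (after ANOVA pre-filtering) flag probe sets whose expression tracks the functional hearing decline.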

  17. A Padé approximant approach to two kinds of transcendental equations with applications in physics

    International Nuclear Information System (INIS)

    Luo, Qiang; Wang, Zhidan; Han, Jiurong

    2015-01-01

    In this paper, we obtain the analytical solutions of two kinds of transcendental equations with numerous applications in college physics by means of the Lagrange inversion theorem. Afterwards we rewrite them in the form of a ratio of rational polynomials by a second-order Padé approximant from a practical and instructional perspective. Our method is illustrated in a pedagogical manner for the benefit of students at the undergraduate level. The approximate formulas introduced in the paper can be applied to abundant examples in physics textbooks, such as Fraunhofer single-slit diffraction, Wien’s displacement law, and the Schrödinger equation with single- or double-δ potential. These formulas, consequently, can reach considerable accuracies according to the numerical results; therefore, they promise to act as valuable ingredients in the standard teaching curriculum. (paper)
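    One of the textbook examples mentioned, Wien's displacement law, reduces to the transcendental equation x = 5(1 - e^-x); a short sketch obtains the nonzero root by simple fixed-point iteration, against which a Padé-type rational approximation could be benchmarked:

```python
import math

# Wien's displacement law leads to x = 5 * (1 - exp(-x)); the nonzero root
# x ~ 4.9651 gives lambda_max * T = h*c / (x * k_B).
x = 5.0                        # starting guess near the root
for _ in range(50):
    x = 5.0 * (1.0 - math.exp(-x))   # successive substitution

root = x
```

    Near the root the iteration map has derivative 5e^-x ≈ 0.035, so convergence is fast; a second-order Padé approximant as in the paper would supply a closed-form starting value instead.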

  18. Genomics approaches in the understanding of Entamoeba ...

    African Journals Online (AJOL)

    STORAGESEVER

    2009-04-20

    Apr 20, 2009 ... Here, we reviewed recent advances in the efforts to understand ... expression regulation in E. histolytica by using genomic approaches based on microarray technology ... tic abscesses that result in approximately 70,000 -.

  19. An intrinsic robust rank-one-approximation approach for currency portfolio optimization

    OpenAIRE

    Hongxuan Huang; Zhengjun Zhang

    2018-01-01

    A currency portfolio is a special kind of wealth whose value fluctuates with foreign exchange rates over time, which possesses the 3Vs (volume, variety and velocity) properties of big data in the currency market. In this paper, an intrinsic robust rank one approximation (ROA) approach is proposed to maximize the value of currency portfolios over time. The main results of the paper include four parts: Firstly, under the assumptions about the currency market, the currency portfolio optimization problem ...

  20. Hybrid Approximate Dynamic Programming Approach for Dynamic Optimal Energy Flow in the Integrated Gas and Power Systems

    DEFF Research Database (Denmark)

    Shuai, Hang; Ai, Xiaomeng; Wen, Jinyu

    2017-01-01

    This paper proposes a hybrid approximate dynamic programming (ADP) approach for the multiple time-period optimal power flow in integrated gas and power systems. ADP successively solves Bellman's equation to make decisions according to the current state of the system. So, the updated near future...

  1. A New Approach for the Approximations of Solutions to a Common Fixed Point Problem in Metric Fixed Point Theory

    Directory of Open Access Journals (Sweden)

    Ishak Altun

    2016-01-01

    We provide sufficient conditions for the existence of a unique common fixed point for a pair of mappings T,S:X→X, where X is a nonempty set endowed with a certain metric. Moreover, a numerical algorithm is presented in order to approximate such a solution. Our approach differs from the methods usually used in the literature.
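The paper's algorithm targets a common fixed point of a pair of mappings; as a minimal single-map illustration of approximating a fixed point by iteration (not the authors' scheme for the pair T, S), Picard iteration on the contraction cos converges to the Dottie number:

```python
import math

def picard(f, x0, tol=1e-10, max_iter=1000):
    """Picard iteration x_{n+1} = f(x_n); converges whenever f is a
    contraction on a complete metric space (Banach fixed-point theorem)."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

fixed = picard(math.cos, 1.0)
print(fixed)  # Dottie number, ≈ 0.7390851
```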

  2. The EADGENE Microarray Data Analysis Workshop

    DEFF Research Database (Denmark)

    de Koning, Dirk-Jan; Jaffrézic, Florence; Lund, Mogens Sandø

    2007-01-01

    Microarray analyses have become an important tool in animal genomics. While their use is becoming widespread, there is still a lot of ongoing research regarding the analysis of microarray data. In the context of a European Network of Excellence, 31 researchers representing 14 research groups from 10 countries performed and discussed the statistical analyses of real and simulated 2-colour microarray data that were distributed among participants. The real data consisted of 48 microarrays from a disease challenge experiment in dairy cattle, while the simulated data consisted of 10 microarrays … statistical weights, to omitting a large number of spots or omitting entire slides. Surprisingly, these very different approaches gave quite similar results when applied to the simulated data, although not all participating groups analysed both real and simulated data. The workshop was very successful …

  3. An approximate dynamic programming approach to resource management in multi-cloud scenarios

    Science.gov (United States)

    Pietrabissa, Antonio; Priscoli, Francesco Delli; Di Giorgio, Alessandro; Giuseppi, Alessandro; Panfili, Martina; Suraci, Vincenzo

    2017-03-01

    The programmability and the virtualisation of network resources are crucial to deploy scalable Information and Communications Technology (ICT) services. The increasing demand of cloud services, mainly devoted to the storage and computing, requires a new functional element, the Cloud Management Broker (CMB), aimed at managing multiple cloud resources to meet the customers' requirements and, simultaneously, to optimise their usage. This paper proposes a multi-cloud resource allocation algorithm that manages the resource requests with the aim of maximising the CMB revenue over time. The algorithm is based on Markov decision process modelling and relies on reinforcement learning techniques to find online an approximate solution.
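The Markov-decision-process formulation behind this record can be illustrated with textbook value iteration on a toy two-state broker model. The states, transition probabilities, and rewards below are invented for illustration and have no connection to the paper's actual model (which additionally uses reinforcement learning to find an approximate solution online).

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP.
    P[s][a] is a list of (probability, next_state); R[s][a] is the reward."""
    n_states = len(P)
    V = [0.0] * n_states
    while True:
        V_new = []
        for s in range(n_states):
            V_new.append(max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in range(len(P[s]))
            ))
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new

# Hypothetical broker: state 0 = capacity free, state 1 = saturated.
# Action 0 = accept a resource request, action 1 = reject it.
P = [
    [[(0.5, 0), (0.5, 1)], [(1.0, 0)]],   # transitions from "free"
    [[(1.0, 1)], [(0.7, 0), (0.3, 1)]],   # transitions from "saturated"
]
R = [
    [1.0, 0.0],    # accepting while free earns revenue
    [-1.0, 0.0],   # accepting while saturated incurs a penalty
]
V = value_iteration(P, R)
print(V)
```

The discount factor gamma makes the Bellman backup a contraction, so the loop is guaranteed to terminate; the resulting values show that the free state is worth more than the saturated one.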

  4. Approximate entropy: a new evaluation approach of mental workload under multitask conditions

    Science.gov (United States)

    Yao, Lei; Li, Xiaoling; Wang, Wei; Dong, Yuanzhe; Jiang, Ying

    2014-04-01

    There are numerous instruments and an abundance of complex information in the traditional cockpit display-control system, and pilots require a long time to familiarize themselves with the cockpit interface. This can cause accidents when they cope with emergency events, suggesting that it is necessary to evaluate pilot cognitive workload. To establish a simplified method for evaluating cognitive workload under multitask conditions, we designed a series of experiments involving different instrument panels and collected electroencephalograms (EEG) from 10 healthy volunteers. The data were classified and analyzed using approximate entropy (ApEn) signal processing. ApEn increased with increasing experiment difficulty, suggesting that it can be used to evaluate cognitive workload. Our results demonstrate that ApEn can be used as an evaluation criterion of cognitive workload and has good specificity and sensitivity. Moreover, we determined an empirical formula to assess the cognitive workload interval, which can simplify cognitive workload evaluation under multitask conditions.
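Approximate entropy itself is a standard statistic (Pincus, 1991): it compares how often length-m templates match against how often length-(m+1) templates match, so regular signals score near zero and irregular signals score higher. Below is a compact reference implementation with conventional parameter choices m = 2 and tolerance r; this is the general definition, not the authors' exact EEG pipeline.

```python
import math
import random

def apen(series, m=2, r=0.2):
    """Approximate entropy (Pincus). r is the matching tolerance,
    in the same units as the series."""
    n = len(series)

    def phi(m):
        templates = [series[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for t1 in templates:
            matches = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r
            )
            total += math.log(matches / len(templates))
        return total / len(templates)

    return phi(m) - phi(m + 1)

random.seed(1)
regular = [0.0, 1.0] * 50                         # perfectly periodic signal
noisy = [random.uniform(0, 1) for _ in range(100)]  # irregular signal
print(apen(regular), apen(noisy))  # periodic scores far below noisy
```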

  5. Development of Probabilistic and Possebilistic Approaches to Approximate Reasoning and Its Applications

    Science.gov (United States)

    1989-10-31

    [OCR-garbled report documentation page] … AI (circumscription, non-monotonic reasoning, and default reasoning), our approach is based on fuzzy logic and, more specifically, on the theory of

  6. Metric learning for DNA microarray data analysis

    International Nuclear Information System (INIS)

    Takeuchi, Ichiro; Nakagawa, Masao; Seto, Masao

    2009-01-01

    In many microarray studies, gene set selection is an important preliminary step for subsequent main task such as tumor classification, cancer subtype identification, etc. In this paper, we investigate the possibility of using metric learning as an alternative to gene set selection. We develop a simple metric learning algorithm aiming to use it for microarray data analysis. Exploiting a property of the algorithm, we introduce a novel approach for extending the metric learning to be adaptive. We apply the algorithm to previously studied microarray data on malignant lymphoma subtype identification.

  7. Top-Down Chemoenzymatic Approach to Synthesizing Diverse High-Mannose N-Glycans and Related Neoglycoproteins for Carbohydrate Microarray Analysis.

    Science.gov (United States)

    Toonstra, Christian; Wu, Lisa; Li, Chao; Wang, Denong; Wang, Lai-Xi

    2018-05-22

    High-mannose-type N-glycans are an important component of neutralizing epitopes on HIV-1 envelope glycoprotein gp120. They also serve as signals for protein folding, trafficking, and degradation in protein quality control. A number of lectins and antibodies recognize high-mannose-type N-glycans, and glycan array technology has provided an avenue to probe these oligomannose-specific proteins. We describe in this paper a top-down chemoenzymatic approach to synthesize a library of high-mannose N-glycans and related neoglycoproteins for glycan microarray analysis. The method involves the sequential enzymatic trimming of two readily available natural N-glycans, the Man9GlcNAc2Asn prepared from soybean flour and the sialoglycopeptide (SGP) isolated from chicken egg yolks, coupled with chromatographic separation to obtain a collection of a full range of natural high-mannose N-glycans. The Asn-linked N-glycans were conjugated to bovine serum albumin (BSA) to provide neoglycoproteins containing the oligomannose moieties. The glycoepitopes displayed were characterized using an array of glycan-binding proteins, including the broadly virus-neutralizing agents, glycan-specific antibody 2G12, Galanthus nivalis lectin (GNA), and Narcissus pseudonarcissus lectin (NPA).

  8. Unraveling the rat blood genome-wide transcriptome after oral administration of lavender oil by a two-color dye-swap DNA microarray approach

    Directory of Open Access Journals (Sweden)

    Motohide Hori

    2016-06-01

    Lavender oil (LO) is a commonly used essential oil in aromatherapy as non-traditional medicine. With an aim to demonstrate LO effects on the body, we have recently established an animal model investigating the influence of orally administered LO in rat tissues, genome-wide. In this brief, we investigate the effect of LO ingestion in the blood of the rat. Rats were administered LO at the usual therapeutic dose (5 mg/kg in humans), and following collection of the venous blood from the heart and extraction of total RNA, the differentially expressed genes were screened using a 4 × 44-K whole-genome rat chip (Agilent microarray platform; Agilent Technologies, Palo Alto, CA, USA) in conjunction with a two-color dye-swap approach. A total of 834 differentially expressed genes in the blood were identified: 362 up-regulated and 472 down-regulated. These genes were functionally categorized using bioinformatics tools. The gene expression inventory of the rat blood transcriptome under LO, a first report, has been deposited into the Gene Expression Omnibus (GEO: GSE67499). The data will be a valuable resource in examining the effects of natural products, and could also serve as a human model for further functional analysis and investigation.

  9. MARS: Microarray analysis, retrieval, and storage system

    Directory of Open Access Journals (Sweden)

    Scheideler Marcel

    2005-04-01

    Background: Microarray analysis has become a widely used technique for the study of gene-expression patterns on a genomic scale. As more and more laboratories are adopting microarray technology, there is a need for powerful and easy-to-use microarray databases facilitating array fabrication, labeling, hybridization, and data analysis. The wealth of data generated by this high-throughput approach renders adequate database and analysis tools crucial for the pursuit of insights into the transcriptomic behavior of cells. Results: MARS (Microarray Analysis and Retrieval System) provides a comprehensive MIAME-supportive suite for storing, retrieving, and analyzing multi-color microarray data. The system comprises a laboratory information management system (LIMS), quality control management, as well as a sophisticated user management system. MARS is fully integrated into an analytical pipeline of microarray image analysis, normalization, gene expression clustering, and mapping of gene expression data onto biological pathways. The incorporation of ontologies and the use of MAGE-ML enable an export of studies stored in MARS to public repositories and other databases accepting these documents. Conclusion: We have developed an integrated system tailored to serve the specific needs of microarray-based research projects using a unique fusion of Web-based and standalone applications connected to the latest J2EE application server technology. The presented system is freely available for academic and non-profit institutions. More information can be found at http://genome.tugraz.at.

  10. Fibre optic microarrays.

    Science.gov (United States)

    Walt, David R

    2010-01-01

    This tutorial review describes how fibre optic microarrays can be used to create a variety of sensing and measurement systems. This review covers the basics of optical fibres and arrays, the different microarray architectures, and describes a multitude of applications. Such arrays enable multiplexed sensing for a variety of analytes including nucleic acids, vapours, and biomolecules. Polymer-coated fibre arrays can be used for measuring microscopic chemical phenomena, such as corrosion and localized release of biochemicals from cells. In addition, these microarrays can serve as a substrate for fundamental studies of single molecules and single cells. The review covers topics of interest to chemists, biologists, materials scientists, and engineers.

  11. Dynamic and static correlation functions in the inhomogeneous Hartree-Fock-state approach with random-phase-approximation fluctuations

    International Nuclear Information System (INIS)

    Lorenzana, J.; Grynberg, M.D.; Yu, L.; Yonemitsu, K.; Bishop, A.R.

    1992-11-01

    The ground state energy, and static and dynamic correlation functions are investigated in the inhomogeneous Hartree-Fock (HF) plus random phase approximation (RPA) approach applied to a one-dimensional spinless fermion model showing self-trapped doping states at the mean field level. Results are compared with homogeneous HF and exact diagonalization. RPA fluctuations added to the generally inhomogeneous HF ground state allow the computation of dynamical correlation functions that compare well with exact diagonalization results. The RPA correction to the ground state energy agrees well with the exact results in the strong and weak coupling limits. We also compare it with a related quasi-boson approach. The instability towards self-trapped behaviour is signaled by an RPA mode with frequency approaching zero. (author). 21 refs, 10 figs

  12. Calculation of static characteristics of linear step motors for control rod drives of nuclear reactors - an approximate approach

    International Nuclear Information System (INIS)

    Khan, S.H.; Ivanov, A.A.

    1993-01-01

    This paper describes an approximate method for calculating the static characteristics of linear step motors (LSM) being developed for control rod drives (CRD) in large nuclear reactors. The static characteristic of such an LSM, given by the variation of electromagnetic force with armature displacement, determines the motor performance in its standing and dynamic modes. The approximate method of calculation of these characteristics is based on the permeance analysis method applied to the phase magnetic circuit of the LSM. This is a simple, fast and efficient analytical approach which gives satisfactory results for small stator currents and weak iron saturation, typical of the standing mode of operation of LSM. The method is validated by comparing theoretical results with experimental ones. (Author)

  13. Radioactive cDNA microarray in neurospsychiatry

    International Nuclear Information System (INIS)

    Choe, Jae Gol; Shin, Kyung Ho; Lee, Min Soo; Kim, Meyoung Kon

    2003-01-01

    Microarray technology allows the simultaneous analysis of gene expression patterns of thousands of genes, in a systematic fashion, under a similar set of experimental conditions, thus making the data highly comparable. In some cases arrays are used simply as a primary screen leading to downstream molecular characterization of individual gene candidates. In other cases, the goal of expression profiling is to begin to identify complex regulatory networks underlying developmental processes and disease states. Microarrays were originally used with cell lines or other simple model systems. More recently, microarrays have been used in the analysis of more complex biological tissues including neural systems and the brain. The application of cDNA arrays in neuropsychiatry has lagged behind other fields for a number of reasons. These include a requirement for a large amount of input probe RNA in fluorescent-glass based array systems and the cellular complexity introduced by multicellular brain and neural tissues. An additional factor that impacts the general use of microarrays in neuropsychiatry is the lack of availability of sequenced clone sets from model systems. While human cDNA clones have been widely available, high-quality rat, mouse, and Drosophila clone sets, among others, are just becoming widely available. A final factor in the application of cDNA microarrays in neuropsychiatry is the cost of commercial arrays. As academic microarray facilities become more commonplace, custom-made arrays will become more widely available at a lower cost, allowing more widespread applications. In summary, microarray technology is rapidly having an impact on many areas of biomedical research. Radioisotope-nylon based microarrays offer alternatives that may in some cases be more sensitive, flexible, inexpensive, and universal as compared to other array formats, such as fluorescent-glass arrays. In some situations of limited RNA or exotic species, radioactive membrane microarrays may be the most

  14. Radioactive cDNA microarray in neurospsychiatry

    Energy Technology Data Exchange (ETDEWEB)

    Choe, Jae Gol; Shin, Kyung Ho; Lee, Min Soo; Kim, Meyoung Kon [Korea University Medical School, Seoul (Korea, Republic of)

    2003-02-01

    Microarray technology allows the simultaneous analysis of gene expression patterns of thousands of genes, in a systematic fashion, under a similar set of experimental conditions, thus making the data highly comparable. In some cases arrays are used simply as a primary screen leading to downstream molecular characterization of individual gene candidates. In other cases, the goal of expression profiling is to begin to identify complex regulatory networks underlying developmental processes and disease states. Microarrays were originally used with cell lines or other simple model systems. More recently, microarrays have been used in the analysis of more complex biological tissues including neural systems and the brain. The application of cDNA arrays in neuropsychiatry has lagged behind other fields for a number of reasons. These include a requirement for a large amount of input probe RNA in fluorescent-glass based array systems and the cellular complexity introduced by multicellular brain and neural tissues. An additional factor that impacts the general use of microarrays in neuropsychiatry is the lack of availability of sequenced clone sets from model systems. While human cDNA clones have been widely available, high-quality rat, mouse, and Drosophila clone sets, among others, are just becoming widely available. A final factor in the application of cDNA microarrays in neuropsychiatry is the cost of commercial arrays. As academic microarray facilities become more commonplace, custom-made arrays will become more widely available at a lower cost, allowing more widespread applications. In summary, microarray technology is rapidly having an impact on many areas of biomedical research. Radioisotope-nylon based microarrays offer alternatives that may in some cases be more sensitive, flexible, inexpensive, and universal as compared to other array formats, such as fluorescent-glass arrays. In some situations of limited RNA or exotic species, radioactive membrane microarrays may be the most

  15. Genomic DNA Enrichment Using Sequence Capture Microarrays: a Novel Approach to Discover Sequence Nucleotide Polymorphisms (SNP) in Brassica napus L

    Science.gov (United States)

    Clarke, Wayne E.; Parkin, Isobel A.; Gajardo, Humberto A.; Gerhardt, Daniel J.; Higgins, Erin; Sidebottom, Christine; Sharpe, Andrew G.; Snowdon, Rod J.; Federico, Maria L.; Iniguez-Luy, Federico L.

    2013-01-01

    Targeted genomic selection methodologies, or sequence capture, allow for DNA enrichment and large-scale resequencing and characterization of natural genetic variation in species with complex genomes, such as rapeseed canola (Brassica napus L., AACC, 2n=38). The main goal of this project was to combine sequence capture with next generation sequencing (NGS) to discover single nucleotide polymorphisms (SNPs) in specific areas of the B. napus genome historically associated (via quantitative trait loci –QTL– analysis) with traits of agronomical and nutritional importance. A 2.1 million feature sequence capture platform was designed to interrogate DNA sequence variation across 47 specific genomic regions, representing 51.2 Mb of the Brassica A and C genomes, in ten diverse rapeseed genotypes. All ten genotypes were sequenced using the 454 Life Sciences chemistry and, to assess the effect of increased sequence depth, two genotypes were also sequenced using Illumina HiSeq chemistry. As a result, 589,367 potentially useful SNPs were identified. Analysis of sequence coverage indicated a four-fold increased representation of target regions, with 57% of the filtered SNPs falling within these regions. Sixty percent of the discovered SNPs corresponded to transitions while 40% were transversions. Interestingly, 58% of the SNPs were found in genic regions while 42% were found in intergenic regions. Further, a high percentage of genic SNPs was found in exons (65% and 64% for the A and C genomes, respectively). Two different genotyping assays were used to validate the discovered SNPs. Validation rates ranged from 61.5% to 84% of tested SNPs, underpinning the effectiveness of this SNP discovery approach. Most importantly, the discovered SNPs were associated with agronomically important regions of the B. napus genome, generating a novel data resource for research and breeding this crop species. PMID:24312619
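The transition/transversion split reported above follows from a simple base-class rule: a purine-to-purine (A/G) or pyrimidine-to-pyrimidine (C/T) change is a transition, anything else is a transversion. The SNP calls in this sketch are hypothetical examples, not data from the study.

```python
PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def is_transition(ref, alt):
    """Transition: purine<->purine or pyrimidine<->pyrimidine.
    Any purine<->pyrimidine change is a transversion."""
    return ({ref, alt} <= PURINES) or ({ref, alt} <= PYRIMIDINES)

# Hypothetical (ref, alt) SNP calls, for illustration only.
snps = [("A", "G"), ("C", "T"), ("A", "C"), ("G", "T"), ("C", "T")]
ti = sum(1 for ref, alt in snps if is_transition(ref, alt))
tv = len(snps) - ti
print(f"transitions={ti}, transversions={tv}")  # transitions=3, transversions=2
```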

  16. DNA Microarray Technology

    Science.gov (United States)


  17. DNA Microarray Technology; TOPICAL

    International Nuclear Information System (INIS)

    WERNER-WASHBURNE, MARGARET; DAVIDSON, GEORGE S.

    2002-01-01

    Collaboration between Sandia National Laboratories and the University of New Mexico Biology Department resulted in the capability to train students in microarray techniques and the interpretation of data from microarray experiments. These studies provide for a better understanding of the role of stationary phase and the gene regulation involved in exit from stationary phase, which may eventually have important clinical implications. Importantly, this research trained numerous students and is the basis for three new Ph.D. projects

  18. Optimization of approximate decision rules relative to number of misclassifications: Comparison of greedy and dynamic programming approaches

    KAUST Repository

    Amin, Talha

    2013-01-01

    In the paper, we present a comparison of dynamic programming and greedy approaches for construction and optimization of approximate decision rules relative to the number of misclassifications. We use an uncertainty measure that is a difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. Experimental results with decision tables from the UCI Machine Learning Repository are also presented. © 2013 Springer-Verlag.
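The uncertainty measure used here is straightforward to state in code: for a decision table T it is the number of rows minus the number of rows carrying the most common decision, and γ-decision rules localize rows in subtables whose uncertainty is at most γ. The toy table below is illustrative only.

```python
from collections import Counter

def uncertainty(table):
    """Difference between the number of rows and the number of rows
    with the most common decision (taken as the last column of each row)."""
    decisions = [row[-1] for row in table]
    return len(decisions) - Counter(decisions).most_common(1)[0][1]

# Toy decision table: attribute columns followed by a decision column.
T = [
    (0, 1, "yes"),
    (1, 1, "yes"),
    (1, 0, "no"),
    (0, 0, "yes"),
]
print(uncertainty(T))  # 4 rows, 3 share decision "yes" -> uncertainty 1
```

With γ = 1, the whole table T already qualifies as a subtable a γ-decision rule may localize rows in, since its uncertainty does not exceed γ.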

  19. A New Approach to Rational Discrete-Time Approximations to Continuous-Time Fractional-Order Systems

    OpenAIRE

    Matos, Carlos; Ortigueira, Manuel

    2012-01-01

    Part 10: Signal Processing; In this paper, a new approach to rational discrete-time approximations to continuous fractional-order systems of the form 1/(s^α+p) is proposed. We will show that such a fractional-order LTI system can be decomposed into sub-systems. One has the classic behavior and the other is similar to a Finite Impulse Response (FIR) system. The conversion from continuous-time to discrete-time systems will be done using the Laplace transform inversion integr...

  20. Random-phase-approximation approach to optical and magnetic excitations in the two-dimensional multiband Hubbard model

    International Nuclear Information System (INIS)

    Yonemitsu, K.; Bishop, A.R.

    1992-01-01

    As a convenient qualitative approach to strongly correlated electronic systems, an inhomogeneous Hartree-Fock plus random-phase approximation is applied to response functions for the two-dimensional multiband Hubbard model for cuprate superconductors. A comparison of the results with those obtained by exact diagonalization by Wagner, Hanke, and Scalapino [Phys. Rev. B 43, 10 517 (1991)] shows that overall structures in optical and magnetic particle-hole excitation spectra are well reproduced by this method. This approach is computationally simple, retains conceptual clarity, and can be calibrated by comparison with exact results on small systems. Most importantly, it is easily extended to larger systems and straightforward to incorporate additional terms in the Hamiltonian, such as electron-phonon interactions, which may play a crucial role in high-temperature superconductivity

  1. Optical properties of non-spherical desert dust particles in the terrestrial infrared – An asymptotic approximation approach

    International Nuclear Information System (INIS)

    Klüser, Lars; Di Biagio, Claudia; Kleiber, Paul D.; Formenti, Paola; Grassian, Vicki H.

    2016-01-01

    Optical properties (extinction efficiency, single scattering albedo, asymmetry parameter and scattering phase function) of five different desert dust minerals have been calculated with an asymptotic approximation approach (AAA) for non-spherical particles. The AAA method combines Rayleigh-limit approximations with an asymptotic geometric optics solution in a simple and straightforward formulation. The simulated extinction spectra have been compared with classical Lorenz–Mie calculations as well as with laboratory measurements of dust extinction. This comparison has been done for single minerals and with bulk dust samples collected from desert environments. It is shown that the non-spherical asymptotic approximation improves the spectral extinction pattern, including position of the extinction peaks, compared to the Lorenz–Mie calculations for spherical particles. Squared correlation coefficients from the asymptotic approach range from 0.84 to 0.96 for the mineral components whereas the corresponding numbers for Lorenz–Mie simulations range from 0.54 to 0.85. Moreover the blue shift typically found in Lorenz–Mie results is not present in the AAA simulations. The comparison of spectra simulated with the AAA for different shape assumptions suggests that the differences mainly stem from the assumption of the particle shape and not from the formulation of the method itself. It has been shown that the choice of particle shape strongly impacts the quality of the simulations. Additionally, the comparison of simulated extinction spectra with bulk dust measurements indicates that within airborne dust the composition may be inhomogeneous over the range of dust particle sizes, making the calculation of reliable radiative properties of desert dust even more complex. - Highlights: • A fast and simple method for estimating optical properties of dust. • Can be used with non-spherical particles of arbitrary size distributions. • Comparison with Mie simulations and

  2. Approximate Likelihood

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...

  3. Integrative missing value estimation for microarray data.

    Science.gov (United States)

    Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine

    2006-10-12

    Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain less than eight samples. We present the integrative Missing Value Estimation method (iMISS) by incorporating information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking reference data sets into consideration. To determine whether the given reference data sets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Square (LLS) imputation algorithm by up to 15% in our benchmark tests. We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.

  4. Integrative missing value estimation for microarray data

    Directory of Open Access Journals (Sweden)

    Zhou Xianghong

    2006-10-01

    Background: Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain less than eight samples. Results: We present the integrative Missing Value Estimation method (iMISS) by incorporating information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking reference data sets into consideration. To determine whether the given reference data sets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Square (LLS) imputation algorithm by up to 15% in our benchmark tests. Conclusion: We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.
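A drastically simplified neighbor-based imputation (plain k-nearest-rows averaging, not the LLS or iMISS algorithms described in this record) conveys the basic idea of borrowing values from similar genes; the matrix values and the choice of k are illustrative.

```python
import math

def knn_impute(matrix, k=2):
    """Fill None entries with the average of the k nearest rows, using
    Euclidean distance over the columns observed in both rows. A toy
    stand-in for neighbor-gene based imputation schemes."""
    filled = [row[:] for row in matrix]
    for i, row in enumerate(matrix):
        for j, v in enumerate(row):
            if v is None:
                cands = []
                for i2, other in enumerate(matrix):
                    if i2 == i or other[j] is None:
                        continue
                    common = [(a, b) for a, b in zip(row, other)
                              if a is not None and b is not None]
                    if not common:
                        continue
                    d = math.sqrt(sum((a - b) ** 2 for a, b in common))
                    cands.append((d, other[j]))
                cands.sort(key=lambda t: t[0])
                neighbours = cands[:k]
                filled[i][j] = sum(v2 for _, v2 in neighbours) / len(neighbours)
    return filled

# Rows are genes, columns are samples; row 0 has one missing value.
genes = [
    [1.0, 2.0, None],
    [1.1, 2.1, 3.0],
    [0.9, 1.9, 2.8],
    [5.0, 9.0, 7.0],
]
filled = knn_impute(genes)
print(filled[0][2])  # averages the two most similar genes, not the distant one
```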

  5. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    Science.gov (United States)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data, choosing the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
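The baseline technique can be sketched generically: a least-squares cubic spline with equidistant interior knots approximates the background, and the residual keeps the superimposed fluctuations. This is an illustrative sketch, not the authors' repeating-spline code; `spline_background` is a hypothetical helper and scipy is assumed to be available.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def spline_background(x, y, n_knots):
    """Least-squares cubic spline with equidistant interior knots:
    the spline is the background estimate, the residual carries the
    superimposed fluctuations."""
    t = np.linspace(x[0], x[-1], n_knots + 2)[1:-1]   # interior knots only
    spl = LSQUnivariateSpline(x, y, t, k=3)
    background = spl(x)
    return background, y - background

# toy series: slow background wave plus a fast, small-amplitude fluctuation
x = np.linspace(0.0, 10.0, 400)
y = np.sin(2 * np.pi * x / 10) + 0.1 * np.sin(2 * np.pi * x / 0.5)
bg, res = spline_background(x, y, n_knots=6)
```

With too many knots, the spline starts to follow the fast component and oscillate artificially, which is exactly the failure mode the repeating-spline approach is designed to reduce.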

  6. Learning to reach by reinforcement learning using a receptive field based function approximation approach with continuous actions.

    Science.gov (United States)

    Tamosiunaite, Minija; Asfour, Tamim; Wörgötter, Florentin

    2009-03-01

    Reinforcement learning methods can be used in robotics applications, especially for specific target-oriented problems such as the reward-based recalibration of goal-directed actions. To this end, relatively large and continuous state-action spaces need to be handled efficiently. The goal of this paper is, thus, to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward strategies for solving such problems. For the testing of our method, we use a four-degree-of-freedom reaching problem in 3D space simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D, overlapping kernels (receptive fields), and the state-action space contains about 10,000 of these. Different types of reward structures are compared, for example, reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of the rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined in situations where other types of learning might be difficult.
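The combination named in the abstract, overlapping Gaussian receptive fields with a linear read-out trained by reward, can be sketched generically. The sketch below is a plain TD(0) value-function update on a toy 2-D grid, not the paper's 4-D state-action setup with continuous actions; all names (`gaussian_features`, `td_update`) are hypothetical.

```python
import numpy as np

def gaussian_features(s, centers, width):
    """Overlapping Gaussian receptive fields over the state space,
    normalized so the activations sum to one."""
    d = np.linalg.norm(centers - s, axis=1)
    phi = np.exp(-(d / width) ** 2)
    return phi / phi.sum()

# hypothetical 2-D toy state space; the paper uses 4-D kernels
centers = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
w = np.zeros(len(centers))                  # one weight per receptive field

def td_update(s, reward, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) update of the linearly approximated value function."""
    global w
    phi = gaussian_features(s, centers, 1.0)
    phi_next = gaussian_features(s_next, centers, 1.0)
    delta = reward + gamma * w @ phi_next - w @ phi   # TD error
    w += alpha * delta * phi                          # credit the active fields
    return delta

# repeatedly reward reaching the corner state (4, 4)
for _ in range(50):
    td_update(np.array([4.0, 4.0]), 1.0, np.array([4.0, 4.0]))
```

Because the receptive fields overlap, each update generalizes to neighboring states, which is what keeps the number of trials low even for large state spaces.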

  7. Analysis of Approximations and Aperture Distortion for 3D Migration of Bistatic Radar Data with the Two-Step Approach

    Directory of Open Access Journals (Sweden)

    Zanzi Luigi

    2010-01-01

    Full Text Available The two-step approach is a fast algorithm for 3D migration originally introduced to process zero-offset seismic data. Its application to monostatic GPR (Ground Penetrating Radar) data is straightforward. A direct extension of the algorithm for the application to bistatic radar data is possible provided that the TX-RX azimuth is constant. As for the zero-offset case, the two-step operator is exactly equivalent to the one-step 3D operator for a constant velocity medium and is an approximation of the one-step 3D operator for a medium where the velocity varies vertically. Two methods are explored for handling a heterogeneous medium; both are suitable for the application of the two-step approach, and they are compared in terms of accuracy of the final 3D operator. The aperture of the two-step operator is discussed, and a solution is proposed to optimize its shape. The analysis is of interest for any NDT application where the medium is expected to be heterogeneous, or where the antenna is not in direct contact with the medium (e.g., NDT of artworks, humanitarian demining, radar with air-launched antennas).

  8. A review of the theoretical and numerical approaches to modeling skyglow: Iterative approach to RTE, MSOS, and two-stream approximation

    International Nuclear Information System (INIS)

    Kocifaj, Miroslav

    2016-01-01

    The study of diffuse light of a night sky is undergoing a renaissance due to the development of inexpensive high performance computers which can significantly reduce the time needed for accurate numerical simulations. Apart from targeted field campaigns, numerical modeling appears to be one of the most attractive and powerful approaches for predicting the diffuse light of a night sky. However, computer-aided simulation of night-sky radiances over any territory and under arbitrary conditions is a complex problem that is difficult to solve. This study addresses three concepts for modeling the artificial light propagation through a turbid stratified atmosphere. Specifically, these are the two-stream approximation, an iterative approach to the Radiative Transfer Equation (RTE), and the Method of Successive Orders of Scattering (MSOS). The principles of the methods, their strengths and weaknesses are reviewed with respect to their implications for night-light modeling in different environments. - Highlights: • Three methods for modeling night-sky radiance are reviewed. • The two-stream approximation allows for rapid calculation of radiative fluxes. • The above approach is convenient for modeling large uniformly emitting areas. • MSOS is applicable to heterogeneous deployment of well-separated cities or towns. • MSOS is generally less CPU-intensive than traditional 3D RTE.

  9. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
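The logic of an approximate Bayesian computation scheme of this kind can be sketched with a generic rejection-ABC loop on a toy model. PopSizeABC itself uses coalescent simulations and the summary statistics described above, so everything below (the model, the summaries, the helper names) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def summarize(sample):
    """Toy summary statistics; the paper uses the folded allele
    frequency spectrum and binned linkage disequilibrium instead."""
    return np.array([sample.mean(), sample.std()])

def rejection_abc(observed, simulate, prior_draw, n_sim=2000, keep=0.05):
    """Minimal rejection ABC: draw parameters from the prior, simulate
    data, and keep the draws whose summaries fall closest to the
    observed summaries."""
    s_obs = summarize(observed)
    thetas = np.array([prior_draw() for _ in range(n_sim)])
    dists = np.array([np.linalg.norm(summarize(simulate(t)) - s_obs)
                      for t in thetas])
    cutoff = np.quantile(dists, keep)
    return thetas[dists <= cutoff]              # approximate posterior sample

# toy inference problem: recover the mean of a normal with known sd
observed = rng.normal(3.0, 1.0, size=200)
post = rejection_abc(observed,
                     simulate=lambda mu: rng.normal(mu, 1.0, size=200),
                     prior_draw=lambda: rng.uniform(0.0, 6.0))
```

The accepted draws approximate the posterior; the quality of the approximation hinges on how informative the chosen summary statistics are, which is why the paper's choice of AFS and LD statistics matters.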

  10. Measurements of Atomic Rayleigh Scattering Cross-Sections: A New Approach Based on Solid Angle Approximation and Geometrical Efficiency

    Science.gov (United States)

    Rao, D. V.; Takeda, T.; Itai, Y.; Akatsuka, T.; Seltzer, S. M.; Hubbell, J. H.; Cesareo, R.; Brunetti, A.; Gigante, G. E.

    Atomic Rayleigh scattering cross-sections for low-, medium- and high-Z atoms are measured in vacuum using an X-ray tube with a secondary target as an excitation source instead of radioisotopes. Monoenergetic Kα radiation emitted from the secondary target and monoenergetic radiation produced using two secondary targets with filters coupled to an X-ray tube are compared. The Kα radiation from the second target of the system is used to excite the sample. The background has been reduced considerably, and the monochromacy is improved. Elastic scattering of the Kα X-ray line energies of the secondary target by the sample is recorded with HPGe and Si(Li) detectors. A new approach is developed to estimate the solid angle approximation and geometrical efficiency for a system with an experimental arrangement using an X-ray tube and secondary target. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced, and the optimum value is used for the experimental work. The efficiency is larger because the X-ray fluorescent source acts as a converter. Experimental results based on this system are compared with theoretical estimates, and good agreement is observed between them.

  11. Electroweak radiative corrections to e+e-→WW→4 fermions in double-pole approximation -- the RACOONWW approach

    International Nuclear Information System (INIS)

    Denner, A.; Dittmaier, S.; Roth, M.; Wackeroth, D.

    2000-01-01

    We calculate the complete O(α) electroweak radiative corrections to e+e-→WW→4f in the electroweak Standard Model in the double-pole approximation. We give analytical results for the non-factorizable virtual corrections and express the factorizable virtual corrections in terms of the known corrections to on-shell W-pair production and W decay. The calculation of the bremsstrahlung corrections, i.e., the processes e+e-→4fγ in lowest order, is based on the full matrix elements. The matching of soft and collinear singularities between virtual and real corrections is done alternatively in two different ways, namely by using a subtraction method and by applying phase-space slicing. The O(α) corrections as well as higher-order initial-state photon radiation are implemented in the Monte Carlo generator RACOONWW. Numerical results of this program are presented for the W-pair-production cross section, angular and W-invariant-mass distributions at LEP2. We also discuss the intrinsic theoretical uncertainty of our approach.

  12. "Harshlighting" small blemishes on microarrays

    Directory of Open Access Journals (Sweden)

    Wittkowski Knut M

    2005-03-01

    Full Text Available Abstract Background Microscopists are familiar with many blemishes that fluorescence images can have due to dust and debris, glass flaws, uneven distribution of fluids or surface coatings, etc. Microarray scans show similar artefacts, which affect the analysis, particularly when one tries to detect subtle changes. However, most blemishes are hard to find by the unaided eye, particularly in high-density oligonucleotide arrays (HDONAs). Results We present a method that harnesses the statistical power provided by having several HDONAs available, which are obtained under similar conditions except for the experimental factor. This method "harshlights" blemishes and renders them evident. We find empirically that about 25% of our chips are blemished, and we analyze the impact of masking them on screening for differentially expressed genes. Conclusion Experiments attempting to assess subtle expression changes should be carefully screened for blemishes on the chips. The proposed method provides investigators with a novel robust approach to improve the sensitivity of microarray analyses. By utilizing topological information to identify and mask blemishes prior to model based analyses, the method prevents artefacts from confounding the process of background correction, normalization, and summarization.
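The core idea, using replicate chips to expose spatial blemishes, can be caricatured with a per-position robust outlier test across chips. This is a minimal sketch and not the published method; `harshlight_mask` is a hypothetical name, and the real algorithm additionally exploits the spatial contiguity of blemishes.

```python
import numpy as np

def harshlight_mask(chips, z=4.0):
    """Flag pixels whose intensity deviates strongly from the
    per-position median across replicate chips.
    chips: array of shape (n_chips, rows, cols)."""
    med = np.median(chips, axis=0)                      # consensus image
    dev = chips - med                                   # per-chip deviation
    scale = 1.4826 * np.median(np.abs(dev), axis=0) + 1e-9  # robust sigma
    return np.abs(dev) > z * scale                      # True = suspect pixel

# simulated replicates with a bright blemish on the first chip
rng = np.random.default_rng(3)
chips = rng.normal(100.0, 1.0, size=(5, 20, 20))
chips[0, 5:8, 5:8] += 50.0
mask = harshlight_mask(chips)
```

Masked pixels would then be excluded before background correction and normalization, as the abstract describes.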

  13. On the group approximation errors in description of neutron slowing-down at large distances from a source. Diffusion approach

    International Nuclear Information System (INIS)

    Kulakovskij, M.Ya.; Savitskij, V.I.

    1981-01-01

    The errors in multigroup calculations of the spatial and energy distribution of the neutron flux in a fast reactor shield, caused by using the group and age approximations, are considered. It is shown that at small distances from a source the age theory describes the distribution of the slowing-down density rather well. As the distance increases, the age approximation underestimates the neutron fluxes, and the error grows quickly. At small distances from the source (up to 15 free-path lengths in graphite) the multigroup diffusion approximation describes the distribution of the slowing-down density quite satisfactorily, and the results are almost independent of the number of groups. As the distance increases, multigroup diffusion calculations considerably overestimate the slowing-down density. The conclusion is drawn that the errors proper to the group approximation are opposite in sign to the error introduced by the age approximation and to some extent compensate each other

  14. Efficient and accurate local approximations to coupled-electron pair approaches: An attempt to revive the pair natural orbital method.

    Science.gov (United States)

    Neese, Frank; Wennmohs, Frank; Hansen, Andreas

    2009-03-21

    Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) were popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second-order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work an efficient production level implementation of the closed shell CEPA and CPF methods is reported that can be applied to medium sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal/mol. Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500

  15. Identification of Putative Ortholog Gene Blocks Involved in Gestant and Lactating Mammary Gland Development: A Rodent Cross-Species Microarray Transcriptomics Approach

    Science.gov (United States)

    Rodríguez-Cruz, Maricela; Coral-Vázquez, Ramón M.; Hernández-Stengele, Gabriel; Sánchez, Raúl; Salazar, Emmanuel; Sanchez-Muñoz, Fausto; Encarnación-Guevara, Sergio; Ramírez-Salcedo, Jorge

    2013-01-01

    The mammary gland (MG) undergoes functional and metabolic changes during the transition from pregnancy to lactation, possibly by regulation of conserved genes. The objective was to elucidate orthologous genes, chromosome clusters and putative conserved transcriptional modules during MG development. We analyzed expression of 22,000 transcripts using murine microarrays and RNA samples of MG from virgin, pregnant, and lactating rats by cross-species hybridization. We identified 521 differentially expressed transcripts: upregulated in early (78%) and mid-pregnancy (89%) and early lactation (64%), but downregulated in mid-lactation (61%). Putative orthologous genes were identified. We mapped the altered genes to orthologous chromosomal locations in human and mouse. Eighteen sets of conserved genes associated with key cellular functions were revealed, and a search for conserved transcription factor binding sites suggested possible coregulation among all eight block sets of genes. This study demonstrates that heterologous array hybridization for screening of orthologous gene expression in rat revealed sets of conserved genes, arranged in chromosomal order, implicated in signaling pathways and functional ontology. The results demonstrate the power of comparative genomics and prove the feasibility of using rodent microarrays to identify putative coexpressed orthologous genes involved in the control of human mammary gland development. PMID:24288657

  16. Discovering biological progression underlying microarray samples.

    Directory of Open Access Journals (Sweden)

    Peng Qiu

    2011-04-01

    Full Text Available In biological systems that undergo processes such as differentiation, a clear concept of progression exists. We present a novel computational approach, called Sample Progression Discovery (SPD), to discover patterns of biological progression underlying microarray gene expression data. SPD assumes that individual samples of a microarray dataset are related by an unknown biological process (i.e., differentiation, development, cell cycle, or disease progression) and that each sample represents one unknown point along the progression of that process. SPD aims to organize the samples in a manner that reveals the underlying progression and to simultaneously identify subsets of genes that are responsible for that progression. We demonstrate the performance of SPD on a variety of microarray datasets that were generated by sampling a biological process at different points along its progression, without providing SPD any information of the underlying process. When applied to a cell cycle time series microarray dataset, SPD was not provided any prior knowledge of samples' time order or of which genes are cell-cycle regulated, yet SPD recovered the correct time order and identified many genes that have been associated with the cell cycle. When applied to B-cell differentiation data, SPD recovered the correct order of stages of normal B-cell differentiation and the linkage between preB-ALL tumor cells and their cell of origin, preB. When applied to mouse embryonic stem cell differentiation data, SPD uncovered a landscape of ESC differentiation into various lineages and genes that represent both generic and lineage-specific processes. When applied to a prostate cancer microarray dataset, SPD identified gene modules that reflect a progression consistent with disease stages. SPD may be best viewed as a novel tool for synthesizing biological hypotheses because it provides a likely biological progression underlying a microarray dataset and, perhaps more importantly, the

  17. Principles of gene microarray data analysis.

    Science.gov (United States)

    Mocellin, Simone; Rossi, Carlo Riccardo

    2007-01-01

    The development of several gene expression profiling methods, such as comparative genomic hybridization (CGH), differential display, serial analysis of gene expression (SAGE), and gene microarray, together with the sequencing of the human genome, has provided an opportunity to monitor and investigate the complex cascade of molecular events leading to tumor development and progression. The availability of such large amounts of information has shifted the attention of scientists towards a nonreductionist approach to biological phenomena. High throughput technologies can be used to follow changing patterns of gene expression over time. Among them, gene microarray has become prominent because it is easier to use, does not require large-scale DNA sequencing, and allows for the parallel quantification of thousands of genes from multiple samples. Gene microarray technology is rapidly spreading worldwide and has the potential to drastically change the therapeutic approach to patients affected by tumors. Therefore, it is of paramount importance for both researchers and clinicians to know the principles underlying the analysis of the huge amount of data generated with microarray technology.

  18. Non linear shock wave propagation in heterogeneous fluids: a numerical approach beyond the parabolic approximation with application to sonic boom.

    Science.gov (United States)

    Dagrau, Franck; Coulouvrat, François; Marchiano, Régis; Héron, Nicolas

    2008-06-01

    Dassault Aviation as a civil aircraft manufacturer is studying the feasibility of a supersonic business jet with the target of an "acceptable" sonic boom at the ground level, and in particular in case of focusing. A sonic boom computational process has been performed, that takes into account meteorological effects and aircraft manoeuvres. Turn manoeuvres and aircraft acceleration create zones of convergence of rays (caustics) which are the place of sound amplification. Therefore two elements have to be evaluated: firstly the geometrical position of the caustics, and secondly the noise level in the neighbourhood of the caustics. The modelling of the sonic boom propagation is based essentially on the assumptions of geometrical acoustics. Ray tracing is obtained according to Fermat's principle as paths that minimise the propagation time between the source (the aircraft) and the receiver. Wave amplitude and time waveform result from the solution of the inviscid Burgers' equation written along each individual ray. The "age variable" measuring the cumulative nonlinear effects is linked to the ray tube area. Caustics are located as the place where the ray tube area vanishes. Since geometrical acoustics does not take into account diffraction effects, it breaks down in the neighbourhood of caustics where it would predict unphysical infinite pressure amplitude. The aim of this study is to describe an original method for computing the focused noise level. The approach involves three main steps that can be summarised as follows. The propagation equation is solved by a forward marching procedure split into three successive steps: linear propagation in a homogeneous medium, linear perturbation due to the weak heterogeneity of the medium, and non-linear effects. The first step is solved using an "exact" angular spectrum algorithm. Parabolic approximation is applied only for the weak perturbation due to the heterogeneities. 
    Finally, nonlinear effects are computed by solving the

  19. cDNA microarray screening in food safety

    International Nuclear Information System (INIS)

    Roy, Sashwati; Sen, Chandan K.

    2006-01-01

    The cDNA microarray technology and related bioinformatics tools present a wide range of novel application opportunities. The technology may be productively applied to address food safety. In this mini-review article, we present an update highlighting the late-breaking discoveries that demonstrate the vitality of cDNA microarray technology as a tool to analyze food safety with reference to microbial pathogens and genetically modified foods. In order to bring the microarray technology to mainstream food safety, it is important to develop robust user-friendly tools that may be applied in a field setting. In addition, there needs to be a standardized process for regulatory agencies to interpret and act upon microarray-based data. The cDNA microarray approach is an emergent technology in diagnostics. Its value lies in being able to provide complementary molecular insight when employed in addition to traditional tests for food safety, as part of a more comprehensive battery of tests.

  20. Microintaglio Printing for Soft Lithography-Based in Situ Microarrays

    Directory of Open Access Journals (Sweden)

    Manish Biyani

    2015-07-01

    Full Text Available Advances in lithographic approaches to fabricating bio-microarrays have been extensively explored over the last two decades. However, the need for pattern flexibility, a high density, a high resolution, affordability and on-demand fabrication is promoting the development of unconventional routes for microarray fabrication. This review highlights the development and uses of a new molecular lithography approach, called "microintaglio printing technology", for large-scale bio-microarray fabrication using a microreactor array (µRA)-based chip consisting of uniformly-arranged, femtoliter-size µRA molds. In this method, a single-molecule-amplified DNA microarray pattern is self-assembled onto a µRA mold and subsequently converted into a messenger RNA or protein microarray pattern by simultaneously producing and transferring (immobilizing) a messenger RNA or a protein from a µRA mold to a glass surface. Microintaglio printing allows the self-assembly and patterning of in situ-synthesized biomolecules into high-density (kilo- to giga-density), ordered arrays on a chip surface with µm-order precision. This holistic aim, which is difficult to achieve using conventional printing and microarray approaches, is expected to revolutionize and reshape proteomics. This review is not written comprehensively, but rather substantively, highlighting the versatility of microintaglio printing for developing a prerequisite platform for microarray technology for the postgenomic era.

  1. New approach to the approximation of «dose – effect» dependence during the human somatic cells irradiation

    Directory of Open Access Journals (Sweden)

    V. F. Chekhun

    2013-09-01

    Full Text Available New data were obtained on the approximation of the experimental cytogenetic "dose - effect" dependence using a spline regression model, improving the biological dosimetry of human radiological exposure. The improvement is achieved by reducing the error in the determination of the absorbed dose compared with the traditional linear and linear-quadratic models, and makes it possible to predict the behaviour of the dose curves at the plateau.

  2. Evaluation of current and new biomarkers in severe preeclampsia: a microarray approach reveals the VSIG4 gene as a potential blood biomarker.

    Directory of Open Access Journals (Sweden)

    Julien Textoris

    Full Text Available Preeclampsia is a placental disease characterized by hypertension and proteinuria in pregnant women, and it is associated with a high maternal and neonatal morbidity. However, circulating biomarkers that are able to predict the prognosis of preeclampsia are lacking. Thirty-eight women were included in the current study: 19 patients with preeclampsia (13 with severe and 6 with non-severe preeclampsia) and 19 gestational age-matched women with normal pregnancies as controls. We measured circulating factors that are associated with the coagulation pathway (including fibrinogen, fibronectin, factor VIII, antithrombin, protein S and protein C), endothelial activation (such as soluble endoglin and CD146), and the release of total and platelet-derived microparticles. These markers enabled us to discriminate the preeclampsia condition from a normal pregnancy but were not sufficient to distinguish severe from non-severe preeclampsia. We then used a microarray to study the transcriptional signature of blood samples. Preeclampsia patients exhibited a specific transcriptional program distinct from that of the control group of women. Interestingly, we also identified a severity-related transcriptional signature. Functional annotation of the upmodulated signature in severe preeclampsia highlighted two main functions related to "ribosome" and "complement". Finally, we identified 8 genes that were specifically upmodulated in severe preeclampsia compared with non-severe preeclampsia and the normotensive controls. Among these genes, we identified VSIG4 as a potential diagnostic marker of severe preeclampsia. The determination of this gene may improve the prognostic assessment of severe preeclampsia.

  3. “Positive Regulation of RNA Metabolic Process” Ontology Group Highly Regulated in Porcine Oocytes Matured In Vitro: A Microarray Approach

    Directory of Open Access Journals (Sweden)

    Piotr Celichowski

    2018-01-01

    Full Text Available The growth and development of cumulus-oocyte complexes (COCs) during folliculogenesis and oogenesis are accompanied by changes involving the synthesis and accumulation of large amounts of RNA and proteins. In this study, the transcriptomic profile of genes involved in oocyte RNA synthesis in relation to in vitro maturation in pigs was investigated for the first time. The RNA was isolated from oocytes before and after in vitro maturation (IVM). Interactions between differentially expressed genes/proteins belonging to the "positive regulation of RNA metabolic process" ontology group were investigated with STRING10 software. Using microarray assays, we found expression of 12,258 porcine transcripts. Genes with a fold change higher than 2 and a corrected p value lower than 0.05 were considered differentially expressed. The ontology group "positive regulation of RNA metabolic process" involved differential expression of AR, INHBA, WWTR1, FOS, MEF2C, VEGFA, IKZF2, IHH, RORA, MAP3K1, NFAT5, SMARCA1, EGR1, EGR2, MITF, SMAD4, APP, and NR5A1 transcripts. Since all of the presented genes were downregulated after IVM, we suggest that they might be significantly involved in the regulation of RNA synthesis before the oocyte reaches the MII stage. The higher expression of "RNA metabolic process"-related genes before IVM indicates that they might be recognized as important markers and a specific "transcriptomic fingerprint" of RNA template accumulation and storage for further porcine embryo growth and development.

  4. Identifying Fishes through DNA Barcodes and Microarrays.

    Directory of Open Access Journals (Sweden)

    Marc Kochzius

    2010-09-01

    Full Text Available International fish trade reached an import value of 62.8 billion Euro in 2006, of which 44.6% is covered by the European Union. Species identification is a key problem throughout the life cycle of fishes: from eggs and larvae to adults in fisheries research and control, as well as processed fish products in consumer protection. This study aims to evaluate the applicability of the three mitochondrial genes 16S rRNA (16S), cytochrome b (cyt b), and cytochrome oxidase subunit I (COI) for the identification of 50 European marine fish species by combining techniques of "DNA barcoding" and microarrays. In a DNA barcoding approach, Neighbour-Joining (NJ) phylogenetic trees of 369 16S, 212 cyt b, and 447 COI sequences indicated that cyt b and COI are suitable for unambiguous identification, whereas 16S failed to discriminate closely related flatfish and gurnard species. In the course of probe design for DNA microarray development, each of the markers yielded a high number of potentially species-specific probes in silico, although many of them were rejected based on microarray hybridisation experiments. None of the markers provided probes to discriminate the sibling flatfish and gurnard species. However, since 16S probes were less negatively influenced by the "position of label" effect and showed the lowest rejection rate and the highest mean signal intensity, 16S is more suitable for DNA microarray probe design than cyt b and COI. The large portion of rejected COI probes after hybridisation experiments (>90%) renders this DNA barcoding marker rather unsuitable for the high-throughput technology. Based on these data, a DNA microarray containing 64 functional oligonucleotide probes for the identification of 30 out of the 50 fish species investigated was developed. It represents the next step towards an automated and easy-to-handle method to identify fish, ichthyoplankton, and fish products.

  5. Securing a cyber physical system in nuclear power plants using least square approximation and computational geometric approach

    International Nuclear Information System (INIS)

    Gawand, Hemangi Laxman; Bhattacharjee, A. K.; Roy, Kallol

    2017-01-01

    In industrial plants such as nuclear power plants, system operations are performed by embedded controllers orchestrated by Supervisory Control and Data Acquisition (SCADA) software. A targeted attack (also termed a control aware attack) on the controller/SCADA software can lead a control system to operate in an unsafe mode or sometimes to complete shutdown of the plant. Such malware attacks can result in tremendous cost to the organization for recovery, cleanup, and maintenance activity. SCADA systems in operational mode generate huge log files. These files are useful in analysis of the plant behavior and diagnostics during an ongoing attack. However, they are bulky and difficult for manual inspection. Data mining techniques such as least squares approximation and computational methods can be used in the analysis of logs and to take proactive actions when required. This paper explores methodologies and algorithms so as to develop an effective monitoring scheme against control aware cyber attacks. It also explains soft computation techniques such as the computational geometric method and least squares approximation that can be effective in monitor design. This paper provides insights into diagnostic monitoring of its effectiveness by attack simulations on a four-tank model and using computation techniques to diagnose it. Cyber security of instrumentation and control systems used in nuclear power plants is of paramount importance and hence could be a possible target of such applications.

  6. Securing a cyber physical system in nuclear power plants using least square approximation and computational geometric approach

    Energy Technology Data Exchange (ETDEWEB)

    Gawand, Hemangi Laxman [Homi Bhabha National Institute, Computer Section, BARC, Mumbai (India); Bhattacharjee, A. K. [Reactor Control Division, BARC, Mumbai (India); Roy, Kallol [BHAVINI, Kalpakkam (India)

    2017-04-15

    In industrial plants such as nuclear power plants, system operations are performed by embedded controllers orchestrated by Supervisory Control and Data Acquisition (SCADA) software. A targeted attack (also termed a control aware attack) on the controller/SCADA software can lead a control system to operate in an unsafe mode or sometimes to complete shutdown of the plant. Such malware attacks can result in tremendous cost to the organization for recovery, cleanup, and maintenance activity. SCADA systems in operational mode generate huge log files. These files are useful in analysis of the plant behavior and diagnostics during an ongoing attack. However, they are bulky and difficult for manual inspection. Data mining techniques such as least squares approximation and computational methods can be used in the analysis of logs and to take proactive actions when required. This paper explores methodologies and algorithms so as to develop an effective monitoring scheme against control aware cyber attacks. It also explains soft computation techniques such as the computational geometric method and least squares approximation that can be effective in monitor design. This paper provides insights into diagnostic monitoring of its effectiveness by attack simulations on a four-tank model and using computation techniques to diagnose it. Cyber security of instrumentation and control systems used in nuclear power plants is of paramount importance and hence could be a possible target of such applications.

  7. Securing a Cyber Physical System in Nuclear Power Plants Using Least Square Approximation and Computational Geometric Approach

    Directory of Open Access Journals (Sweden)

    Hemangi Laxman Gawand

    2017-04-01

    Full Text Available In industrial plants such as nuclear power plants, system operations are performed by embedded controllers orchestrated by Supervisory Control and Data Acquisition (SCADA) software. A targeted attack (also termed a control aware attack) on the controller/SCADA software can lead a control system to operate in an unsafe mode or sometimes to complete shutdown of the plant. Such malware attacks can result in tremendous cost to the organization for recovery, cleanup, and maintenance activity. SCADA systems in operational mode generate huge log files. These files are useful in analysis of the plant behavior and diagnostics during an ongoing attack. However, they are bulky and difficult for manual inspection. Data mining techniques such as least squares approximation and computational methods can be used in the analysis of logs and to take proactive actions when required. This paper explores methodologies and algorithms so as to develop an effective monitoring scheme against control aware cyber attacks. It also explains soft computation techniques such as the computational geometric method and least squares approximation that can be effective in monitor design. This paper provides insights into diagnostic monitoring of its effectiveness by attack simulations on a four-tank model and using computation techniques to diagnose it. Cyber security of instrumentation and control systems used in nuclear power plants is of paramount importance and hence could be a possible target of such applications.
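As a minimal illustration of the least-squares idea (not the paper's actual monitoring algorithm), one can fit a low-order polynomial trend to a logged process variable and flag samples whose residual exceeds a robust threshold. The synthetic tank-level signal, the injected spike, and the threshold factor below are all assumptions of the sketch:

```python
import numpy as np

def detect_anomalies(t, y, degree=3, k=6.0):
    """Fit a least-squares polynomial trend to a logged process variable
    and flag samples whose residual exceeds k robust standard deviations."""
    coeffs = np.polyfit(t, y, degree)
    residuals = y - np.polyval(coeffs, t)
    # Median absolute deviation gives an outlier-resistant noise scale.
    sigma = 1.4826 * np.median(np.abs(residuals - np.median(residuals)))
    return np.abs(residuals) > k * sigma

# Synthetic tank-level log: slow trend plus a small oscillation,
# with one injected spike standing in for a control-aware attack.
t = np.linspace(0.0, 10.0, 200)
y = 0.5 * t + 0.05 * np.sin(5.0 * t)
y[150] += 5.0
flags = detect_anomalies(t, y)
```

The robust (MAD-based) scale estimate matters here: an ordinary standard deviation would be inflated by the very attack samples one is trying to detect.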

  8. DNA microarrays : a molecular cloning manual

    National Research Council Canada - National Science Library

    Sambrook, Joseph; Bowtell, David

    2002-01-01

    .... DNA Microarrays provides authoritative, detailed instruction on the design, construction, and applications of microarrays, as well as comprehensive descriptions of the software tools and strategies...

  9. Diophantine approximation

    CERN Document Server

    Schmidt, Wolfgang M

    1980-01-01

    "In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approximations to algebraic numbers. The present volume is an expanded and updated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field."(Bull.LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)

  10. A systems biology approach to the pathogenesis of obesity-related nonalcoholic fatty liver disease using reverse phase protein microarrays for multiplexed cell signaling analysis.

    Science.gov (United States)

    Calvert, Valerie S; Collantes, Rochelle; Elariny, Hazem; Afendy, Arian; Baranova, Ancha; Mendoza, Michael; Goodman, Zachary; Liotta, Lance A; Petricoin, Emanuel F; Younossi, Zobair M

    2007-07-01

    Nonalcoholic fatty liver disease (NAFLD) is a common cause of chronic liver disease. Omental adipose tissue, a biologically active organ secreting adipokines and cytokines, may play a role in the development of NAFLD. We tested this hypothesis with reverse-phase protein microarrays (RPA) for multiplexed cell signaling analysis of adipose tissue from patients with NAFLD. Omental adipose tissue was obtained from 99 obese patients. Liver biopsies obtained at the time of surgery were all read by the same hepatopathologist. Adipose tissue was exposed to rapid pressure cycles to extract protein lysates. RPA was used to investigate intracellular signaling. Analysis of 54 different kinase substrates and cell signaling endpoints showed that an insulin signaling pathway is deranged in different locations in NAFLD patients. Furthermore, components of insulin receptor-mediated signaling differentiate most of the conditions on the NAFLD spectrum. For example, PKA (protein kinase A) and AKT/mTOR (protein kinase B/mammalian target of rapamycin) pathway derangement accurately discriminates patients with NASH from those with the non-progressive forms of NAFLD. PKC (protein kinase C) delta, AKT, and SHC phosphorylation changes occur in patients with simple steatosis. Amounts of FKHR (forkhead factor Foxo1) phosphorylated at the S256 residue were significantly correlated with the AST/ALT ratio in all morbidly obese patients. Furthermore, amounts of cleaved caspase 9 and pp90RSK S380 were positively correlated in patients with NASH. Specific insulin pathway signaling events are altered in the adipose tissue of patients with NASH compared with patients with nonprogressive forms of NAFLD. These findings provide evidence for the role of omental fat in the pathogenesis, and potentially, the progression of NAFLD.

  11. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  12. Robust Spatial Approximation of Laser Scanner Point Clouds by Means of Free-form Curve Approaches in Deformation Analysis

    Science.gov (United States)

    Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo

    2016-03-01

    In many geodetic engineering applications it is necessary to describe a measured data point cloud, obtained e.g. by laser scanner, by means of free-form curves or surfaces, e.g. with B-splines as basis functions. The state-of-the-art approaches to determining B-splines yield results which are seriously affected by the occurrence of data gaps and outliers. Optimal and robust B-spline fitting depends, however, on optimal selection of the knot vector. Hence we combine in our approach Monte Carlo methods and the location and curvature of the measured data in order to determine the knot vector of the B-spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of control points. The above-mentioned approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.
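One common way to realize the robust M-estimation of control points is iteratively reweighted least squares. The sketch below uses SciPy's `make_lsq_spline` with a fixed uniform knot vector (rather than the paper's Monte Carlo knot selection) and Huber-type weights; the test signal and outlier are invented:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def robust_bspline(x, y, n_interior=8, k=3, iters=5, c=1.345):
    """Cubic B-spline fit by iteratively reweighted least squares with
    Huber-type weights, down-weighting outliers in the point cloud."""
    t = np.r_[[x[0]] * (k + 1),
              np.linspace(x[0], x[-1], n_interior + 2)[1:-1],
              [x[-1]] * (k + 1)]                   # clamped knot vector
    w = np.ones_like(x)
    for _ in range(iters):
        spl = make_lsq_spline(x, y, t, k=k, w=w)
        r = y - spl(x)
        s = 1.4826 * np.median(np.abs(r)) + 1e-12  # robust residual scale
        u = np.abs(r) / (c * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)       # Huber weight function
    return spl

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x)
y[50] += 3.0  # simulated gross outlier
spl = robust_bspline(x, y)
```

After a few reweighting iterations the outlier's weight is driven close to zero, so the fitted curve follows the underlying signal rather than the contaminated point.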

  13. Mining meiosis and gametogenesis with DNA microarrays.

    Science.gov (United States)

    Schlecht, Ulrich; Primig, Michael

    2003-04-01

    Gametogenesis is a key developmental process that involves complex transcriptional regulation of numerous genes including many that are conserved between unicellular eukaryotes and mammals. Recent expression-profiling experiments using microarrays have provided insight into the co-ordinated transcription of several hundred genes during mitotic growth and meiotic development in budding and fission yeast. Furthermore, microarray-based studies have identified numerous loci that are regulated during the cell cycle or expressed in a germ-cell specific manner in eukaryotic model systems like Caenorhabditis elegans, Mus musculus as well as Homo sapiens. The unprecedented amount of information produced by post-genome biology has spawned novel approaches to organizing biological knowledge using currently available information technology. This review outlines experiments that contribute to an emerging comprehensive picture of the molecular machinery governing sexual reproduction in eukaryotes.

  14. Cell-Based Microarrays for In Vitro Toxicology

    Science.gov (United States)

    Wegener, Joachim

    2015-07-01

    DNA/RNA and protein microarrays have proven their outstanding bioanalytical performance throughout the past decades, given the unprecedented level of parallelization by which molecular recognition assays can be performed and analyzed. Cell microarrays (CMAs) make use of similar construction principles. They are applied to profile a given cell population with respect to the expression of specific molecular markers and also to measure functional cell responses to drugs and chemicals. This review focuses on the use of cell-based microarrays for assessing the cytotoxicity of drugs, toxins, or chemicals in general. It also summarizes CMA construction principles with respect to the cell types that are used for such microarrays, the readout parameters to assess toxicity, and the various formats that have been established and applied. The review ends with a critical comparison of CMAs and well-established microtiter plate (MTP) approaches.

  15. Protein microarray: sensitive and effective immunodetection for drug residues

    Directory of Open Access Journals (Sweden)

    Zer Cindy

    2010-02-01

    Full Text Available Abstract Background Veterinary drugs such as clenbuterol (CL) and sulfamethazine (SM2) are low molecular weight ( Results The artificial antigens were spotted on microarray slides. Standard concentrations of the compounds were added to compete with the spotted antigens for binding to the antisera to determine the IC50. Our microarray assay showed that the IC50 was 39.6 ng/ml for CL and 48.8 ng/ml for SM2, while the traditional competitive indirect ELISA (ci-ELISA) showed an IC50 of 190.7 ng/ml for CL and 156.7 ng/ml for SM2. We further validated the two methods with CL-fortified chicken muscle tissues, and the protein microarray assay showed 90% recovery while the ci-ELISA had a 76% recovery rate. When tested with CL-fed chicken muscle tissues, the protein microarray assay had higher sensitivity (0.9 ng/g) than the ci-ELISA (0.1 ng/g) for detection of CL residues. Conclusions The protein microarrays showed 4.5 and 3.5 times lower IC50 than the ci-ELISA detection for CL and SM2, respectively, suggesting that immunodetection of small molecules with protein microarrays is a better approach than the traditional ELISA technique.

  16. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    Science.gov (United States)

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  17. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    Science.gov (United States)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  18. An approximate inversion method of geoelectrical sounding data using linear and bayesian statistical approaches. Examples of Tritrivakely volcanic lake and Mahitsy area (central part of Madagascar)

    International Nuclear Information System (INIS)

    Ranaivo Nomenjanahary, F.; Rakoto, H.; Ratsimbazafy, J.B.

    1994-08-01

    This paper is concerned with resistivity sounding measurements performed at a single site (vertical sounding) or at several sites (profiles) within a bounded area. The objective is to present accurate information about the study area and to estimate the likelihood of the produced quantitative models. The achievement of this objective obviously requires quite relevant data and processing methods. It also requires interpretation methods which should take into account the probable effect of a heterogeneous structure. Faced with such difficulties, the interpretation of resistivity sounding data inevitably involves the use of inversion methods. We suggest starting the interpretation in a simple situation (1-D approximation), and using the rough but correct model obtained as an a priori model for any more refined interpretation. Related to this point of view, special attention should be paid to the inverse problem applied to resistivity sounding data. This inverse problem is nonlinear, whereas linearity is inherent in the functional response used to describe the physical experiment. Two different approaches are used to build an approximate but higher-dimensional inversion of geoelectrical data: the linear approach and the Bayesian statistical approach. Some illustrations of their application to resistivity sounding data acquired at Tritrivakely volcanic lake (single site) and in the Mahitsy area (several sites) are given. (author). 28 refs, 7 figs

  19. A new approach to estimate the geometrical factors, solid angle approximation, geometrical efficiency and their use in basic interaction cross section measurements

    CERN Document Server

    Rao, D V; Brunetti, A; Gigante, G E; Takeda, T; Itai, Y; Akatsuka, T

    2002-01-01

    A new approach is developed to estimate the geometrical factors, solid angle approximation and geometrical efficiency for a system with experimental arrangements using an X-ray tube and a secondary target as an excitation source, in order to produce nearly monoenergetic K alpha radiation to excite the sample. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced, and the optimum value is used for the experimental work. (authors)

  20. A new approach to estimate the geometrical factors, solid angle approximation, geometrical efficiency and their use in basic interaction cross section measurements

    Energy Technology Data Exchange (ETDEWEB)

    Rao, D.V.; Cesareo, R.; Brunetti, A. [Sassari University, Istituto di Matematica e Fisica (Italy); Gigante, G.E. [Roma Universita, Dipt. di Fisica (Italy); Takeda, T.; Itai, Y. [Tsukuba Univ., Ibaraki (Japan). Inst. of Clinical Medicine; Akatsuka, T. [Yamagata Univ., Yonezawa (Japan). Faculty of Engineering

    2002-10-01

    A new approach is developed to estimate the geometrical factors, solid angle approximation and geometrical efficiency for a system with experimental arrangements using an X-ray tube and a secondary target as an excitation source, in order to produce nearly monoenergetic K{alpha} radiation to excite the sample. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced, and the optimum value is used for the experimental work. (authors)

  1. A new approach to estimate the geometrical factors, solid angle approximation, geometrical efficiency and their use in basic interaction cross section measurements

    Science.gov (United States)

    Rao, D. V.; Cesareo, R.; Brunetti, A.; Gigante, G. E.; Takeda, T.; Itai, Y.; Akatsuka, T.

    2002-10-01

    A new approach is developed to estimate the geometrical factors, solid angle approximation and geometrical efficiency for a system with experimental arrangements using an X-ray tube and a secondary target as an excitation source, in order to produce nearly monoenergetic Kα radiation to excite the sample. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced, and the optimum value is used for the experimental work.
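For the simplest geometry, a point source on the axis of a single circular aperture, the solid angle has the standard closed form Ω = 2π(1 − d/√(d² + r²)). The paper's two-collimator arrangement is more involved, but the sketch below (with invented dimensions) illustrates the quantities being estimated:

```python
import math

def solid_angle_disk(radius, distance):
    """On-axis solid angle (in steradians) subtended by a circular
    aperture of the given radius at the given axial distance."""
    return 2.0 * math.pi * (1.0 - distance / math.hypot(distance, radius))

def geometric_efficiency(radius, distance):
    """Fraction of isotropically emitted photons that pass the aperture."""
    return solid_angle_disk(radius, distance) / (4.0 * math.pi)

# A 5 mm radius collimator aperture placed 100 mm from the source:
omega = solid_angle_disk(5.0, 100.0)
eff = geometric_efficiency(5.0, 100.0)
```

For distances much larger than the aperture radius this reduces to the familiar small-angle estimate Ω ≈ πr²/d², which is a quick sanity check on the exact expression.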

  2. Spot detection and image segmentation in DNA microarray data.

    Science.gov (United States)

    Qin, Li; Rueda, Luis; Ali, Adnan; Ngom, Alioune

    2005-01-01

    Following the invention of microarrays in 1994, the development and applications of this technology have grown exponentially. The numerous applications of microarray technology include clinical diagnosis and treatment, drug design and discovery, tumour detection, and environmental health research. One of the key issues in the experimental approaches utilising microarrays is to extract quantitative information from the spots, which represent genes in a given experiment. For this process, the initial stages are important and they influence future steps in the analysis. Identifying the spots and separating the background from the foreground is a fundamental problem in DNA microarray data analysis. In this review, we present an overview of state-of-the-art methods for microarray image segmentation. We discuss the foundations of the circle-shaped approach, adaptive shape segmentation, histogram-based methods and the recently introduced clustering-based techniques. We analytically show that clustering-based techniques are equivalent to the one-dimensional, standard k-means clustering algorithm that utilises the Euclidean distance.
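The equivalence noted above can be made concrete with a tiny 1-D k-means segmenter: cluster the pixel intensities of a spot's sub-image into two groups and take the brighter cluster as foreground. The 8×8 synthetic spot below is an invented example, not data from a real array:

```python
import numpy as np

def segment_spot(pixels, iters=20):
    """Two-class 1-D k-means (Lloyd's algorithm) on pixel intensities;
    returns a boolean foreground mask for the brighter cluster."""
    data = pixels.ravel().astype(float)
    c = np.array([data.min(), data.max()])  # initial centroids
    for _ in range(iters):
        # Assign each pixel to the nearest centroid (Euclidean in 1-D).
        labels = np.abs(data[:, None] - c[None, :]).argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                c[j] = data[labels == j].mean()
    return (labels == c.argmax()).reshape(pixels.shape)

# Synthetic 8x8 spot: a bright disk on a dark background.
yy, xx = np.mgrid[:8, :8]
img = np.where((yy - 3.5) ** 2 + (xx - 3.5) ** 2 < 6, 200.0, 30.0)
mask = segment_spot(img)
```

Real spot images add gradients, noise, and artifacts, which is why adaptive-shape and histogram-based refinements exist, but the clustering core is exactly this.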

  3. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
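The subspace-via-sampling idea belongs to the same family as the classical Nyström low-rank approximation of a kernel matrix. The generic sketch below is not the authors' AKCL algorithm; it simply shows how m sampled landmark points let one approximate an RBF kernel matrix without ever forming the full n × n matrix exactly:

```python
import numpy as np

def rbf(X, Y, gamma=0.05):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, m, gamma=0.05, seed=0):
    """Nystrom approximation K ~ C W^+ C^T from m landmark points,
    needing only an n x m slice of the kernel matrix."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf(X, X[idx], gamma)   # n x m kernel slice
    W = C[idx]                  # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(1).normal(size=(300, 5))
K = rbf(X, X)
K_approx = nystrom(X, m=150)
err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

In a competitive-learning setting the payoff is that prototype updates can be carried out against the m-dimensional sampled representation instead of the full kernel matrix.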

  4. Direct calibration of PICKY-designed microarrays

    Directory of Open Access Journals (Sweden)

    Ronald Pamela C

    2009-10-01

    Full Text Available Abstract Background Few microarrays have been quantitatively calibrated to identify optimal hybridization conditions because it is difficult to precisely determine the hybridization characteristics of a microarray using biologically variable cDNA samples. Results Using synthesized samples with known concentrations of specific oligonucleotides, a series of microarray experiments was conducted to evaluate microarrays designed by PICKY, an oligo microarray design software tool, and to test a direct microarray calibration method based on the PICKY-predicted, thermodynamically closest nontarget information. The complete set of microarray experiment results is archived in the GEO database with series accession number GSE14717. Additional data files and Perl programs described in this paper can be obtained from the website http://www.complex.iastate.edu under the PICKY Download area. Conclusion PICKY-designed microarray probes are highly reliable over a wide range of hybridization temperatures and sample concentrations. The microarray calibration method reported here allows researchers to experimentally optimize their hybridization conditions. Because this method is straightforward, uses existing microarrays and relatively inexpensive synthesized samples, it can be used by any lab that uses microarrays designed by PICKY. In addition, other microarrays can be reanalyzed by PICKY to obtain the thermodynamically closest nontarget information for calibration.

  5. Current Knowledge on Microarray Technology - An Overview

    African Journals Online (AJOL)

    Erah

    This paper reviews basics and updates of each microarray technology and serves to .... through protein microarrays. Protein microarrays also known as protein chips are nothing but grids that ... conditioned media, patient sera, plasma and urine. Clontech ... based antibody arrays) is similar to membrane-based antibody ...

  6. Diagnostic and analytical applications of protein microarrays

    DEFF Research Database (Denmark)

    Dufva, Hans Martin; Christensen, C.B.V.

    2005-01-01

    DNA microarrays have changed the field of biomedical sciences over the past 10 years. For several reasons, antibody and other protein microarrays have not developed at the same rate. However, protein and antibody arrays have emerged as a powerful tool to complement DNA microarrays during the post...

  7. An Approach Using a 1D Hydraulic Model, Landsat Imaging and Generalized Likelihood Uncertainty Estimation for an Approximation of Flood Discharge

    Directory of Open Access Journals (Sweden)

    Seung Oh Lee

    2013-10-01

    Full Text Available Collection and investigation of flood information are essential to understand the nature of floods, but this has proved difficult in data-poor environments, or in developing or under-developed countries, due to economic and technological limitations. The development of remote sensing data, GIS, and modeling techniques has, therefore, proved to supply useful tools in the analysis of the nature of floods. Accordingly, this study attempts to estimate flood discharge using the generalized likelihood uncertainty estimation (GLUE) methodology and a 1D hydraulic model, with remote sensing data and topographic data, under the assumed condition that there is no gauge station on the Missouri River, Nebraska, and the Wabash River, Indiana, in the United States. The results show that the use of Landsat imagery leads to a better discharge approximation on a large-scale reach than on a small-scale one. Discharge approximation using GLUE depended on the selection of likelihood measures. Consideration of physical conditions in the study reaches could, therefore, contribute to an appropriate selection of informal likelihood measures. The river discharge assessed using Landsat imagery and the GLUE methodology could be useful in supplementing flood information for flood risk management at a planning level in ungauged basins. However, it should be noted that applying this approach in real time might be difficult due to the GLUE procedure.
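Stripped to its essentials, GLUE samples parameter sets, scores each with an informal likelihood against observations, discards non-behavioural sets, and forms likelihood-weighted predictions. The toy stage-discharge model, the "observations", and the likelihood choice below are all invented for illustration and stand in for the 1D hydraulic model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rating model Q = a * h**b with uncertain parameters (a, b).
h_obs = np.array([1.0, 1.5, 2.0, 2.5])      # observed stage (m)
q_obs = np.array([10.0, 19.0, 30.0, 42.0])  # "observed" discharge (m^3/s)

# Monte Carlo sampling of the parameter space.
a = rng.uniform(5.0, 15.0, 5000)
b = rng.uniform(0.8, 2.0, 5000)
q_sim = a[:, None] * h_obs[None, :] ** b[:, None]

# Informal likelihood: inverse mean squared error; keep the best 5%
# of parameter sets as "behavioural".
err = ((q_sim - q_obs) ** 2).mean(axis=1)
L = 1.0 / err
behavioural = L > np.quantile(L, 0.95)
w = L[behavioural] / L[behavioural].sum()

# Likelihood-weighted discharge estimate at an unobserved stage of 3 m.
q3 = a[behavioural] * 3.0 ** b[behavioural]
estimate = float((w * q3).sum())
```

The key GLUE traits survive even in this toy: the likelihood is informal (its choice matters, as the abstract notes), and the prediction is an ensemble statistic over all behavioural parameter sets rather than a single best fit.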

  8. Tight-binding approximations to time-dependent density functional theory — A fast approach for the calculation of electronically excited states

    Energy Technology Data Exchange (ETDEWEB)

    Rüger, Robert, E-mail: rueger@scm.com [Scientific Computing & Modelling NV, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands); Department of Theoretical Chemistry, Vrije Universiteit Amsterdam, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands); Wilhelm-Ostwald-Institut für Physikalische und Theoretische Chemie, Linnéstr. 2, 04103 Leipzig (Germany); Lenthe, Erik van [Scientific Computing & Modelling NV, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands); Heine, Thomas [Wilhelm-Ostwald-Institut für Physikalische und Theoretische Chemie, Linnéstr. 2, 04103 Leipzig (Germany); Visscher, Lucas [Department of Theoretical Chemistry, Vrije Universiteit Amsterdam, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands)

    2016-05-14

    We propose a new method for calculating electronically excited states that combines a density functional theory based ground state calculation with a linear response treatment employing the approximations used in the time-dependent density functional based tight binding (TD-DFTB) approach. The new method, termed TD-DFT+TB, does not rely on the DFTB parametrization and is therefore applicable to systems involving any combination of elements. We show that the new method yields UV/Vis absorption spectra in excellent agreement with computationally much more expensive TD-DFT calculations. Errors in vertical excitation energies are reduced by a factor of two compared to TD-DFTB.

  9. Cross-platform analysis of cancer microarray data improves gene expression based classification of phenotypes

    Directory of Open Access Journals (Sweden)

    Eils Roland

    2005-11-01

    Full Text Available Abstract Background The extensive use of DNA microarray technology in the characterization of the cell transcriptome is leading to an ever-increasing amount of microarray data from cancer studies. Although similar questions for the same type of cancer are addressed in these different studies, a comparative analysis of their results is hampered by the use of heterogeneous microarray platforms and analysis methods. Results In contrast to a meta-analysis approach, where results of different studies are combined on an interpretative level, we investigate here how to directly integrate raw microarray data from different studies for the purpose of supervised classification analysis. We use median rank scores and quantile discretization to derive numerically comparable measures of gene expression from different platforms. These transformed data are then used for training of classifiers based on support vector machines. We apply this approach to six publicly available cancer microarray gene expression data sets, which consist of three pairs of studies, each examining the same type of cancer, i.e. breast cancer, prostate cancer or acute myeloid leukemia. For each pair, one study was performed by means of cDNA microarrays and the other by means of oligonucleotide microarrays. In each pair, high classification accuracies (> 85%) were achieved with training and testing on data instances randomly chosen from both data sets in a cross-validation analysis. To exemplify the potential of this cross-platform classification analysis, we use two leukemia microarray data sets to show that important genes with regard to the biology of leukemia are selected in an integrated analysis that are missed in either single-set analysis. Conclusion Cross-platform classification of multiple cancer microarray data sets yields discriminative gene expression signatures that are found and validated on a large number of microarray samples, generated by different laboratories and
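
The cross-platform trick described above (deriving numerically comparable measures via rank-based transforms) can be sketched for the quantile-discretization step. This toy NumPy version, not the authors' exact pipeline, shows why rank-derived codes are invariant to platform-specific intensity scales:

```python
import numpy as np

def quantile_discretize(X, n_bins=8):
    """Map each sample's expression values to quantile bins (0..n_bins-1).
    Rank-based binning discards platform-specific intensity scales, making
    measurements from different microarray platforms numerically comparable."""
    out = np.empty(X.shape, dtype=int)
    for i, row in enumerate(X):
        ranks = row.argsort().argsort()        # rank of each gene within the sample
        out[i] = ranks * n_bins // len(row)    # equal-size quantile bins
    return out

# toy data: 3 samples x 6 genes; "oligo" is the same biology on a shifted,
# rescaled intensity scale, as if measured on a different platform
cdna = np.array([[0.1, 0.5, 0.9, 0.2, 0.7, 0.3],
                 [0.4, 0.6, 0.8, 0.1, 0.9, 0.2],
                 [0.3, 0.2, 0.9, 0.5, 0.6, 0.1]])
oligo = cdna * 1000.0 + 50.0

# identical codes despite different scales -> the data sets can be pooled
same = (quantile_discretize(cdna) == quantile_discretize(oligo)).all()
```

After this step, the pooled, discretized matrices can be fed to any standard classifier (the study uses support vector machines).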

  10. CONFIRMING MICROARRAY DATA--IS IT REALLY NECESSARY?

    Science.gov (United States)

    The generation of corroborative data has become a commonly used approach for ensuring the veracity of microarray data. Indeed, the need to conduct corroborative studies has now become official editorial policy for at least two journals, and several more are considering introducin...

  11. A Java-based tool for the design of classification microarrays.

    Science.gov (United States)

    Meng, Da; Broschat, Shira L; Call, Douglas R

    2008-08-04

    Classification microarrays are used for purposes such as identifying strains of bacteria and determining genetic relationships to understand the epidemiology of an infectious disease. For these cases, mixed microarrays, which are composed of DNA from more than one organism, are more effective than conventional microarrays composed of DNA from a single organism. Selection of probes is a key factor in designing successful mixed microarrays because redundant sequences are inefficient and limited representation of diversity can restrict application of the microarray. We have developed a Java-based software tool, called PLASMID, for use in selecting the minimum set of probe sequences needed to classify different groups of plasmids or bacteria. The software program was successfully applied to several different sets of data. The utility of PLASMID was illustrated using existing mixed-plasmid microarray data as well as data from a virtual mixed-genome microarray constructed from different strains of Streptococcus. Moreover, use of data from expression microarray experiments demonstrated the generality of PLASMID. In this paper we describe a new software tool for selecting a set of probes for a classification microarray. While the tool was developed for the design of mixed microarrays (and mixed-plasmid microarrays in particular), it can also be used to design expression arrays. The user can choose from several clustering methods (including hierarchical, non-hierarchical, and a model-based genetic algorithm), several probe ranking methods, and several different display methods. A novel approach is used for probe redundancy reduction, and probe selection is accomplished via stepwise discriminant analysis. Data can be entered in different formats (including Excel and comma-delimited text), and dendrogram, heat map, and scatter plot images can be saved in several different formats (including jpeg and tiff).
Weights generated using stepwise discriminant analysis can be stored for

  12. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    . The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...

  13. Comparing transformation methods for DNA microarray data

    Directory of Open Access Journals (Sweden)

    Zwinderman Aeilko H

    2004-06-01

    Full Text Available Abstract Background When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification, the signal intensities are usually transformed and normalized in several steps in order to improve comparability and the signal/noise ratio. These steps may include subtraction of an estimated background signal, subtraction of the reference signal, smoothing (to account for nonlinear measurement effects), and more. Different authors use different approaches, and it is generally not clear to users which method they should prefer. Results We used the ratio between biological variance and measurement variance (an F-like statistic) as a quality measure for transformation methods, and we demonstrate a method for maximizing that variance ratio on real data. We explore a number of transformation issues, including the Box-Cox transformation, baseline shift, partial subtraction of the log-reference signal, and smoothing. It appears that the optimal choice of parameters for the transformation methods depends on the data. Further, the behavior of the variance ratio under the null hypothesis of zero biological variance appears to depend on the choice of parameters. Conclusions The use of replicates in microarray experiments is important. Adjustment for the null-hypothesis behavior of the variance ratio is critical to the selection of a transformation method.
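
One way to read the F-like quality measure described above is as the spread of per-gene means (biological variance) divided by the average variance among replicates of the same gene (measurement variance); a transformation is better when it makes this ratio larger. A minimal NumPy sketch under that reading (the paper's exact estimator may differ):

```python
import numpy as np

def variance_ratio(X):
    """F-like quality measure: biological variance (spread of per-gene means)
    divided by measurement variance (average variance among replicates of the
    same gene). X has shape (n_genes, n_replicates)."""
    biological = X.mean(axis=1).var(ddof=1)
    measurement = X.var(axis=1, ddof=1).mean()
    return biological / measurement

# three genes measured in duplicate; replicates differ only by small noise
X = np.array([[1.0, 1.1],
              [2.0, 2.1],
              [3.0, 3.1]])
r = variance_ratio(X)   # ~200: biological var 1.0 / measurement var 0.005
```

Maximizing `r` over, say, a grid of Box-Cox parameters applied to the raw intensities would then select the transformation, which is the optimization the authors demonstrate on real data.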

  14. Quasi-homogeneous approximation for description of the properties of dispersed systems. The basic approaches to model hardening processes in nanodispersed silica systems. Part 1. Statistical polymer method

    Directory of Open Access Journals (Sweden)

    KUDRYAVTSEV Pavel Gennadievich

    2015-02-01

    Full Text Available The paper deals with possibilities of using the quasi-homogeneous approximation for describing the properties of dispersed systems. The authors applied the statistical polymer method, based on the consideration of averaged structures of all possible macromolecules of the same weight. Equations were derived that allow the evaluation of many additive parameters of macromolecules and of systems containing them. The statistical polymer method makes it possible to model branched and cross-linked macromolecules, and systems containing them, in equilibrium or non-equilibrium states. Fractal analysis of the statistical polymer allows modeling of different types of random fractals and other objects studied by the methods of fractal theory. The statistical polymer method can be applied not only to polymers but also to composites, gels, associates in polar liquids, and other packed systems. The paper also describes the states of colloidal solutions of silica from the point of view of statistical physics. This approach is based on the idea that a colloidal solution of silica dioxide, a silica sol, consists of an enormous number of interacting particles in constant motion. The paper is devoted to the study of an idealized system of colliding but non-interacting sol particles. The behavior of the silica sol was analyzed according to the Maxwell-Boltzmann distribution, and the mean free path was calculated. Using these data, the number of particles able to overcome the potential barrier in a collision was calculated. Different approaches to modeling the kinetics of the sol-gel transition were studied.

  15. A Customized DNA Microarray for Microbial Source Tracking ...

    Science.gov (United States)

    It is estimated that more than 160,000 miles of rivers and streams in the United States are impaired due to the presence of waterborne pathogens. These pathogens typically originate from human and other animal fecal pollution sources; therefore, a rapid microbial source tracking (MST) method is needed to facilitate water quality assessment and impaired water remediation. We report a novel qualitative DNA microarray technology consisting of 453 probes for the detection of general fecal and host-associated bacteria, viruses, antibiotic resistance, and other environmentally relevant genetic indicators. A novel data normalization and reduction approach is also presented to help alleviate the false positives often associated with high-density microarray applications. To evaluate the performance of the approach, DNA and cDNA were isolated from swine, cattle, duck, goose and gull fecal reference samples, as well as soiled poultry litter and raw municipal sewage. Based on nonmetric multidimensional scaling analysis of the results, findings suggest that the novel microarray approach may be useful for pathogen detection and identification of fecal contamination in recreational waters. The ability to simultaneously detect a large collection of environmentally important genetic indicators in a single test has the potential to provide water quality managers with a wide range of information in a short period of time. Future research is warranted to measure microarray performance i

  16. Microarray expression profiling of human dental pulp from single subject.

    Science.gov (United States)

    Tete, Stefano; Mastrangelo, Filiberto; Scioletti, Anna Paola; Tranasi, Michelangelo; Raicu, Florina; Paolantonio, Michele; Stuppia, Liborio; Vinci, Raffaele; Gherlone, Enrico; Ciampoli, Cristian; Sberna, Maria Teresa; Conti, Pio

    2008-01-01

    Microarray technology enables the simultaneous analysis of the expression patterns of thousands of genes. The aim of this research was to evaluate the expression profile of healthy human dental pulp in order to identify genes that are activated and encode proteins involved in the physiological processes of human dental pulp. We report data obtained by analyzing expression profiles of human tooth pulp from single subjects, using an approach based on the amplification of total RNA. Experiments were performed on a high-density array able to analyze about 21,000 oligonucleotide sequences of about 70 bases in duplicate, using total RNA amplified from the pulp of a single tooth. The data were analyzed using the S.A.M. (Significance Analysis of Microarrays) system, and genes were grouped according to their molecular functions and biological processes using the Onto-Express software. The microarray analysis revealed 362 genes with specific pulp expression. Genes showing significantly high expression were classified as genes involved in tooth development, proto-oncogenes, collagen genes, DNases, metallopeptidases, and growth factors. We report a microarray analysis carried out by extraction of total RNA from specimens of healthy human dental pulp tissue. This approach represents a powerful tool for the study of normal and pathological human pulp, allowing minimization of the genetic variability introduced by pooling samples from different individuals.

  17. Integrating Biological Perspectives:. a Quantum Leap for Microarray Expression Analysis

    Science.gov (United States)

    Wanke, Dierk; Kilian, Joachim; Bloss, Ulrich; Mangelsen, Elke; Supper, Jochen; Harter, Klaus; Berendzen, Kenneth W.

    2009-02-01

    Biologists and bioinformatic scientists cope with the analysis of transcript abundance and the extraction of meaningful information from microarray expression data. By exploiting biological information accessible in public databases, we try to extend our current knowledge of the plant model organism Arabidopsis thaliana. Here, we give two examples of increasing the quality of information gained from large-scale expression experiments through the integration of microarray-unrelated biological information. First, we utilize Arabidopsis microarray data to demonstrate that expression profiles are usually conserved between orthologous genes of different organisms. In an initial step of the analysis, orthology has to be inferred unambiguously, which then allows comparison of expression profiles between orthologs. We make use of the publicly available microarray expression data of Arabidopsis and barley, Hordeum vulgare. We found a generally positive correlation in expression trajectories between true orthologs, although the two organisms are only distantly related on an evolutionary time scale. Second, the extraction of clusters of co-regulated genes implies similarities in transcriptional regulation via similar cis-regulatory elements (CREs). The reverse approach, in which clusters of co-regulated genes are sought by investigating CREs, has generally not been successful. Nonetheless, in some cases the presence of CREs in a defined position, orientation, or combination is positively correlated with co-regulated gene clusters. Here, we make use of genes involved in the phenylpropanoid biosynthetic pathway to give one positive example of this approach.

  18. An improved saddlepoint approximation.

    Science.gov (United States)

    Gillespie, Colin S; Renshaw, Eric

    2007-08-01

    Given a set of third- or higher-order moments, not only is the saddlepoint approximation the only realistic 'family-free' technique available for constructing an associated probability distribution, but it is 'optimal' in the sense that it is based on the highly efficient numerical method of steepest descents. However, it suffers from the problem of not always yielding full support, and whilst the neat scaling approach of Wang [S. Wang, General saddlepoint approximations in the bootstrap, Prob. Stat. Lett. 27 (1992) 61] provides a solution to this hurdle, it leads to potentially inaccurate and aberrant results. We therefore propose several new ways of surmounting such difficulties, including: extending the inversion of the cumulant generating function to second order; selecting an appropriate probability structure for higher-order cumulants (the standard moment closure procedure takes them to be zero); and making subtle changes to the target cumulants and then optimising via the simplex algorithm.
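
For context, the first-order saddlepoint density approximation that the paper builds on can be stated as follows (the standard textbook form, not a formula taken from the paper itself): given a cumulant generating function $K(s) = \log \mathrm{E}[e^{sX}]$, the density at $x$ is approximated by

```latex
\hat{f}(x) = \bigl(2\pi K''(\hat{s})\bigr)^{-1/2}
             \exp\bigl\{K(\hat{s}) - \hat{s}\,x\bigr\},
\qquad \text{where } \hat{s} \text{ solves } K'(\hat{s}) = x .
```

The paper's proposals act on the ingredients of this formula: the inversion $K'(\hat{s}) = x$ is carried to second order, and the higher-order cumulants entering $K$ are given an explicit probability structure rather than being set to zero.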

  19. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal, using polynomial expansions in terms of the reflection and dip angles, in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.

  20. The Porcelain Crab Transcriptome and PCAD, the Porcelain Crab Microarray and Sequence Database

    Energy Technology Data Exchange (ETDEWEB)

    Tagmount, Abderrahmane; Wang, Mei; Lindquist, Erika; Tanaka, Yoshihiro; Teranishi, Kristen S.; Sunagawa, Shinichi; Wong, Mike; Stillman, Jonathon H.

    2010-01-27

    Background: With the emergence of a completed genome sequence of the freshwater crustacean Daphnia pulex, the construction of genomic-scale sequence databases for additional crustaceans is important for comparative genomics and annotation. Porcelain crabs, genus Petrolisthes, have been powerful crustacean models for environmental and evolutionary physiology with respect to thermal adaptation and understanding the responses of marine organisms to climate change. Here, we present a large-scale EST sequencing and cDNA microarray database project for the porcelain crab Petrolisthes cinctipes. Methodology/Principal Findings: A set of ~30K unique sequences (UniSeqs) representing ~19K clusters were generated from ~98K high-quality ESTs from a set of tissue-specific non-normalized and mixed-tissue normalized cDNA libraries from the porcelain crab Petrolisthes cinctipes. Homology for each UniSeq was assessed using BLAST, InterProScan, GO and KEGG database searches. Approximately 66% of the UniSeqs had homology in at least one of the databases. All EST and UniSeq sequences along with annotation results and coordinated cDNA microarray datasets have been made publicly accessible at the Porcelain Crab Array Database (PCAD), a feature-enriched version of the Stanford and Longhorn Array Databases. Conclusions/Significance: The EST project presented here represents the third largest sequencing effort for any crustacean, and the largest effort for any crab species. Our assembly and clustering results suggest that our porcelain crab EST data set is as diverse as the much larger EST set generated in the Daphnia pulex genome sequencing project, and thus will be an important resource to the Daphnia research community. Our homology results support the pancrustacea hypothesis and suggest that Malacostraca may be ancestral to Branchiopoda and Hexapoda.
Our results also suggest that our cDNA microarrays cover as much of the transcriptome as can reasonably be captured in

  1. Unraveling the Rat Intestine, Spleen and Liver Genome-Wide Transcriptome after the Oral Administration of Lavender Oil by a Two-Color Dye-Swap DNA Microarray Approach.

    Science.gov (United States)

    Kubo, Hiroko; Shibato, Junko; Saito, Tomomi; Ogawa, Tetsuo; Rakwal, Randeep; Shioda, Seiji

    2015-01-01

    The use of lavender oil (LO), a commonly used oil in aromatherapy with the well-defined volatile components linalool and linalyl acetate, in non-traditional medicine is increasing globally. To understand and demonstrate the potential positive effects of LO on the body, we established an animal model in the current study, investigating the effects of orally administered LO genome-wide in the rat small intestine, spleen, and liver. The rats were administered LO at 5 mg/kg (the usual therapeutic dose in humans), followed by the screening of differentially expressed genes in the tissues using a 4×44-K whole-genome rat chip (Agilent microarray platform; Agilent Technologies, Palo Alto, CA, USA) in conjunction with a dye-swap approach, a novelty of this study. Fourteen days after LO treatment and compared with a control group (sham), a total of 156 and 154 up (≧ 1.5-fold)- and down (≦ 0.75-fold)-regulated genes, 174 and 66 up (≧ 1.5-fold)- and down (≦ 0.75-fold)-regulated genes, and 222 and 322 up (≧ 1.5-fold)- and down (≦ 0.75-fold)-regulated genes showed differential expression at the mRNA level in the small intestine, spleen and liver, respectively. Reverse transcription-polymerase chain reaction (RT-PCR) validation of highly up- and down-regulated genes confirmed the regulation of the Papd4, Lrp1b, Alb, Cyr61, Cyp2c, and Cxcl1 genes by LO as examples in these tissues. Using bioinformatics, including Ingenuity Pathway Analysis (IPA), differentially expressed genes were functionally categorized by their Gene Ontology (GO) and biological function and network analysis, revealing their diverse functions and potential roles in LO-mediated effects in the rat. Further IPA analysis in particular unraveled the presence of novel genes, such as Papd4, Or8k5, Gprc5b, Taar5, Trpc6, Pld2 and Onecut3 (up-regulated top molecules) and Tnf, Slc45a4, Slc25a23 and Samt4 (down-regulated top molecules), influenced by LO treatment in the small intestine, spleen and liver

  2. Unraveling the Rat Intestine, Spleen and Liver Genome-Wide Transcriptome after the Oral Administration of Lavender Oil by a Two-Color Dye-Swap DNA Microarray Approach.

    Directory of Open Access Journals (Sweden)

    Hiroko Kubo

    Full Text Available The use of lavender oil (LO), a commonly used oil in aromatherapy with the well-defined volatile components linalool and linalyl acetate, in non-traditional medicine is increasing globally. To understand and demonstrate the potential positive effects of LO on the body, we established an animal model in the current study, investigating the effects of orally administered LO genome-wide in the rat small intestine, spleen, and liver. The rats were administered LO at 5 mg/kg (the usual therapeutic dose in humans), followed by the screening of differentially expressed genes in the tissues using a 4×44-K whole-genome rat chip (Agilent microarray platform; Agilent Technologies, Palo Alto, CA, USA) in conjunction with a dye-swap approach, a novelty of this study. Fourteen days after LO treatment and compared with a control group (sham), a total of 156 and 154 up (≧ 1.5-fold)- and down (≦ 0.75-fold)-regulated genes, 174 and 66 up (≧ 1.5-fold)- and down (≦ 0.75-fold)-regulated genes, and 222 and 322 up (≧ 1.5-fold)- and down (≦ 0.75-fold)-regulated genes showed differential expression at the mRNA level in the small intestine, spleen and liver, respectively. Reverse transcription-polymerase chain reaction (RT-PCR) validation of highly up- and down-regulated genes confirmed the regulation of the Papd4, Lrp1b, Alb, Cyr61, Cyp2c, and Cxcl1 genes by LO as examples in these tissues. Using bioinformatics, including Ingenuity Pathway Analysis (IPA), differentially expressed genes were functionally categorized by their Gene Ontology (GO) and biological function and network analysis, revealing their diverse functions and potential roles in LO-mediated effects in the rat. Further IPA analysis in particular unraveled the presence of novel genes, such as Papd4, Or8k5, Gprc5b, Taar5, Trpc6, Pld2 and Onecut3 (up-regulated top molecules) and Tnf, Slc45a4, Slc25a23 and Samt4 (down-regulated top molecules), influenced by LO treatment in the small intestine, spleen and

  3. On the WKBJ approximation

    International Nuclear Information System (INIS)

    El Sawi, M.

    1983-07-01

    A simple approach employing properties of solutions of differential equations is adopted to derive an appropriate extension of the WKBJ method. Some of the earlier techniques that are commonly in use are unified, whereby the general approximate solution to a second-order homogeneous linear differential equation is presented in a standard form that is valid for all orders. In comparison to other methods, the present one is shown to lead in the order of iteration, and thus may accelerate the convergence of the solution. The method is also extended to the solution of inhomogeneous equations. (author)

  4. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling, a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to applications of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender systems; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  5. The quasilocalized charge approximation

    International Nuclear Information System (INIS)

    Kalman, G J; Golden, K I; Donko, Z; Hartmann, P

    2005-01-01

    The quasilocalized charge approximation (QLCA) has been used for some time as a formalism for the calculation of the dielectric response and for determining the collective mode dispersion in strongly coupled Coulomb and Yukawa liquids. The approach is based on a microscopic model in which the charges are quasilocalized on a short-time scale in local potential fluctuations. We review the conceptual basis and theoretical structure of the QLC approach, together with recent results from molecular dynamics simulations that corroborate and quantify the theoretical concepts. We also summarize the major applications of the QLCA to various physical systems, combined with the corresponding results of the molecular dynamics simulations, and point out the general agreement, as well as instances of disagreement, between the two.

  6. Dimension reduction methods for microarray data: a review

    Directory of Open Access Journals (Sweden)

    Rabia Aziz

    2017-03-01

    Full Text Available Dimension reduction has become inevitable for the pre-processing of high-dimensional data. Gene expression microarray data are an instance of such high-dimensional data, displaying a very large number of genes (features) measured simultaneously at a molecular level, with a very small number of samples. The copious number of genes is usually provided to a learning algorithm for producing a complete characterization of the classification task. However, most of the time the majority of the genes are irrelevant or redundant to the learning task. This deteriorates learning accuracy and training speed, and leads to the problem of overfitting. Thus, dimension reduction of microarray data is a crucial preprocessing step for the prediction and classification of disease. Various feature selection and feature extraction techniques have been proposed in the literature to identify the genes that have a direct impact on the various machine learning algorithms for classification and to eliminate the remaining ones. This paper describes a taxonomy of dimension reduction methods with their characteristics, evaluation criteria, advantages, and disadvantages. It also presents a review of numerous dimension reduction approaches for microarray data, mainly those methods that have been proposed over the past few years.

  7. Some results in Diophantine approximation

    DEFF Research Database (Denmark)

    Pedersen, Steffen Højris

    This thesis consists of three papers in Diophantine approximation, a subbranch of number theory. Preceding these papers is an introduction to various aspects of Diophantine approximation and formal Laurent series over Fq, and a summary of each of the three papers. The introduction presents the basic concepts on which the papers build. Among other things, it introduces metric Diophantine approximation, Mahler's approach to algebraic approximation, the Hausdorff measure, and properties of formal Laurent series over Fq. The introduction ends with a discussion of Mahler's problem when considered...

  8. Construction of a cDNA microarray derived from the ascidian Ciona intestinalis.

    Science.gov (United States)

    Azumi, Kaoru; Takahashi, Hiroki; Miki, Yasufumi; Fujie, Manabu; Usami, Takeshi; Ishikawa, Hisayoshi; Kitayama, Atsusi; Satou, Yutaka; Ueno, Naoto; Satoh, Nori

    2003-10-01

    A cDNA microarray was constructed from a basal chordate, the ascidian Ciona intestinalis. The draft genome of Ciona has been read and inferred to contain approximately 16,000 protein-coding genes, and cDNAs for transcripts of 13,464 genes have been characterized and compiled as the "Ciona intestinalis Gene Collection Release I". In the present study, we constructed a cDNA microarray of these 13,464 Ciona genes. A preliminary experiment with Cy3- and Cy5-labeled probes showed extensive differential gene expression between fertilized eggs and larvae. In addition, there was a good correlation between results obtained by the present microarray analysis and those from previous EST analyses. This first microarray of a large collection of Ciona intestinalis cDNA clones should facilitate the analysis of global gene expression and gene networks during the embryogenesis of basal chordates.

  9. Implementation of mutual information and bayes theorem for classification microarray data

    Science.gov (United States)

    Dwifebri Purbolaksono, Mahendra; Widiastuti, Kurnia C.; Syahrul Mubarok, Mohamad; Adiwijaya; Aminy Ma’ruf, Firda

    2018-03-01

    Microarray technology is able to read the structure of genes, and analysis of such data is essential, in particular for deciding which attributes are more important than others. Microarray data can provide cancer information for diagnosis from a person's genes. Preparation of microarray data is a major problem and takes a long time, because microarray data contain a large number of insignificant and irrelevant attributes. A method is therefore needed to reduce the dimensionality of microarray data without eliminating the important information in each attribute. This research uses mutual information to reduce the dimensionality. The system is built with a machine learning approach, specifically Bayes' theorem, which uses a statistical and probabilistic approach. Combining both methods yields a powerful approach to microarray data classification. The experimental results show that the system classifies microarray data well, with the highest F1-scores being 91.06% using a Bayesian network and 88.85% using naïve Bayes.
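
The two-stage pipeline in the abstract (mutual-information gene filtering followed by a Bayes-theorem classifier) can be sketched with NumPy on binarized toy data. The MI estimator, the Bernoulli naive Bayes variant, and the data below are illustrative choices, not the paper's exact setup:

```python
import numpy as np

def mutual_information(x, y):
    """Mutual information (in nats) between a discrete feature x and labels y."""
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * np.mean(y == yv)))
    return mi

def bernoulli_nb_predict(Xtr, ytr, Xte, alpha=1.0):
    """Bernoulli naive Bayes with Laplace smoothing on binary features."""
    classes = np.unique(ytr)
    preds = []
    for x in Xte:
        scores = []
        for c in classes:
            Xc = Xtr[ytr == c]
            p = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2 * alpha)
            loglik = np.log(len(Xc) / len(ytr))                    # log prior
            loglik += np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
            scores.append(loglik)
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# toy "microarray": 8 samples x 3 binarized genes; gene 0 tracks the class,
# gene 1 is uninformative, gene 2 is constant
X = np.array([[0, 0, 1], [0, 1, 1], [0, 0, 1], [0, 1, 1],
              [1, 0, 1], [1, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# rank genes by mutual information with the class and keep the best one
mi = [mutual_information(X[:, j], y) for j in range(X.shape[1])]
best = int(np.argmax(mi))                      # gene 0 is the informative one
pred = bernoulli_nb_predict(X[:, [best]], y, np.array([[1], [0]]))
```

In a real setting the continuous expression values would first be discretized, many genes would be kept rather than one, and classification quality would be reported via cross-validated F1-scores as in the abstract.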

  10. MicroArray Facility: a laboratory information management system with extended support for Nylon based technologies

    Directory of Open Access Journals (Sweden)

    Beaudoing Emmanuel

    2006-09-01

    Full Text Available Abstract Background High throughput gene expression profiling (GEP) is becoming a routine technique in life science laboratories. With experimental designs that repeatedly span thousands of genes and hundreds of samples, relying on a dedicated database infrastructure is no longer an option. GEP technology is a fast-moving target, with new approaches constantly broadening the field's diversity. This technology heterogeneity, compounded by the informatics complexity of GEP databases, means that software developments have so far focused on mainstream techniques, leaving less typical yet established techniques such as Nylon microarrays at best partially supported. Results MAF (MicroArray Facility) is the laboratory database system we have developed for managing the design, production and hybridization of spotted microarrays. Although it can support the widely used glass microarrays and oligo-chips, MAF was designed with the specific idiosyncrasies of Nylon based microarrays in mind. Notably, single-channel radioactive probes, microarray stripping and reuse, vector control hybridizations and spike-in controls are all natively supported by the software suite. MicroArray Facility is MIAME supportive and dynamically provides feedback on missing annotations to help users estimate effective MIAME compliance. Genomic data such as clone identifiers and gene symbols are also directly annotated by MAF software using standard public resources. The MAGE-ML data format is implemented for full data export. Journalized database operations (audit tracking), data anonymization, material traceability and user/project-level confidentiality policies are also managed by MAF. Conclusion MicroArray Facility is a complete data management system for microarray producers and end-users. Particular care has been devoted to adequately model Nylon based microarrays. The MAF system, developed and implemented in both private and academic environments, has proved a robust solution for

  11. Advanced microarray technologies for clinical diagnostics

    NARCIS (Netherlands)

    Pierik, Anke

    2011-01-01

    DNA microarrays become increasingly important in the field of clinical diagnostics. These microarrays, also called DNA chips, are small solid substrates, typically having a maximum surface area of a few cm2, onto which many spots are arrayed in a pre-determined pattern. Each of these spots contains

  12. Approximate Implicitization Using Linear Algebra

    Directory of Open Access Journals (Sweden)

    Oliver J. D. Barrowclough

    2012-01-01

    Full Text Available We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.
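The SVD machinery this record describes can be sketched in a few lines: sample the parametric curve, evaluate a polynomial basis at the samples to form a collocation matrix, and take the right singular vector belonging to the smallest singular value as the implicit coefficient vector. This is a minimal illustration on a unit circle with a degree-2 monomial basis, not the authors' implementation:

```python
import numpy as np

# Sample the parametric curve (a unit circle as a test case).
t = np.linspace(0.0, 2.0 * np.pi, 200)
x, y = np.cos(t), np.sin(t)

# Monomial basis of total degree <= 2: 1, x, y, x^2, x*y, y^2.
D = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

# The implicit coefficient vector is the right singular vector
# belonging to the smallest singular value of the collocation matrix.
_, s, Vt = np.linalg.svd(D, full_matrices=False)
coeffs = Vt[-1]

# Residual of the implicit polynomial on the curve; for an exact fit
# (the circle satisfies x^2 + y^2 - 1 = 0) this is near machine precision.
residual = np.max(np.abs(D @ coeffs))
```

Up to normalization and sign, `coeffs` recovers (-1, 0, 0, 1, 0, 1), i.e. the implicit equation x^2 + y^2 - 1 = 0.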

  13. Carbohydrate Microarrays in Plant Science

    DEFF Research Database (Denmark)

    Fangel, Jonatan Ulrik; Pedersen, H.L.; Vidal-Melgosa, S.

    2012-01-01

    Almost all plant cells are surrounded by glycan-rich cell walls, which form much of the plant body and collectively are the largest source of biomass on earth. Plants use polysaccharides for support, defense, signaling, cell adhesion, and as energy storage, and many plant glycans are also important...... industrially and nutritionally. Understanding the biological roles of plant glycans and the effective exploitation of their useful properties requires a detailed understanding of their structures, occurrence, and molecular interactions. Microarray technology has revolutionized the massively high...... for plant research and can be used to map glycan populations across large numbers of samples to screen antibodies, carbohydrate binding proteins, and carbohydrate binding modules and to investigate enzyme activities....

  14. Microarray technology for major chemical contaminants analysis in food: current status and prospects.

    Science.gov (United States)

    Zhang, Zhaowei; Li, Peiwu; Hu, Xiaofeng; Zhang, Qi; Ding, Xiaoxia; Zhang, Wen

    2012-01-01

    Chemical contaminants in food have caused serious health issues in both humans and animals. Microarray technology is an advanced technique suitable for the analysis of chemical contaminates. In particular, immuno-microarray approach is one of the most promising methods for chemical contaminants analysis. The use of microarrays for the analysis of chemical contaminants is the subject of this review. Fabrication strategies and detection methods for chemical contaminants are discussed in detail. Application to the analysis of mycotoxins, biotoxins, pesticide residues, and pharmaceutical residues is also described. Finally, future challenges and opportunities are discussed.

  15. Seeded Bayesian Networks: Constructing genetic networks from microarray data

    Directory of Open Access Journals (Sweden)

    Quackenbush John

    2008-07-01

    Full Text Available Abstract Background DNA microarrays and other genomics-inspired technologies provide large datasets that often include hidden patterns of correlation between genes reflecting the complex processes that underlie cellular metabolism and physiology. The challenge in analyzing large-scale expression data has been to extract biologically meaningful inferences regarding these processes – often represented as networks – in an environment where the datasets are often imperfect and biological noise can obscure the actual signal. Although many techniques have been developed in an attempt to address these issues, to date their ability to extract meaningful and predictive network relationships has been limited. Here we describe a method that draws on prior information about gene-gene interactions to infer biologically relevant pathways from microarray data. Our approach consists of using preliminary networks derived from the literature and/or protein-protein interaction data as seeds for a Bayesian network analysis of microarray results. Results Through a bootstrap analysis of gene expression data derived from a number of leukemia studies, we demonstrate that seeded Bayesian Networks have the ability to identify high-confidence gene-gene interactions which can then be validated by comparison to other sources of pathway data. Conclusion The use of network seeds greatly improves the ability of Bayesian Network analysis to learn gene interaction networks from gene expression data. We demonstrate that the use of seeds derived from the biomedical literature or high-throughput protein-protein interaction data, or the combination, provides improvement over a standard Bayesian Network analysis, allowing networks involving dynamic processes to be deduced from the static snapshots of biological systems that represent the most common source of microarray data. Software implementing these methods has been included in the widely used TM4 microarray analysis package.

  16. Workflows for microarray data processing in the Kepler environment

    Directory of Open Access Journals (Sweden)

    Stropp Thomas

    2012-05-01

    traditional shell scripting or R/BioConductor scripting approaches to pipeline design. Finally, we suggest that microarray data processing task workflows may provide a basis for future example-based comparison of different workflow systems. Conclusions We provide a set of tools and complete workflows for microarray data analysis in the Kepler environment, which has the advantages of offering graphical, clear display of conceptual steps and parameters and the ability to easily integrate other resources such as remote data and web services.

  17. Workflows for microarray data processing in the Kepler environment.

    Science.gov (United States)

    Stropp, Thomas; McPhillips, Timothy; Ludäscher, Bertram; Bieda, Mark

    2012-05-17

    /BioConductor scripting approaches to pipeline design. Finally, we suggest that microarray data processing task workflows may provide a basis for future example-based comparison of different workflow systems. We provide a set of tools and complete workflows for microarray data analysis in the Kepler environment, which has the advantages of offering graphical, clear display of conceptual steps and parameters and the ability to easily integrate other resources such as remote data and web services.

  18. High quality protein microarray using in situ protein purification

    Directory of Open Access Journals (Sweden)

    Fleischmann Robert D

    2009-08-01

    protein solubility and denaturation problems caused by buffer exchange steps and freeze-thaw cycles, which are associated with resin-based purification, intermittent protein storage and deposition on microarrays. Conclusion An optimized platform for in situ protein purification on microarray slides using His-tagged recombinant proteins is a desirable tool for the screening of novel protein functions and protein-protein interactions. In the context of immunoproteomics, such protein microarrays are complimentary to approaches using non-recombinant methods to discover and characterize bacterial antigens.

  19. Workflows for microarray data processing in the Kepler environment

    Science.gov (United States)

    2012-01-01

    R/BioConductor scripting approaches to pipeline design. Finally, we suggest that microarray data processing task workflows may provide a basis for future example-based comparison of different workflow systems. Conclusions We provide a set of tools and complete workflows for microarray data analysis in the Kepler environment, which has the advantages of offering graphical, clear display of conceptual steps and parameters and the ability to easily integrate other resources such as remote data and web services. PMID:22594911

  20. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.

    Science.gov (United States)

    Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E

    2018-06-01

    An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
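The state-dependent convex combination described above can be illustrated with a simple gating function: near the origin the regional (R-MBRL) approximation dominates, far away the local StaF one does. The sigmoid form, radius, and sharpness below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Smooth state-dependent blending weight in [0, 1]: approximately 0 near
# the origin (regional approximation dominates), approximately 1 far away
# (local StaF approximation dominates).
def blend_weight(x, radius=1.0, sharpness=5.0):
    r = np.linalg.norm(x)
    return 1.0 / (1.0 + np.exp(-sharpness * (r - radius)))

# Convex combination of the two value-function approximations.
def value_estimate(x, v_staf, v_rmbrl):
    lam = blend_weight(x)
    return lam * v_staf(x) + (1.0 - lam) * v_rmbrl(x)
```

Because the weight is a convex combination, the blended estimate always lies between the two underlying approximations, which is what enables the smooth transition as the state approaches the origin.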

  1. Annotating breast cancer microarray samples using ontologies

    Science.gov (United States)

    Liu, Hongfang; Li, Xin; Yoon, Victoria; Clarke, Robert

    2008-01-01

    As the most common cancer among women, breast cancer results from the accumulation of mutations in essential genes. Recent advances in high-throughput gene expression microarray technology have inspired researchers to use the technology to assist breast cancer diagnosis, prognosis, and treatment prediction. However, the high dimensionality of microarray experiments and public access of data from many experiments have caused inconsistencies which initiated the development of controlled terminologies and ontologies for annotating microarray experiments, such as the standard microarray Gene Expression Data (MGED) ontology (MO). In this paper, we developed BCM-CO, an ontology tailored specifically for indexing clinical annotations of breast cancer microarray samples from the NCI Thesaurus. Our research showed that the coverage of NCI Thesaurus is very limited with respect to i) terms used by researchers to describe breast cancer histology (covering 22 out of 48 histology terms); ii) breast cancer cell lines (covering one out of 12 cell lines); and iii) classes corresponding to the breast cancer grading and staging. By incorporating a wider range of those terms into BCM-CO, we were able to index breast cancer microarray samples from GEO using BCM-CO and the MGED ontology and developed a prototype system with a web interface that allows the retrieval of microarray data based on the ontology annotations. PMID:18999108

  2. Simulation of microarray data with realistic characteristics

    Directory of Open Access Journals (Sweden)

    Lehmussola Antti

    2006-07-01

    Full Text Available Abstract Background Microarray technologies have become common tools in biological research. As a result, a need for effective computational methods for data analysis has emerged. Numerous different algorithms have been proposed for analyzing the data. However, an objective evaluation of the proposed algorithms is not possible due to the lack of biological ground truth information. To overcome this fundamental problem, the use of simulated microarray data for algorithm validation has been proposed. Results We present a microarray simulation model which can be used to validate different kinds of data analysis algorithms. The proposed model is unique in the sense that it includes all the steps that affect the quality of real microarray data. These steps include the simulation of biological ground truth data, applying biological and measurement technology specific error models, and finally simulating the microarray slide manufacturing and hybridization. After all these steps are taken into account, the simulated data has realistic biological and statistical characteristics. The applicability of the proposed model is demonstrated by several examples. Conclusion The proposed microarray simulation model is modular and can be used in different kinds of applications. It includes several error models that have been proposed earlier and it can be used with different types of input data. The model can be used to simulate both spotted two-channel and oligonucleotide based single-channel microarrays. All this makes the model a valuable tool for example in validation of data analysis algorithms.
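The layered structure of such a simulator (ground-truth generation, then an error model, then measurement of two channels) can be sketched as follows; the noise magnitudes and fold changes are arbitrary illustrative choices, not the model's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_diff = 1000, 100

# Step 1: biological ground truth -- baseline log2 expression levels.
baseline = rng.normal(loc=8.0, scale=2.0, size=n_genes)

# Step 2: known differential expression injected into the first n_diff
# genes (4-fold up or down); this "answer key" is what makes simulated
# data usable for validating analysis algorithms.
effect = np.zeros(n_genes)
effect[:n_diff] = rng.choice([-2.0, 2.0], size=n_diff)

# Step 3: a simple measurement error model -- additive noise plus a
# weak intensity-dependent component (both magnitudes are arbitrary).
def measure(log_expr):
    noise = rng.normal(0.0, 0.3, size=log_expr.shape)
    trend = 0.02 * (16.0 - log_expr) * rng.normal(size=log_expr.shape)
    return log_expr + noise + trend

# Step 4: the two channels of a spotted array and the observed log-ratios.
control = measure(baseline)
treatment = measure(baseline + effect)
log_ratio = treatment - control
```

An analysis algorithm run on `log_ratio` can then be scored against the known `effect` vector, which is exactly the kind of ground-truth comparison that real microarray data cannot provide.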

  3. Genotyping microarray (gene chip) for the ABCR (ABCA4) gene.

    Science.gov (United States)

    Jaakson, K; Zernant, J; Külm, M; Hutchinson, A; Tonisson, N; Glavac, D; Ravnik-Glavac, M; Hawlina, M; Meltzer, M R; Caruso, R C; Testa, F; Maugeri, A; Hoyng, C B; Gouras, P; Simonelli, F; Lewis, R A; Lupski, J R; Cremers, F P M; Allikmets, R

    2003-11-01

    Genetic variation in the ABCR (ABCA4) gene has been associated with five distinct retinal phenotypes, including Stargardt disease/fundus flavimaculatus (STGD/FFM), cone-rod dystrophy (CRD), and age-related macular degeneration (AMD). Comparative genetic analyses of ABCR variation and diagnostics have been complicated by substantial allelic heterogeneity and by differences in screening methods. To overcome these limitations, we designed a genotyping microarray (gene chip) for ABCR that includes all approximately 400 disease-associated and other variants currently described, enabling simultaneous detection of all known ABCR variants. The ABCR genotyping microarray (the ABCR400 chip) was constructed by the arrayed primer extension (APEX) technology. Each sequence change in ABCR was included on the chip by synthesis and application of sequence-specific oligonucleotides. We validated the chip by screening 136 confirmed STGD patients and 96 healthy controls, each of whom we had analyzed previously by single strand conformation polymorphism (SSCP) technology and/or heteroduplex analysis. The microarray was >98% effective in determining the existing genetic variation and was comparable to direct sequencing in that it yielded many sequence changes undetected by SSCP. In STGD patient cohorts, the efficiency of the array to detect disease-associated alleles was between 54% and 78%, depending on the ethnic composition and degree of clinical and molecular characterization of a cohort. In addition, chip analysis suggested a high carrier frequency (up to 1:10) of ABCR variants in the general population. The ABCR genotyping microarray is a robust, cost-effective, and comprehensive screening tool for variation in one gene in which mutations are responsible for a substantial fraction of retinal disease. The ABCR chip is a prototype for the next generation of screening and diagnostic tools in ophthalmic genetics, bridging clinical and scientific research. Copyright 2003 Wiley

  4. Universal ligation-detection-reaction microarray applied for compost microbes

    Directory of Open Access Journals (Sweden)

    Romantschuk Martin

    2008-12-01

    Full Text Available Abstract Background Composting is one of the methods utilised in recycling organic communal waste. The composting process is dependent on aerobic microbial activity and proceeds through a succession of different phases, each dominated by certain microorganisms. In this study, a ligation-detection-reaction (LDR) based microarray method was adapted for species-level detection of compost microbes characteristic of each stage of the composting process. LDR utilises the specificity of the ligase enzyme to covalently join two adjacently hybridised probes. A zip-oligo is attached to the 3'-end of one probe and a fluorescent label to the 5'-end of the other probe. Upon ligation, the probes are combined in the same molecule and can be detected in a specific location on a universal microarray with complementary zip-oligos, enabling equivalent hybridisation conditions for all probes. The method was applied to samples from Nordic composting facilities after testing and optimisation with fungal pure cultures and environmental clones. Results Probes targeted for fungi were able to detect 0.1 fmol of target ribosomal PCR product in an artificial reaction mixture containing 100 ng of competing fungal ribosomal internal transcribed spacer (ITS) area or herring sperm DNA. The detection level was therefore approximately 0.04% of total DNA. Clone libraries were constructed from eight compost samples. The LDR microarray results were in concordance with the clone library sequencing results. In addition, a control probe was used to monitor the per-spot hybridisation efficiency on the array. Conclusion This study demonstrates that the LDR microarray method is capable of sensitive and accurate species-level detection from a complex microbial community. The method can detect key species from compost samples, making it a basis for a tool for compost process monitoring in industrial facilities.
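The quoted detection level (0.1 fmol against 100 ng of competing DNA, roughly 0.04%) can be sanity-checked with back-of-envelope arithmetic. The amplicon length used below is a hypothetical assumption, since the record does not state it:

```python
# Mass of 0.1 fmol of double-stranded PCR product, assuming a
# hypothetical ~600 bp amplicon (length not given in the record).
avg_bp_mass = 660.0        # g/mol per base pair of dsDNA
amplicon_bp = 600          # assumed amplicon length
target_mol = 0.1e-15       # 0.1 fmol, in mol

target_mass_g = target_mol * avg_bp_mass * amplicon_bp

# Fraction of target relative to 100 ng of competing background DNA;
# this comes out near 4e-4, consistent with the quoted ~0.04%.
fraction = target_mass_g / 100e-9
```

With these assumptions, 0.1 fmol corresponds to about 0.04 ng of target, i.e. roughly 0.04% of the 100 ng background, matching the figure stated in the abstract.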

  5. Weighted approximation with varying weight

    CERN Document Server

    Totik, Vilmos

    1994-01-01

    A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some L_p extremal problems on the real line with exponential weights, which, for the case p=2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L_p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Pade approximation. The approach is potential-theoretic, but the text is self-contained.

  6. Quasi-homogenous approximation for description of the properties of dispersed systems. The basic approaches to model hardening processes in nanodispersed silica systems. Part 4. The Main Approaches to Modeling the Kinetics of the Sol-Gel Transition

    Directory of Open Access Journals (Sweden)

    KUDRYAVTSEV Pavel Gennadievich

    2015-08-01

    Full Text Available The paper deals with possibilities to use the quasi-homogenous approximation for describing the properties of dispersed systems. The authors applied the statistical polymer method, based on the consideration of averaged structures of all possible macromolecules of the same weight. Equations were deduced that allow the evaluation of many additive parameters of macromolecules and of the systems containing them. The statistical polymer method makes it possible to model branched, cross-linked macromolecules and systems containing them, in equilibrium or non-equilibrium states. Fractal analysis of a statistical polymer allows the modeling of different types of random fractals and other objects examined with the methods of fractal theory. The fractal polymer method can be applied not only to polymers but also to composites, gels, associates in polar liquids and other packed systems. There is also a description of the states of colloidal solutions of silica from the point of view of statistical physics. This approach is based on the idea that a colloidal solution of silica (a silica sol) consists of an enormous number of interacting particles which are in constant motion. The paper is devoted to the study of an ideal system of colliding but non-interacting sol particles. The behaviour of the silica sol was analysed according to the Maxwell-Boltzmann distribution, and the mean free path length was calculated. Using these data, the number of particles able to overcome the potential barrier in a collision was calculated. Different approaches to modeling the kinetics of the sol-gel transition were studied.
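The fraction of particles energetic enough to cross a potential barrier follows in closed form from the Maxwell-Boltzmann kinetic-energy distribution: P(E > E_b) = erfc(sqrt(x)) + 2*sqrt(x/pi)*exp(-x), with x = E_b/kT. A sketch with molar units; the barrier heights are illustrative, not the paper's values:

```python
import math

def barrier_fraction(e_barrier_kj_mol, temperature_k):
    """Fraction of particles in a 3D Maxwell-Boltzmann ensemble whose
    kinetic energy exceeds a barrier E_b (given in kJ/mol)."""
    R = 8.314  # gas constant, J/(mol K)
    x = e_barrier_kj_mol * 1000.0 / (R * temperature_k)
    # Closed form of the tail integral of the 3D kinetic-energy
    # distribution: erfc(sqrt(x)) + 2*sqrt(x/pi)*exp(-x).
    return math.erfc(math.sqrt(x)) + 2.0 * math.sqrt(x / math.pi) * math.exp(-x)
```

As expected, the fraction is 1 for a zero barrier, decreases as the barrier grows, and increases with temperature, which is the qualitative behaviour the sol-gel kinetics argument relies on.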

  7. Nanomedicine, microarrays and their applications in clinical microbiology

    Directory of Open Access Journals (Sweden)

    Özcan Deveci

    2010-12-01

    Full Text Available Growing interest in the future medical applications of nanotechnology is leading to the emergence of a new scientific field called “nanomedicine”. Nanomedicine may be defined as the investigation, treatment, reconstruction and control of human biology and health at the molecular level, using engineered nanodevices and nanostructures. Microarray technology is a revolutionary tool for elucidating the roles of genes in infectious diseases, shifting research from traditional methods to integrated approaches. This technology has great potential to provide medical diagnosis, monitor treatment and help in the development of new tools for infectious disease prevention and/or management. The aim of this paper is to provide an overview of the current application of microarray platforms and nanomedicine in the study of experimental microbiology and the impact of this technology in clinical settings.

  8. Identification of potential biomarkers from microarray experiments using multiple criteria optimization

    International Nuclear Information System (INIS)

    Sánchez-Peña, Matilde L; Isaza, Clara E; Pérez-Morales, Jaileene; Rodríguez-Padilla, Cristina; Castro, José M; Cabrera-Ríos, Mauricio

    2013-01-01

    Microarray experiments are capable of determining the relative expression of tens of thousands of genes simultaneously, thus resulting in very large databases. The analysis of these databases and the extraction of biologically relevant knowledge from them are challenging tasks. The identification of potential cancer biomarker genes is one of the most important aims of microarray analysis and, as such, has been widely targeted in the literature. However, identifying a set of these genes consistently across different experiments, researchers, microarray platforms, or cancer types is still an elusive endeavor. Besides the inherent difficulty of the large and nonconstant variability in these experiments and the incommensurability between different microarray technologies, there is the issue of the users having to adjust a series of parameters that significantly affect the outcome of the analyses and that do not have a biological or medical meaning. In this study, the identification of potential cancer biomarkers from microarray data is cast as a multiple criteria optimization (MCO) problem. The efficient solutions to this problem, found here through data envelopment analysis (DEA), are associated with genes that are proposed as potential cancer biomarkers. The method does not require any parameter adjustment by the user, and thus fosters repeatability. The approach also allows the analysis of different microarray experiments, microarray platforms, and cancer types simultaneously. The results include the analysis of three publicly available microarray databases related to cervix cancer. This study points to the feasibility of modeling the selection of potential cancer biomarkers from microarray data as an MCO problem and solving it using DEA. Using MCO entails a new optic to the identification of potential cancer biomarkers as it does not require the definition of a threshold value to establish significance for a particular gene and the selection of a normalization
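The core idea, selecting genes that are efficient (non-dominated) across several criteria without user-set thresholds, can be sketched with a plain Pareto filter. A full DEA treatment solves a linear program per gene, which this simplified sketch omits; the criteria and scores below are hypothetical:

```python
import numpy as np

def pareto_efficient(scores):
    """Boolean mask of non-dominated rows; every column is a criterion
    to be maximized (e.g. |log2 fold change|, -log10 p-value)."""
    n = len(scores)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(scores, i, axis=0)
        # Row i is dominated if some other row is >= on every criterion
        # and strictly > on at least one.
        dominated = np.any(
            np.all(others >= scores[i], axis=1)
            & np.any(others > scores[i], axis=1)
        )
        mask[i] = not dominated
    return mask

# Four hypothetical genes scored on two criteria (larger is better).
genes = np.array([[2.0, 3.0],   # efficient
                  [1.0, 1.0],   # dominated
                  [3.0, 2.0],   # efficient
                  [2.0, 2.0]])  # dominated by the first gene
efficient = pareto_efficient(genes)
```

No significance threshold appears anywhere: a gene is kept or discarded purely by whether another gene beats it on all criteria at once, which is what makes the approach parameter-free.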

  9. Tracking control of a closed-chain five-bar robot with two degrees of freedom by integration of an approximation-based approach and mechanical design.

    Science.gov (United States)

    Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhang, W J

    2012-10-01

    The trajectory tracking problem of a closed-chain five-bar robot is studied in this paper. Based on an error transformation function and the backstepping technique, an approximation-based tracking algorithm is proposed, which can guarantee the control performance of the robotic system in both the steady-state and transient phases. In particular, the overshoot, settling time, and final tracking error of the robotic system can all be adjusted by properly setting the parameters in the error transformation function. The radial basis function neural network (RBFNN) is used to compensate for the complicated nonlinear terms in the closed-loop dynamics of the robotic system. The approximation error of the RBFNN is only required to be bounded, which simplifies the initial "trial-and-error" configuration of the neural network. Illustrative examples are given to verify the theoretical analysis and illustrate the effectiveness of the proposed algorithm. Finally, it is also shown that the proposed approximation-based controller can be simplified by a smart mechanical design of the closed-chain robot, which demonstrates the promise of the integrated design and control philosophy.
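The role of the RBFNN, approximating an unknown nonlinear term from samples, can be illustrated offline with a batch least-squares fit. The actual controller adapts the weights online via Lyapunov-derived update laws; the target function, centers, and widths below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the robot's unknown nonlinear dynamics term (hypothetical).
f = lambda q: np.sin(q) + 0.5 * q**2

# Gaussian RBF features on a fixed grid of centers over the workspace.
centers = np.linspace(-2.0, 2.0, 15)
width = 0.5
phi = lambda q: np.exp(-((q[:, None] - centers) ** 2) / (2.0 * width**2))

# Batch least-squares fit of the output weights from noisy samples;
# an adaptive controller would update these weights online instead.
q_train = rng.uniform(-2.0, 2.0, 200)
y_train = f(q_train) + rng.normal(0.0, 0.01, q_train.shape)
w, *_ = np.linalg.lstsq(phi(q_train), y_train, rcond=None)

# Reconstruction error of the unknown term on a test grid: the residual
# stays small and bounded, which is all the stability analysis requires.
q_test = np.linspace(-1.8, 1.8, 50)
max_err = np.max(np.abs(phi(q_test) @ w - f(q_test)))
```

The point of the bounded-error assumption in the paper is visible here: the fit need not be exact, only uniformly close, for the compensation term to do its job.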

  10. Gene Expression and Microarray Investigation of Dendrobium ...

    African Journals Online (AJOL)

    blood glucose > 16.7 mmol/L were used as the model group and treated with Dendrobium mixture. (DEN ... Keywords: Diabetes, Gene expression, Dendrobium mixture, Microarray testing ..... homeostasis in airway smooth muscle. Am J.

  11. SLIMarray: Lightweight software for microarray facility management

    Directory of Open Access Journals (Sweden)

    Marzolf Bruz

    2006-10-01

    Full Text Available Abstract Background Microarray core facilities are commonplace in biological research organizations, and need systems for accurately tracking various logistical aspects of their operation. Although these different needs could be handled separately, an integrated management system provides benefits in organization, automation and reduction in errors. Results We present SLIMarray (System for Lab Information Management of Microarrays, an open source, modular database web application capable of managing microarray inventories, sample processing and usage charges. The software allows modular configuration and is well suited for further development, providing users the flexibility to adapt it to their needs. SLIMarray Lite, a version of the software that is especially easy to install and run, is also available. Conclusion SLIMarray addresses the previously unmet need for free and open source software for managing the logistics of a microarray core facility.

  12. Modulated Pade approximant

    International Nuclear Information System (INIS)

    Ginsburg, C.A.

    1980-01-01

    In many problems, a desired property A of a function f(x) is determined by the behaviour f(x) ≈ g(x, A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (modulated Pade approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)
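Since the modulated approximant extends ordinary rational-function resummation, the baseline [m/n] Pade construction is worth sketching: solve a small linear system for the denominator coefficients from the series-matching conditions, then obtain the numerator by truncated convolution. This is the classical construction, not Ginsburg's modulated variant:

```python
import numpy as np
from math import factorial, e

def pade(coeffs, m, n):
    """[m/n] Pade approximant from power-series coefficients c_0..c_{m+n};
    returns numerator and denominator coefficients in ascending powers."""
    c = np.asarray(coeffs, dtype=float)
    # Denominator b_1..b_n from the matching conditions
    #   sum_{j=0}^{n} b_j c_{m+i-j} = 0,  i = 1..n,  with b_0 = 1.
    C = np.array([[c[m + i - j] if m + i - j >= 0 else 0.0
                   for j in range(1, n + 1)] for i in range(1, n + 1)])
    rhs = -c[m + 1 : m + n + 1]
    b = np.concatenate(([1.0], np.linalg.solve(C, rhs)))
    # Numerator by convolution of b with c, truncated at degree m.
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return a, b

# [2/2] approximant of exp(x) from its first five Taylor coefficients.
c = [1.0 / factorial(k) for k in range(5)]
a, b = pade(c, 2, 2)
approx = np.polyval(a[::-1], 1.0) / np.polyval(b[::-1], 1.0)
```

For exp(x) this reproduces the well-known [2/2] approximant (1 + x/2 + x^2/12)/(1 - x/2 + x^2/12), which at x = 1 gives 19/7, noticeably closer to e than the degree-4 Taylor truncation built from the same five coefficients.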

  13. Tissue Microarray Technology: A Brief Review

    Directory of Open Access Journals (Sweden)

    Ramya S Vokuda

    2018-01-01

    Full Text Available In this era of modern revolutionisation in the field of medical laboratory technology, everyone is aiming at taking innovations from the laboratory to the bedside. One such technique that is most relevant to the pathology community is Tissue Microarray (TMA) technology. It is becoming quite popular amongst all the members of this family, right from laboratory scientists to clinicians and residents to technologists. The reason for this technique gaining popularity is attributed to its cost effectiveness and time-saving protocols. Though every technique is accompanied by disadvantages, here the benefits outnumber them. This technique is very versatile, as many downstream molecular assays, such as immunohistochemistry, cytogenetic studies, Fluorescent In-situ Hybridisation (FISH), etc., can be carried out on a single slide with multiple samples. It is a very practical approach that aids effectively in identifying novel biomarkers in cancer diagnostics and therapeutics. It helps in assessing molecular markers on a large scale very quickly. Also, quality assurance protocols in the pathology laboratory have exploited TMA to a great extent. However, the application of TMA technology extends beyond oncology. This review focuses on different aspects of this technology, such as the construction of TMAs, instrumentation, types, advantages and disadvantages, and the utilisation of the technique in various disease conditions.

  14. PATMA: parser of archival tissue microarray

    Directory of Open Access Journals (Sweden)

    Lukasz Roszkowiak

    2016-12-01

    Full Text Available Tissue microarrays are commonly used in modern pathology for cancer tissue evaluation, as it is a very potent technique. Tissue microarray slides are often scanned to perform computer-aided histopathological analysis of the tissue cores. For processing the image, splitting the whole virtual slide into images of individual cores is required. The only way to distinguish cores corresponding to specimens in the tissue microarray is through their arrangement. Unfortunately, distinguishing the correct order of cores is not a trivial task, as they are not labelled directly on the slide. The main aim of this study was to create a procedure capable of automatically finding and extracting cores from archival images of tissue microarrays. This software supports the work of scientists who want to perform further image processing on single cores. The proposed method is an efficient and fast procedure, working in fully automatic or semi-automatic mode. A total of 89% of punches were correctly extracted with automatic selection. With the addition of manual correction, it is possible to fully prepare the whole slide image for extraction in 2 min per tissue microarray. The proposed technique requires minimal skill and time to parse a big array of cores from a tissue microarray whole slide image into individual core images.

  15. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  16. The efficacy of microarray screening for autosomal recessive retinitis pigmentosa in routine clinical practice

    Science.gov (United States)

    van Huet, Ramon A. C.; Pierrache, Laurence H.M.; Meester-Smoor, Magda A.; Klaver, Caroline C.W.; van den Born, L. Ingeborgh; Hoyng, Carel B.; de Wijs, Ilse J.; Collin, Rob W. J.; Hoefsloot, Lies H.

    2015-01-01

    Purpose To determine the efficacy of multiple versions of a commercially available arrayed primer extension (APEX) microarray chip for autosomal recessive retinitis pigmentosa (arRP). Methods We included 250 probands suspected of arRP who were genetically analyzed with the APEX microarray between January 2008 and November 2013. The mode of inheritance had to be autosomal recessive according to the pedigree (including isolated cases). If the microarray identified a heterozygous mutation, we performed Sanger sequencing of exons and exon–intron boundaries of that specific gene. The efficacy of this microarray chip with the additional Sanger sequencing approach was determined by the percentage of patients that received a molecular diagnosis. We also collected data from genetic tests other than the APEX analysis for arRP to provide a detailed description of the molecular diagnoses in our study cohort. Results The APEX microarray chip for arRP identified the molecular diagnosis in 21 (8.5%) of the patients in our cohort. Additional Sanger sequencing yielded a second mutation in 17 patients (6.8%), thereby establishing the molecular diagnosis. In total, 38 patients (15.2%) received a molecular diagnosis after analysis using the microarray and additional Sanger sequencing approach. Further genetic analyses after a negative result of the arRP microarray (n = 107) resulted in a molecular diagnosis of arRP (n = 23), autosomal dominant RP (n = 5), X-linked RP (n = 2), and choroideremia (n = 1). Conclusions The efficacy of the commercially available APEX microarray chips for arRP appears to be low, most likely caused by the limitations of this technique and the genetic and allelic heterogeneity of RP. Diagnostic yields up to 40% have been reported for next-generation sequencing (NGS) techniques that, as expected, thereby outperform targeted APEX analysis. PMID:25999674

  17. Debye–Einstein approximation approach to calculate the lattice specific heat and related parameters for a Si nanowire

    Directory of Open Access Journals (Sweden)

    A. KH. Alassafee

    2017-11-01

    Full Text Available The modified Debye–Einstein approximation model is used to calculate nanoscale size-dependent values of Gruneisen parameters and lattice specific heat capacity for Si nanowires. All parameters forming the model, including Debye temperatures, bulk moduli, the lattice thermal expansion and the lattice volume, are calculated according to their nanoscale size dependence. Values of the lattice volume Gruneisen parameter increase as the nanowire diameter decreases, while all other parameters decrease. The nanosize dependence of the lattice thermal parameters agrees with other reported theoretical results. Keywords: Lattice specific heat capacity, Gruneisen parameter, Debye–Einstein model, Si nanowires
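For reference, the textbook Debye lattice heat capacity and the definition of the volume Grüneisen parameter on which Debye–Einstein models of this type build are (standard forms, not the paper's size-dependent modifications):

```latex
C_V^{\mathrm{Debye}}(T) = 9 N k_B \left(\frac{T}{\theta_D}\right)^{3}
\int_0^{\theta_D/T} \frac{x^{4} e^{x}}{\left(e^{x}-1\right)^{2}}\,\mathrm{d}x,
\qquad
\gamma = -\frac{\partial \ln \theta_D}{\partial \ln V}
```

The size dependence enters through the nanoscale values of the Debye temperature and lattice volume described in the abstract.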

  18. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2012-05-01

    Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multifocusing, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approaches. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle and thus derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms include limitations for horizontally traveling waves; however, we can readily enforce stability constraints to avoid such singularities. In fact, another expansion over reflection angle can help us avoid these singularities by requiring the source and receiver velocities to be different. On the other hand, expansions in terms of reflection angles result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.
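For a homogeneous medium of velocity v, the double-square-root traveltime from which such derivations start is simply the sum of the two straight-ray legs from the source s and receiver r to an image point (y, z) (generic notation, not necessarily the author's):

```latex
t(s, r) = \frac{1}{v}\sqrt{(y - s)^{2} + z^{2}} \;+\; \frac{1}{v}\sqrt{(r - y)^{2} + z^{2}}
```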

  19. Approximate symmetries of Hamiltonians

    Science.gov (United States)

    Chubb, Christopher T.; Flammia, Steven T.

    2017-08-01

    We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
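In symbols, the defining condition on an approximate symmetry operator U of a Hamiltonian H, as used in the abstract, is a small commutator in operator norm:

```latex
\left\lVert [U, H] \right\rVert \;=\; \left\lVert UH - HU \right\rVert \;\le\; \epsilon
```

The abstract's key point is that the tolerances entering the resulting bounds depend only on the ground-space degeneracy, not on the ambient Hilbert-space dimension.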

  20. Dynamic, electronically switchable surfaces for membrane protein microarrays.

    Science.gov (United States)

    Tang, C S; Dusseiller, M; Makohliso, S; Heuschkel, M; Sharma, S; Keller, B; Vörös, J

    2006-02-01

    Microarray technology is a powerful tool that provides a high throughput of bioanalytical information within a single experiment. These miniaturized and parallelized binding assays are highly sensitive and have found widespread popularity especially during the genomic era. However, as drug diagnostics studies are often targeted at membrane proteins, the current arraying technologies are ill-equipped to handle the fragile nature of the protein molecules. In addition, to understand the complex structure and functions of proteins, different strategies to immobilize the probe molecules selectively onto a platform for protein microarray are required. We propose a novel approach to create a (membrane) protein microarray by using an indium tin oxide (ITO) microelectrode array with an electronic multiplexing capability. A polycationic, protein- and vesicle-resistant copolymer, poly(l-lysine)-grafted-poly(ethylene glycol) (PLL-g-PEG), is exposed to and adsorbed uniformly onto the microelectrode array, as a passivating adlayer. An electronic stimulation is then applied onto the individual ITO microelectrodes resulting in the localized release of the polymer thus revealing a bare ITO surface. Different polymer and biological moieties are specifically immobilized onto the activated ITO microelectrodes while the other regions remain protein-resistant as they are unaffected by the induced electrical potential. The desorption process of the PLL-g-PEG is observed to be highly selective, rapid, and reversible without compromising on the integrity and performance of the conductive ITO microelectrodes. As such, we have successfully created a stable and heterogeneous microarray of biomolecules by using selective electronic addressing on ITO microelectrodes. Both pharmaceutical diagnostics and biomedical technology are expected to benefit directly from this unique method.

  1. DNA microarray technique for detecting food-borne pathogens

    Directory of Open Access Journals (Sweden)

    Xing GAO

    2012-08-01

    Full Text Available Objective To study the application of DNA microarray technique for screening and identifying multiple food-borne pathogens. Methods The oligonucleotide probes were designed by Clustal X and Oligo 6.0 at the conserved regions of specific genes of multiple food-borne pathogens, and then were validated by bioinformatic analyses. The 5' end of each probe was modified by amino-group and 10 Poly-T, and the optimized probes were synthesized and spotted on aldehyde-coated slides. The bacteria DNA template incubated with Klenow enzyme was amplified by arbitrarily primed PCR, and PCR products incorporated into Aminoallyl-dUTP were coupled with fluorescent dye. After hybridization of the purified PCR products with DNA microarray, the hybridization image and fluorescence intensity analysis was acquired by ScanArray and GenePix Pro 5.1 software. A series of detection conditions such as arbitrarily primed PCR and microarray hybridization were optimized. The specificity of this approach was evaluated by 16 different bacteria DNA, and the sensitivity and reproducibility were verified by 4 food-borne pathogens DNA. The samples of multiple bacteria DNA and simulated water samples of Shigella dysenteriae were detected. Results Nine different food-borne bacteria were successfully discriminated under the same condition. The sensitivity of genomic DNA was 102 -103pg/ μl, and the coefficient of variation (CV of the reproducibility of assay was less than 15%. The corresponding specific hybridization maps of the multiple bacteria DNA samples were obtained, and the detection limit of simulated water sample of Shigella dysenteriae was 3.54×105cfu/ml. Conclusions The DNA microarray detection system based on arbitrarily primed PCR can be employed for effective detection of multiple food-borne pathogens, and this assay may offer a new method for high-throughput platform for detecting bacteria.

  2. An evaluation of two-channel ChIP-on-chip and DNA methylation microarray normalization strategies

    Science.gov (United States)

    2012-01-01

    Background The combination of chromatin immunoprecipitation with two-channel microarray technology enables genome-wide mapping of binding sites of DNA-interacting proteins (ChIP-on-chip) or sites with methylated CpG di-nucleotides (DNA methylation microarray). These powerful tools are the gateway to understanding gene transcription regulation. Since the goals of such studies, the sample preparation procedures, the microarray content and study design are all different from transcriptomics microarrays, the data pre-processing strategies traditionally applied to transcriptomics microarrays may not be appropriate. Particularly, the main challenge of the normalization of "regulation microarrays" is (i) to make the data of individual microarrays quantitatively comparable and (ii) to keep the signals of the enriched probes, representing DNA sequences from the precipitate, as distinguishable as possible from the signals of the un-enriched probes, representing DNA sequences largely absent from the precipitate. Results We compare several widely used normalization approaches (VSN, LOWESS, quantile, T-quantile, Tukey's biweight scaling, Peng's method) applied to a selection of regulation microarray datasets, ranging from DNA methylation to transcription factor binding and histone modification studies. Through comparison of the data distributions of control probes and gene promoter probes before and after normalization, and assessment of the power to identify known enriched genomic regions after normalization, we demonstrate that there are clear differences in performance between normalization procedures. Conclusion T-quantile normalization applied separately on the channels and Tukey's biweight scaling outperform other methods in terms of the conservation of enriched and un-enriched signal separation, as well as in identification of genomic regions known to be enriched. T-quantile normalization is preferable as it additionally improves comparability between microarrays. In
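Of the strategies compared, quantile normalization is the easiest to sketch: every array is forced to share the same empirical distribution. A minimal numpy version (ties are broken arbitrarily here; production implementations average tied ranks):

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize the columns of X (probes x arrays): each
    array's values are replaced, rank for rank, by the mean of the
    sorted values across arrays, so all columns share one distribution."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)   # rank of each value per column
    mean_sorted = np.sort(X, axis=0).mean(axis=1)       # shared reference distribution
    return mean_sorted[ranks]

X = np.array([[5., 4., 3.],
              [2., 1., 4.],
              [3., 4., 6.],
              [4., 2., 8.]])
Xn = quantile_normalize(X)
print(Xn.mean(axis=0))  # identical column means after normalization
```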

  3. Approximate Reanalysis in Topology Optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole

    2009-01-01

    In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...

  4. The Neighboring Column Approximation (NCA) – A fast approach for the calculation of 3D thermal heating rates in cloud resolving models

    International Nuclear Information System (INIS)

    Klinger, Carolin; Mayer, Bernhard

    2016-01-01

    Due to computational costs, radiation is usually neglected or solved in a plane-parallel 1D approximation in today's numerical weather forecast and cloud resolving models. We present a fast and accurate method to calculate 3D heating and cooling rates in the thermal spectral range that can be used in cloud resolving models. The parameterization considers net fluxes across horizontal box boundaries in addition to the top and bottom boundaries. Since the largest heating and cooling rates occur inside the cloud, close to the cloud edge, the method needs in first approximation only the information whether a grid box is at the edge of a cloud or not. Therefore, in order to calculate the heating or cooling rates of a specific grid box, only the directly neighboring columns are used. Our so-called Neighboring Column Approximation (NCA) is an analytical consideration of cloud side effects which can be considered a convolution of a 1D radiative transfer result with a kernel or radius of 1 grid-box (5 pt stencil) and which does usually not break the parallelization of a cloud resolving model. The NCA can be easily applied to any cloud resolving model that includes a 1D radiation scheme. Due to the neglect of horizontal transport of radiation further away than one model column, the NCA works best for model resolutions of about 100 m or larger. In this paper we describe the method and show a set of applications to LES cloud field snapshots. Correction terms, gains and restrictions of the NCA are described. Comprehensive comparisons to the 3D Monte Carlo Model MYSTIC and a 1D solution are shown. In realistic cloud fields, the full 3D simulation with MYSTIC shows cooling rates up to −150 K/d (100 m resolution) while the 1D solution shows maximum coolings of only −100 K/d. The NCA is capable of reproducing the larger 3D cooling rates. The spatial distribution of the heating and cooling is improved considerably. Computational costs are only a factor of 1.5–2 higher compared to a 1D
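Since the parameterization in first approximation only needs to know whether a grid box sits at a cloud edge, that test can be sketched with shifted masks. This is an illustrative reading of the 5 pt stencil test, not the NCA heating-rate calculation itself:

```python
import numpy as np

def cloud_edge_mask(cloud):
    """Mark cloudy grid boxes with at least one clear horizontal
    neighbour -- the 5-point-stencil test deciding where 3D side
    effects matter. `cloud` is a 2D boolean horizontal slice."""
    clear = ~cloud
    neighbour_clear = np.zeros_like(cloud)
    for axis in (0, 1):
        for shift in (1, -1):
            neighbour_clear |= np.roll(clear, shift, axis=axis)
    return cloud & neighbour_clear

cloud = np.zeros((7, 7), dtype=bool)
cloud[2:5, 2:5] = True            # a 3x3 cloud block
edge = cloud_edge_mask(cloud)
print(edge.sum())  # 8 edge boxes: the 3x3 block minus its single interior box
```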

  5. Approximating distributions from moments

    Science.gov (United States)

    Pawula, R. F.

    1987-11-01

    A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.
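The flavour of the approach can be sketched by matching the simplest symmetric Pearson-type density, a symmetric beta on [-1, 1], to a prescribed second moment (a toy moment fit, not the paper's full extended Fokker-Planck machinery):

```python
import numpy as np

def symmetric_beta_from_moments(m2):
    """Match a symmetric Beta(a, a) density rescaled to [-1, 1] to a
    given second moment m2 = E[X^2].  For Beta(a, a) on [0, 1] the
    variance is 1 / (4(2a + 1)), so on [-1, 1] it is 1 / (2a + 1)."""
    return (1.0 / m2 - 1.0) / 2.0

# check by Monte Carlo: samples from the fitted density reproduce m2
rng = np.random.default_rng(0)
m2_target = 0.2                       # => a = 2
a = symmetric_beta_from_moments(m2_target)
samples = 2.0 * rng.beta(a, a, size=200_000) - 1.0
print(a, samples.var())               # a = 2.0, sample variance close to 0.2
```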

  6. CONTRIBUTIONS TO RATIONAL APPROXIMATION,

    Science.gov (United States)

    Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar’s...linear theorem which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev...Furthermore a Weierstrass type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)

  7. Supervised group Lasso with applications to microarray data analysis

    Directory of Open Access Journals (Sweden)

    Huang Jian

    2007-02-01

    Full Text Available Abstract Background A tremendous amount of effort has been devoted to identifying genes for diagnosis and prognosis of diseases using microarray gene expression data. It has been demonstrated that gene expression data have cluster structure, where the clusters consist of co-regulated genes which tend to have coordinated functions. However, most available statistical methods for gene selection do not take into consideration the cluster structure. Results We propose a supervised group Lasso approach that takes into account the cluster structure in gene expression data for gene selection and predictive model building. For gene expression data without biological cluster information, we first divide genes into clusters using the K-means approach and determine the optimal number of clusters using the Gap method. The supervised group Lasso consists of two steps. In the first step, we identify important genes within each cluster using the Lasso method. In the second step, we select important clusters using the group Lasso. Tuning parameters are determined using V-fold cross validation at both steps to allow for further flexibility. Prediction performance is evaluated using leave-one-out cross validation. We apply the proposed method to disease classification and survival analysis with microarray data. Conclusion We analyze four microarray data sets using the proposed approach: two cancer data sets with binary cancer occurrence as outcomes and two lymphoma data sets with survival outcomes. The results show that the proposed approach is capable of identifying a small number of influential gene clusters and important genes within those clusters, and has better prediction performance than existing methods.
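The cluster-selection step rests on the group-Lasso penalty, whose proximal operator shrinks, or entirely zeroes out, whole gene clusters at once. A minimal numpy sketch of that operator (illustrative of the penalty only, not of the authors' complete two-step pipeline):

```python
import numpy as np

def group_soft_threshold(v, lam):
    """Proximal operator of lam * ||v||_2 -- the groupwise shrinkage at
    the heart of the group Lasso: shrink the whole group toward zero,
    dropping it entirely when its norm falls below lam."""
    norm = np.linalg.norm(v)
    if norm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / norm) * v

# a strong gene cluster survives (shrunk), a weak one is dropped whole
strong = group_soft_threshold(np.array([3.0, 4.0]), lam=1.0)   # norm 5 -> scaled by 0.8
weak = group_soft_threshold(np.array([0.3, 0.4]), lam=1.0)     # norm 0.5 -> zeroed
print(strong, weak)
```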

  8. Approximation techniques for engineers

    CERN Document Server

    Komzsik, Louis

    2006-01-01

    Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data of continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.

  9. Generalization of DNA microarray dispersion properties: microarray equivalent of t-distribution

    DEFF Research Database (Denmark)

    Novak, Jaroslav P; Kim, Seon-Young; Xu, Jun

    2006-01-01

    BACKGROUND: DNA microarrays are a powerful technology that can provide a wealth of gene expression data for disease studies, drug development, and a wide scope of other investigations. Because of the large volume and inherent variability of DNA microarray data, many new statistical methods have...

  10. Application of a New Genetic Deafness Microarray for Detecting Mutations in the Deaf in China.

    Directory of Open Access Journals (Sweden)

    Hong Wu

    Full Text Available The aim of this study was to evaluate the GoldenGate microarray as a diagnostic tool and to elucidate the contribution of the genes on this array to the development of both nonsyndromic and syndromic sensorineural hearing loss in China. We developed a microarray to detect 240 mutations underlying syndromic and nonsyndromic sensorineural hearing loss. The microarray was then used for analysis of 382 patients with nonsyndromic sensorineural hearing loss (including 15 patients with enlarged vestibular aqueduct syndrome), 21 patients with Waardenburg syndrome, and 60 unrelated controls. Subsequently, we analyzed the sensitivity, specificity, and reproducibility of this new approach after Sanger sequencing-based verification, and also determined the contribution of the genes on this array to the development of distinct hearing disorders. The sensitivity and specificity of the microarray chip were 98.73% and 98.34%, respectively. Genetic defects were identified in 61.26% of the patients with nonsyndromic sensorineural hearing loss, and 9 causative genes were identified. The molecular etiology was confirmed in 19.05% and 46.67% of the patients with Waardenburg syndrome and enlarged vestibular aqueduct syndrome, respectively. Our new mutation-based microarray comprises an accurate and comprehensive genetic tool for the detection of sensorineural hearing loss. This microarray-based detection method could serve as a first-pass screening (before next-generation-sequencing screening) for deafness-causing mutations in China.

  11. Network Expansion and Pathway Enrichment Analysis towards Biologically Significant Findings from Microarrays

    Directory of Open Access Journals (Sweden)

    Wu Xiaogang

    2012-06-01

    Full Text Available In many cases, crucial genes show relatively slight changes between groups of samples (e.g. normal vs. disease), and many genes selected from microarray differential analysis by measuring the expression level statistically are also poorly annotated and lack biological significance. In this paper, we present an innovative approach - network expansion and pathway enrichment analysis (NEPEA) for integrative microarray analysis. We assume that organized knowledge will help microarray data analysis in significant ways, and the organized knowledge could be represented as molecular interaction networks or biological pathways. Based on this hypothesis, we develop the NEPEA framework based on network expansion from the human annotated and predicted protein interaction (HAPPI) database, and pathway enrichment from the human pathway database (HPD). We use a recently-published microarray dataset (GSE24215) related to insulin resistance and type 2 diabetes (T2D) as case study, since this study provided a thorough experimental validation for both genes and pathways identified computationally from classical microarray analysis and pathway analysis. We perform our NEPEA analysis for this dataset based on the results from the classical microarray analysis to identify biologically significant genes and pathways. Our findings are largely consistent with the original ones, and also gain support from other literature.

  12. Parallel scan hyperspectral fluorescence imaging system and biomedical application for microarrays

    International Nuclear Information System (INIS)

    Liu Zhiyi; Ma Suihua; Liu Le; Guo Jihua; He Yonghong; Ji Yanhong

    2011-01-01

    Microarray research offers great potential for analysis of gene expression profile and leads to greatly improved experimental throughput. A number of instruments have been reported for microarray detection, such as chemiluminescence, surface plasmon resonance, and fluorescence markers. Fluorescence imaging is popular for the readout of microarrays. In this paper we develop a quasi-confocal, multichannel parallel scan hyperspectral fluorescence imaging system for microarray research. Hyperspectral imaging records the entire emission spectrum for every voxel within the imaged area in contrast to recording only fluorescence intensities of filter-based scanners. Coupled with data analysis, the recorded spectral information allows for quantitative identification of the contributions of multiple, spectrally overlapping fluorescent dyes and elimination of unwanted artifacts. The mechanism of quasi-confocal imaging provides a high signal-to-noise ratio, and parallel scan makes this approach a high throughput technique for microarray analysis. This system is improved with a specifically designed spectrometer which can offer a spectral resolution of 0.2 nm, and operates with spatial resolutions ranging from 2 to 30 μm. Finally, the application of the system is demonstrated by reading out microarrays for identification of bacteria.
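The quantitative separation of spectrally overlapping dyes mentioned above is commonly posed as linear unmixing. A generic sketch using non-negative least squares (the Gaussian reference spectra and wavelengths are made up for illustration, not the instrument's calibration):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical emission spectra of two overlapping dyes, sampled on a
# common wavelength grid; unmixing recovers each dye's contribution.
wl = np.linspace(500.0, 700.0, 101)
dye_a = np.exp(-((wl - 560.0) / 20.0) ** 2)
dye_b = np.exp(-((wl - 600.0) / 25.0) ** 2)
A = np.column_stack([dye_a, dye_b])        # reference spectra as columns

measured = 2.0 * dye_a + 0.5 * dye_b       # voxel spectrum = linear mixture
coeffs, resid = nnls(A, measured)          # non-negative least squares unmixing
print(coeffs)  # close to [2.0, 0.5]
```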

  13. Nanotechnology: moving from microarrays toward nanoarrays.

    Science.gov (United States)

    Chen, Hua; Li, Jun

    2007-01-01

    Microarrays are important tools for high-throughput analysis of biomolecules. The use of microarrays for parallel screening of nucleic acid and protein profiles has become an industry standard. A few limitations of microarrays are the requirement for relatively large sample volumes and extended incubation times, as well as the limit of detection. In addition, traditional microarrays make use of bulky instrumentation for the detection, and sample amplification and labeling are quite laborious, which increases analysis cost and delays the results. These problems limit microarray techniques from point-of-care and field applications. One strategy for overcoming these problems is to develop nanoarrays, particularly electronics-based nanoarrays. With further miniaturization, higher sensitivity, and simplified sample preparation, nanoarrays could potentially be employed for biomolecular analysis in personal healthcare and monitoring of trace pathogens. In this chapter, we introduce the concept and advantages of nanotechnology and then describe current methods and protocols for novel nanoarrays in three aspects: (1) label-free nucleic acids analysis using nanoarrays, (2) nanoarrays for protein detection by conventional optical fluorescence microscopy as well as by novel label-free methods such as atomic force microscopy, and (3) nanoarrays for enzymatic-based assays. These nanoarrays will have significant applications in drug discovery, medical diagnosis, genetic testing, environmental monitoring, and food safety inspection.

  14. Expectation Consistent Approximate Inference

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...

  15. A New Approach of Asymmetric Homoclinic and Heteroclinic Orbits Construction in Several Typical Systems Based on the Undetermined Padé Approximation Method

    Directory of Open Access Journals (Sweden)

    Jingjing Feng

    2016-01-01

    Full Text Available In dynamic systems, some nonlinearities generate special connection problems of non-Z2 symmetric homoclinic and heteroclinic orbits. Such orbits are important for analyzing problems of global bifurcation and chaos. In this paper, a general analytical method, based on the undetermined Padé approximation method, is proposed to construct non-Z2 symmetric homoclinic and heteroclinic orbits which are affected by nonlinearity factors. Geometric and symmetrical characteristics of non-Z2 heteroclinic orbits are analyzed in detail. An undetermined frequency coefficient and a corresponding new analytic expression are introduced to improve the accuracy of the orbit trajectory. The proposed method shows high-precision results for the Nagumo system (one single orbit); general types of non-Z2 symmetric nonlinear quintic systems (orbit with one cusp); and Z2 symmetric systems with high-order nonlinear terms (orbit with two cusps). Finally, numerical simulations are used to verify the techniques and demonstrate the enhanced efficiency and precision of the proposed method.
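For readers unfamiliar with Padé approximants, the classical fully determined construction is available in scipy. The sketch below builds the [3/2] approximant of exp(x) from its Taylor coefficients; this is the standard construction, not the undetermined-coefficient variant proposed in the paper:

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to x^5
an = [1.0 / math.factorial(k) for k in range(6)]
p, q = pade(an, 2)            # [3/2] Pade approximant: numerator deg 3, denominator deg 2
x = 1.0
approx = p(x) / q(x)
print(approx, math.exp(x))    # the Pade value beats the truncated Taylor series at x = 1
```

The rational form typically extends the region of useful accuracy well beyond that of the truncated series from which it is built.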

  16. From microarray to biology: an integrated experimental, statistical and in silico analysis of how the extracellular matrix modulates the phenotype of cancer cells

    OpenAIRE

    Centola Michael B; Dozmorov Igor; Buethe David D; Saban Ricardo; Hauser Paul J; Kyker Kimberly D; Dozmorov Mikhail G; Culkin Daniel J; Hurst Robert E

    2008-01-01

    Abstract A statistically robust and biologically-based approach for analysis of microarray data is described that integrates independent biological knowledge and data with a global F-test for finding genes of interest that minimizes the need for replicates when used for hypothesis generation. First, each microarray is normalized to its noise level around zero. The microarray dataset is then globally adjusted by robust linear regression. Second, genes of interest that capture significant respo...

  17. A Java-based tool for the design of classification microarrays

    Directory of Open Access Journals (Sweden)

    Broschat Shira L

    2008-08-01

    Full Text Available Abstract Background Classification microarrays are used for purposes such as identifying strains of bacteria and determining genetic relationships to understand the epidemiology of an infectious disease. For these cases, mixed microarrays, which are composed of DNA from more than one organism, are more effective than conventional microarrays composed of DNA from a single organism. Selection of probes is a key factor in designing successful mixed microarrays because redundant sequences are inefficient and limited representation of diversity can restrict application of the microarray. We have developed a Java-based software tool, called PLASMID, for use in selecting the minimum set of probe sequences needed to classify different groups of plasmids or bacteria. Results The software program was successfully applied to several different sets of data. The utility of PLASMID was illustrated using existing mixed-plasmid microarray data as well as data from a virtual mixed-genome microarray constructed from different strains of Streptococcus. Moreover, use of data from expression microarray experiments demonstrated the generality of PLASMID. Conclusion In this paper we describe a new software tool for selecting a set of probes for a classification microarray. While the tool was developed for the design of mixed microarrays–and mixed-plasmid microarrays in particular–it can also be used to design expression arrays. The user can choose from several clustering methods (including hierarchical, non-hierarchical, and a model-based genetic algorithm), several probe ranking methods, and several different display methods. A novel approach is used for probe redundancy reduction, and probe selection is accomplished via stepwise discriminant analysis. Data can be entered in different formats (including Excel and comma-delimited text), and dendrogram, heat map, and scatter plot images can be saved in several different formats (including jpeg and tiff). Weights
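The underlying probe-selection problem can be illustrated as a greedy set cover over strain pairs. This is a simplified stand-in for PLASMID's redundancy reduction and stepwise discriminant analysis; the strains and hybridization profiles below are invented:

```python
from itertools import combinations

def greedy_probe_selection(profiles):
    """Choose a small probe subset that separates every pair of strains.
    profiles: strain -> tuple of 0/1 hybridization calls (one per probe).
    Greedy set cover: repeatedly pick the probe that distinguishes the
    most still-confused strain pairs."""
    n_probes = len(next(iter(profiles.values())))
    pairs = set(combinations(profiles, 2))
    chosen = []
    while pairs:
        best = max(range(n_probes),
                   key=lambda p: sum(profiles[a][p] != profiles[b][p]
                                     for a, b in pairs))
        newly = {(a, b) for a, b in pairs if profiles[a][best] != profiles[b][best]}
        if not newly:
            raise ValueError("remaining strains are indistinguishable")
        chosen.append(best)
        pairs -= newly
    return chosen

profiles = {"S1": (1, 0, 0, 1),
            "S2": (1, 1, 0, 0),
            "S3": (0, 1, 0, 1),
            "S4": (0, 0, 0, 1)}
print(greedy_probe_selection(profiles))  # -> [0, 1]: two probes suffice
```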

  18. Rational approximation of vertical segments

    Science.gov (United States)

    Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte

    2007-08-01

    In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
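The reduction to a tractable optimization problem can be illustrated for a degree-(1,1) rational function: the interval constraints are linear in the unknown coefficients, so feasibility alone is a linear program. The following is a toy LP sketch; the paper itself uses a quadratic program with a strictly convex objective to single out a unique solution:

```python
import numpy as np
from scipy.optimize import linprog

def rational_through_intervals(x, lo, hi, delta=1e-6):
    """Find a degree-(1,1) rational r(t) = (a0 + a1 t) / (1 + b1 t) with
    lo_i <= r(x_i) <= hi_i for all samples, via an LP feasibility problem:
    lo_i <= p/q <= hi_i with q > 0 is linear in (a0, a1, b1)."""
    rows, rhs = [], []
    for xi, li, ui in zip(x, lo, hi):
        rows.append([-1.0, -xi, li * xi]); rhs.append(-li)      # p - li*q >= 0
        rows.append([1.0, xi, -ui * xi]); rhs.append(ui)        # ui*q - p >= 0
        rows.append([0.0, 0.0, -xi]); rhs.append(1.0 - delta)   # q >= delta
    res = linprog(c=[0.0, 0.0, 0.0], A_ub=rows, b_ub=rhs,
                  bounds=[(None, None)] * 3)
    if not res.success:
        raise ValueError("no rational of this degree fits the intervals")
    a0, a1, b1 = res.x
    return lambda t: (a0 + a1 * t) / (1.0 + b1 * t)

x = np.linspace(0.0, 2.0, 9)
f = 1.0 / (1.0 + x)                       # pretend these are uncertain measurements
r = rational_through_intervals(x, f - 0.1, f + 0.1)
print(np.abs(r(x) - f).max())             # stays within the 0.1 uncertainty bands
```

Replacing the zero objective with a strictly convex quadratic in the coefficients would pick one of the generally many feasible rationals, which is the route the abstract describes.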

  19. Face Recognition using Approximate Arithmetic

    DEFF Research Database (Denmark)

    Marso, Karol

    Face recognition is an image processing technique which aims to identify human faces and has found use in various fields, for example security. Throughout the years this field has evolved and there are many approaches and many different algorithms which aim to make face recognition as effective...... processing applications the results do not need to be completely precise, and use of approximate arithmetic can lead to reductions in delay, space and power consumption. In this paper we examine the possible use of approximate arithmetic in face recognition using the Eigenfaces algorithm.

  20. Towards the integration, annotation and association of historical microarray experiments with RNA-seq.

    Science.gov (United States)

    Chavan, Shweta S; Bauer, Michael A; Peterson, Erich A; Heuck, Christoph J; Johann, Donald J

    2013-01-01

    Transcriptome analysis by microarrays has produced important advances in biomedicine. For instance, in multiple myeloma (MM), microarray approaches led to the development of an effective disease subtyping via cluster assignment and a 70-gene risk score. Both enabled an improved molecular understanding of MM, and have provided prognostic information for the purposes of clinical management. Many researchers are now transitioning to Next Generation Sequencing (NGS) approaches and RNA-seq in particular, due to its discovery-based nature, improved sensitivity, and dynamic range. Additionally, RNA-seq allows for the analysis of gene isoforms, splice variants, and novel gene fusions. Given the voluminous amounts of historical microarray data, there is now a need to associate and integrate microarray and RNA-seq data via advanced bioinformatic approaches. Custom software was developed following a model-view-controller (MVC) approach to integrate Affymetrix probe set IDs and gene annotation information from a variety of sources. The tool/approach employs an assortment of strategies to integrate, cross-reference, and associate microarray and RNA-seq datasets. Output from a variety of transcriptome reconstruction and quantitation tools (e.g., Cufflinks) can be directly integrated, and/or associated with Affymetrix probe set data, as well as necessary gene identifiers and/or symbols from a diversity of sources. Strategies are employed to maximize the annotation and cross-referencing process. Custom gene sets (e.g., the MM 70 risk score (GEP-70)) can be specified, and the tool can be directly assimilated into an RNA-seq pipeline. A novel bioinformatic approach that facilitates both annotation and association of historical microarray data, in conjunction with richer RNA-seq data, is now assisting with the study of MM cancer biology.
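    The core association step described above amounts to joining two platforms through a shared gene identifier. A minimal sketch of that join, with invented probe IDs and values (the actual tool draws annotation from several sources and consumes Cufflinks-style output):

```python
# Minimal sketch of the association step. All IDs and numbers below are
# invented for illustration, not taken from the tool described above.

# Re-annotation table: Affymetrix-style probe set ID -> gene symbol.
probe_to_symbol = {
    "200638_s_at": "YWHAZ",
    "201292_at":   "TOP2A",
}

# RNA-seq quantitation keyed by gene symbol (e.g. FPKM values).
rnaseq_fpkm = {"YWHAZ": 832.1, "TOP2A": 55.4, "MYC": 210.0}

# Microarray intensities keyed by probe set ID.
array_signal = {"200638_s_at": 11.2, "201292_at": 7.9}

# Join the two platforms through the shared gene symbol.
associated = {
    probe: (array_signal[probe], rnaseq_fpkm[probe_to_symbol[probe]])
    for probe in array_signal
    if probe_to_symbol.get(probe) in rnaseq_fpkm
}
for probe, (chip, fpkm) in sorted(associated.items()):
    print(probe, probe_to_symbol[probe], chip, fpkm)
```

    In practice the cross-referencing is the hard part: probe re-annotation tables change over time, and one gene may map to several probe sets, which is why the tool layers multiple annotation sources over this basic join.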

  1. Development of a genotyping microarray for Usher syndrome.

    Science.gov (United States)

    Cremers, Frans P M; Kimberling, William J; Külm, Maigi; de Brouwer, Arjan P; van Wijk, Erwin; te Brinke, Heleen; Cremers, Cor W R J; Hoefsloot, Lies H; Banfi, Sandro; Simonelli, Francesca; Fleischhauer, Johannes C; Berger, Wolfgang; Kelley, Phil M; Haralambous, Elene; Bitner-Glindzicz, Maria; Webster, Andrew R; Saihan, Zubin; De Baere, Elfride; Leroy, Bart P; Silvestri, Giuliana; McKay, Gareth J; Koenekoop, Robert K; Millan, Jose M; Rosenberg, Thomas; Joensuu, Tarja; Sankila, Eeva-Marja; Weil, Dominique; Weston, Mike D; Wissinger, Bernd; Kremer, Hannie

    2007-02-01

    Usher syndrome, a combination of retinitis pigmentosa (RP) and sensorineural hearing loss with or without vestibular dysfunction, displays a high degree of clinical and genetic heterogeneity. Three clinical subtypes can be distinguished, based on the age of onset and severity of the hearing impairment, and the presence or absence of vestibular abnormalities. Thus far, eight genes have been implicated in the syndrome, together comprising 347 protein-coding exons. To improve DNA diagnostics for patients with Usher syndrome, we developed a genotyping microarray based on the arrayed primer extension (APEX) method. Allele-specific oligonucleotides corresponding to all 298 Usher syndrome-associated sequence variants known to date, 76 of which are novel, were arrayed. Approximately half of these variants were validated using original patient DNAs, which yielded an accuracy of >98%. The efficiency of the Usher genotyping microarray was tested using DNAs from 370 unrelated European and American patients with Usher syndrome. Sequence variants were identified in 64/140 (46%) patients with Usher syndrome type I, 45/189 (24%) patients with Usher syndrome type II, 6/21 (29%) patients with Usher syndrome type III and 6/20 (30%) patients with atypical Usher syndrome. The chip also identified two novel sequence variants, c.400C>T (p.R134X) in PCDH15 and c.1606T>C (p.C536S) in USH2A. The Usher genotyping microarray is a versatile and affordable screening tool for Usher syndrome. Its efficiency will improve with the addition of novel sequence variants with minimal extra costs, making it a very useful first-pass screening tool.

  2. Development of a genotyping microarray for Usher syndrome

    Science.gov (United States)

    Cremers, Frans P M; Kimberling, William J; Külm, Maigi; de Brouwer, Arjan P; van Wijk, Erwin; te Brinke, Heleen; Cremers, Cor W R J; Hoefsloot, Lies H; Banfi, Sandro; Simonelli, Francesca; Fleischhauer, Johannes C; Berger, Wolfgang; Kelley, Phil M; Haralambous, Elene; Bitner‐Glindzicz, Maria; Webster, Andrew R; Saihan, Zubin; De Baere, Elfride; Leroy, Bart P; Silvestri, Giuliana; McKay, Gareth J; Koenekoop, Robert K; Millan, Jose M; Rosenberg, Thomas; Joensuu, Tarja; Sankila, Eeva‐Marja; Weil, Dominique; Weston, Mike D; Wissinger, Bernd; Kremer, Hannie

    2007-01-01

    Background Usher syndrome, a combination of retinitis pigmentosa (RP) and sensorineural hearing loss with or without vestibular dysfunction, displays a high degree of clinical and genetic heterogeneity. Three clinical subtypes can be distinguished, based on the age of onset and severity of the hearing impairment, and the presence or absence of vestibular abnormalities. Thus far, eight genes have been implicated in the syndrome, together comprising 347 protein‐coding exons. Methods: To improve DNA diagnostics for patients with Usher syndrome, we developed a genotyping microarray based on the arrayed primer extension (APEX) method. Allele‐specific oligonucleotides corresponding to all 298 Usher syndrome‐associated sequence variants known to date, 76 of which are novel, were arrayed. Results Approximately half of these variants were validated using original patient DNAs, which yielded an accuracy of >98%. The efficiency of the Usher genotyping microarray was tested using DNAs from 370 unrelated European and American patients with Usher syndrome. Sequence variants were identified in 64/140 (46%) patients with Usher syndrome type I, 45/189 (24%) patients with Usher syndrome type II, 6/21 (29%) patients with Usher syndrome type III and 6/20 (30%) patients with atypical Usher syndrome. The chip also identified two novel sequence variants, c.400C>T (p.R134X) in PCDH15 and c.1606T>C (p.C536S) in USH2A. Conclusion The Usher genotyping microarray is a versatile and affordable screening tool for Usher syndrome. Its efficiency will improve with the addition of novel sequence variants with minimal extra costs, making it a very useful first‐pass screening tool. PMID:16963483

  3. Accurate detection of carcinoma cells by use of a cell microarray chip.

    Directory of Open Access Journals (Sweden)

    Shohei Yamamura

    Full Text Available BACKGROUND: Accurate detection and analysis of circulating tumor cells plays an important role in the diagnosis and treatment of metastatic cancer. METHODS AND FINDINGS: A cell microarray chip was used to detect spiked carcinoma cells among leukocytes. The chip, with 20,944 microchambers (105 µm width and 50 µm depth), was made from polystyrene; and the formation of monolayers of leukocytes in the microchambers was observed. Cultured human T lymphoblastoid leukemia (CCRF-CEM) cells were used to examine the potential of the cell microarray chip for the detection of spiked carcinoma cells. A T lymphoblastoid leukemia suspension was dispersed on the chip surface, followed by 15 min standing to allow the leukocytes to settle down into the microchambers. Approximately 29 leukocytes were found in each microchamber when about 600,000 leukocytes in total were dispersed onto a cell microarray chip. Similarly, when leukocytes isolated from human whole blood were used, approximately 89 leukocytes entered each microchamber when about 1,800,000 leukocytes in total were placed onto the cell microarray chip. After washing the chip surface, PE-labeled anti-cytokeratin monoclonal antibody and APC-labeled anti-CD326 (EpCAM) monoclonal antibody solution were dispersed onto the chip surface and allowed to react for 15 min; and then a microarray scanner was employed to detect any fluorescence-positive cells within 20 min. In the experiments using spiked carcinoma cells (NCI-H1650, 0.01 to 0.0001%), accurate detection of carcinoma cells was achieved with PE-labeled anti-cytokeratin monoclonal antibody. Furthermore, verification of carcinoma cells in the microchambers was performed by double staining with the above monoclonal antibodies. CONCLUSION: The potential application of the cell microarray chip for the detection of CTCs was shown, thus demonstrating accurate detection by double staining for cytokeratin and EpCAM at the single carcinoma cell level.
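    The reported chamber occupancies follow from simple division of the dispersed cell count by the number of microchambers, which is a quick consistency check on the numbers in the abstract:

```python
# Consistency check on the reported chamber occupancy: total cells
# dispersed on the chip divided by the 20,944 microchambers.
chambers = 20944
per_chamber_cem = 600_000 / chambers        # cultured CCRF-CEM experiment
per_chamber_blood = 1_800_000 / chambers    # leukocytes from whole blood

print(round(per_chamber_cem))    # matches the reported ~29 per chamber
print(round(per_chamber_blood))  # uniform settling predicts ~86; the
                                 # study observed ~89 per chamber
```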

  4. Approximate and renormgroup symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling

    2009-07-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  5. Approximate and renormgroup symmetries

    International Nuclear Information System (INIS)

    Ibragimov, Nail H.; Kovalev, Vladimir F.

    2009-01-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  6. Approximations of Fuzzy Systems

    Directory of Open Access Journals (Sweden)

    Vinai K. Singh

    2013-03-01

    Full Text Available A fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. Such results can be viewed as establishing the existence of optimal fuzzy systems. Li-Xin Wang discussed a similar problem using Gaussian membership functions and the Stone-Weierstrass Theorem. He established that fuzzy systems with product inference, centroid defuzzification and Gaussian membership functions are capable of approximating any real continuous function on a compact set to arbitrary accuracy. In this paper we study a similar approximation problem using exponential membership functions.
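    For a single input, the fuzzy system form discussed above (Gaussian membership functions, product inference, centroid defuzzification) reduces to a normalized weighted sum of rule consequents. A minimal sketch under that assumption, with illustrative centers, width, and target function:

```python
import math

# Sketch of a one-input fuzzy system with Gaussian membership functions,
# product inference and centroid defuzzification. For one input this is
# a normalized weighted sum over the rules. Centers, width and the
# target function (sin) are illustrative choices.

def fuzzy_approximator(centers, consequents, sigma):
    def f_hat(x):
        w = [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]
        return sum(wi * yi for wi, yi in zip(w, consequents)) / sum(w)
    return f_hat

# One rule per center; the consequent is the target value at the center.
step = 0.05
centers = [i * step for i in range(-4, int(math.pi / step) + 5)]
f_hat = fuzzy_approximator(centers, [math.sin(c) for c in centers], step)

err = max(abs(f_hat(x) - math.sin(x))
          for x in [i * 0.01 for i in range(int(math.pi / 0.01) + 1)])
print(f"max error on [0, pi]: {err:.4f}")
```

    Densifying the rule centers (smaller `step`) drives the error down, which is the constructive content of the uniform approximation results cited in the abstract.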

  7. General Rytov approximation.

    Science.gov (United States)

    Potvin, Guy

    2015-10-01

    We examine how the Rytov approximation describing log-amplitude and phase fluctuations of a wave propagating through weak uniform turbulence can be generalized to the case of turbulence with a large-scale nonuniform component. We show how the large-scale refractive index field creates Fermat rays using the path integral formulation for paraxial propagation. We then show how the second-order derivatives of the Fermat ray action affect the Rytov approximation, and we discuss how a numerical algorithm would model the general Rytov approximation.

  8. Development and application of an oligonucleotide microarray and real-time quantitative PCR for detection of wastewater bacterial pathogens

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dae-Young [National Water Research Institute, Environment Canada, 867 Lakeshore Road, Burlington, Ontario, L7R 4A6 (Canada)], E-mail: daeyoung.lee@ec.gc.ca; Lauder, Heather; Cruwys, Heather; Falletta, Patricia [National Water Research Institute, Environment Canada, 867 Lakeshore Road, Burlington, Ontario, L7R 4A6 (Canada); Beaudette, Lee A. [Environmental Science and Technology Centre, Environment Canada, 335 River Road South, Ottawa, Ontario, K1A 0H3 (Canada)], E-mail: lee.beaudette@ec.gc.ca

    2008-07-15

    Conventional microbial water quality test methods are well known for their technical limitations, such as lack of direct pathogen detection capacity and low throughput capability. The microarray assay has recently emerged as a promising alternative for environmental pathogen monitoring. In this study, bacterial pathogens were detected in municipal wastewater using a microarray equipped with short oligonucleotide probes targeting 16S rRNA sequences. To date, 62 probes have been designed against 38 species, 4 genera, and 1 family of pathogens. The detection sensitivity of the microarray for the waterborne pathogen Aeromonas hydrophila was determined to be approximately 1.0% of the total DNA, or approximately 10^3 A. hydrophila cells per sample. The efficacy of the DNA microarray was verified in a parallel study where pathogen genes and E. coli cells were enumerated using real-time quantitative PCR (qPCR) and standard membrane filter techniques, respectively. The microarray and qPCR successfully detected multiple wastewater pathogen species at different stages of the disinfection process (i.e. secondary effluents vs. disinfected final effluents) and at two treatment plants employing different disinfection methods (i.e. chlorination vs. UV irradiation). This result demonstrates the effectiveness of the DNA microarray as a semi-quantitative, high throughput pathogen monitoring tool for municipal wastewater.

  9. The use of microarrays in microbial ecology

    Energy Technology Data Exchange (ETDEWEB)

    Andersen, G.L.; He, Z.; DeSantis, T.Z.; Brodie, E.L.; Zhou, J.

    2009-09-15

    Microarrays have proven to be a useful and high-throughput method to provide targeted DNA sequence information for up to many thousands of specific genetic regions in a single test. A microarray consists of multiple DNA oligonucleotide probes that, under high stringency conditions, hybridize only to specific complementary nucleic acid sequences (targets). A fluorescent signal indicates the presence and, in many cases, the abundance of genetic regions of interest. In this chapter we will look at how microarrays are used in microbial ecology, especially with the recent increase in microbial community DNA sequence data. Of particular interest to microbial ecologists, phylogenetic microarrays are used for the analysis of phylotypes in a community and functional gene arrays are used for the analysis of functional genes, and, by inference, phylotypes in environmental samples. A phylogenetic microarray that has been developed by the Andersen laboratory, the PhyloChip, will be discussed as an example of a microarray that targets the known diversity within the 16S rRNA gene to determine microbial community composition. Using multiple, confirmatory probes to increase the confidence of detection and a mismatch probe for every perfect match probe to minimize the effect of cross-hybridization by non-target regions, the PhyloChip is able to simultaneously identify any of thousands of taxa present in an environmental sample. The PhyloChip is shown to reveal greater diversity within a community than rRNA gene sequencing due to the placement of the entire gene product on the microarray compared with the analysis of up to thousands of individual molecules by traditional sequencing methods. A functional gene array that has been developed by the Zhou laboratory, the GeoChip, will be discussed as an example of a microarray that dynamically identifies functional activities of multiple members within a community. The recent version of GeoChip contains more than 24,000 50mer oligonucleotide probes.
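    The probe-pair scoring idea described above can be sketched as follows: each taxon has several confirmatory perfect-match (PM) probes, each paired with a mismatch (MM) control, and a taxon is called present when most of its pairs are clearly positive. The threshold and required fraction below are illustrative, not the actual PhyloChip algorithm parameters:

```python
# Sketch of PM/MM probe-pair scoring. A pair is "positive" when the
# perfect-match signal clearly exceeds its mismatch control (reducing
# cross-hybridization artifacts), and a taxon is called present when
# most of its confirmatory pairs are positive. The 1.3 ratio and 90%
# fraction are invented illustrative values.

def call_taxon(pairs, ratio=1.3, frac=0.9):
    positives = sum(1 for pm, mm in pairs if pm > ratio * mm)
    return positives / len(pairs) >= frac

taxon_a = [(820, 310), (950, 400), (700, 280), (880, 350)]  # all positive
taxon_b = [(420, 400), (380, 390), (900, 300), (410, 405)]  # mostly cross-hyb
print(call_taxon(taxon_a), call_taxon(taxon_b))  # True False
```

    Requiring agreement across multiple confirmatory probes is what lets the array tolerate occasional cross-hybridizing probes without false taxon calls.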

  10. 3D Biomaterial Microarrays for Regenerative Medicine

    DEFF Research Database (Denmark)

    Gaharwar, Akhilesh K.; Arpanaei, Ayyoob; Andresen, Thomas Lars

    2015-01-01

    Three dimensional (3D) biomaterial microarrays hold enormous promise for regenerative medicine because of their ability to accelerate the design and fabrication of biomimetic materials. Such tissue-like biomaterials can provide an appropriate microenvironment for stimulating and controlling stem cell differentiation into tissue-specific lineages. The use of 3D biomaterial microarrays can, if optimized correctly, result in a more than 1000-fold reduction in biomaterials and cells consumption when engineering optimal materials combinations, which makes these miniaturized systems very attractive for tissue engineering and drug screening applications.

  11. Simple approach to approximate predictions of the vapor–liquid equilibrium curve near the critical point and its application to Lennard-Jones fluids

    International Nuclear Information System (INIS)

    Staśkiewicz, B.; Okrasiński, W.

    2012-01-01

    We propose a simple analytical form of the vapor–liquid equilibrium curve near the critical point for Lennard-Jones fluids. Coexistence density curves and vapor pressure have been determined using the Van der Waals and Dieterici equations of state. In the described method, Bernoulli differential equations, critical exponent theory and a form of Maxwell's criterion are used. The presented approach has not previously been used to determine the analytical form of phase curves as done in this Letter. Lennard-Jones fluids have been considered for the analysis. A comparison with experimental data is made. The accuracy of the method is described. -- Highlights: ► We propose a new analytical way to determine the VLE curve. ► A simple, mathematically straightforward form of the phase curves is presented. ► Comparison with experimental data is discussed. ► The accuracy of the method has been confirmed.

  12. Development and application of a microarray meter tool to optimize microarray experiments

    Directory of Open Access Journals (Sweden)

    Rouse Richard JD

    2008-07-01

    Full Text Available Abstract Background Successful microarray experimentation requires a complex interplay between the slide chemistry, the printing pins, the nucleic acid probes and targets, and the hybridization milieu. Optimization of these parameters and a careful evaluation of emerging slide chemistries are a prerequisite to any large scale array fabrication effort. We have developed a 'microarray meter' tool which assesses the inherent variations associated with microarray measurement prior to embarking on large scale projects. Findings The microarray meter consists of nucleic acid targets (reference and dynamic range control) and probe components. Different plate designs containing identical probe material were formulated to accommodate different robotic and pin designs. We examined the variability in probe quality and quantity (as judged by the amount of DNA printed and remaining post-hybridization) using three robots equipped with capillary printing pins. Discussion The generation of microarray data with minimal variation requires consistent quality control of the (DNA) microarray manufacturing and experimental processes. Spot reproducibility is a measure primarily of the variations associated with printing. The microarray meter assesses array quality by measuring the DNA content for every feature. It provides a post-hybridization analysis of array quality by scoring probe performance using three metrics: (a) a measure of variability in the signal intensities, (b) a measure of the signal dynamic range and (c) a measure of variability of the spot morphologies.
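    The three probe-performance metrics named above can be computed directly from spot measurements. A minimal sketch on invented replicate-spot data (the concrete formulas here — coefficients of variation and a log dynamic range — are plausible choices, not necessarily the microarray meter's exact definitions):

```python
import math
import statistics

# Invented replicate-spot data for illustration:
intensities = [5200, 4900, 5100, 5350, 4800]     # replicate spot signals
range_controls = [120, 480, 1900, 7600, 30500]   # dilution-series spots
diameters_um = [102, 98, 105, 101, 99]           # spot diameters (um)

# (a) variability of signal intensities: coefficient of variation
cv_signal = statistics.stdev(intensities) / statistics.mean(intensities)
# (b) signal dynamic range: decades between brightest and dimmest control
dynamic_range = math.log10(max(range_controls) / min(range_controls))
# (c) variability of spot morphologies: CV of measured spot diameters
cv_morphology = statistics.stdev(diameters_um) / statistics.mean(diameters_um)

print(f"signal CV: {cv_signal:.3f}")
print(f"dynamic range: {dynamic_range:.2f} decades")
print(f"morphology CV: {cv_morphology:.3f}")
```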

  13. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  14. INTOR cost approximation

    International Nuclear Information System (INIS)

    Knobloch, A.F.

    1980-01-01

    A simplified cost approximation for INTOR parameter sets in a narrow parameter range is shown. Plausible constraints permit the evaluation of the consequences of parameter variations on overall cost. (orig.) [de

  15. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research, including theoretical developments and new computational algorithms.

  16. Microarray Я US: a user-friendly graphical interface to Bioconductor tools that enables accurate microarray data analysis and expedites comprehensive functional analysis of microarray results.

    Science.gov (United States)

    Dai, Yilin; Guo, Ling; Li, Meng; Chen, Yi-Bu

    2012-06-08

    Microarray data analysis presents a significant challenge to researchers who are unable to use the powerful Bioconductor and its numerous tools due to their lack of knowledge of R language. Among the few existing software programs that offer a graphic user interface to Bioconductor packages, none have implemented a comprehensive strategy to address the accuracy and reliability issue of microarray data analysis due to the well known probe design problems associated with many widely used microarray chips. There is also a lack of tools that would expedite the functional analysis of microarray results. We present Microarray Я US, an R-based graphical user interface that implements over a dozen popular Bioconductor packages to offer researchers a streamlined workflow for routine differential microarray expression data analysis without the need to learn R language. In order to enable a more accurate analysis and interpretation of microarray data, we incorporated the latest custom probe re-definition and re-annotation for Affymetrix and Illumina chips. A versatile microarray results output utility tool was also implemented for easy and fast generation of input files for over 20 of the most widely used functional analysis software programs. Coupled with a well-designed user interface, Microarray Я US leverages cutting edge Bioconductor packages for researchers with no knowledge in R language. It also enables a more reliable and accurate microarray data analysis and expedites downstream functional analysis of microarray results.

  17. Design issues in toxicogenomics using DNA microarray experiment

    International Nuclear Information System (INIS)

    Lee, Kyoung-Mu; Kim, Ju-Han; Kang, Daehee

    2005-01-01

    The methods of toxicogenomics might be classified into omics studies (e.g., genomics, proteomics, and metabolomics) and population studies focusing on risk assessment and gene-environment interaction. In omics studies, the microarray is the most popular approach. Up to 20,000 genes falling into several categories (e.g., xenobiotic metabolism, cell cycle control, DNA repair) can be selected according to an a priori hypothesis. The appropriate type of samples and species should be selected in advance. Multiple doses and varied exposure durations are suggested to identify those genes clearly linked to toxic response. Microarray experiments can be affected by numerous nuisance variables including experimental design, sample extraction, and type of scanner. The number of slides might be determined from the magnitude and variance of the expression change, the false-positive rate, and the desired power; pooling samples is an alternative. Online databases on chemicals with known exposure-disease outcomes and genetic information can aid the interpretation of the normalized results. Gene function can be inferred from microarray data analyzed by bioinformatics methods such as cluster analysis. Population studies often adopt hospital-based or nested case-control designs. Biases in subject selection and exposure assessment should be minimized, and confounding should be controlled for in stratified or multiple regression analyses. Optimal sample sizes depend on the statistical test for gene-to-environment or gene-to-gene interaction. The design issues addressed in this mini-review are crucial in conducting toxicogenomics studies. In addition, an integrative approach combining exposure assessment, epidemiology, and clinical trials is required.
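    The slide-number determination mentioned above can be sketched with the standard two-group sample-size formula for a normally distributed log2 expression change. The effect size, variance, per-gene false-positive rate, and power below are illustrative choices, not values from the review:

```python
import math
from statistics import NormalDist

def arrays_per_group(delta, sigma, alpha, power):
    """Arrays per group needed to detect a mean log2 fold change `delta`
    (sd `sigma`) with two-sided per-gene false-positive rate `alpha`
    and the given power, using the standard normal-approximation
    two-sample formula: n = 2 * (z_{1-a/2} + z_{power})^2 * s^2 / d^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Detect a 2-fold change (delta = 1 on the log2 scale) with sd 0.5 and a
# stringent alpha to compensate for testing thousands of genes at once.
n = arrays_per_group(delta=1.0, sigma=0.5, alpha=0.001, power=0.9)
print(n)  # arrays per group under these assumptions
```

    Tightening alpha (to control the genome-wide false-positive count) or shrinking the detectable fold change both inflate the required number of slides, which is exactly the trade-off the review's design discussion addresses.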

  18. Quantitative inference of dynamic regulatory pathways via microarray data

    Directory of Open Access Journals (Sweden)

    Chen Bor-Sen

    2005-03-01

    Full Text Available Abstract Background The cellular signaling pathway (network is one of the main topics of organismic investigations. The intracellular interactions between genes in a signaling pathway are considered as the foundation of functional genomics. Thus, what genes and how much they influence each other through transcriptional binding or physical interactions are essential problems. Under the synchronous measures of gene expression via a microarray chip, an amount of dynamic information is embedded and remains to be discovered. Using a systematically dynamic modeling approach, we explore the causal relationship among genes in cellular signaling pathways from the system biology approach. Results In this study, a second-order dynamic model is developed to describe the regulatory mechanism of a target gene from the upstream causality point of view. From the expression profile and dynamic model of a target gene, we can estimate its upstream regulatory function. According to this upstream regulatory function, we would deduce the upstream regulatory genes with their regulatory abilities and activation delays, and then link up a regulatory pathway. Iteratively, these regulatory genes are considered as target genes to trace back their upstream regulatory genes. Then we could construct the regulatory pathway (or network to the genome wide. In short, we can infer the genetic regulatory pathways from gene-expression profiles quantitatively, which can confirm some doubted paths or seek some unknown paths in a regulatory pathway (network. Finally, the proposed approach is validated by randomly reshuffling the time order of microarray data. Conclusion We focus our algorithm on the inference of regulatory abilities of the identified causal genes, and how much delay before they regulate the downstream genes. With this information, a regulatory pathway would be built up using microarray data. In the present study, two signaling pathways, i.e. circadian regulatory
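    The model class described above treats a target gene's expression profile x(t) as the response of a second-order system driven by an upstream regulatory input u(t). A minimal numerical sketch of that idea, with a synthetic profile and invented damping/frequency parameters (not the paper's estimation procedure, which fits the model to real microarray time series):

```python
import numpy as np

# Sketch of a second-order regulatory model: the target profile obeys
#   x'' + 2*zeta*w*x' + w**2 * x = u(t),
# where u(t) aggregates upstream regulatory influence. Given a sampled
# profile, finite differences recover u(t). The profile x(t) = sin(t)
# and the parameters (zeta, w) are synthetic/illustrative.

zeta, w = 0.4, 2.0
dt = 1e-3
t = np.arange(0.0, 6.0 + dt, dt)
x = np.sin(t)                                   # synthetic expression profile

# Analytic upstream input implied by x(t) = sin(t):
u_true = -np.sin(t) + 2 * zeta * w * np.cos(t) + w**2 * np.sin(t)

# Finite-difference recovery from the sampled profile alone.
x1 = np.gradient(x, dt)                          # approximate x'
x2 = np.gradient(x1, dt)                         # approximate x''
u_est = x2 + 2 * zeta * w * x1 + w**2 * x

interior = slice(2, -2)                          # edge stencils are less accurate
err = np.max(np.abs(u_est[interior] - u_true[interior]))
print(f"max interior recovery error: {err:.2e}")
```

    In the paper's setting the recovered upstream function is then decomposed over candidate regulator profiles to estimate regulatory abilities and activation delays; the sketch only shows the forward/inverse relationship the decomposition rests on.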

  19. Integration of microarray analysis into the clinical diagnosis of hematological malignancies: How much can we improve cytogenetic testing?

    Science.gov (United States)

    Peterson, Jess F.; Aggarwal, Nidhi; Smith, Clayton A.; Gollin, Susanne M.; Surti, Urvashi; Rajkovic, Aleksandar; Swerdlow, Steven H.; Yatsenko, Svetlana A.

    2015-01-01

    Purpose To evaluate the clinical utility, diagnostic yield and rationale of integrating microarray analysis in the clinical diagnosis of hematological malignancies in comparison with classical chromosome karyotyping/fluorescence in situ hybridization (FISH). Methods G-banded chromosome analysis, FISH and microarray studies using customized CGH and CGH+SNP designs were performed on 27 samples from patients with hematological malignancies. A comprehensive comparison of the results obtained by three methods was conducted to evaluate benefits and limitations of these techniques for clinical diagnosis. Results Overall, 89.7% of chromosomal abnormalities identified by karyotyping/FISH studies were also detectable by microarray. Among 183 acquired copy number alterations (CNAs) identified by microarray, 94 were additional findings revealed in 14 cases (52%), and at least 30% of CNAs were in genomic regions of diagnostic/prognostic significance. Approximately 30% of novel alterations detected by microarray were >20 Mb in size. Balanced abnormalities were not detected by microarray; however, of the 19 apparently “balanced” rearrangements, 55% (6/11) of recurrent and 13% (1/8) of non-recurrent translocations had alterations at the breakpoints discovered by microarray. Conclusion Microarray technology enables accurate, cost-effective and time-efficient whole-genome analysis at a resolution significantly higher than that of conventional karyotyping and FISH. Array-CGH showed advantage in identification of cryptic imbalances and detection of clonal aberrations in population of non-dividing cancer cells and samples with poor chromosome morphology. The integration of microarray analysis into the cytogenetic diagnosis of hematologic malignancies has the potential to improve patient management by providing clinicians with additional disease specific and potentially clinically actionable genomic alterations. PMID:26299921

  20. Creation of antifouling microarrays by photopolymerization of zwitterionic compounds for protein assay and cell patterning.

    Science.gov (United States)

    Sun, Xiuhua; Wang, Huaixin; Wang, Yuanyuan; Gui, Taijiang; Wang, Ke; Gao, Changlu

    2018-04-15

    Nonspecific binding or adsorption of biomolecules presents a major obstacle to higher sensitivity, specificity and reproducibility in microarray technology. We report herein a method to fabricate antifouling microarrays via photopolymerization of biomimetic betaine compounds. In brief, carboxybetaine methacrylate was polymerized as arrays for protein sensing, while sulfobetaine methacrylate was polymerized as background. With the abundant carboxyl groups on array surfaces and zwitterionic polymers on the entire surfaces, this microarray allows biomolecular immobilization and recognition with low nonspecific interactions due to its antifouling property. Therefore, low concentrations of target molecules can be captured and detected by this microarray. It was shown that a concentration of 10 ng/mL bovine serum albumin in the sample matrix of bovine serum can be detected by the microarray derivatized with anti-bovine serum albumin. Moreover, with proper hydrophilic-hydrophobic designs, this approach can be applied to fabricate surface-tension droplet arrays, which allow surface-directed cell adhesion and growth. These light-controllable approaches constitute a clear improvement in the design of antifouling interfaces, which may lead to greater flexibility in the development of interfacial architectures and wider application in blood contact microdevices. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Detection of selected plant viruses by microarrays

    OpenAIRE

    HRABÁKOVÁ, Lenka

    2013-01-01

    The main aim of this master's thesis was the simultaneous detection of four selected plant viruses: Apple mosaic virus, Plum pox virus, Prunus necrotic ringspot virus and Prune dwarf virus, by microarrays. The intermediate step in the detection process was the optimization of a multiplex polymerase chain reaction (PCR).

  2. LNA-modified isothermal oligonucleotide microarray for ...

    Indian Academy of Sciences (India)

    2014-10-20

    Oct 20, 2014 ... the advent of DNA microarray techniques (Lee et al. 2007). ... atoms of ribose to form a bicyclic ribosyl structure. It is the .... 532 nm and emission at 570 nm. The signal ..... sis and validation using real-time PCR. Nucleic Acids ...

  3. Gene Expression Analysis Using Agilent DNA Microarrays

    DEFF Research Database (Denmark)

    Stangegaard, Michael

    2009-01-01

    Hybridization of labeled cDNA to microarrays is an intuitively simple and a vastly underestimated process. If it is not performed, optimized, and standardized with the same attention to detail as e.g., RNA amplification, information may be overlooked or even lost. Careful balancing of the amount ...

  4. Microarrays (DNA Chips) for the Classroom Laboratory

    Science.gov (United States)

    Barnard, Betsy; Sussman, Michael; BonDurant, Sandra Splinter; Nienhuis, James; Krysan, Patrick

    2006-01-01

    We have developed and optimized the necessary laboratory materials to make DNA microarray technology accessible to all high school students at a fraction of both cost and data size. The primary component is a DNA chip/array that students "print" by hand and then analyze using research tools that have been adapted for classroom use. The…

  5. Comparing transformation methods for DNA microarray data

    NARCIS (Netherlands)

    Thygesen, Helene H.; Zwinderman, Aeilko H.

    2004-01-01

    Background: When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification the signal intensities are usually transformed and normalized in several steps in order to improve comparability and signal/noise ratio. These steps may include

  6. Improved estimation of the noncentrality parameter distribution from a large number of t-statistics, with applications to false discovery rate estimation in microarray data analysis.

    Science.gov (United States)

    Qu, Long; Nettleton, Dan; Dekkers, Jack C M

    2012-12-01

    Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.
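    The plug-in FDR logic this abstract builds on can be sketched in a few lines. The example below simulates z-statistics (rather than noncentral t, to keep the sketch dependency-free), estimates the null proportion with a Storey-type tail estimator, and forms a plug-in FDR estimate. The function names and all simulation parameters are illustrative; this is a stand-in for, not a reproduction of, the authors' semiparametric estimator.

    ```python
    import math, random

    random.seed(0)
    # Simulate 10000 z-statistics: 80% null (NCP = 0), 20% shifted by NCP = 3.
    zs = [random.gauss(0.0, 1.0) + (0.0 if random.random() < 0.8 else 3.0)
          for _ in range(10000)]
    # Two-sided p-value of a standard-normal statistic.
    pvals = [math.erfc(abs(z) / math.sqrt(2)) for z in zs]

    def storey_pi0(p, lam=0.5):
        """Estimate the null proportion pi0 from the flat tail of the p-values."""
        return min(1.0, sum(v > lam for v in p) / len(p) / (1 - lam))

    def fdr_at(p, thresh, pi0):
        """Plug-in FDR: expected null rejections over observed rejections."""
        r = max(1, sum(v <= thresh for v in p))
        return pi0 * len(p) * thresh / r

    pi0 = storey_pi0(pvals)
    print(round(pi0, 2), round(fdr_at(pvals, 0.05, pi0), 3))
    ```

    With the simulated 80/20 mixture, the tail estimator recovers a null proportion near 0.8, and the plug-in FDR at the 0.05 cutoff is well below that cutoff's naive interpretation.
    
    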

  7. Facilitating functional annotation of chicken microarray data

    Directory of Open Access Journals (Sweden)

    Gresham Cathy R

    2009-10-01

    Full Text Available Abstract Background Modeling results from chicken microarray studies is challenging for researchers due to the limited functional annotation associated with these arrays. The Affymetrix GeneChip chicken genome array, one of the largest arrays serving as a key research tool for the study of chicken functional genomics, is among the few arrays that link gene products to Gene Ontology (GO). However, the GO annotation data presented by Affymetrix are incomplete; for example, they do not show references linked to manually annotated functions. In addition, there is no tool that allows microarray researchers to directly retrieve functional annotations for their datasets from the annotated arrays. This costs researchers a significant amount of time searching multiple GO databases for functional information. Results We have improved the breadth of functional annotations of the gene products associated with probesets on the Affymetrix chicken genome array by 45% and the quality of annotation by 14%. We have also identified the most significant diseases and disorders, different types of genes, and known drug targets represented on the Affymetrix chicken genome array. To facilitate functional annotation of other arrays and microarray experimental datasets we developed an Array GO Mapper (AGOM) tool to help researchers quickly retrieve corresponding functional information for their datasets. Conclusion Results from this study will directly facilitate annotation of other chicken arrays and microarray experimental datasets. Researchers will be able to quickly model their microarray datasets into more reliable biological functional information by using the AGOM tool. The diseases, disorders, gene types and drug targets revealed in the study will allow researchers to learn more about how genes function in complex biological systems and may lead to new drug discovery and development of therapies. 
The GO annotation data generated will be available for public use via AgBase website and

  8. On Covering Approximation Subspaces

    Directory of Open Access Journals (Sweden)

    Xun Ge

    2009-06-01

    Full Text Available Let (U′;C′) be a subspace of a covering approximation space (U;C) and X ⊂ U′. In this paper, we show that … and B′(X) ⊂ B(X) ∩ U′. Also, … iff (U;C) has Property Multiplication. Furthermore, some connections between outer (resp. inner) definable subsets in (U;C) and outer (resp. inner) definable subsets in (U′;C′) are established. These results answer a question on covering approximation subspaces posed by J. Li, and are helpful for obtaining further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.

  9. Genome-scale cluster analysis of replicated microarrays using shrinkage correlation coefficient.

    Science.gov (United States)

    Yao, Jianchao; Chang, Chunqi; Salmi, Mari L; Hung, Yeung Sam; Loraine, Ann; Roux, Stanley J

    2008-06-18

    …correlation coefficient and the SD-weighted correlation coefficient, and is particularly useful for clustering replicated microarray data. This computational approach should be generally useful for proteomic data or other high-throughput analysis methodologies.

  10. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua

    2004-01-01

    Full Text Available This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measurement for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of the cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.
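    The Mann-Whitney segmentation idea can be illustrated with a toy version (a sketch in the spirit of, not the actual, BASICA algorithm): test whether the brightest pixels of a spot patch significantly exceed the dimmest, and threshold between the two groups if so. The U test below uses a plain normal approximation with midranks for ties; all names and parameters are ours.

    ```python
    import math

    def midranks(values):
        """1-based ranks with tied values replaced by their average rank."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    def mann_whitney_p_greater(x, y):
        """One-sided Mann-Whitney U test (normal approximation):
        small p means x tends to exceed y."""
        n1, n2 = len(x), len(y)
        ranks = midranks(list(x) + list(y))
        u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2
        mu = n1 * n2 / 2
        sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
        return 0.5 * math.erfc((u1 - mu) / (sd * math.sqrt(2)))

    def segment_spot(pixels, frac=0.25, alpha=0.01):
        """Compare brightest vs dimmest pixels; if significantly different,
        mark everything above the midpoint threshold as foreground."""
        flat = sorted(pixels)
        n = max(8, int(len(flat) * frac))
        background, candidates = flat[:n], flat[-n:]
        if mann_whitney_p_greater(candidates, background) < alpha:
            threshold = (background[-1] + candidates[0]) / 2
            return [p > threshold for p in pixels]
        return [False] * len(pixels)

    # 256-pixel patch: background at 100 with a 36-pixel spot at 600.
    pixels = [100.0] * 220 + [600.0] * 36
    mask = segment_spot(pixels)
    print(sum(mask))  # -> 36
    ```

    On a patch with no spot, the test fails to reach significance and the mask stays empty, which is the behavior a spot-present/spot-absent classifier needs.
    
    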

  11. On Convex Quadratic Approximation

    NARCIS (Netherlands)

    den Hertog, D.; de Klerk, E.; Roos, J.

    2000-01-01

    In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of

  12. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-01-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations free of such singularities that remain highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
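    The advantage of rational (Padé) forms over truncated series that the abstract exploits can be seen in a one-variable scalar analogue. The sketch below compares a second-order Taylor expansion of sqrt(1 + x) with its [1/1] Padé approximant far from the expansion point; it is a generic illustration of the principle, not the DSR operator itself.

    ```python
    import math

    def sqrt_taylor2(x):
        """Second-order Taylor expansion of sqrt(1 + x) about x = 0."""
        return 1 + x / 2 - x ** 2 / 8

    def sqrt_pade11(x):
        """[1/1] Pade approximant of sqrt(1 + x): matches the same Taylor
        terms, but its denominator tames the error away from x = 0."""
        return (1 + 0.75 * x) / (1 + 0.25 * x)

    x = 0.8  # far from the expansion point, where the truncated series degrades
    exact = math.sqrt(1 + x)
    print(abs(sqrt_taylor2(x) - exact), abs(sqrt_pade11(x) - exact))
    ```

    Both forms agree with sqrt(1 + x) through the x² term, yet at x = 0.8 the Padé error is several times smaller than the Taylor error, which is why Padé-type forms are attractive for wide-angle (near-singular) propagation operators.
    
    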

  13. Approximating The DCM

    DEFF Research Database (Denmark)

    Madsen, Rasmus Elsborg

    2005-01-01

    The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling for word burstiness in documents, is here investigated. A number of conceptual explanations that account for these recent results, are provided. An exponential family approximation of the DCM...

  14. Approximation by Cylinder Surfaces

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1997-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  15. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-09-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations free of such singularities that remain highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  16. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  17. Analysis of corrections to the eikonal approximation

    Science.gov (United States)

    Hebborn, C.; Capel, P.

    2017-11-01

    Various corrections to the eikonal approximation are studied for two- and three-body nuclear collisions with the goal of extending the range of validity of this approximation down to beam energies of 10 MeV/nucleon. Wallace's correction does not significantly improve the elastic-scattering cross sections obtained with the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes for the impact parameter a complex distance of closest approach computed with the projectile-target optical potential efficiently corrects the eikonal approximation. This opens the possibility of analyzing data measured down to 10 MeV/nucleon within eikonal-like reaction models.

  18. Approximation properties of haplotype tagging

    Directory of Open Access Journals (Sweden)

    Dreiseitl Stephan

    2006-01-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n² − n)/2) for n haplotypes, and not approximable within (1 − ε) ln(n/2) for any ε > 0 unless NP ⊂ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m − p + 1)(n² − n)/2) ≤ O(m(n² − n)), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in the computational effort expended can only be expected if the computational effort is distributed and done in parallel.
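    The "simple, very easily implementable algorithm" described above is a greedy set-cover-style selection: repeatedly pick the SNP that distinguishes the most still-indistinguishable haplotype pairs. A minimal sketch (the data and variable names are illustrative):

    ```python
    from itertools import combinations

    def greedy_tag_snps(haplotypes):
        """Greedy haplotype tagging: treat each unordered pair of distinct
        haplotypes as an element to cover, and each SNP as the set of pairs
        it splits; pick the SNP covering the most uncovered pairs."""
        n, m = len(haplotypes), len(haplotypes[0])
        uncovered = {(i, j) for i, j in combinations(range(n), 2)
                     if haplotypes[i] != haplotypes[j]}
        chosen = []
        while uncovered:
            # Every remaining pair differs somewhere, so the best SNP
            # always separates at least one pair and the loop terminates.
            best = max(range(m), key=lambda s: sum(
                haplotypes[i][s] != haplotypes[j][s] for i, j in uncovered))
            chosen.append(best)
            uncovered = {(i, j) for i, j in uncovered
                         if haplotypes[i][best] == haplotypes[j][best]}
        return chosen

    # Four haplotypes over five SNPs; SNPs 0 and 1 together tag all pairs.
    haps = ["00101", "01100", "10011", "11010"]
    tags = greedy_tag_snps(haps)
    print(sorted(tags))  # -> [0, 1]
    ```

    This greedy rule is exactly the classical set-cover heuristic, which is what yields the logarithmic approximation guarantee stated in the abstract.
    
    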

  19. Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.

    Science.gov (United States)

    Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben

    2017-06-06

    Modeling complex time-course patterns is a challenging issue in microarray studies due to the complex gene expression patterns elicited by time-course experiments. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis makes it a useful and efficient tool for analyzing microarray time-course data and for exploring complex relationships in omics data when studying their association with disease and health.
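    One simple nonparametric member of the family of correlation measures the abstract refers to is the Spearman rank correlation, which scores monotone (possibly nonlinear) time-course trends without assuming a parametric curve. A minimal, tie-free sketch (the data are illustrative):

    ```python
    def ranks(xs):
        """0-based ranks of xs (assumes no ties, which holds for this sketch)."""
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    def spearman(x, y):
        """Spearman rank correlation: Pearson correlation of the rank vectors."""
        rx, ry = ranks(x), ranks(y)
        n = len(x)
        m = (n - 1) / 2  # mean of the ranks 0 .. n-1
        num = sum((a - m) * (b - m) for a, b in zip(rx, ry))
        den = (sum((a - m) ** 2 for a in rx)
               * sum((b - m) ** 2 for b in ry)) ** 0.5
        return num / den

    time = [0, 2, 4, 8, 16, 24]                 # sampling times (hours)
    up = [0.1, 0.5, 0.9, 1.7, 2.2, 3.0]         # monotone, nonlinear rise
    flat = [1.0, 0.9, 1.1, 1.02, 0.95, 1.05]    # no consistent trend
    print(round(spearman(time, up), 2), round(spearman(time, flat), 2))
    ```

    The monotone profile scores a rank correlation of 1 even though its shape is nonlinear, while the trendless profile scores near zero, which is the behavior that makes rank-based measures attractive for heterogeneous time-course patterns.
    
    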

  20. Use of the cDNA microarray technology in the safety assessment of GM food plants

    DEFF Research Database (Denmark)

    Pedersen, Jan W.; Knudsen, Ib; Eriksen, Folmer Damsted

    This report focuses on new analytical approaches that might give more insight into possible changes in a genetically modified plant. Primarily the focus is on the new DNA microarray technique but also proteomics and metabolomics are discussed.The report describes the new techniques and evaluates ...

  1. Predicting incomplete gene microarray data with the use of supervised learning algorithms

    CSIR Research Space (South Africa)

    Twala, B

    2010-10-01

    Full Text Available that prediction using supervised learning can be improved in probabilistic terms given incomplete microarray data. This imputation approach is based on the a priori probability of each value determined from the instances at that node of a decision tree (PDT...

  2. Use of the cDNA microarray technology in the safety assessment of GM food plants

    NARCIS (Netherlands)

    Kok, E.J.; Kleter, G.A.; Dijk, van J.P.

    2003-01-01

    This report focuses on new analytical approaches that might give more insight into possible changes in a genetically modified plant. Primarily the focus is on the new DNA microarray technique but also proteomics and metabolomics are discussed.The report describes the new techniques and evaluates the

  3. Functional microarray analysis of nitrogen and carbon cycling genes across an Antarctic latitudinal transect.

    NARCIS (Netherlands)

    Yergeau, E.; Kang, S.; He, Z.; Zhou, J.; Kowalchuk, G.A.

    2007-01-01

    Soil-borne microbial communities were examined via a functional gene microarray approach across a southern polar latitudinal gradient to gain insight into the environmental factors steering soil N- and C-cycling in terrestrial Antarctic ecosystems. The abundance and diversity of functional gene

  4. An efficient algorithm for the stochastic simulation of the hybridization of DNA to microarrays

    Directory of Open Access Journals (Sweden)

    Laurenzi Ian J

    2009-12-01

    Full Text Available Abstract Background Although oligonucleotide microarray technology is ubiquitous in genomic research, reproducibility and standardization of expression measurements still concern many researchers. Cross-hybridization between microarray probes and non-target ssDNA has been implicated as a primary factor in sensitivity and selectivity loss. Since hybridization is a chemical process, it may be modeled at the population level using a combination of material balance equations and thermodynamics. However, the hybridization reaction network may be exceptionally large for commercial arrays, which often possess at least one reporter per transcript. Quantification of the kinetics and equilibrium of exceptionally large chemical systems of this type is numerically infeasible with customary approaches. Results In this paper, we present a robust and computationally efficient algorithm for the simulation of hybridization processes underlying microarray assays. Our method may be utilized to identify the extent to which nucleic acid targets (e.g. cDNA) will cross-hybridize with probes, and by extension, characterize probe robustness using the information specified by MAGE-TAB. Using this algorithm, we characterize cross-hybridization in a modified commercial microarray assay. Conclusions By integrating stochastic simulation with thermodynamic prediction tools for DNA hybridization, one may robustly and rapidly characterize the selectivity of a proposed microarray design at the probe and "system" levels. Our code is available at http://www.laurenzi.net.
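    The stochastic simulation machinery the paper builds on is the Gillespie algorithm. The sketch below applies it to a single reversible hybridization reaction T + P <-> TP; the reaction networks described in the abstract couple many such reactions, and all rate constants and molecule counts here are illustrative.

    ```python
    import math, random

    def gillespie_hybridization(n_target=300, n_probe=300, k_on=1e-3, k_off=0.1,
                                t_end=50.0, seed=1):
        """Gillespie SSA for one reversible reaction T + P <-> TP, a toy
        stand-in for a full microarray cross-hybridization network."""
        random.seed(seed)
        t, duplex = 0.0, 0
        while t < t_end:
            a_on = k_on * n_target * n_probe   # propensity of duplex formation
            a_off = k_off * duplex             # propensity of duplex melting
            a_tot = a_on + a_off
            if a_tot == 0.0:
                break
            # Exponentially distributed waiting time to the next reaction.
            t += -math.log(1.0 - random.random()) / a_tot
            # Choose which reaction fires, proportionally to its propensity.
            if random.random() * a_tot < a_on:
                n_target -= 1; n_probe -= 1; duplex += 1
            else:
                n_target += 1; n_probe += 1; duplex -= 1
        return duplex

    print(gillespie_hybridization())
    ```

    With these rates the simulated duplex count fluctuates around the deterministic equilibrium (about 170 duplexes, where k_on·T·P = k_off·TP), illustrating how an SSA recovers the thermodynamic endpoint while also exposing kinetics and fluctuations.
    
    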

  5. MiMiR: a comprehensive solution for storage, annotation and exchange of microarray data

    Directory of Open Access Journals (Sweden)

    Rahman Fatimah

    2005-11-01

    Full Text Available Abstract Background The generation of large amounts of microarray data presents challenges for data collection, annotation, exchange and analysis. Although there are now widely accepted formats, minimum standards for data content and ontologies for microarray data, only a few groups are using them together to build and populate large-scale databases. Structured environments for data management are crucial for making full use of these data. Description The MiMiR database provides a comprehensive infrastructure for microarray data annotation, storage and exchange and is based on the MAGE format. MiMiR is MIAME-supportive, customised for use with data generated on the Affymetrix platform and includes a tool for data annotation using ontologies. Detailed information on the experiment, methods, reagents and signal intensity data can be captured in a systematic format. Report screens permit the user to query the database, view annotation on individual experiments and obtain summary statistics. MiMiR has tools for automatic upload of the data from the microarray scanner and export to databases using MAGE-ML. Conclusion MiMiR facilitates microarray data management, annotation and exchange, in line with international guidelines. The database is valuable for underpinning research activities and promotes a systematic approach to data handling. Copies of MiMiR are freely available to academic groups under licence.

  6. Improved microarray-based decision support with graph encoded interactome data.

    Directory of Open Access Journals (Sweden)

    Anneleen Daemen

    Full Text Available In the past, microarray studies have been criticized due to noise and the limited overlap between gene signatures. Prior biological knowledge should therefore be incorporated as side information in models based on gene expression data to improve the accuracy of diagnosis and prognosis in cancer. As prior knowledge, we investigated interaction and pathway information from the human interactome covering different aspects of biological systems. By exploiting the properties of kernel methods, relations between genes with similar functions but active in alternative pathways could be incorporated in a support vector machine classifier based on spectral graph theory. Using 10 microarray data sets, we first reduced the number of data sources relevant for multiple cancer types and outcomes. Three sources on metabolic pathway information (KEGG), protein-protein interactions (OPHID) and miRNA-gene targeting (microRNA.org) outperformed the other sources with regard to the considered class of models. Both fixed and adaptive approaches were subsequently considered to combine the three corresponding classifiers. Averaging the predictions of these classifiers performed best and was significantly better than the model based on microarray data only. These results were confirmed on 6 validation microarray sets, with a significantly improved performance in 4 of them. Integrating interactome data thus improves classification of cancer outcome for the investigated microarray technologies and cancer types. Moreover, this strategy can be incorporated in any kernel method or non-linear version of a non-kernel method.
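    The winning strategy reported above, averaging the predictions of per-source classifiers, can be sketched with a toy stand-in. Below, three noisy feature "views" of the same samples play the role of the KEGG, OPHID and microRNA.org derived data sources, and a simple nearest-centroid scorer replaces the kernel classifier; everything here is illustrative rather than the authors' pipeline.

    ```python
    import random

    random.seed(3)

    def centroid_score(xtr, ytr, xte):
        """Signed nearest-centroid score per test sample: positive predicts class 1."""
        d = len(xtr[0])
        cents = {0: [0.0] * d, 1: [0.0] * d}
        counts = {0: 0, 1: 0}
        for x, y in zip(xtr, ytr):
            counts[y] += 1
            for j in range(d):
                cents[y][j] += x[j]
        for y in (0, 1):
            cents[y] = [v / counts[y] for v in cents[y]]

        def score(x):
            d0 = sum((a - b) ** 2 for a, b in zip(x, cents[0]))
            d1 = sum((a - b) ** 2 for a, b in zip(x, cents[1]))
            return d0 - d1  # closer to centroid 1 -> positive

        return [score(x) for x in xte]

    # Three noisy "views" of 200 samples sharing one class signal.
    y = [random.randint(0, 1) for _ in range(200)]
    views = [[[random.gauss(0.4 * yi, 1.0) for _ in range(30)] for yi in y]
             for _ in range(3)]

    ytr, yte = y[:150], y[150:]
    scores = [centroid_score(v[:150], ytr, v[150:]) for v in views]
    avg = [sum(s) / len(scores) for s in zip(*scores)]  # average the predictions
    acc = sum((a > 0) == (t == 1) for a, t in zip(avg, yte)) / len(yte)
    print(round(acc, 2))
    ```

    Because the noise in the three views is independent while the class signal is shared, averaging the decision values boosts the effective signal-to-noise ratio, which is the intuition behind the paper's fixed-combination result.
    
    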

  7. Some properties of dual and approximate dual of fusion frames

    OpenAIRE

    Arefijamaal, Ali Akbar; Neyshaburi, Fahimeh Arabyani

    2016-01-01

    In this paper we extend the notion of approximate dual to fusion frames and present some approaches to obtain dual and approximate alternate dual fusion frames. Also, we study the stability of dual and approximate alternate dual fusion frames.

  8. Distributed Approximating Functional Approach to Burgers' Equation ...

    African Journals Online (AJOL)

    This equation is similar to, but simpler than, the Navier-Stokes equation in fluid dynamics. To verify this advantage through some comparison studies, an exact series solution are also obtained. In addition, the presented scheme has numerically stable behavior. After demonstrating the convergence and accuracy of the ...

  9. Derivation of the RPA (Random Phase Approximation) Equation of ATDDFT (Adiabatic Time Dependent Density Functional Ground State Response Theory) from an Excited State Variational Approach Based on the Ground State Functional.

    Science.gov (United States)

    Ziegler, Tom; Krykunov, Mykhaylo; Autschbach, Jochen

    2014-09-09

    The random phase approximation (RPA) equation of adiabatic time dependent density functional ground state response theory (ATDDFT) has been used extensively in studies of excited states. It extracts information about excited states from frequency dependent ground state response properties and thus elegantly avoids direct Kohn-Sham calculations on excited states, in accordance with the status of DFT as a ground state theory. Excitation energies can thus be found as resonance poles of the frequency dependent ground state polarizability, from the eigenvalues of the RPA equation. ATDDFT is approximate in that it makes use of a frequency independent energy kernel derived from the ground state functional. It is shown in this study that one can derive the RPA equation of ATDDFT from a purely variational approach in which stationary states above the ground state are located using our constricted variational DFT (CV-DFT) method and the ground state functional. Thus, locating stationary states above the ground state due to one-electron excitations with a ground state functional is completely equivalent to solving the RPA equation of TDDFT employing the same functional. The present study is an extension of a previous work in which we demonstrated the equivalence between ATDDFT and CV-DFT within the Tamm-Dancoff approximation.

  10. Topology, calculus and approximation

    CERN Document Server

    Komornik, Vilmos

    2017-01-01

    Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...

  11. The Use of Atomic Force Microscopy for 3D Analysis of Nucleic Acid Hybridization on Microarrays.

    Science.gov (United States)

    Dubrovin, E V; Presnova, G V; Rubtsova, M Yu; Egorov, A M; Grigorenko, V G; Yaminsky, I V

    2015-01-01

    Oligonucleotide microarrays are considered today to be one of the most efficient methods of gene diagnostics. The capability of atomic force microscopy (AFM) to characterize the three-dimensional morphology of single molecules on a surface makes it an effective tool for the 3D analysis of microarrays for the detection of nucleic acids. The high resolution of AFM offers ways to decrease the detection threshold of target DNA and increase the signal-to-noise ratio. In this work, we suggest an approach to the evaluation of the results of hybridization of gold nanoparticle-labeled nucleic acids on silicon microarrays, based on an AFM analysis of the surface both in air and in liquid, that takes their three-dimensional structure into account. We suggest a quantitative measure of the hybridization results based on the fraction of the surface area occupied by the nanoparticles.
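    The proposed quantitative measure, the fraction of surface area occupied by nanoparticles, reduces in the simplest case to thresholding a height map above the substrate level. A toy sketch (the threshold and geometry are ours, not the paper's workflow):

    ```python
    def particle_coverage(height_map, floor=0.5):
        """Fraction of pixels more than `floor` nm above the median substrate
        height: a toy stand-in for AFM nanoparticle-coverage analysis."""
        flat = sorted(h for row in height_map for h in row)
        substrate = flat[len(flat) // 2]  # median height = substrate level
        above = sum(h > substrate + floor for row in height_map for h in row)
        return above / len(flat)

    # Synthetic 100x100 AFM frame: flat substrate plus two 10x10 "nanoparticles".
    img = [[0.0] * 100 for _ in range(100)]
    for r in range(10, 20):
        for c in range(10, 20):
            img[r][c] = 20.0   # ~20 nm tall gold nanoparticle
    for r in range(60, 70):
        for c in range(40, 50):
            img[r][c] = 20.0
    print(particle_coverage(img))  # -> 0.02
    ```

    Using the median as the substrate reference keeps the estimate robust as long as particles cover well under half the frame, which matches the sparse-label regime the abstract describes.
    
    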

  12. Detecting Outlier Microarray Arrays by Correlation and Percentage of Outliers Spots

    Directory of Open Access Journals (Sweden)

    Song Yang

    2006-01-01

    Full Text Available We developed a quality assurance (QA) tool, namely the microarray outlier filter (MOF), and have applied it to our microarray datasets for the identification of problematic arrays. Our approach is based on the comparison of the arrays using the correlation coefficient and the number of outlier spots generated on each array to reveal outlier arrays. For a human universal reference (HUR) dataset, which is used as a technical control in our standard hybridization procedure, 3 outlier arrays were identified out of 35 experiments. For a human blood dataset, 12 outlier arrays were identified from 185 experiments. In general, arrays from human blood samples displayed greater variation in their gene expression profiles than arrays from HUR samples. As a result, MOF identified two distinct patterns in the occurrence of outlier arrays. These results demonstrate that this methodology is a valuable QA practice for identifying questionable microarray data prior to downstream analysis.
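    The correlation half of the MOF screen can be sketched directly: compute each array's mean correlation with all other arrays and flag arrays falling below a cutoff. The cutoff and data below are illustrative, and the published tool additionally counts outlier spots, which this sketch omits.

    ```python
    import random

    def pearson(x, y):
        """Pearson correlation coefficient of two equal-length profiles."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x)
               * sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den

    def flag_outlier_arrays(arrays, r_cut=0.5):
        """Flag arrays whose mean correlation with all other arrays < r_cut."""
        flagged = []
        for i, a in enumerate(arrays):
            rs = [pearson(a, b) for j, b in enumerate(arrays) if j != i]
            if sum(rs) / len(rs) < r_cut:
                flagged.append(i)
        return flagged

    random.seed(7)
    profile = [random.gauss(0, 1) for _ in range(2000)]   # shared expression profile
    good = [[v + random.gauss(0, 0.2) for v in profile] for _ in range(5)]
    bad = [[random.gauss(0, 1) for _ in range(2000)]]     # unrelated array: outlier
    print(flag_outlier_arrays(good + bad))  # -> [5]
    ```

    Replicate arrays correlate near 1 with each other, so an unrelated array stands out with a mean correlation near 0; the cutoff trades sensitivity against false alarms exactly as in the abstract's QA setting.
    
    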

  13. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  14. Approximating Preemptive Stochastic Scheduling

    OpenAIRE

    Megow Nicole; Vredeveld Tjark

    2009-01-01

    We present approximation policies with constant performance guarantees for preemptive stochastic scheduling. We derive policies with a guaranteed performance ratio of 2 for scheduling jobs with release dates on identical parallel machines subject to minimizing the sum of weighted completion times. Our policies as well as their analysis apply also to the recently introduced more general model of stochastic online scheduling. The performance guarantee we give matches the best result known for the corresponding determinist...

  15. Optimization and approximation

    CERN Document Server

    Pedregal, Pablo

    2017-01-01

    This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.

  16. Hydrogen: Beyond the Classic Approximation

    International Nuclear Information System (INIS)

    Scivetti, Ivan

    2003-01-01

    The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are the systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position

  17. Reverse phase protein microarray technology in traumatic brain injury.

    Science.gov (United States)

    Gyorgy, Andrea B; Walker, John; Wingo, Dan; Eidelman, Ofer; Pollard, Harvey B; Molnar, Andras; Agoston, Denes V

    2010-09-30

    Antibody-based, high-throughput proteomics technology represents an exciting new approach to understanding the pathobiologies of complex disorders such as cancer, stroke and traumatic brain injury. Reverse phase protein microarray (RPPA) can complement the classical methods based on mass spectrometry as a high-throughput validation and quantification method. RPPA technology can address problematic issues, such as sample complexity, sensitivity, quantification, reproducibility and throughput, which are currently associated with mass spectrometry-based approaches. However, there are technical challenges, predominantly associated with the selection and use of antibodies, the preparation and representation of samples, and with analyzing and quantifying primary RPPA data. Here we present ways to identify and overcome some of the current issues associated with RPPA. We believe that, by using stringent quality controls and improved bioinformatics analysis and interpretation of primary RPPA data, this method will contribute significantly to generating a new level of understanding of complex disorders at the level of systems biology. Published by Elsevier B.V.

  18. DNA Microarrays in Comparative Genomics and Transcriptomics

    DEFF Research Database (Denmark)

    Willenbrock, Hanni

    2007-01-01

    During the past few years, innovations in DNA sequencing technology have led to an explosion in available DNA sequence information. This has revolutionized biological research and promoted the development of high throughput analysis methods that can take advantage of the vast amount of sequence... data. For this, the DNA microarray technology has gained enormous popularity due to its ability to measure the presence or the activity of thousands of genes simultaneously. Microarrays for high throughput data analyses are not limited to a few organisms but may be applied to everything from bacteria... at identifying the exact breakpoints where DNA has been gained or lost. In this thesis, three popular methods are compared and a realistic simulation model is presented for generating artificial data with known breakpoints and known DNA copy number. By using simulated data, we obtain a realistic evaluation...

  19. Immobilization Techniques for Microarray: Challenges and Applications

    Directory of Open Access Journals (Sweden)

    Satish Balasaheb Nimse

    2014-11-01

    Full Text Available The highly programmable positioning of molecules (biomolecules, nanoparticles, nanobeads, nanocomposite materials) on surfaces has potential applications in the fields of biosensors, biomolecular electronics, and nanodevices. However, the conventional techniques, including self-assembled monolayers, fail to position the molecules on the nanometer scale to produce highly organized monolayers on the surface. The present article elaborates on different techniques for the immobilization of the biomolecules on the surface to produce microarrays and their diagnostic applications. The advantages and the drawbacks of various methods are compared. This article also sheds light on the applications of the different technologies for the detection and discrimination of viral/bacterial genotypes and the detection of biomarkers. A brief survey with 115 references covering the last 10 years on the biological applications of microarrays in various fields is also provided.

  20. Facilitating RNA structure prediction with microarrays.

    Science.gov (United States)

    Kierzek, Elzbieta; Kierzek, Ryszard; Turner, Douglas H; Catrina, Irina E

    2006-01-17

    Determining RNA secondary structure is important for understanding structure-function relationships and identifying potential drug targets. This paper reports the use of microarrays with heptamer 2'-O-methyl oligoribonucleotides to probe the secondary structure of an RNA and thereby improve the prediction of that secondary structure. When experimental constraints from hybridization results are added to a free-energy minimization algorithm, the prediction of the secondary structure of Escherichia coli 5S rRNA improves from 27 to 92% of the known canonical base pairs. Optimization of buffer conditions for hybridization and application of 2'-O-methyl-2-thiouridine to enhance binding and improve discrimination between AU and GU pairs are also described. The results suggest that probing RNA with oligonucleotide microarrays can facilitate determination of secondary structure.

  1. Plasmonically amplified fluorescence bioassay with microarray format

    Science.gov (United States)

    Gogalic, S.; Hageneder, S.; Ctortecka, C.; Bauch, M.; Khan, I.; Preininger, Claudia; Sauer, U.; Dostalek, J.

    2015-05-01

    Plasmonic amplification of fluorescence signal in bioassays with microarray detection format is reported. A crossed relief diffraction grating was designed to couple an excitation laser beam to surface plasmons at the wavelength overlapping with the absorption and emission bands of fluorophore Dy647 that was used as a label. The surface of periodically corrugated sensor chip was coated with surface plasmon-supporting gold layer and a thin SU8 polymer film carrying epoxy groups. These groups were employed for the covalent immobilization of capture antibodies at arrays of spots. The plasmonic amplification of fluorescence signal on the developed microarray chip was tested by using interleukin 8 sandwich immunoassay. The readout was performed ex situ after drying the chip by using a commercial scanner with high numerical aperture collecting lens. Obtained results reveal the enhancement of fluorescence signal by a factor of 5 when compared to a regular glass chip.

  2. Evaluation of artificial time series microarray data for dynamic gene regulatory network inference.

    Science.gov (United States)

    Xenitidis, P; Seimenis, I; Kakolyris, S; Adamopoulos, A

    2017-08-07

    High-throughput technology like microarrays is widely used in the inference of gene regulatory networks (GRNs). We focused on time series data since we are interested in the dynamics of GRNs and the identification of dynamic networks. We evaluated the amount of information that exists in artificial time series microarray data and the ability of an inference process to produce accurate models based on them. We used dynamic artificial gene regulatory networks in order to create artificial microarray data. Key features that characterize microarray data such as the time separation of directly triggered genes, the percentage of directly triggered genes and the triggering function type were altered in order to reveal the limits that are imposed by the nature of microarray data on the inference process. We examined the effect of various factors on the inference performance such as the network size, the presence of noise in microarray data, and the network sparseness. We used a system theory approach and examined the relationship between the pole placement of the inferred system and the inference performance. We examined the relationship between the inference performance in the time domain and the true system parameter identification. Simulation results indicated that time separation and the percentage of directly triggered genes are crucial factors. Also, network sparseness, the triggering function type and noise in input data affect the inference performance. When two factors were simultaneously varied, it was found that variation of one parameter significantly affects the dynamic response of the other. Crucial factors were also examined using a real GRN and acquired results confirmed simulation findings with artificial data. Different initial conditions were also used as an alternative triggering approach. Relevant results confirmed that the number of datasets constitutes the most significant parameter with regard to the inference performance. 
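
    The system-theory view described above (a linear dynamic model whose poles govern the response) can be illustrated with a minimal sketch: fit x_{t+1} = A x_t to a simulated expression time series by least squares and inspect the poles of the estimate. The network size, noise level and the sparse coupling matrix are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 4, 200
A_true = np.array([[0.9, 0.0, 0.0, 0.0],
                   [0.5, 0.8, 0.0, 0.0],
                   [0.0, 0.4, 0.7, 0.0],
                   [0.0, 0.0, 0.3, 0.6]])      # sparse "regulatory" couplings

X = np.zeros((T, n))
X[0] = rng.normal(size=n)
for t in range(T - 1):                          # simulate noisy expression dynamics
    X[t + 1] = A_true @ X[t] + 0.05 * rng.normal(size=n)

# Least-squares fit of x_{t+1} = A x_t, i.e. solve X[:-1] @ A.T = X[1:]
A_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

poles = np.linalg.eigvals(A_hat)                # poles of the inferred system
print(np.max(np.abs(poles)) < 1.0)              # inferred dynamics are stable
print(np.max(np.abs(A_hat - A_true)) < 0.2)     # parameters recovered closely
```

    With shorter series, fewer directly triggered genes or heavier noise, the same fit degrades and the poles drift, which is the kind of dependence the study quantifies.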

  3. Tissue Microarray Analysis Applied to Bone Diagenesis

    OpenAIRE

    Barrios Mello, Rafael; Regis Silva, Maria Regina; Seixas Alves, Maria Teresa; Evison, Martin; Guimarães, Marco Aurélio; Francisco, Rafaella Arrabaça; Dias Astolphi, Rafael; Miazato Iwamura, Edna Sadayo

    2017-01-01

    Taphonomic processes affecting bone post mortem are important in forensic, archaeological and palaeontological investigations. In this study, the application of tissue microarray (TMA) analysis to a sample of femoral bone specimens from 20 exhumed individuals of known period of burial and age at death is described. TMA allows multiplexing of subsamples, permitting standardized comparative analysis of adjacent sections in 3-D and of representative cross-sections of a large number of specimens....

  4. Reconstructing the temporal ordering of biological samples using microarray data.

    Science.gov (United States)

    Magwene, Paul M; Lizardi, Paul; Kim, Junhyong

    2003-05-01

    Accurate time series for biological processes are difficult to estimate due to problems of synchronization, temporal sampling and rate heterogeneity. Methods are needed that can utilize multi-dimensional data, such as those resulting from DNA microarray experiments, in order to reconstruct time series from unordered or poorly ordered sets of observations. We present a set of algorithms for estimating temporal orderings from unordered sets of sample elements. The techniques we describe are based on modifications of a minimum-spanning tree calculated from a weighted, undirected graph. We demonstrate the efficacy of our approach by applying these techniques to an artificial data set as well as several gene expression data sets derived from DNA microarray experiments. In addition to estimating orderings, the techniques we describe also provide useful heuristics for assessing relevant properties of sample datasets such as noise and sampling intensity, and we show how a data structure called a PQ-tree can be used to represent uncertainty in a reconstructed ordering. Academic implementations of the ordering algorithms are available as source code (in the programming language Python) on our web site, along with documentation on their use. The artificial 'jelly roll' data set upon which the algorithm was tested is also available from this web site. The publicly available gene expression data may be found at http://genome-www.stanford.edu/cellcycle/ and http://caulobacter.stanford.edu/CellCycle/.
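
    The minimum-spanning-tree idea can be sketched as follows: build a complete weighted graph over the unordered samples, extract its MST, and read the tree's longest path as the estimated ordering. This is a simplified reconstruction of the approach (the paper's algorithms add refinements such as PQ-tree representations of uncertainty); the toy trajectory data are invented.

```python
import numpy as np
from collections import deque

def mst_edges(D):
    """Prim's algorithm on a dense distance matrix; returns the tree edges."""
    n = len(D)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: D[e])
        in_tree.add(j)
        edges.append((i, j))
    return edges

def diameter_path(edges, n):
    """Longest path in the tree via two breadth-first searches."""
    adj = {k: [] for k in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)

    def bfs(start):
        parent, queue, last = {start: None}, deque([start]), start
        while queue:
            last = queue.popleft()
            for v in adj[last]:
                if v not in parent:
                    parent[v] = last
                    queue.append(v)
        return last, parent

    far, _ = bfs(0)                 # one end of the diameter
    other, parent = bfs(far)        # opposite end, with back-pointers
    path, v = [], other
    while v is not None:
        path.append(v)
        v = parent[v]
    return path

rng = np.random.default_rng(2)
t = rng.permutation(20)                                  # hidden temporal order
pts = np.c_[t, np.sin(t / 3)] + 0.05 * rng.normal(size=(20, 2))
D = np.linalg.norm(pts[:, None] - pts[None], axis=-1)    # pairwise distances
order = diameter_path(mst_edges(D), 20)
print([int(t[i]) for i in order])    # monotone: the hidden order, up to reversal
```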

  5. Biocompatible Hydrogels for Microarray Cell Printing and Encapsulation

    Directory of Open Access Journals (Sweden)

    Akshata Datar

    2015-10-01

    Full Text Available Conventional drug screening processes are a time-consuming and expensive endeavor, but highly rewarding when they are successful. To identify promising lead compounds, millions of compounds are traditionally screened against therapeutic targets on human cells grown on the surface of 96-wells. These two-dimensional (2D) cell monolayers are physiologically irrelevant and thus often provide false-positive or false-negative results when compared to cells grown in three-dimensional (3D) structures such as hydrogel droplets. However, 3D cell culture systems are not easily amenable to high-throughput screening (HTS), are inherently low throughput, and require relatively large volumes for cell-based assays. In addition, it is difficult to control cellular microenvironments and hard to obtain reliable cell images due to focus position and transparency issues. To overcome these problems, miniaturized 3D cell cultures in hydrogels were developed via cell printing techniques where cell spots in hydrogels can be arrayed on the surface of glass slides or plastic chips by microarray spotters and cultured in growth media to form cell-encapsulated 3D droplets for various cell-based assays. These approaches can dramatically reduce assay volume, provide accurate control over cellular microenvironments, and allow us to obtain clear 3D cell images for high-content imaging (HCI). In this review, several hydrogels that are compatible with microarray printing robots are discussed for miniaturized 3D cell cultures.

  6. Classification of mislabelled microarrays using robust sparse logistic regression.

    Science.gov (United States)

    Bootkrajang, Jakramate; Kabán, Ata

    2013-04-01

    Previous studies reported that labelling errors are not uncommon in microarray datasets. In such cases, the training set may become misleading, and the ability of classifiers to make reliable inferences from the data is compromised. Yet, few methods are currently available in the bioinformatics literature to deal with this problem. The few existing methods focus on data cleansing alone, without reference to classification, and their performance crucially depends on some tuning parameters. In this article, we develop a new method to detect mislabelled arrays simultaneously with learning a sparse logistic regression classifier. Our method may be seen as a label-noise robust extension of the well-known and successful Bayesian logistic regression classifier. To account for possible mislabelling, we formulate a label-flipping process as part of the classifier. The regularization parameter is automatically set using Bayesian regularization, which not only saves the computation time that cross-validation would take, but also eliminates any unwanted effects of label noise when setting the regularization parameter. Extensive experiments with both synthetic data and real microarray datasets demonstrate that our approach is able to counter the bad effects of labelling errors in terms of predictive performance, it is effective at identifying marker genes and simultaneously it detects mislabelled arrays to high accuracy. The code is available from http://cs.bham.ac.uk/∼jxb008. Supplementary data are available at Bioinformatics online.
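
    The label-flipping idea can be sketched with a simplified model in which the flip rate is fixed and known. This is an assumption of the sketch alone: the paper learns the noise rates from the data and sets the regularization by Bayesian means, both of which are omitted here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_robust_lr(X, y, flip=0.2, lr=0.1, iters=2000):
    """Logistic regression whose likelihood accounts for labels flipped
    with a known probability `flip` (a simplifying assumption)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)                       # P(true label = 1)
        q = (1 - flip) * p + flip * (1 - p)      # P(observed label = 1)
        # gradient of the negative log-likelihood under the flip model
        g = -(y / q - (1 - y) / (1 - q)) * (1 - 2 * flip) * p * (1 - p)
        w -= lr * X.T @ g / len(y)
    return w

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 2))
y_clean = (X @ np.array([2.0, -1.0]) > 0).astype(float)
flips = rng.random(400) < 0.2
y_noisy = np.where(flips, 1 - y_clean, y_clean)  # 20% of labels flipped
w = fit_robust_lr(X, y_noisy)
acc = np.mean((sigmoid(X @ w) > 0.5) == y_clean)
print(acc > 0.9)
```

    Because the observed-label probability q is bounded away from 0 and 1, a confidently mislabelled point no longer drives the loss to infinity, which is the source of the robustness.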

  7. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  8. Geiger mode avalanche photodiodes for microarray systems

    Science.gov (United States)

    Phelan, Don; Jackson, Carl; Redfern, R. Michael; Morrison, Alan P.; Mathewson, Alan

    2002-06-01

    New Geiger Mode Avalanche Photodiodes (GM-APD) have been designed and characterized specifically for use in microarray systems. Critical parameters such as excess reverse bias voltage, hold-off time and optimum operating temperature have been experimentally determined for these photon-counting devices. The photon detection probability, dark count rate and afterpulsing probability have been measured under different operating conditions. An active-quench circuit (AQC) is presented for operating these GM-APDs. This circuit is relatively simple and robust, and has such benefits as reducing average power dissipation and afterpulsing. Arrays of these GM-APDs have already been designed and, together with AQCs, open up the possibility of having a solid-state microarray detector that enables parallel analysis on a single chip. Another advantage of these GM-APDs over current technology is their low-voltage CMOS compatibility, which could allow for the fabrication of an AQC on the same device. Small-area detectors have already been employed in the time-resolved detection of fluorescence from labeled proteins. It is envisaged that operating these new GM-APDs with this active-quench circuit will have numerous applications for the detection of fluorescence in microarray systems.

  9. Robust Feature Selection from Microarray Data Based on Cooperative Game Theory and Qualitative Mutual Information

    Directory of Open Access Journals (Sweden)

    Atiyeh Mortazavi

    2016-01-01

    Full Text Available High dimensionality of microarray data sets may lead to low efficiency and overfitting. In this paper, a multiphase cooperative game theoretic feature selection approach is proposed for microarray data classification. In the first phase, due to high dimension of microarray data sets, the features are reduced using one of the two filter-based feature selection methods, namely, mutual information and Fisher ratio. In the second phase, Shapley index is used to evaluate the power of each feature. The main innovation of the proposed approach is to employ Qualitative Mutual Information (QMI) for this purpose. The idea of Qualitative Mutual Information causes the selected features to have more stability and this stability helps to deal with the problem of data imbalance and scarcity. In the third phase, a forward selection scheme is applied which uses a scoring function to weight each feature. The performance of the proposed method is compared with other popular feature selection algorithms such as Fisher ratio, minimum redundancy maximum relevance, and previous works on cooperative game based feature selection. The average classification accuracy on eleven microarray data sets shows that the proposed method improves both average accuracy and average stability compared to other approaches.
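
    The Shapley-index phase can be illustrated with a generic Monte-Carlo estimate over random feature orderings. The characteristic function below is plain training R² of a least-squares fit, which is an assumption of this sketch; the paper evaluates coalitions with qualitative mutual information instead.

```python
import numpy as np

def r_squared(X, y, S):
    """Characteristic function v(S): training R^2 of least squares on subset S."""
    if not S:
        return 0.0
    A = X[:, sorted(S)]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid @ resid / (y @ y)

def shapley_values(X, y, n_perm=200, seed=4):
    """Monte-Carlo Shapley estimate: average marginal gain of each
    feature over random orderings of the feature set."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    phi = np.zeros(d)
    for _ in range(n_perm):
        S, prev = set(), 0.0
        for f in rng.permutation(d):
            S.add(f)
            v = r_squared(X, y, S)
            phi[f] += v - prev
            prev = v
    return phi / n_perm

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 0] + X[:, 2] + 0.1 * rng.normal(size=100)  # features 0 and 2 matter
phi = shapley_values(X, y)
print(set(np.argsort(phi)[-2:]) == {0, 2})               # informative pair ranks top
```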

  10. Cyclic approximation to stasis

    Directory of Open Access Journals (Sweden)

    Stewart D. Johnson

    2009-06-01

    Full Text Available Neighborhoods of points in $\mathbb{R}^n$ where a positive linear combination of $C^1$ vector fields sums to zero contain, generically, cyclic trajectories that switch between the vector fields. Such points are called stasis points, and the approximating switching cycle can be chosen so that the timing of the switches exactly matches the positive linear weighting. In the case of two vector fields, the stasis points form one-dimensional $C^1$ manifolds containing nearby families of two-cycles. The generic case of two flows in $\mathbb{R}^3$ can be diffeomorphed to a standard form with cubic curves as trajectories.

  11. The relaxation time approximation

    International Nuclear Information System (INIS)

    Gairola, R.P.; Indu, B.D.

    1991-01-01

    A plausible approximation has been made to estimate the relaxation time from a knowledge of the transition probability of phonons from one state (r, q) to another state (r′, q′) as a result of collision. The relaxation time, thus obtained, shows a strong dependence on temperature and a weak dependence on the wave vector. In view of this dependence, the relaxation time has been expressed in terms of a temperature Taylor series in the first Brillouin zone. Consequently, a simple model for estimating the thermal conductivity is suggested. The calculations become much easier than with the Callaway model. (author). 14 refs

  12. Polynomial approximation on polytopes

    CERN Document Server

    Totik, Vilmos

    2014-01-01

    Polynomial approximation on convex polytopes in $\mathbf{R}^d$ is considered in uniform and $L^p$-norms. For an appropriate modulus of smoothness, matching direct and converse estimates are proven. In the $L^p$-case, so-called strong direct and converse results are also verified. The equivalence of the moduli of smoothness with an appropriate K-functional follows as a consequence. The results solve a problem that was left open since the mid 1980s, when some of the present findings were established for special, so-called simple polytopes.

  13. Finite elements and approximation

    CERN Document Server

    Zienkiewicz, O C

    2006-01-01

    A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o

  14. Multi-task feature selection in microarray data by binary integer programming.

    Science.gov (United States)

    Lan, Liang; Vucetic, Slobodan

    2013-12-20

    A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
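
    A relaxation of the kind the abstract describes can be sketched by maximizing a relevance-minus-redundancy quadratic over the box [0,1]^d instead of {0,1}^d, then keeping the top coordinates. The correlation-based relevance vector r, redundancy matrix Q and all constants are assumptions of this illustration, not the paper's exact objective.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
z, s = rng.normal(size=n), rng.normal(size=n)
X = np.c_[z, z + 0.01 * rng.normal(size=n),     # feature 1 duplicates feature 0
          s, rng.normal(size=(n, 3))]           # feature 2 carries its own signal
y = (z + s > 0).astype(float)

C = np.abs(np.corrcoef(np.c_[X, y].T))
r, Q = C[-1, :-1], C[:-1, :-1]                  # relevance vector, redundancy matrix

lam, lr, x = 0.3, 0.05, np.full(6, 0.5)
for _ in range(500):                             # projected gradient ascent on
    x = np.clip(x + lr * (r - 2 * lam * Q @ x), 0.0, 1.0)   # r.x - lam * x'Qx

top2 = set(int(i) for i in np.argsort(x)[-2:])
print(top2)   # feature 2 plus only one of the duplicated pair {0, 1}
```

    The redundancy term splits the weight between the two duplicated features, so thresholding retains just one of them alongside the independent signal, which is the behavior the binary-constrained objective is after.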

  15. Classification across gene expression microarray studies

    Directory of Open Access Journals (Sweden)

    Kuner Ruprecht

    2009-12-01

    Full Text Available Abstract Background The increasing number of gene expression microarray studies represents an important resource in biomedical research. As a result, gene expression based diagnosis has entered clinical practice for patient stratification in breast cancer. However, the integration and combined analysis of microarray studies still remains a challenge. We assessed the potential benefit of data integration on the classification accuracy and systematically evaluated the generalization performance of selected methods on four breast cancer studies comprising almost 1000 independent samples. To this end, we introduced an evaluation framework which aims to establish good statistical practice and a graphical way to monitor differences. The classification goal was to correctly predict estrogen receptor status (negative/positive) and histological grade (low/high) of each tumor sample in an independent study which was not used for the training. For the classification we chose support vector machines (SVM), predictive analysis of microarrays (PAM), random forest (RF) and k-top scoring pairs (kTSP). Guided by considerations relevant for classification across studies we developed a generalization of kTSP which we evaluated in addition. Our derived version (DV) aims to improve the robustness of the intrinsic invariance of kTSP with respect to technologies and preprocessing. Results For each individual study the generalization error was benchmarked via complete cross-validation and was found to be similar for all classification methods. The misclassification rates were substantially higher in classification across studies, when each single study was used as an independent test set while all remaining studies were combined for the training of the classifier. However, with increasing number of independent microarray studies used in the training, the overall classification performance improved. DV performed better than the average and showed slightly less variance. In
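
    The rank-based idea behind kTSP can be illustrated with a single top-scoring pair: choose the gene pair whose ordering X_i < X_j best separates the classes, a rule that is invariant to monotone per-array transformations and hence travels well across platforms. The data below are invented and k is fixed to 1 for brevity.

```python
import numpy as np

def train_tsp(X, y):
    """Return the pair (i, j) maximizing |P(Xi<Xj | class 0) - P(Xi<Xj | class 1)|."""
    d = X.shape[1]
    best, pair = -1.0, None
    for i in range(d):
        for j in range(i + 1, d):
            p0 = np.mean(X[y == 0, i] < X[y == 0, j])
            p1 = np.mean(X[y == 1, i] < X[y == 1, j])
            if abs(p0 - p1) > best:
                best, pair = abs(p0 - p1), (i, j)
    return pair, best

rng = np.random.default_rng(7)
y = np.repeat([0, 1], 100)
X = rng.normal(size=(200, 5))
X[y == 1, 0] += 3.0             # gene 0 is strongly up-regulated in class 1
pair, score = train_tsp(X, y)
print(pair)                      # gene 0 appears in the winning pair
```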

  16. An overview on Approximate Bayesian computation*

    Directory of Open Access Journals (Sweden)

    Baragatti Meïli

    2014-01-01

    Full Text Available Approximate Bayesian computation techniques, also called likelihood-free methods, are one of the most satisfactory approaches to intractable likelihood problems. This overview presents recent results since their introduction about ten years ago in population genetics.

  17. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years and in particular for the analysis of complex problems arising in biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
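
    The likelihood-free principle is easy to sketch with rejection ABC for the mean of a normal model: simulate from the prior and keep the draws whose summary statistic lands within a tolerance of the observed one. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
obs = rng.normal(3.0, 1.0, size=50)     # "observed" data, true mean 3
s_obs = obs.mean()                      # summary statistic

def abc_rejection(n_draws=20000, eps=0.1):
    theta = rng.normal(0.0, 5.0, size=n_draws)             # draws from the prior
    sims = rng.normal(theta[:, None], 1.0, (n_draws, 50))  # simulate datasets
    keep = np.abs(sims.mean(axis=1) - s_obs) < eps         # likelihood-free accept
    return theta[keep]

post = abc_rejection()
print(len(post), round(float(post.mean()), 2))
```

    The likelihood is never evaluated; the accepted draws approximate the posterior, with the tolerance eps trading accuracy against acceptance rate — the cost structure that multilevel and sequential variants set out to improve.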

  18. Evaluation of toxicity of the mycotoxin citrinin using yeast ORF DNA microarray and Oligo DNA microarray

    Directory of Open Access Journals (Sweden)

    Nobumasa Hitoshi

    2007-04-01

    Full Text Available Abstract Background Mycotoxins are fungal secondary metabolites commonly present in feed and food, and are widely regarded as hazardous contaminants. Citrinin, one of the very well known mycotoxins that was first isolated from Penicillium citrinum, is produced by more than 10 kinds of fungi, and is possibly spread all over the world. However, the information on the action mechanism of the toxin is limited. Thus, we investigated the citrinin-induced genomic response for evaluating its toxicity. Results Citrinin inhibited growth of yeast cells at a concentration higher than 100 ppm. We monitored the citrinin-induced mRNA expression profiles in yeast using the ORF DNA microarray and Oligo DNA microarray, and the expression profiles were compared with those of the other stress-inducing agents. Results obtained from both microarray experiments clustered together, but were different from those of the mycotoxin patulin. The oxidative stress response genes – AADs, FLR1, OYE3, GRE2, and MET17 – were significantly induced. In the functional category, expression of genes involved in "metabolism", "cell rescue, defense and virulence", and "energy" were significantly activated. In the category of "metabolism", genes involved in the glutathione synthesis pathway were activated, and in the category of "cell rescue, defense and virulence", the ABC transporter genes were induced. To alleviate the induced stress, these cells might pump out the citrinin after modification with glutathione. In contrast, citrinin treatment did not induce genes involved in DNA repair. Conclusion Results from both microarray studies suggest that citrinin treatment induced oxidative stress in yeast cells. The genotoxicity was less severe than that of patulin, suggesting that citrinin is less toxic than patulin. The reproducibility of the expression profiles was much better with the Oligo DNA microarray. However, the Oligo DNA microarray did not completely overcome cross

  19. The random phase approximation

    International Nuclear Information System (INIS)

    Schuck, P.

    1985-01-01

    RPA is the adequate theory to describe vibrations of the nucleus of very small amplitudes. These vibrations can either be forced by an external electromagnetic field or can be eigenmodes of the nucleus. In a one-dimensional analogue, the potential corresponding to such eigenmodes of very small amplitude should be rather stiff, otherwise the motion risks being a large-amplitude one and entering a region where the approximation is not valid. This means that nuclei which are supposedly well described by RPA must have a very stable groundstate configuration (must e.g. be very stiff against deformation). This is usually the case for doubly magic nuclei or close-to-magic nuclei, or for nuclei in the middle of proton and neutron shells which develop a very stable groundstate deformation; we take the deformation as an example, but there are many other possible degrees of freedom, such as compression modes, isovector degrees of freedom, spin degrees of freedom, and many more

  20. Exploring the use of internal and external controls for assessing microarray technical performance

    Directory of Open Access Journals (Sweden)

    Game Laurence

    2010-12-01

    Full Text Available Abstract Background The maturing of gene expression microarray technology and interest in the use of microarray-based applications for clinical and diagnostic applications calls for quantitative measures of quality. This manuscript presents a retrospective study characterizing several approaches to assess technical performance of microarray data measured on the Affymetrix GeneChip platform, including whole-array metrics and information from a standard mixture of external spike-in and endogenous internal controls. Spike-in controls were found to carry the same information about technical performance as whole-array metrics and endogenous "housekeeping" genes. These results support the use of spike-in controls as general tools for performance assessment across time, experimenters and array batches, suggesting that they have potential for comparison of microarray data generated across species using different technologies. Results A layered PCA modeling methodology that uses data from a number of classes of controls (spike-in hybridization, spike-in polyA+, internal RNA degradation, endogenous or "housekeeping genes" was used for the assessment of microarray data quality. The controls provide information on multiple stages of the experimental protocol (e.g., hybridization, RNA amplification. External spike-in, hybridization and RNA labeling controls provide information related to both assay and hybridization performance whereas internal endogenous controls provide quality information on the biological sample. We find that the variance of the data generated from the external and internal controls carries critical information about technical performance; the PCA dissection of this variance is consistent with whole-array quality assessment based on a number of quality assurance/quality control (QA/QC metrics. 
Conclusions These results provide support for the use of both external and internal RNA control data to assess the technical quality of microarray
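
The PCA-based control assessment the record describes can be sketched numerically: collect per-array control-probe intensities, project the arrays onto principal components, and flag arrays whose scores are outliers. Everything below (the simulated intensities, the z-score threshold of 2.5) is illustrative, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# rows = 12 arrays, columns = 20 spike-in control probes (log2 intensities);
# array 11 simulates a poorly hybridized outlier
good = rng.normal(8.0, 0.3, size=(11, 20))
bad = rng.normal(6.5, 1.2, size=(1, 20))
X = np.vstack([good, bad])

Xc = X - X.mean(axis=0)                 # centre each probe across arrays
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                          # array scores on the principal components

pc1 = scores[:, 0]
z = (pc1 - pc1.mean()) / pc1.std()
flagged = np.where(np.abs(z) > 2.5)[0]  # arrays needing technical review
print("flagged arrays:", flagged)
```

In the paper's layered variant, this decomposition is run separately for each control class (spike-in hybridization, polyA+, degradation, endogenous), so that each layer reports on its own stage of the protocol.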

  1. FiGS: a filter-based gene selection workbench for microarray data

    Directory of Open Access Journals (Sweden)

    Yun Taegyun

    2010-01-01

    Full Text Available Abstract Background The selection of genes that discriminate disease classes from microarray data is widely used for the identification of diagnostic biomarkers. Although various gene selection methods are currently available and some of them have shown excellent performance, no single method can retain the best performance for all types of microarray datasets. It is desirable to use a comparative approach to find the best gene selection result after rigorous testing of different methodological strategies for a given microarray dataset. Results FiGS is a web-based workbench that automatically compares various gene selection procedures and provides the optimal gene selection result for an input microarray dataset. FiGS builds up diverse gene selection procedures by aligning different feature selection techniques and classifiers. In addition to highly reputed techniques, FiGS diversifies the gene selection procedures by incorporating gene clustering options in the feature selection step and different data pre-processing options in the classifier training step. All candidate gene selection procedures are evaluated by the .632+ bootstrap error and listed with their classification accuracies and selected gene sets. FiGS runs on parallelized computing nodes that accommodate heavy computations. FiGS is freely accessible at http://gexp.kaist.ac.kr/figs. Conclusion FiGS is a web-based application that automates an extensive search for the optimal gene selection analysis for a microarray dataset in a parallel computing environment. FiGS will provide both an efficient and comprehensive means of acquiring optimal gene sets that discriminate disease states from microarray datasets.
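
FiGS ranks candidate procedures by the .632+ bootstrap error. A minimal sketch of the plain .632 estimator (the .632+ variant that FiGS uses adds a further no-information-rate correction for overfit classifiers) on toy one-dimensional data, with a stand-in nearest-mean classifier; all data and parameters here are illustrative.

```python
import random

random.seed(1)
# toy two-class data: class 0 centred at 0.0, class 1 centred at 1.0
data = [(random.gauss(0.0, 0.4), 0) for _ in range(30)] + \
       [(random.gauss(1.0, 0.4), 1) for _ in range(30)]

def nearest_mean_classifier(train):
    m0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
    m1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
    return lambda x: 0 if abs(x - m0) < abs(x - m1) else 1

def error(clf, pts):
    return sum(clf(x) != y for x, y in pts) / len(pts)

# apparent (resubstitution) error, optimistically biased
train_err = error(nearest_mean_classifier(data), data)

# out-of-bag error over B bootstrap resamples, pessimistically biased
B, oob_errs = 50, []
for _ in range(B):
    idx = [random.randrange(len(data)) for _ in range(len(data))]
    boot = [data[i] for i in idx]
    oob = [p for j, p in enumerate(data) if j not in set(idx)]
    if oob and len({y for _, y in boot}) == 2:
        oob_errs.append(error(nearest_mean_classifier(boot), oob))

err_oob = sum(oob_errs) / len(oob_errs)
# the .632 estimator blends the two biases
err_632 = 0.368 * train_err + 0.632 * err_oob
print(f".632 bootstrap error: {err_632:.3f}")
```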

  2. Design, construction and validation of a Plasmodium vivax microarray for the transcriptome profiling of clinical isolates

    KAUST Repository

    Boopathi, Pon Arunachalam

    2016-10-09

    High-density oligonucleotide microarrays have been used on Plasmodium vivax field isolates to estimate whole-genome expression. However, no microarray platform has been experimentally optimized for studying the transcriptome of field isolates. In the present study, we adopted both bioinformatics and experimental testing approaches to select the best-performing probes for detecting parasite transcripts in field samples and included them in the design of a custom 15K P. vivax microarray. This microarray has long oligonucleotide probes (60-mer) that were synthesized in situ onto glass slides using Agilent SurePrint technology, in an 8x15K format (8 identical arrays on a single slide). Probes in this array were experimentally validated and represent 4180 P. vivax genes in sense orientation, of which 1219 genes also have probes in antisense orientation. Validation of the 15K array using field samples (n = 14) showed detection of 99% of parasite transcripts in each sample. Correlation analysis between duplicate probes (n = 85) present on the arrays showed near-perfect agreement (r² = 0.98), indicating reproducibility. Multiple probes representing the same gene exhibited similar expression patterns across the samples (positive correlation, r ≥ 0.6). Comparisons of the hybridization data with previous studies and with quantitative real-time PCR experiments were performed to support the microarray validation procedure. This array is unique in its design, and the results indicate that it is sensitive and reproducible. Hence, this microarray could be a valuable functional genomics tool to generate reliable expression data from P. vivax field isolates. (C) 2016 Published by Elsevier B.V.

  4. Normalization for triple-target microarray experiments

    Directory of Open Access Journals (Sweden)

    Magniette Frederic

    2008-04-01

    Full Text Available Abstract Background Most microarray studies are made using labelling with one or two dyes, which allows the hybridization of one or two samples on the same slide. In such experiments, the most frequently used dyes are Cy3 and Cy5. Recent improvements in the technology (dye labelling, scanners and image analysis) allow hybridization of up to four samples simultaneously. The two additional dyes are Alexa488 and Alexa494. The triple-target or four-target technology is very promising, since it allows more flexibility in the design of experiments, an increase in statistical power when comparing gene expression induced by different conditions, and a scaled-down number of slides. However, few methods have been proposed for the statistical analysis of such data. Moreover, the lowess correction of the global dye effect is available only for two-color experiments, and even though its application can be extended, it does not allow simultaneous correction of the raw data. Results We propose a two-step normalization procedure for triple-target experiments. First the dye bleeding is evaluated and corrected if necessary. Then the signal in each channel is normalized using a generalized lowess procedure to correct the global dye bias. The normalization procedure is validated using triple-self experiments and by comparing the results of triple-target and two-color experiments. Although the focus is on triple-target microarrays, the proposed method can be used to normalize p differently labelled targets co-hybridized on the same array, for any value of p greater than 2. Conclusion The proposed normalization procedure is effective: the technical biases are reduced, the number of false positives is under control in the analysis of differentially expressed genes, and the triple-target experiments are more powerful than the corresponding two-color experiments. There is room for improving microarray experiments by simultaneously hybridizing more than two samples.
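
The intensity-dependent dye-bias correction that lowess normalization performs can be sketched on one channel pair: for each spot, subtract the typical log-ratio M of spots with similar average log-intensity A. A true (generalized) lowess fits a smooth local-regression curve; here a sliding-window median stands in for brevity, and the data are simulated.

```python
import random
import statistics

random.seed(2)
# simulated log2 signals: the second channel carries an intensity-dependent bias
a_vals, m_vals = [], []
for _ in range(500):
    a = random.uniform(6, 14)            # average log-intensity A
    bias = 0.1 * (a - 10)                # dye bias grows with intensity
    m = random.gauss(bias, 0.2)          # log-ratio M = bias + noise
    a_vals.append(a)
    m_vals.append(m)

def windowed_median_correction(a_vals, m_vals, width=1.0):
    """Subtract, from each M, the median M of spots with similar A."""
    corrected = []
    for a, m in zip(a_vals, m_vals):
        window = [mj for aj, mj in zip(a_vals, m_vals) if abs(aj - a) <= width]
        corrected.append(m - statistics.median(window))
    return corrected

m_norm = windowed_median_correction(a_vals, m_vals)
print("mean |M| before:", round(statistics.mean(map(abs, m_vals)), 3))
print("mean |M| after :", round(statistics.mean(map(abs, m_norm)), 3))
```

For p > 2 co-hybridized targets, the paper's generalized procedure applies the same idea simultaneously across all channel pairs rather than one pair at a time.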

  5. An algorithm for finding biologically significant features in microarray data based on a priori manifold learning.

    Directory of Open Access Journals (Sweden)

    Zena M Hira

    Full Text Available Microarray databases are a large source of genetic data, which, upon proper analysis, could enhance our understanding of biology and medicine. Many microarray experiments have been designed to investigate the genetic mechanisms of cancer, and analytical approaches have been applied in order to classify different types of cancer or distinguish between cancerous and non-cancerous tissue. However, microarrays are high-dimensional datasets with high levels of noise, and this causes problems when using machine learning methods. A popular approach to this problem is to search for a set of features that will simplify the structure and to some degree remove the noise from the data. The most widely used approach to feature extraction is principal component analysis (PCA), which assumes a multivariate Gaussian model of the data. More recently, non-linear methods have been investigated. Among these, manifold learning algorithms, for example Isomap, aim to project the data from a higher-dimensional space onto a lower-dimensional one. We have proposed a priori manifold learning for finding a manifold in which a representative set of microarray data is fused with relevant data taken from the KEGG pathway database. Once the manifold has been constructed, the raw microarray data is projected onto it and clustering and classification can take place. In contrast to earlier fusion-based methods, the prior knowledge from the KEGG database is not used in, and does not bias, the classification process--it merely acts as an aid to find the best space in which to search the data. In our experiments we have found that using our new manifold method gives better classification results than using either PCA or conventional Isomap.

  6. Development of a Feature and Template-Assisted Assembler and Application to the Analysis of a Foot-and-Mouth Disease Virus Genotyping Microarray.

    Directory of Open Access Journals (Sweden)

    Roger W Barrette

    Full Text Available Several RT-PCR and genome sequencing strategies exist for the resolution of Foot-and-Mouth Disease virus (FMDV). While these approaches are relatively straightforward, they can be vulnerable to failure due to the unpredictable nature of FMDV genome sequence variations. Sequence-independent single primer amplification (SISPA) followed by genotyping microarray offers an attractive unbiased approach to FMDV characterization. Here we describe a custom FMDV microarray and a companion feature and template-assisted assembler software (FAT-assembler) capable of resolving virus genome sequence using a moderate number of conserved microarray features. The results demonstrate that this approach may be used to rapidly characterize naturally occurring FMDV as well as an engineered chimeric strain of FMDV. The FAT-assembler, while applied to resolving FMDV genomes, represents a new bioinformatics approach that should be broadly applicable to interpreting microarray genotyping data for other viruses or target organisms.

  7. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Full Text Available Microarray study enables us to obtain hundreds of thousands of expressions of genes or genotypes at once, and it is an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  8. Design of a covalently bonded glycosphingolipid microarray

    DEFF Research Database (Denmark)

    Arigi, Emma; Blixt, Klas Ola; Buschard, Karsten

    2012-01-01

    , the major classes of plant and fungal GSLs. In this work, a prototype "universal" GSL-based covalent microarray has been designed, and preliminary evaluation of its potential utility in assaying protein-GSL binding interactions investigated. An essential step in development involved the enzymatic release...... of the fatty acyl moiety of the ceramide aglycone of selected mammalian GSLs with sphingolipid N-deacylase (SCDase). Derivatization of the free amino group of a typical lyso-GSL, lyso-G(M1), with a prototype linker assembled from succinimidyl-[(N-maleimidopropionamido)-diethyleneglycol] ester and 2...

  9. Linking probe thermodynamics to microarray quantification

    International Nuclear Information System (INIS)

    Li, Shuzhao; Pozhitkov, Alexander; Brouwer, Marius

    2010-01-01

    Understanding the difference in probe properties holds the key to absolute quantification of DNA microarrays. So far, Langmuir-like models have failed to link sequence-specific properties to hybridization signals in the presence of a complex hybridization background. Data from washing experiments indicate that the post-hybridization washing has no major effect on the specifically bound targets, which give the final signals. Thus, the amount of specific targets bound to probes is likely determined before washing, by the competition against nonspecific binding. Our competitive hybridization model is a viable alternative to Langmuir-like models. (comment)
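
For reference, the Langmuir-type models the comment argues against predict probe signal as a saturating function of target concentration; the competition with nonspecific background that the authors propose effectively changes the parameters of this curve. A minimal illustration of the bare isotherm, with hypothetical parameter values:

```python
def langmuir_signal(c, K, s_max=1.0):
    """Fractional probe occupancy for target concentration c and an
    effective dissociation constant K (same units as c)."""
    return s_max * c / (K + c)

# weaker binding (larger K) shifts the curve right; at c = K the probe is
# half-saturated, and the signal saturates toward s_max as c grows
for c in (0.1, 1.0, 10.0):
    print(c, round(langmuir_signal(c, K=1.0), 3))
```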

  10. Approximate quantum Markov chains

    CERN Document Server

    Sutter, David

    2018-01-01

    This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in order to understand the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. As a result we start by explaining two techniques of how to deal with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains and it answers a question from matrix analysis that was open since 1973, i.e., if Lieb's triple ma...

  11. Design of an Enterobacteriaceae Pan-genome Microarray Chip

    DEFF Research Database (Denmark)

    Lukjancenko, Oksana; Ussery, David

    2010-01-01

    -density microarray chip has been designed, using 116 Enterobacteriaceae genome sequences, taking into account the enteric pan-genome. Probes for the microarray were checked in silico, and the performance of the chip, based on experimental strains from four different genera, demonstrates a relatively high ability...... to distinguish those strains on genus, species, and pathotype/serovar levels. Additionally, the microarray performed well when investigating which genes were found in a given strain of interest. The Enterobacteriaceae pan-genome microarray, based on 116 genomes, provides a valuable tool for determination...

  12. Extracting gene expression patterns and identifying co-expressed genes from microarray data reveals biologically responsive processes

    Directory of Open Access Journals (Sweden)

    Paules Richard S

    2007-11-01

    Full Text Available Abstract Background A common observation in the analysis of gene expression data is that many genes display similarity in their expression patterns and therefore appear to be co-regulated. However, the variation associated with microarray data and the complexity of the experimental designs make the acquisition of co-expressed genes a challenge. We developed a novel method for Extracting microarray gene expression Patterns and Identifying co-expressed Genes, designated EPIG. The approach utilizes the underlying structure of gene expression data to extract patterns and identify co-expressed genes that are responsive to experimental conditions. Results Through evaluation of the correlations among profiles, the magnitude of variation in gene expression profiles, and profile signal-to-noise ratios, EPIG extracts a set of patterns representing co-expressed genes. The method is shown to work well with a simulated data set and with microarray data obtained from time-series studies of dauer recovery and L1 starvation in C. elegans and after ultraviolet (UV)- or ionizing radiation (IR)-induced DNA damage in diploid human fibroblasts. With the simulated data set, EPIG extracted the appropriate number of patterns, which were more stable and homogeneous than the set of patterns determined using the CLICK or CAST clustering algorithms. However, CLICK performed better than EPIG and CAST with respect to the average correlation between clusters/patterns of the simulated data. With real biological data, EPIG extracted more dauer-specific patterns than CLICK. Furthermore, analysis of the IR/UV data revealed 18 unique patterns and 2661 genes out of approximately 17,000 that were identified as significantly expressed and categorized to the patterns by EPIG. The time-dependent patterns displayed similar and dissimilar responses between IR and UV treatments. Gene Ontology analysis applied to each pattern-related subset of co-expressed genes revealed underlying
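
The two gene-level statistics this kind of pattern extraction relies on, profile correlation and a profile signal-to-noise ratio, can be sketched on toy time-course data. The thresholds, noise model, and SNR definition below are illustrative stand-ins, not EPIG's exact formulas.

```python
import math
import random
import statistics

random.seed(4)
base = [0, 1, 2, 3, 2, 1, 0, -1, -2, -3, -2, -1]   # a shared time-course pattern

def noisy(profile, sd):
    return [x + random.gauss(0, sd) for x in profile]

genes = {
    "g1": noisy(base, 0.2),        # follows the pattern
    "g2": noisy(base, 0.2),        # follows the pattern
    "g3": noisy([0] * 12, 0.2),    # flat: low signal-to-noise
}

def pearson(u, v):
    mu, mv = statistics.mean(u), statistics.mean(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) *
                    sum((b - mv) ** 2 for b in v))
    return num / den

def snr(profile):
    """Profile magnitude relative to an assumed noise sd of 0.2 (crude stand-in)."""
    return statistics.pstdev(profile) / 0.2

co_expressed = pearson(genes["g1"], genes["g2"]) > 0.8
print("g1/g2 co-expressed:", co_expressed)
print("g1 SNR:", round(snr(genes["g1"]), 2))
print("g3 SNR:", round(snr(genes["g3"]), 2))
```

Genes like g1 and g2 would be grouped into one extracted pattern; g3 fails the signal-to-noise screen and is left unassigned.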

  13. Quasi-homogenous approximation for description of the properties of dispersed systems. The basic approaches to model hardening processes in nanodispersed silica systems. Part 3. Penetration of energy barriers

    Directory of Open Access Journals (Sweden)

    KUDRYAVTSEV Pavel Gennadievich

    2015-06-01

    Full Text Available The paper deals with the possibilities of using the quasi-homogeneous approximation to describe the properties of dispersed systems. The authors applied the statistical polymer method, based on the consideration of averaged structures of all possible macromolecules of the same weight. Equations were derived that allow the evaluation of many additive parameters of macromolecules and of systems containing them. The statistical polymer method makes it possible to model branched, cross-linked macromolecules and systems containing them, in equilibrium or non-equilibrium states. Fractal analysis of statistical polymers allows the modeling of different types of random fractals and other objects studied with the methods of fractal theory. The statistical polymer method is applicable not only to polymers but also to composites, gels, associates in polar liquids and other packed systems. The states of colloidal silica solutions are also described from the standpoint of statistical physics. This approach is based on the idea that a colloidal solution of silicon dioxide - a silica sol - consists of an enormous number of interacting particles in constant motion. The paper is devoted to the study of an idealized system of colliding but otherwise non-interacting sol particles. The behavior of the silica sol was analyzed using the Maxwell-Boltzmann distribution, and the mean free path was calculated. From these data, the number of particles able to overcome the potential barrier in a collision was calculated. Different approaches to modeling the kinetics of the sol-gel transition were studied.
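
For an ideal Maxwell-Boltzmann gas of sol particles, the barrier-crossing estimate described above reduces to an Arrhenius-type fraction exp(-E_a / kT) of collisions energetic enough to overcome a barrier E_a. A worked example with an illustrative barrier height (the paper's actual barrier values depend on the silica system):

```python
import math

k_B = 1.380649e-23         # Boltzmann constant, J/K
T = 298.15                 # room temperature, K
E_a = 0.3 * 1.602e-19      # an assumed 0.3 eV barrier, converted to joules

# fraction of collisions with enough energy to cross the barrier
fraction = math.exp(-E_a / (k_B * T))
print(f"fraction of sufficiently energetic collisions: {fraction:.3e}")
```

Because the fraction is exponential in E_a / kT, modest changes in barrier height or temperature change the gelation rate by orders of magnitude, which is why the sol-gel kinetics are so sensitive to these parameters.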

  14. Quasi-homogenous approximation for description of the properties of dispersed systems. The basic approaches to model hardening processes in nanodispersed silica systems. Part 2. The hardening processes from the standpoint of statistical physics

    Directory of Open Access Journals (Sweden)

    KUDRYAVTSEV Pavel Gennadievich

    2015-04-01

    Full Text Available The paper deals with the possibilities of using the quasi-homogeneous approximation to describe the properties of dispersed systems. The authors applied the statistical polymer method, based on the consideration of averaged structures of all possible macromolecules of the same weight. Equations were derived that allow the evaluation of many additive parameters of macromolecules and of systems containing them. The statistical polymer method makes it possible to model branched, cross-linked macromolecules and systems containing them, in equilibrium or non-equilibrium states. Fractal analysis of statistical polymers allows the modeling of different types of random fractals and other objects studied with the methods of fractal theory. The statistical polymer method is applicable not only to polymers but also to composites, gels, associates in polar liquids and other packed systems. The states of colloidal silica solutions are also described from the standpoint of statistical physics. This approach is based on the idea that a colloidal solution of silicon dioxide - a silica sol - consists of an enormous number of interacting particles in constant motion. The paper is devoted to the study of an idealized system of colliding but otherwise non-interacting sol particles. The behavior of the silica sol was analyzed using the Maxwell-Boltzmann distribution, and the mean free path was calculated. From these data, the number of particles able to overcome the potential barrier in a collision was calculated. Different approaches to modeling the kinetics of the sol-gel transition were studied.

  15. Spline approximation, Part 1: Basic methodology

    Science.gov (United States)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy, point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis, a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem, they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation, and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles, spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2, the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches, as well as the utilization of splines for deformation detection, will be investigated on numerical examples in Part 3.
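
A least-squares spline fit using the truncated-power construction described in Part 1 can be sketched directly: the design matrix holds an ordinary polynomial part plus one truncated power (x - k)_+^3 per knot. The degree, knot positions, and noisy test data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 4, 80)
y = np.sin(1.5 * x) + rng.normal(0, 0.05, x.size)   # a noisy 2D "point cloud"

degree, knots = 3, [1.0, 2.0, 3.0]
# design matrix: ordinary polynomial basis + truncated powers (x - k)_+^3
cols = [x ** p for p in range(degree + 1)]
cols += [np.where(x > k, (x - k) ** degree, 0.0) for k in knots]
A = np.column_stack(cols)

# least-squares solution for the spline coefficients
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
rms = np.sqrt(np.mean(resid ** 2))
print(f"RMS approximation error: {rms:.4f}")
```

The truncated-power basis is simple but can become numerically ill-conditioned for many knots, which is one motivation for the B-spline formulation promised in Part 2.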

  16. Evaluation of variational approximations

    International Nuclear Information System (INIS)

    Trevisan, L.A.

    1991-01-01

    In Feynman's approach to quantum statistical mechanics, the partition function can be represented as a path integral. A recently proposed variational method of Feynman and Kleinert is able to transform the path integral into an integral in phase space, in which the quantum fluctuations have been taken care of by introducing the effective classical potential. This method has been tested successfully for smooth potentials and for the singular delta potential. Here the method is applied to strongly singular potentials: a quadratic potential and a linear potential, both with a rigid wall at the origin. By imposing the condition that the density of the particle vanish at the origin, an adapted Feynman-Kleinert method is introduced in order to improve the approximation. (author)

  17. Impulse approximation in solid helium

    International Nuclear Information System (INIS)

    Glyde, H.R.

    1985-01-01

    The incoherent dynamic form factor S_i(Q, ω) is evaluated in solid helium for comparison with the impulse approximation (IA). The purpose is to determine the Q values for which the IA is valid for systems such as helium, where the atoms interact via a potential having a steeply repulsive but not infinite hard core. For 3He, S_i(Q, ω) is evaluated from first principles, beginning with the pair potential. The density of states g(ω) is evaluated using self-consistent phonon theory and S_i(Q, ω) is expressed in terms of g(ω). For solid 4He, reasonable models of g(ω) using observed input parameters are used to evaluate S_i(Q, ω). In both cases S_i(Q, ω) is found to approach the impulse approximation S_IA(Q, ω) closely for wave-vector transfers Q ≳ 20 Å⁻¹. The difference between S_i and S_IA, which is due to final-state interactions of the scattering atom with the remainder of the atoms in the solid, is also predominantly antisymmetric in (ω − ω_R), where ω_R is the recoil frequency. This suggests that the symmetrization procedure proposed by Sears to eliminate final-state contributions should work well in solid helium

  18. A statistical framework for differential network analysis from microarray data

    Directory of Open Access Journals (Sweden)

    Datta Somnath

    2010-02-01

    Full Text Available Abstract Background It has long been known that genes do not act alone; rather, groups of genes act in concert during a biological process. Consequently, the expression levels of genes are dependent on each other. Experimental techniques to detect such interacting pairs of genes have been in place for quite some time. With the advent of microarray technology, newer computational techniques to detect such interaction or association between gene expressions are being proposed, leading to an association network. While most microarray analyses look for genes that are differentially expressed, it is of potentially greater significance to identify how entire association network structures change between two or more biological settings, say normal versus diseased cell types. Results We provide a recipe for conducting a differential analysis of networks constructed from microarray data under two experimental settings. At the core of our approach lies a connectivity score that represents the strength of genetic association or interaction between two genes. We use this score to propose formal statistical tests for each of the following queries: (i) whether the overall modular structures of the two networks are different, (ii) whether the connectivity of a particular set of "interesting genes" has changed between the two networks, and (iii) whether the connectivity of a given single gene has changed between the two networks. A number of examples of this score are provided. We carried out our method on two types of simulated data: Gaussian networks and networks based on differential equations. We show that, for appropriate choices of the connectivity scores and tuning parameters, our method works well on simulated data. We also analyze a real data set involving normal versus heavy mice and identify an interesting set of genes that may play key roles in obesity. Conclusions Examining changes in network structure can provide valuable information about the
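
The core idea, a connectivity score compared between two conditions, can be sketched with the simplest admissible score, the absolute Pearson correlation (the framework allows other choices). Below, one simulated condition couples gene gA to gB while the other does not; the per-gene change in connectivity (query iii) then singles out gA and gB. Data and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n_samples = 100

def simulate(coupled):
    """Expression matrix for 3 genes; gA drives gB only when coupled."""
    gA = rng.normal(size=n_samples)
    if coupled:
        gB = 0.9 * gA + 0.3 * rng.normal(size=n_samples)
    else:
        gB = rng.normal(size=n_samples)
    gC = rng.normal(size=n_samples)
    return np.vstack([gA, gB, gC])

def connectivity(X):
    # |Pearson correlation| as the pairwise connectivity score
    return np.abs(np.corrcoef(X))

C1 = connectivity(simulate(coupled=True))    # e.g. normal tissue
C2 = connectivity(simulate(coupled=False))   # e.g. diseased tissue

# per-gene total change in connectivity to all other genes
d = np.abs(C1 - C2).sum(axis=1)
for name, v in zip(["gA", "gB", "gC"], d):
    print(name, round(float(v), 2))
```

In the paper, significance for such statistics is assessed by a formal test (e.g. via permutation of condition labels) rather than by eyeballing the scores.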

  19. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the previously obtained self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case and, in some limit, can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which includes a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties
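
The Padé approximants that serve as the baseline here are themselves easy to illustrate (the factor approximants generalize them and are not reproduced in this sketch). The [2/2] Padé approximant of exp(x), built from the same five Taylor coefficients as the fourth-order truncation, is already noticeably more accurate at x = 1:

```python
import math

def pade_2_2_exp(x):
    # [2/2] Padé approximant of exp(x): (12 + 6x + x^2) / (12 - 6x + x^2),
    # which matches the Taylor series of exp(x) through order x^4
    return (12 + 6 * x + x * x) / (12 - 6 * x + x * x)

def taylor4_exp(x):
    # plain Taylor truncation through x^4, for comparison
    return sum(x ** n / math.factorial(n) for n in range(5))

x = 1.0
exact = math.exp(x)
print("Padé [2/2] error:", abs(pade_2_2_exp(x) - exact))
print("Taylor-4   error:", abs(taylor4_exp(x) - exact))
```

Rational forms like this resum information the truncated series discards; the factor approximants of the abstract push the same idea further by using products of power-law factors whose exponents are fixed by the accuracy-through-order conditions.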

  20. Transcriptome analysis in non-model species: a new method for the analysis of heterologous hybridization on microarrays

    Directory of Open Access Journals (Sweden)

    Jouventin Pierre

    2010-05-01

    Full Text Available Abstract Background Recent developments in high-throughput methods of analyzing transcriptomic profiles are promising for many areas of biology, including ecophysiology. However, although commercial microarrays are available for most common laboratory models, transcriptome analysis in non-traditional model species still remains a challenge. Indeed, the signal resulting from heterologous hybridization is low and difficult to interpret because of the weak complementarity between probe and target sequences, especially when no microarray dedicated to a genetically close species is available. Results We show here that transcriptome analysis in a species genetically distant from laboratory models is made possible by using MAXRS, a new method for analyzing heterologous hybridization on microarrays. This method takes advantage of the design of several commercial microarrays, in which different probes target the same transcript. To illustrate and test this method, we analyzed the transcriptome of king penguin pectoralis muscle hybridized to Affymetrix chicken microarrays, two organisms separated by an evolutionary distance of approximately 100 million years. The differential gene expression between physiological situations computed by MAXRS was confirmed by real-time PCR for 10 of the 11 genes tested. Conclusions MAXRS appears to be an appropriate method for gene expression analysis under heterologous hybridization conditions.

  1. Improving the scaling normalization for high-density oligonucleotide GeneChip expression microarrays

    Directory of Open Access Journals (Sweden)

    Lu Chao

    2004-07-01

    Full Text Available Abstract Background Normalization is an important step in microarray data analysis to minimize biological and technical variations. Choosing a suitable approach can be critical. The default method for GeneChip expression microarrays uses a constant factor, the scaling factor (SF), for every gene on an array. The SF is obtained from a trimmed average signal of the array after excluding the 2% of the probe sets with the highest and the lowest values. Results Among the 76 U34A GeneChip experiments, the total signal on each array showed a coefficient of variation of 25.8%, although all microarrays were hybridized with the same amount of biotin-labeled cRNA. The 2% of the probe sets with the highest signals that were normally excluded from the SF calculation accounted for 34% to 54% of the total signal (40.7% ± 4.4%, mean ± SD). In comparison with normalization factors obtained from the median signal or from the mean of the log-transformed signal, the SF showed the greatest variation. The normalization factors obtained from log-transformed signals showed the least variation. Conclusions Eliminating 40% of the signal data during SF calculation failed to show any benefit. Normalization factors obtained with log-transformed signals performed the best. Thus, it is suggested to use the mean of the log-transformed data for normalization, rather than the arithmetic mean of signals, in GeneChip gene expression microarrays.
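
The three normalization bases the study compares can be sketched on simulated probe-set signals: the GeneChip-style trimmed mean (the basis of the SF), the median, and the mean of log-transformed signals. The simulated log-normal signal distribution below is an illustrative stand-in for real array data.

```python
import math
import random
import statistics

random.seed(7)
# simulated probe-set signals: log-normal, with the heavy right tail
# typical of expression data
signals = [math.exp(random.gauss(6, 1.5)) for _ in range(5000)]

def trimmed_mean(vals, trim=0.02):
    """Mean after dropping the top and bottom 2% of probe sets (SF-style)."""
    s = sorted(vals)
    k = int(len(s) * trim)
    return statistics.mean(s[k:len(s) - k])

sf_basis = trimmed_mean(signals)                          # SF-style basis
median_basis = statistics.median(signals)                 # median basis
log_basis = statistics.mean(math.log2(v) for v in signals)  # mean of log2 signals

print(f"trimmed mean: {sf_basis:.1f}")
print(f"median      : {median_basis:.1f}")
print(f"mean log2   : {log_basis:.2f}")
```

Because the trimmed mean is still dominated by the bright tail while the log-mean weights every probe set comparably, the log-based factor varies least from array to array, which is the study's central observation.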

  2. Knowledge-based analysis of microarrays for the discovery of transcriptional regulation relationships.

    Science.gov (United States)

    Seok, Junhee; Kaushal, Amit; Davis, Ronald W; Xiao, Wenzhong

    2010-01-18

    The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data.

  3. Microarray assessment of virulence, antibiotic, and heavy metal resistance in an agricultural watershed creek.

    Science.gov (United States)

    Unc, Adrian; Zurek, Ludek; Peterson, Greg; Narayanan, Sanjeev; Springthorpe, Susan V; Sattar, Syed A

    2012-01-01

    Potential risks associated with impaired surface water quality have commonly been evaluated by indirect description of potential sources using various fecal microbial indicators and derived source-tracking methods. These approaches are valuable for assessing and monitoring the impacts of land-use changes and changes in management practices at the source of contamination. A more detailed evaluation of putative etiologically significant genetic determinants can add value to these assessments. We evaluated the utility of using a microarray that integrates virulence genes with antibiotic and heavy metal resistance genes to describe and discriminate among spatially and seasonally distinct water samples from an agricultural watershed creek in Eastern Ontario. Because microarray signals may be analyzed as binomial distributions, the significance of ambiguous signals can be easily evaluated by using available off-the-shelf software. The FAMD software was used to evaluate uncertainties in the signal data. Analysis of multilocus fingerprinting data sets containing missing data has shown that, for the tested system, any variability in microarray signals had a marginal effect on data interpretation. For the tested watershed, results suggest that in general the wet fall season increased the downstream detection of virulence and resistance genes. Thus, the tested microarray technique has the potential to rapidly describe the quality of surface waters and thus to provide a qualitative tool to augment quantitative microbial risk assessments. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  4. NMD Microarray Analysis for Rapid Genome-Wide Screen of Mutated Genes in Cancer

    Directory of Open Access Journals (Sweden)

    Maija Wolf

    2005-01-01

    Full Text Available Gene mutations play a critical role in cancer development and progression, and their identification offers possibilities for accurate diagnostics and therapeutic targeting. Finding genes undergoing mutations is challenging and slow, even in the post-genomic era. A new approach was recently developed by Noensie and Dietz to prioritize and focus the search, making use of nonsense-mediated mRNA decay (NMD) inhibition and microarray analysis (NMD microarrays) in the identification of transcripts containing nonsense mutations. We combined NMD microarrays with array-based CGH (comparative genomic hybridization) in order to identify inactivation of tumor suppressor genes in cancer. Such a “mutatomics” screening of prostate cancer cell lines led to the identification of inactivating mutations in the EPHB2 gene. Up to 8% of metastatic uncultured prostate cancers also showed mutations of this gene, whose loss of function may confer loss of tissue architecture. NMD microarray analysis could turn out to be a powerful research method to identify novel mutated genes in cancer cell lines, providing targets that could then be further investigated for their clinical relevance and therapeutic potential.

  5. Ontology-based, Tissue MicroArray oriented, image centered tissue bank

    Directory of Open Access Journals (Sweden)

    Viti Federica

    2008-04-01

    Full Text Available Abstract Background The Tissue MicroArray technique is becoming increasingly important in pathology for the validation of experimental data from transcriptomic analysis. This approach produces many images which need to be properly managed, if possible with an infrastructure able to support tissue sharing between institutes. Moreover, the available frameworks oriented to Tissue MicroArray provide good storage for clinical patient, sample treatment and block construction information, but their utility is limited by the lack of data integration with biomolecular information. Results In this work we propose a web-oriented Tissue MicroArray system that supports researchers in managing bio-samples and, through the use of ontologies, enables tissue sharing aimed at the design of Tissue MicroArray experiments and the evaluation of results. Indeed, our system provides an ontological description both for pre-analysis tissue images and for post-process analysis image results, which is crucial for information exchange. Moreover, by working on well-defined terms it is then possible to query web resources for literature articles to integrate both pathology and bioinformatics data. Conclusions Using this system, users associate an ontology-based description to each image uploaded into the database and also integrate results with the ontological description of biosequences identified in every tissue. Moreover, it is possible to integrate the ontological description provided by the user with a fully compliant Gene Ontology definition, enabling statistical studies on the correlation between the analyzed pathology and the most commonly related biological processes.

  6. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development

    Directory of Open Access Journals (Sweden)

    Jennifer A Hipp

    2012-01-01

    Full Text Available Background: Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. Recently developed digital tools, digital core (dCORE) and image microarray maker (iMAM), enable the capture of uniformly sized and resolution-matched images representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Methods: Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis of a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Results: Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic

  7. Consistent Differential Expression Pattern (CDEP) on microarray to identify genes related to metastatic behavior.

    Science.gov (United States)

    Tsoi, Lam C; Qin, Tingting; Slate, Elizabeth H; Zheng, W Jim

    2011-11-11

    To utilize the large volume of gene expression information generated from different microarray experiments, several meta-analysis techniques have been developed. Despite these efforts, there remain significant challenges to effectively increasing the statistical power and decreasing the Type I error rate while pooling heterogeneous datasets from public resources. The objective of this study is to develop a novel meta-analysis approach, Consistent Differential Expression Pattern (CDEP), to identify genes with common differential expression patterns across different datasets. We combined False Discovery Rate (FDR) estimation and the non-parametric RankProd approach to estimate the Type I error rate in each microarray dataset of the meta-analysis. These Type I error rates from all datasets were then used to identify genes with common differential expression patterns. Our simulation study showed that CDEP achieved higher statistical power and maintained a low Type I error rate when compared with two recently proposed meta-analysis approaches. We applied CDEP to analyze microarray data from different laboratories that compared transcription profiles between metastatic and primary cancer of different types. Many genes identified as differentially expressed consistently across different cancer types are in pathways related to metastatic behavior, such as ECM-receptor interaction, focal adhesion, and blood vessel development. We also identified novel genes such as AMIGO2, Gem, and CXCL11 that have not been shown to associate with, but may play roles in, metastasis. CDEP is a flexible approach that borrows information from each dataset in a meta-analysis in order to identify genes being differentially expressed consistently.
We have shown that CDEP can gain higher statistical power than other existing approaches under a variety of settings considered in the simulation study, suggesting its robustness and insensitivity to data variation commonly associated with microarray
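
    The RankProd ingredient mentioned above can be illustrated with a toy rank-product computation (a minimal sketch of the rank-product statistic that CDEP builds on, not the authors' implementation; the fold-change matrix below is invented):

```python
import numpy as np

def rank_product(fold_changes):
    """Rank-product statistic: rank genes by fold change within each
    replicate comparison (rank 1 = most up-regulated), then take the
    geometric mean of each gene's ranks across replicates."""
    fc = np.asarray(fold_changes, dtype=float)      # shape: (replicates, genes)
    ranks = fc.shape[1] - fc.argsort(axis=1).argsort(axis=1)
    return np.exp(np.log(ranks).mean(axis=0))

# toy data: gene 0 is consistently the most up-regulated in all replicates
fc = np.array([[3.1, 0.4, 1.0, 0.9],
               [2.8, 0.5, 1.1, 1.0],
               [3.5, 0.6, 0.9, 1.2]])
rp = rank_product(fc)
print(rp.argmin())  # → 0 (smallest rank product flags the consistently top-ranked gene)
```

    A small rank product indicates that a gene is highly ranked in every dataset, which is exactly the kind of cross-dataset consistency a meta-analysis method rewards.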

  8. International Conference Approximation Theory XV

    CERN Document Server

    Schumaker, Larry

    2017-01-01

    These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...

  9. Conditional Density Approximations with Mixtures of Polynomials

    DEFF Research Database (Denmark)

    Varando, Gherardo; López-Cruz, Pedro L.; Nielsen, Thomas Dyhre

    2015-01-01

    Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce...... two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities...

  10. Shared probe design and existing microarray reanalysis using PICKY

    Directory of Open Access Journals (Sweden)

    Chou Hui-Hsien

    2010-04-01

    Full Text Available Abstract Background Large genomes contain families of highly similar genes that cannot be individually identified by microarray probes. This limitation is due to thermodynamic restrictions and cannot be resolved by any computational method. Since gene annotations are updated more frequently than microarrays, another common issue facing microarray users is that existing microarrays must be routinely reanalyzed to determine probes that are still useful with respect to the updated annotations. Results PICKY 2.0 can design shared probes for sets of genes that cannot be individually identified using unique probes. PICKY 2.0 uses novel algorithms to track sharable regions among genes and to strictly distinguish them from other highly similar but nontarget regions during thermodynamic comparisons. Therefore, PICKY does not sacrifice the quality of shared probes when choosing them. The latest PICKY 2.1 includes the new capability to reanalyze existing microarray probes against updated gene sets to determine probes that are still valid to use. In addition, more precise nonlinear salt effect estimates and other improvements are added, making PICKY 2.1 more versatile to microarray users. Conclusions Shared probes allow expressed gene family members to be detected; this capability is generally more desirable than not knowing anything about these genes. Shared probes also enable the design of cross-genome microarrays, which facilitate multiple species identification in environmental samples. The new nonlinear salt effect calculation significantly increases the precision of probes at a lower buffer salt concentration, and the probe reanalysis function improves existing microarray result interpretations.
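
    The idea of a shared probe (a signature common to a whole gene family but absent from nontarget sequences) can be caricatured with exact k-mer matching. Note that PICKY itself relies on thermodynamic comparisons rather than this toy criterion, and the sequences below are invented:

```python
def shared_probe_candidates(family, nontargets, k=8):
    """Toy shared-probe criterion: k-mers present in every gene of a
    family but absent from all nontarget sequences. (PICKY uses
    thermodynamic comparisons, not exact k-mer matching.)"""
    kmers = lambda s: {s[i:i + k] for i in range(len(s) - k + 1)}
    shared = set.intersection(*(kmers(g) for g in family))
    background = set().union(*(kmers(g) for g in nontargets)) if nontargets else set()
    return sorted(shared - background)

family = ["AAGGCTGACCTTGA", "TTGGCTGACCTTAC"]   # hypothetical paralogs
nontargets = ["GGCTGACCAAAAAA"]                 # hypothetical nontarget region
print(shared_probe_candidates(family, nontargets))
```

    The candidates returned cover all family members at once, mirroring how a shared probe detects expressed gene family members that no unique probe could distinguish.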

  11. A Critical Perspective On Microarray Breast Cancer Gene Expression Profiling

    NARCIS (Netherlands)

    Sontrop, H.M.J.

    2015-01-01

    Microarrays offer biologists an exciting tool that allows the simultaneous assessment of gene expression levels for thousands of genes at once. At the time of their inception, microarrays were hailed as the new dawn in cancer biology and oncology practice with the hope that within a decade diseases

  12. The Importance of Normalization on Large and Heterogeneous Microarray Datasets

    Science.gov (United States)

    DNA microarray technology is a powerful functional genomics tool increasingly used for investigating global gene expression in environmental studies. Microarrays can also be used in identifying biological networks, as they give insight on the complex gene-to-gene interactions, ne...

  13. The application of DNA microarrays in gene expression analysis

    NARCIS (Netherlands)

    Hal, van N.L.W.; Vorst, O.; Houwelingen, van A.M.M.L.; Kok, E.J.; Peijnenburg, A.A.C.M.; Aharoni, A.; Tunen, van A.J.; Keijer, J.

    2000-01-01

    DNA microarray technology is a new and powerful technology that will substantially increase the speed of molecular biological research. This paper gives a survey of DNA microarray technology and its use in gene expression studies. The technical aspects and their potential improvements are discussed.

  14. Uses of Dendrimers for DNA Microarrays

    Directory of Open Access Journals (Sweden)

    Jean-Pierre Majoral

    2006-08-01

    Full Text Available Biosensors such as DNA microarrays and microchips are gaining increasing importance in medicinal, forensic, and environmental analyses. Such devices are based on the detection of supramolecular interactions called hybridizations that occur between complementary oligonucleotides, one linked to a solid surface (the probe), and the other one to be analyzed (the target). This paper focuses on the improvements that hyperbranched and perfectly defined nanomolecules called dendrimers can provide to this methodology. Two main uses of dendrimers for this purpose have been described up to now; either the dendrimer is used as a linker between the solid surface and the probe oligonucleotide, or the dendrimer is used as a multilabeled entity linked to the target oligonucleotide. In the first case the dendrimer generally induces a higher loading of probes and an easier hybridization, due to moving away from the solid phase. In the second case the high number of localized labels (generally fluorescent) induces an increased sensitivity, allowing the detection of small quantities of biological entities.

  15. Application of four dyes in gene expression analyses by microarrays

    Directory of Open Access Journals (Sweden)

    van Schooten Frederik J

    2005-07-01

    Full Text Available Abstract Background DNA microarrays are widely used in gene expression analyses. To increase throughput and minimize costs without reducing the gene expression data obtained, we investigated whether four mRNA samples can be analyzed simultaneously by applying four different fluorescent dyes. Results Following tests for cross-talk of fluorescence signals, Alexa 488, Alexa 594, Cyanine 3 and Cyanine 5 were selected for hybridizations. For self-hybridizations, a single RNA sample was labelled with all dyes and hybridized on commercial cDNA arrays or on in-house spotted oligonucleotide arrays. Correlation coefficients for all combinations of dyes were above 0.9 on the cDNA array. On the oligonucleotide array they were above 0.8, except for combinations with Alexa 488, which were approximately 0.5. The standard deviation of expression differences for replicate spots was similar on the cDNA array for all dye combinations, but on the oligonucleotide array combinations with Alexa 488 showed a higher variation. Conclusion In conclusion, the four dyes can be used simultaneously for gene expression experiments on the tested cDNA array, but only three dyes can be used on the tested oligonucleotide array. This was confirmed by hybridizations of control with test samples, as all combinations returned similar numbers of differentially expressed genes with comparable effects on gene expression.
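
    The self-hybridization check described above (one RNA sample, several dyes, pairwise correlation of log signals) can be simulated in a few lines. The per-dye noise levels here are assumptions chosen to mimic the reported behavior of Alexa 488 on the oligonucleotide array, not measured values, and the dye keys are just labels:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
true_expr = rng.lognormal(8.0, 1.0, size=2000)     # one RNA sample, synthetic expression
dyes = ["Alexa488", "Alexa594", "Cy3", "Cy5"]
noise = {"Alexa488": 0.60, "Alexa594": 0.15, "Cy3": 0.15, "Cy5": 0.15}  # assumed noise (log-sd)

# simulate a self-hybridization: the same sample measured once per dye
signals = {d: true_expr * rng.lognormal(0.0, noise[d], size=true_expr.size) for d in dyes}

for a, b in combinations(dyes, 2):
    r = np.corrcoef(np.log(signals[a]), np.log(signals[b]))[0, 1]
    print(f"{a} vs {b}: r = {r:.2f}")  # pairs involving the noisier dye correlate worse
```

    In a self-hybridization any deviation from perfect correlation reflects dye and measurement noise, which is why low coefficients for one dye disqualify it from multi-dye designs.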

  16. Bystander effect: Biological endpoints and microarray analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chaudhry, M. Ahmad [Department of Medical Laboratory and Radiation Sciences, College of Nursing and Health Sciences, University of Vermont, 302 Rowell Building, Burlington, VT 05405 (United States) and DNA Microarray Facility, University of Vermont, Burlington, VT 05405 (United States)]. E-mail: mchaudhr@uvm.edu

    2006-05-11

    In cell populations exposed to ionizing radiation, the biological effects occur in a much larger proportion of cells than are estimated to be traversed by radiation. It has been suggested that irradiated cells are capable of providing signals to the neighboring unirradiated cells resulting in damage to these cells. This phenomenon is termed the bystander effect. The bystander effect induces persistent, long-term, transmissible changes that result in delayed death and neoplastic transformation. Because the bystander effect is relevant to carcinogenesis, it could have significant implications for risk estimation for radiation exposure. The nature of the bystander effect signal and how it impacts the unirradiated cells remains to be elucidated. Examination of the changes in gene expression could provide clues to understanding the bystander effect and could define the signaling pathways involved in sustaining damage to these cells. The microarray technology serves as a tool to gain insight into the molecular pathways leading to bystander effect. Using medium from irradiated normal human diploid lung fibroblasts as a model system we examined gene expression alterations in bystander cells. The microarray data revealed that the radiation-induced gene expression profile in irradiated cells is different from unirradiated bystander cells suggesting that the pathways leading to biological effects in the bystander cells are different from the directly irradiated cells. The genes known to be responsive to ionizing radiation were observed in irradiated cells. Several genes were upregulated in cells receiving media from irradiated cells. Surprisingly no genes were found to be downregulated in these cells. A number of genes belonging to extracellular signaling, growth factors and several receptors were identified in bystander cells. Interestingly 15 genes involved in the cell communication processes were found to be upregulated. The induction of receptors and the cell

  17. Bystander effect: Biological endpoints and microarray analysis

    International Nuclear Information System (INIS)

    Chaudhry, M. Ahmad

    2006-01-01

    In cell populations exposed to ionizing radiation, the biological effects occur in a much larger proportion of cells than are estimated to be traversed by radiation. It has been suggested that irradiated cells are capable of providing signals to the neighboring unirradiated cells resulting in damage to these cells. This phenomenon is termed the bystander effect. The bystander effect induces persistent, long-term, transmissible changes that result in delayed death and neoplastic transformation. Because the bystander effect is relevant to carcinogenesis, it could have significant implications for risk estimation for radiation exposure. The nature of the bystander effect signal and how it impacts the unirradiated cells remains to be elucidated. Examination of the changes in gene expression could provide clues to understanding the bystander effect and could define the signaling pathways involved in sustaining damage to these cells. The microarray technology serves as a tool to gain insight into the molecular pathways leading to bystander effect. Using medium from irradiated normal human diploid lung fibroblasts as a model system we examined gene expression alterations in bystander cells. The microarray data revealed that the radiation-induced gene expression profile in irradiated cells is different from unirradiated bystander cells suggesting that the pathways leading to biological effects in the bystander cells are different from the directly irradiated cells. The genes known to be responsive to ionizing radiation were observed in irradiated cells. Several genes were upregulated in cells receiving media from irradiated cells. Surprisingly no genes were found to be downregulated in these cells. A number of genes belonging to extracellular signaling, growth factors and several receptors were identified in bystander cells. Interestingly 15 genes involved in the cell communication processes were found to be upregulated. The induction of receptors and the cell

  18. Lipid Microarray Biosensor for Biotoxin Detection.

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Anup K.; Throckmorton, Daniel J.; Moran-Mirabal, Jose C.; Edel, Joshua B.; Meyer, Grant D.; Craighead, Harold G.

    2006-05-01

    We present the use of micron-sized lipid domains, patterned onto planar substrates and within microfluidic channels, to assay the binding of bacterial toxins via total internal reflection fluorescence microscopy (TIRFM). The lipid domains were patterned using a polymer lift-off technique and consisted of ganglioside-populated DSPC:cholesterol supported lipid bilayers (SLBs). Lipid patterns were formed on the substrates by vesicle fusion followed by polymer lift-off, which revealed micron-sized SLBs containing either ganglioside GT1b or GM1. The ganglioside-populated SLB arrays were then exposed to either Cholera toxin subunit B (CTB) or Tetanus toxin fragment C (TTC). Binding was assayed on planar substrates by TIRFM down to 1 nM concentration for CTB and 100 nM for TTC. Apparent binding constants extracted from three different models applied to the binding curves suggest that binding of a protein to a lipid-based receptor is strongly affected by the lipid composition of the SLB and by the substrate on which the bilayer is formed. Patterning of SLBs inside microfluidic channels also allowed the preparation of lipid domains with different compositions on a single device. Arrays within microfluidic channels were used to achieve segregation and selective binding from a binary mixture of the toxin fragments in one device. The binding and segregation within the microfluidic channels was assayed with epifluorescence as proof of concept. We propose that the method used for patterning the lipid microarrays on planar substrates and within microfluidic channels can be easily adapted to proteins or nucleic acids and can be used for biosensor applications and cell stimulation assays under different flow conditions. KEYWORDS: Microarray, ganglioside, polymer lift-off, cholera toxin, tetanus toxin, TIRFM, binding constant.

  19. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
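
    For matrices, the two-dimensional special case, the optimal rank-r approximation underlying such rank-structured methods is given by the truncated SVD. A generic sketch (not the hierarchical tensor algorithms of the talk):

```python
import numpy as np

# Rank-r truncation via SVD: the basic building block behind
# rank-structured (e.g. hierarchical tensor) approximations.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # exactly rank 3
A += 1e-6 * rng.standard_normal(A.shape)                          # small perturbation

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 3
A_r = (U[:, :r] * s[:r]) @ Vt[:r]    # best rank-3 approximation in Frobenius norm

rel_err = np.linalg.norm(A - A_r) / np.linalg.norm(A)
print(rel_err)  # tiny: the matrix is numerically rank 3
```

    Hierarchical tensor formats apply this kind of truncation recursively to matricizations of a multi-way array, which is what keeps high-dimensional approximation tractable.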

  20. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.

  1. Versatile High Resolution Oligosaccharide Microarrays for Plant Glycobiology and Cell Wall Research

    DEFF Research Database (Denmark)

    Pedersen, Henriette Lodberg; Fangel, Jonatan Ulrik; McCleary, Barry

    2012-01-01

    Microarrays are powerful tools for high throughput analysis, and hundreds or thousands of molecular interactions can be assessed simultaneously using very small amounts of analytes. Nucleotide microarrays are well established in plant research, but carbohydrate microarrays are much less establish...

  2. Functional Characterization of Gibberellin-Regulated Genes in Rice Using Microarray System

    OpenAIRE

    Jan, Asad; Komatsu, Setsuko

    2006-01-01

    Gibberellin (GA) collectively refers to a group of diterpenoid acids, some of which act as plant hormones and are essential for normal plant growth and development. DNA microarray technology has become the standard tool for the parallel quantification of large numbers of messenger RNA transcripts. The power of this approach has been demonstrated in dissecting plant physiology and development, and in unraveling the underlying cellular signaling pathways. To understand the molecular mechan...

  3. Transcriptional profiling of endocrine cerebro-osteodysplasia using microarray and next-generation sequencing.

    Directory of Open Access Journals (Sweden)

    Piya Lahiry

    Full Text Available BACKGROUND: Transcriptome profiling of patterns of RNA expression is a powerful approach to identify networks of genes that play a role in disease. To date, most mRNA profiling of tissues has been accomplished using microarrays, but next-generation sequencing can offer a richer and more comprehensive picture. METHODOLOGY/PRINCIPAL FINDINGS: ECO is a rare multi-system developmental disorder caused by a homozygous mutation in ICK encoding intestinal cell kinase. We performed gene expression profiling using both cDNA microarrays and next-generation mRNA sequencing (mRNA-seq) of skin fibroblasts from ECO-affected subjects. We then validated a subset of differentially expressed transcripts identified by each method using quantitative reverse transcription-polymerase chain reaction (qRT-PCR). Finally, we used gene ontology (GO) to identify critical pathways and processes that were abnormal according to each technical platform. Methodologically, mRNA-seq identifies a much larger number of differentially expressed genes with much better correlation to qRT-PCR results than the microarray (r² = 0.794 and 0.137, respectively). Biologically, the cDNA microarray identified functional pathways focused on anatomical structure and development, while the mRNA-seq platform identified a higher proportion of genes involved in cell division and DNA replication pathways. CONCLUSIONS/SIGNIFICANCE: Transcriptome profiling with mRNA-seq had greater sensitivity, range and accuracy than the microarray. The two platforms generated different but complementary hypotheses for further evaluation.

  4. Evaluation of gene importance in microarray data based upon probability of selection

    Directory of Open Access Journals (Sweden)

    Fu Li M

    2005-03-01

    Full Text Available Abstract Background Microarray devices permit a genome-scale evaluation of gene function. This technology has catalyzed biomedical research and development in recent years. As many important diseases can be traced down to the gene level, a long-standing research problem is to identify specific gene expression patterns linking to metabolic characteristics that contribute to disease development and progression. The microarray approach offers an expedited solution to this problem. However, it has posed a challenging issue to recognize disease-related gene expression patterns embedded in the microarray data. In selecting a small set of biologically significant genes for classifier design, the high data dimensionality inherent in this problem creates a substantial amount of uncertainty. Results Here we present a model for probability analysis of selected genes in order to determine their importance. Our contribution is that we show how to derive the P value of each selected gene in multiple gene selection trials based on different combinations of data samples, and how to conduct a reliability analysis accordingly. The importance of a gene is indicated by its associated P value, in that a smaller value implies higher information content from an information-theoretic standpoint. On microarray data concerning the subtype classification of small round blue cell tumors, we demonstrate that the method is capable of finding the smallest set of genes (19 genes) with optimal classification performance, compared with results reported in the literature. Conclusion In classifier design based on microarray data, the probability value derived from gene selection based on multiple combinations of data samples enables an effective mechanism for reducing the tendency of fitting local data particularities.
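
    One way to make the selection-probability idea concrete is a null model in which each trial selects genes uniformly at random, so the P value of a gene chosen in k of n trials is a binomial tail probability. This is an illustrative model of my own, not the paper's exact derivation, and all numbers below are invented:

```python
from math import comb

def selection_p_value(k, n_trials, n_selected, n_genes):
    """P value for a gene chosen in k of n_trials selection runs, under a
    null where each run picks n_selected of n_genes genes at random
    (an illustrative binomial model, not the paper's derivation)."""
    p = n_selected / n_genes
    return sum(comb(n_trials, j) * p**j * (1 - p)**(n_trials - j)
               for j in range(k, n_trials + 1))

# hypothetical example: a gene selected in 8 of 10 trials that each pick
# 20 genes out of 2000
pv = selection_p_value(8, 10, 20, 2000)
print(pv)  # vanishingly small: repeated selection is very unlikely by chance
```

    A small P value under such a null captures the intuition in the abstract: a gene selected again and again across resampled data combinations carries high information content.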

  5. Identification of listeria species isolated in Tunisia by Microarray based assay : results of a preliminary study

    International Nuclear Information System (INIS)

    Hmaied, Fatma; Helel, Salma; Barkallah, Insaf; Leberre, V.; Francois, J.M.; Kechrid, A.

    2008-01-01

    Microarray-based assay is a new molecular approach for genetic screening and identification of microorganisms. We have developed a rapid microarray-based assay for the reliable detection and discrimination of Listeria spp. in food and clinical isolates from Tunisia. The method used in the present study is based on PCR amplification of a virulence factor gene (the iap gene). The PCR mixture contained cyanine Cy5-labeled dCTP, so the PCR products were fluorescently labeled. The presence of multiple species-specific sequences within the iap gene enabled us to design different oligoprobes per species. The species-specific sequences of the iap gene used in this study were obtained from GenBank and then aligned for phylogenetic analysis in order to identify and retrieve the sequences of homologues of the amplified iap gene. Twenty probes were used for the detection and identification of 22 food and clinical isolates of Listeria spp. (L. monocytogenes, L. ivanovii, L. welshimeri, L. seeligeri, and L. grayi). Each bacterial gene was identified by hybridization to oligoprobes specific for each Listeria species and immobilized on a glass surface. The microarray analysis showed that 5 clinical isolates and 2 food isolates were identified as Listeria monocytogenes. Of the remaining 15 food isolates, 13 were identified as Listeria innocua and 2 could not be identified by the microarray-based assay. Further phylogenetic and molecular analyses are required to design more species-specific probes for the identification of Listeria spp. The microarray-based assay is a simple and rapid method for Listeria species discrimination.

  6. A cell spot microarray method for production of high density siRNA transfection microarrays

    Directory of Open Access Journals (Sweden)

    Mpindi John-Patrick

    2011-03-01

    Full Text Available Abstract Background High-throughput RNAi screening is widely applied in biological research, but remains expensive and infrastructure-intensive, and conversion of many assays to HTS applications in microplate format is not feasible. Results Here, we describe the optimization of a miniaturized cell spot microarray (CSMA) method, which facilitates utilization of the transfection microarray technique for disparate RNAi analyses. To promote rapid adaptation of the method, the concept has been tested with a panel of 92 adherent cell types, including primary human cells. We demonstrate the method in the systematic screening of 492 GPCR coding genes for impact on growth and survival of cultured human prostate cancer cells. Conclusions The CSMA method facilitates reproducible preparation of highly parallel cell microarrays for large-scale gene knockdown analyses. This will be critical for expanding cell-based functional genetic screens to include more RNAi constructs, and for allowing combinatorial RNAi analyses, multi-parametric phenotypic readouts, or comparative analysis of many different cell types.

  7. Forms of Approximate Radiation Transport

    CERN Document Server

    Brunner, G

    2002-01-01

    Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.

  8. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  9. Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model

    Directory of Open Access Journals (Sweden)

    Ge-Jin Chu

    2014-01-01

    Full Text Available Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.
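
    A quick numerical check of why the Lq (q = 1/2) penalty that harmonic regularization approximates is harder to optimize than the Lasso penalty: it violates convexity (a stdlib-only sketch; the coefficient values are arbitrary illustrations, not from the paper):

```python
# penalty functions evaluated on a coefficient vector beta
def lq_penalty(beta, lam, q=0.5):
    return lam * sum(abs(b) ** q for b in beta)

def l1_penalty(beta, lam):
    return lam * sum(abs(b) for b in beta)

# midpoint test on a single coefficient: for 0 < q < 1 the penalty at the
# midpoint of 0 and 1 lies strictly ABOVE the chord, so |b|^q is nonconvex
mid = lq_penalty([0.5], 1.0)                                     # 0.5**0.5 ~ 0.707
chord = 0.5 * (lq_penalty([0.0], 1.0) + lq_penalty([1.0], 1.0))  # = 0.5
assert mid > chord
# the L1 (Lasso) penalty, by contrast, is exactly linear on [0, 1]
assert abs(l1_penalty([0.5], 1.0) - 0.5) < 1e-12
```

    The nonconvexity is what makes direct optimization difficult and motivates a surrogate such as the harmonic regularizer.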

  10. SCK-CEN Genomic Platform: the microarray technology

    International Nuclear Information System (INIS)

    Benotmane, R.

    2006-01-01

    The human body contains approximately 10^14 cells, each one containing a nucleus. The nucleus contains 2x23 chromosomes, or two complete sets of the human genome, one set coming from the mother and the other from the father. In principle each set includes 30,000-40,000 genes. If the genome were a book, it would have twenty-three chapters, called chromosomes, each chapter with several thousand stories, called genes. Each story is made up of paragraphs, called exons and introns; each paragraph of three-letter words, called codons; and each word is written with letters called bases (A, G, C, T). But the whole is written as a single very long sentence, which is the DNA molecule, or deoxyribonucleic acid. The usual state of DNA is two complementary strands intertwined forming a double helix. In the cell, DNA is duplicated during each cell division to ensure the transmission of the genome to the daughter cells. For expression, the DNA is transcribed to messenger RNA. The RNA is edited and finally translated to a protein, with each three bases coding for one amino acid. When the whole message is translated, the chain of amino acids folds itself up into a distinctive shape that depends on its sequence. Proteins are the effectors of the genes, and are responsible for all metabolic, hormonal and enzymatic reactions in the cells. The amount of expressed RNA determines the amount of protein to be produced and subsequently the desired effect (strong or weak) in the cell. The microarray technology aims at quantifying the amount of RNA present in the cell from each expressed gene, and at evaluating the changes of these amounts after exposure of the cell to toxic chemicals, ionising radiation or other stress components. The global picture of expressed genes helps to understand the affected genetic pathways in the cell at the molecular level. The microarray technology is used in the Radiobiology and Microbiology topics to study the effect of ionising radiation on human cells and mouse tissue, as well as the

  11. Detecting variants with Metabolic Design, a new software tool to design probes for explorative functional DNA microarray development

    Directory of Open Access Journals (Sweden)

    Gravelat Fabrice

    2010-09-01

    Full Text Available Abstract Background Microorganisms display vast diversity, and each one has its own set of genes, cell components and metabolic reactions. To assess their huge unexploited metabolic potential in different ecosystems, we need high-throughput tools, such as functional microarrays, that allow the simultaneous analysis of thousands of genes. However, most classical functional microarrays use specific probes that monitor only known sequences, and so fail to cover the full microbial gene diversity present in complex environments. We have thus developed an algorithm, implemented in the user-friendly program Metabolic Design, to design efficient explorative probes. Results First we validated our approach by studying eight enzymes involved in the degradation of polycyclic aromatic hydrocarbons from the model strain Sphingomonas paucimobilis sp. EPA505 using a designed microarray of 8,048 probes. As expected, microarray assays identified the targeted set of genes induced during biodegradation kinetics experiments with various pollutants. We then confirmed the identity of these new genes by sequencing, and corroborated the quantitative discrimination of our microarray by quantitative real-time PCR. Finally, we assessed the metabolic capacities of microbial communities in soil contaminated with aromatic hydrocarbons. Results show that our probe design (sensitivity and explorative quality) can be used to study a complex environment efficiently. Conclusions We successfully use our microarray to detect gene expression encoding enzymes involved in polycyclic aromatic hydrocarbon degradation for the model strain. In addition, DNA microarray experiments performed on soil polluted by organic pollutants without prior sequence assumptions demonstrate high specificity and sensitivity for gene detection. Metabolic Design is thus a powerful, efficient tool that can be used to design explorative probes and monitor metabolic pathways in complex environments.

  12. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base

  13. International Conference Approximation Theory XIV

    CERN Document Server

    Schumaker, Larry

    2014-01-01

    This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contain surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.

  14. Statistical Methods for Comparative Phenomics Using High-Throughput Phenotype Microarrays

    KAUST Repository

    Sturino, Joseph

    2010-01-24

    We propose statistical methods for comparing phenomics data generated by the Biolog Phenotype Microarray (PM) platform for high-throughput phenotyping. Instead of the routinely used visual inspection of data with no sound inferential basis, we develop two approaches. The first approach is based on quantifying the distance between mean or median curves from two treatments and then applying a permutation test; we also consider a permutation test applied to areas under mean curves. The second approach employs functional principal component analysis. Properties of the proposed methods are investigated on both simulated data and data sets from the PM platform.
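
    The first, distance-based approach can be sketched as follows (a simplified stdlib-only illustration assuming growth curves sampled on a common time grid; the statistic is the difference in areas under the group mean curves, one of the variants mentioned in the abstract):

```python
import random

def auc(curve):
    """Trapezoidal area under a curve sampled at unit time steps."""
    return sum((a + b) / 2 for a, b in zip(curve, curve[1:]))

def perm_test(group_a, group_b, n_perm=2000, seed=1):
    """Permutation P value for the difference in area under the mean curves
    of two treatment groups of Phenotype Microarray kinetic curves."""
    def stat(a, b):
        mean = lambda g: [sum(c[i] for c in g) / len(g) for i in range(len(g[0]))]
        return abs(auc(mean(a)) - auc(mean(b)))
    observed = stat(group_a, group_b)
    pooled = list(group_a) + list(group_b)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel curves at random
        if stat(pooled[:len(group_a)], pooled[len(group_a):]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# identical treatments give no evidence; separated treatments give a small P
assert perm_test([[0, 1, 2, 3]] * 4, [[0, 1, 2, 3]] * 4) == 1.0
assert perm_test([[0, 1, 2, 3]] * 4, [[0, 3, 6, 9]] * 4) < 0.1
```

    Replacing `abs(auc(...))` with another curve distance (e.g. pointwise L2 between mean or median curves) recovers the other statistics the abstract mentions.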

  15. New insights about host response to smallpox using microarray data

    Directory of Open Access Journals (Sweden)

    Dias Rodrigo A

    2007-08-01

    Full Text Available Abstract Background Smallpox is a lethal disease that was endemic in many parts of the world until eradicated by massive immunization. Due to its lethality, there are serious concerns about its use as a bioweapon. Here we analyze publicly available microarray data to further understand survival of smallpox-infected macaques, using systems biology approaches. Our goal is to improve the knowledge about the progression of this disease. Results We used KEGG pathway annotations to define groups of genes (or modules), and subsequently compared them to macaque survival times. This technique provided additional insights about the host response to this disease, such as increased expression of the cytokines and ECM receptors in the individuals with higher survival times. These results could indicate that these gene groups could influence an effective response from the host to smallpox. Conclusion Macaques with higher survival times clearly express some specific pathways previously unidentified using regular gene-by-gene approaches. Our work also shows how third-party analysis of public datasets can be important to support new hypotheses to relevant biological problems.

  16. Microarray background correction: maximum likelihood estimation for the normal-exponential convolution

    DEFF Research Database (Denmark)

    Silver, Jeremy D; Ritchie, Matthew E; Smyth, Gordon K

    2009-01-01

    exponentially distributed, representing background noise and signal, respectively. Using a saddle-point approximation, Ritchie and others (2007) found normexp to be the best background correction method for 2-color microarray data. This article develops the normexp method further by improving the estimation...... is developed for exact maximum likelihood estimation (MLE) using high-quality optimization software and using the saddle-point estimates as starting values. "MLE" is shown to outperform heuristic estimators proposed by other authors, both in terms of estimation accuracy and in terms of performance on real data...
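
    The normal-exponential convolution has a closed-form density, so the exact likelihood the authors maximize can be sketched directly (an illustrative one-parameter grid search with the background parameters held fixed, not the optimizer used in the article; all parameter values are invented):

```python
import math, random

def normexp_loglik(x, mu, sigma, alpha):
    """Log-likelihood of observed intensities x under the normexp model:
    x = background N(mu, sigma^2) + signal Exp(mean alpha).
    The convolution density has a closed form involving the normal CDF."""
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    ll = 0.0
    for xi in x:
        ll += (-math.log(alpha) + (mu - xi) / alpha + sigma**2 / (2 * alpha**2)
               + math.log(max(Phi((xi - mu) / sigma - sigma / alpha), 1e-300)))
    return ll

rng = random.Random(0)
mu, sigma, alpha = 100.0, 5.0, 50.0          # invented "true" parameters
x = [rng.gauss(mu, sigma) + rng.expovariate(1 / alpha) for _ in range(2000)]

# recover the signal mean alpha by maximizing the exact likelihood on a grid
grid = [a / 2 for a in range(40, 200)]       # candidates 20.0 .. 99.5
alpha_hat = max(grid, key=lambda a: normexp_loglik(x, mu, sigma, a))
assert abs(alpha_hat - alpha) < 5            # lands near the true value
```

    The article's method additionally estimates mu and sigma jointly with alpha and uses saddle-point estimates as starting values; the grid search above only illustrates the exact-likelihood objective.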

  17. Seismic wave extrapolation using lowrank symbol approximation

    KAUST Repository

    Fomel, Sergey

    2012-04-30

    We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction involves Fourier transforms in space combined with a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.

  18. Semiclassical approximation in Batalin-Vilkovisky formalism

    International Nuclear Information System (INIS)

    Schwarz, A.

    1993-01-01

    The geometry of supermanifolds provided with a Q-structure (i.e. with an odd vector field Q satisfying {Q, Q}=0), a P-structure (odd symplectic structure) and an S-structure (volume element) or with various combinations of these structures is studied. The results are applied to the analysis of the Batalin-Vilkovisky approach to the quantization of gauge theories. In particular the semiclassical approximation in this approach is expressed in terms of Reidemeister torsion. (orig.)

  19. Smooth function approximation using neural networks.

    Science.gov (United States)

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.

  20. Strategies for comparing gene expression profiles from different microarray platforms: application to a case-control experiment.

    Science.gov (United States)

    Severgnini, Marco; Bicciato, Silvio; Mangano, Eleonora; Scarlatti, Francesca; Mezzelani, Alessandra; Mattioli, Michela; Ghidoni, Riccardo; Peano, Clelia; Bonnal, Raoul; Viti, Federica; Milanesi, Luciano; De Bellis, Gianluca; Battaglia, Cristina

    2006-06-01

    Meta-analysis of microarray data is increasingly important, considering both the availability of multiple platforms using disparate technologies and the accumulation in public repositories of data sets from different laboratories. We addressed the issue of comparing gene expression profiles from two microarray platforms by devising a standardized investigative strategy. We tested this procedure by studying MDA-MB-231 cells, which undergo apoptosis on treatment with resveratrol. Gene expression profiles were obtained using high-density, short-oligonucleotide, single-color microarray platforms: GeneChip (Affymetrix) and CodeLink (Amersham). Interplatform analyses were carried out on 8414 common transcripts represented on both platforms, as identified by LocusLink ID, representing 70.8% and 88.6% of annotated GeneChip and CodeLink features, respectively. We identified 105 differentially expressed genes (DEGs) on CodeLink and 42 DEGs on GeneChip. Among them, only 9 DEGs were commonly identified by both platforms. Multiple analyses (BLAST alignment of probes with target sequences, gene ontology, literature mining, and quantitative real-time PCR) permitted us to investigate the factors contributing to the generation of platform-dependent results in single-color microarray experiments. An effective approach to cross-platform comparison involves microarrays of similar technologies, samples prepared by identical methods, and a standardized battery of bioinformatic and statistical analyses.
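
    The core of the interplatform comparison reduces to mapping each platform's features to a shared identifier and intersecting the DEG lists; a minimal sketch (the LocusLink IDs below are invented, not taken from the study):

```python
# DEG lists keyed by a common identifier (LocusLink ID); values invented
codelink_degs = {"2099", "596", "7157", "4609", "5241", "1026"}
genechip_degs = {"7157", "4609", "1026", "3091"}

common = codelink_degs & genechip_degs         # DEGs called by both platforms
codelink_only = codelink_degs - genechip_degs  # platform-dependent calls
jaccard = len(common) / len(codelink_degs | genechip_degs)

assert common == {"7157", "4609", "1026"}
assert abs(jaccard - 3 / 7) < 1e-12            # low overlap, as in the study
```

    In the study itself the overlap was similarly small (9 common DEGs out of 105 and 42), which is what motivated the downstream probe-alignment and ontology analyses.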

  1. The application of DNA microarrays in gene expression analysis.

    Science.gov (United States)

    van Hal, N L; Vorst, O; van Houwelingen, A M; Kok, E J; Peijnenburg, A; Aharoni, A; van Tunen, A J; Keijer, J

    2000-03-31

    DNA microarray technology is a new and powerful technology that will substantially increase the speed of molecular biological research. This paper gives a survey of DNA microarray technology and its use in gene expression studies. The technical aspects and their potential improvements are discussed, comprising array manufacturing and design, array hybridisation, scanning, and data handling. Furthermore, it is discussed how DNA microarrays can be applied in the fields of food safety, functionality and health, and of gene discovery and pathway engineering in plants.

  2. RDFBuilder: a tool to automatically build RDF-based interfaces for MAGE-OM microarray data sources.

    Science.gov (United States)

    Anguita, Alberto; Martin, Luis; Garcia-Remesal, Miguel; Maojo, Victor

    2013-07-01

    This paper presents RDFBuilder, a tool that enables RDF-based access to MAGE-ML-compliant microarray databases. We have developed a system that automatically transforms the MAGE-OM model and microarray data stored in the ArrayExpress database into RDF format. Additionally, the system automatically enables a SPARQL endpoint. This allows users to execute SPARQL queries for retrieving microarray data, either from specific experiments or from more than one experiment at a time. Our system optimizes response times by caching and reusing information from previous queries. In this paper, we describe our methods for achieving this transformation. We show that our approach is complementary to other existing initiatives, such as Bio2RDF, for accessing and retrieving data from the ArrayExpress database. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Screening for viral extraneous agents in live-attenuated avian vaccines by using a microbial microarray and sequencing

    DEFF Research Database (Denmark)

    Olesen, Majken Lindholm; Jørgensen, Lotte Leick; Blixenkrone-Møller, Merete

    2018-01-01

    The absence of extraneous agents (EA) in the raw material used for production and in finished products is one of the principal safety elements related to all medicinal products of biological origin, such as live-attenuated vaccines. The aim of this study was to investigate the applicability...... of the Lawrence Livermore Microbial detection array version 2 (LLMDAv2) combined with whole genome amplification and sequencing for screening for viral EAs in live-attenuated vaccines and specific pathogen-free (SPF) eggs. We detected positive microarray signals for avian endogenous retrovirus EAV-HP and several...... viruses belonging to the Alpharetrovirus genus in all analyzed vaccines and SPF eggs. We used a microarray probe mapping approach to evaluate the presence of intact retroviral genomes, which in addition to PCR analysis revealed that several of the positive microarray signals were most likely due to cross

  4. An MCMC Algorithm for Target Estimation in Real-Time DNA Microarrays

    Directory of Open Access Journals (Sweden)

    Vikalo Haris

    2010-01-01

    Full Text Available DNA microarrays detect the presence and quantify the amounts of nucleic acid molecules of interest. They rely on a chemical attraction between the target molecules and their Watson-Crick complements, which serve as biological sensing elements (probes). The attraction between these biomolecules leads to binding, in which probes capture target analytes. Recently developed real-time DNA microarrays are capable of observing kinetics of the binding process. They collect noisy measurements of the amount of captured molecules at discrete points in time. Molecular binding is a random process which, in this paper, is modeled by a stochastic differential equation. The target analyte quantification is posed as a parameter estimation problem, and solved using a Markov Chain Monte Carlo technique. In simulation studies where we test the robustness with respect to the measurement noise, the proposed technique significantly outperforms previously proposed methods. Moreover, the proposed approach is tested and verified on experimental data.
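
    A stripped-down version of the estimation idea: a random-walk Metropolis sampler over the target amount, using deterministic saturating binding kinetics in place of the paper's stochastic differential equation (all numbers are invented for illustration):

```python
import math, random

rng = random.Random(42)
model = lambda x, k, t: x * (1 - math.exp(-k * t))  # captured amount at time t
x_true, k, noise_sd = 8.0, 0.3, 0.2                 # invented "true" values
times = list(range(1, 11))
data = [model(x_true, k, t) + rng.gauss(0, noise_sd) for t in times]

def log_post(x):
    """Log-posterior for the target amount x: flat prior on x > 0,
    Gaussian measurement noise around the kinetic curve."""
    if x <= 0:
        return -math.inf
    return -sum((y - model(x, k, t)) ** 2
                for y, t in zip(data, times)) / (2 * noise_sd**2)

# random-walk Metropolis-Hastings over x
x, samples = 1.0, []
for step in range(5000):
    prop = x + rng.gauss(0, 0.3)
    if math.log(rng.random() + 1e-300) < log_post(prop) - log_post(x):
        x = prop                     # accept the proposal
    if step >= 1000:                 # discard burn-in
        samples.append(x)
x_hat = sum(samples) / len(samples)  # posterior-mean estimate of the target
assert abs(x_hat - x_true) < 0.5
```

    The paper's sampler additionally handles the state noise of the binding SDE; the sketch only shows the Metropolis machinery on the quantification parameter.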

  5. ESTs, cDNA microarrays, and gene expression profiling: tools for dissecting plant physiology and development.

    Science.gov (United States)

    Alba, Rob; Fei, Zhangjun; Payton, Paxton; Liu, Yang; Moore, Shanna L; Debbie, Paul; Cohn, Jonathan; D'Ascenzo, Mark; Gordon, Jeffrey S; Rose, Jocelyn K C; Martin, Gregory; Tanksley, Steven D; Bouzayen, Mondher; Jahn, Molly M; Giovannoni, Jim

    2004-09-01

    Gene expression profiling holds tremendous promise for dissecting the regulatory mechanisms and transcriptional networks that underlie biological processes. Here we provide details of approaches used by others and ourselves for gene expression profiling in plants with emphasis on cDNA microarrays and discussion of both experimental design and downstream analysis. We focus on methods and techniques emphasizing fabrication of cDNA microarrays, fluorescent labeling, cDNA hybridization, experimental design, and data processing. We include specific examples that demonstrate how this technology can be used to further our understanding of plant physiology and development (specifically fruit development and ripening) and for comparative genomics by comparing transcriptome activity in tomato and pepper fruit.

  6. A pilot study of transcription unit analysis in rice using oligonucleotide tiling-path microarray

    DEFF Research Database (Denmark)

    Stolc, Viktor; Li, Lei; Wang, Xiangfeng

    2005-01-01

    As the international efforts to sequence the rice genome are completed, an immediate challenge and opportunity is to comprehensively and accurately define all transcription units in the rice genome. Here we describe a strategy of using high-density oligonucleotide tiling-path microarrays to map...... transcription of the japonica rice genome. In a pilot experiment to test this approach, one array representing the reverse strand of the last 11.2 Mb sequence of chromosome 10 was analyzed in detail based on a mathematical model developed in this study. Analysis of the array data detected 77% of the reference...... gene models in a mixture of four RNA populations. Moreover, significant transcriptional activities were found in many of the previously annotated intergenic regions. These preliminary results demonstrate the utility of genome tiling microarrays in evaluating annotated rice gene models...

  7. Tissue Microarray Analysis Applied to Bone Diagenesis.

    Science.gov (United States)

    Mello, Rafael Barrios; Silva, Maria Regina Regis; Alves, Maria Teresa Seixas; Evison, Martin Paul; Guimarães, Marco Aurelio; Francisco, Rafaella Arrabaca; Astolphi, Rafael Dias; Iwamura, Edna Sadayo Miazato

    2017-01-04

    Taphonomic processes affecting bone post mortem are important in forensic, archaeological and palaeontological investigations. In this study, the application of tissue microarray (TMA) analysis to a sample of femoral bone specimens from 20 exhumed individuals of known period of burial and age at death is described. TMA allows multiplexing of subsamples, permitting standardized comparative analysis of adjacent sections in 3-D and of representative cross-sections of a large number of specimens. Standard hematoxylin and eosin, periodic acid-Schiff and silver methenamine, and picrosirius red staining, and CD31 and CD34 immunohistochemistry were applied to TMA sections. Osteocyte and osteocyte lacuna counts, percent bone matrix loss, and fungal spheroid element counts could be measured and collagen fibre bundles observed in all specimens. Decalcification with 7% nitric acid proceeded more rapidly than with 0.5 M EDTA and may offer better preservation of histological and cellular structure. No endothelial cells could be detected using CD31 and CD34 immunohistochemistry. Correlation between osteocytes per lacuna and age at death may reflect reported age-related responses to microdamage. Methodological limitations and caveats, and results of the TMA analysis of post mortem diagenesis in bone are discussed, and implications for DNA survival and recovery considered.

  8. Transcriptome analysis of zebrafish embryogenesis using microarrays.

    Directory of Open Access Journals (Sweden)

    Sinnakaruppan Mathavan

    2005-08-01

    Full Text Available Zebrafish (Danio rerio) is a well-recognized model for the study of vertebrate developmental genetics, yet at the same time little is known about the transcriptional events that underlie zebrafish embryogenesis. Here we have employed microarray analysis to study the temporal activity of developmentally regulated genes during zebrafish embryogenesis. Transcriptome analysis at 12 different embryonic time points covering five different developmental stages (maternal, blastula, gastrula, segmentation, and pharyngula) revealed a highly dynamic transcriptional profile. Hierarchical clustering, stage-specific clustering, and algorithms to detect onset and peak of gene expression revealed clearly demarcated transcript clusters with maximum gene activity at distinct developmental stages as well as co-regulated expression of gene groups involved in dedicated functions such as organogenesis. Our study also revealed a previously unidentified cohort of genes that are transcribed prior to the mid-blastula transition, a time point earlier than when the zygotic genome was traditionally thought to become active. Here we provide, for the first time to our knowledge, a comprehensive list of developmentally regulated zebrafish genes and their expression profiles during embryogenesis, including novel information on the temporal expression of several thousand previously uncharacterized genes. The expression data generated from this study are accessible to all interested scientists from our institute resource database (http://giscompute.gis.a-star.edu.sg/~govind/zebrafish/data_download.html).
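
    The onset and peak calls on a temporal expression profile can be reduced to a simple per-gene rule (a hypothetical thresholding sketch, not the authors' actual algorithm; the profile values are invented):

```python
def onset_and_peak(profile, threshold):
    """Onset = index of the first time point whose expression exceeds
    `threshold` (None if never); peak = index of maximal expression."""
    onset = next((i for i, v in enumerate(profile) if v > threshold), None)
    peak = max(range(len(profile)), key=lambda i: profile[i])
    return onset, peak

# expression of one hypothetical gene across 12 embryonic time points
profile = [0.1, 0.2, 0.2, 1.4, 3.8, 6.0, 5.1, 3.0, 1.2, 0.8, 0.5, 0.4]
assert onset_and_peak(profile, threshold=1.0) == (3, 5)
```

    Grouping genes by their (onset, peak) pairs is one way to recover the stage-specific clusters the abstract describes.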

  9. Limitations of shallow nets approximation.

    Science.gov (United States)

    Lin, Shao-Bo

    2017-10-01

    In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets can be realized for all functions in balls of the reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. A protein microarray for the rapid screening of patients suspected of infection with various food-borne helminthiases.

    Directory of Open Access Journals (Sweden)

    Jia-Xu Chen

    Full Text Available BACKGROUND: Food-borne helminthiases (FBHs) have become increasingly important due to their frequent occurrence and worldwide distribution. There is increasing demand for more sensitive, high-throughput techniques for the simultaneous detection of multiple parasitic diseases, because FBHs with similar symptoms are difficult to diagnose differentially by conventional approaches, including serological ones. METHODOLOGY/PRINCIPAL FINDINGS: In this study, antigens obtained from 5 parasite species, namely Cysticercus cellulosae, Angiostrongylus cantonensis, Paragonimus westermani, Trichinella spiralis and Spirometra sp., were semi-purified after immunoblotting. Sera from 365 human cases of helminthiasis and 80 healthy individuals were assayed with the semi-purified antigens by both a protein microarray and the enzyme-linked immunosorbent assay (ELISA). The sensitivity, specificity and simplicity of each test for the end-user were evaluated. The specificity of the tests ranged from 97.0% (95% confidence interval (CI): 95.3-98.7%) to 100.0% (95% CI: 100.0%) in the protein microarray and from 97.7% (95% CI: 96.2-99.2%) to 100.0% (95% CI: 100.0%) in ELISA. The sensitivity varied from 85.7% (95% CI: 75.1-96.3%) to 92.1% (95% CI: 83.5-100.0%) in the protein microarray, while the corresponding values for ELISA were 82.0% (95% CI: 71.4-92.6%) to 92.1% (95% CI: 83.5-100.0%). Furthermore, the Youden index spanned from 0.83 to 0.92 in the protein microarray and from 0.80 to 0.92 in ELISA. For each parasite, the Youden index from the protein microarray was often slightly higher than the one from ELISA, even though the same antigen was used. CONCLUSIONS/SIGNIFICANCE: The protein microarray platform is a convenient, versatile, high-throughput method that can easily be adapted to massive FBH screening.
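
    The reported measures are related by Youden index = sensitivity + specificity − 1; a short sketch with invented confusion counts (not the study's data), using a simple normal-approximation confidence interval:

```python
import math

def diagnostic_summary(tp, fn, tn, fp, z=1.96):
    """Sensitivity, specificity (each with a normal-approximation 95% CI)
    and the Youden index for one antigen's 2x2 confusion counts."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)
    sens, sens_lo, sens_hi = prop_ci(tp, tp + fn)
    spec, spec_lo, spec_hi = prop_ci(tn, tn + fp)
    return {"sensitivity": (sens, sens_lo, sens_hi),
            "specificity": (spec, spec_lo, spec_hi),
            "youden": sens + spec - 1}

s = diagnostic_summary(tp=56, fn=7, tn=78, fp=2)  # hypothetical counts
assert 0.80 <= s["youden"] <= 0.92                # range reported in the abstract
```

    The Youden index summarizes a test in one number that rewards both detecting cases and excluding non-cases, which is why the abstract uses it to rank the microarray against ELISA per antigen.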

  11. Methods for interpreting lists of affected genes obtained in a DNA microarray experiment

    Directory of Open Access Journals (Sweden)

    Hedegaard Jakob

    2009-07-01

    Full Text Available Abstract Background The aim of this paper was to describe and compare the methods used and the results obtained by the participants in a joint EADGENE (European Animal Disease Genomic Network of Excellence) and SABRE (Cutting Edge Genomics for Sustainable Animal Breeding) workshop focusing on post-analysis of microarray data. The participating groups were provided with identical lists of microarray probes, including test statistics for three different contrasts and the normalised log-ratios for each array, to be used as the starting point for interpreting the affected probes. The data originated from a microarray experiment conducted to study the host reactions in broilers occurring shortly after a secondary challenge with either a homologous or heterologous species of Eimeria. Results Several conceptually different analytical approaches, using both commercial and publicly available software, were applied by the participating groups. The following tools were used: Ingenuity Pathway Analysis, MAPPFinder, LIMMA, GOstats, GOEAST, GOTM, Globaltest, TopGO, ArrayUnlock, Pathway Studio, GIST and AnnotationDbi. The main focus of the approaches was to use the relation between probes/genes and their gene ontology terms and pathways to interpret the affected probes/genes. The lack of a well-annotated chicken genome did, however, limit the possibilities to fully explore the tools. The main results from these analyses showed that the biological interpretation is highly dependent on the statistical method used, but that some common biological conclusions could be reached. Conclusion It is highly recommended to test different analytical methods on the same data set and to compare the results in order to obtain a reliable biological interpretation of the affected genes in a DNA microarray experiment.

  12. Performance analysis of clustering techniques over microarray data: A case study

    Science.gov (United States)

    Dash, Rasmita; Misra, Bijan Bihari

    2018-03-01

    Handling big data is one of the major issues in the field of statistical data analysis. In such investigations, cluster analysis plays a vital role in dealing with large-scale data. There are many clustering techniques, each with a different cluster-analysis approach, but which approach suits a particular dataset is difficult to predict. To deal with this problem, a grading approach is introduced over many clustering techniques to identify a stable technique. Because the grading depends on the characteristics of the dataset as well as on the validity indices, a two-stage grading approach is implemented. In this study the grading approach is applied to five clustering techniques: hybrid swarm based clustering (HSC), k-means, partitioning around medoids (PAM), vector quantization (VQ) and agglomerative nesting (AGNES). The experiments are conducted on five microarray datasets with seven validity indices. The finding of the grading approach that a clustering technique is significant is also confirmed by the Nemenyi post-hoc hypothesis test.
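The core of such a grading approach is rank aggregation: rank each technique on every validity index, then average the ranks. A minimal sketch with hypothetical index scores (the names match the abstract, but the numbers are illustrative, not the paper's data; a full study would repeat this per dataset and follow with a Friedman test and Nemenyi post-hoc comparison):

```python
# Hypothetical validity-index scores (higher = better) for each technique
# on one dataset, three indices per technique.
scores = {
    "HSC":     [0.71, 0.68, 0.74],
    "k-means": [0.62, 0.60, 0.65],
    "PAM":     [0.66, 0.64, 0.69],
}

def average_ranks(scores):
    """Rank techniques per index (1 = best), then average across indices."""
    names = list(scores)
    n_indices = len(next(iter(scores.values())))
    totals = {name: 0 for name in names}
    for i in range(n_indices):
        ordered = sorted(names, key=lambda n: scores[n][i], reverse=True)
        for rank, name in enumerate(ordered, start=1):
            totals[name] += rank
    return {name: totals[name] / n_indices for name in names}

ranks = average_ranks(scores)
best = min(ranks, key=ranks.get)
print(ranks, best)
```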

  13. High throughput screening of starch structures using carbohydrate microarrays

    DEFF Research Database (Denmark)

    Tanackovic, Vanja; Rydahl, Maja Gro; Pedersen, Henriette Lodberg

    2016-01-01

    In this study we introduce the starch-recognising carbohydrate binding module family 20 (CBM20) from Aspergillus niger for screening biological variations in starch molecular structure using high throughput carbohydrate microarray technology. Defined linear, branched and phosphorylated...

  14. Microarray of DNA probes on carboxylate functional beads surface

    Institute of Scientific and Technical Information of China (English)

    黄承志; 李原芳; 黄新华; 范美坤

    2000-01-01

    The microarray of DNA probes with 5′-NH2 and 5′-Tex/3′-NH2 modified termini on 10 μm carboxylate functional beads in the presence of 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC) is characterized in the present paper. It was found that the microarray capacity of DNA probes on the bead surface depends on the pH of the aqueous solution, the concentration of the DNA probe and the total surface area of the beads. Under optimal conditions, the minimum distance between 20-mer single-stranded DNA probes microarrayed on the bead surface is about 14 nm, while that between 20-mer double-stranded DNA probes is about 27 nm. If the probe length increases from 20-mer to 35-mer, the microarray density decreases correspondingly. Mechanism studies show that the binding mode of the DNA probes is nearly parallel to the bead surface.

  15. Microarray of DNA probes on carboxylate functional beads surface

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The microarray of DNA probes with 5′-NH2 and 5′-Tex/3′-NH2 modified termini on 10 μm carboxylate functional beads in the presence of 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC) is characterized in the present paper. It was found that the microarray capacity of DNA probes on the bead surface depends on the pH of the aqueous solution, the concentration of the DNA probe and the total surface area of the beads. Under optimal conditions, the minimum distance between 20-mer single-stranded DNA probes microarrayed on the bead surface is about 14 nm, while that between 20-mer double-stranded DNA probes is about 27 nm. If the probe length increases from 20-mer to 35-mer, the microarray density decreases correspondingly. Mechanism studies show that the binding mode of the DNA probes is nearly parallel to the bead surface.
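The reported minimum probe spacing implies a rough upper bound on probes per bead: divide the sphere's surface area by the footprint of one probe. A back-of-envelope geometric sketch (the circular-footprint packing model is our assumption for illustration, not a calculation from the paper):

```python
import math

def max_probes_per_bead(bead_diameter_nm: float, min_spacing_nm: float) -> int:
    """Upper bound on probe count: sphere surface area divided by a circular
    footprint of radius min_spacing/2 (ignores packing inefficiency)."""
    surface = 4 * math.pi * (bead_diameter_nm / 2) ** 2
    footprint = math.pi * (min_spacing_nm / 2) ** 2
    return int(surface / footprint)

# 10 um bead; 14 nm spacing for 20-mer ssDNA vs 27 nm for dsDNA probes.
print(max_probes_per_bead(10_000, 14))  # ssDNA: on the order of millions
print(max_probes_per_bead(10_000, 27))  # dsDNA: several-fold fewer
```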

  16. Rapid Diagnosis of Bacterial Meningitis Using a Microarray

    Directory of Open Access Journals (Sweden)

    Ren-Jy Ben

    2008-06-01

    Conclusion: The microarray method provides a more accurate and rapid diagnostic tool for bacterial meningitis compared to traditional culture methods. Clinical application of this new technique may reduce the potential risk of delay in treatment.

  17. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J.

    2009-01-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing

  18. Novel Protein Microarray Technology to Examine Men with Prostate Cancer

    National Research Council Canada - National Science Library

    Lilja, Hans

    2005-01-01

    The authors developed a novel macro and nanoporous silicon surface for protein microarrays to facilitate high-throughput biomarker discovery, and high-density protein-chip array analyses of complex biological samples...

  19. Spherical Approximation on Unit Sphere

    Directory of Open Access Journals (Sweden)

    Eman Samir Bhaya

    2018-01-01

    Full Text Available In this paper we introduce a Jackson-type theorem for functions in Lp spaces on the sphere and study the best approximation of functions in these spaces defined on the unit sphere. Our central problem is to describe the approximation behaviour of functions in these spaces by the modulus of smoothness.

  20. Universal Reference RNA as a standard for microarray experiments

    Directory of Open Access Journals (Sweden)

    Fero Michael

    2004-03-01

    Full Text Available Abstract Background Obtaining reliable and reproducible two-color microarray gene expression data is critically important for understanding the biological significance of perturbations made on a cellular system. Microarray design, RNA preparation and labeling, hybridization conditions, and data acquisition and analysis are variables that are difficult to control simultaneously. A useful tool for monitoring and controlling intra- and inter-experimental variation is Universal Reference RNA (URR), developed with the goal of providing hybridization signal at each microarray probe location (spot). Measuring signal at each spot as the ratio of experimental RNA to reference RNA targets, rather than relying on absolute signal intensity, decreases variability by normalizing signal output in any two-color hybridization experiment. Results Human, mouse and rat URR (UHRR, UMRR and URRR, respectively) were prepared from pools of RNA derived from individual cell lines representing different tissues. A variety of microarrays were used to determine the percentage of spots hybridizing with URR and producing signal above a user-defined threshold (microarray coverage). Microarray coverage was consistently greater than 80% for all arrays tested. We confirmed that individual cell lines contribute their own unique set of genes to URR, arguing that a pool of RNA from several cell lines is a better configuration for URR than a single cell line source. Microarray coverage comparing two separately prepared batches each of UHRR, UMRR and URRR was highly correlated (Pearson's correlation coefficients of 0.97). Conclusion Results of this study demonstrate that large quantities of pooled RNA from individual cell lines are reproducibly prepared and possess diverse gene representation. This type of reference provides a standard for reducing variation in microarray experiments and allows more reliable comparison of gene expression data within and between experiments and
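Two quantities in this abstract are easy to make concrete: the per-spot two-color log-ratio and "microarray coverage", the fraction of spots whose reference-channel signal clears a threshold. A minimal sketch with made-up intensities (the data and the threshold are hypothetical, chosen only to illustrate the definitions):

```python
import math

def log_ratios(sample, reference):
    """Per-spot log2 ratio of experimental to reference signal."""
    return [math.log2(s / r) for s, r in zip(sample, reference)]

def coverage(reference, threshold):
    """Fraction of spots where the reference RNA gives usable signal."""
    hits = sum(1 for r in reference if r > threshold)
    return hits / len(reference)

reference = [500, 1200, 80, 3000, 40, 900, 2200, 150]   # URR channel
sample    = [1000, 600, 90, 3000, 35, 1800, 1100, 300]  # experimental channel
print(coverage(reference, threshold=100))
print([round(x, 2) for x in log_ratios(sample, reference)])
```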

  1. Addressable droplet microarrays for single cell protein analysis.

    Science.gov (United States)

    Salehi-Reyhani, Ali; Burgin, Edward; Ces, Oscar; Willison, Keith R; Klug, David R

    2014-11-07

    Addressable droplet microarrays are potentially attractive as a way to achieve miniaturised, reduced volume, high sensitivity analyses without the need to fabricate microfluidic devices or small volume chambers. We report a practical method for producing oil-encapsulated addressable droplet microarrays which can be used for such analyses. To demonstrate their utility, we undertake a series of single cell analyses, to determine the variation in copy number of p53 proteins in cells of a human cancer cell line.

  2. Microarrays for Universal Detection and Identification of Phytoplasmas

    DEFF Research Database (Denmark)

    Nicolaisen, Mogens; Nyskjold, Henriette; Bertaccini, Assunta

    2013-01-01

    Detection and identification of phytoplasmas is a laborious process often involving nested PCR followed by restriction enzyme analysis and fine-resolution gel electrophoresis. To improve throughput, other methods are needed. Microarray technology offers a generic assay that can potentially detect...... and differentiate all types of phytoplasmas in one assay. The present protocol describes a microarray-based method for identification of phytoplasmas to 16Sr group level....

  3. Emerging use of gene expression microarrays in plant physiology.

    Science.gov (United States)

    Wullschleger, Stan D; Difazio, Stephen P

    2003-01-01

    Microarrays have become an important technology for the global analysis of gene expression in humans, animals, plants, and microbes. Implemented in the context of a well-designed experiment, cDNA and oligonucleotide arrays can provide high-throughput, simultaneous analysis of transcript abundance for hundreds, if not thousands, of genes. However, despite widespread acceptance, the use of microarrays as a tool to better understand processes of interest to the plant physiologist is still being explored. To help illustrate current uses of microarrays in the plant sciences, several case studies that we believe demonstrate the emerging application of gene expression arrays in plant physiology were selected from among the many posters and presentations at the 2003 Plant and Animal Genome XI Conference. Based on this survey, microarrays are being used to assess gene expression in plants exposed to the experimental manipulation of air temperature, soil water content and aluminium concentration in the root zone. Analysis often includes characterizing transcript profiles for multiple post-treatment sampling periods and categorizing genes with common patterns of response using hierarchical clustering techniques. In addition, microarrays are also providing insights into developmental changes in gene expression associated with fibre and root elongation in cotton and maize, respectively. Technical and analytical limitations of microarrays are discussed and projects attempting to advance areas of microarray design and data analysis are highlighted. Finally, although much work remains, we conclude that microarrays are a valuable tool for the plant physiologist interested in the characterization and identification of individual genes and gene families with potential application in the fields of agriculture, horticulture and forestry.

  4. Emerging Use of Gene Expression Microarrays in Plant Physiology

    Directory of Open Access Journals (Sweden)

    Stephen P. Difazio

    2006-04-01

    Full Text Available Microarrays have become an important technology for the global analysis of gene expression in humans, animals, plants, and microbes. Implemented in the context of a well-designed experiment, cDNA and oligonucleotide arrays can provide high-throughput, simultaneous analysis of transcript abundance for hundreds, if not thousands, of genes. However, despite widespread acceptance, the use of microarrays as a tool to better understand processes of interest to the plant physiologist is still being explored. To help illustrate current uses of microarrays in the plant sciences, several case studies that we believe demonstrate the emerging application of gene expression arrays in plant physiology were selected from among the many posters and presentations at the 2003 Plant and Animal Genome XI Conference. Based on this survey, microarrays are being used to assess gene expression in plants exposed to the experimental manipulation of air temperature, soil water content and aluminium concentration in the root zone. Analysis often includes characterizing transcript profiles for multiple post-treatment sampling periods and categorizing genes with common patterns of response using hierarchical clustering techniques. In addition, microarrays are also providing insights into developmental changes in gene expression associated with fibre and root elongation in cotton and maize, respectively. Technical and analytical limitations of microarrays are discussed and projects attempting to advance areas of microarray design and data analysis are highlighted. Finally, although much work remains, we conclude that microarrays are a valuable tool for the plant physiologist interested in the characterization and identification of individual genes and gene families with potential application in the fields of agriculture, horticulture and forestry.

  5. Plant-pathogen interactions: what microarray tells about it?

    Science.gov (United States)

    Lodha, T D; Basak, J

    2012-01-01

    Plant defense responses are mediated by elementary regulatory proteins that affect expression of thousands of genes. Over the last decade, microarray technology has played a key role in deciphering the underlying networks of gene regulation in plants that lead to a wide variety of defence responses. Microarray is an important tool to quantify and profile the expression of thousands of genes simultaneously, with two main aims: (1) gene discovery and (2) global expression profiling. Several microarray technologies are currently in use; most include a glass slide platform with spotted cDNA or oligonucleotides. To date, microarray technology has been used in the identification of regulatory genes and end-point defence genes, and to understand the signal transduction processes underlying disease resistance and its intimate links to other physiological pathways. Microarray technology can be used for in-depth, simultaneous profiling of host/pathogen genes as the disease progresses from infection to resistance/susceptibility at different developmental stages of the host, which can be done in different environments, for clearer understanding of the processes involved. A thorough knowledge of plant disease resistance, obtained through a successful combination of microarray and other high-throughput techniques, as well as biochemical, genetic, and cell biological experiments, is needed for practical application to secure and stabilize the yield of many crop plants. This review starts with a brief introduction to microarray technology, followed by the basics of plant-pathogen interaction, the use of DNA microarrays over the last decade to unravel the mysteries of plant-pathogen interaction, and ends with the future prospects of this technology.

  6. Application of broad-spectrum resequencing microarray for genotyping rhabdoviruses.

    Science.gov (United States)

    Dacheux, Laurent; Berthet, Nicolas; Dissard, Gabriel; Holmes, Edward C; Delmas, Olivier; Larrous, Florence; Guigon, Ghislaine; Dickinson, Philip; Faye, Ousmane; Sall, Amadou A; Old, Iain G; Kong, Katherine; Kennedy, Giulia C; Manuguerra, Jean-Claude; Cole, Stewart T; Caro, Valérie; Gessain, Antoine; Bourhy, Hervé

    2010-09-01

    The rapid and accurate identification of pathogens is critical in the control of infectious disease. To this end, we analyzed the capacity for viral detection and identification of a newly described high-density resequencing microarray (RMA), termed PathogenID, which was designed for multiple pathogen detection using database similarity searching. We focused on one of the largest and most diverse viral families described to date, the family Rhabdoviridae. We demonstrate that this approach has the potential to identify both known and related viruses for which precise sequence information is unavailable. In particular, we demonstrate that a strategy based on consensus sequence determination for analysis of RMA output data enabled successful detection of viruses exhibiting up to 26% nucleotide divergence with the closest sequence tiled on the array. Using clinical specimens obtained from rabid patients and animals, this method also shows a high species level concordance with standard reference assays, indicating that it is amenable for the development of diagnostic assays. Finally, 12 animal rhabdoviruses which were currently unclassified, unassigned, or assigned as tentative species within the family Rhabdoviridae were successfully detected. These new data allowed an unprecedented phylogenetic analysis of 106 rhabdoviruses and further suggest that the principles and methodology developed here may be used for the broad-spectrum surveillance and the broader-scale investigation of biodiversity in the viral world.
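The key tolerance quoted above — detection of viruses up to 26% diverged from the closest sequence tiled on the array — refers to pairwise nucleotide divergence, which for aligned sequences of equal length is just the mismatch fraction. A minimal sketch (alignment is assumed to have been done already; the sequences are toy inputs):

```python
def nucleotide_divergence(seq_a: str, seq_b: str) -> float:
    """Percent mismatches between two aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    return 100.0 * mismatches / len(seq_a)

# Two mismatches over 12 aligned positions -> ~16.7% divergence.
print(nucleotide_divergence("ACGTACGTACGT", "ACGAACGTTCGT"))
```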

  7. Application of Broad-Spectrum Resequencing Microarray for Genotyping Rhabdoviruses▿

    Science.gov (United States)

    Dacheux, Laurent; Berthet, Nicolas; Dissard, Gabriel; Holmes, Edward C.; Delmas, Olivier; Larrous, Florence; Guigon, Ghislaine; Dickinson, Philip; Faye, Ousmane; Sall, Amadou A.; Old, Iain G.; Kong, Katherine; Kennedy, Giulia C.; Manuguerra, Jean-Claude; Cole, Stewart T.; Caro, Valérie; Gessain, Antoine; Bourhy, Hervé

    2010-01-01

    The rapid and accurate identification of pathogens is critical in the control of infectious disease. To this end, we analyzed the capacity for viral detection and identification of a newly described high-density resequencing microarray (RMA), termed PathogenID, which was designed for multiple pathogen detection using database similarity searching. We focused on one of the largest and most diverse viral families described to date, the family Rhabdoviridae. We demonstrate that this approach has the potential to identify both known and related viruses for which precise sequence information is unavailable. In particular, we demonstrate that a strategy based on consensus sequence determination for analysis of RMA output data enabled successful detection of viruses exhibiting up to 26% nucleotide divergence with the closest sequence tiled on the array. Using clinical specimens obtained from rabid patients and animals, this method also shows a high species level concordance with standard reference assays, indicating that it is amenable for the development of diagnostic assays. Finally, 12 animal rhabdoviruses which were currently unclassified, unassigned, or assigned as tentative species within the family Rhabdoviridae were successfully detected. These new data allowed an unprecedented phylogenetic analysis of 106 rhabdoviruses and further suggest that the principles and methodology developed here may be used for the broad-spectrum surveillance and the broader-scale investigation of biodiversity in the viral world. PMID:20610710

  8. Analysis of Chromothripsis by Combined FISH and Microarray Analysis.

    Science.gov (United States)

    MacKinnon, Ruth N

    2018-01-01

    Fluorescence in situ hybridization (FISH) to metaphase chromosomes, in conjunction with SNP array, array CGH, or whole genome sequencing, can help determine the organization of abnormal genomes after chromothripsis and other types of complex genome rearrangement. DNA microarrays can identify the changes in copy number, but they do not give information on the organization of the abnormal chromosomes, balanced rearrangements, or abnormalities of the centromeres and other regions comprised of highly repetitive DNA. Many of these details can be determined by the strategic use of metaphase FISH. FISH is a single-cell technique, so it can identify low-frequency chromosome abnormalities, and it can determine which chromosome abnormalities occur in the same or different clonal populations. These are important considerations in cancer. Metaphase chromosomes are intact, so information about abnormalities of the chromosome homologues is preserved. Here we describe strategies for working out the organization of highly rearranged genomes by combining SNP array data with various metaphase FISH methods. This approach can also be used to address some of the uncertainties arising from whole genome or mate-pair sequencing data.

  9. Deciphering cellular morphology and biocompatibility using polymer microarrays

    International Nuclear Information System (INIS)

    Pernagallo, Salvatore; Unciti-Broceta, Asier; Diaz-Mochon, Juan Jose; Bradley, Mark

    2008-01-01

    A quantitative and qualitative analysis of cellular adhesion, morphology and viability is essential in understanding and designing biomaterials such as those involved in implant surfaces or as tissue-engineering scaffolds. As a means to simultaneously perform these studies in a high-throughput (HT) manner, we report a normalized protocol which allows the rapid analysis of a large number of potential cell binding substrates using polymer microarrays and high-content fluorescence microscopy. The method was successfully applied to the discovery of optimal polymer substrates from a 214-member polyurethane library with mouse fibroblast cells (L929), as well as simultaneous evaluation of cell viability and cellular morphology. Analysis demonstrated high biocompatibility of the binding polymers and permitted the identification of several different cellular morphologies, showing that specific polymer interactions may provoke changes in cell shape. In addition, SAR studies showed a clear correspondence between cellular adhesion and polymer structure. The approach can be utilized to perform multiple experiments (up to 1024 single experiments per slide) in a highly reproducible manner, leading to the generation of vast amounts of data in a short time period (48-72 h) while dramatically reducing the quantities of polymers, reagents and cells used.

  10. Application of microarray analysis on computer cluster and cloud platforms.

    Science.gov (United States)

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.

  11. Approximate circuits for increased reliability

    Science.gov (United States)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
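The voter described here reduces to a bitwise majority over the approximate circuits' outputs: the reference result is recovered so long as, for each input, only a minority of circuits disagree with it. A minimal sketch with three hypothetical 1-bit approximate circuits (the circuit functions are illustrative, not from the patent):

```python
def majority(bits):
    """Majority value of an odd number of binary signals."""
    return int(sum(bits) > len(bits) // 2)

# Reference circuit: 2-input AND. Each approximate circuit disagrees with
# the reference on at most one input pattern, and never the same pattern
# as another circuit, so the voted output always matches the reference.
reference = lambda a, b: a & b
approx = [
    lambda a, b: a & b,  # exact
    lambda a, b: a,      # wrong when a=1, b=0
    lambda a, b: b,      # wrong when a=0, b=1
]
for a in (0, 1):
    for b in (0, 1):
        voted = majority([circuit(a, b) for circuit in approx])
        assert voted == reference(a, b)
print("voter matches reference on all inputs")
```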

  12. Aspects of three field approximations: Darwin, frozen, EMPULSE

    International Nuclear Information System (INIS)

    Boyd, J.K.; Lee, E.P.; Yu, S.S.

    1985-01-01

    The traditional approach used to study high energy beam propagation relies on the frozen field approximation. A minor modification of the frozen field approximation yields the set of equations applied to the analysis of the hose instability. These models are contrasted with the Darwin field approximation. A statement is made of the Darwin model equations relevant to the analysis of the hose instability.

  13. A comparative analysis of DNA barcode microarray feature size

    Directory of Open Access Journals (Sweden)

    Smith Andrew M

    2009-10-01

    Full Text Available Abstract Background Microarrays are an invaluable tool in many modern genomic studies. It is generally perceived that decreasing the size of microarray features leads to arrays with higher resolution (due to greater feature density), but this increase in resolution can compromise sensitivity. Results We demonstrate that barcode microarrays with smaller features are equally capable of detecting variation in DNA barcode intensity when compared to larger feature sizes within a specific microarray platform. The barcodes used in this study are the well-characterized set derived from the Yeast KnockOut (YKO) collection used for screens of pooled yeast (Saccharomyces cerevisiae) deletion mutants. We treated these pools with the glycosylation inhibitor tunicamycin as a test compound. Three generations of barcode microarrays at 30, 8 and 5 μm feature sizes independently identified the primary target of tunicamycin to be ALG7. Conclusion We show that the data obtained with the 5 μm feature size are of comparable quality to the 30 μm size and propose that further shrinking of features could yield barcode microarrays with equal or greater resolving power and, more importantly, higher density.

  14. Assessing Bacterial Interactions Using Carbohydrate-Based Microarrays

    Directory of Open Access Journals (Sweden)

    Andrea Flannery

    2015-12-01

    Full Text Available Carbohydrates play a crucial role in host-microorganism interactions and many host glycoconjugates are receptors or co-receptors for microbial binding. Host glycosylation varies with species and location in the body, and this contributes to species specificity and tropism of commensal and pathogenic bacteria. Additionally, bacterial glycosylation is often the first bacterial molecular species encountered and responded to by the host system. Accordingly, characterising and identifying the exact structures involved in these critical interactions is an important priority in deciphering microbial pathogenesis. Carbohydrate-based microarray platforms have been an underused tool for screening bacterial interactions with specific carbohydrate structures, but they have grown in popularity in recent years. In this review, we discuss carbohydrate-based microarrays that have been profiled with whole bacteria, recombinantly expressed adhesins or serum antibodies. Three main types of carbohydrate-based microarray platform are considered: (i) conventional carbohydrate or glycan microarrays; (ii) whole mucin microarrays; and (iii) microarrays constructed from bacterial polysaccharides or their components. Determining the nature of the interactions between bacteria and host can help clarify the molecular mechanisms of carbohydrate-mediated interactions in microbial pathogenesis, infectious disease and host immune response and may lead to new strategies to boost therapeutic treatments.

  15. Unambiguous results from variational matrix Pade approximants

    International Nuclear Information System (INIS)

    Pindor, Maciej.

    1979-10-01

    Variational Matrix Padé Approximants are studied as a nonlinear variational problem. It is shown that although a stationary value of the Schwinger functional is a stationary value of the VMPA, the latter also has another stationary value. It is therefore proposed that instead of looking for a stationary point of the VMPA, one minimizes some non-negative functional and then calculates the VMPA at the point where the former has its absolute minimum. This approach, which we call the Method of the Variational Gradient (MVG), gives unambiguous results and is also shown to minimize a distance between the approximate and the exact stationary values of the Schwinger functional.

  16. The efficiency of Flory approximation

    International Nuclear Information System (INIS)

    Obukhov, S.P.

    1984-01-01

    The Flory approximation for the self-avoiding chain problem is compared with a conventional perturbation theory expansion. While in perturbation theory each term is averaged over the unperturbed set of configurations, the Flory approximation is equivalent to the perturbation theory with the averaging over the stretched set of configurations. This imposes restrictions on the integration domain in higher order terms and they can be treated self-consistently. The accuracy δν/ν of Flory approximation for self-avoiding chain problems is estimated to be 2-5% for 1 < d < 4. (orig.)
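For context, the Flory estimate for the self-avoiding walk exponent in $d$ dimensions is the standard result (stated here for reference; it is not spelled out in the abstract):

```latex
\nu_F = \frac{3}{d+2}, \qquad 1 \le d \le 4 .
```

This gives $\nu_F = 3/4$ in $d=2$ (exact) and $\nu_F = 3/5$ in $d=3$, within roughly 2% of the best numerical value $\nu \approx 0.588$, consistent with the 2-5% accuracy quoted in the abstract.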

  17. How large a training set is needed to develop a classifier for microarray data?

    Science.gov (United States)

    Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M

    2008-01-01

    A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.

  18. Prediction of transcriptional regulatory elements for plant hormone responses based on microarray data

    Directory of Open Access Journals (Sweden)

    Yamaguchi-Shinozaki Kazuko

    2011-02-01

Full Text Available Abstract Background Phytohormones organize plant development and environmental adaptation through cell-to-cell signal transduction, and their action involves transcriptional activation. Recent international efforts to establish and maintain public databases of Arabidopsis microarray data have enabled the utilization of this data in the analysis of various phytohormone responses, providing genome-wide identification of promoters targeted by phytohormones. Results We utilized such microarray data for prediction of cis-regulatory elements with an octamer-based approach. Our test prediction of a drought-responsive RD29A promoter with the aid of microarray data for response to drought, ABA and overexpression of DREB1A, a key regulator of cold and drought response, provided reasonable results that fit with the experimentally identified regulatory elements. With this success, we expanded the prediction to various phytohormone responses, including those for abscisic acid, auxin, cytokinin, ethylene, brassinosteroid, jasmonic acid, and salicylic acid, as well as for hydrogen peroxide, drought and DREB1A overexpression. In total, 622 promoters that are activated by phytohormones were subjected to the prediction. In addition, we have assigned putative functions to 53 octamers of the Regulatory Element Group (REG) that have been extracted as position-dependent cis-regulatory elements with the aid of their feature of preferential appearance in the promoter region. Conclusions Our prediction of Arabidopsis cis-regulatory elements for phytohormone responses provides guidance for experimental analysis of promoters to reveal the basis of the transcriptional network of phytohormone responses.

  19. Systematic validation and atomic force microscopy of non-covalent short oligonucleotide barcode microarrays.

    Directory of Open Access Journals (Sweden)

    Michael A Cook

Full Text Available BACKGROUND: Molecular barcode arrays provide a powerful means to analyze cellular phenotypes in parallel through detection of short (20-60 base) unique sequence tags, or "barcodes", associated with each strain or clone in a collection. However, the costs of current methods for microarray construction, whether by in situ oligonucleotide synthesis or ex situ coupling of modified oligonucleotides to the slide surface, are often prohibitive to large-scale analyses. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that unmodified 20mer oligonucleotide probes printed on conventional surfaces show comparable hybridization signals to covalently linked 5'-amino-modified probes. As a test case, we undertook systematic cell size analysis of the budding yeast Saccharomyces cerevisiae genome-wide deletion collection by size separation of the deletion pool followed by determination of strain abundance in size fractions by barcode arrays. We demonstrate that the properties of a 13K unique-feature spotted 20mer oligonucleotide barcode microarray compare favorably with an analogous covalently linked oligonucleotide array. Further, cell size profiles obtained with the size selection/barcode array approach recapitulate previous cell size measurements of individual deletion strains. Finally, through atomic force microscopy (AFM), we characterize the mechanism of hybridization to unmodified barcode probes on the slide surface. CONCLUSIONS/SIGNIFICANCE: These studies push the lower limit of probe size in genome-scale unmodified oligonucleotide microarray construction and demonstrate a versatile, cost-effective and reliable method for molecular barcode analysis.

  20. ZODET: software for the identification, analysis and visualisation of outlier genes in microarray expression data.

    Directory of Open Access Journals (Sweden)

    Daniel L Roden

Full Text Available Complex human diseases can show significant heterogeneity between patients with the same phenotypic disorder. An outlier detection strategy was developed to identify variants at the level of gene transcription that are of potential biological and phenotypic importance. Here we describe a graphical software package, z-score outlier detection (ZODET), that enables identification and visualisation of gross abnormalities in gene expression (outliers) in individuals, using whole genome microarray data. The mean and standard deviation of expression in a healthy control cohort are used to detect both over- and under-expressed probes in individual test subjects. We compared the potential of ZODET to detect outlier genes in gene expression datasets with a previously described statistical method, gene tissue index (GTI), using a simulated expression dataset and a publicly available monocyte-derived macrophage microarray dataset. Taken together, these results support ZODET as a novel approach to identify outlier genes of potential pathogenic relevance in complex human diseases. The algorithm is implemented using R packages and Java. The software is freely available from http://www.ucl.ac.uk/medicine/molecular-medicine/publications/microarray-outlier-analysis.
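The core of the z-score outlier strategy described in this record can be sketched in a few lines: each probe in a test subject is standardized against the mean and standard deviation of the healthy control cohort, and probes beyond a chosen threshold are flagged. This is an illustrative sketch of the idea only; the function name, parameters and default threshold are assumptions, not ZODET's actual R/Java interface.

```python
import numpy as np

def zscore_outliers(test_expr, control_expr, threshold=3.0):
    """Flag over-/under-expressed probes in one test subject by
    z-score against a healthy control cohort. Illustrative only;
    names and the default threshold are assumptions, not ZODET's."""
    mean = control_expr.mean(axis=0)        # per-probe control mean
    sd = control_expr.std(axis=0, ddof=1)   # per-probe control SD
    z = (test_expr - mean) / sd
    over = np.flatnonzero(z > threshold)    # over-expressed probe indices
    under = np.flatnonzero(z < -threshold)  # under-expressed probe indices
    return z, over, under
```

With a threshold of 3, a probe is flagged only when the subject's expression lies more than three control standard deviations from the control mean.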

  1. permGPU: Using graphics processing units in RNA microarray association studies

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2010-06-01

Full Text Available Abstract Background Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, which are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. Results We have developed a CUDA-based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. Conclusions permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.
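The permutation-resampling scheme that permGPU accelerates can be illustrated on the CPU with NumPy. The sketch below computes per-gene permutation p-values for a two-group comparison with an absolute difference-of-means statistic, a stand-in for the test statistics permGPU actually offers; the function name and signature are hypothetical, not permGPU's API.

```python
import numpy as np

def perm_pvalues(expr, labels, n_perm=1000, seed=0):
    """Per-gene permutation p-values for a two-group comparison using
    an absolute difference-of-means statistic. expr: genes x samples,
    labels: 0/1 per sample. CPU sketch of the resampling scheme,
    not permGPU's API."""
    labels = np.asarray(labels)

    def stat(lab):
        # vectorized over all genes at once
        return np.abs(expr[:, lab == 1].mean(axis=1)
                      - expr[:, lab == 0].mean(axis=1))

    rng = np.random.default_rng(seed)
    obs = stat(labels)
    exceed = np.zeros(expr.shape[0])
    for _ in range(n_perm):
        # count permutations with a statistic at least as extreme
        exceed += stat(rng.permutation(labels)) >= obs
    # add-one correction keeps p-values strictly positive
    return (exceed + 1.0) / (n_perm + 1.0)
```

Each permutation reshuffles the sample labels once and evaluates the statistic for every gene simultaneously; on a GPU, permutations themselves can additionally run in parallel, which is the source of permGPU's speedup.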

  2. Training ANFIS structure using genetic algorithm for liver cancer classification based on microarray gene expression data

    Directory of Open Access Journals (Sweden)

    Bülent Haznedar

    2017-02-01

Full Text Available Classification is an important data mining technique used in many fields, such as medicine, genetics and biomedical engineering. The number of studies on the classification of DNA microarray gene expression data has increased markedly in recent years. However, because microarray gene expression data contain a large number of genes and the relationships among these data are mostly nonlinear, the success of conventional classification algorithms can be limited. For these reasons, interest in classification methods based on artificial intelligence has gradually increased. In this study, a hybrid approach based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) and a Genetic Algorithm (GA) is proposed to classify a liver cancer microarray data set. Simulation results are compared with the results of other methods and show that the proposed method outperforms the others.

  3. Microarray MAPH: accurate array-based detection of relative copy number in genomic DNA.

    Science.gov (United States)

Gibbons, Brian; Datta, Parikkhit; Wu, Ying; Chan, Alan; Armour, John A L

    2006-06-30

    Current methods for measurement of copy number do not combine all the desirable qualities of convenience, throughput, economy, accuracy and resolution. In this study, to improve the throughput associated with Multiplex Amplifiable Probe Hybridisation (MAPH) we aimed to develop a modification based on the 3-Dimensional, Flow-Through Microarray Platform from PamGene International. In this new method, electrophoretic analysis of amplified products is replaced with photometric analysis of a probed oligonucleotide array. Copy number analysis of hybridised probes is based on a dual-label approach by comparing the intensity of Cy3-labelled MAPH probes amplified from test samples co-hybridised with similarly amplified Cy5-labelled reference MAPH probes. The key feature of using a hybridisation-based end point with MAPH is that discrimination of amplified probes is based on sequence and not fragment length. In this study we showed that microarray MAPH measurement of PMP22 gene dosage correlates well with PMP22 gene dosage determined by capillary MAPH and that copy number was accurately reported in analyses of DNA from 38 individuals, 12 of which were known to have Charcot-Marie-Tooth disease type 1A (CMT1A). Measurement of microarray-based endpoints for MAPH appears to be of comparable accuracy to electrophoretic methods, and holds the prospect of fully exploiting the potential multiplicity of MAPH. The technology has the potential to simplify copy number assays for genes with a large number of exons, or of expanded sets of probes from dispersed genomic locations.
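The dual-label comparison described above reduces, in essence, to scaling per-probe Cy3/Cy5 intensity ratios by the ratio observed at probes of known (two-copy) dosage. A minimal sketch of that arithmetic, assuming intensities are already background-corrected; this is an illustration, not the published analysis pipeline, and all names are hypothetical.

```python
import numpy as np

def copy_number(cy3_test, cy5_ref, control_idx, normal_copies=2):
    """Estimate relative copy number per probe from dual-label
    intensities: Cy3 (test sample) over Cy5 (reference sample),
    normalized by the median ratio of two-copy control probes.
    Hypothetical helper for illustration only."""
    ratio = np.asarray(cy3_test, float) / np.asarray(cy5_ref, float)
    norm = ratio / np.median(ratio[control_idx])  # 1.0 == two copies
    return normal_copies * norm
```

Under this scaling, a probe with a normalized ratio of 1.5 reports three copies, the dosage expected for the CMT1A duplication of PMP22, while a ratio of 0.5 reports a single-copy deletion.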

  4. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

The main purpose of microarray studies is screening of differentially expressed genes as candidates for further investigation. Because of limited resources in this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates the differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Mutation analysis of 272 Spanish families affected by autosomal recessive retinitis pigmentosa using a genotyping microarray.

    Science.gov (United States)

    Ávila-Fernández, Almudena; Cantalapiedra, Diego; Aller, Elena; Vallespín, Elena; Aguirre-Lambán, Jana; Blanco-Kelly, Fiona; Corton, M; Riveiro-Álvarez, Rosa; Allikmets, Rando; Trujillo-Tiebas, María José; Millán, José M; Cremers, Frans P M; Ayuso, Carmen

    2010-12-03

Retinitis pigmentosa (RP) is a genetically heterogeneous disorder characterized by progressive loss of vision. The aim of this study was to identify the causative mutations in 272 Spanish families using a genotyping microarray. 272 unrelated Spanish families, 107 with autosomal recessive RP (arRP) and 165 with sporadic RP (sRP), were studied using the APEX genotyping microarray. The families were also classified by clinical criteria: 86 juvenile and 186 typical RP families. Haplotype and sequence analysis were performed to identify the second mutated allele. At least one gene variant was found in 14% and 16% of the juvenile and typical RP groups, respectively. Further study identified four new mutations, establishing both causative changes in 11% of the families. Retinol Dehydrogenase 12 (RDH12) was the most frequently mutated gene in the juvenile RP group, and Usher Syndrome 2A (USH2A) and Ceramide Kinase-Like (CERKL) were the most frequently mutated genes in the typical RP group. The only variant found in CERKL was p.Arg257Stop, the most frequent mutation. The genotyping microarray combined with segregation and sequence analysis allowed us to identify the causative mutations in 11% of the families. Due to the low number of characterized families, this approach should be used in tandem with other techniques.

  6. Array2BIO: from microarray expression data to functional annotation of co-regulated genes

    Directory of Open Access Journals (Sweden)

    Rasley Amy

    2006-06-01

Full Text Available Abstract Background There are several isolated tools for partial analysis of microarray expression data. To provide an integrative, easy-to-use and automated toolkit for the analysis of Affymetrix microarray expression data we have developed Array2BIO, an application that couples several analytical methods into a single web based utility. Results Array2BIO converts raw intensities into probe expression values, automatically maps those to genes, and subsequently identifies groups of co-expressed genes using two complementary approaches: (1) comparative analysis of signal versus control and (2) clustering analysis of gene expression across different conditions. The identified genes are assigned to functional categories based on Gene Ontology classification and KEGG protein interaction pathways. Array2BIO reliably handles low-expressor genes and provides a set of statistical methods for quantifying expression levels, including Benjamini-Hochberg and Bonferroni multiple testing corrections. An automated interface with the ECR Browser provides evolutionary conservation analysis for the identified gene loci while the interconnection with Crème allows prediction of gene regulatory elements that underlie observed expression patterns. Conclusion We have developed Array2BIO – a web based tool for rapid comprehensive analysis of Affymetrix microarray expression data, which also allows users to link expression data to Dcode.org comparative genomics tools and integrates a system for translating co-expression data into mechanisms of gene co-regulation. Array2BIO is publicly available at http://array2bio.dcode.org.

  7. Modified semiclassical approximation for trapped Bose gases

    International Nuclear Information System (INIS)

    Yukalov, V.I.

    2005-01-01

    A generalization of the semiclassical approximation is suggested allowing for an essential extension of its region of applicability. In particular, it becomes possible to describe Bose-Einstein condensation of a trapped gas in low-dimensional traps and in traps of low confining dimensions, for which the standard semiclassical approximation is not applicable. The result of the modified approach is shown to coincide with purely quantum-mechanical calculations for harmonic traps, including the one-dimensional harmonic trap. The advantage of the semiclassical approximation is in its simplicity and generality. Power-law potentials of arbitrary powers are considered. The effective thermodynamic limit is defined for any confining dimension. The behavior of the specific heat, isothermal compressibility, and density fluctuations is analyzed, with an emphasis on low confining dimensions, where the usual semiclassical method fails. The peculiarities of the thermodynamic characteristics in the effective thermodynamic limit are discussed

  8. Fast wavelet based sparse approximate inverse preconditioner

    Energy Technology Data Exchange (ETDEWEB)

    Wan, W.L. [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies of Huckle and Grote and of Chow and Saad showed that the sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense that their performance is not independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for matrices of this kind.

  9. Rollout sampling approximate policy iteration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Lagoudakis, M.G.

    2008-01-01

Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a classification problem.

  10. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python modules.

  11. Graph Based Study of Allergen Cross-Reactivity of Plant Lipid Transfer Proteins (LTPs) Using Microarray in a Multicenter Study

    Science.gov (United States)

    Palacín, Arantxa; Gómez-Casado, Cristina; Rivas, Luis A.; Aguirre, Jacobo; Tordesillas, Leticia; Bartra, Joan; Blanco, Carlos; Carrillo, Teresa; Cuesta-Herranz, Javier; de Frutos, Consolación; Álvarez-Eire, Genoveva García; Fernández, Francisco J.; Gamboa, Pedro; Muñoz, Rosa; Sánchez-Monge, Rosa; Sirvent, Sofía; Torres, María J.; Varela-Losada, Susana; Rodríguez, Rosalía; Parro, Victor; Blanca, Miguel; Salcedo, Gabriel; Díaz-Perales, Araceli

    2012-01-01

The study of cross-reactivity in allergy is key to both understanding the allergic response of many patients and providing them with a rational treatment. In the present study, protein microarrays and a co-sensitization graph approach were used in conjunction with an allergen microarray immunoassay. This enabled us to include a wide number of proteins and a large number of patients, and to study sensitization profiles among members of the LTP family. Fourteen LTPs from the most frequent plant food-induced allergies in the geographical area studied were printed into a microarray specifically designed for this research. 212 patients with fruit allergy and 117 food-tolerant pollen allergic subjects were recruited from seven regions of Spain with different pollen profiles, and their sera were tested with allergen microarray. This approach has proven itself to be a good tool to study cross-reactivity between members of the LTP family, and could become a useful strategy to analyze other families of allergens. PMID:23272072

  12. Graph based study of allergen cross-reactivity of plant lipid transfer proteins (LTPs using microarray in a multicenter study.

    Directory of Open Access Journals (Sweden)

    Arantxa Palacín

Full Text Available The study of cross-reactivity in allergy is key to both understanding the allergic response of many patients and providing them with a rational treatment. In the present study, protein microarrays and a co-sensitization graph approach were used in conjunction with an allergen microarray immunoassay. This enabled us to include a wide number of proteins and a large number of patients, and to study sensitization profiles among members of the LTP family. Fourteen LTPs from the most frequent plant food-induced allergies in the geographical area studied were printed into a microarray specifically designed for this research. 212 patients with fruit allergy and 117 food-tolerant pollen allergic subjects were recruited from seven regions of Spain with different pollen profiles, and their sera were tested with allergen microarray. This approach has proven itself to be a good tool to study cross-reactivity between members of the LTP family, and could become a useful strategy to analyze other families of allergens.
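A co-sensitization graph of the kind used in this study pairs allergens whenever the same patients react to both. A hedged sketch of that construction is shown below; the positivity threshold, data layout and function name are assumptions for illustration, not the study's published criteria.

```python
import itertools
import numpy as np

def cosensitization_graph(sero, threshold):
    """Build a co-sensitization graph from a patients x allergens
    sero-reactivity matrix. Nodes are allergens; the weight of edge
    (i, j) is the number of patients positive to both allergens i
    and j. Sketch only: threshold and layout are assumptions."""
    pos = sero > threshold  # positivity calls per patient/allergen
    edges = {}
    for i, j in itertools.combinations(range(sero.shape[1]), 2):
        w = int(np.sum(pos[:, i] & pos[:, j]))  # patients positive to both
        if w:
            edges[(i, j)] = w
    return edges
```

Heavily weighted edges then suggest clusters of cross-reactive LTPs, which can be examined with standard graph tools.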

  13. Evaluation of Different Normalization and Analysis Procedures for Illumina Gene Expression Microarray Data Involving Small Changes

    Science.gov (United States)

    Johnstone, Daniel M.; Riveros, Carlos; Heidari, Moones; Graham, Ross M.; Trinder, Debbie; Berretta, Regina; Olynyk, John K.; Scott, Rodney J.; Moscato, Pablo; Milward, Elizabeth A.

    2013-01-01

    While Illumina microarrays can be used successfully for detecting small gene expression changes due to their high degree of technical replicability, there is little information on how different normalization and differential expression analysis strategies affect outcomes. To evaluate this, we assessed concordance across gene lists generated by applying different combinations of normalization strategy and analytical approach to two Illumina datasets with modest expression changes. In addition to using traditional statistical approaches, we also tested an approach based on combinatorial optimization. We found that the choice of both normalization strategy and analytical approach considerably affected outcomes, in some cases leading to substantial differences in gene lists and subsequent pathway analysis results. Our findings suggest that important biological phenomena may be overlooked when there is a routine practice of using only one approach to investigate all microarray datasets. Analytical artefacts of this kind are likely to be especially relevant for datasets involving small fold changes, where inherent technical variation—if not adequately minimized by effective normalization—may overshadow true biological variation. This report provides some basic guidelines for optimizing outcomes when working with Illumina datasets involving small expression changes. PMID:27605185

  14. Bayesian meta-analysis models for microarray data: a comparative study

    Directory of Open Access Journals (Sweden)

    Song Joon J

    2007-03-01

    Full Text Available Abstract Background With the growing abundance of microarray data, statistical methods are increasingly needed to integrate results across studies. Two common approaches for meta-analysis of microarrays include either combining gene expression measures across studies or combining summaries such as p-values, probabilities or ranks. Here, we compare two Bayesian meta-analysis models that are analogous to these methods. Results Two Bayesian meta-analysis models for microarray data have recently been introduced. The first model combines standardized gene expression measures across studies into an overall mean, accounting for inter-study variability, while the second combines probabilities of differential expression without combining expression values. Both models produce the gene-specific posterior probability of differential expression, which is the basis for inference. Since the standardized expression integration model includes inter-study variability, it may improve accuracy of results versus the probability integration model. However, due to the small number of studies typical in microarray meta-analyses, the variability between studies is challenging to estimate. The probability integration model eliminates the need to model variability between studies, and thus its implementation is more straightforward. We found in simulations of two and five studies that combining probabilities outperformed combining standardized gene expression measures for three comparison values: the percent of true discovered genes in meta-analysis versus individual studies; the percent of true genes omitted in meta-analysis versus separate studies, and the number of true discovered genes for fixed levels of Bayesian false discovery. We identified similar results when pooling two independent studies of Bacillus subtilis. We assumed that each study was produced from the same microarray platform with only two conditions: a treatment and control, and that the data sets

  15. Mean-field approximation minimizes relative entropy

    International Nuclear Information System (INIS)

    Bilbro, G.L.; Snyder, W.E.; Mann, R.C.

    1991-01-01

    The authors derive the mean-field approximation from the information-theoretic principle of minimum relative entropy instead of by minimizing Peierls's inequality for the Weiss free energy of statistical physics theory. They show that information theory leads to the statistical mechanics procedure. As an example, they consider a problem in binary image restoration. They find that mean-field annealing compares favorably with the stochastic approach
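The abstract's claim can be stated compactly: restricting to factorized (mean-field) distributions and minimizing the relative entropy to the true Gibbs distribution yields the usual mean-field self-consistency equations. In standard notation (not necessarily the authors' own):

```latex
q^{*} \;=\; \arg\min_{q(\mathbf{x}) = \prod_i q_i(x_i)}
  D_{\mathrm{KL}}(q \,\|\, p)
  \;=\; \arg\min_{q} \sum_{\mathbf{x}} q(\mathbf{x})
  \ln \frac{q(\mathbf{x})}{p(\mathbf{x})},
\qquad
q_i^{*}(x_i) \;\propto\;
  \exp\!\Big( \mathbb{E}_{\prod_{j \neq i} q_j^{*}}
  \big[ \ln p(\mathbf{x}) \big] \Big).
```

The second relation is the standard mean-field fixed point: each factor is updated against the expectation of the log-density over all other factors, which is the information-theoretic counterpart of minimizing the Weiss free energy.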

  16. Approximate modal analysis using Fourier decomposition

    International Nuclear Information System (INIS)

    Kozar, Ivica; Jericevic, Zeljko; Pecak, Tatjana

    2010-01-01

The paper presents a novel numerical approach for the approximate solution of the eigenvalue problem and investigates its suitability for modal analysis of structures, with special attention to plate structures. The approach is based on Fourier transformation of the matrix equation into the frequency domain and subsequent removal of potentially less significant frequencies. The procedure results in a much reduced problem that is used in eigenvalue calculation. After calculation, eigenvectors are expanded and transformed back into the time domain. The principles are presented in Jericevic [1]. The Fourier transform can be formulated in a way that some parts of the matrix that should not be approximated are not transformed but are fully preserved. In this paper we present a formulation that preserves central or edge parts of the matrix and compare it with the formulation that performs the transform on the whole matrix. Numerical experiments on transformed structural dynamic matrices describe the quality of the approximations obtained in modal analysis of structures. On the basis of the numerical experiments, one of the three approaches to matrix reduction is recommended.

  17. Nuclear Hartree-Fock approximation testing and other related approximations

    International Nuclear Information System (INIS)

    Cohenca, J.M.

    1970-01-01

Hartree-Fock and Tamm-Dancoff approximations are tested for the angular momentum of even-even nuclei. Wave functions, energy levels and momenta are comparatively evaluated. Quadrupole interactions are studied following the Elliott model. Results are applied to Ne-20.

  18. Significance analysis of lexical bias in microarray data

    Directory of Open Access Journals (Sweden)

    Falkow Stanley

    2003-04-01

    Full Text Available Abstract Background Genes that are determined to be significantly differentially regulated in microarray analyses often appear to have functional commonalities, such as being components of the same biochemical pathway. This results in certain words being under- or overrepresented in the list of genes. Distinguishing between biologically meaningful trends and artifacts of annotation and analysis procedures is of the utmost importance, as only true biological trends are of interest for further experimentation. A number of sophisticated methods for identification of significant lexical trends are currently available, but these methods are generally too cumbersome for practical use by most microarray users. Results We have developed a tool, LACK, for calculating the statistical significance of apparent lexical bias in microarray datasets. The frequency of a user-specified list of search terms in a list of genes which are differentially regulated is assessed for statistical significance by comparison to randomly generated datasets. The simplicity of the input files and user interface targets the average microarray user who wishes to have a statistical measure of apparent lexical trends in analyzed datasets without the need for bioinformatics skills. The software is available as Perl source or a Windows executable. Conclusion We have used LACK in our laboratory to generate biological hypotheses based on our microarray data. We demonstrate the program's utility using an example in which we confirm significant upregulation of SPI-2 pathogenicity island of Salmonella enterica serovar Typhimurium by the cation chelator dipyridyl.
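The resampling logic behind such a significance calculation is simple to state: count occurrences of a search term among the annotations of the differentially regulated genes, then compare with the counts obtained in equally sized random gene lists. A hedged sketch of that idea follows; it is not LACK's actual implementation, input formats or defaults, and all names are illustrative.

```python
import random

def lexical_bias_pvalue(term, hits, universe, annotations,
                        n_rand=1000, seed=1):
    """Empirical p-value that `term` appears at least as often among
    the annotations of the hit list as in random gene lists of the
    same size. Sketch of the resampling idea only; not LACK's
    implementation. annotations: gene -> description string."""
    rng = random.Random(seed)

    def count(genes):
        # number of genes whose annotation mentions the term
        return sum(term in annotations[g].lower() for g in genes)

    obs = count(hits)
    as_extreme = sum(count(rng.sample(universe, len(hits))) >= obs
                     for _ in range(n_rand))
    # add-one correction keeps the p-value strictly positive
    return (as_extreme + 1) / (n_rand + 1)
```

A small p-value indicates that the term is over-represented in the regulated gene list relative to chance, i.e. an apparent lexical trend worth following up experimentally.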

  19. A Fisheye Viewer for microarray-based gene expression data.

    Science.gov (United States)

    Wu, Min; Thao, Cheng; Mu, Xiangming; Munson, Ethan V

    2006-10-13

    Microarray has been widely used to measure the relative amounts of every mRNA transcript from the genome in a single scan. Biologists have been accustomed to reading their experimental data directly from tables. However, microarray data are quite large and are stored in a series of files in a machine-readable format, so direct reading of the full data set is not feasible. The challenge is to design a user interface that allows biologists to usefully view large tables of raw microarray-based gene expression data. This paper presents one such interface--an electronic table (E-table) that uses fisheye distortion technology. The Fisheye Viewer for microarray-based gene expression data has been successfully developed to view MIAME data stored in the MAGE-ML format. The viewer can be downloaded from the project web site http://polaris.imt.uwm.edu:7777/fisheye/. The fisheye viewer was implemented in Java so that it could run on multiple platforms. We implemented the E-table by adapting JTable, a default table implementation in the Java Swing user interface library. Fisheye views use variable magnification to balance magnification for easy viewing and compression for maximizing the amount of data on the screen. This Fisheye Viewer is a lightweight but useful tool for biologists to quickly overview the raw microarray-based gene expression data in an E-table.
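Fisheye distortion of the kind this viewer applies to table rows and columns is commonly realized with the graphical fisheye transform of Sarkar and Brown, which magnifies distances near the focus and compresses those farther away. A sketch of that transform is shown below; the viewer's actual Java/JTable implementation is not given in the record, so this is only the underlying distortion principle.

```python
def fisheye(x, d=3.0):
    """Sarkar-Brown graphical fisheye transform: maps a normalized
    distance x in [0, 1] from the focus; d > 0 is the distortion
    factor. Near the focus distances are magnified, far ones
    compressed. Sketch of the principle, not the viewer's code."""
    return (d + 1.0) * x / (d * x + 1.0)
```

Applying the transform to each row's distance from the focused row yields variable row heights: rows near the focus get most of the screen space while distant rows are compressed but remain visible, which is exactly the balance the abstract describes.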

  20. A fisheye viewer for microarray-based gene expression data

    Directory of Open Access Journals (Sweden)

    Munson Ethan V

    2006-10-01

    Full Text Available Abstract Background Microarray has been widely used to measure the relative amounts of every mRNA transcript from the genome in a single scan. Biologists have been accustomed to reading their experimental data directly from tables. However, microarray data are quite large and are stored in a series of files in a machine-readable format, so direct reading of the full data set is not feasible. The challenge is to design a user interface that allows biologists to usefully view large tables of raw microarray-based gene expression data. This paper presents one such interface – an electronic table (E-table) that uses fisheye distortion technology. Results The Fisheye Viewer for microarray-based gene expression data has been successfully developed to view MIAME data stored in the MAGE-ML format. The viewer can be downloaded from the project web site http://polaris.imt.uwm.edu:7777/fisheye/. The fisheye viewer was implemented in Java so that it could run on multiple platforms. We implemented the E-table by adapting JTable, a default table implementation in the Java Swing user interface library. Fisheye views use variable magnification to balance magnification for easy viewing and compression for maximizing the amount of data on the screen. Conclusion This Fisheye Viewer is a lightweight but useful tool for biologists to quickly overview the raw microarray-based gene expression data in an E-table.
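The fisheye idea — full magnification at the focused row, progressive compression with distance so the whole table stays on screen — can be sketched with a simple row-height function. The parameters below are illustrative assumptions, not the viewer's actual values:

```python
def fisheye_heights(n_rows, focus, max_h=40, min_h=4, falloff=2.0):
    """Row heights in pixels: largest at the focused row, shrinking
    with distance from it, but never below min_h so every row
    remains visible (the fisheye trade-off between magnification
    and compression)."""
    heights = []
    for i in range(n_rows):
        distance = abs(i - focus)
        heights.append(max(min_h, int(max_h / (1 + falloff * distance))))
    return heights

heights = fisheye_heights(n_rows=9, focus=4)
```

In an adapted JTable this mapping would drive per-row heights, recomputed whenever the focus row changes.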

  1. AN IMPROVED FUZZY CLUSTERING ALGORITHM FOR MICROARRAY IMAGE SPOTS SEGMENTATION

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-11-01

    Full Text Available An automatic cDNA microarray image processing method using an improved fuzzy clustering algorithm is presented in this paper. The spot segmentation algorithm proposed uses the gridding technique developed by the authors earlier for finding the co-ordinates of each spot in an image. Automatic cropping of spots from the microarray image is done using these co-ordinates. The present paper proposes an improved fuzzy clustering algorithm, possibilistic fuzzy local information c-means (PFLICM), to segment the spot foreground (FG) from the background (BG). PFLICM improves the fuzzy local information c-means (FLICM) algorithm by incorporating the typicality of a pixel along with gray-level information and local spatial information. The performance of the algorithm is validated using a set of simulated cDNA microarray images with different levels of added AWGN noise. The strength of the algorithm is tested by computing parameters such as the segmentation matching factor (SMF), probability of error (pe), discrepancy distance (D) and normalized mean square error (NMSE). The SMF value obtained for the PFLICM algorithm shows an improvement of 0.9% and 0.7% for high-noise and low-noise microarray images respectively compared to the FLICM algorithm. The PFLICM algorithm is also applied to real microarray images and gene expression values are computed.
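For orientation, the core of any such method is the fuzzy c-means update loop. The sketch below is plain FCM on scalar pixel intensities — it omits the typicality and local spatial terms that distinguish PFLICM — and the toy intensities are invented:

```python
def fuzzy_c_means_1d(pixels, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means on scalar intensities (FG/BG separation).
    PFLICM additionally folds in typicality and local spatial
    information; only the core membership/center updates are shown."""
    lo, hi = min(pixels), max(pixels)
    # initialize centers spread across the intensity range
    centers = [lo + (hi - lo) * (j + 0.5) / c for j in range(c)]
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = []
        for x in pixels:
            d = [abs(x - cj) or 1e-12 for cj in centers]
            u.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1))
                                for k in range(c)) for j in range(c)])
        # center update: c_j = sum_i u_ij^m * x_i / sum_i u_ij^m
        n = len(pixels)
        centers = [sum((u[i][j] ** m) * pixels[i] for i in range(n))
                   / sum(u[i][j] ** m for i in range(n))
                   for j in range(c)]
    return centers, u

# Toy spot: background intensities near 10, foreground near 200.
pixels = [10, 12, 11, 9, 200, 205, 198, 202]
centers, memberships = fuzzy_c_means_1d(pixels)
```

On well-separated FG/BG intensities the two centers converge close to the cluster means, and each pixel's membership concentrates on its own cluster.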

  2. Advanced Data Mining of Leukemia Cells Micro-Arrays

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2009-12-01

    Full Text Available This paper provides a continuation and extension of previous research by Segall and Pierce (2009a), which discussed data mining of micro-array databases of Leukemia cells, primarily using self-organized maps (SOM). As in Segall and Pierce (2009a) and Segall and Pierce (2009b), the results of applying data mining are shown and discussed for the data categories of the microarray databases of HL60, Jurkat, NB4 and U937 Leukemia cells that are also described in this article. First, a background section is provided on the work of others pertaining to the applications of data mining to micro-array databases of Leukemia cells and micro-array databases in general. As noted in the predecessor article by Segall and Pierce (2009a), micro-array databases are one of the most popular functional genomics tools in use today. The research in this paper is intended to use advanced data mining technologies for better interpretation and knowledge discovery from the patterns of gene expression of HL60, Jurkat, NB4 and U937 Leukemia cells. The advanced data mining performed entailed using other data mining tools such as the cubic clustering criterion, variable importance rankings, decision trees, and more detailed examinations of data mining statistics and of other self-organized map (SOM) clustering regions of the workspace as generated by SAS Enterprise Miner version 4. Conclusions and future directions of the research are also presented.

  3. Probe Selection for DNA Microarrays using OligoWiz

    DEFF Research Database (Denmark)

    Wernersson, Rasmus; Juncker, Agnieszka; Nielsen, Henrik Bjørn

    2007-01-01

    Nucleotide abundance measurements using DNA microarray technology are possible only if appropriate probes complementary to the target nucleotides can be identified. Here we present a protocol for selecting DNA probes for microarrays using the OligoWiz application. OligoWiz is a client-server application that offers a detailed graphical interface and real-time user interaction on the client side, and massive computer power and a large collection of species databases (400, summer 2007) on the server side. Probes are selected according to five weighted scores: cross-hybridization, deltaT(m), folding... computer skills and can be executed from any Internet-connected computer. The probe selection procedure for a standard microarray design targeting all yeast transcripts can be completed in 1 h.

  4. Microarray-based screening of heat shock protein inhibitors.

    Science.gov (United States)

    Schax, Emilia; Walter, Johanna-Gabriela; Märzhäuser, Helene; Stahl, Frank; Scheper, Thomas; Agard, David A; Eichner, Simone; Kirschning, Andreas; Zeilinger, Carsten

    2014-06-20

    Based on the importance of heat shock proteins (HSPs) in diseases such as cancer, Alzheimer's disease or malaria, inhibitors of these chaperones are needed. Today's state-of-the-art techniques to identify HSP inhibitors are performed in microplate format, requiring large amounts of proteins and potential inhibitors. In contrast, we have developed a miniaturized protein microarray-based assay to identify novel inhibitors that allows analysis with 300 pmol of protein. The assay is based on competitive binding of fluorescence-labeled ATP and potential inhibitors to the ATP-binding site of the HSP. The developed microarray therefore enables the parallel analysis of different ATP-binding proteins on a single microarray. We have demonstrated the possibility of multiplexing by immobilizing full-length human HSP90α and HtpG of Helicobacter pylori on microarrays. Fluorescence-labeled ATP was competed by novel geldanamycin/reblastatin derivatives with IC50 values in the range of 0.5 nM to 4 μM and Z(*)-factors between 0.60 and 0.96. Our results demonstrate the potential of a target-oriented multiplexed protein microarray to identify novel inhibitors for different members of the HSP90 family.
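The Z'-factor quoted in the abstract is the standard screening-assay quality metric of Zhang et al. (1999). A minimal computation looks like the following; the fluorescence readings are hypothetical, not data from the paper:

```python
from statistics import mean, stdev

def z_prime(positives, negatives):
    """Z'-factor assay quality metric:
    Z' = 1 - 3*(sd_p + sd_n) / |mu_p - mu_n|.
    Values above 0.5 are conventionally taken to indicate an
    excellent separation between positive and negative controls."""
    return 1.0 - 3.0 * (stdev(positives) + stdev(negatives)) \
           / abs(mean(positives) - mean(negatives))

# Hypothetical fluorescence signals: labeled-ATP binding with and
# without a competing inhibitor.
signal_uninhibited = [100.0, 102.0, 98.0, 101.0]
signal_inhibited = [10.0, 11.0, 9.0, 10.0]
z = z_prime(signal_uninhibited, signal_inhibited)
```

Tight control distributions far apart, as here, give a Z' near 1; overlapping controls drive it toward (or below) zero.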

  5. Approximation by max-product type operators

    CERN Document Server

    Bede, Barnabás; Gal, Sorin G

    2016-01-01

    This monograph presents a broad treatment of developments in an area of constructive approximation involving the so-called "max-product" type operators. The exposition highlights the max-product operators as those which allow one to obtain, in many cases, more valuable estimates than those obtained by classical approaches. The text considers a wide variety of operators which are studied for a number of interesting problems such as quantitative estimates, convergence, saturation results, localization, to name several. Additionally, the book discusses the perfect analogies between the probabilistic approaches of the classical Bernstein type operators and of the classical convolution operators (non-periodic and periodic cases), and the possibilistic approaches of the max-product variants of these operators. These approaches allow for two natural interpretations of the max-product Bernstein type operators and convolution type operators: firstly, as possibilistic expectations of some fuzzy variables, and secondly,...

  6. Development of a high-throughput microfluidic integrated microarray for the detection of chimeric bioweapons.

    Energy Technology Data Exchange (ETDEWEB)

    Sheppod, Timothy; Satterfield, Brent; Hukari, Kyle W.; West, Jason A. A.; Hux, Gary A.

    2006-10-01

    The advancement of DNA cloning has significantly augmented the potential threat of a focused bioweapon assault, such as a terrorist attack. With current DNA cloning techniques, toxin genes from the most dangerous (but environmentally labile) bacterial or viral organisms can now be selected and inserted into a robust organism to produce a virtually unlimited number of deadly chimeric bioweapons. In order to neutralize such a threat, accurate detection of the expressed toxin genes, rather than classification by strain or genealogical descent of these organisms, is critical. The development of a high-throughput microarray approach will enable the detection of unknown chimeric bioweapons. We have developed a unique microfluidic approach to capture and concentrate these threat genes (mRNAs) up to 30-fold. These captured oligonucleotides can then be used to synthesize in situ oligonucleotide copies (cDNA probes) of the captured genes. An integrated microfluidic architecture enables us to control flows of reagents, perform clean-up steps and finally elute nanoliter volumes of synthesized oligonucleotide probes. The integrated approach has enabled a process where chimeric or conventional bioweapons can rapidly be identified based on their toxic function, rather than being restricted to information that may not identify the critical nature of the threat.

  7. Shearlets and Optimally Sparse Approximations

    DEFF Research Database (Denmark)

    Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q

    2012-01-01

    Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations...... optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction...... to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames as well as a reference for the state-of-the-art of this research field....

  8. Diophantine approximation and Dirichlet series

    CERN Document Server

    Queffélec, Hervé

    2013-01-01

    This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...

  9. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise such as Fano and quantization noise also contribute to the overall noise profile. The question remains, however, of how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these types of models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to the real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
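The two candidate models can be compared empirically by simulation. The sketch below (illustrative parameters, not the paper's experimental setup) draws samples of a constant signal under each model; both reproduce the signal-dependent variance, but the Poisson samples are discrete counts while SD-AWGN is continuous:

```python
import math
import random
from statistics import mean, pvariance

def poisson_sample(lam, rng):
    """Knuth's multiplication method; adequate for moderate rates."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate(signal, n=20000, seed=1):
    """n noisy readings of a constant signal under each noise model."""
    rng = random.Random(seed)
    poisson = [poisson_sample(signal, rng) for _ in range(n)]
    # SD-AWGN: zero-mean Gaussian whose variance tracks the signal level
    sd_awgn = [signal + rng.gauss(0.0, math.sqrt(signal))
               for _ in range(n)]
    return poisson, sd_awgn

poisson, sd_awgn = simulate(5.0)
```

For both models the sample mean and variance sit near the signal level (5.0 here); the models diverge in higher-order shape — skewness and discreteness at low photon counts — which is what the paper's characterization probes.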

  10. Rational approximations for tomographic reconstructions

    International Nuclear Information System (INIS)

    Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas

    2013-01-01

    We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp–Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image. (paper)

  11. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  12. Short-term arginine deprivation results in large-scale modulation of hepatic gene expression in both normal and tumor cells: microarray bioinformatic analysis

    Directory of Open Access Journals (Sweden)

    Sabo Edmond

    2006-09-01

    Full Text Available Abstract Background We have reported arginine-sensitive regulation of LAT1 amino acid transporter (SLC 7A5 in normal rodent hepatic cells with loss of arginine sensitivity and high level constitutive expression in tumor cells. We hypothesized that liver cell gene expression is highly sensitive to alterations in the amino acid microenvironment and that tumor cells may differ substantially in gene sets sensitive to amino acid availability. To assess the potential number and classes of hepatic genes sensitive to arginine availability at the RNA level and compare these between normal and tumor cells, we used an Affymetrix microarray approach, a paired in vitro model of normal rat hepatic cells and a tumorigenic derivative with triplicate independent replicates. Cells were exposed to arginine-deficient or control conditions for 18 hours in medium formulated to maintain differentiated function. Results Initial two-way analysis with a p-value of 0.05 identified 1419 genes in normal cells versus 2175 in tumor cells whose expression was altered in arginine-deficient conditions relative to controls, representing 9–14% of the rat genome. More stringent bioinformatic analysis with 9-way comparisons and a minimum of 2-fold variation narrowed this set to 56 arginine-responsive genes in normal liver cells and 162 in tumor cells. Approximately half the arginine-responsive genes in normal cells overlap with those in tumor cells. Of these, the majority was increased in expression and included multiple growth, survival, and stress-related genes. GADD45, TA1/LAT1, and caspases 11 and 12 were among this group. Previously known amino acid regulated genes were among the pool in both cell types. Available cDNA probes allowed independent validation of microarray data for multiple genes. Among genes downregulated under arginine-deficient conditions were multiple genes involved in cholesterol and fatty acid metabolism. Expression of low-density lipoprotein receptor was
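The two-step filter described above — a significance cutoff of p < 0.05 followed by a minimum 2-fold change in either direction — can be sketched directly. Gene names echo those mentioned in the abstract, but the ratios and p-values below are invented for illustration:

```python
def responsive_genes(records, p_cutoff=0.05, min_fold=2.0):
    """Keep genes significant at p_cutoff whose expression changes at
    least min_fold in either direction (deficient vs. control)."""
    return [name for name, fold, p in records
            if p < p_cutoff and (fold >= min_fold or fold <= 1.0 / min_fold)]

# Hypothetical (arginine-deficient / control) expression ratios and p-values
records = [("GADD45", 3.10, 0.001),
           ("TA1/LAT1", 2.40, 0.010),
           ("LDL receptor", 0.40, 0.020),   # downregulated
           ("beta-actin", 1.10, 0.600),     # unchanged
           ("FASN", 0.45, 0.300)]           # large change, not significant
selected = responsive_genes(records)
```

Applying both criteria jointly is what narrows the initial ~1400-2200 genes down to the 56 and 162 arginine-responsive sets reported.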

  13. Approximate reasoning in physical systems

    International Nuclear Information System (INIS)

    Mutihac, R.

    1991-01-01

    The theory of fuzzy sets provides excellent ground to deal with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures, etc.), fuzzy functions (spectra and depth profiles), and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Secondly, stress is put on the application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. IR spectroscopy). Thirdly, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)

  14. Which Members of the Microbial Communities Are Active? Microarrays

    Science.gov (United States)

    Morris, Brandon E. L.

    Here, we introduce the concept of microarrays, discuss the advantages of several different types of arrays and present a case study that illustrates a targeted-profiling approach to bioremediation of a hydrocarbon-contaminated site in an Arctic environment. The majority of microorganisms in the terrestrial subsurface, particularly those involved in 'heavy oil' formation, reservoir souring or biofouling remain largely uncharacterised (Handelsman, 2004). There is evidence though that these processes are biologically catalysed, including stable isotopic composition of hydrocarbons in oil formations (Pallasser, 2000; Sun et al., 2005), the absence of biodegraded oil from reservoirs warmer than 80°C (Head et al., 2003) or negligible biofouling in the absence of biofilms (Dobretsov et al., 2009; Lewandowski and Beyenal, 2008), and all clearly suggest an important role for microorganisms in the deep biosphere in general and oilfield systems in particular. While the presence of sulphate-reducing bacteria in oilfields was first observed in the early twentieth century (Bastin, 1926), it was only through careful experiments with isolates from oil systems or contaminated environments that unequivocal evidence for hydrocarbon biodegradation under anaerobic conditions was provided (for a review, see Widdel et al., 2006). Work with pure cultures and microbial enrichments also led to the elucidation of the biochemistry of anaerobic aliphatic and aromatic hydrocarbon degradation and the identification of central metabolites and genes involved in the process, e.g. (Callaghan et al., 2008; Griebler et al., 2003; Kropp et al., 2000). This information could then be extrapolated to the environment to monitor degradation processes and determine if in situ microbial populations possessed the potential for contaminant bioremediation, e.g. Parisi et al. (2009). While other methods have also been developed to monitor natural attenuation of hydrocarbons (Meckenstock et al., 2004), we are

  15. A permutation-based multiple testing method for time-course microarray experiments

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2009-10-01

    Full Text Available Abstract Background Time-course microarray experiments are widely used to study the temporal profiles of gene expression. Storey et al. (2005) developed a method for analyzing time-course microarray studies that can be applied to discovering genes whose expression trajectories change over time within a single biological group, or those that follow different time trajectories among multiple groups. They estimated the expression trajectories of each gene using natural cubic splines under the null (no time-course) and alternative (time-course) hypotheses, and used a goodness-of-fit test statistic to quantify the discrepancy. The null distribution of the statistic was approximated through a bootstrap method. Gene expression levels in microarray data are often intricately correlated. An accurate type I error control adjusting for multiple testing requires the joint null distribution of test statistics for a large number of genes. For this purpose, permutation methods have been widely used because of their computational ease and intuitive interpretation. Results In this paper, we propose a permutation-based multiple testing procedure based on the test statistic used by Storey et al. (2005). We also propose an efficient computation algorithm. Extensive simulations are conducted to investigate the performance of the permutation-based multiple testing procedure. The application of the proposed method is illustrated using the Caenorhabditis elegans dauer developmental data. Conclusion Our method is computationally efficient and applicable for identifying genes whose expression levels are time-dependent in a single biological group and for identifying the genes for which the time-profile depends on the group in a multi-group setting.
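The permutation principle underlying the procedure can be sketched with a deliberately simple statistic. The example uses an absolute mean difference instead of the spline goodness-of-fit statistic of Storey et al., and the expression values are hypothetical:

```python
import random

def permutation_pvalue(stat, group_a, group_b, n_perm=2000, seed=0):
    """Randomly reassign samples to the two groups and count how often
    the permuted statistic reaches the observed one.  Add-one smoothing
    keeps the estimated p-value strictly positive."""
    rng = random.Random(seed)
    observed = stat(group_a, group_b)
    pooled = list(group_a) + list(group_b)
    k = len(group_a)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if stat(pooled[:k], pooled[k:]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

def mean_diff(a, b):
    return abs(sum(a) / len(a) - sum(b) / len(b))

# Hypothetical expression values for one gene in two groups
p = permutation_pvalue(mean_diff,
                       [5.1, 5.3, 5.2, 5.4],
                       [2.0, 2.1, 1.9, 2.2])
```

In the multiple-testing setting, the same permutations are applied to all genes at once so that the joint null distribution — and hence the correlation between genes — is preserved.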

  16. Microarray-based IgE detection in tears of patients with vernal keratoconjunctivitis.

    Science.gov (United States)

    Leonardi, Andrea; Borghesan, Franco; Faggian, Diego; Plebani, Mario

    2015-11-01

    A specific allergen sensitization can be demonstrated by conventional allergy tests in approximately half of vernal keratoconjunctivitis (VKC) patients. The measurement of specific IgE in tears using a multiplex allergen microarray may offer advantages in identifying local sensitization to a specific allergen. In spring-summer 2011, serum and tear samples were collected from 10 active VKC patients (three females, seven males) and 10 age-matched normal subjects. Skin prick tests, symptom scoring and a full ophthalmological examination were performed. Specific serum and tear IgE were assayed using ImmunoCAP ISAC, a microarray containing 103 components derived from 47 allergens. Normal subjects were negative for specific IgE in both serum and tears. Of the 10 VKC patients, six were positive for specific IgE in serum and/or tears. In three of these six patients, specific IgE was found only in tears. Cross-reactivity between specific markers was found in three patients. Grass, tree, mite, animal but also food allergen-specific IgE were found in tears. A conjunctival provocation test performed out of season confirmed the specific local conjunctival reactivity. Multiple specific IgE measurements with single protein allergens using a microarray technique in tear samples are a useful, simple and non-invasive diagnostic tool. ImmunoCAP ISAC detects allergen sensitization at the component level and adds important information by defining both cross- and co-sensitization to a large variety of allergen molecules. The presence of specific IgE only in tears of VKC patients reinforces the concept of possible local sensitization.

  17. Genome rearrangements detected by SNP microarrays in individuals with intellectual disability referred with possible Williams syndrome.

    Directory of Open Access Journals (Sweden)

    Ariel M Pani

    2010-08-01

    Full Text Available Intellectual disability (ID) affects 2-3% of the population and may occur with or without multiple congenital anomalies (MCA) or other medical conditions. Established genetic syndromes and visible chromosome abnormalities account for a substantial percentage of ID diagnoses, although for approximately 50% the molecular etiology is unknown. Individuals with features suggestive of various syndromes but lacking their associated genetic anomalies pose a formidable clinical challenge. With the advent of microarray techniques, submicroscopic genome alterations not associated with known syndromes are emerging as a significant cause of ID and MCA. High-density SNP microarrays were used to determine genome-wide copy number in 42 individuals: 7 with confirmed alterations in the WS region but atypical clinical phenotypes, 31 with ID and/or MCA, and 4 controls. One individual from the first group had the most telomeric gene in the WS critical region deleted along with 2 Mb of flanking sequence. A second person had the classic WS deletion and a rearrangement on chromosome 5p within the Cri du Chat syndrome (OMIM 123450) region. Six individuals from the ID/MCA group had large rearrangements (3 deletions, 3 duplications), one of whom had a large inversion associated with a deletion that was not detected by the SNP arrays. Combining SNP microarray analyses and qPCR allowed us to clone and sequence 21 deletion breakpoints in individuals with atypical deletions in the WS region and/or ID or MCA. Comparison of these breakpoints to databases of genomic variation revealed that 52% occurred in regions harboring structural variants in the general population. For two probands the genomic alterations were flanked by segmental duplications, which frequently mediate recurrent genome rearrangements; these may represent new genomic disorders. While SNP arrays and related technologies can identify potentially pathogenic deletions and duplications, obtaining sequence information

  18. DNA microarray data and contextual analysis of correlation graphs

    Directory of Open Access Journals (Sweden)

    Hingamp Pascal

    2003-04-01

    Full Text Available Abstract Background DNA microarrays are used to produce large sets of expression measurements from which specific biological information is sought. Their analysis requires efficient and reliable algorithms for dimensional reduction, classification and annotation. Results We study networks of co-expressed genes obtained from DNA microarray experiments. The mathematical concept of curvature on graphs is used to group genes or samples into clusters to which relevant gene or sample annotations are automatically assigned. Application to publicly available yeast and human lymphoma data demonstrates the reliability of the method in spite of its simplicity, especially with respect to the small number of parameters involved. Conclusions We provide a method for automatically determining relevant gene clusters among the many genes monitored with microarrays. The automatic annotations and the graphical interface improve the readability of the data. A C++ implementation, called Trixy, is available from http://tagc.univ-mrs.fr/bioinformatics/trixy.html.

  19. MICROARRAY IMAGE GRIDDING USING GRID LINE REFINEMENT TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-05-01

    Full Text Available An important stage in microarray image analysis is gridding. Microarray image gridding is done to locate the sub-arrays in a microarray image and to find the co-ordinates of the spots within each sub-array. For accurate identification of spots, most of the proposed gridding methods require human intervention. In this paper a fully automatic gridding method is used, which enhances spot intensity in the preprocessing step using a histogram-based threshold method. The gridding step finds the co-ordinates of the spots from the horizontal and vertical profiles of the image. To correct errors due to grid line placement, a grid line refinement technique is proposed. The algorithm is applied to different image databases and the results are compared based on spot detection accuracy and time. An average spot detection accuracy of 95.06% demonstrates the proposed method's flexibility and accuracy in finding the spot co-ordinates for different database images.
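The profile-based step — projecting the image onto its axes and reading spot positions off runs in the projections — can be sketched as follows. The threshold and toy image are illustrative; the paper's method adds preprocessing and the grid-line refinement on top of this idea:

```python
def profile_peaks(profile, threshold):
    """Centers of runs where a 1-D projection exceeds the threshold."""
    peaks, start = [], None
    for i, v in enumerate(profile):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            peaks.append((start + i - 1) // 2)
            start = None
    if start is not None:
        peaks.append((start + len(profile) - 1) // 2)
    return peaks

def spot_coordinates(image, threshold=0.0):
    """Spot (row, col) centers from horizontal and vertical projections."""
    row_profile = [sum(r) for r in image]
    col_profile = [sum(c) for c in zip(*image)]
    return [(y, x)
            for y in profile_peaks(row_profile, threshold)
            for x in profile_peaks(col_profile, threshold)]

# Toy 7x7 image with four bright spots on a 2x2 grid
image = [[0] * 7 for _ in range(7)]
for y in (1, 5):
    for x in (1, 5):
        image[y][x] = 1
coords = spot_coordinates(image)
```

On real images the projections are noisy, which is exactly why a refinement pass over the initially placed grid lines is needed.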

  20. A Versatile Microarray Platform for Capturing Rare Cells

    Science.gov (United States)

    Brinkmann, Falko; Hirtz, Michael; Haller, Anna; Gorges, Tobias M.; Vellekoop, Michael J.; Riethdorf, Sabine; Müller, Volkmar; Pantel, Klaus; Fuchs, Harald

    2015-10-01

    Analyses of rare events occurring at extremely low frequencies in body fluids are still challenging. We established a versatile microarray-based platform able to capture single target cells from large background populations. As a use case we chose the challenging application of detecting circulating tumor cells (CTCs) - about one cell in a billion normal blood cells. After incubation with an antibody cocktail, targeted cells are extracted on a microarray in a microfluidic chip. The accessibility of our platform allows for subsequent recovery of targets for further analysis. The microarray facilitates exclusion of false-positive capture events by co-localization, allowing for detection without fluorescent labelling. Analyzing blood samples from cancer patients with our platform reached, and partly exceeded, gold-standard performance, demonstrating feasibility for clinical application. Clinical researchers' free choice of antibody cocktail, without the need for altered chip manufacturing or incubation protocols, allows virtually arbitrary targeting of capture species and therefore widespread applications in the biomedical sciences.

  1. Expanding the substantial interactome of NEMO using protein microarrays.

    LENUS (Irish Health Repository)

    Fenner, Beau J

    2010-01-01

    Signal transduction by the NF-kappaB pathway is a key regulator of a host of cellular responses to extracellular and intracellular messages. The NEMO adaptor protein lies at the top of this pathway and serves as a molecular conduit, connecting signals transmitted from upstream sensors to the downstream NF-kappaB transcription factor and subsequent gene activation. The position of NEMO within this pathway makes it an attractive target from which to search for new proteins that link NF-kappaB signaling to additional pathways and upstream effectors. In this work, we have used protein microarrays to identify novel NEMO interactors. A total of 112 protein interactors were identified, with the most statistically significant hit being the canonical NEMO interactor IKKbeta, with IKKalpha also being identified. Of the novel interactors, more than 30% were kinases, while at least 25% were involved in signal transduction. Binding of NEMO to several interactors, including CALB1, CDK2, SAG, SENP2 and SYT1, was confirmed using GST pulldown assays and coimmunoprecipitation, validating the initial screening approach. Overexpression of CALB1, CDK2 and SAG was found to stimulate transcriptional activation by NF-kappaB, while SYT1 overexpression repressed TNFalpha-dependent NF-kappaB transcriptional activation in human embryonic kidney cells. Corresponding with this finding, RNA silencing of CDK2, SAG and SENP2 reduced NF-kappaB transcriptional activation, supporting a positive role for these proteins in the NF-kappaB pathway. The identification of a host of new NEMO interactors opens up new research opportunities to improve understanding of this essential cell signaling pathway.

  2. Immunohistochemical analysis of breast tissue microarray images using contextual classifiers

    Directory of Open Access Journals (Sweden)

    Stephen J McKenna

    2013-01-01

    Full Text Available Background: Tissue microarrays (TMAs are an important tool in translational research for examining multiple cancers for molecular and protein markers. Automatic immunohistochemical (IHC scoring of breast TMA images remains a challenging problem. Methods: A two-stage approach that involves localization of regions of invasive and in-situ carcinoma followed by ordinal IHC scoring of nuclei in these regions is proposed. The localization stage classifies locations on a grid as tumor or non-tumor based on local image features. These classifications are then refined using an auto-context algorithm called spin-context. Spin-context uses a series of classifiers to integrate image feature information with spatial context information in the form of estimated class probabilities. This is achieved in a rotationally-invariant manner. The second stage estimates ordinal IHC scores in terms of the strength of staining and the proportion of nuclei stained. These estimates take the form of posterior probabilities, enabling images with uncertain scores to be referred for pathologist review. Results: The method was validated against manual pathologist scoring on two nuclear markers, progesterone receptor (PR and estrogen receptor (ER. Errors for PR data were consistently lower than those achieved with ER data. Scoring was in terms of estimated proportion of cells that were positively stained (scored on an ordinal scale of 0-6 and perceived strength of staining (scored on an ordinal scale of 0-3. Average absolute differences between predicted scores and pathologist-assigned scores were 0.74 for proportion of cells and 0.35 for strength of staining (PR. Conclusions: The use of context information via spin-context improved the precision and recall of tumor localization. The combination of the spin-context localization method with the automated scoring method resulted in reduced IHC scoring errors.

  3. Approximate Matching of Hierarchical Data

    DEFF Research Database (Denmark)

    Augsten, Nikolaus

    The pq-grams of a tree are all its subtrees of a particular shape. Intuitively, two trees are similar if they have many pq-grams in common. The pq-gram distance is an efficient and effective approximation of the tree edit distance. We analyze the properties of the pq-gram distance and compare it with the tree edit...
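
    The pq-gram idea summarized above pairs p ancestor labels with q consecutive child labels (padded with a dummy symbol) and compares the resulting bags of grams. A minimal Python sketch of that scheme, assuming trees are given as (label, children) tuples; function names are illustrative, not from the paper:

```python
from collections import Counter

def pq_grams(tree, p=2, q=2):
    """Collect the bag of pq-grams of a tree given as (label, children)."""
    grams = Counter()
    def walk(node, ancestors):
        label, children = node
        anc = (ancestors + [label])[-p:]
        anc = ["*"] * (p - len(anc)) + anc           # pad missing ancestors
        if children:
            ext = ["*"] * (q - 1) + [c[0] for c in children] + ["*"] * (q - 1)
        else:
            ext = ["*"] * q                          # leaf: one all-dummy window
        for i in range(len(ext) - q + 1):
            grams[tuple(anc + ext[i:i + q])] += 1
        for child in children:
            walk(child, ancestors + [label])
    walk(tree, [])
    return grams

def pq_gram_distance(t1, t2, p=2, q=2):
    """0 for identical trees, approaching 1 as the gram bags become disjoint."""
    g1, g2 = pq_grams(t1, p, q), pq_grams(t2, p, q)
    common = sum((g1 & g2).values())
    return 1.0 - 2.0 * common / (sum(g1.values()) + sum(g2.values()))
```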

  4. Approximation of Surfaces by Cylinders

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1998-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  5. All-Norm Approximation Algorithms

    NARCIS (Netherlands)

    Azar, Yossi; Epstein, Leah; Richter, Yossi; Woeginger, Gerhard J.; Penttonen, Martti; Meineche Schmidt, Erik

    2002-01-01

    A major drawback in optimization problems and in particular in scheduling problems is that for every measure there may be a different optimal solution. In many cases the various measures are different ℓ p norms. We address this problem by introducing the concept of an All-norm ρ-approximation

  6. Truthful approximations to range voting

    DEFF Research Database (Denmark)

    Filos-Ratsika, Aris; Miltersen, Peter Bro

    We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare...

  7. On badly approximable complex numbers

    DEFF Research Database (Denmark)

    Esdahl-Schou, Rune; Kristensen, S.

    We show that the set of complex numbers which are badly approximable by ratios of elements of , where has maximal Hausdorff dimension. In addition, the intersection of these sets is shown to have maximal dimension. The results remain true when the sets in question are intersected with a suitably...

  8. Approximate reasoning in decision analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, M M; Sanchez, E

    1982-01-01

    The volume aims to incorporate the recent advances in both theory and applications. It contains 44 articles by 74 contributors from 17 different countries. The topics considered include: membership functions; composite fuzzy relations; fuzzy logic and inference; classifications and similarity measures; expert systems and medical diagnosis; psychological measurements and human behaviour; approximate reasoning and decision analysis; and fuzzy clustering algorithms.

  9. Pythagorean Approximations and Continued Fractions

    Science.gov (United States)

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…

  10. Ultrafast Approximation for Phylogenetic Bootstrap

    NARCIS (Netherlands)

    Bui Quang Minh; Nguyen, Thi; von Haeseler, Arndt

    Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and

  11. AMDA: an R package for the automated microarray data analysis

    Directory of Open Access Journals (Sweden)

    Foti Maria

    2006-07-01

    Full Text Available Abstract Background Microarrays are routinely used to assess mRNA transcript levels on a genome-wide scale. Large amounts of microarray data are now available in several databases, and new experiments are constantly being performed. In spite of this fact, few and limited tools exist for quickly and easily analyzing the results. Microarray analysis can be challenging for researchers without the necessary training, and it can be time-consuming for service providers with many users. Results To address these problems we have developed an automated microarray data analysis (AMDA) software package, which provides scientists with an easy and integrated system for the analysis of Affymetrix microarray experiments. AMDA is free and is available as an R package. It is based on the Bioconductor project, which provides a number of powerful bioinformatics and microarray analysis tools. This automated pipeline integrates different functions available in the R and Bioconductor projects with newly developed functions. AMDA covers all of the steps, performing a full data analysis, including image analysis, quality controls, normalization, selection of differentially expressed genes, clustering, correspondence analysis and functional evaluation. Finally, a LaTeX document is dynamically generated depending on the performed analysis steps. The generated report contains comments and analysis results as well as references to several files for a deeper investigation. Conclusion AMDA is freely available as an R package under the GPL license. The package as well as an example analysis report can be downloaded in the Services/Bioinformatics section of the Genopolis website: http://www.genopolis.it/

  12. Dynamical cluster approximation plus semiclassical approximation study for a Mott insulator and d-wave pairing

    Science.gov (United States)

    Kim, SungKun; Lee, Hunpyo

    2017-06-01

    Via a dynamical cluster approximation with N c = 4 in combination with a semiclassical approximation (DCA+SCA), we study the doped two-dimensional Hubbard model. We obtain a plaquette antiferromagnetic (AF) Mott insulator, a plaquette AF ordered metal, a pseudogap (or d-wave superconductor) and a paramagnetic metal by tuning the doping concentration. These features are similar to the behaviors observed in copper-oxide superconductors and are in qualitative agreement with the results calculated by the cluster dynamical mean field theory with the continuous-time quantum Monte Carlo (CDMFT+CTQMC) approach. The results of our DCA+SCA differ from those of the CDMFT+CTQMC approach in that the d-wave superconducting order parameters persist even in the highly doped region. We think that the strong plaquette AF orderings in the dynamical cluster approximation (DCA) with N c = 4 suppress superconducting states with increasing doping up to the strongly doped region, because frozen dynamical fluctuations in the semiclassical approximation (SCA) approach are unable to destroy those orderings. Our present calculation with short-range spatial fluctuations is an initial study; the SCA can treat long-range spatial fluctuations in feasible computational times, beyond the reach of the CDMFT+CTQMC tool. We believe that our future DCA+SCA calculations will supply information on fully momentum-resolved physical properties, which could be compared with results measured by angle-resolved photoemission spectroscopy experiments.

  13. Label and Label-Free Detection Techniques for Protein Microarrays

    Directory of Open Access Journals (Sweden)

    Amir Syahir

    2015-04-01

    Full Text Available Protein microarray technology has gone through numerous innovative developments in recent decades. In this review, we focus on the development of protein detection methods embedded in the technology. Early microarrays utilized useful chromophores and versatile biochemical techniques dominated by high-throughput illumination. Recently, the realization of label-free techniques has been greatly advanced by the combination of knowledge in material sciences, computational design and nanofabrication. These rapidly advancing techniques aim to provide data without the intervention of label molecules. Here, we present a brief overview of this remarkable innovation from the perspectives of label and label-free techniques in transducing nano‑biological events.

  14. Advanced Data Mining of Leukemia Cells Micro-Arrays

    OpenAIRE

    Richard S. Segall; Ryan M. Pierce

    2009-01-01

    This paper provides continuation and extensions of previous research by Segall and Pierce (2009a) that discussed data mining of micro-array databases of Leukemia cells, primarily using self-organized maps (SOM). As in Segall and Pierce (2009a) and Segall and Pierce (2009b), the results of applying data mining are shown and discussed for the data categories of microarray databases of HL60, Jurkat, NB4 and U937 Leukemia cells, which are also described in this article. First, a background section is pro...

  15. Fabrication of Biomolecule Microarrays for Cell Immobilization Using Automated Microcontact Printing.

    Science.gov (United States)

    Foncy, Julie; Estève, Aurore; Degache, Amélie; Colin, Camille; Cau, Jean Christophe; Malaquin, Laurent; Vieu, Christophe; Trévisiol, Emmanuelle

    2018-01-01

    Biomolecule microarrays are generally produced by a conventional microarrayer, i.e., by contact or inkjet printing. Microcontact printing represents an alternative way of depositing biomolecules on solid supports, but even though various biomolecules have been successfully microcontact printed, the routine production of biomolecule microarrays by microcontact printing remains a challenging task and needs an effective, fast, robust, and low-cost automation process. Here, we describe the production of biomolecule microarrays composed of extracellular matrix protein for the fabrication of cell microarrays by using an automated microcontact printing device. Large-scale cell microarrays can be reproducibly obtained by this method.

  16. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    Science.gov (United States)

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically-relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.
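
    The Regularized and Shrinkage t tests compared above stabilise the denominator of the t statistic, which is noisy when only a few replicates are available. A minimal sketch of one common regularized form, adding a fixed offset s0 to the pooled standard error; the offset value and function name are illustrative assumptions, not the benchmarked implementations:

```python
import statistics

def regularized_t(x, y, s0=0.05):
    """Two-sample t statistic with a small constant s0 added to the
    standard error to damp spuriously large statistics at low variance."""
    nx, ny = len(x), len(y)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    pooled = ((nx - 1) * statistics.variance(x)
              + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    se = (pooled * (1.0 / nx + 1.0 / ny)) ** 0.5
    return (mx - my) / (se + s0)
```

    A larger s0 shrinks the statistic toward zero, trading sensitivity for robustness at small replicate counts.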

  17. Beyond the random phase approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Thygesen, Kristian S.

    2013-01-01

    We assess the performance of a recently proposed renormalized adiabatic local density approximation (rALDA) for ab initio calculations of electronic correlation energies in solids and molecules. The method is an extension of the random phase approximation (RPA) derived from time-dependent density functional theory and the adiabatic connection fluctuation-dissipation theorem and contains no fitted parameters. The new kernel is shown to preserve the accurate description of dispersive interactions from RPA while significantly improving the description of short-range correlation in molecules, insulators, and metals. For molecular atomization energies, the rALDA is a factor of 7 better than RPA and a factor of 4 better than the Perdew-Burke-Ernzerhof (PBE) functional when compared to experiments, and a factor of 3 (1.5) better than RPA (PBE) for cohesive energies of solids. For transition metals...

  18. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given

  19. Approximated solutions to Born-Infeld dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Ferraro, Rafael [Instituto de Astronomía y Física del Espacio (IAFE, CONICET-UBA),Casilla de Correo 67, Sucursal 28, 1428 Buenos Aires (Argentina); Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina); Nigro, Mauro [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina)

    2016-02-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  20. The Hartree-Fock seniority approximation

    International Nuclear Information System (INIS)

    Gomez, J.M.G.; Prieto, C.

    1986-01-01

    A new self-consistent method is used to take into account the mean-field and the pairing correlations in nuclei at the same time. We call it the Hartree-Fock seniority approximation, because the long-range and short-range correlations are treated in the frameworks of Hartree-Fock theory and the seniority scheme. The method is developed in detail for a minimum-seniority variational wave function in the coordinate representation for an effective interaction of the Skyrme type. An advantage of the present approach over the Hartree-Fock-Bogoliubov theory is the exact conservation of angular momentum and particle number. Furthermore, the computational effort required in the Hartree-Fock seniority approximation is similar to that of the pure Hartree-Fock picture. Some numerical calculations for Ca isotopes are presented. (orig.)

  1. Analytical Ballistic Trajectories with Approximately Linear Drag

    Directory of Open Access Journals (Sweden)

    Giliam J. P. de Carpentier

    2014-01-01

    Full Text Available This paper introduces a practical analytical approximation of projectile trajectories in 2D and 3D roughly based on a linear drag model and explores a variety of different planning algorithms for these trajectories. Although the trajectories are only approximate, they still capture many of the characteristics of a real projectile in free fall under the influence of an invariant wind, gravitational pull, and terminal velocity, while the required math for these trajectories and planners is still simple enough to efficiently run on almost all modern hardware devices. Together, these properties make the proposed approach particularly useful for real-time applications where accuracy and performance need to be carefully balanced, such as in computer games.
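
    Under a linear drag model like the one the paper builds on, velocity relaxes exponentially toward a terminal velocity set by wind and gravity, which integrates to a closed-form position. A minimal sketch of that closed form; parameter names are illustrative, and the paper's exact formulation may differ:

```python
import math

def ballistic_position(p0, v0, t, k=1.0, wind=(0.0, 0.0), g=(0.0, -9.81)):
    """Closed-form position under drag linear in velocity: velocity
    relaxes exponentially (rate k) toward the terminal velocity
    v_inf = wind + g/k, and the position integrates accordingly."""
    decay = math.exp(-k * t)
    pos = []
    for i in range(len(p0)):
        v_inf = wind[i] + g[i] / k
        pos.append(p0[i] + v_inf * t + (v0[i] - v_inf) * (1.0 - decay) / k)
    return tuple(pos)
```

    Because the formula is evaluated directly at any t, trajectory planners can query positions without numerical integration, which is what makes the approach cheap enough for real-time use.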

  2. Approximated solutions to Born-Infeld dynamics

    International Nuclear Information System (INIS)

    Ferraro, Rafael; Nigro, Mauro

    2016-01-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  3. Sensitivity and fidelity of DNA microarray improved with integration of Amplified Differential Gene Expression (ADGE)

    Directory of Open Access Journals (Sweden)

    Ile Kristina E

    2003-07-01

    Full Text Available Abstract Background The ADGE technique is a method designed to magnify the ratios of gene expression before detection. It improves detection sensitivity to small changes in gene expression and requires a small amount of starting material. However, the throughput of ADGE is low. We integrated ADGE with DNA microarray (ADGE microarray) and compared it with regular microarray. Results When ADGE was integrated with DNA microarray, a quantitative relationship of a power function between detected and input ratios was found. Because of ratio magnification, ADGE microarray was better able to detect small changes in gene expression in a drug-resistant model cell line system. The PCR amplification of templates and efficient labeling reduced the requirement for starting material to as little as 125 ng of total RNA for one slide hybridization and enhanced the signal intensity. Integration of ratio magnification, template amplification and efficient labeling in ADGE microarray reduced artifacts in microarray data and improved detection fidelity. The results of ADGE microarray were less variable and more reproducible than those of regular microarray. A gene expression profile generated with ADGE microarray characterized the drug-resistant phenotype, particularly with reference to glutathione, proliferation and kinase pathways. Conclusion ADGE microarray magnified the ratios of differential gene expression in a power function, improved detection sensitivity and fidelity, and reduced the requirement for starting material while maintaining high throughput. ADGE microarray generated a more informative expression pattern than regular microarray.
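
    A power-function relationship like the one reported between detected and input ratios is linear in log-log space, so it can be recovered from calibration pairs with an ordinary least-squares fit. A minimal sketch; the helper is hypothetical, not from the paper:

```python
import math

def fit_power(inputs, detected):
    """Least-squares fit of detected ~ a * input**b in log-log space.
    Both lists must contain positive values."""
    xs = [math.log(v) for v in inputs]
    ys = [math.log(v) for v in detected]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b
```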

  4. Exploring matrix factorization techniques for significant genes identification of Alzheimer’s disease microarray gene expression data

    Directory of Open Access Journals (Sweden)

    Hu Xiaohua

    2011-07-01

    Full Text Available Abstract Background The wide use of high-throughput DNA microarray technology provides an increasingly detailed view of the human transcriptome, from hundreds to thousands of genes. Although biomedical researchers typically design microarray experiments to explore specific biological contexts, the relationships between genes are hard to identify because the data are complex, noisy and high-dimensional, and analyses are often hindered by low statistical power. The main challenge now is to extract valuable biological information from the colossal amount of data to gain insight into biological processes and the mechanisms of human disease. Meeting this challenge requires mathematical and computational methods that are versatile enough to capture the underlying biological features and simple enough to be applied efficiently to large datasets. Methods Unsupervised machine learning approaches provide new and efficient ways to analyse gene expression profiles. In our study, two unsupervised knowledge-based matrix factorization methods, independent component analysis (ICA) and nonnegative matrix factorization (NMF), are integrated to identify significant genes and related pathways in a microarray gene expression dataset of Alzheimer's disease. The advantage of these two approaches is that they can be performed as biclustering methods, by which genes and conditions are clustered simultaneously. Furthermore, they can group genes into different categories for identifying related diagnostic pathways and regulatory networks. The difference between the two methods lies in that ICA assumes statistical independence of the expression modes, while NMF uses positivity constraints to generate localized gene expression profiles. Results In our work, we performed FastICA and non-smooth NMF methods on DNA microarray gene expression data of Alzheimer's disease respectively. The simulation results show that both methods can clearly classify severe AD samples from control samples, and
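
    Of the two factorizations discussed, NMF approximates a nonnegative expression matrix V by nonnegative factors W and H, commonly via the Lee-Seung multiplicative updates. A minimal pure-Python sketch of that plain variant, for illustration only (the paper uses the non-smooth NMF extension):

```python
import random

def matmul(A, B):
    """Dense matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, rank, iters=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimising ||V - W*H||_F,
    keeping W (n x rank) and H (rank x m) nonnegative throughout."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.5 for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() + 0.5 for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(m)] for i in range(rank)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(rank)] for i in range(n)]
    return W, H
```

    Rows of H can then be read as expression "modes" and columns of W as each gene's loading on those modes, which is what enables the biclustering interpretation mentioned above.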

  5. WKB approximation in atomic physics

    International Nuclear Information System (INIS)

    Karnakov, Boris Mikhailovich

    2013-01-01

    Provides extensive coverage of the Wentzel-Kramers-Brillouin approximation and its applications. Presented as a sequence of problems with highly detailed solutions. Gives a concise introduction for calculating Rydberg states, potential barriers and quasistationary systems. This book has evolved from lectures devoted to applications of the Wentzel-Kramers-Brillouin (WKB, or quasi-classical) approximation and of the method of 1/N-expansion for solving various problems in atomic and nuclear physics. The intent of this book is to help students and investigators in this field to extend their knowledge of these important calculation methods in quantum mechanics. Much material is contained herein that is not to be found elsewhere. The WKB approximation, while constituting a fundamental area in atomic physics, has not been the focus of many books. A novel method has been adopted for the presentation of the subject matter: the material is presented as a succession of problems, each followed by a detailed solution. The methods introduced are then used to calculate Rydberg states in atomic systems and to evaluate potential barriers and quasistationary states. Finally, adiabatic transitions and ionization of quantum systems are covered.

  6. The tissue microarray OWL schema: An open-source tool for sharing tissue microarray data

    Directory of Open Access Journals (Sweden)

    Hyunseok P Kang

    2010-01-01

    Full Text Available Background: Tissue microarrays (TMAs) are enormously useful tools for translational research, but incompatibilities in database systems between various researchers and institutions prevent the efficient sharing of data that could help realize their full potential. Resource Description Framework (RDF) provides a flexible method to represent knowledge in triples, which take the form Subject-Predicate-Object. All data resources are described using Uniform Resource Identifiers (URIs), which are global in scope. We present an OWL (Web Ontology Language) schema that expands upon the TMA data exchange specification to address this issue and assist in data sharing and integration. Methods: A minimal OWL schema was designed containing only concepts specific to TMA experiments. More general data elements were incorporated from predefined ontologies such as the NCI thesaurus. URIs were assigned using the Linked Data format. Results: We present examples of files utilizing the schema and conversion of XML data (similar to the TMA DES) to OWL. Conclusion: By utilizing predefined ontologies and globally unique identifiers, this OWL schema provides a solution to the limitations of XML, which represents concepts defined in a localized setting. This will help increase the utilization of tissue resources, facilitating collaborative translational research efforts.

  7. Kernel Based Nonlinear Dimensionality Reduction and Classification for Genomic Microarray

    Directory of Open Access Journals (Sweden)

    Lan Shu

    2008-07-01

    Full Text Available Genomic microarrays are powerful research tools in bioinformatics and modern medicinal research because they enable massively-parallel assays and simultaneous monitoring of thousands of gene expression levels in biological samples. However, a simple microarray experiment often leads to very high-dimensional data and a huge amount of information, challenging researchers to extract the important features and reduce the high dimensionality. In this paper, a kernel-based nonlinear dimensionality reduction method built on locally linear embedding (LLE) is proposed, and a fuzzy K-nearest neighbors algorithm, which denoises datasets, is introduced as a replacement for the classical KNN step of LLE. In addition, a kernel-based support vector machine (SVM) is used to classify genomic microarray data sets in this paper. We demonstrate the application of the techniques to two published DNA microarray data sets. The experimental results confirm the superiority and high success rates of the presented method.
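
    The fuzzy K-nearest neighbors step mentioned above replaces hard neighbor votes with class memberships weighted by inverse distance, in the spirit of Keller's fuzzy k-NN. A minimal sketch; the function name and parameters are illustrative assumptions, not the paper's implementation:

```python
def fuzzy_knn_memberships(train, labels, query, k=3, m=2.0):
    """Fuzzy k-NN: class memberships for `query` from its k nearest
    training points, weighted by 1/distance**(2/(m-1))."""
    neighbours = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)) ** 0.5, y)
        for x, y in zip(train, labels)
    )[:k]
    weights = {}
    for d, y in neighbours:
        w = 1.0 / (d ** (2.0 / (m - 1.0)) + 1e-12)  # guard against d == 0
        weights[y] = weights.get(y, 0.0) + w
    total = sum(weights.values())
    return {c: weights.get(c, 0.0) / total for c in set(labels)}
```

    The soft memberships make it easy to discount points that sit between classes, which is how the denoising role described in the abstract can be realised.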

  8. The microarray detecting six fruit-tree viruses

    Czech Academy of Sciences Publication Activity Database

    Lenz, Ondřej; Petrzik, Karel; Špak, Josef

    2009-01-01

    Roč. 148, July (2009), s. 27 ISSN 1866-590X. [International Conference on Virus and other Graft Transmissible Diseases of Fruit Crops /21./. 05.07.2009-10.07.2009, Neustadt] R&D Projects: GA MŠk OC 853.001 Institutional research plan: CEZ:AV0Z50510513 Keywords : microarray * detection * virus Subject RIV: EE - Microbiology, Virology

  9. GenePublisher: automated analysis of DNA microarray data

    DEFF Research Database (Denmark)

    Knudsen, Steen; Workman, Christopher; Sicheritz-Ponten, T.

    2003-01-01

    GenePublisher, a system for automatic analysis of data from DNA microarray experiments, has been implemented with a web interface at http://www.cbs.dtu.dk/services/GenePublisher. Raw data are uploaded to the server together with a specification of the data. The server performs normalization

  10. Towards a programmable magnetic bead microarray in a microfluidic channel

    DEFF Research Database (Denmark)

    Smistrup, Kristian; Bruus, Henrik; Hansen, Mikkel Fougt

    2007-01-01

    to use larger currents and obtain forces of longer range than from thin current lines at a given power limit. Guiding of magnetic beads in the hybrid magnetic separator and the construction of a programmable microarray of magnetic beads in the microfluidic channel by hydrodynamic focusing is presented....

  11. Comparison of Comparative Genomic Hybridization Technologies across Microarray Platforms

    Science.gov (United States)

    In the 2007 Association of Biomolecular Resource Facilities (ABRF) Microarray Research Group (MARG) project, we analyzed HL-60 DNA with five platforms: Agilent, Affymetrix 500K, Affymetrix U133 Plus 2.0, Illumina, and RPCI 19K BAC arrays. Copy number variation (CNV) was analyzed ...

  12. Microarrays: Molecular allergology and nanotechnology for personalised medicine (II).

    Science.gov (United States)

    Lucas, J M

    2010-01-01

    Progress in nanotechnology and DNA recombination techniques has produced tools for the diagnosis and investigation of allergy at the molecular level. The most advanced examples of such progress are the microarray techniques, which have expanded not only into research in the field of proteomics but also into application in the clinical setting. Microarrays of allergenic components offer results relating to hundreds of allergenic components in a single test, using a small amount of serum which can be obtained from capillary blood. The availability of new molecules will allow the development of panels including new allergenic components and sources, which will require evaluation for clinical use. Their application opens the door to component-based diagnosis, to the holistic perception of sensitisation as represented by molecular allergy, and to patient-centred medical practice by allowing great diagnostic accuracy and the definition of individualised immunotherapy for each patient. The present article reviews the application of allergenic component microarrays to allergology for diagnosis, management in the form of specific immunotherapy, and epidemiological studies. A review is also made of the use of protein and gene microarray techniques in basic research and in allergological diseases. Lastly, an evaluation is made of the challenges we face in introducing such techniques to clinical practice, and of the future perspectives of this new technology. Copyright 2010 SEICAP. Published by Elsevier Espana. All rights reserved.

  13. Broad spectrum microarray for fingerprint-based bacterial species identification

    Directory of Open Access Journals (Sweden)

    Frey Jürg E

    2010-02-01

    Full Text Available Abstract Background Microarrays are powerful tools for DNA-based molecular diagnostics and identification of pathogens. Most target a limited range of organisms and are based on only one or a very few genes for specific identification. Such microarrays are limited to organisms for which specific probes are available, and often have difficulty discriminating closely related taxa. We have developed an alternative broad-spectrum microarray that employs hybridisation fingerprints generated by high-density anonymous markers distributed over the entire genome for identification based on comparison to a reference database. Results A high-density microarray carrying 95,000 unique 13-mer probes was designed. Optimized methods were developed to deliver reproducible hybridisation patterns that enabled confident discrimination of bacteria at the species, subspecies, and strain levels. High correlation coefficients were achieved between replicates. A sub-selection of 12,071 probes, determined by ANOVA and class prediction analysis, enabled the discrimination of all samples in our panel. Mismatch probe hybridisation was observed but was found to have no effect on the discriminatory capacity of our system. Conclusions These results indicate the potential of our genome chip for reliable identification of a wide range of bacterial taxa at the subspecies level without laborious prior sequencing and probe design. With its high resolution capacity, our proof-of-principle chip demonstrates great potential as a tool for molecular diagnostics of broad taxonomic groups.

  14. Exploiting fluorescence for multiplex immunoassays on protein microarrays

    International Nuclear Information System (INIS)

    Herbáth, Melinda; Balogh, Andrea; Matkó, János; Papp, Krisztián; Prechl, József

    2014-01-01

Protein microarray technology is becoming the method of choice for identifying protein interaction partners, detecting specific proteins, carbohydrates and lipids, or for characterizing protein interactions and serum antibodies in a massively parallel manner. Availability of the well-established instrumentation of DNA arrays and the development of new fluorescent detection instruments have promoted the spread of this technique. Fluorescent detection has the advantages of high sensitivity, specificity, simplicity and the wide dynamic range required by most measurements. Fluorescence through specifically designed probes and an increasing variety of detection modes offers an excellent tool for such microarray platforms. Measuring, for example, the levels of antibodies, their isotypes and/or antigen specificity simultaneously can offer more complex and comprehensive information about the investigated biological phenomenon, especially considering that hundreds of samples can be measured in a single assay. Not only body fluids, but also cell lysates, extracted cellular components, and intact living cells can be analyzed on protein arrays for monitoring functional responses to samples printed on the surface. As a rapidly evolving area, protein microarray technology offers a wealth of information and a new depth of knowledge. These features endow protein arrays with wide applicability and robust sample-analysis capability. On the whole, protein arrays are emerging tools not just in proteomics but also in glycomics and lipidomics, and are likewise important for immunological research. In this review we attempt to summarize the technical aspects of planar fluorescent microarray technology along with a description of its main immunological applications. (topical review)

  15. Development of DNA Microarrays for Metabolic Pathway and Bioprocess Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Gregory Stephanopoulos

    2004-07-31

Transcriptional profiling experiments utilizing DNA microarrays to study the intracellular accumulation of PHB in Synechocystis have proved difficult, in large part because strains that show differences in PHB accumulation significant enough to justify global analysis of gene expression have not been isolated.

  16. SNP typing on the NanoChip electronic microarray

    DEFF Research Database (Denmark)

    Børsting, Claus; Sanchez Sanchez, Juan Jose; Morling, Niels

    2005-01-01

We describe a single nucleotide polymorphism (SNP) typing protocol developed for the NanoChip electronic microarray. The NanoChip array consists of 100 electrodes covered by a thin hydrogel layer containing streptavidin. An electric current can be applied to one, several, or all electrodes...

  17. Application of Microarray technology in research and diagnostics

    DEFF Research Database (Denmark)

    Helweg-Larsen, Rehannah Borup

    The overall purpose of this thesis is to evaluate the use of microarray analysis to investigate the transcriptome of human cancers and human follicular cells and define the correlation between expression of human genes and specific cancer types as well as the developmental competence of the oocyte...

  18. Bacterial identification and subtyping using DNA microarray and DNA sequencing.

    Science.gov (United States)

    Al-Khaldi, Sufian F; Mossoba, Magdi M; Allard, Marc M; Lienau, E Kurt; Brown, Eric D

    2012-01-01

The era of fast and accurate discovery of biological sequence motifs in prokaryotic and eukaryotic cells is here. The co-evolution of direct genome sequencing and DNA microarray strategies will not only identify, isotype, and serotype pathogenic bacteria, but also aid in the discovery of new gene functions by detecting gene expression under different disease and environmental conditions. Microarray bacterial identification has made great advances in working with pure and mixed bacterial samples. The technological advances have moved beyond bacterial gene expression to include bacterial identification and isotyping. Application of new tools such as mid-infrared chemical imaging improves detection of hybridization in DNA microarrays. The research in this field is promising, and future work will reveal the potential of infrared technology in bacterial identification. On the other hand, DNA sequencing using 454 pyrosequencing is so cost effective that the promise of $1,000 per bacterial genome sequence is becoming a reality. Pyrosequencing is a simple-to-use technique that can produce accurate and quantitative analysis of DNA sequences with great speed. The deposition of massive amounts of bacterial genomic information in databanks is enabling fingerprint-based phylogenetic analyses that will ultimately replace several technologies such as pulsed-field gel electrophoresis. In this chapter, we review (1) the use of DNA microarrays with fluorescence and infrared imaging detection for the identification of pathogenic bacteria, and (2) the use of pyrosequencing in DNA cluster analysis to fingerprint bacterial phylogenetic trees.

  19. Exploring Lactobacillus plantarum genome diversity by using microarrays

    NARCIS (Netherlands)

    Molenaar, D.; Bringel, F.; Schuren, F.H.; Vos, de W.M.; Siezen, R.J.; Kleerebezem, M.

    2005-01-01

    Lactobacillus plantarum is a versatile and flexible species that is encountered in a variety of niches and can utilize a broad range of fermentable carbon sources. To assess if this versatility is linked to a variable gene pool, microarrays containing a subset of small genomic fragments of L.

  20. See what you eat--broad GMO screening with microarrays.

    Science.gov (United States)

    von Götz, Franz

    2010-03-01

Despite the controversy over whether genetically modified organisms (GMOs) are beneficial or harmful for humans, animals, and/or ecosystems, the number of cultivated GMOs is increasing every year. Many countries and federations have implemented safety and surveillance systems for GMOs. Potent testing technologies need to be developed and implemented to monitor the increasing number of GMOs. First, these GMO tests need to be comprehensive, i.e., they should detect all, or at least the most important, GMOs on the market. This type of GMO screening requires a high degree of parallel testing, or multiplexing. To date, DNA microarrays offer the highest degree of multiplexing when nucleic acids are analyzed. This trend article focuses on the evolution of DNA microarrays for GMO testing. Over the last 7 years, combinations of multiplex PCR and microarray detection have been developed to qualitatively assess the presence of GMOs. One example is the commercially available DualChip GMO (Eppendorf, Germany; http://www.eppendorf-biochip.com), which is the only GMO screening system successfully validated in a multicenter study. With the use of innovative amplification techniques, promising steps have recently been taken to make GMO detection with microarrays quantitative.

  1. Microarray-Based Identification of Transcription Factor Target Genes

    NARCIS (Netherlands)

    Gorte, M.; Horstman, A.; Page, R.B.; Heidstra, R.; Stromberg, A.; Boutilier, K.A.

    2011-01-01

    Microarray analysis is widely used to identify transcriptional changes associated with genetic perturbation or signaling events. Here we describe its application in the identification of plant transcription factor target genes with emphasis on the design of suitable DNA constructs for controlling TF

  2. Employing image processing techniques for cancer detection using microarray images.

    Science.gov (United States)

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes), and extracting raw data from images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, using the extracted data, cancerous cells are recognized. To evaluate the performance of the proposed system, microarray databases are employed, comprising breast cancer, myeloid leukemia and lymphoma cases from the Stanford Microarray Database. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
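The gridding step of the image-processing phase can be approximated with projection profiles: summing intensities along each image axis makes spot rows and columns appear as peaks. This is a minimal sketch under that assumption (the threshold and toy image are illustrative, not the paper's method):

```python
def projection_profiles(img):
    """Sum pixel intensities along rows and columns of a grayscale image
    (list of lists); spot centres show up as peaks in these profiles."""
    rows = [sum(r) for r in img]
    cols = [sum(c) for c in zip(*img)]
    return rows, cols

def peak_indices(profile, threshold):
    """Interior indices where the profile exceeds a threshold and is a local maximum."""
    peaks = []
    for i in range(1, len(profile) - 1):
        if profile[i] >= threshold and profile[i] >= profile[i - 1] and profile[i] > profile[i + 1]:
            peaks.append(i)
    return peaks

# Toy 6x6 image with two bright spots at (1, 1) and (4, 4)
img = [[0] * 6 for _ in range(6)]
img[1][1] = img[4][4] = 10
rows, cols = projection_profiles(img)
print(peak_indices(rows, 5), peak_indices(cols, 5))  # -> [1, 4] [1, 4]
```

The intersections of the detected row and column peaks give candidate spot centres from which raw intensities can then be extracted.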

  3. Microarray-based RNA profiling of breast cancer

    DEFF Research Database (Denmark)

    Larsen, Martin J; Thomassen, Mads; Tan, Qihua

    2014-01-01

    analyzed the same 234 breast cancers on two different microarray platforms. One dataset contained known batch-effects associated with the fabrication procedure used. The aim was to assess the significance of correcting for systematic batch-effects when integrating data from different platforms. We here...

  4. Microarray analysis of the gene expression profile in triethylene ...

    African Journals Online (AJOL)

    Microarray analysis of the gene expression profile in triethylene glycol dimethacrylate-treated human dental pulp cells. ... Conclusions: Our results suggest that TEGDMA can change the many functions of hDPCs through large changes in gene expression levels and complex interactions with different signaling pathways.

  5. Characterization of pancreatic transcription factor Pdx-1 binding sites using promoter microarray and serial analysis of chromatin occupancy

    OpenAIRE

    Keller, David M; McWeeney, Shannon; Arsenlis, Athanasios; Drouin, Jacques; Wright, Christopher V E; Wang, Haiyan; Wollheim, Claes; White, Peter; Kaestner, Klaus H; Goodman, Richard H

    2007-01-01

    The homeobox transcription factor Pdx-1 is necessary for pancreas organogenesis and beta cell function, however, most Pdx-1-regulated genes are unknown. To further the understanding of Pdx-1 in beta cell biology, we have characterized its genomic targets in NIT-1 cells, a mouse insulinoma cell line. To identify novel targets, we developed a microarray that includes traditional promoters as well as non-coding conserved elements, micro-RNAs, and elements identified through an unbiased approach ...

  6. Prognostic meta-signature of breast cancer developed by two-stage mixture modeling of microarray data

    Directory of Open Access Journals (Sweden)

    Ghosh Debashis

    2004-12-01

Full Text Available Abstract Background An increasing number of studies have profiled tumor specimens using distinct microarray platforms and analysis techniques. With the accumulating amount of microarray data, one of the most intriguing yet challenging tasks is to develop robust statistical models to integrate the findings. Results By applying a two-stage Bayesian mixture modeling strategy, we were able to assimilate and analyze four independent microarray studies to derive an inter-study validated "meta-signature" associated with breast cancer prognosis. Combining multiple studies (n = 305 samples) on a common probability scale, we developed a 90-gene meta-signature, which strongly associated with survival in breast cancer patients. Given the set of independent studies using different microarray platforms, which included spotted cDNAs, Affymetrix GeneChip, and inkjet oligonucleotides, the individually identified classifiers yielded gene sets predictive of survival in each study cohort. The study-specific gene signatures, however, had minimal overlap with each other and performed poorly in pairwise cross-validation. The meta-signature, on the other hand, accommodated such heterogeneity and achieved comparable or better prognostic performance when compared with the individual signatures. Further, by comparing to a global standardization method, the mixture-model-based data transformation demonstrated superior properties for data integration and provided a solid basis for building classifiers at the second stage. Functional annotation revealed that genes involved in cell cycle and signal transduction activities were over-represented in the meta-signature. Conclusion The mixture modeling approach unifies disparate gene expression data on a common probability scale, allowing robust, inter-study validated prognostic signatures to be obtained. With the emerging utility of microarrays for cancer prognosis, it will be important to establish paradigms to meta
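The core idea of placing heterogeneous studies on a common probability scale can be illustrated with a simple rank-based transform. This empirical stand-in is an assumption for illustration only; the study itself uses a two-stage Bayesian mixture model, not ranks:

```python
def to_probability_scale(values):
    """Map one study's expression values to (rank + 0.5) / n in (0, 1),
    a crude rank-based stand-in for a model-based probability transform.
    Ties are broken by position and handled naively."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    probs = [0.0] * len(values)
    for rank, i in enumerate(order):
        probs[i] = (rank + 0.5) / len(values)
    return probs

# Two toy "studies" measuring the same gene on incompatible raw scales
study_a = [120.0, 340.0, 95.0]  # e.g. spotted-cDNA intensities
study_b = [5.1, 9.8, 3.2]       # e.g. log-scale oligonucleotide values
print(to_probability_scale(study_a))  # -> [0.5, 0.8333..., 0.1666...]
print(to_probability_scale(study_b))
```

After such a transform, both studies express each gene on the same (0, 1) scale, so samples from different platforms can be pooled before a second-stage classifier is built.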

  7. Integration of Multiplexed Microfluidic Electrokinetic Concentrators with a Morpholino Microarray via Reversible Surface Bonding for Enhanced DNA Hybridization.

    Science.gov (United States)

    Martins, Diogo; Wei, Xi; Levicky, Rastislav; Song, Yong-Ak

    2016-04-05

We describe a microfluidic concentration device to accelerate the surface hybridization reaction between DNA and morpholinos (MOs) for enhanced detection. The microfluidic concentrator comprises a single polydimethylsiloxane (PDMS) microchannel onto which an ion-selective layer of the conductive polymer poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) (PEDOT:PSS) was directly printed and then reversibly surface bonded onto a morpholino microarray for hybridization. Using this electrokinetic trapping concentrator, we could achieve a maximum concentration factor of ∼800 for DNA and a limit of detection of 10 nM within 15 min. In terms of detection speed, it enabled hybridization around 10-fold faster than conventional diffusion-based hybridization. A significant advantage of our approach is that the fabrication of the microfluidic concentrator is completely decoupled from the microarray; by eliminating the need to deposit an ion-selective layer on the microarray surface prior to device integration, interfacing between the two modules (the PDMS chip for electrokinetic concentration and the substrate for DNA sensing) is easier and applicable to any microarray platform. Furthermore, this fabrication strategy facilitates multiplexing of concentrators. We have demonstrated proof-of-concept for multiplexing by building a device with 5 parallel concentrators connected to a single inlet/outlet and applying it to parallel concentration and hybridization. Such a device yielded concentration and hybridization efficiency similar to that of a single-channel device without adding any complexity to the fabrication and setup. These results demonstrate that our concentrator concept can be applied to the development of a highly multiplexed concentrator-enhanced microarray detection system for either genetic analysis or other diagnostic assays.

  8. Feature selection and classification of MAQC-II breast cancer and multiple myeloma microarray gene expression data.

    Directory of Open Access Journals (Sweden)

    Qingzhong Liu

Full Text Available Microarray data have a high dimension of variables, but available datasets usually have only a small number of samples, making the study of such datasets interesting and challenging. In the task of analyzing microarray data for the purpose of, e.g., predicting gene-disease associations, feature selection is very important because it provides a way to handle the high dimensionality by exploiting information redundancy induced by associations among genetic markers. Judicious feature selection in microarray data analysis can result in significant reduction of cost while maintaining or improving the classification or prediction accuracy of the learning machines employed to sort out the datasets. In this paper, we propose a gene selection method called Recursive Feature Addition (RFA), which combines supervised learning and statistical similarity measures. We compare our method with the following gene selection methods: Support Vector Machine Recursive Feature Elimination (SVMRFE), Leave-One-Out Calculation Sequential Forward Selection (LOOCSFS), and Gradient-based Leave-one-out Gene Selection (GLGS). To evaluate the performance of these gene selection methods, we employ several popular learning classifiers on the MicroArray Quality Control phase II on predictive modeling (MAQC-II) breast cancer dataset and the MAQC-II multiple myeloma dataset. Experimental results show that gene selection is strictly paired with the learning classifier. Overall, our approach outperforms the other compared methods. The biological functional analysis based on the MAQC-II breast cancer dataset convinced us to apply our method to phenotype prediction. Additionally, learning classifiers also play important roles in the classification of microarray data, and our experimental results indicate that the Nearest Mean Scale Classifier (NMSC) is a good choice due to its prediction reliability and its stability across the three performance measurements: testing accuracy, MCC values, and
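A recursive-feature-addition loop of the kind described (greedily adding the gene that most improves a supervised score) can be sketched as follows. The nearest-mean scorer and toy data are illustrative assumptions; this sketch deliberately omits the statistical-similarity component of the published RFA method:

```python
def loo_accuracy(X, y, feats):
    """Leave-one-out accuracy of a nearest-mean classifier restricted
    to the candidate feature subset `feats`."""
    correct = 0
    for i in range(len(X)):
        # Class means computed without the held-out sample i
        means = {}
        for c in set(y):
            rows = [X[j] for j in range(len(X)) if j != i and y[j] == c]
            means[c] = [sum(r[f] for r in rows) / len(rows) for f in feats]
        dist = lambda c: sum((X[i][f] - m) ** 2 for f, m in zip(feats, means[c]))
        if min(means, key=dist) == y[i]:
            correct += 1
    return correct / len(X)

def recursive_feature_addition(X, y, k):
    """Greedily add, one at a time, the feature giving the best LOO score."""
    selected = []
    while len(selected) < k:
        remaining = [f for f in range(len(X[0])) if f not in selected]
        best = max(remaining, key=lambda f: loo_accuracy(X, y, selected + [f]))
        selected.append(best)
    return selected

# Toy data: feature 0 separates the classes, feature 1 is noise
X = [[0.1, 5.0], [0.2, 4.8], [0.9, 5.1], [1.0, 4.9]]
y = ["A", "A", "B", "B"]
print(recursive_feature_addition(X, y, 1))  # -> [0]
```

Forward addition of this sort scales linearly in the number of candidate genes per round, which matters when the feature dimension is in the tens of thousands, as in these datasets.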

  9. Gene expression profiling in gill tissues of White spot syndrome virus infected black tiger shrimp Penaeus monodon by DNA microarray.

    Science.gov (United States)

    Shekhar, M S; Gomathi, A; Gopikrishna, G; Ponniah, A G

    2015-06-01

White spot syndrome virus (WSSV) continues to be the most devastating viral pathogen infecting penaeid shrimp the world over. The genome of WSSV has been deciphered and characterized from three geographical isolates, and significant progress has been made in developing various molecular diagnostic methods to detect the virus. However, information on the host immune gene response to WSSV pathogenesis is limited. Microarray analysis was carried out as an approach to analyse gene expression in the black tiger shrimp Penaeus monodon in response to WSSV infection. Gill tissues collected from WSSV-infected shrimp at 6, 24, 48 h and the moribund stage were analysed for differential gene expression. Shrimp cDNAs of 40,059 unique sequences were considered for designing the microarray chip. The Cy3-labeled cRNA derived from healthy and WSSV-infected shrimp was subjected to hybridization with all the DNA spots in the microarray, which revealed 8,633 up-regulated and 11,147 down-regulated genes at different time intervals post infection. The altered expression of these numerous genes represented diverse functions such as immune response, osmoregulation, apoptosis, nucleic acid binding, energy and metabolism, signal transduction, stress response and molting. The changes in gene expression profiles observed by microarray analysis provide molecular insights and a framework of genes which are up- and down-regulated at different time intervals during WSSV infection in shrimp. The microarray data were validated by real-time analysis of expression levels for four differentially expressed genes involved in apoptosis (translationally controlled tumor protein, inhibitor of apoptosis protein, ubiquitin-conjugating enzyme E2 and caspase). The role of apoptosis-related genes in WSSV-infected shrimp is discussed herein.
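In the simplest case, up-/down-regulation calls of the kind reported reduce to thresholding log2 fold changes between infected and healthy intensities. The gene names and the two-fold cutoff below are illustrative assumptions, not the study's statistical procedure:

```python
from math import log2

def differential_calls(infected, healthy, threshold=1.0):
    """Classify each gene as up- or down-regulated by the log2 fold change
    of infected vs healthy intensity (threshold 1.0 = two-fold change)."""
    up, down = [], []
    for gene in infected:
        lfc = log2(infected[gene] / healthy[gene])
        if lfc >= threshold:
            up.append(gene)
        elif lfc <= -threshold:
            down.append(gene)
    return up, down

# Hypothetical mean spot intensities for three genes
infected = {"caspase": 800.0, "crustin": 90.0, "actin": 410.0}
healthy  = {"caspase": 200.0, "crustin": 400.0, "actin": 400.0}
print(differential_calls(infected, healthy))  # -> (['caspase'], ['crustin'])
```

Real microarray pipelines add replicate-aware statistics (e.g. moderated t-tests) and multiple-testing correction on top of the fold-change filter.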

  10. Comparison of gene coverage of mouse oligonucleotide microarray platforms

    Directory of Open Access Journals (Sweden)

    Medrano Juan F

    2006-03-01

Full Text Available Abstract Background The increasing use of DNA microarrays for genetical genomics studies generates a need for platforms with complete coverage of the genome. We have compared the effective gene coverage in the mouse genome of different commercial and noncommercial oligonucleotide microarray platforms by performing an in-house gene annotation of probes. We only used information about probes that is available from vendors and followed a process that any researcher may take to find the gene targeted by a given probe. In order to make consistent comparisons between platforms, probes in each microarray were annotated with an Entrez Gene id, and the chromosomal position for each gene was obtained from the UCSC Genome Browser Database. Gene coverage was estimated as the percentage of Entrez Genes with a unique position in the UCSC Genome database that is tested by a given microarray platform. Results A MySQL relational database was created to store the mapping information for 25,416 mouse genes and for the probes in five microarray platforms (gene coverage level in parentheses): Affymetrix430 2.0 (75.6%), ABI Genome Survey (81.24%), Agilent (79.33%), Codelink (78.09%), Sentrix (90.47%); and four array-ready oligosets: Sigma (47.95%), Operon v.3 (69.89%), Operon v.4 (84.03%), and MEEBO (84.03%). The differences in coverage between platforms were highly conserved across chromosomes. Differences in the number of redundant and unspecific probes were also found among arrays. The database can be queried to compare specific genomic regions using a web interface. The software used to create, update and query the database is freely available as a toolbox named ArrayGene. Conclusion The software developed here allows researchers to create updated custom databases by using public or proprietary information on genes for any organism. ArrayGene allows easy comparisons of gene coverage between microarray platforms for any region of the genome. The comparison presented here
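The coverage statistic used here (the percentage of uniquely mapped Entrez Genes probed by a platform) is straightforward to compute once the probe-to-gene annotation exists. A minimal sketch with hypothetical probe mappings:

```python
def gene_coverage(platform_probes, genome_genes):
    """Coverage = percentage of uniquely mapped genome genes hit by >= 1 probe.

    `platform_probes` maps probe id -> Entrez Gene id (None if unmapped);
    `genome_genes` is the set of Entrez ids with a unique genomic position.
    """
    covered = {g for g in platform_probes.values() if g in genome_genes}
    return 100.0 * len(covered) / len(genome_genes)

genome = {"g1", "g2", "g3", "g4"}
probes = {"p1": "g1", "p2": "g1", "p3": "g3", "p4": None}  # p2 is redundant
print(gene_coverage(probes, genome))  # -> 50.0
```

Redundant probes (p1 and p2 above) do not raise coverage, which is why the paper tracks redundancy and specificity separately from the coverage percentages.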

  11. Computational biology of genome expression and regulation--a review of microarray bioinformatics.

    Science.gov (United States)

    Wang, Junbai

    2008-01-01

Microarray technology is being used widely in various biomedical research areas; the corresponding microarray data analysis is an essential step toward making the best use of array technologies. Here we review two components of microarray data analysis: low-level analysis, which emphasizes the design, quality control, and preprocessing of microarray experiments; and high-level analysis, which focuses on domain-specific microarray applications such as tumor classification, biomarker prediction, analysis of array CGH experiments, and reverse engineering of gene expression networks. Additionally, we review the recent development of building predictive models in genome expression and regulation studies. This review may help biologists grasp a basic knowledge of microarray bioinformatics as well as its potential impact on the future evolution of biomedical research fields.

  12. THE MAQC PROJECT: ESTABLISHING QC METRICS AND THRESHOLDS FOR MICROARRAY QUALITY CONTROL

    Science.gov (United States)

    Microarrays represent a core technology in pharmacogenomics and toxicogenomics; however, before this technology can successfully and reliably be applied in clinical practice and regulatory decision-making, standards and quality measures need to be developed. The Microarray Qualit...

  13. Translating microarray data for diagnostic testing in childhood leukaemia

    International Nuclear Information System (INIS)

    Hoffmann, Katrin; Firth, Martin J; Beesley, Alex H; Klerk, Nicholas H de; Kees, Ursula R

    2006-01-01

Recent findings from microarray studies have raised the prospect of a standardized diagnostic gene expression platform to enhance accurate diagnosis and risk stratification in paediatric acute lymphoblastic leukaemia (ALL). However, the robustness as well as the format for such a diagnostic test remains to be determined. As a step towards clinical application of these findings, we have systematically analyzed a published ALL microarray data set using Robust Multi-array Analysis (RMA) and Random Forest (RF). We examined published microarray data from 104 ALL patient specimens that represent six different subgroups defined by cytogenetic features and immunophenotypes. Using the decision-tree based supervised learning algorithm Random Forest (RF), we determined a small set of genes for optimal subgroup distinction and subsequently validated their predictive power in an independent patient cohort. We achieved very high overall ALL subgroup prediction accuracies of about 98%, and were able to verify the robustness of these genes in an independent panel of 68 specimens obtained from a different institution and processed in a different laboratory. Our study established that the selection of discriminating genes is strongly dependent on the analysis method. This may have profound implications for clinical use, particularly when the classifier is reduced to a small set of genes. We have demonstrated that as few as 26 genes yield accurate class prediction and, importantly, almost 70% of these genes have not been previously identified as essential for class distinction of the six ALL subgroups. Our finding supports the feasibility of qRT-PCR technology for standardized diagnostic testing in paediatric ALL and should, in conjunction with conventional cytogenetics, lead to a more accurate classification of the disease.
In addition, we have demonstrated that microarray findings from one study can be confirmed in an independent study, using an entirely independent patient cohort

  14. Microarray analysis in the archaeon Halobacterium salinarum strain R1.

    Directory of Open Access Journals (Sweden)

    Jens Twellmeyer

Full Text Available BACKGROUND: Phototrophy of the extremely halophilic archaeon Halobacterium salinarum was explored for decades. The research was mainly focused on the expression of bacteriorhodopsin and its functional properties. In contrast, less is known about genome wide transcriptional changes and their impact on the physiological adaptation to phototrophy. The tool of choice to record transcriptional profiles is the DNA microarray technique. However, the technique is still rarely used for transcriptome analysis in archaea. METHODOLOGY/PRINCIPAL FINDINGS: We developed a whole-genome DNA microarray based on our sequence data of the Hbt. salinarum strain R1 genome. The potential of our tool is exemplified by the comparison of cells growing under aerobic and phototrophic conditions, respectively. We processed the raw fluorescence data by several stringent filtering steps and a subsequent MAANOVA analysis. The study revealed numerous transcriptional differences between the two cell states. We found that the transcriptional changes were relatively weak, though significant. Finally, the DNA microarray data were independently verified by a real-time PCR analysis. CONCLUSION/SIGNIFICANCE: This is the first DNA microarray analysis of Hbt. salinarum cells that were actually grown under phototrophic conditions. By comparing the transcriptomics data with current knowledge, we could show that our DNA microarray tool is well applicable for transcriptome analysis in the extremely halophilic archaeon Hbt. salinarum. The reliability of our tool is based on both the high-quality array of DNA probes and the stringent data handling including MAANOVA analysis. Among the regulated genes more than 50% had unknown functions. This underlines the fact that haloarchaeal phototrophy is still far from being completely understood. Hence, the data recorded in this study will be subject to future systems biology analysis.

  15. Washing scaling of GeneChip microarray expression

    Directory of Open Access Journals (Sweden)

    Krohn Knut

    2010-05-01

Full Text Available Abstract Background Post-hybridization washing is an essential part of microarray experiments. Both the quality of the experimental washing protocol and adequate consideration of washing in intensity calibration ultimately affect the quality of the expression estimates extracted from the microarray intensities. Results We conducted experiments on GeneChip microarrays with altered protocols for washing, scanning and staining to study the probe-level intensity changes as a function of the number of washing cycles. For calibration and analysis of the intensity data we make use of the 'hook' method which allows intensity contributions due to non-specific and specific hybridization of perfect match (PM) and mismatch (MM) probes to be disentangled in a sequence specific manner. On average, washing according to the standard protocol removes about 90% of the non-specific background and about 30-50% and less than 10% of the specific targets from the MM and PM, respectively. Analysis of the washing kinetics shows that the signal-to-noise ratio doubles roughly every ten stringent washing cycles. Washing can be characterized by time-dependent rate constants which reflect the heterogeneous character of target binding to microarray probes. We propose an empirical washing function which estimates the survival of probe bound targets. It depends on the intensity contribution due to specific and non-specific hybridization per probe which can be estimated for each probe using existing methods. The washing function allows probe intensities to be calibrated for the effect of washing. On a relative scale, proper calibration for washing markedly increases expression measures, especially in the limit of small and large values. Conclusions Washing is among the factors which potentially distort expression measures. The proposed first-order correction method allows direct implementation in existing calibration algorithms for microarray data. We provide an experimental
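The proposed washing behaviour can be illustrated with a first-order decay model in which non-specific background washes off much faster than specifically bound target. The rate constants below are illustrative assumptions, not the fitted values from the study:

```python
from math import exp

def washed_intensity(n_cycles, nonspecific, specific, k_ns=0.23, k_s=0.01):
    """First-order washing model: each intensity component decays
    exponentially with its own rate constant. With these illustrative
    constants, ~90% of the background is gone after ten cycles while
    most of the specific signal survives."""
    return nonspecific * exp(-k_ns * n_cycles) + specific * exp(-k_s * n_cycles)

def washing_correction(measured_specific, n_cycles, k_s=0.01):
    """Rescale a background-subtracted specific signal to its pre-wash value
    by dividing out the survival fraction."""
    return measured_specific / exp(-k_s * n_cycles)

i0 = washed_intensity(0, 1000.0, 500.0)    # 1500.0 before washing
i10 = washed_intensity(10, 1000.0, 500.0)  # background largely removed
print(round(i0, 1), round(i10, 1))
```

Dividing by the survival fraction is the kind of first-order correction the abstract describes plugging into existing calibration algorithms.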

  16. DNA microarray-based PCR ribotyping of Clostridium difficile.

    Science.gov (United States)

    Schneeberg, Alexander; Ehricht, Ralf; Slickers, Peter; Baier, Vico; Neubauer, Heinrich; Zimmermann, Stefan; Rabold, Denise; Lübke-Becker, Antina; Seyboldt, Christian

    2015-02-01

This study presents a DNA microarray-based assay for fast and simple PCR ribotyping of Clostridium difficile strains. Hybridization probes were designed to query the modularly structured intergenic spacer region (ISR), which is also the template for conventional PCR ribotyping and for PCR ribotyping with subsequent capillary gel electrophoresis (seq-PCR ribotyping). The probes were derived from sequences available in GenBank as well as from theoretical ISR module combinations. A database of reference hybridization patterns was set up from a collection of 142 well-characterized C. difficile isolates representing 48 seq-PCR ribotypes. The reference hybridization patterns calculated by the arithmetic mean were compared using a similarity matrix analysis. The 48 investigated seq-PCR ribotypes revealed 27 array profiles that were clearly distinguishable. The most frequent human-pathogenic ribotypes 001, 014/020, 027, and 078/126 were discriminated by the microarray. C. difficile strains related to 078/126 (033, 045/FLI01, 078, 126, 126/FLI01, 413, 413/FLI01, 598, 620, 652, and 660) and 014/020 (014, 020, and 449) showed similar hybridization patterns, confirming their previously reported genetic relatedness. A panel of 50 C. difficile field isolates was tested by seq-PCR ribotyping and the DNA microarray-based assay in parallel. Taking into account that the current version of the microarray does not discriminate some closely related seq-PCR ribotypes, all isolates were typed correctly. Moreover, seq-PCR ribotypes without reference profiles available in the database (ribotype 009 and 5 new types) were correctly recognized as new ribotypes, confirming the performance and expansion potential of the microarray. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  17. Recommendations for the use of microarrays in prenatal diagnosis.

    Science.gov (United States)

    Suela, Javier; López-Expósito, Isabel; Querejeta, María Eugenia; Martorell, Rosa; Cuatrecasas, Esther; Armengol, Lluis; Antolín, Eugenia; Domínguez Garrido, Elena; Trujillo-Tiebas, María José; Rosell, Jordi; García Planells, Javier; Cigudosa, Juan Cruz

    2017-04-07

    Microarray technology, recently implemented in international prenatal diagnosis systems, has become one of the main techniques in this field in terms of detection rate and objectivity of results. This guideline provides background information on the technology, including the technical and diagnostic aspects to be considered. Specifically, it defines: the different prenatal sample types to be used and their characteristics (chorionic villus samples, amniotic fluid, fetal cord blood, or miscarriage tissue); variant reporting policies (including variants of uncertain significance) to be covered in informed consents and prenatal microarray reports; limitations inherent to the technique, which must be taken into account when recommending microarray testing for diagnosis; and a detailed clinical algorithm for introducing microarray testing into routine clinical practice alongside other genetic tests, covering pregnancies in families with a genetic history or suspicion of a specific syndrome, increased first-trimester nuchal translucency, second-trimester heart malformation, and ultrasound findings not related to a known or specific syndrome. This guideline has been coordinated by the Spanish Association for Prenatal Diagnosis (AEDP, «Asociación Española de Diagnóstico Prenatal»), the Spanish Human Genetics Association (AEGH, «Asociación Española de Genética Humana») and the Spanish Society of Clinical Genetics and Dysmorphology (SEGCyD, «Sociedad Española de Genética Clínica y Dismorfología»). Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
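
    A clinical algorithm of this kind can be thought of as a triage function over indication flags. The flag names, threshold logic, and returned recommendations below are hypothetical placeholders for illustration only, not the AEDP/AEGH/SEGCyD guideline's actual decision tree:

```python
# Hypothetical sketch of a prenatal-testing triage step. Indication names
# and recommendation strings are invented; consult the guideline itself
# for the real algorithm.

MICROARRAY_INDICATIONS = {
    "family_genetic_history",
    "specific_syndrome_suspicion",
    "increased_nt_first_trimester",
    "heart_malformation_second_trimester",
    "ultrasound_findings_unrelated_to_known_syndrome",
}

def recommend_test(indications):
    """Return a coarse recommendation from a set of indication flags."""
    if not indications:
        return "conventional karyotype / standard care"
    if indications & MICROARRAY_INDICATIONS:
        # Reporting policy for variants of uncertain significance (VUS)
        # must be part of the informed consent, as the guideline stresses.
        return "offer prenatal microarray (informed consent covering VUS)"
    return "refer for clinical genetics assessment"

print(recommend_test({"increased_nt_first_trimester"}))
```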

  18. Integrated olfactory receptor and microarray gene expression databases

    Directory of Open Access Journals (Sweden)

    Crasto Chiquito J

    2007-06-01

    Background: Gene expression patterns of olfactory receptors (ORs) are an important component of the signal encoding mechanism in the olfactory system since they determine the interactions between odorant ligands and sensory neurons. We have developed the Olfactory Receptor Microarray Database (ORMD) to house OR gene expression data. ORMD is integrated with the Olfactory Receptor Database (ORDB), which is a key repository of OR gene information. Both databases aim to aid experimental research related to olfaction. Description: ORMD is a Web-accessible database that provides a secure data repository for OR microarray experiments. It contains both publicly available and private data; accessing the latter requires authenticated login. The ORMD is designed to allow users to not only deposit gene expression data but also manage their projects/experiments. For example, contributors can choose whether to make their datasets public. For each experiment, users can download the raw data files and view and export the gene expression data. For each OR gene being probed in a microarray experiment, a hyperlink to that gene in ORDB provides access to genomic and proteomic information related to the corresponding olfactory receptor. Individual ORs archived in ORDB are also linked to ORMD, allowing users access to the related microarray gene expression data. Conclusion: ORMD serves as a data repository and project management system. It facilitates the study of microarray experiments of gene expression in the olfactory system. In conjunction with ORDB, ORMD integrates gene expression data with the genomic and functional data of ORs, and is thus a useful resource for both olfactory researchers and the public.
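
    The two-way linking described above (each probed gene pointing to its ORDB record, each ORDB record pointing back to its expression data) amounts to joining a gene table and an expression table on a shared identifier. The schema, gene symbols, and signal values below are hypothetical illustrations, not the actual ORMD/ORDB layouts:

```python
# Minimal sketch of cross-linked gene and expression tables, joined on a
# shared gene symbol. Schema and sample values are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE or_gene (
    symbol     TEXT PRIMARY KEY,          -- OR gene identifier (ORDB-like)
    chromosome TEXT                       -- genomic context
);
CREATE TABLE expression (
    experiment TEXT,
    symbol     TEXT REFERENCES or_gene(symbol),
    signal     REAL                       -- normalized probe intensity
);
""")
db.executemany("INSERT INTO or_gene VALUES (?, ?)",
               [("ORx1", "chr9"), ("ORx2", "chr9")])
db.executemany("INSERT INTO expression VALUES (?, ?, ?)",
               [("exp1", "ORx1", 8.2), ("exp1", "ORx2", 3.1),
                ("exp2", "ORx1", 7.9)])

# Follow the "hyperlink" in both directions: expression values together
# with the gene's genomic context.
rows = db.execute("""
    SELECT e.experiment, e.signal, g.chromosome
    FROM expression e JOIN or_gene g ON e.symbol = g.symbol
    WHERE e.symbol = 'ORx1'
    ORDER BY e.experiment
""").fetchall()
print(rows)  # → [('exp1', 8.2, 'chr9'), ('exp2', 7.9, 'chr9')]
```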

  19. Pentaquarks in the Jaffe-Wilczek approximation

    International Nuclear Information System (INIS)

    Narodetskii, I.M.; Simonov, Yu.A.; Trusov, M.A.; Semay, C.; Silvestre-Brac, B.

    2005-01-01

    The masses of uudds-bar, uuddd-bar, and uussd-bar pentaquarks are evaluated in the framework of both the effective Hamiltonian approach to QCD and the spinless Salpeter equation, using the Jaffe-Wilczek diquark approximation and the string interaction for the diquark-diquark-antiquark system. The pentaquark masses are found to lie above 2 GeV, which indicates that Goldstone-boson-exchange effects may play an important role in the light pentaquarks. The same calculations yield a mass of ∼3250 MeV for the [ud]^2 c-bar pentaquark and ∼6509 MeV for the [ud]^2 b-bar pentaquark.
