Shrinkage covariance matrix approach for microarray data
Karjanto, Suryaefiza; Aripin, Rasimah
2013-04-01
Microarray technology was developed to monitor the expression levels of thousands of genes. A microarray data set typically consists of tens of thousands of genes (variables) but only dozens of samples, owing to constraints including the high cost of producing microarray chips. As a result, the widely used standard covariance estimator is not appropriate in this setting. One relevant technique is Hotelling's T2 statistic, a multivariate test statistic for comparing means between two groups. It requires that the number of observations (n) exceed the number of genes (p), but in microarray studies it is common that n < p. Hence, Hotelling's T2 statistic with a shrinkage approach is proposed to estimate the covariance matrix for testing differential gene expression. The performance of this approach is then compared with other commonly used multivariate tests using a widely analysed diabetes data set as an illustration. The results are consistent across methods, implying that this approach provides an alternative to existing techniques.
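The shrinkage idea can be sketched as follows: blend the singular sample covariance with a well-conditioned diagonal target, so that Hotelling's T2 remains computable when p > n. This is a minimal illustration, not the paper's estimator; the fixed shrinkage weight `alpha` and the diagonal target are illustrative assumptions (in practice the weight would be estimated from the data, e.g. Ledoit-Wolf style).

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.5):
    """Shrink the sample covariance toward a diagonal target.

    alpha is the shrinkage weight (0 = pure sample covariance,
    1 = pure diagonal target); fixed here for brevity.
    """
    S = np.cov(X, rowvar=False)            # p x p sample covariance
    target = np.diag(np.diag(S))           # diagonal shrinkage target
    return (1 - alpha) * S + alpha * target

def hotelling_t2(X1, X2, alpha=0.5):
    """Two-sample Hotelling's T2 using the shrunken pooled covariance,
    which stays invertible even when p > n."""
    n1, n2 = X1.shape[0], X2.shape[0]
    d = X1.mean(axis=0) - X2.mean(axis=0)
    Sp = ((n1 - 1) * shrinkage_covariance(X1, alpha)
          + (n2 - 1) * shrinkage_covariance(X2, alpha)) / (n1 + n2 - 2)
    return (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(Sp, d)

rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 1.0, size=(10, 50))   # 10 samples, 50 "genes" (p > n)
X2 = rng.normal(0.5, 1.0, size=(10, 50))   # shifted mean in group 2
print(hotelling_t2(X1, X2))
```

Note that with p = 50 and 20 samples the classical pooled covariance is singular; the diagonal blend makes the solve well-posed.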
An imputation approach for oligonucleotide microarrays.
Ming Li
Oligonucleotide microarrays are commonly adopted for detecting and quantifying the abundance of molecules in biological samples. Analysis of microarray data starts with recording and interpreting hybridization signals from CEL images. However, many CEL images may be blemished by noise from various sources, observed as "bright spots", "dark clouds", "shadowy circles", etc. It is crucial that these image defects are correctly identified and properly processed. Existing approaches mainly focus on detecting defect areas and removing the affected intensities. In this article, we propose using a mixed-effects model to impute the affected intensities. The proposed imputation procedure is a single-array-based approach that does not require any biological replicate or between-array normalization. We further examine its performance using Affymetrix high-density SNP arrays. The results show that this imputation procedure significantly reduces genotyping error rates. We also discuss the adjustments necessary for its potential extension to other oligonucleotide microarrays, such as gene expression profiling. The R source code for the implementation of the approach is freely available upon request.
S-Approximation: A New Approach to Algebraic Approximation
M. R. Hooshmandasl
2014-01-01
We intend to study a new class of algebraic approximations, called S-approximations, and their properties. We show that S-approximations can be used for applied problems which cannot be modeled by inclusion-based approximations. In this work we also study a subclass of S-approximations, called Sℳ-approximations, and show that this subclass preserves most of the properties of inclusion-based approximations but is not necessarily inclusion-based. The paper concludes by studying some basic operations on S-approximations and counting the number of S-min functions.
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
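The computational virtue the authors exploit can be illustrated in the simplest (one-level) case: a circulant matrix is diagonalized by the discrete Fourier transform, so a matrix-vector product costs O(n log n) instead of O(n^2). A minimal sketch (not the paper's multilevel construction):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by x in
    O(n log n), using the FFT diagonalization C = F^H diag(F c) F."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Dense circulant built explicitly, for comparison only
c = np.array([4.0, 1.0, 0.0, 1.0])
C = np.column_stack([np.roll(c, k) for k in range(len(c))])
x = np.array([1.0, 2.0, 3.0, 4.0])
print(circulant_matvec(c, x))   # matches C @ x
```

The multilevel case nests this structure over several index levels, which is what makes the kernel-selection algorithms quasi-linear in the number of data points.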
An approximate analytical approach to resampling averages
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach…
Two heuristic approaches to describe periodicities in genomic microarrays
Jörg Aßmus
2009-09-01
In the first part we discuss the filtering of panels of time series based on singular value decomposition. The discussion is based on an approach where this filtering is used to normalize microarray data. We point out effects on the periodicity and phases for time series panels. In the second part we investigate time dependent periodic panels with different phases. We align the time series in the panel and discuss the periodogram of the aligned time series with the purpose of describing the periodic structure of the panel. The method is quite powerful assuming known phases in the model, but it deteriorates rapidly for noisy data.
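The periodogram step described above can be sketched as follows, assuming a simple, evenly sampled series; the phase alignment across the panel is omitted, and the series is synthetic:

```python
import numpy as np

def dominant_period(y, dt=1.0):
    """Locate the dominant period of a series from its periodogram
    (squared magnitude of the FFT at positive frequencies)."""
    y = y - y.mean()                        # remove the DC component
    power = np.abs(np.fft.rfft(y)) ** 2
    freqs = np.fft.rfftfreq(len(y), d=dt)
    k = 1 + np.argmax(power[1:])            # skip the zero frequency
    return 1.0 / freqs[k]

t = np.arange(120)
y = np.sin(2 * np.pi * t / 12) + 0.1 * np.cos(2 * np.pi * t / 5)
print(dominant_period(y))                   # dominant 12-point cycle
```

As the abstract notes, this works cleanly when the phases are known and the period divides the sampling window; noise and misalignment degrade the peak.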
DNA Microarray Data Analysis: A Novel Biclustering Algorithm Approach
Tchagang, Alain B.; Tewfik, Ahmed H.
2006-12-01
Biclustering algorithms refer to a distinct class of clustering algorithms that perform simultaneous row-column clustering. Biclustering problems arise in DNA microarray data analysis, collaborative filtering, market research, information retrieval, text mining, electoral trends, exchange analysis, and so forth. When dealing with DNA microarray experimental data for example, the goal of biclustering algorithms is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this study, we develop novel biclustering algorithms using basic linear algebra and arithmetic tools. The proposed biclustering algorithms can be used to search for all biclusters with constant values, biclusters with constant values on rows, biclusters with constant values on columns, and biclusters with coherent values from a set of data in a timely manner and without solving any optimization problem. We also show how one of the proposed biclustering algorithms can be adapted to identify biclusters with coherent evolution. The algorithms developed in this study discover all valid biclusters of each type, while almost all previous biclustering approaches will miss some.
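As a rough illustration of the simplest bicluster type, the sketch below checks a candidate submatrix for constant values and groups rows that share a constant value over a chosen column subset. The function names, tolerance, and the fixed column subset are illustrative assumptions; the paper's exhaustive, optimization-free search is not reproduced here.

```python
import numpy as np

def is_constant_bicluster(A, rows, cols, tol=1e-9):
    """Check whether the submatrix A[rows][:, cols] is (near-)constant."""
    return np.ptp(A[np.ix_(rows, cols)]) <= tol

def constant_row_biclusters(A, cols, tol=1e-9):
    """Group rows that are constant with the same value across the chosen
    columns: each group of size > 1 forms a constant-value bicluster."""
    groups = {}
    for i, row in enumerate(A[:, cols]):
        if np.ptp(row) <= tol:              # row is constant on these cols
            groups.setdefault(round(row[0], 6), []).append(i)
    return {v: r for v, r in groups.items() if len(r) > 1}

A = np.array([[1., 1., 5.],
              [1., 1., 7.],
              [2., 3., 2.]])
print(constant_row_biclusters(A, cols=[0, 1]))   # rows 0 and 1 on cols 0-1
```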
Gerstgrasser, Matthias; Nicholls, Sarah; Stout, Michael; Smart, Katherine; Powell, Chris; Kypraios, Theodore; Stekel, Dov
2016-06-01
Biolog phenotype microarrays (PMs) enable simultaneous, high throughput analysis of cell cultures in different environments. The output is high-density time-course data showing redox curves (approximating growth) for each experimental condition. The software provided with the Omnilog incubator/reader summarizes each time-course as a single datum, so most of the information is not used. However, the time courses can be extremely varied and often contain detailed qualitative (shape of curve) and quantitative (values of parameters) information. We present a novel, Bayesian approach to estimating parameters from Phenotype Microarray data, fitting growth models using Markov Chain Monte Carlo (MCMC) methods to enable high throughput estimation of important information, including length of lag phase, maximal "growth" rate and maximum output. We find that the Baranyi model for microbial growth is useful for fitting Biolog data. Moreover, we introduce a new growth model that allows for diauxic growth with a lag phase, which is particularly useful where Phenotype Microarrays have been applied to cells grown in complex mixtures of substrates, for example in industrial or biotechnological applications, such as worts in brewing. Our approach provides more useful information from Biolog data than existing, competing methods, and allows for valuable comparisons between data series and across different models.
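The Baranyi model mentioned above can be written down directly. The sketch below only evaluates the curve (the parameter names y0, ymax, mu, h0 are illustrative); the Bayesian MCMC fitting described in the abstract, and the diauxic extension, are beyond its scope.

```python
import numpy as np

def baranyi(t, y0, ymax, mu, h0):
    """Baranyi-Roberts growth curve: log-population y(t) with maximal
    rate mu, ceiling ymax, and lag controlled by h0 (lag ~ h0/mu)."""
    A = t + (1.0 / mu) * np.log(np.exp(-mu * t)
                                + np.exp(-h0) - np.exp(-mu * t - h0))
    return y0 + mu * A - np.log(1.0 + (np.exp(mu * A) - 1.0)
                                / np.exp(ymax - y0))

t = np.linspace(0, 48, 200)
y = baranyi(t, y0=1.0, ymax=9.0, mu=0.5, h0=2.0)
# y starts at y0, rises after a lag of ~h0/mu = 4, and saturates at ymax
```

The lag phase, maximal "growth" rate, and maximum output that the authors extract correspond directly to h0/mu, mu, and ymax in this parameterization.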
Experimental Approaches to Microarray Analysis of Tumor Samples
Furge, Laura Lowe; Winter, Michael B.; Meyers, Jacob I.; Furge, Kyle A.
2008-01-01
Comprehensive measurement of gene expression using high-density nucleic acid arrays (i.e. microarrays) has become an important tool for investigating the molecular differences in clinical and research samples. Consequently, inclusion of discussion of microarray technologies in biochemistry, molecular biology, or other appropriate courses has…
Adaptive Control with Approximated Policy Search Approach
Agus Naba
2010-05-01
Most existing adaptive control schemes are designed to minimize the error between the plant state and the goal state, despite the fact that executing actions predicted to yield smaller errors can nevertheless lead to non-goal states. We develop an adaptive control scheme that manipulates a controller of a general type to improve its performance as measured by an evaluation function. The developed method is closely related to the theory of Reinforcement Learning (RL) but imposes a practical assumption for faster learning. We assume that a value function of RL can be approximated by a function of the Euclidean distance from a goal state and the action executed at that state, and we propose to use it as an evaluation function for the gradient search. Simulation results from applying the proposed scheme to a pole-balancing problem, using a linear state feedback controller and a fuzzy controller, verify the scheme's efficacy.
Gorban, A N
2008-01-01
Principal manifolds are defined as lines or surfaces passing through "the middle" of a data distribution. Linear principal manifolds (Principal Component Analysis) are routinely used for dimension reduction, noise filtering and data visualization. Recently, methods for constructing non-linear principal manifolds were proposed, including our elastic maps approach, which is based on a physical analogy with elastic membranes. We have developed a general geometric framework for constructing "principal objects" of various dimensions and topologies with the simplest quadratic form of the smoothness penalty, which allows very effective parallel implementations. Our approach is implemented in three programming languages (C++, Java and Delphi) with two graphical user interfaces (VidaExpert, http://bioinfo.curie.fr/projects/vidaexpert, and ViMiDa, http://bioinfo-out.curie.fr/projects/vimida). In this paper we overview the method of elastic maps and present in detail one of its major applications: the visuali…
Statistical approaches for the analysis of DNA methylation microarray data.
Siegmund, Kimberly D
2011-06-01
Following the rapid development and adoption of DNA methylation microarray assays, we are now experiencing a growth in the number of statistical tools for analyzing the resulting large-scale data sets. As is the case for other microarray applications, biases caused by technical issues are of concern. Some of these issues are old (e.g., two-color dye bias and probe- and array-specific effects), while others are new (e.g., fragment length bias and bisulfite conversion efficiency). Here, I highlight characteristics of DNA methylation that suggest standard statistical tools developed for other data types may not be directly suitable. I then describe the microarray technologies most commonly in use, along with the methods used for preprocessing and obtaining a summary measure. I finish with a section describing downstream analyses of the data, focusing on methods that model percentage DNA methylation as the outcome, and methods for integrating DNA methylation with gene expression or genotype data.
A UNIFIED APPROACH TO CERTAIN PROBLEMS OF BEST LOCAL APPROXIMATION
H. H. Cuenya; M. D. Lorenzo; C. N. Rodriguez
2007-01-01
In this paper we study best local quasi-rational approximation and best local approximation from finite-dimensional subspaces of vectorial functions of several variables. Our approach extends and unifies several problems concerning best local multi-point approximation in different norms.
New Approach to Fractal Approximation of Vector-Functions
Konstantin Igudesman
2015-01-01
This paper introduces a new approach to the approximation of continuous vector-functions and vector sequences by fractal interpolation vector-functions, which are a multidimensional generalization of fractal interpolation functions. Best values of the fractal interpolation vector-function parameters are found. We give schemes for approximating some sets of data and consider examples of approximation of smooth curves under different conditions.
DNA Microarray Technologies: A Novel Approach to Genomic Research
Hinman, R.; Thrall, B.; Wong, K.
2002-01-01
A cDNA microarray allows biologists to examine the expression of thousands of genes simultaneously. Researchers may analyze the complete transcriptional program of an organism in response to specific physiological or developmental conditions. By design, a cDNA microarray is an experiment with many variables and few controls. One question that inevitably arises when working with a cDNA microarray is data reproducibility. How easy is it to confirm mRNA expression patterns? In this paper, a case study involving the treatment of a murine macrophage RAW 264.7 cell line with tumor necrosis factor alpha (TNF) was used to obtain a rough estimate of data reproducibility. Two trials were examined and a list of genes displaying either a > 2-fold or > 4-fold increase in gene expression was compiled. Variations in signal mean ratios between the two slides were observed; such lapses in reproducibility may be partially compensated when similar genes show greater levels of induction. Steps taken to obtain results included serum starvation of cells before treatment, tests of mRNA quality/consistency, and data normalization.
A Hybrid Reduction Approach for Enhancing Cancer Classification of Microarray Data
Abeer M. Mahmoud
2014-10-01
This paper presents a novel hybrid machine learning (ML) reduction approach to enhance the cancer classification accuracy of microarray data, based on two ML gene ranking techniques (t-test and Class Separability (CS)). The proposed approach is integrated with two ML classifiers, K-nearest neighbor (KNN) and support vector machine (SVM), for mining microarray gene expression profiles. Four public cancer microarray databases (Lymphoma, Leukemia, SRBCT, and Lung Cancer) are used to evaluate the proposed approach, and the mining process is successfully accomplished. Genes are selected only from the training samples, with the testing samples totally excluded from the classifier building process, for more accurate and validated results. The computational experiments are illustrated in detail and presented comprehensively alongside related results from the literature. The results show that the proposed reduction approach reaches promising results both in the number of genes supplied to the classifiers and in classification accuracy.
Low-complexity PDE-based approach for automatic microarray image processing.
Belean, Bogdan; Terebes, Romulus; Bot, Adrian
2015-02-01
Microarray image processing is known as a valuable tool for gene expression estimation, a crucial step in understanding biological processes within living organisms. Automation and reliability are open subjects in microarray image processing, where grid alignment and spot segmentation are essential processes that can influence the quality of gene expression information. The paper proposes a novel partial differential equation (PDE)-based approach for fully automatic grid alignment in case of microarray images. Our approach can handle image distortions and performs grid alignment using the vertical and horizontal luminance function profiles. These profiles are evolved using a hyperbolic shock filter PDE and then refined using the autocorrelation function. The results are compared with the ones delivered by state-of-the-art approaches for grid alignment in terms of accuracy and computational complexity. Using the same PDE formalism and curve fitting, automatic spot segmentation is achieved and visual results are presented. Considering microarray images with different spots layouts, reliable results in terms of accuracy and reduced computational complexity are achieved, compared with existing software platforms and state-of-the-art methods for microarray image processing.
An Approach to Structural Approximation Analysis by Artificial Neural Networks
陆金桂; 周济; 王浩; 陈新度; 余俊; 肖世德
1994-01-01
This paper theoretically proves, based on Kolmogorov's mapping neural network existence theorem, that a three-layer neural network can exactly implement the function relating the design variables of any elastic structure to its stresses and displacements. A new approach to structural approximation analysis with a global characteristic, based on artificial neural networks, is presented. Computer simulation experiments show that the new approach is effective.
GREEN'S FUNCTION APPROACH IN APPROXIMATE CONTROLLABILITY PROBLEMS
Avetisyan A. S.
2016-06-01
A mathematical approach based on Green's functions, allowing the construction of controls that provide approximate controllability, is suggested in the present paper. Representing the solution of the governing system via Green's formula and substituting it into the prescribed terminal conditions, we obtain control functions providing approximate controllability of the system under study in explicit form. By choosing appropriate controls, we can achieve the required accuracy of approximation for the prescribed conditions. Examples illustrating the procedure are described: an infinite string controlled by a concentrated force, a semi-infinite rod heated by a point heat source, a finite rod heated from its boundary, and parameter optimization for an electrical circuit. Results of computations are presented.
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
A Simple Geometric Approach to Approximating the Gini Coefficient
Kasper, Hirschel; Golden, John
2008-01-01
The authors show how a quick approximation of the Lorenz curve's Gini coefficient can be calculated empirically using numerical data presented in cumulative income quintiles. When the technique was used to estimate 621 income quintile/Gini coefficient observations from the Deininger and Squire/World Bank data set, this approach performed…
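The quintile-based approximation can be reproduced with the trapezoid rule under the Lorenz curve. This is a generic sketch of the geometric idea (the chords lie above the true curve, so the approximation slightly understates the Gini coefficient); the shares below are made-up numbers.

```python
def gini_from_quintiles(shares):
    """Approximate the Gini coefficient from quintile income shares
    via the trapezoid rule under the Lorenz curve."""
    assert abs(sum(shares) - 1.0) < 1e-9   # shares must sum to 1
    pts = [0.0]
    for s in shares:                        # cumulative income shares
        pts.append(pts[-1] + s)
    area = sum(0.2 * (pts[i] + pts[i + 1]) / 2 for i in range(5))
    return 1.0 - 2.0 * area                 # Gini = 1 - 2 * (area under Lorenz)

print(gini_from_quintiles([0.05, 0.10, 0.15, 0.25, 0.45]))  # 0.38
print(gini_from_quintiles([0.2] * 5))                       # 0.0, perfect equality
```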
Haze of surface random systems: An approximate analytic approach
Simonsen, Ingve; Andreassen, Erik; Ommundsen, Espen; Nord-Varhaug, Katrin
2009-01-01
Approximate analytic expressions for the haze (and gloss) of Gaussian randomly rough surfaces are derived for various types of correlation functions within phase-perturbation theory. The approximations depend on the angle of incidence, the polarization of the incident light, the surface roughness $\sigma$, and the average of the power spectrum taken over a small angular interval about the specular direction. In particular, it is demonstrated that haze (gloss) increases (decreases) with $\sigma/\lambda$ as $\exp(-A(\sigma/\lambda)^2)$ and decreases (increases) with $a/\lambda$, where $a$ is the correlation length of the surface roughness, in a way that depends on the specific form of the correlation function being considered. These approximations are compared to what can be obtained from a rigorous Monte Carlo simulation approach, and good agreement is found over large regions of parameter space. Some experimental results for the angular distribution of the transmitted light through polymer films, and their haze, are presen…
A novel parametric approach to mine gene regulatory relationship from microarray datasets
Zhu Yunping
2010-12-01
Background: Microarrays have been widely used over the past decade to measure gene expression levels on the genome scale. Many algorithms have been developed to reconstruct gene regulatory networks from microarray data. Unfortunately, most of these models and algorithms focus on global properties of gene expression in regulatory networks, and few of them offer intuitive parameters. We ask whether some simple but basic characteristics of microarray datasets can be found to identify potential gene regulatory relationships. Results: Based on expression correlation, expression level variation, and vectors derived from microarray expression levels, we first introduced several novel parameters to measure the characteristics of regulating gene pairs. Subsequently, we used a naïve Bayesian network to integrate these features as well as the functional co-annotation between transcription factors and their target genes. Then, based on the time delay evident in the expression profiles, we were able to predict the existence and the direction of the regulatory relationship, respectively. Conclusions: Several novel parameters have been proposed and integrated to identify regulatory relationships. This new model is shown to be more effective than individual features. We believe that our parametric approach can serve as a fast method for mining regulatory relationships.
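The time-delay idea for inferring the direction of regulation can be sketched with a lagged correlation: if a regulator's profile leads its target's, the correlation peaks at a positive lag. This is a minimal, synthetic illustration; the model's Bayesian integration and co-annotation features are omitted.

```python
import numpy as np

def best_lag(regulator, target, max_lag=5):
    """Scan forward shifts of the target and return (lag, correlation)
    with the highest Pearson correlation; a positive best lag is
    consistent with the regulator acting upstream of the target."""
    n = len(regulator)
    best = (0, -2.0)
    for lag in range(max_lag + 1):
        r = np.corrcoef(regulator[:n - lag], target[lag:])[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

t = np.arange(60)
tf_profile = np.sin(2 * np.pi * t / 20)     # putative regulator profile
gene = np.roll(tf_profile, 3)               # target responds 3 steps later
print(best_lag(tf_profile, gene))           # peak correlation at lag 3
```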
Yuk Yee Leung
BACKGROUND: Using a hybrid approach for gene selection and classification is common, as the results obtained are generally better than performing the two tasks independently. Yet, for some microarray datasets, both the classification accuracy and the stability of the gene sets obtained still have room for improvement. This may be due to the presence of samples with wrong class labels (i.e., outliers). Outlier detection algorithms proposed so far are either not suitable for microarray data or only solve the outlier detection problem on its own. RESULTS: We tackle the outlier detection problem based on a previously proposed Multiple-Filter-Multiple-Wrapper (MFMW) model, which was demonstrated to yield promising results when compared to other hybrid approaches (Leung and Hung, 2010). To incorporate outlier detection and overcome limitations of the existing MFMW model, three new features are introduced in our proposed MFMW-outlier approach: (1) an unbiased external leave-one-out cross-validation framework is developed to replace the internal cross-validation of the previous MFMW model; (2) wrongly labeled samples are identified within the MFMW-outlier model; and (3) a stable set of genes is selected using an L1-norm SVM that removes any redundant genes present. Six binary-class microarray datasets were tested. Compared with outlier detection studies on the same datasets, MFMW-outlier detected all the outliers found in the original paper (for which the data was provided for analysis), and the genes selected after outlier removal were proven to have biological relevance. We also compared MFMW-outlier with PRAPIV (Zhang et al., 2006) on the same synthetic datasets; MFMW-outlier gave better average precision and recall values in three different settings. Lastly, artificially flipped microarray datasets were created by removing our detected outliers and flipping some of the remaining samples' labels. Almost all the 'wrong' (artificially flipped) samples were detected, suggesting…
Kong, Xiangrong; Mas, Valeria; Archer, Kellie J
2008-02-26
With the popularity of DNA microarray technology, multiple groups of researchers have studied the gene expression of similar biological conditions. Different methods have been developed to integrate the results from various microarray studies, though most of them rely on distributional assumptions, such as the t-statistic based, mixed-effects model, or Bayesian model methods. However, often the sample size for each individual microarray experiment is small. Therefore, in this paper we present a non-parametric meta-analysis approach for combining data from independent microarray studies, and illustrate its application on two independent Affymetrix GeneChip studies that compared the gene expression of biopsies from kidney transplant recipients with chronic allograft nephropathy (CAN) to those with normal functioning allograft. The simulation study comparing the non-parametric meta-analysis approach to a commonly used t-statistic based approach shows that the non-parametric approach has better sensitivity and specificity. For the application on the two CAN studies, we identified 309 distinct genes that expressed differently in CAN. By applying Fisher's exact test to identify enriched KEGG pathways among those genes called differentially expressed, we found 6 KEGG pathways to be over-represented among the identified genes. We used the expression measurements of the identified genes as predictors to predict the class labels for 6 additional biopsy samples, and the predicted results all conformed to their pathologist diagnosed class labels. We present a new approach for combining data from multiple independent microarray studies. This approach is non-parametric and does not rely on any distributional assumptions. The rationale behind the approach is logically intuitive and can be easily understood by researchers not having advanced training in statistics. Some of the identified genes and pathways have been reported to be relevant to renal diseases. Further study on the
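A generic nonparametric way to combine evidence across studies, in the same distribution-free spirit (though not necessarily the authors' exact procedure), is to rank genes within each study by the magnitude of their test statistic and average the ranks. The statistics below are made-up numbers; the permutation step used to assess significance is omitted.

```python
import numpy as np

def combined_ranks(study_stats):
    """Rank genes within each study by |statistic| (rank 1 = most
    extreme), then average ranks across studies; a small combined
    rank marks a gene that is consistently extreme in all studies."""
    ranks = [np.argsort(np.argsort(-np.abs(s))) + 1 for s in study_stats]
    return np.mean(ranks, axis=0)

study1 = np.array([2.5, 0.1, -3.0, 0.4])   # per-gene statistics, study 1
study2 = np.array([1.9, -0.2, -2.4, 0.3])  # per-gene statistics, study 2
print(combined_ranks([study1, study2]))    # gene 3 ranks first in both
```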
Hu, Ming; Qin, Zhaohui S.
2009-01-01
In microarray gene expression data analysis, it is often of interest to identify genes that share similar expression profiles with a particular gene such as a key regulatory protein. Multiple studies have been conducted using various correlation measures to identify co-expressed genes. While working well for small datasets, the heterogeneity introduced from increased sample size inevitably reduces the sensitivity and specificity of these approaches. This is because most co-expression relationships do not extend to all experimental conditions. With the rapid increase in the size of microarray datasets, identifying functionally related genes from large and diverse microarray gene expression datasets is a key challenge. We develop a model-based gene expression query algorithm built under the Bayesian model selection framework. It is capable of detecting co-expression profiles under a subset of samples/experimental conditions. In addition, it allows linearly transformed expression patterns to be recognized and is robust against sporadic outliers in the data. Both features are critically important for increasing the power of identifying co-expressed genes in large scale gene expression datasets. Our simulation studies suggest that this method outperforms existing correlation coefficients or mutual information-based query tools. When we apply this new method to the Escherichia coli microarray compendium data, it identifies a majority of known regulons as well as novel potential target genes of numerous key transcription factors. PMID:19214232
DNA Microarray as Part of a Genomic-Assisted Breeding Approach
Vincze, Éva; Bowra, Steve
2010-01-01
In the struggle to achieve global food security, crop breeding retains an important role in crop production. A current trend is the diversification of the aims of crop production, to include an increased awareness of aspects and consequences of food quality. The added emphasis on food and feed... and practical significances, fold changes, validation and possible additional regulatory mechanisms in gene expression. The subject of the fourth section is the applications of DNA microarrays to the study of global gene expression during grain filling in monocot crops, especially barley. We compare large arrays vs... tissue/pathway specific approaches using an example of a focused microarray and how it follows predicted changes during grain development. We describe an extension of the study to field-grown material and conclude that such an approach is able to interpret differences in the gene expression profiles...
Nonparametric variance estimation in the analysis of microarray data: a measurement error approach.
Carroll, Raymond J; Wang, Yuedong
2008-01-01
This article investigates the effects of measurement error on the estimation of nonparametric variance functions. We show that either ignoring measurement error or direct application of the simulation extrapolation, SIMEX, method leads to inconsistent estimators. Nevertheless, the direct SIMEX method can reduce bias relative to a naive estimator. We further propose a permutation SIMEX method which leads to consistent estimators in theory. The performance of both SIMEX methods depends on approximations to the exact extrapolants. Simulations show that both SIMEX methods perform better than ignoring measurement error. The methodology is illustrated using microarray data from colon cancer patients.
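The core SIMEX idea summarized in this abstract can be demonstrated numerically: add extra measurement error at increasing levels λ, track how a naive estimator degrades, and extrapolate back to λ = -1. The sketch below uses simulated data and targets a variance (where the extrapolant is exactly linear); it illustrates the plain SIMEX mechanism, not the authors' permutation variant.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_u = 1.0                                   # known measurement-error SD
x = rng.normal(0.0, 2.0, 5000)                  # true values (unobserved)
w = x + rng.normal(0.0, sigma_u, x.size)        # observed with error

# Naive var(W) overshoots var(X) by sigma_u^2.  SIMEX: inflate the error
# at levels lam, watch the estimate degrade, extrapolate to lam = -1.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([np.var(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, w.size))
                for _ in range(50)])
       for lam in lambdas]

# var(W_lam) = var(X) + (1 + lam) * sigma_u^2 is linear in lam here,
# so a degree-1 extrapolant suffices for this toy target.
coef = np.polyfit(lambdas, est, 1)
simex_var = np.polyval(coef, -1.0)
naive_var = w.var()
```

For general estimators the extrapolant is only approximate (often quadratic), which is exactly the approximation issue the abstract discusses.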
Binder, Hans; Brücker, Jan; Burden, Conrad J
2009-03-05
The problem of inferring accurate quantitative estimates of transcript abundances from gene expression microarray data is addressed. Particular attention is paid to correcting chip-to-chip variations arising mainly as a result of unwanted nonspecific background hybridization to give transcript abundances measured in a common scale. This study verifies and generalizes a model of the mutual dependence between nonspecific background hybridization and the sensitivity of the specific signal using an approach based on the physical chemistry of surface hybridization. We have analyzed GeneChip oligonucleotide microarray data taken from a set of five benchmark experiments including dilution, Latin Square, and "Golden spike" designs. Our analysis concentrates on the important effect of changes in the unwanted nonspecific background inherent in the technology due to changes in total RNA target concentration and/or composition. We find that incremental changes in nonspecific background entail opposite sign incremental changes in the effective specific binding constant. This effect, which we refer to as the "up-down" effect, results from the subtle interplay of competing interactions between the probes and specific and nonspecific targets at the chip surface and in bulk solution. We propose special rules for proper normalization of expression values considering the specifics of the up-down effect. Particularly for normalization one has to level the expression values of invariant expressed probes. Existing heuristic normalization techniques which do not exclude absent probes, level intensities instead of expression values, and/or use low variance criteria for identifying invariant sets of probes lead to biased results. Strengths and pitfalls of selected normalization methods are discussed. We also find that the extent of the up-down effect is modified if RNA targets are replaced by DNA targets, in that microarray sensitivity and specificity are improved via a decrease in
An approximation approach for uncertainty quantification using evidence theory
Bae, Ha-Rok; Grandhi, Ramana V.; Canfield, Robert A
2004-12-01
Over the last two decades, uncertainty quantification (UQ) in engineering systems has been performed within the popular framework of probability theory. However, many scientific and engineering communities realize that there are limitations in using only one framework for quantifying the uncertainty experienced in engineering applications. Recently, evidence theory, also called Dempster-Shafer theory, was proposed to handle limited and imprecise data situations as an alternative to classical probability theory. Adaptation of this theory to large-scale engineering structures is a challenge due to the implicit nature of simulations and excessive computational costs. In this work, an approximation approach is developed to improve the practical utility of evidence theory in UQ analysis. The techniques are demonstrated on composite material structures and an airframe wing aeroelastic design problem.
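As a toy illustration of the evidence-theory machinery this abstract builds on (not the paper's structural application): belief sums the masses of focal sets contained in a hypothesis, plausibility sums the masses of focal sets that merely intersect it. The basic mass assignment below is assumed for the sketch.

```python
def belief(A, masses):
    """Bel(A): total mass of focal elements wholly inside A."""
    return sum(m for B, m in masses.items() if set(B) <= set(A))

def plausibility(A, masses):
    """Pl(A): total mass of focal elements intersecting A."""
    return sum(m for B, m in masses.items() if set(B) & set(A))

# Assumed mass assignment over the frame {a, b, c}; frozensets are
# the focal elements, masses sum to 1.
m = {frozenset("a"): 0.4, frozenset("ab"): 0.3, frozenset("abc"): 0.3}
bel = belief({"a"}, m)          # only {a} is contained in {a}
pl = plausibility({"a"}, m)     # all three focal sets intersect {a}
```

The interval [Bel(A), Pl(A)] — here [0.4, 1.0] — is the evidence-theory analogue of the single probability P(A), which is what makes the framework attractive for limited and imprecise data.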
Function approximation for learning control : a key sample based approach
Kruif, de Bastiaan Johannes
2004-01-01
Two function approximators are introduced in this thesis for use in learning control. These function approximators identify a relation between input and output based on samples. Two different, but closely related function approximators are introduced: the key sample machine and the recursive key sample machine.
Dynamic programming approach to optimization of approximate decision rules
Amin, Talha
2013-02-01
This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure R(T), which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and after that, among these rules, describe all rules with maximum coverage. We can also change the order of optimization. The consideration of irredundant rules only does not change the results of optimization. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
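The uncertainty measure R(T) defined in this abstract — the number of unordered row pairs with different decisions — depends only on the decision column, so it can be computed from decision counts. The toy table below is made up for illustration.

```python
from collections import Counter

def uncertainty(decisions):
    """R(T): number of unordered pairs of rows of T with different
    decisions, computed as all pairs minus same-decision pairs."""
    n = len(decisions)
    same = sum(c * (c - 1) // 2 for c in Counter(decisions).values())
    return n * (n - 1) // 2 - same

# Decision column of a toy table T: rows with decisions 1,1,2,2,2.
r = uncertainty([1, 1, 2, 2, 2])   # 2 * 3 = 6 mixed pairs
```

A subtable is left unpartitioned by the algorithm exactly when its R value is at most the chosen β.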
Cascading A*: a Parallel Approach to Approximate Heuristic Search
Gu, Yan
2014-01-01
In this paper, we propose a new approximate heuristic search algorithm: Cascading A* (CA*), a two-phase algorithm combining A* and IDA* via a new concept, the "envelope ball". CA* is efficient, able to generate approximate and anytime solutions, and is parallel-friendly.
A new approach to spinel ferrites through mean field approximation
Yazdani, A. [Tarbyat Modares University, Tehran P.C 14115-175 (Iran, Islamic Republic of)]. E-mail: yazdania@modares.ac.ir; Jalilian Nosrati, M.R. [Islamic Azad University Central Tehran Branch, Tehran P.C 14168-94351 (Iran, Islamic Republic of); Ghasemi, R. [Islamic Azad University Central Tehran Branch, Tehran P.C 14168-94351 (Iran, Islamic Republic of)
2006-09-15
The magnetic behavior and characteristics of spinel ferrites with regard to exchange interactions are studied. The strength of the interactions has been examined through cation substitution with application of the mean field approximation of the exchange interaction J_ij. Two correlation and approximation parameters have been defined: the correlation length R_c in super-exchange and the magnetic effect of the ion on the electron fluctuation J_0.
An approximate approach to quantum mechanical study of biomacromolecules
Chen, Xihua
method/basis-set levels of the quantum chemical calculation on the MFCC-downhill simplex optimization are also discussed. Finally, the MFCC-downhill simplex method is tested, as a general multiatomic case study, on a molecular system of cyclo-AAGAGG·H2O to optimize the binding structure of the water molecule to the fixed cyclohexapeptide. The MFCC-downhill simplex optimization results are in good agreement with the crystal structure. The MFCC-downhill simplex method should be applicable to optimize the structures of ligands that bind to biomacromolecules such as proteins and DNAs. In Chapter 4, we propose a new approximate method for efficient calculation of biomacromolecular electronic properties, using a Density Matrix (DM) scheme which is integrated with the MFCC approach. In this MFCC-DM method, a biomacromolecule such as a protein is partitioned by an MFCC scheme into properly capped fragments and concaps whose density matrices are calculated by conventional ab initio methods. These sub-system density matrices are then assembled to construct the full system density matrix which is finally employed to calculate the electronic energy, dipole moment, electronic density, electrostatic potential, etc., of the protein using Hartree-Fock or Density Functional Theory methods. By this MFCC-DM method, the self-consistent field (SCF) procedure for solving the full Hamiltonian problem is circumvented. Two implementations of this approach, MFCC-SDM and MFCC-GDM, are discussed. Systematic numerical studies are carried out on a series of extended polyglycines CH3CO-(GLY)n-NHCH3 (n=3-25) and excellent results are obtained. In Chapter 5, we present an improvement of the MFCC-DM method and introduce a pairwise interaction correction (PIC) with which the MFCC-DM method is applicable to the study of a real-world protein with short-range structural complexity such as hydrogen bonding and close contacts. In this MFCC-DM-PIC method, a protein molecule is partitioned into properly capped fragments and
Pathak, Gopal P; Gärtner, Wolfgang
2010-01-01
A DNA microarray-based approach is described for screening metagenomic libraries for the presence of selected genes. The protocol is exemplified for the identification of flavin-binding, blue-light-sensitive biological photoreceptors (BL), based on a homology search in already sequenced, annotated genomes. The microarray carried 149 different 54-mer oligonucleotides, derived from consensus sequences of BL photoreceptors. The array could readily identify targets carrying 4% sequence mismatch, and allowed unambiguous identification of a positive cosmid clone of as little as 10 ng against a background of 25 μg of cosmid DNA. The protocol allows screening of up to 1,200 library clones, each with a ca. 40 kb insert size and at concentrations as low as ca. 20 ng, in a single batch. Calibration and control conditions are outlined. This protocol, when applied to the thermophilic fraction of a soil sample, yielded the identification and functional characterization of a novel, BL-encoding gene that showed a 58% similarity to a known, BL-encoding gene from Kineococcus radiotolerans SRS30216 (similarity values refer to the respective LOV domains).
Optimal angle reduction - a behavioral approach to linear system approximation
Roorda, Berend; Fuhrmann, P.A.
2001-01-01
We investigate the problem of optimal state reduction under minimization of the angle between system behaviors. The angle is defined in a worst-case sense, as the largest angle that can occur between a system trajectory and its optimal approximation in the reduced-order model. This problem is analyz
Gray Joanna
2010-11-01
Background: We report an attempt to extend the previously successful approach of combining SNP (single nucleotide polymorphism) microarrays and DNA pooling (SNP-MaP) employing high-density microarrays. Whereas earlier studies employed a range of Affymetrix SNP microarrays comprising from 10 K to 500 K SNPs, this most recent investigation used the 6.0 chip, which displays 906,600 SNP probes and 946,000 probes for the interrogation of CNVs (copy number variations). The genotyping assay using the Affymetrix SNP 6.0 array is highly demanding on sample quality due to the small feature size, low redundancy, and lack of mismatch probes. Findings: In the first study published so far using this microarray on pooled DNA, we found that pooled cheek swab DNA could not accurately predict real allele frequencies of the samples that comprised the pools. In contrast, the allele frequency estimates using blood DNA pools were reasonable, although inferior compared to those obtained with previously employed Affymetrix microarrays. However, it might be possible to improve performance by developing improved analysis methods. Conclusions: Despite the decreasing costs of genome-wide individual genotyping, the pooling approach may have applications in very large-scale case-control association studies. In such cases, our study suggests that high-quality DNA preparations and lower density platforms should be preferred.
Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach
Collier, Nathan
2011-05-14
We discuss the use of time adaptivity applied to the one-dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity. This robust adaptive time discretization corrects the initial time step size to achieve a user-specified bound on the discretization error and allows time step size variations of several orders of magnitude. In particular, the one-dimensional results presented in this work feature a change of four orders of magnitude in the time step over the entire simulation.
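The error-driven time-step adaptivity summarized above can be sketched generically with step-doubling on forward Euler: compare one full step against two half steps, accept when the estimated local error meets the tolerance, and rescale the step. This is an assumed, minimal controller for illustration, not the paper's estimator or equations.

```python
import math

def adapt_step(f, t, y, dt, tol):
    """One attempted forward-Euler step with step-doubling error control:
    a full step is compared against two half steps; the difference
    estimates the local error, which drives acceptance and step rescaling."""
    full = y + dt * f(t, y)
    half = y + 0.5 * dt * f(t, y)
    two = half + 0.5 * dt * f(t + 0.5 * dt, half)
    err = abs(two - full)
    dt_new = 0.9 * dt * math.sqrt(tol / max(err, 1e-14))  # elementary controller
    if err <= tol:
        return two, t + dt, dt_new          # accept the more accurate value
    return y, t, dt_new                     # reject; retry with smaller dt

# Integrate dy/dt = -y on [0, 2] with the adaptive step.
f = lambda t, y: -y
t, y, dt = 0.0, 1.0, 0.5
while t < 2.0:
    y, t, dt = adapt_step(f, t, y, min(dt, 2.0 - t), 1e-5)
```

The same accept/rescale loop structure carries over to PDE time stepping, where `err` would come from the problem-specific estimator.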
Distinguishing bacterial pathogens of potato using a genome-wide microarray approach.
Aittamaa, M; Somervuo, P; Pirhonen, M; Mattinen, L; Nissinen, R; Auvinen, P; Valkonen, J P T
2008-09-01
A set of 9676 probes was designed for the most harmful bacterial pathogens of potato and tested in a microarray format. Gene-specific probes could be designed for all genes of Pectobacterium atrosepticum, c. 50% of the genes of Streptomyces scabies and c. 30% of the genes of Clavibacter michiganensis ssp. sepedonicus utilizing the whole-genome sequence information available. For Streptomyces turgidiscabies, 226 probes were designed according to the sequences of a pathogenicity island containing important virulence genes. In addition, probes were designed for the virulence-associated nip (necrosis-inducing protein) genes of P. atrosepticum, P. carotovorum and Dickeya dadantii and for the intergenic spacer (IGS) sequences of the 16S-23S rRNA gene region. Ralstonia solanacearum was not included in the study, because it is a quarantine organism and is not presently found in Finland, but a few probes were also designed for this species. The probes contained on average 40 target-specific nucleotides and were synthesized on the array in situ, organized as eight sub-arrays with an identical set of probes which could be used for hybridization with different samples. All bacteria were readily distinguished using a single channel system for signal detection. Nearly all of the c. 1000 probes designed for C. michiganensis ssp. sepedonicus, c. 50% and 40% of the c. 4000 probes designed for the genes of S. scabies and P. atrosepticum, respectively, and over 100 probes for S. turgidiscabies showed significant signals only with the respective species. P. atrosepticum, P. carotovorum and Dickeya strains were all detected with 110 common probes. By contrast, the strains of these species were found to differ in their signal profiles. Probes targeting the IGS region and nip genes could be used to place strains of Dickeya to two groups, which correlated with differences in virulence. Taken together, the approach of using a custom-designed, genome-wide microarray provided a robust means
Verification of Duration Systems Using an Approximation Approach
Riadh Robbana
2003-01-01
We consider the verification problem of invariance properties for timed systems modeled by (extended) Timed Graphs with duration variables. This problem is in the general case undecidable. Nevertheless, we give in this paper a technique extending a given system into another one containing the initial computations as well as additional ones. Then we define a digitization technique allowing the translation from the continuous case to the discrete one. Using this digitization, we show that to each real computation in the initial system corresponds a discrete computation in the extended system. Then, we show that the extended system corresponds to a very close approximation of the initial one, consequently allowing a good analysis of invariance properties of the initial system.
An improved multidisciplinary feasible method using DACE approximation approach
Su Zijian; Xiao Renbin; Zhong Yifang
2005-01-01
The multidisciplinary feasible method (MDF) is the conventional approach to multidisciplinary optimization (MDO) and is well understood by users. It reduces the dimensions of the multidisciplinary optimization problem by using the design variables as independent optimization variables. However, at each iteration of the conventional optimization procedure, multidisciplinary analysis (MDA) is performed numerous times, which results in extreme expense and low optimization efficiency. The intrinsic weakness of MDF is due to the number of times it loops through fixed-point iterations in MDA, which drives us to improve MDF by building inexpensive approximations as surrogates for the expensive MDA. A simple example is presented to demonstrate the usefulness of the improved MDF. Results show that a significant reduction in the number of multidisciplinary analyses required for optimization is obtained compared with the original MDF, and the efficiency of optimization is increased.
Approximate Decoding Approaches for Network Coded Correlated Data
Park, Hyunggon; Frossard, Pascal
2011-01-01
This paper considers a framework where data from correlated sources are transmitted with the help of network coding in ad-hoc network topologies. The correlated data are encoded independently at sensors, and network coding is employed in the intermediate nodes in order to improve the data delivery performance. In such settings, we focus on the problem of reconstructing the sources at the decoder when perfect decoding is not possible due to losses or bandwidth bottlenecks. We first show that the source data similarity can be used at the decoder to permit decoding based on a novel and simple approximate decoding scheme. We analyze the influence of the network coding parameters, and in particular the size of finite coding fields, on the decoding performance. We further determine the optimal field size that maximizes the expected decoding performance as a trade-off between the information loss incurred by limiting the resolution of the source data and the error probability in the reconstructed data. Moreover, we show that the perfo...
Plasmon Pole Approximation within the GW Lanczos approach
Gosselin, Vincent; Rousseau, Bruno; Cote, Michel
2015-03-01
The well-known DFT gap problem is addressed by computational methods that are more resource-intensive in terms of both memory and time requirements. Among other methods, the GW approach has known great success in the field of electronic structure calculations. The main bottlenecks impeding one-shot GW calculations are a sum over all conduction states and an integral over all frequencies that must be carried out. Within an implementation of the GW method based on the Lanczos algorithm, the sum over conduction states is treated with a Sternheimer method, whereas the frequency integral is carried out numerically. In this talk, I will present an implementation of a plasmon-pole model combined with the Lanczos method that allows a treatment of this integral that is computationally favorable.
T. Karthikeyan
2014-05-01
The aim of this research study is efficient gene selection and classification of microarray data using hybrid machine learning algorithms. The advent of microarray technology has enabled researchers to quickly measure the expression levels of thousands of genes in a biological tissue sample in a single experiment. One of the important applications of this technology is to classify tissue samples by their gene expression profiles, identifying numerous types of cancer. Cancer is a group of diseases in which a set of cells shows uncontrolled growth, invading and destroying nearby tissues and spreading to other locations in the body via lymph or blood, and it has become one of the major diseases of the current era. DNA microarrays have turned out to be an effective tool in molecular biology and cancer diagnosis. Microarrays can be used to establish the relative quantity of mRNAs in two or more biological tissue samples for thousands of genes at the same time. As this technique has matured, the accurate analysis and assessment of microarray data has become an important open issue. In the field of medical sciences, multi-category cancer classification plays a major role in classifying cancer types according to gene expression, and the need for cancer classification has become indispensable because the number of cancer victims has been increasing steadily in recent years. To perform this, a combination of an Integer-Coded Genetic Algorithm (ICGA) and the Artificial Bee Colony algorithm (ABC), coupled with an Adaptive Extreme Learning Machine (AELM), is used for gene selection and cancer classification. ICGA is used with the ABC-based AELM classifier to choose an optimal set of genes, which results in an efficient hybrid algorithm that can handle sparse data and sample imbalance. The
Assessing the reliability of amplified RNA used in microarrays: a DUMB table approach.
Bearden, Edward D; Simpson, Pippa M; Peterson, Charlotte A; Beggs, Marjorie L
2006-01-01
A certain minimal amount of RNA from biological samples is necessary to perform a microarray experiment with suitable replication. In some cases, the amount of RNA available is insufficient, necessitating RNA amplification prior to target synthesis. However, there is some uncertainty about the reliability of targets that have been generated from amplified RNA, because of nonlinearity and preferential amplification. This current work develops a straightforward strategy to assess the reliability of microarray data obtained from amplified RNA. The tabular method we developed, which utilises a Down-Up-Missing-Below (DUMB) classification scheme, shows that microarrays generated with amplified RNA targets are reliable within constraints. There was an increase in false negatives because of the need for increased filtering. Furthermore, this analysis method is generic and can be broadly applied to evaluate all microarray data. A copy of the Microsoft Excel spreadsheet is available upon request from Edward Bearden.
Kalograiaki, Ioanna; Euba, Begoña; Proverbio, Davide; Campanero-Rhodes, María A; Aastrup, Teodor; Garmendia, Junkal; Solís, Dolores
2016-06-01
Recognition of bacterial surface epitopes by host receptors plays an important role in the infectious process and is intimately associated with bacterial virulence. Delineation of bacteria-host interactions commonly relies on the detection of binding events between purified bacteria- and host-target molecules. In this work, we describe a combined microarray and quartz crystal microbalance (QCM) approach for the analysis of carbohydrate-mediated interactions directly on the bacterial surface, thus preserving the native environment of the bacterial targets. Nontypeable Haemophilus influenzae (NTHi) was selected as a model pathogenic species not displaying a polysaccharide capsule or O-antigen-containing lipopolysaccharide, a trait commonly found in several important respiratory pathogens. Here, we demonstrate the usefulness of NTHi microarrays for exploring the presence of carbohydrate structures on the bacterial surface. Furthermore, the microarray approach is shown to be efficient for detecting strain-selective binding of three innate immune lectins, namely, surfactant protein D, human galectin-8, and Siglec-14, to different NTHi clinical isolates. In parallel, QCM bacteria-chips were developed for the analysis of lectin-binding kinetics and affinity. This novel QCM approach involves capture of NTHi on lectin-derivatized chips followed by formaldehyde fixation, rendering the bacteria an integrated part of the sensor chip, and subsequent binding assays with label-free lectins. The binding parameters obtained for selected NTHi-lectin pairs provide further insights into the interactions occurring at the bacterial surface.
A visual analytics approach for understanding biclustering results from microarray data
Quintales Luis
2008-05-01
Background: Microarray analysis is an important area of bioinformatics. In the last few years, biclustering has become one of the most popular methods for classifying data from microarrays. Although biclustering can be used in any kind of classification problem, nowadays it is mostly used for microarray data classification. A large number of biclustering algorithms have been developed over the years; however, little effort has been devoted to the representation of the results. Results: We present an interactive framework that helps to infer differences or similarities between biclustering results, to unravel trends and to highlight robust groupings of genes and conditions. These linked representations of biclusters can complement biological analysis and reduce the time spent by specialists on interpreting the results. Within the framework, besides other standard representations, a visualization technique is presented which is based on a force-directed graph where biclusters are represented as flexible overlapped groups of genes and conditions. This microarray analysis framework (BicOverlapper) is available at http://vis.usal.es/bicoverlapper. Conclusion: The main visualization technique, tested with different biclustering results on a real dataset, allows researchers to extract interesting features of the biclustering results, especially the highlighting of overlapping zones that usually represent robust groups of genes and/or conditions. The visual analytics methodology will permit biology experts to study biclustering results without inspecting an overwhelming number of biclusters individually.
A Combinatory Approach for Selecting Prognostic Genes in Microarray Studies of Tumour Survivals
Qihua Tan
2009-01-01
Different from significant gene expression analysis, which looks for genes that are differentially regulated, feature selection in microarray-based prognostic gene expression analysis aims at finding a subset of marker genes that are not only differentially expressed but also informative for prediction. Unfortunately, feature selection in the microarray literature is dominated by the simple heuristic univariate gene-filter paradigm that selects differentially expressed genes according to their statistical significance. We introduce a combinatory feature selection strategy that integrates differential gene expression analysis with the Gram-Schmidt process to identify prognostic genes that are both statistically significant and highly informative for predicting tumour survival outcomes. Empirical application to leukemia and ovarian cancer survival data through within- and cross-study validations shows that the feature space can be largely reduced while achieving improved testing performance.
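A common way to combine a correlation criterion with the Gram-Schmidt process, sketched here as one plausible reading of the strategy (not the authors' code, and on synthetic data): greedily pick the gene most correlated with the outcome, then orthogonalize the remaining genes and the outcome against it before the next pick, so later selections carry new information.

```python
import numpy as np

def gram_schmidt_select(X, y, k):
    """Pick k columns of X greedily: at each step take the column most
    correlated with the current residual of y, then deflate both y and
    the remaining columns against it (Gram-Schmidt orthogonalization)."""
    X = X.astype(float).copy()
    y = y.astype(float).copy()
    selected = []
    for _ in range(k):
        norms = np.linalg.norm(X, axis=0)
        norms[norms < 1e-12] = np.inf          # dead columns score 0
        scores = np.abs(X.T @ y) / norms       # proportional to |corr|
        if selected:
            scores[selected] = -1.0            # never reselect
        j = int(np.argmax(scores))
        selected.append(j)
        q = X[:, j] / np.linalg.norm(X[:, j])
        y = y - (q @ y) * q                    # deflate the outcome
        X = X - np.outer(q, q @ X)             # orthogonalize the rest
        X[:, j] = 0.0
    return selected

# Synthetic check: the outcome is driven by "genes" 3 and 7 only.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))                  # 60 samples x 20 genes
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.normal(size=60)
picked = gram_schmidt_select(X, y, 2)
```

The deflation step is what distinguishes this from a univariate filter: a gene highly correlated with an already selected gene scores low once the shared direction is projected out.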
Wu Qishi; Zhu Mengxia
2008-01-01
Background: The advance in high-throughput genomic technologies including microarrays has demonstrated the potential of generating a tremendous amount of gene expression data for the entire genome. Deciphering transcriptional networks that convey information on intracluster correlations and intercluster connections of genes is a crucial analysis task in the post-sequence era. Most of the existing analysis methods for genome-wide gene expression profiles consist of several steps that o...
Park, Sungjin; Gildersleeve, Jeffrey C; Blixt, Klas Ola;
2012-01-01
In the last decade, carbohydrate microarrays have been core technologies for analyzing carbohydrate-mediated recognition events in a high-throughput fashion. A number of methods have been exploited for immobilizing glycans on the solid surface in a microarray format. This microarray-based technol...
Timmusk Tõnis
2009-09-01
Background: Alterations in brain-derived neurotrophic factor (BDNF) gene expression contribute to serious pathologies such as depression, epilepsy, cancer, Alzheimer's, Huntington's and Parkinson's disease. Therefore, exploring the mechanisms of BDNF regulation is of great clinical importance. Studying BDNF expression remains difficult due to its multiple neural activity-dependent and tissue-specific promoters. Thus, microarray data could provide insight into the regulation of this complex gene. Conventional microarray co-expression analysis is usually carried out by merging the datasets or by confirming the re-occurrence of significant correlations across datasets. However, co-expression patterns can be different under various conditions that are represented by subsets in a dataset. Therefore, assessing co-expression by measuring the correlation coefficient across merged samples of a dataset or by merging datasets might not capture all correlation patterns. Results: In our study, we performed meta-coexpression analysis of publicly available microarray data using BDNF as a "guide-gene", introducing a "subset" approach. The key steps of the analysis included: dividing datasets into subsets with biologically meaningful sample content (e.g. tissue, gender or disease state subsets); analyzing co-expression with the BDNF gene in each subset separately; and confirming co-expression links across subsets. Finally, we analyzed conservation in co-expression with BDNF between human, mouse and rat, and sought conserved over-represented TFBSs in BDNF and BDNF-correlated genes. Correlated genes discovered in this study regulate nervous system development, and are associated with various types of cancer and neurological disorders. Also, several transcription factors identified here have been reported to regulate BDNF expression in vitro and in vivo. Conclusion: The study demonstrates the potential of the "subset" approach in co-expression conservation
Merging Belief Propagation and the Mean Field Approximation: A Free Energy Approach
Riegler, Erwin; Kirkelund, Gunvor Elisabeth; Manchón, Carles Navarro
2013-01-01
We present a joint message passing approach that combines belief propagation and the mean field approximation. Our analysis is based on the region-based free energy approximation method proposed by Yedidia et al. We show that the message passing fixed-point equations obtained with this combination correspond to stationary points of a constrained region-based free energy approximation. Moreover, we present a convergent implementation of these message passing fixed-point equations provided that the underlying factor graph fulfills certain technical conditions. In addition, we show how to include hard...
Emmanuel Y Dotsey
Full Text Available In recent years, high throughput discovery of human recombinant monoclonal antibodies (mAbs) has been applied to greatly advance our understanding of the specificity and functional activity of antibodies against HIV. Thousands of antibodies have been generated and screened in functional neutralization assays, and antibodies associated with cross-strain neutralization and passive protection in primates have been identified. To facilitate this type of discovery, a high throughput screening tool is needed to accurately classify mAbs and their antigen targets. In this study, we analyzed and evaluated a prototype microarray chip comprised of the HIV-1 recombinant proteins gp140, gp120, gp41, and several membrane proximal external region peptides. The protein microarray analysis of 11 HIV-1 envelope-specific mAbs revealed diverse binding affinities and specificities across clades. Half maximal effective concentrations, generated by our chip analysis, correlated significantly (P<0.0001) with concentrations from ELISA binding measurements. Polyclonal immune responses in plasma samples from HIV-1 infected subjects exhibited different binding patterns and reactivity against printed proteins. Examining the totality of the specificity of the humoral response in this way reveals the exquisite diversity and specificity of the humoral response to HIV.
Conceptual "Heat-Driven" approach to the synthesis of DNA oligonucleotides on microarrays.
Grajkowski, A; Cieślak, J; Chmielewski, M K; Marchán, V; Phillips, L R; Wilk, A; Beaucage, S L
2003-12-01
The discovery of deoxyribonucleoside cyclic N-acylphosphoramidites, a novel class of phosphoramidite monomers for solid-phase oligonucleotide synthesis, has led to the development of a number of phosphate protecting groups that can be cleaved from DNA oligonucleotides under thermolytic neutral conditions. These include the 2-(N-formyl-N-methyl)aminoethyl, 4-oxopentyl, 3-(N-tert-butyl)carboxamido-1-propyl, 3-(2-pyridyl)-1-propyl, 2-[N-methyl-N-(2-pyridyl)]aminoethyl, and 4-methylthiobutyl groups. When used for 5'-hydroxyl protection of nucleosides, the analogous 1-phenyl-2-[N-methyl-N-(2-pyridyl)]aminoethyloxycarbonyl group exhibited excellent thermolytic properties, which may permit an iterative "heat-driven" synthesis of DNA oligonucleotides on microarrays. In this regard, progress has been made toward the use of deoxyribonucleoside cyclic N-acylphosphoramidites in solid-phase oligonucleotide syntheses without nucleobase protection. Given that deoxyribonucleoside cyclic N-acylphosphoramidites produce oligonucleotides with heat-sensitive phosphate protecting groups, blocking the 5'-hydroxyl of these monomers with, for example, the thermolabile 1-phenyl-2-[N-methyl-N-(2-pyridyl)]aminoethyloxycarbonyl group may provide a convenient thermo-controlled method for the synthesis of oligonucleotides on microarrays.
Chinnaiyan Arul M
2007-09-01
Full Text Available Abstract Background With the explosion in data generated using microarray technology by different investigators working on similar experiments, it is of interest to combine results across multiple studies. Results In this article, we describe a general probabilistic framework for combining high-throughput genomic data from several related microarray experiments using mixture models. A key feature of the model is the use of latent variables that represent quantities that can be combined across diverse platforms. We consider two methods for estimation of an index termed the probability of expression (POE). The first, reported in previous work by the authors, involves Markov Chain Monte Carlo (MCMC) techniques. The second method is a faster algorithm based on the expectation-maximization (EM) algorithm. The methods are illustrated with application to a meta-analysis of datasets for metastatic cancer. Conclusion The statistical methods described in the paper are available as an R package, metaArray 1.8.1, from Bioconductor at http://www.bioconductor.org/.
A scalar optimization approach for averaged Hausdorff approximations of the Pareto front
Schütze, Oliver; Domínguez-Medina, Christian; Cruz-Cortés, Nareli; Gerardo de la Fraga, Luis; Sun, Jian-Qiao; Toscano, Gregorio; Landa, Ricardo
2016-09-01
This article presents a novel method to compute averaged Hausdorff (Δp) approximations of the Pareto fronts of multi-objective optimization problems. The underlying idea is to directly utilize the scalar optimization problem that is induced by the Δp performance indicator. This method can be viewed as a certain set-based scalarization approach and can be addressed both by mathematical programming techniques and by evolutionary algorithms (EAs). In this work, the focus is on the latter, where a first single-objective EA for such Δp approximations is proposed. Finally, the strength of the novel approach is demonstrated on some bi-objective benchmark problems with different shapes of the Pareto front.
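The averaged Hausdorff indicator used in this method combines the generational distance (GD_p) and inverted generational distance (IGD_p) between two finite point sets. A minimal sketch of its computation in Python with NumPy (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def avg_hausdorff(A, B, p=2):
    """Averaged Hausdorff distance Delta_p between finite point sets A and B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    # matrix of pairwise Euclidean distances between the two sets
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    gd = np.mean(np.min(D, axis=1) ** p) ** (1.0 / p)   # GD_p:  A toward B
    igd = np.mean(np.min(D, axis=0) ** p) ** (1.0 / p)  # IGD_p: B toward A
    return max(gd, igd)

# archive vs. reference front: identical sets are at distance 0
print(avg_hausdorff([[0, 0], [1, 1]], [[0, 0], [1, 1]]))  # → 0.0
print(avg_hausdorff([[0, 0]], [[3, 4]]))                  # → 5.0
```

An EA in the spirit of the abstract would use this value directly as the (set-valued) fitness of a candidate archive against a reference front.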
Jin, Jinshuang, E-mail: jsjin@hznu.edu.cn [Department of Physics, Hangzhou Normal University, Hangzhou 310036 (China); Li, Jun [Department of Physics, Hangzhou Normal University, Hangzhou 310036 (China); College of Physics and Electronic Engineering, Dezhou University, Dezhou 253023 (China); Liu, Yu [State Key Laboratory for Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083 (China); Li, Xin-Qi, E-mail: lixinqi@bnu.edu.cn [State Key Laboratory for Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083 (China); Department of Physics, Beijing Normal University, Beijing 100875 (China); Department of Chemistry, Hong Kong University of Science and Technology, Kowloon (Hong Kong); Yan, YiJing, E-mail: yyan@ust.hk [Department of Chemistry, Hong Kong University of Science and Technology, Kowloon (Hong Kong); Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei, Anhui 230026 (China)
2014-06-28
Beyond the second-order Born approximation, we propose an improved master equation approach to quantum transport under the self-consistent Born approximation. The basic idea is to replace the free Green's function in the tunneling self-energy diagram by an effective reduced propagator under the Born approximation. This simple modification has remarkable consequences. It not only recovers the exact results for quantum transport through noninteracting systems under arbitrary voltages, but also predicts the challenging nonequilibrium Kondo effect. Compared to the nonequilibrium Green's function technique that formulates the calculation of specific correlation functions, the master equation approach contains richer dynamical information, allowing more efficient studies of, for example, the shot noise and full counting statistics.
Series-based approximate approach of optimal tracking control for nonlinear systems with time-delay
Gongyou Tang; Mingqu Fan
2008-01-01
The optimal output tracking control (OTC) problem for nonlinear systems with time-delay is considered. Using a series-based approximate approach, the original OTC problem is transformed into the iterative solution of linear two-point boundary value problems without time-delay. The OTC law obtained consists of analytical linear feedback and feedforward terms and a nonlinear compensation term with an infinite series of the adjoint vectors. By truncating a finite sum of the adjoint vector series, an approximate optimal tracking control law is obtained. A reduced-order reference input observer is constructed to make the feedforward term physically realizable. Simulation examples are used to test the validity of the series-based approximate approach.
May, Beverly A.; And Others
1981-01-01
Teaching ideas related to the instruction of decimal division as the opposite of multiplication, an approach to approximating logarithms that helps reveal their properties, and the simple creation of algebraic equations with radical expressions for use as exercises and test questions are presented. (MP)
Fedorova, A.; Zeitlin, M.; Parsa, Z.
2000-03-31
In this paper the authors present applications of methods from wavelet analysis to polynomial approximations for a number of accelerator physics problems. Using a variational approach, in the general case they obtain the solution as a multiresolution (multiscale) expansion on the basis of compactly supported wavelets. They give an extension of their results to the cases of periodic orbital particle motion and arbitrary variable coefficients. They then consider a more flexible variational method based on a biorthogonal wavelet approach. They also consider a different variational approach, which is applied to each scale.
A study on the quintic nonlinear beam vibrations using asymptotic approximate approaches
Sedighi, Hamid M.; Shirazi, Kourosh H.; Attarzadeh, Mohammad A.
2013-10-01
This paper promotes the application of modern analytical approaches to the governing equation of transversely vibrating quintic nonlinear beams. Four methods are studied: the stiffness analytical approximation method, the Homotopy Perturbation Method with an Auxiliary Term, the Max-Min Approach (MMA) and the Iteration Perturbation Method (IPM). These powerful analytical approaches are used to obtain the nonlinear frequency-amplitude relationship for the dynamic behavior of vibrating beams with quintic nonlinearity. It is demonstrated that the first terms in the series expansions of all methods are sufficient to obtain a highly accurate solution. Finally, a numerical example is conducted to verify the integrity of the asymptotic methods.
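The flavor of such frequency-amplitude relationships can be seen on a single-mode quintic oscillator u'' + u + εu⁵ = 0 (a toy reduction, not the paper's beam model; ε and the amplitude A below are illustrative). First-order harmonic balance with u = A·cos(ωt) gives ω² ≈ 1 + 5εA⁴/8, which can be checked against direct numerical integration:

```python
import numpy as np

def hb_frequency(A, eps):
    # first-order harmonic balance for u'' + u + eps*u^5 = 0:
    # cos^5 = (10 cos t + 5 cos 3t + cos 5t)/16 -> fundamental term (5/8) A^5
    return np.sqrt(1.0 + 5.0 * eps * A**4 / 8.0)

def numeric_frequency(A, eps, dt=1e-3):
    # RK4 from u(0)=A, u'(0)=0; the velocity first returns to zero after
    # half a period, so omega = pi / t_half
    def f(y):
        return np.array([y[1], -y[0] - eps * y[0]**5])
    y, t = np.array([A, 0.0]), 0.0
    while True:
        k1 = f(y); k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
        y2 = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
        if y[1] < 0.0 and y2[1] >= 0.0:   # velocity zero-crossing: half period
            return np.pi / t
        y = y2

print(hb_frequency(1.0, 0.2))       # ≈ 1.061
print(numeric_frequency(1.0, 0.2))  # close to the harmonic-balance value
```

This illustrates the abstract's claim that the first term of the expansion is already quite accurate for moderate nonlinearity.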
Approximate-master-equation approach for the Kinouchi-Copelli neural model on networks
Wang, Chong-Yang; Wu, Zhi-Xi; Chen, Michael Z. Q.
2017-01-01
In this work, we use the approximate-master-equation approach to study the dynamics of the Kinouchi-Copelli neural model on various networks. By categorizing each neuron in terms of its state and also the states of its neighbors, we are able to uncover how the coupled system evolves with respect to time by directly solving a set of ordinary differential equations. In particular, we can easily calculate the statistical properties of the time evolution of the network instantaneous response, the network response curve, the dynamic range, and the critical point in the framework of the approximate-master-equation approach. The possible application of the proposed theoretical approach to other spreading phenomena is briefly discussed.
Park, Sungjin; Gildersleeve, Jeffrey C; Blixt, Klas Ola
2012-01-01
In the last decade, carbohydrate microarrays have been core technologies for analyzing carbohydrate-mediated recognition events in a high-throughput fashion. A number of methods have been exploited for immobilizing glycans on the solid surface in a microarray format. This microarray-based technology has been widely employed for rapid analysis of the glycan binding properties of lectins and antibodies, the quantitative measurements of glycan-protein interactions, detection of cells and pathogens, identification of disease-related anti-glycan antibodies for diagnosis, and fast assessment of substrate specificities of glycosyltransferases. This review covers the construction of carbohydrate microarrays, detection methods of carbohydrate microarrays and their applications in biological and biomedical research.
A general approach for cache-oblivious range reporting and approximate range counting
Afshani, Peyman; Hamilton, Chris; Zeh, Norbert
2010-01-01
We present cache-oblivious solutions to two important variants of range searching: range reporting and approximate range counting. Our main contribution is a general approach for constructing cache-oblivious data structures that provide relative (1+ε)-approximations for a general class of range counting queries. This class includes three-sided range counting in the plane, 3-d dominance counting, and 3-d halfspace range counting. The constructed data structures use linear space and answer queries in the optimal query bound of O(logB(N/K)) block transfers in the worst case, where K is the number of points in the query range. As a corollary, we also obtain the first approximate 3-d halfspace range counting and 3-d dominance counting data structures with a worst-case query time of O(log(N/K)) in internal memory. An easy but important consequence of our main result is the existence of …-space cache…
Lim, C. W.; Wu, B. S.; He, L. H.
2001-12-01
A novel approach is presented for obtaining approximate analytical expressions for the dispersion relation of periodic wavetrains in the nonlinear Klein-Gordon equation with even potential function. By coupling linearization of the governing equation with the method of harmonic balance, we establish two general analytical approximate formulas for the dispersion relation, which depends on the amplitude of the periodic wavetrain. These formulas are valid for small as well as large amplitudes of the wavetrain. They are also applicable to the large-amplitude regime of the nonlinear system under study, for which the conventional perturbation method fails to provide any solution. Three examples are demonstrated to illustrate the excellent agreement of the proposed formulas with the exact solutions of the dispersion relation. (c) 2001 American Institute of Physics.
Luo LG
2004-07-01
Full Text Available CONTEXT: Thyrotropin releasing hormone (TRH), originally identified as a hypothalamic hormone, is expressed in the pancreas. The effects of TRH, such as inhibiting amylase secretion in rats through a direct effect on acinar cells, enhancing basal glucagon secretion from isolated perfused rat pancreas, and potentiating glucose-stimulated insulin secretion in perfused rat islets and insulin-secreting clonal beta-cell lines, suggest that TRH may play a role in the pancreas. TRH also enlarged the pancreas and increased pancreatic DNA content, but deletion of TRH gene expression caused hyperglycemia in mice, suggesting that TRH may play a critical role in pancreatic development; however, the biological mechanisms of TRH in the adult pancreas remain unclear. OBJECTIVES: This study explored the effect of TRH on the rat pancreas. SUBJECTS: Four male Sprague-Dawley rats (200-250 g) were given 10 microg/kg BW of TRH intraperitoneally on the 1st and 3rd days and were sacrificed on the 7th day. Four rats of the same strain without TRH injection served as controls. MAIN OUTCOME MEASURES: Wet pancreatic weights were measured. Pancreatic tissues were homogenized and extracted. The insulin levels of the extracts were measured by ELISA. Total RNA from the pancreases was fluorescently labeled and hybridized to a microarray with 1,081 spotted genes. RESULTS: TRH increased pancreatic wet weight and insulin content. About 75% of the 1,081 genes were detected in the pancreas. TRH up-regulated 99 genes and down-regulated 76 genes. The administration of TRH induced various types of gene expression, such as G-protein coupled receptor (GPCR) and signal transduction related genes (GPCR kinase 4, transducin beta subunit 5, arrestin beta1, MAPK3, MAPK5, c-Src kinase, PKCs, PI3 kinase), growth factors (PDGF-B, IGF-2, IL-18, IGF-1, IL-2, IL-6, endothelin-1) and apoptotic factors (Bcl2, BAD, Bax). CONCLUSION: Reprogramming of the transcriptome may be a way for TRH to regulate pancreatic cellular functions.
New approach for alpha decay half-lives of superheavy nuclei and applicability of WKB approximation
Dong, Jianmin; Zuo, Wei; Scheid, Werner
2011-07-01
The α decay half-lives of recently synthesized superheavy nuclei (SHN) are calculated by applying a new approach which estimates them, with the help of their neighbors, from some simple formulas. The estimated half-life values are in very good agreement with the experimental ones, indicating the reliability of the experimental observations and measurements to a large extent, as well as the predictive power of our approach. The second part of this work tests the applicability of the Wentzel-Kramers-Brillouin (WKB) approximation for the quantum-mechanical tunneling probability. We calculated the accurate barrier penetrability for alpha decay, along with proton and cluster radioactivity, by numerically solving the Schrödinger equation. The calculated results are compared with those of the WKB method, and the WKB approximation is found to work well for the three physically analogous decay modes.
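The WKB penetrability discussed here is P ≈ exp(−G), with the action integral G = (2/ħ)∫√(2μ(V(r)−Q)) dr between the turning points. For a pure Coulomb barrier V(r) = 2Z_d·e²/r the integral has a well-known closed form, which makes a convenient check on a numerical quadrature. A sketch (the Q value, channel radius and reduced mass below are illustrative placeholders, not the paper's data):

```python
import numpy as np

HBARC = 197.327   # hbar*c in MeV*fm
E2 = 1.440        # e^2 in MeV*fm (Gaussian units)

def gamow_factor(Zd, Q, mu, R, n=200000):
    """WKB action 2/hbar * integral of sqrt(2*mu*(V-Q)) through V(r)=2*Zd*E2/r."""
    b = 2.0 * Zd * E2 / Q                  # outer classical turning point V(b)=Q
    r = np.linspace(R, b, n)
    k = np.sqrt(2.0 * mu * np.clip(2.0 * Zd * E2 / r - Q, 0.0, None)) / HBARC
    return 2.0 * np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(r))  # trapezoid rule

def gamow_factor_exact(Zd, Q, mu, R):
    # closed form of the same integral, with x = R/b
    b = 2.0 * Zd * E2 / Q
    x = R / b
    return (2.0 / HBARC) * np.sqrt(2.0 * mu / Q) * 2.0 * Zd * E2 * (
        np.arccos(np.sqrt(x)) - np.sqrt(x * (1.0 - x)))

# illustrative numbers roughly in the alpha-decay regime
print(gamow_factor(82, 8.95, 3600.0, 9.0))
print(gamow_factor_exact(82, 8.95, 3600.0, 9.0))  # the two agree closely
```

Realistic calculations (as in the abstract) add nuclear and centrifugal terms to V(r), for which the integral must be done numerically as above.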
Rough Set Approach to Approximation Reduction in Ordered Decision Table with Fuzzy Decision
Xiaoyan Zhang
2011-01-01
Full Text Available In practice, some information systems are based on dominance relations, and the values of the decision attribute are fuzzy. Thus, it is meaningful to study attribute reductions in ordered decision tables with fuzzy decision. In this paper, upper and lower approximation reductions are proposed for this kind of complicated decision table. Some important properties are discussed. The judgement theorems and discernibility matrices associated with the two reductions are obtained, from which the theory of attribute reduction is derived for ordered decision tables with fuzzy decision. Moreover, a rough set approach to upper and lower approximation reductions is presented for ordered decision tables with fuzzy decision as well. An example illustrates the validity of the approach, and the results show that it is an efficient tool for knowledge discovery in ordered decision tables with fuzzy decision.
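The lower and upper approximations underlying such reductions can be sketched for the classical case, where each object's granule is the set of objects indiscernible from (or dominating) it. A minimal Python illustration with a made-up toy table (the partition and target set are purely illustrative):

```python
def rough_approximations(universe, granule, X):
    """Lower/upper approximation of X under a relation given by granule(u)."""
    lower = {u for u in universe if granule(u) <= X}   # granule entirely inside X
    upper = {u for u in universe if granule(u) & X}    # granule meets X
    return lower, upper

# toy information table: six objects partitioned by one attribute value
universe = {1, 2, 3, 4, 5, 6}
classes = [{1, 2}, {3, 4}, {5, 6}]
granule = lambda u: next(c for c in classes if u in c)

X = {1, 2, 3}                            # a (possibly inexact) decision class
low, up = rough_approximations(universe, granule, X)
print(low)   # {1, 2}: objects certainly in X
print(up)    # {1, 2, 3, 4}: objects possibly in X
```

In the ordered (dominance-based) setting of the abstract, the granules become dominating/dominated sets and X is replaced by fuzzy decision cuts, but the lower/upper structure is the same.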
Approximating Spline filter: New Approach for Gaussian Filtering in Surface Metrology
Hao Zhang
2009-10-01
Full Text Available This paper presents a new spline filter, named the approximating spline filter, for surface metrology. The purpose is to provide a new approach to the Gaussian filter and to evaluate the characteristics of an engineering surface more accurately and comprehensively. First, the configuration of the approximating spline filter is investigated, showing that this filter inherits all the merits of an ordinary spline filter, e.g. no phase distortion and no end distortion. Then, the selection of the approximating coefficient is discussed, which specifies an important property of this filter: its convergence to the Gaussian filter. The maximum approximation deviation between them can be controlled below 4.36% and, moreover, decreased to less than 1% when cascaded. When extended to a two-dimensional (2D) filter, the transmission deviation lies within −0.63% to +1.48%. It is proved that the approximating spline filter not only achieves the transmission characteristic of the Gaussian filter, but also alleviates the end effect on a data sequence. The whole computational procedure is illustrated and applied to a workpiece to acquire the mean line, and to a simulated surface to acquire the mean surface. These experimental results indicate that this filtering algorithm spends only 8 ms for 11200 profile points and 2.3 s for 2000 × 2000 form data.
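For reference, the Gaussian mean-line filter that the approximating spline filter converges to uses the weighting function s(x) = (1/αλc)·exp(−π(x/αλc)²) with α = √(ln 2/π), so that the transmission at the cutoff wavelength λc is 50%. A minimal discrete sketch (sampling step and cutoff values are illustrative; end effects are left untreated, which is exactly the weakness the spline filter addresses):

```python
import numpy as np

def gaussian_mean_line(z, dx, cutoff):
    """Discrete Gaussian mean line of profile z sampled at spacing dx."""
    alpha = np.sqrt(np.log(2.0) / np.pi)
    half = int(np.ceil(cutoff / dx))            # truncate kernel at +-1 cutoff
    x = np.arange(-half, half + 1) * dx
    s = np.exp(-np.pi * (x / (alpha * cutoff)) ** 2)
    s /= s.sum()                                # unit sum preserves the mean level
    return np.convolve(z, s, mode="same")

# a flat profile is reproduced exactly away from the ends
z = np.full(2000, 3.5)                          # e.g. heights in micrometres
m = gaussian_mean_line(z, dx=0.005, cutoff=0.8)  # 5 um spacing, 0.8 mm cutoff
print(np.allclose(m[160:-160], 3.5))            # → True (interior only)
```

A sinusoid with wavelength equal to the cutoff comes through the mean line at about half its amplitude, matching the 50% transmission characteristic mentioned in the abstract.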
Tagging with DHARMA, a DHT-based Approach for Resource Mapping through Approximation
Aiello, Luca Maria; Ruffo, Giancarlo; Schifanella, Rossano; 10.1109/IPDPSW.2010.5470931
2011-01-01
We introduce collaborative tagging and faceted search on structured P2P systems. Since a trivial, brute-force mapping of an entire folksonomy over a DHT-based system may reduce scalability, we propose an approximated graph maintenance approach. Evaluations on real data from Last.fm prove that such strategies reduce vocabulary noise (i.e., representation overfitting phenomena) and hotspot issues.
Opper, Manfred; Winther, Ole
2001-01-01
We develop an advanced mean field method for approximating averages in probabilistic data models that is based on the Thouless-Anderson-Palmer (TAP) approach of disorder physics. In contrast to conventional TAP, where knowledge of the distribution of couplings between the random variables is required, our method adapts to the concrete couplings. We demonstrate the validity of our approach, which is so far restricted to models with nonglassy behavior, by replica calculations for a wide class of models as well as by simulations for a real data set.
Lessa-Aquino, Carolina; Borges Rodrigues, Camila; Pablo, Jozelyn; Sasaki, Rie; Jasinskas, Algis; Liang, Li; Wunder, Elsio A; Ribeiro, Guilherme S; Vigil, Adam; Galler, Ricardo; Molina, Douglas; Liang, Xiaowu; Reis, Mitermayer G; Ko, Albert I; Medeiros, Marco Alberto; Felgner, Philip L
2013-01-01
Leptospirosis is a widespread zoonotic disease worldwide. The lack of an adequate laboratory test is a major barrier for diagnosis, especially during the early stages of illness, when antibiotic therapy is most effective. Therefore, there is a critical need for an efficient diagnostic test for this life-threatening disease. In order to identify new targets that could be used as diagnostic markers for leptospirosis, we constructed a protein microarray chip comprising 61% of the Leptospira interrogans proteome and investigated the IgG response from 274 individuals, including 80 acute-phase patients, 80 convalescent-phase patients and 114 healthy control subjects from regions with endemic, high endemic, and no endemic transmission of leptospirosis. A nitrocellulose line blot assay was performed to validate the accuracy of the protein microarray results. We found 16 antigens that can discriminate between acute cases and healthy individuals from a region with high endemic transmission of leptospirosis, and 18 antigens that distinguish convalescent cases. Some of the antigens identified in this study, such as LipL32, the non-identical domains of the Lig proteins, GroEL, and Loa22, are already known to be recognized by sera from human patients, thus serving as proof-of-concept for the serodiagnostic antigen discovery approach. Several novel antigens were identified, including the hypothetical protein LIC10215, which showed good sensitivity and specificity rates for both acute- and convalescent-phase patients. Our study is the first large-scale evaluation of immunodominant antigens associated with naturally acquired leptospiral infection, and novel as well as known serodiagnostic leptospiral antigens recognized by antibodies in the sera of leptospirosis cases were identified. The novel antigens identified here may have potential use in both the development of new tests and the improvement of currently available assays for diagnosing this neglected tropical disease. Further…
Claudio Nicolini
2016-01-01
Full Text Available Using the New England BioLabs (NEBL) SNAP-based gene expression system in conjunction with our "sub-micron arrays" (Anodic Porous Alumina and/or Kapton-based nanopores), we exploit our proprietary microarray scanner (DNASER, DNA analyzer) and label-free nanotechnologies to carry out the following tasks: (1) construction of SNAP-based gene nanoarrays, using a gold surface coated for 10 minutes with a 2% solution of 3-aminopropyltriethoxysilane (APTES) in acetone, rinsed in acetone and dried with filtered air. Full-length complementary DNAs (cDNAs) for onco-suppressor 53 (p53), cyclin-dependent kinase 2 (CDK2), the SH2 (Src Homology 2) domain of the proto-oncogene tyrosine-protein kinase (Src) and tyrosine-protein phosphatase non-receptor type 11 (PTPN11) were amplified and cloned. The printing mix was prepared with 0.66 μg/μl of the DNA capture reagent BG-PEG-NH2 for the one-step synthesis of SNAP-tag substrates from esters on labels or surfaces; (2) determination of protein-protein interactions for the chosen cancer following the identification of leader genes (or hub genes), investigated with theoretical ab initio bioinformatics analysis using in-house software and algorithms, and then experimentally confirmed via the DNASER. These genes are expressed by PURE (Protein synthesis Using Recombinant Elements) Express in piezo-microdispensed spots of less than 1 micron size and then characterized via label-free proprietary Autoflex mass spectrometry (MS) integrated with ad hoc software, namely the Spectrum Analyzer and Data Set manager (SpADS), and via proprietary Quartz Crystal Microbalance with Dissipation factor monitoring (QCM_D) nanoconductimetry, enabling the description of properties such as changes in frequency and conductance, viscoelasticity and dissipation factor. Solutions without DNA (called Master Mix, MM) were prepared in printing mix as negative controls. Negative controls were prepared with a varying concentration range of SNAP capture reagent. As a positive control (for…
Jiang Xu-Cheng
2006-11-01
Full Text Available Abstract Background Currently available vaccines against leptospirosis are of low efficacy, have an unacceptable side-effect profile, do not induce long-term protection, and provide no cross-protection against the different serovars of pathogenic leptospira. The current major focus in leptospirosis research is to discover conserved protective antigens that may elicit longer-term protection against a broad range of Leptospira. There is a need to screen vaccine candidate genes in the genome of Leptospira interrogans. Results Bioinformatics, comparative genomic hybridization (CGH) analysis and transcriptional analysis were used to identify vaccine candidates in the genome of L. interrogans serovar Lai strain #56601. Of a total of 4727 open reading frames (ORFs), 616 genes were predicted to encode surface-exposed proteins by P-CLASSIFIER combined with signal peptide prediction, α-helix transmembrane topology prediction, integral β-barrel outer membrane protein and lipoprotein prediction, as well as by retaining the genes shared by the two sequenced L. interrogans genomes and by subtracting genes with human homologues. A DNA microarray of L. interrogans strain #56601 was constructed for CGH analysis and transcriptome analysis in vitro. Three hundred and seven differential genes were identified in ten pathogenic serovars by CGH; 1427 genes had high transcriptional levels (Cy3 signal ≥ 342 and Cy5 signal ≥ 363.5, respectively). There were 565 genes in the intersection between the set encoding surface-exposed proteins and the set of 307 differential genes. The number of genes in the intersection between this set of 565 and the set of 1427 highly transcriptionally active genes was 226. These 226 genes were thus identified as putative vaccine candidates. The proteins encoded by these genes are not only potentially surface-exposed in the bacterium, but also conserved in the two sequenced L. interrogans genomes. Moreover, these genes are conserved among ten epidemic…
Michaela Haider
2016-02-01
A double-hybridization approach was developed for the enzyme-free detection of specific mRNA of a housekeeping gene. Targeted mRNA was immobilized by hybridization to complementary DNA capture probes spotted onto a microarray. A second hybridization step of Cy5-conjugated label DNA to another section of the mRNA enabled specific labeling of the target. Thus, enzymatic artifacts could be avoided by omitting transcription and amplification steps. This manuscript describes the development of capture probe molecules used in the transcription- and amplification-free analysis of RPLP0 mRNA in isolated total RNA. An increase in specific signal was found with increasing length of the target-specific section of capture probes. Unspecific signal comprising spot autofluorescence and unspecific label binding did not correlate with the capture length. An additional spacer between the specific part of the capture probe and the substrate attachment site increased the signal significantly only on a short capture probe of approximately 30 nt length.
The Bayesian approximation error approach for electrical impedance tomography—experimental results
Nissinen, A.; Heikkinen, L. M.; Kaipio, J. P.
2008-01-01
Inverse problems can be characterized as problems that tolerate measurement and modelling errors poorly. While the measurement error issue has been widely considered as a solved problem, the modelling errors have remained largely untreated. The approximation and modelling errors can, however, be argued to dominate the measurement errors in most applications. There are several applications in which the temporal and memory requirements dictate that the computational complexity of the forward solver be radically reduced. For example, in process tomography the reconstructions have to be carried out typically in a few tens of milliseconds. Recently, a Bayesian approach for the treatment of approximation and modelling errors for inverse problems has been proposed. This approach has proven to work well in several classes of problems, but the approach has not been verified in any problem with real data. In this paper, we study two different types of modelling errors in the case of electrical impedance tomography: one related to model reduction and one concerning partially unknown geometry. We show that the approach is also feasible in practice and may facilitate the reduction of the computational complexity of the nonlinear EIT problem at least by an order of magnitude.
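The approximation error idea described above can be sketched numerically. The following is a minimal, hypothetical illustration, not the authors' EIT implementation: a scalar nonlinear "accurate" forward model stands in for a fine solver, a linear "reduced" model stands in for the cheap one, and the statistics of their difference, sampled off-line over the prior, are folded into the noise model before estimation.

```python
import random

random.seed(3)

def accurate_forward(x):
    # "Accurate" forward model (a stand-in for, e.g., a fine finite element EIT solver).
    return x + 0.1 * x ** 2

def reduced_forward(x):
    # Reduced model: cheaper, but drops the nonlinear term (the modelling error).
    return x

# Off-line phase: sample the approximation error eps(x) = accurate(x) - reduced(x)
# over draws from the prior, and record its mean and variance.
def prior_draw():
    return random.gauss(2.0, 0.5)

eps = [accurate_forward(x) - reduced_forward(x) for x in (prior_draw() for _ in range(5000))]
eps_mean = sum(eps) / len(eps)
eps_var = sum((e - eps_mean) ** 2 for e in eps) / (len(eps) - 1)

def map_estimate(y, noise_var=0.0025, use_aem=True):
    """Grid-search MAP estimate using only the reduced model.
    With use_aem=True the approximation error statistics are added to the
    measurement noise model (the Bayesian approximation error approach)."""
    shift = eps_mean if use_aem else 0.0
    total_var = noise_var + (eps_var if use_aem else 0.0)
    prior_mean, prior_var = 2.0, 0.25
    grid = [i / 1000 for i in range(5000)]
    def neg_log_post(x):
        return ((y - reduced_forward(x) - shift) ** 2) / total_var \
             + ((x - prior_mean) ** 2) / prior_var
    return min(grid, key=neg_log_post)

x_true = 2.2
y = accurate_forward(x_true) + random.gauss(0.0, 0.05)  # data generated by the accurate model
x_naive = map_estimate(y, use_aem=False)  # reduced model, modelling error ignored
x_aem = map_estimate(y, use_aem=True)     # reduced model, modelling error accounted for
```

With the error statistics included, the estimate from the cheap model recovers most of the accuracy lost to model reduction, which is the effect the paper verifies with real EIT data.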
Valor, A; Robledo, L M
2000-01-01
Kamlah's second-order method for approximate particle number projection is applied for the first time to variational calculations with effective forces. High spin states of normal and superdeformed nuclei have been calculated with the finite range density dependent Gogny force for several nuclei. Advantages and drawbacks of the Kamlah second-order method as compared to the Lipkin-Nogami recipe are thoroughly discussed. We find that the Lipkin-Nogami prescription occasionally may fail to find the right energy minimum in the strong pairing regime and that Kamlah's second-order approach, though providing better results than the LN one, may break down in some limiting situations.
A variational approach to approximate particle number projection with effective forces
Valor, A; Robledo, L M
1999-01-01
Kamlah's second order method for approximate particle number projection is applied for the first time to variational calculations with effective forces. High spin states of normal and superdeformed nuclei have been calculated with the finite range density dependent Gogny force for several nuclei. Advantages and drawbacks of the Kamlah second order method as compared to the Lipkin-Nogami recipe are thoroughly discussed. We find that the Lipkin-Nogami prescription occasionally may fail to find the right energy minimum in the strong pairing regime and that Kamlah's second order approach, though providing better results than the LN one, may break down in some limiting situations.
Rough Set Based K-Exception Approach to Approximate Rule Reduction
ZHANG Feng; ZHANG Xianfeng; QIN Zhiguang; LIU Jinde
2003-01-01
There are rules referring to infrequent instances after attribute reduction and value reduction with traditional methods. A rough set (RS) based k-exception approach (RSKEA) to rule reduction is presented. Its main idea lies in a two-phase RS-based rule reduction. An ordinary decision table is attained through a general method of RS knowledge reduction in the first phase. Then a k-exception candidate set is nominated according to the decision table. RS rule reduction is employed for the reformed source data set, which removes all the instances included in the k-exception set. We apply the approach to the automobile database. Results show that it can reduce the number and complexity of rules with an adjustable conflict rate, which contributes to approximate rule reduction.
Kotelnikova, O.A.; Prudnikov, V.N. [Physical Faculty, Lomonosov State University, Department of Magnetism, Moscow (Russian Federation); Rudoy, Yu.G., E-mail: rudikar@mail.ru [People' s Friendship University of Russia, Department of Theoretical Physics, Moscow (Russian Federation)
2015-06-01
The aim of this paper is to generalize the microscopic approach to the description of the magnetocaloric effect (MCE) started by Kokorina and Medvedev (E.E. Kokorina, M.V. Medvedev, Physica B 416 (2013) 29) by applying it to an anisotropic ferromagnet of the “easy axis” type in two settings: with the external magnetic field parallel and perpendicular to the axis of easy magnetization. In the latter case a field-induced (or spin-reorientation) phase transition appears, occurring at a critical value of the external magnetic field. This value is proportional to the exchange anisotropy constant at low temperatures, but with rising temperature it may be renormalized (as a rule, in proportion to the magnetization). We use the explicit form of the Hamiltonian of the anisotropic ferromagnet and apply the widely used random phase approximation (RPA) (also known as the Tyablikov approximation in the Green function method), which is more accurate than the well-known molecular field approximation (MFA). It is shown that in the first case the magnitude of the MCE is raised, whereas in the second the MCE disappears owing to compensation of the critical field renormalized with the magnetization.
Elton Rexhepaj
AIMS: Immunohistochemistry is routine practice in clinical cancer diagnostics and an established technology for tissue-based research in biomarker discovery. Tedious manual assessment of immunohistochemically stained tissue needs to be fully automated to take full advantage of the potential for high-throughput analyses enabled by tissue microarrays and digital pathology. Such automated tools also need to be reproducible for different experimental conditions and biomarker targets. In this study we present a novel supervised melanoma-specific pattern recognition approach that is fully automated and quantitative. METHODS AND RESULTS: Melanoma samples were immunostained for the melanocyte-specific target Melan-A. Images representing immunostained melanoma tissue were then digitally processed to segment regions of interest, highlighting Melan-A-positive and -negative areas. Color deconvolution was applied to each region of interest to separate the channel containing the immunohistochemistry signal from the hematoxylin counterstaining channel. A support vector machine melanoma classification model was learned from a discovery melanoma patient cohort (n = 264) and subsequently validated on an independent cohort of melanoma patient tissue sample images (n = 157). CONCLUSION: Here we propose a novel method that takes advantage of an immunohistochemical marker highlighting melanocytes to fully automate the learning of a general melanoma cell classification model. The presented method can be applied to any protein of interest and thus provides a tool for quantification of immunohistochemistry-based protein expression in melanoma.
M. Sadegh
2013-04-01
In recent years, a strong debate has emerged in the hydrologic literature about how to properly treat non-traditional error residual distributions and quantify parameter and predictive uncertainty. In particular, there is strong disagreement about whether such an uncertainty framework should have its roots within a proper statistical (Bayesian) context using Markov chain Monte Carlo (MCMC) simulation techniques, or whether such a framework should be based on a quite different philosophy and implement informal likelihood functions and simplistic search methods to summarize parameter and predictive distributions. In this paper we introduce an alternative framework, called Approximate Bayesian Computation (ABC), that reconciles the differing viewpoints of formal and informal Bayesian approaches. This methodology has recently emerged in the fields of biology and population genetics and relaxes the need for an explicit likelihood function in favor of one or more summary statistics that measure the distance of each model simulation to the data. This paper is a follow-up to the recent publication of Nott et al. (2012) and further studies the theoretical and numerical equivalence of formal (DREAM) and informal (GLUE) Bayesian approaches using data from different watersheds in the United States. We demonstrate that the limits-of-acceptability approach of GLUE is a special variant of ABC in which each discharge observation of the calibration data set is used as a summary diagnostic.
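The core ABC mechanism the abstract describes, replacing an explicit likelihood with a distance between summary statistics, can be sketched with basic rejection sampling. Everything here is a toy: the Gaussian "simulator", the uniform prior, and the tolerance are illustrative choices, not the paper's hydrologic setup.

```python
import random

random.seed(0)

def simulate(theta, n=50):
    # Toy forward model standing in for a hydrologic simulator:
    # "discharge" observations drawn from Normal(theta, 1).
    return [random.gauss(theta, 1.0) for _ in range(n)]

def summary(data):
    # A single summary statistic: the sample mean.
    return sum(data) / len(data)

def abc_rejection(observed, prior_draw, n_samples=500, tol=0.2):
    """Keep prior draws whose simulated summary lies within `tol`
    of the observed summary (basic ABC rejection sampling)."""
    s_obs = summary(observed)
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_draw()
        if abs(summary(simulate(theta)) - s_obs) <= tol:
            accepted.append(theta)
    return accepted

observed = simulate(3.0)                        # pretend these are field data
posterior = abc_rejection(observed, lambda: random.uniform(0.0, 6.0))
estimate = sum(posterior) / len(posterior)      # approximate posterior mean
```

Shrinking `tol` tightens the approximation to the true posterior at the cost of a lower acceptance rate; the GLUE limits-of-acceptability variant corresponds to using each observation as its own summary with its own tolerance.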
Patankar, Neelesh A
2010-06-01
Recent experimental work has successfully revealed pressure induced transition from Cassie to Wenzel state on rough hydrophobic substrates. Formulas, based on geometric considerations and imposed pressure, have been developed as transition criteria. In the past, transition has also been considered as a process of overcoming the energy barrier between the Cassie and Wenzel states. A unified understanding of the various considerations of transition has not been apparent. To address this issue, in this work, we consolidate the transition criteria with a homogenized energy minimization approach. This approach decouples the problem of minimizing the energy to wet the rough substrate, from the energy of the macroscopic drop. It is seen that the transition from Cassie to Wenzel state, due to depinning of the liquid-air interface, emerges from the approximate energy minimization approach if the pressure-volume energy associated with the impaled liquid in the roughness is included. This transition can be viewed as a process in which the work done by the pressure force is greater than the barrier due to the surface energy associated with wetting the roughness. It is argued that another transition mechanism, due to a sagging liquid-air interface that touches the bottom of the roughness grooves, is not typically relevant if the substrate roughness is designed such that the Cassie state is at lower energy compared to the Wenzel state.
On the Markovian assumption in the excursion set approach: The approximation of Markov Velocities
Musso, Marcello
2014-01-01
The excursion set approach uses the statistics of the density field, smoothed on a wide range of scales, to gain insight into a number of interesting processes in nonlinear structure formation, such as cluster assembly, merging and clustering. The approach treats the curve defined by the overdensity fluctuation field when changing the smoothing scale as a random walk. Most implementations of the approach then assume that, at least to a first approximation, the walks have uncorrelated steps, so that the walk heights are a Markov process. This assumption is known to be inaccurate: smoothing filters that are most easily related to the physics of structure formation generically yield walks whose steps are correlated with one another. We develop models in which it is the steps, rather than the walk heights, that are a Markov process. In such models, which we call Markov Velocity processes, each step correlates only with the previous one. We show that TopHat smoothing of a power law power spectrum with index n = -2...
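A walk whose steps, rather than heights, form a Markov process can be sketched by drawing the steps from an AR(1) process: each step then correlates only with the previous one, while the heights (their cumulative sum) are not Markov. This construction is an illustrative toy, not code from the paper.

```python
import random

random.seed(1)

def markov_velocity_walk(n_steps, rho=0.5):
    """Random walk whose *steps* form a Markov (AR(1)) process,
    in the spirit of a Markov Velocity model."""
    steps = [random.gauss(0.0, 1.0)]
    for _ in range(n_steps - 1):
        # New step = rho * previous step + independent innovation,
        # scaled so the steps have unit stationary variance.
        steps.append(rho * steps[-1] + random.gauss(0.0, (1.0 - rho ** 2) ** 0.5))
    heights, h = [], 0.0
    for s in steps:
        h += s
        heights.append(h)   # walk height = cumulative sum of correlated steps
    return steps, heights

def lag1_corr(xs):
    # Sample lag-1 autocorrelation of a sequence.
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1)) / n
    return cov / var

steps, heights = markov_velocity_walk(20000, rho=0.5)
r = lag1_corr(steps)   # should be close to rho
```

Setting `rho = 0` recovers the standard uncorrelated-steps assumption in which the heights themselves are Markov.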
Worldline Variational Approximation: A New Approach to the Relativistic Binding Problem
Barro-Bergflodt, K; Stingl, M
2004-01-01
We determine the lowest bound-state pole of the density-density correlator in the scalar Wick-Cutkosky model where two equal-mass constituents interact via the exchange of mesons. This is done by employing the worldline representation of field theory together with a variational approximation as in Feynman's treatment of the polaron. Unlike traditional methods based on the Bethe-Salpeter equation, self-energy and vertex corrections are (approximately) included as are crossed diagrams. Only vacuum-polarization effects of the heavy particles are neglected. The well-known instability of the model due to self-energy effects leads to large qualitative and quantitative changes compared to traditional approaches which neglect them. We determine numerically the critical coupling constant above which no real solutions of the variational equations exist anymore and show that it is smaller than in the one-body case due to an induced instability. The width of the bound state above the critical coupling is estimated analyt...
A New Approach to Online Scheduling: Approximating the Optimal Competitive Ratio
Günther, Elisabeth; Megow, Nicole; Wiese, Andreas
2012-01-01
We propose a new approach to competitive analysis in online scheduling by introducing the novel concept of online approximation schemes. Such a scheme algorithmically constructs an online algorithm with a competitive ratio arbitrarily close to the best possible competitive ratio for any online algorithm. We study the problem of scheduling jobs online to minimize the weighted sum of completion times on parallel, related, and unrelated machines, and we derive both deterministic and randomized algorithms which are almost best possible among all online algorithms of the respective settings. We also generalize our techniques to arbitrary monomial cost functions and apply them to the makespan objective. Our method relies on an abstract characterization of online algorithms combined with various simplifications and transformations. We also contribute algorithmic means to compute the actual value of the best possible competitive ratio up to an arbitrary accuracy. This strongly contrasts all previous manually obta...
Big Data Meets Quantum Chemistry Approximations: The Δ-Machine Learning Approach.
Ramakrishnan, Raghunathan; Dral, Pavlo O; Rupp, Matthias; von Lilienfeld, O Anatole
2015-05-12
Chemically accurate and comprehensive studies of the virtual space of all possible molecules are severely limited by the computational cost of quantum chemistry. We introduce a composite strategy that adds machine learning corrections to computationally inexpensive approximate legacy quantum methods. After training, highly accurate predictions of enthalpies, free energies, entropies, and electron correlation energies are possible, for significantly larger molecular sets than used for training. For thermochemical properties of up to 16k isomers of C7H10O2 we present numerical evidence that chemical accuracy can be reached. We also predict electron correlation energy in post Hartree-Fock methods, at the computational cost of Hartree-Fock, and we establish a qualitative relationship between molecular entropy and electron correlation. The transferability of our approach is demonstrated, using semiempirical quantum chemistry and machine learning models trained on 1 and 10% of 134k organic molecules, to reproduce enthalpies of all remaining molecules at density functional theory level of accuracy.
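The Δ-machine learning strategy, a cheap baseline method plus a learned correction toward the expensive reference, can be sketched in a few lines. The "quantum methods" below are stand-in analytic functions, and the kernel-weighted average is a crude substitute for the kernel ridge regression typically used in this literature; all of it is assumption, not the paper's implementation.

```python
import math
import random

random.seed(2)

def cheap_baseline(x):
    # Stand-in for an inexpensive approximate legacy quantum method.
    return 2.0 * x

def expensive_reference(x):
    # Stand-in for the accurate but costly target level of theory.
    return 2.0 * x + 0.5 * math.sin(x)

# Training data: learn only the *difference* (the delta) between the two levels.
train_x = [random.uniform(0.0, 6.0) for _ in range(200)]
train_delta = [expensive_reference(x) - cheap_baseline(x) for x in train_x]

def delta_model(x, xs, ys, width=0.3):
    # Kernel-weighted average of training deltas (a simple nonparametric regressor).
    ws = [math.exp(-((x - xi) / width) ** 2) for xi in xs]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

def delta_ml_predict(x):
    # Delta-ML prediction: cheap baseline plus the learned correction.
    return cheap_baseline(x) + delta_model(x, train_x, train_delta)

x0 = 1.5
err_baseline = abs(cheap_baseline(x0) - expensive_reference(x0))
err_delta_ml = abs(delta_ml_predict(x0) - expensive_reference(x0))
```

The point of the construction is that the delta is usually a much smoother, easier target to learn than the full property, so modest training sets suffice to reach near-reference accuracy at baseline cost.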
Mosaffa Amirhossein
2013-01-01
Results are reported of an investigation of the solidification of a phase change material (PCM) in a cylindrical shell thermal energy storage with radial internal fins. An approximate analytical solution is presented for two cases. In case 1, the inner wall is kept at a constant temperature and, in case 2, a constant heat flux is imposed on the inner wall. In both cases, the outer wall is insulated. The results are compared to those for a numerical approach based on an enthalpy method. The results show that the analytical model satisfactorily estimates the solid-liquid interface. In addition, a comparative study is reported of the solidified fraction of encapsulated PCM for different geometric configurations of finned storage having the same volume and surface area of heat transfer.
Big Data meets Quantum Chemistry Approximations: The $\\Delta$-Machine Learning Approach
Ramakrishnan, Raghunathan; Rupp, Matthias; von Lilienfeld, O Anatole
2015-01-01
Chemically accurate and comprehensive studies of the virtual space of all possible molecules are severely limited by the computational cost of quantum chemistry. We introduce a composite strategy that adds machine learning corrections to computationally inexpensive approximate legacy quantum methods. After training, highly accurate predictions of enthalpies, free energies, entropies, and electron correlation energies are possible, for significantly larger molecular sets than used for training. For thermochemical properties of up to 16k constitutional isomers of C$_7$H$_{10}$O$_2$ we present numerical evidence that chemical accuracy can be reached. We also predict electron correlation energy in post Hartree-Fock methods, at the computational cost of Hartree-Fock, and we establish a qualitative relationship between molecular entropy and electron correlation. The transferability of our approach is demonstrated, using semi-empirical quantum chemistry and machine learning models trained on 1 and 10\\% of 134k organ...
Aggerbeck Lawrence P
2008-02-01
Background: Cholesterol homeostasis and xenobiotic metabolism are complex biological processes which are difficult to study with traditional methods. Deciphering the complex regulation and response of these two processes to different factors is crucial also for understanding of disease development. Systems biology tools such as microarrays can contribute importantly to this knowledge and can also uncover novel interactions between the two processes. Results: We have developed a low-density Sterolgene v0 cDNA microarray dedicated to studies of cholesterol homeostasis and drug metabolism in the mouse. To illustrate its performance, we have analyzed mouse liver samples from studies focused on regulation of cholesterol homeostasis and drug metabolism by diet, drugs and inflammation. We observed down-regulation of cholesterol biosynthesis during fasting and high-cholesterol diet and subsequent up-regulation by inflammation. Drug metabolism was down-regulated by fasting and inflammation, but up-regulated by phenobarbital treatment and high-cholesterol diet. Additionally, the performance of the Sterolgene v0 was compared to two commercial high-density microarray platforms: the Agilent cDNA (G4104A) and the Affymetrix MOE430A GeneChip. We hybridized identical RNA samples to the commercial microarrays and showed that the performance of Sterolgene is comparable to the commercial arrays in terms of detection of changes in cholesterol homeostasis and drug metabolism. Conclusion: Using the Sterolgene v0 microarray we were able to detect important changes in cholesterol homeostasis and drug metabolism caused by diet, drugs and inflammation. Together with its next generations, the Sterolgene microarrays represent original and dedicated tools enabling focused and cost-effective studies of cholesterol homeostasis and drug metabolism. These microarrays have the potential of being further developed into screening or diagnostic tools.
Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.
2014-07-01
To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM (dynamic global vegetation model) LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator) to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by approximately a factor of 8, we were able to detect the shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations regarding extreme climatic events. It made it possible for the first time to obtain area-wide, detailed high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.
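The GAPPARD idea, deriving mean outputs by weighting a disturbance-free run with a patch-age distribution set by the disturbance frequency, can be sketched as follows. The exponential age distribution and the toy biomass curve are assumptions made for illustration; they are not the published implementation.

```python
import math

def undisturbed_biomass(age):
    # Hypothetical disturbance-free biomass trajectory (kg C m^-2), standing in
    # for a single LPJ-GUESS run without stand-replacing disturbances.
    return 20.0 * (1.0 - math.exp(-age / 60.0))

def gappard_mean(trajectory, disturbance_freq, max_age=1000):
    """Average an undisturbed trajectory over a patch-age distribution.
    An exponential age distribution p(a) ~ f * exp(-f * a) is assumed here,
    the stationary distribution for a constant disturbance frequency f."""
    f = disturbance_freq
    total = norm = 0.0
    for age in range(max_age):
        w = f * math.exp(-f * age)   # probability weight of patches of this age
        total += w * trajectory(age)
        norm += w
    return total / norm

rare = gappard_mean(undisturbed_biomass, disturbance_freq=1 / 500)     # old landscape
frequent = gappard_mean(undisturbed_biomass, disturbance_freq=1 / 50)  # young landscape
```

Because one undisturbed run serves every disturbance frequency, the costly patch-by-patch simulation of stochastic disturbances is avoided, which is where the reported speed-up comes from.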
Jennings, Elise; Wolf, Rachel; Sako, Masao
2016-11-09
Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the `Tripp' and `Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of $\\sim$1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying $\\Omega_m, w_0, \\alpha$ and $\\beta$ and a magnitude offset parameter, with no systematics we obtain $\\Delta(w_0) = w_0^{\\rm true} - w_0^{\\rm best \\, fit} = -0.036\\pm0.109$ (a $\\sim11$% 1$\\sigma$ uncertainty) using the Tripp metric and $\\Delta(w_0) = -0.055\\pm0.068$ (a $\\sim7$% 1$\\sigma$ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain $\\Delta(w_0) = -0.062\\pm0.132$ (a $\\sim14$% 1$\\sigma$ uncertainty) using the Tripp metric. Overall we find a $17$% increase in the uncertainty on $w_0$ with systematics compared to without. We contrast this with a MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
An Optimized Approach for Extracting Approximate Functional Dependencies in XML Documents
Anonymous
2006-01-01
In this paper, the definition of approximate XFDs based on value equality is proposed. Two metrics, support and strength, are presented for measuring the degree of approximate XFD. A basic algorithm is designed for extracting minimal set of approximate XFDs, and then two optimized strategies are proposed to improve the performance. Finally, the experimental results show that the optimized algorithms are correct and effective.
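A degree-of-satisfaction metric for an approximate functional dependency can be sketched as follows. The paper's exact definitions of support and strength are not reproduced here; this shows one common way to score how nearly a dependency X → Y holds: the fraction of records that need not be removed for it to hold exactly.

```python
from collections import Counter, defaultdict

def fd_strength(records, lhs, rhs):
    """Fraction of records kept when, for each lhs value, only the most common
    rhs value is retained -- a standard score for approximate dependencies
    (the paper's support/strength metrics may be defined differently)."""
    groups = defaultdict(Counter)
    for rec in records:
        groups[rec[lhs]][rec[rhs]] += 1
    # For each lhs group, the majority rhs value survives; the rest are violations.
    kept = sum(c.most_common(1)[0][1] for c in groups.values())
    return kept / len(records)

# Hypothetical flattened records (in the XML setting these would be element values).
records = [
    {"city": "Oslo",   "country": "Norway"},
    {"city": "Oslo",   "country": "Norway"},
    {"city": "Oslo",   "country": "Sweden"},   # one violating record
    {"city": "Bergen", "country": "Norway"},
]
s = fd_strength(records, "city", "country")    # 3 of 4 records are consistent
```

A dependency is then reported as approximate when this score exceeds a user-chosen threshold, which is the kind of cutoff an extraction algorithm iterates over candidate lhs/rhs pairs to test.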
Deblauwe, I; de Witte, J C; de Deken, G; de Deken, R; Madder, M; van Erk, S; Hoza, F A; Lathouwers, D; Geysen, D
2012-03-01
Culicoides species of the Obsoletus group (Diptera: Ceratopogonidae) are potential vectors of bluetongue virus serotype 8 (BTV 8), which was introduced into central Western Europe in 2006. Correct morphological species identification of Obsoletus group females is especially difficult and molecular identification is the method of choice. In this study we present a new molecular tool based on probe hybridization using a DNA microarray format to identify Culicoides species of the Obsoletus group. The internal transcribed spacer 1 (ITS1) gene sequences of 55 Culicoides belonging to 13 different species were determined and used, together with 19 Culicoides ITS1 sequences sourced from GenBank, to design species-specific probes for the microarray test. This test was evaluated using the amplified ITS1 sequences of another 85 Culicoides specimens, belonging to 11 species. The microarray test successfully identified all samples (100%) of the Obsoletus group, identifying each specimen to species level within the group. This test has several advantages over existing polymerase chain reaction (PCR)-based molecular tools, including possible capability for parallel analysis of many species, high sensitivity and specificity, and low background signal noise. Hand-spotting of the microarray slide and the use of detection chemistry make this alternative technique affordable and feasible for any diagnostic laboratory with PCR facilities.
An approximation of the Cioslowski-Mixon bond order indexes using the AlteQ approach
Salmina, Elena; Grishina, Maria A.; Potemkin, Vladimir A.
2013-09-01
Fast and reliable prediction of bond orders in organic systems based upon experimentally measured quantities can be performed using electron density features at bond critical points (J Am Chem Soc 105:5061-5068, 1983; J Phys Org Chem 16:133-141, 2003; Acta Cryst B 61:418-428, 2005; Acta Cryst B 63:142-150, 2007). These features are outcomes of low-temperature high-resolution X-ray diffraction experiments. However, the time-consuming procedure of obtaining these quantities makes the prediction of limited use. In the present work we have employed an empirical approach, AlteQ (J Comput Aided Mol Des 22:489-505, 2008), for evaluation of electron density properties. This approach uses a simple exponential function, derived from a comparison of electron density gained from high-resolution X-ray crystallography with the distance to the atomic nucleus, which allows the density distribution to be calculated in a time-saving manner and gives results very close to experimental ones. As input data AlteQ accepts atomic coordinates of isolated molecules or molecular ensembles (for instance, protein-protein complexes or complexes of small molecules with proteins, etc.). Using AlteQ characteristics we have developed regression models predicting Cioslowski-Mixon bond order (CMBO) indexes (J Am Chem Soc 113(42):4142-4145, 1991). The models are characterized by high correlation coefficients lying in the range from 0.844 to 0.988, depending on the type of covalent bond, thereby providing a bonding quantification that is in reasonable agreement with that obtained by orbital theory. Comparative analysis of CMBOs approximated using topological properties of AlteQ and experimental electron densities has shown that the models can be used for fast determination of bond orders directly from X-ray crystallography data and confirmed that AlteQ characteristics can replace experimental ones with a satisfactory degree of accuracy.
Cristian Rodriguez Rivero
2014-07-01
The annual estimate of the amount of water available to the agricultural sector has become vital in places where rainfall is scarce, as is the case in northwestern Argentina. This work proposes to model and simulate monthly rainfall time series from one geographical location of Catamarca, Valle El Viejo Portezuelo. Time series prediction here means mathematical and computational modelling of the monthly cumulative rainfall series, whose stochastic output is approximated by neural networks under a Bayesian approach. We propose to use an algorithm based on artificial neural networks (ANNs) using Bayesian inference. The prediction is evaluated on 20% of the available data, which span 2000 to 2010. A new analysis for the modelling, simulation and computational prediction of cumulative rainfall from one geographical location is presented. The only input data used are the historical time series of daily measurements in mmH2O. Preliminary results of the annual forecast in mmH2O with a prediction horizon of one and a half years (18 months) are presented. The methodology employs artificial neural network tools, statistical analysis and computation to complete missing information and to characterize the qualitative and quantitative behavior. We also show some preliminary results for different prediction horizons of the proposed filter and its comparison with the performance of the Gaussian process filter used in the literature.
All-coupling polaron optical response: Analytic approaches beyond the adiabatic approximation
Klimin, S. N.; Tempere, J.; Devreese, J. T.
2016-09-01
In the present work, the problem of an all-coupling analytic description for the optical conductivity of the Fröhlich polaron is treated, with the goal being to bridge the gap in the validity range that exists between two complementary methods: on the one hand, the memory-function formalism and, on the other hand, the strong-coupling expansion based on the Franck-Condon picture for the polaron response. At intermediate coupling, both methods were found to fail as they do not reproduce diagrammatic quantum Monte Carlo results. To resolve this, we modify the memory-function formalism with respect to the Feynman-Hellwarth-Iddings-Platzman approach in order to take into account a nonquadratic interaction in a model system for the polaron. The strong-coupling expansion is extended beyond the adiabatic approximation by including in the treatment nonadiabatic transitions between excited polaron states. The polaron optical conductivity that we obtain at T =0 by combining the two extended methods agrees well, both qualitatively and quantitatively, with the diagrammatic quantum Monte Carlo results in the whole available range of the electron-phonon coupling strength.
A new embedded-atom method approach based on the pth moment approximation
Wang, Kun; Zhu, Wenjun; Xiao, Shifang; Chen, Jun; Hu, Wangyu
2016-12-01
Large scale atomistic simulations with suitable interatomic potentials are widely employed by scientists and engineers of different areas. The quick generation of high-quality interatomic potentials is urgently needed, and largely relies on the development of potential construction methods and algorithms. Many interatomic potential models have been proposed and parameterized with various methods, such as the analytic method, the force-matching approach and the multi-object optimization method, in order to make the potentials more transferable. Without apparently lowering the precision for describing the target system, potentials with fewer fitting parameters (FPs) are somewhat more physically reasonable. Thus, studying methods to reduce the FP number is helpful in understanding the underlying physics of simulated systems and improving the precision of potential models. In this work, we propose an embedded-atom method (EAM) potential model consisting of a new many-body term based on the pth moment approximation to tight binding theory and the general transformation invariance of EAM potentials, and an energy modification term represented by pairwise interactions. The pairwise interactions are evaluated by an analytic-numerical scheme without the need to know their functional forms a priori. By constructing three potentials of aluminum and comparing them with a commonly used EAM potential model, several notable results are obtained. First, without losing the precision of potentials, our potential of aluminum has fewer potential parameters and a smaller cutoff distance when compared with some commonly used potentials of aluminum. This is because several physical quantities, usually serving as target quantities to match in other potentials, seem to be uniquely dependent on quantities contained in our basic reference database within the new potential model. Second, a key empirical parameter in the embedding term of the commonly used EAM model is
Zaffalon Valerio
2011-06-01
Abstract Background Field observations and a few physiological studies have demonstrated that peach embryogenesis and fruit development are tightly coupled. In fact, attempts to stimulate parthenocarpic fruit development by means of external tools have failed. Moreover, physiological disturbances during early embryo development lead to seed abortion and fruitlet abscission. Later in embryo development, the interactions between seed and fruit development become less strict. As there is limited genetic and molecular information about seed-pericarp cross-talk and development in peach, a massive gene approach based on the use of the μPEACH 1.0 array platform and quantitative real-time RT-PCR (qRT-PCR) was used to study this process. Results A comparative analysis of the transcription profiles conducted in seed and mesocarp (cv Fantasia) throughout different developmental stages (S1, S2, S3 and S4) showed that 455 genes are differentially expressed in seed and fruit. Among the differentially expressed genes, some were validated as markers in two subsequent years and in three different genotypes. Seed markers were an LTP1 (lipid transfer protein), a PR (pathogenesis-related) protein, a prunin and a LEA (Late Embryogenesis Abundant) protein, for S1, S2, S3 and S4, respectively. Mesocarp markers were a RD22-like protein, a serine carboxypeptidase, a senescence-related protein and an Aux/IAA, for S1, S2, S3 and S4, respectively. The microarray data, analyzed using the HORMONOMETER platform, allowed the identification of hormone-responsive genes, some of them putatively involved in seed-pericarp crosstalk. Results indicated that auxin, cytokinins, and gibberellins are good candidates, acting either directly (auxin) or indirectly as signals during early development, when the cross-talk is more active and vital for fruit set, whereas abscisic acid and ethylene may be involved later on. Conclusions In this research, genes were identified marking different phases of
Lan Chung-Yu
2008-09-01
Abstract Background Inflammation is a hallmark of many human diseases. Elucidating the mechanisms underlying systemic inflammation has long been an important topic in basic and clinical research. When the primary pathogenetic events remain unclear because of their immense complexity, construction and analysis of the gene regulatory network of inflammation at times becomes the best way to understand the detrimental effects of disease. However, it is difficult to recognize and evaluate relevant biological processes from the huge quantities of experimental data. It is hence appealing to find an algorithm that can generate a gene regulatory network of systemic inflammation from high-throughput genomic studies of human diseases. Such a network will be essential for extracting valuable information from the complex and chaotic network under diseased conditions. Results In this study, we construct a gene regulatory network of inflammation using data extracted from the Ensembl and JASPAR databases. We also integrate and apply a number of systematic algorithms, such as cross-correlation thresholding, maximum likelihood estimation and the Akaike Information Criterion (AIC), on time-lapse microarray data to refine the genome-wide transcriptional regulatory network in response to bacterial endotoxins, in the context of dynamically activated genes, which are regulated by transcription factors (TFs) such as NF-κB. This systematic approach is used to investigate the stochastic interactions represented by the dynamic leukocyte gene expression profiles of human subjects exposed to an inflammatory stimulus (bacterial endotoxin). Based on the kinetic parameters of the dynamic gene regulatory network, we identify important properties (such as susceptibility to infection) of the immune system, which may be useful for translational research. Finally, robustness of the inflammatory gene network is also inferred by analyzing the hubs and "weak ties" structures of the gene network
Garcia-Mas Jordi
2009-10-01
Abstract Background Melon (Cucumis melo) is a horticultural species of significant nutritional value, which belongs to the Cucurbitaceae family, whose economic importance is second only to that of the Solanaceae. Its small genome of approx. 450 Mb, coupled with its high genetic diversity, has prompted the development of genetic tools in the last decade. However, the scarcity of transcriptomic approaches in melon highlights the importance of designing new tools for high-throughput analysis of gene expression. Results We report the construction of an oligo-based microarray using a total of 17,510 unigenes derived from 33,418 high-quality melon ESTs. This chip is particularly enriched with genes that are expressed in fruit and during interaction with pathogens. Hybridizations for three independent experiments allowed the characterization of global gene expression profiles during fruit ripening, as well as in response to viral and fungal infections in plant cotyledons and roots, respectively. Microarray construction, statistical analyses and validation, together with functional-enrichment analysis, are presented in this study. Conclusion The platform validation and enrichment analyses shown in our study indicate that this oligo-based microarray is amenable for future genetic and functional genomic studies of a wide range of experimental conditions in melon.
A numerical approach to the approximate and the exact minimum rank of a covariance matrix
ten Berge, Jos M.F.; Kiers, Henk A.L.
A concept of approximate minimum rank for a covariance matrix is defined, which contains the (exact) minimum rank as a special case. A computational procedure to evaluate the approximate minimum rank is offered. The procedure yields those proper communalities for which the unexplained common
A conceptual approach to approximate tree root architecture in infinite slope models
Schmaltz, Elmar; Glade, Thomas
2016-04-01
Vegetation-related properties - particularly tree root distribution and the coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear difficult to reproduce reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties such as species and age, and extrinsic factors like topography, availability of nutrients, climate and soil type. These parameters control four main aspects of tree root architecture: 1) type of rooting; 2) maximum growing distance from the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems and the connected hydrological and mechanical attributes sufficiently in a 3-dimensional slope stability model. Here, a spatio-dynamic vegetation module should cope with the demands of performance, computation time and significance. However, in this presentation we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that is dependent on age and species. We classified three main types of tree root systems and reproduced the species-age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope setting. Thus, two solids in a Euclidean space were distinguished to represent the three root systems: i) cylinders with radius r and height h, whilst the dimension of the latter defines the shape of a taproot system or a shallow-root system respectively; ii) elliptic
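The geometric-solid idea above is easy to make concrete. The sketch below encodes the cylinder mentioned in the abstract together with a half-spheroid as an elliptic alternative; the values of r and h are purely illustrative assumptions.

```python
import math

# Two geometric solids used as hypothetical proxies for the rooted soil
# volume around a stem at the origin: a cylinder (taproot or shallow-root
# systems, depending on h) and a half-spheroid as an elliptic alternative.
# The r and h values are purely illustrative.

def cylinder_volume(r, h):
    return math.pi * r * r * h

def half_spheroid_volume(r, h):       # half of a spheroid with semi-axes r, r, h
    return (2.0 / 3.0) * math.pi * r * r * h

def inside_cylinder(x, y, z, r, h):   # point test for the cylindrical root zone
    return x * x + y * y <= r * r and 0.0 <= z <= h

r, h = 3.0, 1.5                       # assumed rooting radius and depth (m)
print(round(cylinder_volume(r, h), 2))        # -> 42.41
print(round(half_spheroid_volume(r, h), 2))   # -> 28.27
```

A point test like `inside_cylinder` is what a slope stability model would call when deciding whether a soil column intersects a rooted zone.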
On the choice of phase in the CS approximation: Integral equation approach
Snider, R. F.
1986-10-01
As usually presented, the centrifugal sudden approximation examines how each individual angular momentum component of the scattering wave function and scattering matrix element is to be estimated. Here the three-dimensional structure of the scattering wave function is emphasized and any decisions about associating exact and approximate partial waves with correcting phase factors are put off till the implications on the three-dimensional wave function have been made clear. It is found that no correcting phase factor needs to be applied to the CS S matrix when estimating the exact S matrix, while at the same time, the asymptotic behavior of the exact and approximating wave functions has the same form. Direct and operator methods of estimating the transition matrix confirm this conclusion. A suggested modification of how the sudden approximations are to be implemented allows the weak potential limit of the resulting scattering amplitudes to reduce to the Born approximation.
2008-01-01
Abstract Background Cholesterol homeostasis and xenobiotic metabolism are complex biological processes, which are difficult to study with traditional methods. Deciphering the complex regulation and response of these two processes to different factors is crucial also for understanding disease development. Systems biology tools such as microarrays can contribute importantly to this knowledge and can also uncover novel interactions between the two processes. Results We have developed a low densit...
Lessa-Aquino, Carolina; Rodrigues, Camila Borges; Ribeiro, Guilherme S. et al.
2013-01-01
Leptospirosis is a widespread zoonotic disease worldwide. The lack of an adequate laboratory test is a major barrier for diagnosis, especially during the early stages of illness, when antibiotic therapy is most effective. Therefore, there is a critical need for an efficient diagnostic test for this life-threatening disease. Methodology: In order to identify new targets that could be used as diagnostic markers for leptospirosis, we constructed a protein microarray chip comprising 61% of Le...
Aquino, Carolina Lessa; Rodrigues, Camila Borges; Pablo, Jozelyn; Sasaki, Rie; Jasinskas, Algis; Guilherme S Ribeiro; Vigil, Adam
2013-01-01
Background: Leptospirosis is a widespread zoonotic disease worldwide. The lack of an adequate laboratory test is a major barrier for diagnosis, especially during the early stages of illness, when antibiotic therapy is most effective. Therefore, there is a critical need for an efficient diagnostic test for this life-threatening disease. Methodology: In order to identify new targets that could be used as diagnostic markers for leptospirosis, we constructed a protein microarray chip ...
Advanced spot quality analysis in two-colour microarray experiments
Vetter Guillaume
2008-09-01
Abstract Background Image analysis of microarrays and, in particular, spot quantification and spot quality control, is one of the most important steps in statistical analysis of microarray data. Recent methods of spot quality control are still at an early stage of development, often leading to underestimation of true positive microarray features and, consequently, to loss of important biological information. Therefore, improving and standardizing the statistical approaches to spot quality control is essential to facilitate the overall analysis of microarray data and subsequent extraction of biological information. Findings We evaluated the performance of two image analysis packages, MAIA and GenePix (GP), using two complementary experimental approaches with a focus on the statistical analysis of spot quality factors. First, we developed control microarrays with a priori known fluorescence ratios to verify the accuracy and precision of the ratio estimation of signal intensities. Next, we developed advanced semi-automatic protocols of spot quality evaluation in MAIA and GP and compared their performance with the available facilities for quantitative spot filtering in GP. We evaluated these algorithms for standardised spot quality analysis in a whole-genome microarray experiment assessing well-characterised transcriptional modifications induced by the transcription regulator SNAI1. Using a set of RT-PCR- or qRT-PCR-validated microarray data, we found that the semi-automatic protocol of spot quality control we developed with MAIA allowed recovering approximately 13% more spots and 38% more differentially expressed genes (at FDR = 5%) than GP with default spot filtering conditions. Conclusion Careful control of spot quality characteristics with advanced spot quality evaluation can significantly increase the amount of confident and accurate data, resulting in more meaningful biological conclusions.
Figari, Francesco; Iacovou, Maria; Skew, Alexandra J.; Sutherland, Holly
2012-01-01
In this paper, we evaluate income distributions in four European countries (Austria, Italy, Spain and Hungary) using two complementary approaches: a standard approach based on reported incomes in survey data, and a microsimulation approach, where taxes and benefits are simulated. These two approaches may be expected to generate slightly different…
Sango, Aaron; McCarter, Yvette S; Johnson, Donald; Ferreira, Jason; Guzman, Nilmarie; Jankowski, Christopher A
2013-12-01
Enterococci are a major cause of bloodstream infections in hospitalized patients and have limited antimicrobial treatment options due to their many resistance mechanisms. Molecular technologies have significantly shortened the time to enterococcal isolate identification compared with conventional methods. We evaluated the impact of rapid organism identification and resistance detection with the Verigene Gram-positive blood culture microarray assay on clinical and economic outcomes for patients with enterococcal bacteremia. A single-center preintervention/postintervention quasi-experimental study compared inpatients with enterococcal bacteremia from 1 February 2012 to 9 September 2012 (preintervention period) and 10 September 2012 to 28 February 2013 (postintervention period). An infectious disease and/or critical care pharmacist was contacted with the microarray assay results, and effective antibiotics were recommended. The clinical and economic outcomes for 74 patients were assessed. The mean time to appropriate antimicrobial therapy was 23.4 h longer in the preintervention group than in the postintervention group (P = 0.0054). A nonsignificant decrease in the mean time to appropriate antimicrobial therapy was seen for patients infected with vancomycin-susceptible Enterococcus isolates (P = 0.1145). For patients with vancomycin-resistant Enterococcus bacteremia, the mean time to appropriate antimicrobial therapy was 31.1 h longer in the preintervention group than in the postintervention group (P < 0.0001). In the postintervention group, the hospital length of stay was significantly shorter, by 21.7 days (P = 0.0484), and mean hospital costs were $60,729 lower (P = 0.02), than in the preintervention group. The rates of attributed deaths in the two groups were not statistically different. Microarray technology, supported by pharmacy and microbiology departments, can decrease the time to appropriate antimicrobial therapy, the hospital length of stay, and health care costs.
Two Approaches to the Calculation of Approximate Symmetry of Ostrovsky Equation with Small Parameter
Mahdavi, Abolhassan; Nadjafikhah, Mehdi; Toomanian, Megerdich
2015-12-01
In this paper, two methods of approximate symmetries for partial differential equations with a small parameter are applied to a perturbed nonlinear Ostrovsky equation. To compute the first-order approximate symmetry, we apply two methods: one, proposed by Baikov et al., expands the infinitesimal generator in a perturbation series, whereas the other, due to Fushchich and Shtelen [3], is based on expanding the dependent variables in perturbation series. In particular, an optimal system of one-dimensional subalgebras is constructed and some invariant solutions corresponding to the resulting symmetries are obtained.
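The Baikov-type expansion mentioned above can be summarized schematically, in textbook notation for a perturbed equation F_0 + εF_1 = 0 (auxiliary multiplier terms suppressed; this is the generic scheme, not reproduced from the paper):

```latex
% Generator expanded in the small parameter:
X = X^{(0)} + \varepsilon X^{(1)} .
% Order \varepsilon^0: X^{(0)} is an exact symmetry of the unperturbed equation,
\left. X^{(0)} F_0 \right|_{F_0 = 0} = 0 ,
% Order \varepsilon^1: the first-order correction is determined (schematically) by
\left. \bigl( X^{(1)} F_0 + X^{(0)} F_1 \bigr) \right|_{F_0 = 0} = 0 .
```

The Fushchich-Shtelen alternative instead expands the dependent variables, u = u_0 + εu_1 + …, and splits the equation itself by orders of ε.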
A New Distribution Family for Microarray Data
Diana Mabel Kelmansky
2017-02-01
The traditional approach with microarray data has been to apply transformations that approximately normalize them, with the drawback of losing the original scale. The alternative standpoint taken here is to search for models that fit the data, characterized by the presence of negative values, while preserving their scale; one advantage of this strategy is that it facilitates a direct interpretation of the results. A new family of distributions named gpower-normal, indexed by p∈R, is introduced, and it is proven that these variables become normal or truncated normal when a suitable gpower transformation is applied. Expressions are given for moments and quantiles in terms of the truncated normal density. This new family can be used to model asymmetric data that include non-positive values, as required for microarray analysis. Moreover, it has been proven that the gpower-normal family is a special case of pseudo-dispersion models, inheriting all the good properties of these models, such as asymptotic normality for small variances. A combined maximum likelihood method is proposed to estimate the model parameters, and it is applied to microarray and contamination data. R code is available from the authors upon request.
Popov, Vladislav; Lavrinenko, Andrei; Novitsky, Andrey
2016-01-01
that the zeroth-, first-, and second-order approximations of the operator effective medium theory correspond to electric dipoles, chirality, and magnetic dipoles plus electric quadrupoles, respectively. We discover that the spatially dispersive bianisotropic effective medium obtained in the second...... of metamaterials and subwavelength nanophotonics....
Rodriguez-Lanetty, Mauricio; Phillips, Wendy S; Dove, Sophie; Hoegh-Guldberg, Ove; Weis, Virginia M
2008-04-24
Research in gene function using Quantitative Reverse Transcription PCR (q-RT-PCR) and microarray approaches is emerging and just about to explode in the field of coral and cnidarian biology. These approaches show great potential to significantly advance our understanding of how corals respond to abiotic and biotic stresses, and how host cnidarian/dinoflagellate symbioses are maintained and regulated. With these genomic advances, however, new analytical challenges are also emerging, such as the normalization of gene expression data derived from q-RT-PCR. In this study, an effective analytical method is introduced to identify candidate housekeeping genes (HKGs) from a sea anemone (Anthopleura elegantissima) cDNA microarray platform that can be used as internal control genes to normalize q-RT-PCR gene expression data. It is shown that the identified HKGs were stable among the experimental conditions tested in this study. The three most stable genes identified, in terms of gene expression, were beta-actin, ribosomal protein L12, and a Poly(A) binding protein. The application of these HKGs in other cnidarian systems is further discussed.
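A stability ranking of candidate housekeeping genes can be sketched very simply, here using the standard deviation of log2 expression across conditions as a crude stand-in for dedicated stability measures (e.g. geNorm's M value). The gene names follow the abstract; all expression values are invented.

```python
import statistics

# Rank candidate housekeeping genes by a crude stability score: the standard
# deviation of log2 expression across conditions.  This is a stand-in for
# dedicated measures such as geNorm's M value; all values are invented.
expression = {
    "beta-actin": [10.1, 10.0, 10.2, 10.1],
    "RPL12":      [8.5, 8.7, 8.3, 8.5],
    "PABP":       [7.9, 8.3, 8.0, 8.2],
    "HSP70":      [6.0, 9.5, 7.2, 11.0],   # stress-responsive: unstable
}
ranked = sorted(expression, key=lambda g: statistics.stdev(expression[g]))
print(ranked)   # most stable first; HSP70 ranks last
```

A gene like the hypothetical HSP70 row, which swings with the treatment, is exactly what such a screen is meant to exclude from the internal-control set.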
Ishak Altun
2016-01-01
We provide sufficient conditions for the existence of a unique common fixed point for a pair of mappings T,S:X→X, where X is a nonempty set endowed with a certain metric. Moreover, a numerical algorithm is presented in order to approximate such a solution. Our approach differs from the methods usually employed in the literature.
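A numerical algorithm for approximating a fixed point can be as simple as Picard iteration, x_{n+1} = T(x_n), which converges when T is a contraction. The example below (T = cos, whose unique real fixed point is the Dottie number) is purely illustrative and is not the specific algorithm or metric space of the paper.

```python
import math

# Picard (Banach) iteration: x_{n+1} = T(x_n), converging to the unique
# fixed point of a contraction T.  Illustrative example: T = cos, whose
# unique real fixed point is the Dottie number (~0.739085).

def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

x_star = fixed_point(math.cos, 1.0)
print(round(x_star, 6))   # -> 0.739085
```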
Two-layer interfacial flows beyond the Boussinesq approximation: a Hamiltonian approach
Camassa, R.; Falqui, G.; Ortenzi, G.
2017-02-01
The theory of integrable systems of Hamiltonian PDEs and their near-integrable deformations is used to study evolution equations resulting from vertical-averages of the Euler system for two-layer stratified flows in an infinite two-dimensional channel. The Hamiltonian structure of the averaged equations is obtained directly from that of the Euler equations through the process of Hamiltonian reduction. Long-wave asymptotics together with the Boussinesq approximation of neglecting the fluids’ inertia is then applied to reduce the leading order vertically averaged equations to the shallow-water Airy system, albeit in a non-trivial way. The full non-Boussinesq system for the dispersionless limit can then be viewed as a deformation of this well known equation. In a perturbative study of this deformation, a family of approximate constants of the motion are explicitly constructed and used to find local solutions of the evolution equations by means of hodograph-like formulae.
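For reference, the shallow-water Airy system that the averaged equations reduce to reads, in one standard notation (η the layer thickness, u the layer-mean velocity, non-dimensional units):

```latex
\eta_t + (\eta u)_x = 0, \qquad u_t + u\,u_x + \eta_x = 0 .
```

The non-Boussinesq corrections discussed in the abstract then enter as deformation terms added to this dispersionless system.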
New approach for alpha decay half-lives of superheavy nuclei and applicability of WKB approximation
Dong, Jianmin; Zuo, Wei; Scheid, Werner
2011-01-01
The alpha decay half-lives of recently synthesized superheavy nuclei (SHN) are calculated by applying a new approach which estimates them with the help of their neighbors based on some simple formulas. The estimated half-life values are in very good agreement with the experimental ones, indicating the reliability of the experimental observations and measurements to a large extent as well as the predictive power of our approach. The second part of this work is to test the applicability of the ...
Integrated Amplification Microarrays for Infectious Disease Diagnostics
Darrell P. Chandler
2012-11-01
This overview describes microarray-based tests that combine solution-phase amplification chemistry and microarray hybridization within a single microfluidic chamber. The integrated biochemical approach improves microarray workflow for diagnostic applications by reducing the number of steps and minimizing the potential for sample or amplicon cross-contamination. Examples described herein illustrate a basic, integrated approach for DNA and RNA genomes, and a simple consumable architecture for incorporating wash steps while retaining an entirely closed system. It is anticipated that integrated microarray biochemistry will provide an opportunity to significantly reduce the complexity and cost of microarray consumables, equipment, and workflow, which in turn will enable a broader spectrum of users to exploit the intrinsic multiplexing power of microarrays for infectious disease diagnostics.
Chou, Chia-Chun
2017-02-01
The Schrödinger-Langevin equation is approximately solved by propagating individual quantum trajectories for barrier transmission problems. Equations of motion are derived through use of the derivative propagation method, which leads to a hierarchy of coupled differential equations for the amplitude of the wave function and the spatial derivatives of the complex action along each trajectory. Computational results are presented for a one-dimensional Eckart barrier and a two-dimensional system involving either a thick or thin Eckart barrier along the reaction coordinate coupled to a harmonic oscillator. Frictional effects on the trajectory, the transmitted wave packet, and the transmission probability are analyzed.
Evans, Jason; Sullivan, Jack
2011-01-01
A priori selection of models for use in phylogeny estimation from molecular sequence data is increasingly important as the number and complexity of available models increases. The Bayesian information criterion (BIC) and the derivative decision-theoretic (DT) approaches rely on a conservative approximation to estimate the posterior probability of a given model. Here, we extended the DT method by using reversible jump Markov chain Monte Carlo approaches to directly estimate model probabilities for an extended candidate pool of all 406 special cases of the general time reversible + Γ family. We analyzed 250 diverse data sets in order to evaluate the effectiveness of the BIC approximation for model selection under the BIC and DT approaches. Model choice under DT differed between the BIC approximation and direct estimation methods for 45% of the data sets (113/250), and differing model choice resulted in significantly different sets of trees in the posterior distributions for 26% of the data sets (64/250). The model with the lowest BIC score differed from the model with the highest posterior probability in 30% of the data sets (76/250). When the data indicate a clear model preference, the BIC approximation works well enough to result in the same model selection as with directly estimated model probabilities, but a substantial proportion of biological data sets lack this characteristic, which leads to selection of underparametrized models.
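The BIC used above has the familiar form BIC = k ln n − 2 ln L̂, with lower values preferred. A minimal sketch of a BIC-based model choice follows; the log-likelihoods and the simplified parameter counts are hypothetical, not values from the study.

```python
import math

# BIC = k*ln(n) - 2*ln(L_hat): k free parameters, n sites, L_hat the
# maximized likelihood; lower is better.  The log-likelihoods and parameter
# counts below are hypothetical, for illustration only.

def bic(log_likelihood, k, n):
    return k * math.log(n) - 2.0 * log_likelihood

n = 1000                                  # assumed alignment length
candidates = {
    "JC69":  bic(-5120.0, k=0, n=n),      # no free substitution parameters
    "GTR+G": bic(-5050.0, k=9, n=n),      # simplified count, illustration only
}
best = min(candidates, key=candidates.get)
print(best)   # -> GTR+G (the better fit outweighs the parameter penalty)
```

The paper's point is that ranking by this score and ranking by directly estimated posterior model probabilities can disagree for a substantial fraction of data sets.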
Dr. A. B. Deoghare
2012-05-01
The modern treatment of solving the differential equations of engineering problems relies heavily on approximation methods. To keep the discussion simple while maintaining a general formulation of practical interest in engineering, the model problem considered is an axially loaded bar whose cross-sectional area is a quadratic function of position. The unknown variable, the axial displacement of the one-dimensional continuum u(x), is obtained in the present research by means of numerical analysis techniques in which the basic inputs to the problem are known from arbitrary basic data. Numerically evaluating derivatives and integrals is a rather common and usually stable task. Attempting the numerical solution with different approximation methods introduces errors in solving the differential equations of the system. The need for precise solutions under the different numerical approximation methods is met by developing an in-house computer program. The developed code is robust enough to control errors in the computed results arising from arithmetic round-off or truncation. The program is interactive and user-friendly, allowing the desired inputs to be changed, and results can be displayed graphically to show the effect of the chosen weights and assumed constants.
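A minimal numerical illustration of such a model problem: for a bar with quadratic area A(x), fixed at x = 0 and carrying a tip load P, equilibrium with no distributed load gives E A(x) u'(x) = P, so the tip displacement is a simple quadrature. The area law and all parameter values below are assumed; the trapezoid rule stands in for the approximation methods discussed in the abstract.

```python
# Model problem: axial bar with quadratic area A(x) = A0*(1 + x/L)**2,
# fixed at x = 0, tip load P at x = L, no distributed load.  Equilibrium
# gives E*A(x)*u'(x) = P, so the tip displacement is the quadrature
# u(L) = integral of P/(E*A(x)) dx = P*L/(2*E*A0) for this area law.
# All parameter values are assumed, for illustration only.
E, A0, L, P = 200e9, 1e-4, 2.0, 1e4     # Pa, m^2, m, N

def area(x):
    return A0 * (1.0 + x / L) ** 2

def tip_displacement(n=1000):           # composite trapezoid rule
    h = L / n
    f = [P / (E * area(i * h)) for i in range(n + 1)]
    return h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])

exact = P * L / (2.0 * E * A0)          # closed form: 5e-4 m
print(abs(tip_displacement() - exact) / exact < 1e-5)   # -> True
```

Comparing the quadrature against the closed form is exactly the kind of round-off/truncation check the abstract's in-house program is described as performing.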
Phenotypic MicroRNA Microarrays
2013-01-01
Microarray technology has become a very popular approach in cases where multiple experiments need to be conducted repeatedly or done with a variety of samples. In our lab, we are applying our high-density spot microarray approach to microscopy visualization of the effects of transiently introduced siRNA or cDNA on cellular morphology or phenotype. In this publication, we discuss the possibility of using this micro-scale high-throughput process to study the role of microRNAs in the bio...
Pin Carmen
2007-11-01
Abstract Background Microarrays are widely used for the study of gene expression; however, deciding whether observed differences in expression are significant remains a challenge. Results A computing tool (ArrayLeaRNA) has been developed for gene expression analysis. It implements a Bayesian approach which is based on the Gumbel distribution and uses printed genomic DNA control features for normalization and for estimation of the parameters of the Bayesian model, together with prior knowledge from predicted operon structure. The method is compared with two other approaches: the classical LOWESS normalization followed by a two-fold cut-off criterion, and the OpWise method (Price et al., 2006, BMC Bioinformatics 7, 19), a published Bayesian approach also using predicted operon structure. The three methods were compared on experimental datasets with prior knowledge of gene expression. With ArrayLeaRNA, data normalization is carried out according to the genomic features, which reflect the results of equally transcribed genes; the statistical significance of the difference in expression is also based on the variability of the equally transcribed genes. The operon information helps the classification of genes with low-confidence measurements. ArrayLeaRNA is implemented in Visual Basic and freely available as an Excel add-in at http://www.ifr.ac.uk/safety/ArrayLeaRNA/ Conclusion We have introduced a novel Bayesian model and demonstrated that it is a robust method for analysing microarray expression profiles. ArrayLeaRNA showed a considerable improvement in data normalization, in the estimation of the experimental variability intrinsic to each hybridization and in the establishment of a clear boundary between non-changing and differentially expressed genes. The method is applicable to data derived from hybridizations of labelled cDNA samples as well as from hybridizations of labelled cDNA with genomic DNA and can be used for the analysis of datasets where
Two-layer interfacial flows beyond the Boussinesq approximation: a Hamiltonian approach
Camassa, R; Ortenzi, G
2015-01-01
The theory of integrable systems of Hamiltonian PDEs and their near-integrable deformations is used to study evolution equations resulting from vertical-averages of the Euler system for two-layer stratified flows in an infinite 2D channel. The Hamiltonian structure of the averaged equations is obtained directly from that of the Euler equations through the process of Hamiltonian reduction. Long-wave asymptotics together with the Boussinesq approximation of neglecting the fluids' inertia is then applied to reduce the leading order vertically averaged equations to the shallow-water Airy system, and thence, in a non-trivial way, to the dispersionless non-linear Schrödinger equation. The full non-Boussinesq system for the dispersionless limit can then be viewed as a deformation of this well known equation. In a perturbative study of this deformation, it is shown that at first order the deformed system possesses an infinite sequence of constants of the motion, thus casting this system within the framework of comp...
Unsteady Hartmann Two-Phase Flow: The Riemann-Sum Approximation Approach
Jha, B. K.; Babila, C. T.; Isa, S.
2016-12-01
We consider the time dependent Hartmann flow of a conducting fluid in a channel formed by two horizontal parallel plates of infinite extent, there being a layer of a non-conducting fluid between the conducting fluid and the upper channel wall. The flow formation of conducting and non-conducting fluids is coupled by equating the velocity and shear stress at the interface. The unsteady flow formation inside the channel is caused by a sudden change in the pressure gradient. The relevant partial differential equations capturing the present physical situation are transformed into ordinary differential equations using the Laplace transform technique. The ordinary differential equations are then solved analytically and the Riemann-sum approximation method is used to invert the Laplace domain into the time domain. The solution obtained is validated by comparisons with the closed form solutions obtained for steady states, which have been derived separately, and also by the implicit finite difference method. The variation of velocity, mass flow rate and skin-friction on both plates for the various physical parameters involved in the problem is reported and discussed with the help of line graphs. It was found that the effect of changing the electric load parameter is to aid or oppose the flow as compared to the short-circuited case.
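The Riemann-sum approximation for inverting a Laplace-domain solution F(s) is commonly written f(t) ≈ (e^{γt}/t)[F(γ)/2 + Σ_{k=1..N} (−1)^k Re F(γ + ikπ/t)], with γt ≈ 4.7 a frequently used heuristic. The sketch below applies it to a generic transform with a known inverse, not to the coupled-flow solution of the paper.

```python
import math

# Riemann-sum approximation of the inverse Laplace transform:
#   f(t) ~ (exp(gamma*t)/t) * [ F(gamma)/2
#            + sum_{k=1..N} (-1)**k * Re F(gamma + 1j*k*pi/t) ]
# with gamma*t ~ 4.7 a common heuristic.  Generic sketch, validated here on
# a transform with a known inverse.

def riemann_sum_inverse(F, t, N=2000, gamma=None):
    if gamma is None:
        gamma = 4.7 / t
    acc = F(complex(gamma, 0.0)).real / 2.0
    for k in range(1, N + 1):
        acc += (-1) ** k * F(complex(gamma, k * math.pi / t)).real
    return math.exp(gamma * t) / t * acc

F = lambda s: 1.0 / (s + 1.0)                # Laplace transform of exp(-t)
approx = riemann_sum_inverse(F, 1.0)
print(abs(approx - math.exp(-1.0)) < 1e-3)   # -> True: matches exp(-1) closely
```

In the paper's setting, F would be the analytically obtained Laplace-domain velocity profile and the same sum would be evaluated at each spatial point and time of interest.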
Jennings, Elise; Sako, Masao
2016-01-01
Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the 'Tripp' and 'Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of ~1000 SNe corresponding to the first season of the Dark En...
Tsai, Chen-An; Huang, Chih-Yang; Liu, Jen-Pei
2014-08-30
The approval of generic drugs requires evidence of average bioequivalence (ABE) on both the area under the concentration-time curve and the peak concentration Cmax. The bioequivalence (BE) hypothesis can be decomposed into the non-inferiority (NI) and non-superiority (NS) hypotheses. Most regulatory agencies employ the two one-sided tests (TOST) procedure to test ABE between two formulations. As it is based on the intersection-union principle, the TOST procedure is conservative in terms of the type I error rate. However, its type II error rate is the sum of the type II error rates of the NI and NS hypotheses. When the difference in population means between two treatments is not 0, no closed-form solution for the sample size for the BE hypothesis is available. Current methods provide sample sizes with either insufficient power or unnecessarily excessive power. We suggest an approximate method for sample size determination, which can also provide the type II error rate for each of the NI and NS hypotheses. In addition, the proposed method is flexible enough to extend from one pharmacokinetic (PK) response to the sample size required for multiple PK responses. We report the results of a numerical study. An R code is provided to calculate the sample size for BE testing based on the proposed methods.
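The TOST procedure discussed above amounts to two one-sided tests at level alpha, or equivalently to checking that the (1-2*alpha) confidence interval for the mean difference lies inside (-theta, theta). A minimal sketch on the log scale (hypothetical function name; a normal quantile stands in for the t quantile, which is only reasonable for larger samples):

```python
import math
from statistics import NormalDist, fmean

def tost_abe(test, ref, theta=math.log(1.25), alpha=0.05):
    """Two one-sided tests (TOST) for average bioequivalence on
    log-transformed PK responses (unpaired, equal-variance design).
    Returns True when BE is concluded, i.e. when the (1-2*alpha) CI
    for the mean difference is contained in (-theta, theta)."""
    nT, nR = len(test), len(ref)
    d = fmean(test) - fmean(ref)
    # pooled variance across the two formulations
    sp2 = (sum((x - fmean(test)) ** 2 for x in test) +
           sum((x - fmean(ref)) ** 2 for x in ref)) / (nT + nR - 2)
    se = math.sqrt(sp2 * (1.0 / nT + 1.0 / nR))
    z = NormalDist().inv_cdf(1.0 - alpha)  # normal approx. to the t quantile
    return d - z * se > -theta and d + z * se < theta
```

With the conventional limits, theta = ln(1.25) ≈ 0.223, so BE holds only when the whole 90% CI for the log-scale difference falls within ±0.223.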
Viral discovery and sequence recovery using DNA microarrays.
David Wang
2003-11-01
Because of the constant threat posed by emerging infectious diseases and the limitations of existing approaches used to identify new pathogens, there is a great demand for new technological methods for viral discovery. We describe herein a DNA microarray-based platform for novel virus identification and characterization. Central to this approach was a DNA microarray designed to detect a wide range of known viruses as well as novel members of existing viral families; this microarray contained the most highly conserved 70mer sequences from every fully sequenced reference viral genome in GenBank. During an outbreak of severe acute respiratory syndrome (SARS) in March 2003, hybridization to this microarray revealed the presence of a previously uncharacterized coronavirus in a viral isolate cultivated from a SARS patient. To further characterize this new virus, approximately 1 kb of the unknown virus genome was cloned by physically recovering viral sequences hybridized to individual array elements. Sequencing of these fragments confirmed that the virus was indeed a new member of the coronavirus family. This combination of array hybridization followed by direct viral sequence recovery should prove to be a general strategy for the rapid identification and characterization of novel viruses and emerging infectious disease.
Barik, Anwesha; Banerjee, Satarupa; Dhara, Santanu; Chakravorty, Nishant
2017-03-10
Complexities in full genome expression studies hinder the extraction of tracker genes to analyze the course of biological events. In this study, we demonstrate the application of supervised machine learning methods to reduce the irrelevance in microarray data series and thereby extract robust molecular markers to track biological processes. The methodology is illustrated by analyzing whole genome expression studies on bone-implant integration (osseointegration). Being a biological process, osseointegration is known to leave a trail of genetic footprints during its course. In spite of the enormous amount of raw data in public repositories, researchers still do not have access to a panel of genes that can definitively track osseointegration. The results from our study revealed that panels comprising matrix metalloproteinase and collagen genes were able to track osseointegration on implant surfaces (MMP9 and COL1A2 on micro-textured; MMP12 and COL6A3 on superimposed nano-textured surfaces) with 100% classification accuracy, specificity and sensitivity. Further, our analysis showed the importance of the progression of time in the establishment of the mechanical connection at the bone-implant surface. The findings from this study are expected to be useful to researchers investigating the osseointegration of novel implant materials, especially at the early stage. The methodology demonstrated can be easily adapted by scientists in different fields to analyze large databases for other biological processes.
Boehn Susanne NE
2008-12-01
Abstract Background MicroRNAs (miRNAs) play key roles in mammalian gene expression and several cellular processes, including differentiation, development, apoptosis and cancer pathomechanisms. Recently the biological importance of primary cilia has been recognized in a number of human genetic diseases. Numerous disorders are related to cilia dysfunction, including polycystic kidney disease (PKD). Although the involvement of certain genes and transcriptional networks in PKD development has been shown, little is known about how they are regulated molecularly. Results Given the emerging role of miRNAs in gene expression, we explored the possibilities of miRNA-based regulation in PKD. Here, we analyzed the simultaneous expression changes of miRNAs and mRNAs by microarrays. 935 genes, classified into 24 functional categories, were differentially regulated between PKD and control animals. In parallel, 30 miRNAs were differentially regulated in PKD rats: our results suggest that several miRNAs might be involved in regulating genetic switches in PKD. Furthermore, we describe some newly detected miRNAs, miR-31 and miR-217, in the kidney, which have not been reported previously. We determined functionally related gene sets, or pathways, to reveal the functional correlation between differentially expressed mRNAs and miRNAs. Conclusion We find that the functional patterns of predicted miRNA targets and differentially expressed mRNAs are similar. Our results suggest an important role of miRNAs in specific pathways underlying PKD.
Lifan Luo
BACKGROUND: MicroRNAs (miRNAs) are short non-coding RNA molecules which have been shown to be involved in mammalian spermatogenesis. Their expression and function in porcine germ cells are not fully understood. METHODOLOGY: We employed a miRNA microarray containing 1260 unique miRNA probes to evaluate the miRNA expression patterns between sexually immature (60-day) and mature (180-day) pig testes. One hundred and twenty-nine miRNAs, representing 164 reporter miRNAs, were expressed differently (p<0.1). Fifty-one miRNAs were significantly up-regulated and 78 miRNAs were down-regulated in mature testes. Nine of these differentially expressed miRNAs were validated using a quantitative RT-PCR assay. In total, 15,919 putative miRNA-target sites were detected by using the RNA22 method to align 445 NCBI pig cDNA sequences with these 129 differentially expressed miRNAs, and seven putative target genes involved in spermatogenesis, including DAZL and RNF4, were confirmed by quantitative RT-PCR. CONCLUSIONS: Overall, the results of this study indicated specific miRNA expression in porcine testes and suggested that miRNAs have a role in regulating spermatogenesis.
Approximations for modelling CO chemistry in GMCs: a comparison of approaches
Glover, S C O
2011-01-01
We examine several different simplified approaches for modelling the chemistry of CO in three-dimensional numerical simulations of turbulent molecular clouds. We compare the different models both by looking at the behaviour of integrated quantities such as the mean CO fraction or the cloud-averaged CO-to-H2 conversion factor, and also by studying the detailed distribution of CO as a function of gas density and visual extinction. In addition, we examine the extent to which the density and temperature distributions depend on our choice of chemical model. We find that the two most complex models that we examine in this study, taken from work by Nelson & Langer (1999) and Glover et al. (2010), produce very similar results in all of our comparisons. However, the Nelson & Langer model is roughly a factor of three faster than the Glover et al. model, and thus will be the better choice for many applications. The simpler models examined in this study are even faster than the Nelson & Langer (1999) model, b...
Microarrays, Integrated Analytical Systems
Combinatorial chemistry is used to find materials that form sensor microarrays. This book discusses the fundamentals, and then proceeds to the many applications of microarrays, from measuring gene expression (DNA microarrays) to protein-protein interactions, peptide chemistry, carbohydrate chemistry, electrochemical detection, and microfluidics.
Ohlberger, Mario; Smetana, Kathrin
2016-09-01
In this article we introduce a procedure which allows one to recover the potentially very good approximation properties of tensor-based model reduction procedures for the solution of partial differential equations in the presence of interfaces or strong gradients in the solution which are skewed with respect to the coordinate axes. The two key ideas are the location of the interface, either by solving a lower-dimensional partial differential equation or by using data functions, and the subsequent removal of the interface from the solution by choosing the determined interface as the lifting function of the Dirichlet boundary conditions. We demonstrate in numerical experiments for linear elliptic equations and the reduced basis-hierarchical model reduction approach that the proposed procedure locates the interface well and yields a significantly improved convergence behavior, even in the case when we only consider an approximation of the interface.
Nedorezov, L V
2015-01-01
Verhulst and Gompertz models were used to approximate the well-known time series of Paramecium caudatum population dynamics (G. F. Gause, The Struggle for Existence, 1934). The parameters of each model were estimated in two different ways: with the least squares method (global fitting) and with a non-traditional approach (the method of extreme points). The results obtained were compared with each other and with those reported by G. F. Gause. Deviations of the theoretical (model) trajectories from the experimental time series were tested using various non-parametric statistical tests. It was shown that the least squares estimates lead to results which do not always meet the requirements imposed on a "fine" model. In some cases, however, a small modification of the least squares estimates is possible, allowing a satisfactory representation of the experimental data set.
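The Verhulst (logistic) model mentioned above has the closed form N(t) = K / (1 + ((K - N0)/N0) e^(-rt)). A crude grid-search least-squares fit, standing in for the paper's "global fitting" (the method of extreme points is not reproduced here; function names are illustrative), looks like:

```python
import math

def verhulst(t, K, r, N0):
    """Closed-form solution of the Verhulst (logistic) growth model."""
    return K / (1.0 + (K - N0) / N0 * math.exp(-r * t))

def fit_verhulst(ts, ys, Ks, rs):
    """Global least-squares fit over a (K, r) parameter grid, with the
    initial population N0 pinned to the first observation.  Returns the
    best (sse, K, r) triple found on the grid."""
    N0 = ys[0]
    best = None
    for K in Ks:
        for r in rs:
            sse = sum((verhulst(t, K, r, N0) - y) ** 2
                      for t, y in zip(ts, ys))
            if best is None or sse < best[0]:
                best = (sse, K, r)
    return best
```

A real analysis would refine this with a continuous optimizer; the grid version is just enough to show what "global fitting" minimizes.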
Christian Kohler
2016-07-01
The environmental bacterium Burkholderia pseudomallei causes the infectious disease melioidosis with a high case-fatality rate in tropical and subtropical regions. Direct pathogen detection can be difficult, and therefore an indirect serological test which might aid early diagnosis is desirable. However, current tests for antibodies against B. pseudomallei, including the reference indirect haemagglutination assay (IHA), lack sensitivity, specificity and standardization. Consequently, serological tests currently do not play a role in the diagnosis of melioidosis in endemic areas. Recently, a number of promising diagnostic antigens have been identified, but a standardized, easy-to-perform clinical laboratory test for sensitive multiplex detection of antibodies against B. pseudomallei is still lacking. In this study, we developed and validated a protein microarray which can be used in a standard 96-well format. Our array contains 20 recombinant and purified B. pseudomallei proteins, previously identified as serodiagnostic candidates in melioidosis. In total, we analyzed 196 sera and plasmas from melioidosis patients from northeast Thailand and 210 negative controls from melioidosis-endemic and non-endemic regions. Our protein array clearly discriminated between sera from melioidosis patients and controls with a specificity of 97%. Importantly, the array showed a higher sensitivity than the IHA in melioidosis patients upon admission (cut-off IHA titer ≥1:160: IHA 57.3%, protein array 86.7%; p = 0.0001). Testing of sera from single patients at 0, 12 and 52 weeks post-admission revealed that protein antigens induce either a short- or long-term antibody response. Our protein array provides a standardized, rapid, easy-to-perform test for the detection of B. pseudomallei-specific antibody patterns. Thus, this system has the potential to improve the serodiagnosis of melioidosis in clinical settings. Moreover, our high-throughput assay might be useful
Amin, Talha
2013-01-01
In the paper, we present a comparison of dynamic programming and greedy approaches for construction and optimization of approximate decision rules relative to the number of misclassifications. We use an uncertainty measure that is a difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. Experimental results with decision tables from the UCI Machine Learning Repository are also presented. © 2013 Springer-Verlag.
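The uncertainty measure described above, together with a greedy condition-by-condition construction of a γ-decision rule, can be sketched as follows (an illustrative greedy heuristic, not the paper's exact algorithm; names are hypothetical):

```python
from collections import Counter

def uncertainty(rows, decisions):
    """Number of rows minus the number of rows carrying the most common
    decision -- the uncertainty measure used to define gamma-decision rules."""
    if not rows:
        return 0
    return len(rows) - max(Counter(decisions[i] for i in rows).values())

def greedy_rule(table, decisions, row, gamma):
    """Greedily add attribute=value conditions taken from 'row' until the
    subtable localized by the rule has uncertainty at most gamma."""
    rows = list(range(len(table)))
    rule = []
    while uncertainty(rows, decisions) > gamma:
        best = None
        for a, v in enumerate(table[row]):
            if any(cond[0] == a for cond in rule):
                continue  # attribute already used in the rule
            sub = [i for i in rows if table[i][a] == v]
            u = uncertainty(sub, decisions)
            if best is None or u < best[0]:
                best = (u, a, v, sub)
        if best is None:  # no further conditions available
            break
        rule.append((best[1], best[2]))
        rows = best[3]
    return rule
```

On a tiny two-attribute table where attribute 0 determines the decision, the greedy construction stops after a single condition.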
Boyer F.
2013-01-01
This article deals with the problem of computing numerical approximations of null-controls for parabolic equations or systems by using the Hilbert Uniqueness Method (HUM). We mainly review recent results on this subject but we also provide new elements to emphasize the main ideas underlying the penalised HUM approach which is at the heart of the methods used in practice. We give many numerical illustrations.
Sago, Norichika; Nakano, Hiroyuki
2016-01-01
We revisit the accuracy of the post-Newtonian (PN) approximation and its region of validity for quasi-circular orbits of a point particle in the Kerr spacetime, using the highest analytically known PN-order gravitational energy flux together with accurate numerical results from the black hole perturbation approach. We find that the regions of validity become larger for higher PN-order results, although there are several local maxima in the regions of validity for relatively low PN-order results. This may imply that higher PN-order calculations are also encouraged for comparable-mass binaries.
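For reference, the leading (Newtonian quadrupole) term of the PN energy flux series studied above is, in units G = c = M = 1, dE/dt = (32/5) η² v¹⁰ with v = (Mω)^(1/3); the higher PN-order corrections are what the paper assesses:

```python
def pn_flux_leading(eta, v):
    """Leading (Newtonian quadrupole) term of the PN gravitational-wave
    energy flux for a quasi-circular orbit, in units G = c = M = 1.
    eta is the symmetric mass ratio and v = (M*omega)**(1/3).  The higher
    PN-order corrections assessed in the paper are omitted here."""
    return 32.0 / 5.0 * eta**2 * v**10
```

For an equal-mass system (η = 1/4) at v = 0.1 this gives a flux of 4e-11 in geometrized units; the steep v¹⁰ scaling is why the PN series is probed at increasingly relativistic velocities.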
Yasser Abduallah
2017-01-01
Gene regulation is a series of processes that control gene expression and its extent. The connections among genes and their regulatory molecules, usually transcription factors, and a descriptive model of such connections are known as gene regulatory networks (GRNs). Elucidating GRNs is crucial to understand the inner workings of the cell and the complexity of gene interactions. To date, numerous algorithms have been developed to infer gene regulatory networks. However, as the number of identified genes increases and the complexity of their interactions is uncovered, networks and their regulatory mechanisms become cumbersome to test. Furthermore, prodding through experimental results requires an enormous amount of computation, resulting in slow data processing. Therefore, new approaches are needed to expeditiously analyze copious amounts of experimental data resulting from cellular GRNs. To meet this need, cloud computing is promising as reported in the literature. Here, we propose new MapReduce algorithms for inferring gene regulatory networks on a Hadoop cluster in a cloud environment. These algorithms employ an information-theoretic approach to infer GRNs using time-series microarray data. Experimental results show that our MapReduce program is much faster than an existing tool while achieving slightly better prediction accuracy than the existing tool.
Abduallah, Yasser; Turki, Turki; Byron, Kevin; Du, Zongxuan; Cervantes-Cervantes, Miguel; Wang, Jason T L
2017-01-01
Gene regulation is a series of processes that control gene expression and its extent. The connections among genes and their regulatory molecules, usually transcription factors, and a descriptive model of such connections are known as gene regulatory networks (GRNs). Elucidating GRNs is crucial to understand the inner workings of the cell and the complexity of gene interactions. To date, numerous algorithms have been developed to infer gene regulatory networks. However, as the number of identified genes increases and the complexity of their interactions is uncovered, networks and their regulatory mechanisms become cumbersome to test. Furthermore, prodding through experimental results requires an enormous amount of computation, resulting in slow data processing. Therefore, new approaches are needed to expeditiously analyze copious amounts of experimental data resulting from cellular GRNs. To meet this need, cloud computing is promising as reported in the literature. Here, we propose new MapReduce algorithms for inferring gene regulatory networks on a Hadoop cluster in a cloud environment. These algorithms employ an information-theoretic approach to infer GRNs using time-series microarray data. Experimental results show that our MapReduce program is much faster than an existing tool while achieving slightly better prediction accuracy than the existing tool.
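The information-theoretic approach mentioned in both records typically scores candidate regulatory links by the mutual information between discretized expression profiles. A minimal single-pair version (equal-width binning, no background correction; the MapReduce layer in the paper distributes such pairwise computations across the cluster):

```python
import math
from collections import Counter

def mutual_info(xs, ys, bins=3):
    """Mutual information (in bits) between two expression time series
    after equal-width binning -- the basic pairwise score behind
    information-theoretic GRN inference (simplified sketch)."""
    def binned(values):
        lo, hi = min(values), max(values)
        w = (hi - lo) / bins or 1.0  # guard against constant series
        return [min(int((x - lo) / w), bins - 1) for x in values]
    bx, by = binned(xs), binned(ys)
    n = len(bx)
    pxy = Counter(zip(bx, by))
    px, py = Counter(bx), Counter(by)
    return sum(c / n * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())
```

A perfectly dependent pair scores log2(bins) bits, while a constant partner scores zero; in practice an edge is kept when the score exceeds a significance threshold.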
Castelletti, A.; Pianosi, F.; Restelli, M.
2013-06-01
The operation of large-scale water resources systems often involves several conflicting and noncommensurable objectives. The full characterization of tradeoffs among them is a necessary step to inform and support decisions in the absence of a unique optimal solution. In this context, the common approach is to consider many single-objective problems, resulting from different combinations of the original problem objectives, each one solved using standard optimization methods based on mathematical programming. This scalarization process is computationally very demanding, as it requires one optimization run for each trade-off, and often results in very sparse and poorly informative representations of the Pareto frontier. More recently, bio-inspired methods have been applied to compute an approximation of the Pareto frontier in a single run. These methods make it possible to cover the full extent of the Pareto frontier acceptably with a reasonable computational effort. Yet, the quality of the policy obtained might be strongly dependent on algorithm tuning and preconditioning. In this paper we propose a novel multiobjective Reinforcement Learning algorithm that combines the advantages of the above two approaches and alleviates some of their drawbacks. The proposed algorithm is an extension of fitted Q-iteration (FQI) that learns the operating policies for all the linear combinations of preferences (weights) assigned to the objectives in a single training process. The key idea of multiobjective FQI (MOFQI) is to enlarge the continuous approximation of the value function, which is performed by single-objective FQI over the state-decision space, to the weight space as well. The approach is demonstrated on a real-world case study concerning the optimal operation of the HoaBinh reservoir on the Da river, Vietnam. MOFQI is compared with the reiterated use of FQI and a multiobjective parameterization-simulation-optimization (MOPSO) approach. Results show that MOFQI provides a
In control: systematic assessment of microarray performance.
van Bakel, Harm; Holstege, Frank C P
2004-10-01
Expression profiling using DNA microarrays is a powerful technique that is widely used in the life sciences. How reliable are microarray-derived measurements? The assessment of performance is challenging because of the complicated nature of microarray experiments and the many different technology platforms. There is a mounting call for standards to be introduced, and this review addresses some of the issues that are involved. Two important characteristics of performance are accuracy and precision. The assessment of these factors can be either for the purpose of technology optimization or for the evaluation of individual microarray hybridizations. Microarray performance has been evaluated by at least four approaches in the past. Here, we argue that external RNA controls offer the most versatile system for determining performance and describe how such standards could be implemented. Other uses of external controls are discussed, along with the importance of probe sequence availability and the quantification of labelled material.
The EADGENE Microarray Data Analysis Workshop
de Koning, Dirk-Jan; Jaffrézic, Florence; Lund, Mogens Sandø
2007-01-01
Microarray analyses have become an important tool in animal genomics. While their use is becoming widespread, there is still a lot of ongoing research regarding the analysis of microarray data. In the context of a European Network of Excellence, 31 researchers representing 14 research groups from...... 10 countries performed and discussed the statistical analyses of real and simulated 2-colour microarray data that were distributed among participants. The real data consisted of 48 microarrays from a disease challenge experiment in dairy cattle, while the simulated data consisted of 10 microarrays...... statistical weights, to omitting a large number of spots or omitting entire slides. Surprisingly, these very different approaches gave quite similar results when applied to the simulated data, although not all participating groups analysed both real and simulated data. The workshop was very successful...
Introduction to microarray technology.
Dufva, Martin
2009-01-01
DNA microarrays can be used for a large number of applications where high throughput is needed. The ability to probe a sample for hundreds to millions of different molecules at once has made DNA microarrays one of the fastest growing techniques since their introduction about 15 years ago. Microarray technology can be used for large-scale genotyping, gene expression profiling, comparative genomic hybridization and resequencing, among other applications. Microarray technology is a complex mixture of numerous technologies and research fields, such as mechanics, microfabrication, chemistry, DNA behaviour, microfluidics, enzymology, optics and bioinformatics. This chapter gives an introduction to each of the five basic steps in microarray technology: fabrication, target preparation, hybridization, detection and data analysis. Basic concepts and nomenclature used in the field of microarray technology, and their relationships, will also be explained.
Motohide Hori
2016-06-01
Lavender oil (LO) is a commonly used essential oil in aromatherapy as non-traditional medicine. With the aim of demonstrating LO effects on the body, we recently established an animal model investigating the influence of orally administered LO in rat tissues, genome-wide. In this brief, we investigate the effect of LO ingestion in the blood of the rat. Rats were administered LO at the usual therapeutic dose (5 mg/kg in humans), and following collection of venous blood from the heart and extraction of total RNA, the differentially expressed genes were screened using a 4 × 44-K whole-genome rat chip (Agilent microarray platform; Agilent Technologies, Palo Alto, CA, USA) in conjunction with a two-color dye-swap approach. A total of 834 differentially expressed genes in the blood were identified: 362 up-regulated and 472 down-regulated. These genes were functionally categorized using bioinformatics tools. The gene expression inventory of the rat blood transcriptome under LO, a first report, has been deposited in the Gene Expression Omnibus (GEO: GSE67499). The data will be a valuable resource for examining the effects of natural products, and could also serve as a human model for further functional analysis and investigation.
Santopolo, L; Marchi, E; Frediani, L; Decorosi, F; Viti, C; Giovannetti, L
2012-01-01
A rapid method for screening the metabolic susceptibility of biofilms to toxic compounds was developed by combining the Calgary Biofilm Device (MBEC device) and Phenotype MicroArray (PM) technology. The method was developed using Pseudomonas alcaliphila 34, a Cr(VI)-hyper-resistant bacterium, as the test organism. P. alcaliphila produced a robust biofilm after incubation for 16 h, reaching the maximum value after incubation for 24 h (9.4 × 10(6) ± 3.3 × 10(6) CFU peg(-1)). In order to detect the metabolic activity of cells in the biofilm, dye E (5×) and menadione sodium bisulphate (100 μM) were selected for redox detection chemistry, because they produced a high colorimetric yield in response to bacterial metabolism (340.4 ± 6.9 Omnilog Arbitrary Units). This combined approach, which avoids the limitations of traditional plate counts, was validated by testing the susceptibility of P. alcaliphila biofilm to 22 toxic compounds. For each compound the concentration level that significantly lowered the metabolic activity of the biofilm was identified. Chemical sensitivity analysis of the planktonic culture was also performed, allowing comparison of the metabolic susceptibility patterns of biofilm and planktonic cultures.
Andrei Halanay
2013-07-01
In the present paper, we use a penalization of the Stokes equation in order to obtain approximate solutions in a larger domain including the domain occupied by the structure. The coefficients of the fluid problem, except for the penalizing term, are constant and independent of the deformation of the structure, which represents an advantage of this approach. Subtracting the structure equations from the fictitious fluid equations in the structure domain and using Green's formula, we obtain a weak formulation where the continuity of the stress at the interface does not appear explicitly. This is a second advantage of this model, because the computation of the stress at the fluid-structure interface is not easy, from the theoretical point of view as well as for the numerical approximation. This problem is a free boundary problem, and a fundamental difficulty is to find the free interface between the fluid and the structure, which is unknown and has to be identified together with the solution of the given system of equations.
Purkayastha, Archak; Dhar, Abhishek; Kulkarni, Manas
2016-06-01
We present the Born-Markov approximated Redfield quantum master equation (RQME) description for an open system of noninteracting particles (bosons or fermions) on an arbitrary lattice of N sites in any dimension and weakly connected to multiple reservoirs at different temperatures and chemical potentials. The RQME can be reduced to the Lindblad equation, of various forms, by making further approximations. By studying the N =2 case, we show that RQME gives results which agree with exact analytical results for steady-state properties and with exact numerics for time-dependent properties over a wide range of parameters. In comparison, the Lindblad equations have a limited domain of validity in nonequilibrium. We conclude that it is indeed justified to use microscopically derived full RQME to go beyond the limitations of Lindblad equations in out-of-equilibrium systems. We also derive closed-form analytical results for out-of-equilibrium time dynamics of two-point correlation functions. These results explicitly show the approach to steady state and thermalization. These results are experimentally relevant for cold atoms, cavity QED, and far-from-equilibrium quantum dot experiments.
Thakare SP
2012-11-01
DNA microarrays are an emerging technique in biotechnology. The many varieties of DNA microarray or DNA chip devices and systems are described along with their methods of fabrication and their use, including screening and diagnostic applications. DNA microarray hybridization applications include the important areas of gene expression analysis and genotyping for point mutations, single nucleotide polymorphisms (SNPs), and short tandem repeats (STRs). In addition to the many molecular biological and genomic research uses, this review covers applications of microarray devices and systems for pharmacogenomic research and drug discovery, infectious and genetic disease and cancer diagnostics, and forensic and genetic identification purposes.
Wayne E Clarke
Targeted genomic selection methodologies, or sequence capture, allow for DNA enrichment and large-scale resequencing and characterization of natural genetic variation in species with complex genomes, such as rapeseed canola (Brassica napus L., AACC, 2n=38). The main goal of this project was to combine sequence capture with next-generation sequencing (NGS) to discover single nucleotide polymorphisms (SNPs) in specific areas of the B. napus genome historically associated (via quantitative trait loci, QTL, analysis) with traits of agronomic and nutritional importance. A 2.1 million feature sequence capture platform was designed to interrogate DNA sequence variation across 47 specific genomic regions, representing 51.2 Mb of the Brassica A and C genomes, in ten diverse rapeseed genotypes. All ten genotypes were sequenced using the 454 Life Sciences chemistry and, to assess the effect of increased sequence depth, two genotypes were also sequenced using Illumina HiSeq chemistry. As a result, 589,367 potentially useful SNPs were identified. Analysis of sequence coverage indicated a four-fold increased representation of target regions, with 57% of the filtered SNPs falling within these regions. Sixty percent of discovered SNPs corresponded to transitions while 40% were transversions. Interestingly, fifty-eight percent of the SNPs were found in genic regions while 42% were found in intergenic regions. Further, a high percentage of genic SNPs was found in exons (65% and 64% for the A and C genomes, respectively). Two different genotyping assays were used to validate the discovered SNPs. Validation rates ranged from 61.5% to 84% of tested SNPs, underscoring the effectiveness of this SNP discovery approach. Most importantly, the discovered SNPs were associated with agronomically important regions of the B. napus genome, generating a novel data resource for research and breeding of this crop species.
Laser-based patterning for transfected cell microarrays
Hook, Andrew L; Creasey, Rhiannon; Voelcker, Nicolas H [Flinders University, GPO Box 2100, Bedford Park, SA 5042 (Australia); Hayes, Jason P [MiniFAB, 1 Dalmore Drive, Caribbean Park, Scoresby VIC 3179 (Australia); Thissen, Helmut, E-mail: Nico.Voelcker@flinders.edu.a [CSIRO Molecular and Health Technologies, Bayview Avenue, Clayton VIC 3168 (Australia)
2009-12-15
The spatial control over biomolecule- and cell-surface interactions is of great interest to a broad range of biomedical applications, including sensors, implantable devices and cell microarrays. Microarrays in particular require precise spatial control and the formation of patterns with microscale features. Here, we have developed an approach specifically designed for transfected cell microarray (TCM) applications that allows microscale spatial control over the location of both DNA and cells on highly doped p-type silicon substrates. This was achieved by surface modification, involving plasma polymerization of allylamine, grafting of poly(ethylene glycol) and subsequent excimer laser ablation. DNA could be delivered in a spatially defined manner using ink-jet printing. In addition, electroporation was investigated as an approach to transfect attached cells with adsorbed DNA and good transfection efficiencies of approximately 20% were observed. The ability of the microstructured surfaces to spatially direct both DNA adsorption and cell attachment was demonstrated in a functional TCM, making this system an exciting platform for chip-based functional genomics.
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the data...
Simon Boitard
2016-03-01
Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
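The rejection logic at the core of ABC methods such as PopSizeABC can be illustrated generically: draw parameters from the prior, simulate, and keep the draws whose summary statistics fall closest to the observed ones. The sketch below uses a toy one-parameter simulator and a Euclidean distance, both stand-ins for the folded allele frequency spectrum and LD statistics used by the actual method:

```python
import random

def abc_rejection(observed_stats, simulate, prior_draw, n_sims=2000, quantile=0.05):
    """Generic ABC rejection sampler: keep the parameter draws whose simulated
    summary statistics are closest (Euclidean distance) to the observed ones."""
    draws = []
    for _ in range(n_sims):
        theta = prior_draw()
        stats = simulate(theta)
        dist = sum((s - o) ** 2 for s, o in zip(stats, observed_stats)) ** 0.5
        draws.append((dist, theta))
    draws.sort(key=lambda d: d[0])
    kept = draws[: max(1, int(n_sims * quantile))]
    return [theta for _, theta in kept]

# Toy problem: infer the mean of a normal distribution from its sample mean.
random.seed(0)
observed = [3.0]  # a single summary statistic

def simulate(mu):
    return [sum(random.gauss(mu, 1.0) for _ in range(30)) / 30]

posterior = abc_rejection(observed, simulate, lambda: random.uniform(-10, 10))
estimate = sum(posterior) / len(posterior)
```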
MARS: Microarray analysis, retrieval, and storage system
Scheideler Marcel
2005-04-01
Full Text Available Abstract Background Microarray analysis has become a widely used technique for the study of gene-expression patterns on a genomic scale. As more and more laboratories are adopting microarray technology, there is a need for powerful and easy to use microarray databases facilitating array fabrication, labeling, hybridization, and data analysis. The wealth of data generated by this high throughput approach renders adequate database and analysis tools crucial for the pursuit of insights into the transcriptomic behavior of cells. Results MARS (Microarray Analysis and Retrieval System) provides a comprehensive MIAME-supportive suite for storing, retrieving, and analyzing multi-color microarray data. The system comprises a laboratory information management system (LIMS), quality control management, as well as a sophisticated user management system. MARS is fully integrated into an analytical pipeline of microarray image analysis, normalization, gene expression clustering, and mapping of gene expression data onto biological pathways. The incorporation of ontologies and the use of MAGE-ML enables an export of studies stored in MARS to public repositories and other databases accepting these documents. Conclusion We have developed an integrated system tailored to serve the specific needs of microarray based research projects using a unique fusion of Web based and standalone applications connected to the latest J2EE application server technology. The presented system is freely available for academic and non-profit institutions. More information can be found at http://genome.tugraz.at.
Boyer F.
2013-12-01
Full Text Available This article deals with the problem of computing numerical approximations of null-controls for parabolic equations or systems by using the Hilbert Uniqueness Method (HUM). We mainly review recent results on this subject, but we also provide new elements to emphasize the main ideas underlying the penalised HUM approach, which is at the heart of the methods used in practice. We give many numerical illustrations.
Microarray Analysis in Glioblastomas
Bhawe, Kaumudi M.; Aghi, Manish K.
2016-01-01
Microarray analysis in glioblastomas is done using either cell lines or patient samples as starting material. A survey of the current literature points to transcript-based microarrays and immunohistochemistry (IHC)-based tissue microarrays as being the preferred methods of choice in cancers of neurological origin. Microarray analysis may be carried out for various purposes, including the following: to correlate gene expression signatures of glioblastoma cell lines or tumors with response to chemotherapy (DeLay et al., Clin Cancer Res 18(10):2930–2942, 2012); to correlate gene expression patterns with biological features like proliferation or invasiveness of the glioblastoma cells (Jiang et al., PLoS One 8(6):e66008, 2013); and to discover new tumor classificatory systems based on gene expression signature, and to correlate therapeutic response and prognosis with these signatures (Huse et al., Annu Rev Med 64(1):59–70, 2013; Verhaak et al., Cancer Cell 17(1):98–110, 2010). While investigators can sometimes use archived tumor gene expression data available from repositories such as the NCBI Gene Expression Omnibus to answer their questions, new arrays must often be run to adequately answer specific questions. Here, we provide a detailed description of microarray methodologies, how to select the appropriate methodology for a given question, and analytical strategies that can be used. Experimental methodology for protein microarrays is outside the scope of this chapter, but basic sample preparation techniques for transcript-based microarrays are included here. PMID:26113463
A New Distribution Family for Microarray Data †
Kelmansky, Diana Mabel; Ricci, Lila
2017-01-01
The traditional approach with microarray data has been to apply transformations that approximately normalize them, with the drawback of losing the original scale. The alternative standpoint taken here is to search for models that fit the data, characterized by the presence of negative values, preserving their scale; one advantage of this strategy is that it facilitates a direct interpretation of the results. A new family of distributions named gpower-normal indexed by p∈R is introduced and it is proven that these variables become normal or truncated normal when a suitable gpower transformation is applied. Expressions are given for moments and quantiles, in terms of the truncated normal density. This new family can be used to model asymmetric data that include non-positive values, as required for microarray analysis. Moreover, it has been proven that the gpower-normal family is a special case of pseudo-dispersion models, inheriting all the good properties of these models, such as asymptotic normality for small variances. A combined maximum likelihood method is proposed to estimate the model parameters, and it is applied to microarray and contamination data. R codes are available from the authors upon request. PMID:28208652
Neese, Frank; Wennmohs, Frank; Hansen, Andreas
2009-03-01
Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) have been popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work an efficient production level implementation of the closed shell CEPA and CPF methods is reported that can be applied to medium sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol-1. Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500
Hallowed Olaoluwa
2015-01-01
Full Text Available In this research work, some results on the existence and approximation of common coupled fixed points of contractive maps in cone metric spaces are unified and generalized based on a new method.
Microarray Inspector: tissue cross contamination detection tool for microarray data.
Stępniak, Piotr; Maycock, Matthew; Wojdan, Konrad; Markowska, Monika; Perun, Serhiy; Srivastava, Aashish; Wyrwicz, Lucjan S; Świrski, Konrad
2013-01-01
Microarray technology changed the landscape of contemporary life sciences by providing vast amounts of expression data. Researchers are building up repositories of experiment results with various conditions and samples which serve the scientific community as a precious resource. Ensuring that the sample is of high quality is of utmost importance to this effort. The task is complicated by the fact that in many cases datasets lack information concerning pre-experimental quality assessment. Transcription profiling of tissue samples may be invalidated by an error caused by heterogeneity of the material. The risk of tissue cross contamination is especially high in oncological studies, where it is often difficult to extract the sample. Therefore, there is a need to develop a method for detecting tissue contamination in the post-experimental phase. We propose Microarray Inspector: customizable, user-friendly software that enables easy detection of samples containing mixed tissue types. The advantage of the tool is that it uses raw expression data files and analyses each array independently. In addition, the system allows the user to adjust the criteria of the analysis to conform to individual needs and research requirements. The final output of the program contains easy-to-read reports on the tissue contamination assessment, with detailed information about the test parameters and results. Microarray Inspector provides a list of contaminant biomarkers needed in the analysis of adipose tissue contamination. Using real data (datasets from public repositories) and our tool, we confirmed high specificity of the software in detecting contamination. The results indicated the presence of adipose tissue admixture in a range from approximately 4% to 13% in several tested surgical samples.
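The basic post-experimental check described, flagging a sample when tissue-specific biomarkers are unexpectedly expressed, can be reduced to thresholding marker-gene intensities. A minimal sketch with invented marker names and thresholds; the actual tool's biomarker list and decision statistics are more elaborate:

```python
# Flag a sample as contaminated if expression of tissue-specific marker genes
# exceeds a threshold. Gene names and threshold values are illustrative only.
ADIPOSE_MARKERS = {"LEP": 8.0, "ADIPOQ": 9.0}  # gene -> log2 intensity threshold

def contamination_flags(expression, markers=ADIPOSE_MARKERS):
    """Return the marker genes whose expression exceeds their threshold."""
    return {g for g, thr in markers.items() if expression.get(g, 0.0) > thr}

sample = {"LEP": 10.5, "ADIPOQ": 6.1, "TP53": 7.2}
flags = contamination_flags(sample)
is_contaminated = bool(flags)
```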
The Impact of Photobleaching on Microarray Analysis
Marcel von der Haar
2015-09-01
Full Text Available DNA-Microarrays have become a potent technology for high-throughput analysis of genetic regulation. However, the wide dynamic range of signal intensities of fluorophore-based microarrays exceeds the dynamic range of a single array scan by far, thus limiting the key benefit of microarray technology: parallelization. The implementation of multi-scan techniques represents a promising approach to overcome these limitations. These techniques are, in turn, limited by the fluorophores’ susceptibility to photobleaching when exposed to the scanner’s laser light. In this paper the photobleaching characteristics of cyanine-3 and cyanine-5 as part of solid state DNA microarrays are studied. The effects of initial fluorophore intensity as well as laser scanner dependent variables such as the photomultiplier tube’s voltage on bleaching and imaging are investigated. The resulting data is used to develop a model capable of simulating the expected degree of signal intensity reduction caused by photobleaching for each fluorophore individually, allowing for the removal of photobleaching-induced, systematic bias in multi-scan procedures. Single-scan applications also benefit as they rely on pre-scans to determine the optimal scanner settings. These findings constitute a step towards standardization of microarray experiments and analysis and may help to increase the lab-to-lab comparability of microarray experiment results.
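Photobleaching across repeated scans is commonly modeled as exponential signal decay, which can then be inverted to remove the systematic bias in multi-scan procedures. A minimal sketch of this idea; the decay constant below is an assumed per-fluorophore parameter that would be fitted from data, as in the study:

```python
import math

def bleached_intensity(i0, k, n_scans):
    """Expected intensity after n_scans scans, assuming exponential
    photobleaching with per-scan decay constant k (an assumed model)."""
    return i0 * math.exp(-k * n_scans)

def correct_intensity(observed, k, n_scans):
    """Invert the decay model to recover the pre-bleaching intensity."""
    return observed * math.exp(k * n_scans)

i0 = 1000.0
k = 0.05  # illustrative decay constant for one fluorophore, e.g. cyanine-5
after3 = bleached_intensity(i0, k, 3)       # signal after three scans
recovered = correct_intensity(after3, k, 3) # bias-corrected signal
```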
Carter, Mark G.; Hamatani, Toshio; Sharov, Alexei A; Carmack, Condie E; Qian, Yong; Aiba, Kazuhiro; Ko, Naomi T.; Dudekula, Dawood B.; Brzoska, Pius M.; Hwang, S. Stuart; Minoru S.H. Ko
2003-01-01
Applications of microarray technologies to mouse embryology/genetics have been limited, due to the nonavailability of microarrays containing large numbers of embryonic genes and the gap between microgram quantities of RNA required by typical microarray methods and the minuscule amounts of tissue available to researchers. To overcome these problems, we have developed a microarray platform containing in situ-synthesized 60-mer oligonucleotide probes representing approximately 22,000 unique mouse...
Jiahuan Wu; Jianlin Wang; Tao Yu; Liqiang Zhao
2014-01-01
The approaches to discrete approximation of Pareto front using multi-objective evolutionary algorithms have the problems of heavy computation burden, long running time and missing Pareto optimal points. In order to overcome these problems, an approach to continuous approximation of Pareto front using geometric support vector regression is presented. The regression model of the small size approximate discrete Pareto front is constructed by geometric support vector regression modeling and is described as the approximate continuous Pareto front. In the process of geometric support vector regression modeling, considering the distribution characteristic of Pareto optimal points, the separable augmented training sample sets are constructed by shifting original training sample points along multiple coordinated axes. Besides, an interactive decision-making (DM) procedure, in which the continuous approximation of Pareto front and decision-making are performed interactively, is designed for improving the accuracy of the preferred Pareto optimal point. The correctness of the continuous approximation of Pareto front is demonstrated with a typical multi-objective optimization problem. In addition, combined with the interactive decision-making procedure, the continuous approximation of Pareto front is applied in the multi-objective optimization for an industrial fed-batch yeast fermentation process. The experimental results show that the generated approximate continuous Pareto front has good accuracy and completeness. Compared with the multi-objective evolutionary algorithm with large size population, a more accurate preferred Pareto optimal point can be obtained from the approximate continuous Pareto front with less computation and shorter running time. The operation strategy corresponding to the final preferred Pareto optimal point generated by the interactive DM procedure can improve the production indexes of the fermentation process effectively.
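The core idea, fitting a continuous regression model through a small set of discrete Pareto points, can be sketched with ordinary polynomial least squares standing in for the paper's geometric support vector regression; the sample points below are drawn from a textbook front f2 = (1 - f1)^2:

```python
# Fit a quadratic f2 ~ a*f1^2 + b*f1 + c through discrete Pareto points to get a
# continuous front approximation. Plain least squares stands in for geometric SVR.
def fit_quadratic(points):
    """Return (a, b, c) solving the normal equations of a quadratic LS fit."""
    sx = [sum(x ** p for x, _ in points) for p in range(5)]   # sums of x^0..x^4
    sy = [sum(y * x ** p for x, y in points) for p in range(3)]
    A = [[sx[4], sx[3], sx[2]], [sx[3], sx[2], sx[1]], [sx[2], sx[1], sx[0]]]
    rhs = [sy[2], sy[1], sy[0]]
    # Gauss-Jordan elimination on the 3x3 system.
    for i in range(3):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        rhs[i] /= piv
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
                rhs[j] -= f * rhs[i]
    return tuple(rhs)

# Discrete Pareto points sampled from f2 = (1 - f1)^2 = f1^2 - 2*f1 + 1.
pts = [(0.0, 1.0), (0.25, 0.5625), (0.5, 0.25), (0.75, 0.0625), (1.0, 0.0)]
a, b, c = fit_quadratic(pts)
```

Because the sample points lie exactly on a quadratic, the fit recovers the front's coefficients; with noisy discrete fronts the same machinery returns the closest quadratic in the least-squares sense.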
Thomas, D. Roland; Decady, Yves J.
2004-01-01
This article discusses approximate tests of marginal association in 2-way tables in which one or both response variables admit multiple responses. Although multiple-response questions appear in all fields of research, including sociology, education, and marketing, the development of association tests that can be used with multiple-response data is…
Maize microarray annotation database
Berger Dave K
2011-10-01
Full Text Available Abstract Background Microarray technology has matured over the past fifteen years into a cost-effective solution with established data analysis protocols for global gene expression profiling. The Agilent-016047 maize 44 K microarray was custom-designed from EST sequences, but only reporter sequences with EST accession numbers are publicly available. The following information is lacking: (a) reporter-gene model match, (b) number of reporters per gene model, (c) potential for cross hybridization, (d) sense/antisense orientation of reporters, (e) position of reporter on the B73 genome sequence (for eQTL studies), and (f) functional annotations of genes represented by reporters. To address this, we developed a strategy to annotate the Agilent-016047 maize microarray, and built a publicly accessible annotation database. Description Genomic annotation of the 42,034 reporters on the Agilent-016047 maize microarray was based on BLASTN results of the 60-mer reporter sequences and their corresponding ESTs against the maize B73 RefGen v2 "Working Gene Set" (WGS) predicted transcripts and the genome sequence. The agreement between the EST, WGS transcript and gDNA BLASTN results was used to assign the reporters into six genomic annotation groups. These annotation groups were: (i) "annotation by sense gene model" (23,668 reporters); (ii) "annotation by antisense gene model" (4,330); (iii) "annotation by gDNA" without a WGS transcript hit (1,549); (iv) "annotation by EST", in which case the EST from which the reporter was designed, but not the reporter itself, has a WGS transcript hit (3,390); (v) "ambiguous annotation" (2,608); and (vi) "inconclusive annotation" (6,489). Functional annotations of reporters were obtained by BLASTX and Blast2GO analysis of corresponding WGS transcripts against GenBank. The annotations are available in the Maize Microarray Annotation Database http://MaizeArrayAnnot.bi.up.ac.za/, as well as through a GBrowse annotation file that can be uploaded to
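The assignment of reporters to the six annotation groups follows from which of the BLASTN comparisons produced a hit. A simplified sketch of such decision logic; the actual criteria also involve orientation checks and score thresholds not shown here:

```python
def annotation_group(sense_hit, antisense_hit, gdna_hit, est_wgs_hit):
    """Assign a reporter to one of the six genomic annotation groups based on
    which BLASTN comparisons produced a hit. Simplified illustrative logic."""
    if sense_hit and antisense_hit:
        return "ambiguous annotation"
    if sense_hit:
        return "annotation by sense gene model"
    if antisense_hit:
        return "annotation by antisense gene model"
    if est_wgs_hit:
        # The EST behind the reporter hits a WGS transcript, the reporter does not.
        return "annotation by EST"
    if gdna_hit:
        return "annotation by gDNA"
    return "inconclusive annotation"
```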
Champagnat, Nicolas; Faou, Erwan
2010-01-01
We propose extensions and improvements of the statistical analysis of distributed multipoles (SADM) algorithm put forth by Chipot et al. in [6] for the derivation of distributed atomic multipoles from the quantum-mechanical electrostatic potential. The method is mathematically extended to general least-squares problems and provides an alternative approximation method in cases where the original least-squares problem is computationally not tractable, either because of its ill-posedness or its high-dimensionality. The solution is approximated employing a Monte Carlo method that takes the average of a random variable defined as the solutions of random small least-squares problems drawn as subsystems of the original problem. The conditions that ensure convergence and consistency of the method are discussed, along with an analysis of the computational cost in specific instances.
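The central idea, approximating the solution of a large least-squares problem by averaging the solutions of many randomly drawn small subsystems, can be sketched for a two-unknown system; this is a toy illustration of the principle, not the SADM implementation:

```python
import random

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return (x0, x1)

def monte_carlo_lsq(rows, rhs, n_draws=500, seed=1):
    """Approximate the solution of an overdetermined 2-unknown system by
    averaging the solutions of randomly drawn square 2x2 subsystems."""
    rng = random.Random(seed)
    acc = [0.0, 0.0]
    done = 0
    while done < n_draws:
        i, j = rng.sample(range(len(rows)), 2)
        A = [rows[i], rows[j]]
        if abs(A[0][0] * A[1][1] - A[0][1] * A[1][0]) < 1e-12:
            continue  # skip singular subsystems
        x = solve2(A, [rhs[i], rhs[j]])
        acc[0] += x[0]
        acc[1] += x[1]
        done += 1
    return (acc[0] / n_draws, acc[1] / n_draws)

# Consistent system: y = 2*u + 3*v for several (u, v) observations.
rows = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]
rhs = [2.0, 3.0, 5.0, 7.0, 8.0]
est = monte_carlo_lsq(rows, rhs)
```

For a consistent system every nonsingular subsystem returns the exact solution, so the average is exact; the interest of the method, as the abstract notes, lies in ill-posed or very high-dimensional problems where the full least-squares solve is intractable.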
Oliver Chikumbo
2012-01-01
Full Text Available A stand-level, multiobjective evolutionary algorithm (MOEA) for determining a set of efficient thinning regimes satisfying two objectives, that is, value production for sawlog harvesting and volume production for a pulpwood market, was successfully demonstrated for a Eucalyptus fastigata trial in Kaingaroa Forest, New Zealand. The MOEA approximated the set of efficient thinning regimes (with a discontinuous Pareto front) by employing a ranking scheme developed by Fonseca and Fleming (1993), which was a Pareto-based ranking (a.k.a. Multiobjective Genetic Algorithm, MOGA). In this paper we solve the same problem using an improved version of a fitness sharing Pareto ranking algorithm (a.k.a. Nondominated Sorting Genetic Algorithm, NSGA-II) originally developed by Srinivas and Deb (1994) and examine the results. Our findings indicate that NSGA-II approximates the entire Pareto front whereas MOGA only determines a subdomain of the Pareto points.
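Both MOGA and NSGA-II rank candidate solutions by Pareto dominance. The basic nondominated filter at the core of such rankings can be sketched as follows (minimization of both objectives assumed; the points are illustrative):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the points not dominated by any other point (the Pareto front)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 3.0)]
front = nondominated(pts)
```

NSGA-II repeats this filter: the first front is removed, the filter is applied to the remainder to get the second front, and so on, which is the nondominated sorting the abstract refers to.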
Wurm, Patrick; Ulz, Manfred H.
2016-10-01
The aim of this work is to provide an improved information exchange in hierarchical atomistic-to-continuum settings by applying stochastic approximation methods. For this purpose a typical model belonging to this class is chosen and enhanced. On the macroscale of this particular two-scale model, the balance equations of continuum mechanics are solved using a nonlinear finite element formulation. The microscale, on which a canonical ensemble of statistical mechanics is simulated using molecular dynamics, replaces a classic material formulation. The constitutive behavior is computed on the microscale by computing time averages. However, these time averages are thermal noise-corrupted as the microscale may practically not be tracked for a sufficiently long period of time due to limited computational resources. This noise prevents the model from a classical convergence behavior and creates a setting that shows remarkable resemblance to iteration schemes known from stochastic approximation. This resemblance justifies the use of two averaging strategies known to improve the convergence behavior in stochastic approximation schemes under certain, fairly general, conditions. To demonstrate the effectiveness of the proposed strategies, three numerical examples are studied.
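The averaging strategies referred to are of the kind used in stochastic approximation schemes such as Robbins-Monro with Polyak-Ruppert averaging of the iterates. A minimal sketch on a noisy root-finding problem; the objective, step-size schedule, and noise level are toy choices, not those of the two-scale model:

```python
import random

def sa_with_averaging(grad_noisy, x0, n_steps=5000, seed=42):
    """Robbins-Monro iteration with Polyak-Ruppert averaging: the returned
    average of the noisy iterates converges faster than the last iterate."""
    rng = random.Random(seed)
    x = x0
    avg = 0.0
    for t in range(1, n_steps + 1):
        step = 1.0 / t ** 0.7          # slowly decaying step size
        x -= step * grad_noisy(x, rng)
        avg += (x - avg) / t           # running average of the iterates
    return x, avg

# Toy problem: find the root of f(x) = x - 2 observed with additive noise,
# mimicking a thermal-noise-corrupted time average from the microscale.
noisy = lambda x, rng: (x - 2.0) + rng.gauss(0.0, 1.0)
last, averaged = sa_with_averaging(noisy, x0=10.0)
```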
Christina Pfeiffer
2015-09-01
Full Text Available Multivariate genetic evaluation in modern dairy cattle breeding programs has become important over the last decades. The simultaneous estimation of all production and functional traits is still demanding. Different meta-models are used to overcome several constraints. The aim of this study was to conduct an approximate multivariate two-step procedure applied to de-regressed breeding values and yield deviations of five fertility traits of Austrian Pinzgau cattle and to compare results with routinely estimated breeding values. The approximate two-step procedure applied to de-regressed breeding values performed better than the procedure applied to yield deviations. Spearman rank correlations for all animals, sires and cows were between 0.996 and 0.999 for the procedure applied to de-regressed breeding values and between 0.866 and 0.995 for the procedure applied to yield deviations. The results are encouraging to move from the currently used selection index in routine genetic evaluation towards an approximate two-step procedure applied to de-regressed breeding values.
Protein microarrays for systems biology
Lina Yang; Shujuan Guo; Yang Li; Shumin Zhou; Shengce Tao
2011-01-01
Systems biology holds the key to understanding biological systems on a system level. It eventually holds the key to the treatment and cure of complex diseases such as cancer, diabetes, obesity, mental disorders, and many others. The '-omics' technologies, such as genomics, transcriptomics, proteomics, and metabonomics, are among the major driving forces of systems biology. Featured as high-throughput, miniaturized, and capable of parallel analysis, protein microarrays have already become an important technology platform for systems biology. In this review, we will focus on the system-level or global analysis of biological systems using protein microarrays. Four major types of protein microarrays will be discussed: proteome microarrays, antibody microarrays, reverse-phase protein arrays, and lectin microarrays. We will also discuss the challenges and future directions of protein microarray technologies and their applications for systems biology. We strongly believe that protein microarrays will soon become an indispensable and invaluable tool for systems biology.
Microarray Applications in Cancer Research
Kim, Il-Jin; Kang, Hio Chung
2004-01-01
DNA microarray technology permits simultaneous analysis of thousands of DNA sequences for genomic research and diagnostics applications. Microarray technology represents the most recent and exciting advance in the application of hybridization-based technology for biological sciences analysis. This review focuses on the classification (oligonucleotide vs. cDNA) and application (mutation-genotyping vs. gene expression) of microarrays. Oligonucleotide microarrays can be used both in mutation-genotyping and gene expression analysis, while cDNA microarrays can only be used in gene expression analysis. We review microarray mutation analysis, including examining the use of three oligonucleotide microarrays developed in our laboratory to determine mutations in RET, β-catenin and K-ras genes. We also discuss the use of the Affymetrix GeneChip in mutation analysis. We review microarray gene expression analysis, including the classifying of such studies into four categories: class comparison, class prediction, class discovery and identification of biomarkers. PMID:20368836
Rodríguez-Cruz, Maricela; Coral-Vázquez, Ramón M.; Hernández-Stengele, Gabriel; Sánchez, Raúl; Salazar, Emmanuel; Sanchez-Muñoz, Fausto; Encarnación-Guevara, Sergio; Ramírez-Salcedo, Jorge
2013-01-01
The mammary gland (MG) undergoes functional and metabolic changes during the transition from pregnancy to lactation, possibly by regulation of conserved genes. The objective was to elucidate orthologous genes, chromosome clusters and putative conserved transcriptional modules during MG development. We analyzed expression of 22,000 transcripts using murine microarrays and RNA samples of MG from virgin, pregnant, and lactating rats by cross-species hybridization. We identified 521 transcripts differentially expressed; upregulated in early (78%) and midpregnancy (89%) and early lactation (64%), but downregulated in mid-lactation (61%). Putative orthologous genes were identified. We mapped the altered genes to orthologous chromosomal locations in human and mouse. Eighteen sets of conserved genes associated with key cellular functions were revealed and a conserved transcription factor binding site search suggested possible coregulation among all eight block sets of genes. This study demonstrates that the use of heterologous array hybridization for screening of orthologous gene expression from rat revealed sets of conserved genes arranged in chromosomal order implicated in signaling pathways and functional ontology. Results demonstrate the utilization power of comparative genomics and prove the feasibility of using rodent microarrays to identify putative coexpressed orthologous genes involved in the control of human mammary gland development. PMID:24288657
Mandal, Monalisa; Mukhopadhyay, Anirban
2014-01-01
The purpose of feature selection is to identify the relevant and non-redundant features from a dataset. In this article, the feature selection problem is organized as a graph-theoretic problem where a feature-dissimilarity graph is shaped from the data matrix. The nodes represent features and the edges represent their dissimilarity. Both nodes and edges are given weight according to the feature's relevance and dissimilarity among the features, respectively. The problem of finding relevant and non-redundant features is then mapped into densest subgraph finding problem. We have proposed a multiobjective particle swarm optimization (PSO)-based algorithm that optimizes average node-weight and average edge-weight of the candidate subgraph simultaneously. The proposed algorithm is applied for identifying relevant and non-redundant disease-related genes from microarray gene expression data. The performance of the proposed method is compared with that of several other existing feature selection techniques on different real-life microarray gene expression datasets.
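The two objectives optimized by the multiobjective PSO, average node weight (feature relevance) and average edge weight (pairwise dissimilarity) of a candidate subgraph, can be sketched as simple scoring functions over an assumed weighted feature graph; the weights below are invented for illustration:

```python
from itertools import combinations

def subgraph_scores(features, node_w, edge_w):
    """Average node weight (relevance) and average edge weight (dissimilarity)
    of the subgraph induced by a candidate feature subset."""
    nodes = list(features)
    avg_node = sum(node_w[f] for f in nodes) / len(nodes)
    pairs = list(combinations(sorted(nodes), 2))
    avg_edge = sum(edge_w[p] for p in pairs) / len(pairs) if pairs else 0.0
    return avg_node, avg_edge

# Illustrative weights: relevance per feature, dissimilarity per feature pair.
node_w = {"g1": 0.9, "g2": 0.8, "g3": 0.4}
edge_w = {("g1", "g2"): 0.7, ("g1", "g3"): 0.2, ("g2", "g3"): 0.3}
rel, dis = subgraph_scores({"g1", "g2"}, node_w, edge_w)
```

A subset scoring high on both objectives corresponds to relevant, mutually non-redundant features; here {g1, g2} beats the full set on both averages, which is exactly the trade-off the PSO searches over.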
Kriebitzsch, Carsten; Verlinden, Lieve; Eelen, Guy; Tan, Biauw Keng; Van Camp, Mark; Bouillon, Roger; Verstuyf, Annemieke
2009-09-01
The active form of vitamin D3, 1alpha,25-dihydroxyvitamin D3 [1,25(OH)2D3], is an important regulator of bone metabolism, calcium and phosphate homeostasis but also has potent antiproliferative and pro-differentiating effects on a wide variety of cell types. To identify key genes that are (directly) regulated by 1,25(OH)2D3, a large number of microarray studies have been performed on different types of cancer cells (prostate, breast, ovarian, colorectal, squamous cell carcinoma and leukemia). The variety of target genes identified through these studies reflects the pleiotropic action of 1,25(OH)2D3. Common cellular processes targeted by 1,25(OH)2D3 in the different cancer cell lines include cell cycle progression, apoptosis, cellular adhesion, oxidative stress, immune function and steroid metabolism. Upon comparison of the lists of genes regulated by 1,25(OH)2D3 in the different microarray studies, only a small set of individual genes were commonly regulated, among which are included 24-hydroxylase, growth arrest and DNA-damage-inducible protein, cathelicidin antimicrobial peptide and multiple cyclins.
Salmin, Vadim V.
2017-01-01
Low-thrust flight mechanics is a relatively new chapter of space flight mechanics that encompasses the full set of trajectory optimization problems, motion control laws, and spacecraft design parameters. Tasks associated with accounting for additional factors in mathematical models of spacecraft motion become increasingly important, as do additional restrictions on thrust vector control. The growing complexity of the mathematical models of controlled motion leads to difficulties in solving optimization problems. The author proposes methods for finding approximately optimal controls and for evaluating their optimality on the basis of analytical solutions. These methods rest on the principle of extending the class of admissible states and controls and on sufficient conditions for an absolute minimum. The estimation procedures developed make it possible to determine how close a found solution is to the optimum and to indicate ways to improve it. The author describes estimation procedures for approximately optimal control laws in space flight mechanics problems, in particular the optimization of low-thrust flights between circular non-coplanar orbits, the optimization of the control angle and trajectory of a spacecraft during interorbital flights, and the optimization of low-thrust flights between arbitrary elliptical Earth satellite orbits.
Gawand, Hemangi Laxman [Homi Bhabha National Institute, Computer Section, BARC, Mumbai (India); Bhattacharjee, A. K. [Reactor Control Division, BARC, Mumbai (India); Roy, Kallol [BHAVINI, Kalpakkam (India)
2017-04-15
In industrial plants such as nuclear power plants, system operations are performed by embedded controllers orchestrated by Supervisory Control and Data Acquisition (SCADA) software. A targeted attack (also termed a control aware attack) on the controller/SCADA software can lead a control system to operate in an unsafe mode or sometimes to complete shutdown of the plant. Such malware attacks can result in tremendous cost to the organization for recovery, cleanup, and maintenance activity. SCADA systems in operational mode generate huge log files. These files are useful in analysis of plant behavior and diagnostics during an ongoing attack. However, they are bulky and difficult to inspect manually. Data mining techniques such as least squares approximation and computational methods can be used in the analysis of logs and to take proactive actions when required. This paper explores methodologies and algorithms to develop an effective monitoring scheme against control aware cyber attacks. It also explains soft computation techniques, such as the computational geometric method and least squares approximation, that can be effective in monitor design. The paper demonstrates the effectiveness of such diagnostic monitoring through attack simulations on a four-tank model, which are then diagnosed using these computational techniques. Cyber security of instrumentation and control systems used in nuclear power plants is of paramount importance, and such systems could hence be a target of control aware attacks.
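As a minimal illustration of the least squares idea mentioned above (not the paper's actual monitoring scheme), one can fit a linear trend to a logged process variable and flag entries whose residuals are abnormally large. The tank-level data and the threshold below are invented for the example.

```python
import numpy as np

def least_squares_trend(t, values):
    """Fit values ~ a*t + b by ordinary least squares."""
    A = np.column_stack([t, np.ones_like(t)])
    (a, b), *_ = np.linalg.lstsq(A, values, rcond=None)
    return a, b

def flag_anomalies(t, values, threshold):
    """Flag log entries whose deviation from the fitted trend exceeds the threshold."""
    a, b = least_squares_trend(t, values)
    residuals = values - (a * t + b)
    return np.abs(residuals) > threshold

# Synthetic tank-level log: a linear fill with one tampered reading at index 7.
t = np.arange(10.0)
levels = 2.0 * t + 1.0
levels[7] += 5.0
flags = flag_anomalies(t, levels, threshold=2.0)
```

In an actual monitor the fitted model would come from logs of known-good operation, so that a control aware attack that drives the plant away from its expected trajectory shows up as a run of large residuals.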
The Karlqvist approximation revisited
Tannous, C
2015-01-01
The Karlqvist approximation, which marks the historical beginning of magnetic recording head theory, is reviewed and compared with approaches of increasing sophistication (Green's function, Fourier, and conformal mapping methods) that obey the Sommerfeld edge condition at angular points and lead to exact results.
Lin, Guoxing
2016-01-01
Anomalous diffusion exists widely in polymer and biological systems. Pulsed field gradient (PFG) techniques have been increasingly used to study anomalous diffusion in NMR and MRI. However, the interpretation of PFG anomalous diffusion is complicated. Moreover, there is no exact signal attenuation expression based on fractional derivatives for PFG anomalous diffusion that includes the finite gradient pulse width effect. In this paper, a new method, a Mainardi-Luchko-Pagnini (MLP) phase distribution approximation, is proposed to describe PFG fractional diffusion. The MLP phase distribution is a non-Gaussian phase distribution. From the fractional diffusion equation based on fractional derivatives in both real space and phase space, the obtained probability distribution function is an MLP distribution. The MLP distribution leads to a Mittag-Leffler function based PFG signal attenuation rather than the exponential or stretched exponential attenuation that is obtained from a Gaussian phase distribution (GPD).
Jha B.K.
2015-02-01
Full Text Available This paper investigates the role of the induced magnetic field on the transient natural convection flow of an electrically conducting, incompressible, viscous fluid in a vertical channel formed by two infinite vertical parallel plates. The transient flow formation inside the channel is due to sudden asymmetric heating of the channel walls. The time-dependent momentum, energy and magnetic induction equations are solved semi-analytically using the Laplace transform technique along with the Riemann-sum approximation method. The solutions obtained are validated by comparison with closed form solutions derived separately for the steady state and also by the implicit finite difference method. Graphical results for the temperature, velocity, induced magnetic field, current density, and skin-friction based on the semi-analytical solutions are presented and discussed.
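The Riemann-sum approximation used here for inverting Laplace transforms evaluates the Bromwich integral as an alternating series along a vertical contour. The sketch below shows the generic formula applied to a known transform pair (F(s) = 1/(s+1), whose inverse is e^(-t)); the contour parameter ct = 4.7 is a commonly quoted choice for this family of methods, not a value taken from this paper.

```python
import numpy as np

def riemann_sum_inverse_laplace(F, t, N=10000, ct=4.7):
    """Approximate the inverse Laplace transform f(t) of F(s) by the Riemann-sum
    (Fourier-series) formula
        f(t) ~ (e^{ct}/t) * [ F(c)/2 + sum_{k=1}^{N} (-1)^k Re F(c + i*k*pi/t) ],
    where c = ct/t; the choice of ct controls the discretization error (~ e^{-2*ct})."""
    c = ct / t
    k = np.arange(1, N + 1)
    series = np.sum(((-1.0) ** k) * np.real(F(c + 1j * k * np.pi / t)))
    return (np.exp(ct) / t) * (0.5 * np.real(F(c)) + series)

# Known pair: L^{-1}[1/(s+1)](t) = e^{-t}; evaluate at t = 1.
f_at_1 = riemann_sum_inverse_laplace(lambda s: 1.0 / (s + 1.0), t=1.0)
```

In the paper's setting F would be the transformed velocity, temperature, or induced-field profile at a given channel location rather than a closed-form rational function.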
Lin, Guoxing
2016-11-01
Anomalous diffusion exists widely in polymer and biological systems. Pulsed-field gradient (PFG) techniques have been increasingly used to study anomalous diffusion in nuclear magnetic resonance and magnetic resonance imaging. However, the interpretation of PFG anomalous diffusion is complicated. Moreover, the exact signal attenuation expression including the finite gradient pulse width effect has not been obtained based on fractional derivatives for PFG anomalous diffusion. In this paper, a new method, a Mainardi-Luchko-Pagnini (MLP) phase distribution approximation, is proposed to describe PFG fractional diffusion. The MLP phase distribution is a non-Gaussian phase distribution. From the fractional derivative model, both the probability density function (PDF) of a spin in real space and the PDF of the spin's accumulating phase shift in virtual phase space are MLP distributions. The MLP phase distribution leads to a Mittag-Leffler function based PFG signal attenuation, which differs significantly from the exponential attenuation for normal diffusion and from the stretched exponential attenuation for fractional diffusion based on the fractal derivative model. A complete signal attenuation expression E_α(-D_f b*_{α,β}) including the finite gradient pulse width effect was obtained, and it can handle all three types of PFG fractional diffusion. The result was also extended in a straightforward way to give a signal attenuation expression for fractional diffusion in PFG intramolecular multiple quantum coherence experiments, which has an n^β dependence upon the order of coherence, different from the familiar n^2 dependence in normal diffusion. The results obtained in this study are in agreement with those from the literature. The results in this paper provide a set of new, convenient approximation formalisms for interpreting complex PFG fractional diffusion experiments.
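A Mittag-Leffler-based attenuation can be evaluated numerically from the function's power series. The sketch below is illustrative only: the series evaluation is adequate for moderate arguments, and the single-argument form E_α(-D_f·b) is a simplification of the paper's full expression with the finite-pulse-width b-term.

```python
import math

def mittag_leffler(alpha, z, n_terms=100):
    """One-parameter Mittag-Leffler function E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1),
    via its power series (fine for moderate |z|; asymptotics are needed for large |z|)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))

def pfg_attenuation(alpha, D_f, b):
    """Sketch of a Mittag-Leffler-type PFG signal attenuation E = E_alpha(-D_f * b).
    For alpha = 1 this reduces to the ordinary exponential attenuation exp(-D * b)."""
    return mittag_leffler(alpha, -D_f * b)
```

The alpha = 1 limit recovering the exponential is the sanity check that connects the fractional formalism back to normal diffusion.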
"Harshlighting" small blemishes on microarrays
Wittkowski Knut M
2005-03-01
Full Text Available Abstract Background Microscopists are familiar with many blemishes that fluorescence images can have due to dust and debris, glass flaws, uneven distribution of fluids or surface coatings, etc. Microarray scans show similar artefacts, which affect the analysis, particularly when one tries to detect subtle changes. However, most blemishes are hard to find by the unaided eye, particularly in high-density oligonucleotide arrays (HDONAs. Results We present a method that harnesses the statistical power provided by having several HDONAs available, which are obtained under similar conditions except for the experimental factor. This method "harshlights" blemishes and renders them evident. We find empirically that about 25% of our chips are blemished, and we analyze the impact of masking them on screening for differentially expressed genes. Conclusion Experiments attempting to assess subtle expression changes should be carefully screened for blemishes on the chips. The proposed method provides investigators with a novel robust approach to improve the sensitivity of microarray analyses. By utilizing topological information to identify and mask blemishes prior to model based analyses, the method prevents artefacts from confounding the process of background correction, normalization, and summarization.
Shea, April; Wolcott, Mark; Daefler, Simon; Rozak, David A
2012-01-01
Phenotype microarrays nicely complement traditional genomic, transcriptomic, and proteomic analysis by offering opportunities for researchers to ground microbial systems analysis and modeling in a broad yet quantitative assessment of the organism's physiological response to different metabolites and environments. Biolog phenotype assays achieve this by coupling tetrazolium dyes with minimally defined nutrients to measure the impact of hundreds of carbon, nitrogen, phosphorous, and sulfur sources on redox reactions that result from compound-induced effects on the electron transport chain. Over the years, we have used Biolog's reproducible and highly sensitive assays to distinguish closely related bacterial isolates, to understand their metabolic differences, and to model their metabolic behavior using flux balance analysis. This chapter describes Biolog phenotype microarray system components, reagents, and methods, particularly as they apply to bacterial identification, characterization, and metabolic analysis.
Approximate Representations and Approximate Homomorphisms
Moore, Cristopher
2010-01-01
Approximate algebraic structures play a defining role in arithmetic combinatorics and have found remarkable applications to basic questions in number theory and pseudorandomness. Here we study approximate representations of finite groups: functions f: G -> U_d such that Pr[f(xy) = f(x) f(y)] is large or, more generally, such that Exp_{x,y} ||f(xy) - f(x)f(y)||^2 is small, where x and y are uniformly random elements of the group G and U_d denotes the unitary group of degree d. We bound these quantities in terms of the ratio d / d_min, where d_min is the dimension of the smallest nontrivial representation of G. As an application, we bound the extent to which a function f: G -> H can be an approximate homomorphism, where H is another finite group. We show that if H's representations are significantly smaller than G's, no such f can be much more homomorphic than a random function. We interpret these results as showing that if G is quasirandom, that is, if d_min is large, then G cannot be embedded in a small number of dimensions.
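For intuition, the quantity Pr[f(xy) = f(x)f(y)] can be measured exhaustively for a small abelian group; here G = Z_8 and d = 1, so U_d is just the unit complex numbers. The exact character and the one-point sign flip are invented examples, not taken from the paper.

```python
import cmath

def homomorphism_agreement(f, n):
    """Exhaustive Pr[f(x+y) = f(x)*f(y)] over Z_n (group operation: addition mod n),
    with a small tolerance for floating-point round-off."""
    hits = sum(1 for x in range(n) for y in range(n)
               if abs(f((x + y) % n) - f(x) * f(y)) < 1e-9)
    return hits / n ** 2

n = 8
chi = lambda x: cmath.exp(2j * cmath.pi * x / n)   # an exact character of Z_8
noisy = lambda x: -chi(x) if x == 3 else chi(x)     # perturb f at a single point
```

The character achieves agreement 1, while flipping the sign at a single group element already destroys agreement on 18 of the 64 pairs, which illustrates how rigid exact homomorphy is and why "approximate" versions of it are the interesting object.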
Hung, Jui-Hung; Weng, Zhiping
2017-03-01
Because there is no widely used software for analyzing RNA-seq data that has a graphical user interface, this protocol provides an example of analyzing microarray data using Babelomics. This analysis entails performing quantile normalization and then detecting differentially expressed genes associated with the transgenesis of a human oncogene c-Myc in mice. Finally, hierarchical clustering is performed on the differentially expressed genes using the Cluster program, and the results are visualized using TreeView.
AbIx: An Approach to Content-Based Approximate Query Processing in Peer-to-Peer Data Systems
Chao-Kun Wang; Jian-Min Wang; Jia-Guang Sun; Sheng-Fei Shi; Hong Gao
2007-01-01
In recent years there has been a significant interest in peer-to-peer (P2P) environments in the community of data management. However, almost all work, so far, is focused on exact query processing in current P2P data systems. The autonomy of peers also is not considered enough. In addition, the system cost is very high because the information publishing method of shared data is based on each document instead of document set. In this paper, abstract indices (AbIx) are presented to implement content-based approximate queries in centralized, distributed and structured P2P data systems. They can be used to search as few peers as possible while getting as many returns satisfying users' queries as possible, on the guarantee of high autonomy of peers. Also, abstract indices have low system cost, can improve the query processing speed, and support very frequent updates and the set information publishing method. In order to verify the effectiveness of abstract indices, a simulator of 10,000 peers with over 3 million documents is made, and several metrics are proposed. The experimental results show that abstract indices work well in various P2P data systems.
Ordered cones and approximation
Keimel, Klaus
1992-01-01
This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.
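Korovkin's theorem is commonly illustrated with the Bernstein operators: a sequence of positive linear operators on C[0,1] converges to the identity as soon as it converges on the three test functions 1, t and t². A minimal sketch of this standard example (textbook material, not specific to this book):

```python
import math

def bernstein(f, n, x):
    """n-th Bernstein polynomial B_n(f)(x) = sum_k f(k/n) * C(n,k) * x^k * (1-x)^(n-k),
    a positive linear operator on C[0, 1]."""
    return sum(f(k / n) * math.comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))
```

On the Korovkin test functions the operator behaves exactly as the theorem requires: B_n reproduces 1 and t exactly, and B_n(t²)(x) = x² + x(1-x)/n converges to x², which by Korovkin's theorem forces B_n(f) -> f uniformly for every continuous f.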
Nefedov, Maxim
2015-01-01
The hadroproduction of prompt isolated diphotons at high energies is studied in the NLO^star framework of the Parton Reggeization Approach. The real part of the NLO corrections is computed, and the procedure for the subtraction of double counting between real parton emissions in the hard-scattering matrix element and the unintegrated PDF is constructed for amplitudes with Reggeized quarks in the initial state. The matrix element of the important NNLO subprocess RR -> 2 gamma with full dependence on the transverse momenta of the initial-state partons is obtained. We compare the obtained numerical results with diphoton spectra measured at the Tevatron and the LHC and find good agreement between our predictions and the experimental data at high values of the diphoton transverse momentum, p_T, especially for p_T larger than the diphoton invariant mass, M. In this multi-Regge kinematics region, the NLO correction is strongly suppressed, demonstrating the self-consistency of the Parton Reggeization Approach.
Chaotic mixer improves microarray hybridization.
McQuain, Mark K; Seale, Kevin; Peek, Joel; Fisher, Timothy S; Levy, Shawn; Stremler, Mark A; Haselton, Frederick R
2004-02-15
Hybridization is an important aspect of microarray experimental design which influences array signal levels and the repeatability of data within an array and across different arrays. Current methods typically require 24 h and use target inefficiently. In these studies, we compare hybridization signals obtained in conventional static hybridization, which depends on diffusional target delivery, with signals obtained in a dynamic hybridization chamber, which employs a fluid mixer based on chaotic advection theory to deliver targets across a conventional glass slide array. Microarrays were printed with a pattern of 102 identical probe spots containing a 65-mer oligonucleotide capture probe. Hybridization of a 725-bp fluorescently labeled target was used to measure average target hybridization levels, local signal-to-noise ratios, and array hybridization uniformity. Dynamic hybridization for 1 h with 1 or 10 ng of target DNA increased hybridization signal intensities approximately threefold over a 24-h static hybridization. Similarly, a 10- or 60-min dynamic hybridization of 10 ng of target DNA increased hybridization signal intensities fourfold over a 24-h static hybridization. In time course studies, static hybridization reached a maximum within 8 to 12 h using either 1 or 10 ng of target. In time course studies using the dynamic hybridization chamber, hybridization using 1 ng of target increased to a maximum at 4 h, and that using 10 ng of target did not vary over the time points tested. In comparison to static hybridization, dynamic hybridization increased the signal-to-noise ratios threefold and reduced spot-to-spot variation twofold. Therefore, we conclude that dynamic hybridization based on a chaotic mixer design improves both the speed of hybridization and the maximum level of hybridization while increasing signal-to-noise ratios and reducing spot-to-spot variation.
Background adjustment of cDNA microarray images by Maximum Entropy distributions.
Argyropoulos, Christos; Daskalakis, Antonis; Nikiforidis, George C; Sakellaropoulos, George C
2010-08-01
Many empirical studies have demonstrated the exquisite sensitivity of both traditional and novel statistical and machine intelligence algorithms to the method of background adjustment used to analyze microarray datasets. In this paper we develop a statistical framework that approaches background adjustment as a classic stochastic inverse problem, whose noise characteristics are given in terms of Maximum Entropy distributions. We derive analytic closed form approximations to the combined problem of estimating the magnitude of the background in microarray images and adjusting for its presence. The proposed method reduces standardized measures of log expression variability across replicates in situations of known differential and non-differential gene expression without increasing the bias. Additionally, it results in computationally efficient procedures for estimation and learning based on sufficient statistics and can filter out spot measures with intensities that are numerically close to the background level resulting in a noise reduction of about 7%.
Discovering biological progression underlying microarray samples.
Peng Qiu
2011-04-01
Full Text Available In biological systems that undergo processes such as differentiation, a clear concept of progression exists. We present a novel computational approach, called Sample Progression Discovery (SPD), to discover patterns of biological progression underlying microarray gene expression data. SPD assumes that individual samples of a microarray dataset are related by an unknown biological process (i.e., differentiation, development, cell cycle, disease progression), and that each sample represents one unknown point along the progression of that process. SPD aims to organize the samples in a manner that reveals the underlying progression and to simultaneously identify subsets of genes that are responsible for that progression. We demonstrate the performance of SPD on a variety of microarray datasets that were generated by sampling a biological process at different points along its progression, without providing SPD any information about the underlying process. When applied to a cell cycle time series microarray dataset, SPD was not provided any prior knowledge of the samples' time order or of which genes are cell-cycle regulated, yet it recovered the correct time order and identified many genes that have been associated with the cell cycle. When applied to B-cell differentiation data, SPD recovered the correct order of the stages of normal B-cell differentiation and the linkage between preB-ALL tumor cells and their cell of origin, preB. When applied to mouse embryonic stem cell differentiation data, SPD uncovered a landscape of ESC differentiation into various lineages and genes that represent both generic and lineage-specific processes. When applied to a prostate cancer microarray dataset, SPD identified gene modules that reflect a progression consistent with disease stages. SPD may be best viewed as a novel tool for synthesizing biological hypotheses because it provides a likely biological progression underlying a microarray dataset.
Protein microarray applications: Autoantibody detection and posttranslational modification.
Atak, Apurva; Mukherjee, Shuvolina; Jain, Rekha; Gupta, Shabarni; Singh, Vedita Anand; Gahoi, Nikita; K P, Manubhai; Srivastava, Sanjeeva
2016-10-01
The discovery of DNA microarrays was a major milestone in genomics; however, it could not adequately predict the structure or dynamics of underlying protein entities, which are the ultimate effector molecules in a cell. Protein microarrays allow simultaneous study of thousands of proteins/peptides, and various advancements in array technologies have made this platform suitable for several diagnostic and functional studies. Antibody arrays enable researchers to quantify the abundance of target proteins in biological fluids and assess PTMs by using the antibodies. Protein microarrays have been used to assess protein-protein interactions, protein-ligand interactions, and autoantibody profiling in various disease conditions. Here, we summarize different microarray platforms with focus on its biological and clinical applications in autoantibody profiling and PTM studies. We also enumerate the potential of tissue microarrays to validate findings from protein arrays as well as other approaches, highlighting their significance in proteomics.
Microintaglio Printing for Soft Lithography-Based in Situ Microarrays
Manish Biyani
2015-07-01
Full Text Available Advances in lithographic approaches to fabricating bio-microarrays have been extensively explored over the last two decades. However, the need for pattern flexibility, a high density, a high resolution, affordability and on-demand fabrication is promoting the development of unconventional routes for microarray fabrication. This review highlights the development and uses of a new molecular lithography approach, called "microintaglio printing technology", for large-scale bio-microarray fabrication using a microreactor array (µRA)-based chip consisting of uniformly-arranged, femtoliter-size µRA molds. In this method, a single-molecule-amplified DNA microarray pattern is self-assembled onto a µRA mold and subsequently converted into a messenger RNA or protein microarray pattern by simultaneously producing and transferring (immobilizing) a messenger RNA or a protein from a µRA mold to a glass surface. Microintaglio printing allows the self-assembly and patterning of in situ-synthesized biomolecules into high-density (kilo-giga-density), ordered arrays on a chip surface with µm-order precision. This holistic aim, which is difficult to achieve using conventional printing and microarray approaches, is expected to revolutionize and reshape proteomics. This review is not written comprehensively, but rather substantively, highlighting the versatility of microintaglio printing for developing a prerequisite platform for microarray technology for the postgenomic era.
Textoris, Julien; Ivorra, Delphine; Ben Amara, Amira; Sabatier, Florence; Ménard, Jean-Pierre; Heckenroth, Hélène; Bretelle, Florence; Mege, Jean-Louis
2013-01-01
Preeclampsia is a placental disease characterized by hypertension and proteinuria in pregnant women, and it is associated with a high maternal and neonatal morbidity. However, circulating biomarkers that are able to predict the prognosis of preeclampsia are lacking. Thirty-eight women were included in the current study. They consisted of 19 patients with preeclampsia (13 with severe preeclampsia and 6 with non-severe preeclampsia) and 19 gestational age-matched women with normal pregnancies as controls. We measured circulating factors that are associated with the coagulation pathway (including fibrinogen, fibronectin, factor VIII, antithrombin, protein S and protein C), endothelial activation (such as soluble endoglin and CD146), and the release of total and platelet-derived microparticles. These markers enabled us to discriminate the preeclampsia condition from a normal pregnancy but were not sufficient to distinguish severe from non-severe preeclampsia. We then used a microarray to study the transcriptional signature of blood samples. Preeclampsia patients exhibited a specific transcriptional program distinct from that of the control group of women. Interestingly, we also identified a severity-related transcriptional signature. Functional annotation of the upmodulated signature in severe preeclampsia highlighted two main functions related to "ribosome" and "complement". Finally, we identified 8 genes that were specifically upmodulated in severe preeclampsia compared with non-severe preeclampsia and the normotensive controls. Among these genes, we identified VSIG4 as a potential diagnostic marker of severe preeclampsia. The determination of this gene may improve the prognostic assessment of severe preeclampsia.
Finis, Katharina; Sültmann, Holger; Ruschhaupt, Markus; Buness, Andreas; Helmchen, Birgit; Kuner, Ruprecht; Gross, Marie-Luise; Fink, Bernd; Schirmacher, Peter; Poustka, Annemarie; Berger, Irina
2006-03-01
To characterize the gene expression profile and determine potential diagnostic markers and therapeutic targets in pigmented villonodular synovitis (PVNS), gene expression patterns in 11 patients with PVNS, 18 patients with rheumatoid arthritis (RA), and 19 patients with osteoarthritis (OA) were investigated using genome-wide complementary DNA microarrays. Validation of differentially expressed genes was performed by real-time quantitative polymerase chain reaction and immunohistochemical analysis on tissue arrays (80 patients with PVNS, 51 patients with RA, and 20 patients with OA). The gene expression profile in PVNS was clearly distinct from those in RA and OA. One hundred forty-one up-regulated genes and 47 down-regulated genes were found in PVNS compared with RA, and 153 up-regulated genes and 89 down-regulated genes were found in PVNS compared with OA (fold change ≥ 1.5). Genes up-regulated in PVNS were involved in apoptosis regulation, matrix degradation, and inflammation (ALOX5AP, ATP6V1B2, CD53, CHI3L1, CTSL, CXCR4, HSPA8, HSPCA, LAPTM5, MMP9, MOAP1, and SPP1). The gene expression signature in PVNS is similar to that of activated macrophages and is consistent with the local destructive course of the disease. The gene and protein expression patterns suggest that the ongoing proliferation in PVNS is sustained by apoptosis resistance. This result suggests the possibility of a potential novel therapeutic intervention against PVNS.
Approaches to Realizing Nature-Approximating Landscape Architecture in Cities
陈华生
2012-01-01
Nature-approximating landscape architecture is an important way to realize ecological gardens and represents the development trend of urban landscaping. This paper analyzes the definition and characteristics of nature-approximating landscaping and discusses approaches to realizing it in cities with regard to design, construction and management.
Approximate calculation of integrals
Krylov, V I
2006-01-01
A systematic introduction to the principal ideas and results of the contemporary theory of approximate integration, this volume approaches its subject from the viewpoint of functional analysis. In addition, it offers a useful reference for practical computations. Its primary focus lies in the problem of approximate integration of functions of a single variable, rather than the more difficult problem of approximate integration of functions of more than one variable.The three-part treatment begins with concepts and theorems encountered in the theory of quadrature. The second part is devoted to t
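A standard example of the quadrature formulas treated in the first part is an n-point Gauss-Legendre rule, which is exact for polynomials of degree up to 2n-1. The sketch below uses NumPy's tabulated nodes and weights rather than anything from the book itself.

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """Approximate the integral of f over [a, b] with an n-point Gauss-Legendre rule."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map nodes from [-1, 1] to [a, b]
    return 0.5 * (b - a) * float(np.sum(weights * f(x)))
```

A two-point rule already integrates cubics exactly, and ten points suffice for near machine-precision results on smooth integrands, which is the kind of quantitative order-of-approximation statement the book's theory makes precise.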
Compressive Sensing DNA Microarrays
Richard G. Baraniuk
2009-01-01
Full Text Available Compressive sensing microarrays (CSMs are DNA-based sensors that operate using group testing and compressive sensing (CS principles. In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM, each sensor responds to a set of targets. We study the problem of designing CSMs that simultaneously account for both the constraints from CS theory and the biochemistry of probe-target DNA hybridization. An appropriate cross-hybridization model is proposed for CSMs, and several methods are developed for probe design and CS signal recovery based on the new model. Lab experiments suggest that in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications in which only short hybridization times are allowed.
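The recovery step of a compressive sensing pipeline can be sketched with a generic sparse decoder. The example below uses Orthogonal Matching Pursuit on an invented identity-plus-Hadamard dictionary (a standard low-coherence pair); it is not the probe-design-aware method of the article, and the cross-hybridization model is omitted entirely.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: greedily recover a sparse x with y ~ Phi @ x."""
    residual = y.astype(float)
    support = []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(Phi.T @ residual)))  # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# A 16-"target" dictionary seen through 8 sensors: identity columns plus a
# normalized Hadamard basis; only 2 targets are present in the sample.
H = np.array([[1.0]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])   # Sylvester construction of an 8x8 Hadamard matrix
Phi = np.hstack([np.eye(8), H / np.sqrt(8)])
x_true = np.zeros(16)
x_true[2], x_true[5] = 5.0, -3.0
y = Phi @ x_true
x_hat = omp(Phi, y, sparsity=2)
```

The point of the CSM design problem is precisely that each row of Phi (one sensor spot responding to a set of targets) must also be realizable biochemically, which is the constraint the article couples to the CS theory.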
Rao, D V; Brunetti, A; Gigante, G E; Takeda, T; Itai, Y; Akatsuka, T
2002-01-01
A new approach is developed to estimate the geometrical factors, solid angle approximation and geometrical efficiency for a system with experimental arrangements using an X-ray tube and a secondary target as an excitation source, in order to produce nearly monoenergetic K alpha radiation to excite the sample. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced, and the optimum value is used for the experimental work.
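The on-axis solid angle of a circular collimator aperture has a simple closed form, which makes the qualitative behaviour described above easy to check (a generic geometry sketch with hypothetical dimensions, not the paper's full arrangement):

```python
import math

def cone_solid_angle(distance_mm, radius_mm):
    """Solid angle (sr) subtended by a circular aperture of the given
    radius at a point source on its axis at the given distance."""
    return 2.0 * math.pi * (1.0 - distance_mm / math.hypot(distance_mm, radius_mm))

# Moving the aperture away from the source shrinks the solid angle,
# and hence the geometrical efficiency Omega / (4 * pi).
for d in (10.0, 20.0, 40.0):
    omega = cone_solid_angle(d, radius_mm=2.0)
    print(d, omega, omega / (4.0 * math.pi))
```

At zero distance the aperture covers a full hemisphere (2π sr), and the solid angle falls off monotonically as the collimator is moved away, matching the trend studied in the paper.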
Biyani, Manish; Moriyasu, Junpei; Tanaka, Yoko; Sato, Shusuke; Ueno, Shingo; Ichiki, Takanori
2013-08-01
A simple and versatile approach to the simultaneous on-chip synthesis and printing of proteins has been studied for high-density protein microarray applications. The method used is based on the principle of intaglio printing using microengraved plates. Unlike conventional approaches that require multistep reactions for synthesizing proteins off the chip followed by printing using a robotic spotter, our approach demonstrates the following: (i) parallel and spotter-free printing of high-density protein microarrays directly from a type of DNA microarray and (ii) microcompartmentalization of cell-free coupled transcription/translation reaction and direct transferring of picoliter protein solution per spot to pattern microarrays of 25-100 µm features.
Development of a spot reliability evaluation score for DNA microarrays.
Matsumura, Yonehiro; Shimokawa, Kazuro; Hayashizaki, Yoshihide; Ikeo, Kazuho; Tateno, Yoshio; Kawai, Jun
2005-05-09
We developed a reliability index named SRED (Spot Reliability Evaluation Score for DNA microarrays) that represents the probability that the calibrated gene expression level from a DNA microarray differs by less than a factor of 2 from that of quantitative real-time polymerase chain reaction (qRT-PCR) assays, whose dynamic quantification range is treated statistically so as to be similar to that of the DNA microarray. To define the SRED score, two parameters, the reproducibility of the measurement value and the relative expression value, were selected from nine candidate parameters. The SRED score supplies the probability that the expression level in each spot of a microarray differs by less than a given factor from other expression profiling data, such as qRT-PCR. This score was applied to approximately 1,500,000 points of the expression profile in the RIKEN Expression Array Database.
The EADGENE Microarray Data Analysis Workshop (Open Access publication)
Jiménez-Marín Ángeles
2007-11-01
Microarray analyses have become an important tool in animal genomics. While their use is becoming widespread, there is still a lot of ongoing research regarding the analysis of microarray data. In the context of a European Network of Excellence, 31 researchers representing 14 research groups from 10 countries performed and discussed the statistical analyses of real and simulated 2-colour microarray data that were distributed among participants. The real data consisted of 48 microarrays from a disease challenge experiment in dairy cattle, while the simulated data consisted of 10 microarrays from a direct comparison of two treatments (dye-balanced). While there was broad agreement with regard to methods of microarray normalisation and significance testing, there were major differences with regard to quality control. The quality control approaches varied from none, through using statistical weights, to omitting a large number of spots or omitting entire slides. Surprisingly, these very different approaches gave quite similar results when applied to the simulated data, although not all participating groups analysed both real and simulated data. The workshop was very successful in facilitating interaction between scientists with a diverse background but a common interest in microarray analyses.
Rouzaud, C.; Gatuingt, F.; Hervé, G.; Dorival, O.
2017-03-01
Frequency-based methods were set up in order to circumvent the limits of classical finite element methods in fast dynamic simulations due to discretization. In this approach the dynamic loading is shifted into the frequency domain by FFT, treated by the Variational Theory of Complex Rays (VTCR), and the time response is then reconstructed through an IFFT. This strategy proved to be very efficient due to the very low CPU cost of the VTCR. However, in the case of a large loading spectrum this frequency-by-frequency approach can seriously degrade the computational performance of the strategy. This paper addresses this point by proposing the use of Padé approximants in order to limit the number of frequencies at which the response must be calculated. Padé approximation is applied to the overall VTCR system based on its frequency dependency. Finally, as simulations on a simple academic case and on a civil engineering structure show, this method is very efficient for interpolating the frequency response functions of a complex structure. This is a key point in preserving the efficiency of the complete VTCR strategy for transient dynamic problems.
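The core idea, fitting a rational function to a truncated series so that it remains accurate over a wider band, can be sketched generically (a standard [L/M] Padé construction from Taylor coefficients, demonstrated on exp(x); it is not the paper's VTCR-specific formulation):

```python
import math
import numpy as np

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c (c[k] multiplies
    x**k). Returns numerator and denominator coefficients in increasing
    powers, with the denominator normalised so that q[0] = 1."""
    # Denominator: solve sum_j q[j] * c[L + i - j] = 0 for i = 1..M.
    A = np.array([[c[L + i - j] if 0 <= L + i - j < len(c) else 0.0
                   for j in range(1, M + 1)] for i in range(1, M + 1)])
    b = -np.array([c[L + i] for i in range(1, M + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, b)))
    # Numerator follows by convolving q with the series.
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return p, q

# [2/2] Pade of exp(x): with the same five Taylor coefficients it tracks
# exp(x) noticeably better than the degree-4 Taylor polynomial itself.
c = [1.0 / math.factorial(k) for k in range(5)]
p, q = pade(c, 2, 2)
x = 1.0
pade_val = np.polynomial.polynomial.polyval(x, p) / np.polynomial.polynomial.polyval(x, q)
taylor_val = np.polynomial.polynomial.polyval(x, np.array(c))
print(abs(pade_val - math.e), abs(taylor_val - math.e))
```

This is why a handful of exactly computed frequencies, combined with a Padé model, can stand in for a dense frequency sweep.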
DNA Microarray-Based Diagnostics.
Marzancola, Mahsa Gharibi; Sedighi, Abootaleb; Li, Paul C H
2016-01-01
The DNA microarray technology is currently a useful biomedical tool which has been developed for a variety of diagnostic applications. However, the development pathway has not been smooth and the technology has faced some challenges. The reliability of the microarray data and also the clinical utility of the results in the early days were criticized. These criticisms added to the severe competition from other techniques, such as next-generation sequencing (NGS), impacting the growth of microarray-based tests in the molecular diagnostic market. Thanks to the advances in the underlying technologies as well as the tremendous effort offered by the research community and commercial vendors, these challenges have mostly been addressed. Nowadays, the microarray platform has achieved sufficient standardization and method validation as well as efficient probe printing, liquid handling and signal visualization. Integration of various steps of the microarray assay into a harmonized and miniaturized handheld lab-on-a-chip (LOC) device has been a goal for the microarray community. In this respect, notable progress has been achieved in coupling the DNA microarray with the liquid manipulation microsystem as well as the supporting subsystem that will generate the stand-alone LOC device. In this chapter, we discuss the major challenges that microarray technology has faced in its almost two decades of development and also describe the solutions to overcome the challenges. In addition, we review the advancements of the technology, especially the progress toward developing the LOC devices for DNA diagnostic applications.
Triple-target microarray experiments: a novel experimental strategy
Cooke Howard J
2004-02-01
Background: High-throughput, parallel gene expression analysis by means of microarray technology has become a widely used technique in recent years. There are currently two main dye-labelling strategies for microarray studies based on custom-spotted cDNA or oligonucleotide arrays: (I) dye-labelling of a single target sample with a particular dye, followed by subsequent hybridisation to a single microarray slide; (II) dye-labelling of two different target samples with two different dyes, followed by subsequent co-hybridisation to a single microarray slide. The two dyes most frequently used for either method are Cy3 and Cy5. We propose and evaluate a novel experimental set-up utilising three differently labelled targets co-hybridised to one microarray slide. In addition to Cy3 and Cy5, this incorporates Alexa 594 as a third dye-label. We evaluate this approach in line with current data processing and analysis techniques for microarrays, and run separate analyses on Alexa 594 used in single-target, dual-target and the intended triple-target experiment set-ups (a total of 18 microarray slides). We follow this by pointing out practical applications and suitable analysis methods, and conclude that triple-target microarray experiments can add value to microarray research by reducing material costs for arrays and related processes, and by increasing the number of options for pragmatic experiment design. Results: The addition of Alexa 594 as a dye-label for an additional, third, target sample works within the framework of more commonplace Cy5/Cy3-labelled target sample combinations. Standard normalisation methods are still applicable, and the resulting data can be expected to allow identification of expression differences in a biological experiment, given sufficient levels of biological replication (as is necessary for most microarray experiments). Conclusion: The use of three dye-labelled target samples can be a valuable addition to the standard
Seung Oh Lee
2013-10-01
Collection and investigation of flood information are essential to understanding the nature of floods, but this has proved difficult in data-poor environments, or in developing or under-developed countries, due to economic and technological limitations. The development of remote sensing data, GIS, and modeling techniques has, therefore, provided useful tools for the analysis of the nature of floods. Accordingly, this study attempts to estimate flood discharge using the generalized likelihood uncertainty estimation (GLUE) methodology and a 1D hydraulic model, with remote sensing data and topographic data, under the assumed condition that there is no gauge station on the Missouri River, Nebraska, and the Wabash River, Indiana, in the United States. The results show that the use of Landsat leads to a better discharge approximation on a large-scale reach than on a small-scale one. Discharge approximation using GLUE depended on the selection of likelihood measures; consideration of the physical conditions in the study reaches could, therefore, contribute to an appropriate selection of informal likelihood measures. The river discharge assessed using Landsat imagery and the GLUE methodology could be useful in supplementing flood information for flood risk management at the planning level in ungauged basins. However, it should be noted that applying this approach in real time might be difficult due to the GLUE procedure.
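The GLUE procedure itself can be sketched in a few lines (a toy rating model with hypothetical numbers standing in for the study's 1D hydraulic model and Landsat-derived observations):

```python
import numpy as np

rng = np.random.default_rng(1)

# GLUE sketch: sample candidate discharges, score each with an informal
# likelihood against "observed" flood widths, keep the behavioural
# samples, and report a likelihood-weighted discharge estimate.
def flood_width(q):            # hypothetical rating model: width vs discharge
    return 120.0 * q ** 0.4

q_true = 850.0                 # m^3/s, unknown in practice
observed = flood_width(q_true) + rng.normal(0.0, 15.0, size=8)

candidates = rng.uniform(100.0, 2000.0, size=5000)
rmse = np.array([np.sqrt(np.mean((flood_width(q) - observed) ** 2))
                 for q in candidates])
likelihood = 1.0 / rmse ** 2   # one choice of informal likelihood measure
behavioural = likelihood > np.quantile(likelihood, 0.95)

q_hat = np.average(candidates[behavioural], weights=likelihood[behavioural])
print(q_hat)
```

As the abstract notes, the result depends on the chosen likelihood measure; swapping `1 / rmse**2` for another informal measure changes both the behavioural set and the weighted estimate.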
Rüger, Robert; Heine, Thomas; Visscher, Lucas
2016-01-01
We propose a new method of calculating electronically excited states that combines a density functional theory (DFT) based ground state calculation with a linear response treatment that employs approximations used in the time-dependent density functional based tight binding (TD-DFTB) approach. The new method termed TD-DFT+TB does not rely on the DFTB parametrization and is therefore applicable to systems involving all combinations of elements. We show that the new method yields UV/Vis absorption spectra that are in excellent agreement with computationally much more expensive time-dependent density functional theory (TD-DFT) calculations. Errors in vertical excitation energies are reduced by a factor of two compared to TD-DFTB.
Carbohydrate Microarrays in Plant Science
Fangel, Jonatan Ulrik; Pedersen, H.L.; Vidal-Melgosa, S.
2012-01-01
industrially and nutritionally. Understanding the biological roles of plant glycans and the effective exploitation of their useful properties requires a detailed understanding of their structures, occurrence, and molecular interactions. Microarray technology has revolutionized the massively high-throughput analysis of nucleotides, proteins, and increasingly carbohydrates. Using microarrays, the abundance of and interactions between hundreds and thousands of molecules can be assessed simultaneously using very small amounts of analytes. Here we show that carbohydrate microarrays are multifunctional tools for plant research and can be used to map glycan populations across large numbers of samples, to screen antibodies, carbohydrate-binding proteins, and carbohydrate-binding modules, and to investigate enzyme activities.
Transfection microarray and the applications.
Miyake, Masato; Yoshikawa, Tomohiro; Fujita, Satoshi; Miyake, Jun
2009-05-01
Microarray transfection has been extensively studied for high-throughput functional analysis of mammalian cells. However, control of efficiency and reproducibility are critical issues for practical use. By using solid-phase transfection accelerators and a nano-scaffold, we provide a highly efficient and reproducible microarray-transfection device, the "transfection microarray". The device could be applied to the limited numbers of available primary cells and stem cells, not only for large-scale functional analysis but also for reporter-based time-lapse cellular event analysis.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-paralleled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches.
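The subspace-sampling idea behind AKCL can be illustrated with a generic Nyström-style low-rank kernel approximation (a common sampling construction for avoiding the full kernel matrix; not the authors' exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, gamma=0.1):
    """RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = rng.standard_normal((500, 3))

# Nystroem-style approximation: sample s landmark points, then
# K ~= K_ns @ pinv(K_ss) @ K_ns.T, never forming the full n x n matrix
# during training (it is built below only to measure the error).
s = 100
landmarks = X[rng.choice(len(X), size=s, replace=False)]
K_ns = rbf(X, landmarks)
K_ss = rbf(landmarks, landmarks)
K_approx = K_ns @ np.linalg.pinv(K_ss) @ K_ns.T

K_full = rbf(X, X)
rel_err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
print(rel_err)
```

Working with the n x s block instead of the n x n matrix is what removes the memory bottleneck that the abstract identifies as KCL's first limitation.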
Maksim Duškin
2015-11-01
Approximation and supposition. This article compares exponents of approximation (expressions like Russian около, примерно, приблизительно, более, свыше) with words expressing supposition (for example Russian скорее всего, наверное, возможно). These words are often confused in research; in particular, researchers often cite exponents of supposition as exponents of approximation. Such an approach arouses some objections. The author intends to demonstrate in this article the notional difference between approximation and supposition, and therefore the difference between the exponents of these two notions. This difference can be described by specifying the different attitudes of approximation and supposition to the notion of knowledge. Supposition implies the speaker's ignorance of the exact number, while approximation does not imply such ignorance. The article offers examples proving this point of view.
Cell-Based Microarrays for In Vitro Toxicology.
Wegener, Joachim
2015-01-01
DNA/RNA and protein microarrays have proven their outstanding bioanalytical performance throughout the past decades, given the unprecedented level of parallelization by which molecular recognition assays can be performed and analyzed. Cell microarrays (CMAs) make use of similar construction principles. They are applied to profile a given cell population with respect to the expression of specific molecular markers and also to measure functional cell responses to drugs and chemicals. This review focuses on the use of cell-based microarrays for assessing the cytotoxicity of drugs, toxins, or chemicals in general. It also summarizes CMA construction principles with respect to the cell types that are used for such microarrays, the readout parameters to assess toxicity, and the various formats that have been established and applied. The review ends with a critical comparison of CMAs and well-established microtiter plate (MTP) approaches.
Ekaterina Kotelnikova
2012-02-01
Elucidation of new biomarkers and potential drug targets from high-throughput profiling data is a challenging task due to the limited number of available biological samples and the questionable reproducibility of differential changes in cross-dataset comparisons. In this paper we propose a novel computational approach for drug and biomarker discovery using comprehensive analysis of multiple expression profiling datasets. The new method relies on aggregation of individual profiling experiments combined with a leave-one-dataset-out validation approach. Aggregated datasets were studied using the Sub-Network Enrichment Analysis algorithm (SNEA) to find consistent, statistically significant key regulators within the global literature-extracted expression regulation network. These regulators were linked to the consistent differentially expressed genes. We have applied our approach to several publicly available human muscle gene expression profiling datasets related to Duchenne muscular dystrophy (DMD). In order to detect both enhanced and repressed processes we considered up- and down-regulated genes separately. Applying the proposed approach to the regulator search, we discovered disturbances in the activity of several muscle-related transcription factors (e.g. MYOG and MYOD1) and of regulators of inflammation, regeneration, and fibrosis. Almost all SNEA-derived regulators of down-regulated genes (e.g. AMPK, TORC2, PPARGC1A) correspond to a single common pathway important for the fast-to-slow twitch fiber type transition. We hypothesize that this process can affect the severity of DMD symptoms, making the corresponding regulators and downstream genes valuable candidates as potential drug targets and exploratory biomarkers.
Design and analysis of mismatch probes for long oligonucleotide microarrays
Deng, Ye; He, Zhili; Van Nostrand, Joy D.; Zhou, Jizhong
2008-08-15
Nonspecific hybridization is currently a major concern with microarray technology. One of the most effective approaches to estimating nonspecific hybridization in oligonucleotide microarrays is the use of mismatch probes; however, this approach has not been used for longer oligonucleotide probes. Here, an oligonucleotide microarray was constructed to evaluate and optimize parameters for 50-mer mismatch probe design. One perfect match (PM) probe and 28 mismatch (MM) probes were designed for each of ten target genes selected from three microorganisms. The microarrays were hybridized with synthesized complementary oligonucleotide targets at different temperatures (e.g., 42, 45 and 50 °C). In general, the probes with evenly distributed mismatches were more distinguishable than those with randomly distributed mismatches. MM probes with 3, 4 and 5 mismatched nucleotides were differentiated for 50-mer oligonucleotide probes hybridized at 50, 45 and 42 °C, respectively. Based on the experimental data generated from this study, a modified positional dependent nearest neighbor (MPDNN) model was constructed to adjust the thermodynamic parameters of matched and mismatched dimer nucleotides in the microarray environment. The MM probes with four flexible positional mismatches were designed using the newly established MPDNN model, and the experimental results demonstrated that the redesigned MM probes could yield more consistent hybridizations. Conclusions: This study provides guidance on the design of MM probes for long oligonucleotides (e.g., 50-mers). The novel MPDNN model has improved the consistency for long MM probes, and this modeling method can potentially be used for the prediction of oligonucleotide microarray hybridizations.
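The "evenly distributed mismatches" design can be sketched as follows (a hypothetical probe sequence and a simple even-spacing rule; the actual study placed mismatches using the MPDNN thermodynamic model):

```python
import random

BASES = "ACGT"

def evenly_spaced_mismatch_probe(probe, n_mismatches, seed=0):
    """Return a copy of `probe` with n_mismatches substitutions placed at
    evenly spaced positions, each swapped to a randomly chosen different
    base. Evenly spaced designs were the most distinguishable here."""
    rng = random.Random(seed)
    step = len(probe) / (n_mismatches + 1)
    positions = [round(step * (i + 1)) - 1 for i in range(n_mismatches)]
    out = list(probe)
    for pos in positions:
        out[pos] = rng.choice([b for b in BASES if b != probe[pos]])
    return "".join(out)

pm = "ACGT" * 12 + "AC"        # a hypothetical 50-mer perfect-match probe
for k in (3, 4, 5):            # mismatch counts resolved at 50/45/42 °C
    mm = evenly_spaced_mismatch_probe(pm, k)
    diffs = sum(a != b for a, b in zip(pm, mm))
    print(k, diffs, mm)
```

Spacing the substitutions evenly keeps every matched stretch short, which is the intuition for why such probes destabilize hybridization more predictably than randomly placed mismatches.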
Unscaled Bayes factors for multiple hypothesis testing in microarray experiments.
Bertolino, Francesco; Cabras, Stefano; Castellanos, Maria Eugenia; Racugno, Walter
2015-12-01
Multiple hypothesis testing collects a series of techniques usually based on p-values as a summary of the available evidence from many statistical tests. In hypothesis testing under a Bayesian perspective, the evidence for a specified hypothesis against an alternative, conditionally on data, is given by the Bayes factor. In this study, we approach multiple hypothesis testing based on both Bayes factors and p-values, regarding multiple hypothesis testing as a multiple model selection problem. To obtain the Bayes factors we assume default priors that are typically improper. In this case the Bayes factor is usually undetermined, due to the ratio of prior pseudo-constants. We show that ignoring the prior pseudo-constants leads to unscaled Bayes factors, which do not invalidate the inferential procedure in multiple hypothesis testing because they are used within a comparative scheme. In fact, using partial information from the p-values, we are able to approximate the sampling null distribution of the unscaled Bayes factor and use it within Efron's multiple testing procedure. The simulation study suggests that under a normal sampling model, and even with small sample sizes, our approach yields false positive and false negative proportions that are lower than those of other common multiple hypothesis testing approaches based only on p-values. The proposed procedure is illustrated in two simulation studies, and the advantages of its use are shown in the analysis of two microarray experiments.
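The key point, that the unknown ratio of prior pseudo-constants is harmless when Bayes factors are used comparatively, is easy to demonstrate (an illustrative monotone evidence summary on simulated test statistics, not the paper's exact statistic):

```python
import numpy as np

rng = np.random.default_rng(3)

# 1000 tests: 950 true nulls and 50 shifted alternatives.
z = np.concatenate([rng.normal(0.0, 1.0, 950),
                    rng.normal(3.0, 1.0, 50)])

# An "unscaled Bayes factor": defined only up to an unknown constant.
unscaled_bf = np.exp(z ** 2 / 4.0)

# Rescaling every unscaled Bayes factor by the same arbitrary constant
# leaves the ranking, and hence any rank-based selection of hypotheses,
# completely unchanged: the pseudo-constant cancels in comparisons.
arbitrary_c = 7.3
ranking_a = np.argsort(-unscaled_bf)
ranking_b = np.argsort(-(arbitrary_c * unscaled_bf))
print(np.array_equal(ranking_a, ranking_b))
```

Calibrating *where* to cut this ranking is what still requires the approximated null distribution described in the abstract.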
Microarray Scanner for Fluorescence Detection
Wang Liqiang; Lu zukang; Li Yingsheng; Zheng Xufeng
2003-01-01
A novel pseudo-confocal microarray scanner is introduced, in which scanning in one dimension is performed by a galvanometer optical scanner and a telecentric objective, and scanning in the other dimension by a stepping motor.
KUDRYAVTSEV Pavel Gennadievich
2015-02-01
The paper deals with the possibilities of using a quasi-homogeneous approximation to describe the properties of dispersed systems. The authors applied the statistical polymer method, based on consideration of the averaged structures of all possible macromolecules of the same weight. Equations were derived which allow the evaluation of many additive parameters of macromolecules and of the systems containing them. The statistical polymer method makes it possible to model branched, cross-linked macromolecules and the systems containing them, in equilibrium or non-equilibrium states. Fractal analysis of a statistical polymer allows the modelling of different types of random fractals and other objects examined with the methods of fractal theory. The fractal polymer method can be applied not only to polymers but also to composites, gels, associates in polar liquids and other packed systems. There is also a description of the states of colloidal solutions of silica from the point of view of statistical physics. This approach is based on the idea that a colloidal solution of silica, a silica sol, consists of an enormous number of interacting particles that are in continuous motion. The paper is devoted to the study of an ideal system of colliding but not interacting sol particles. The behaviour of the silica sol was analysed according to the Maxwell-Boltzmann distribution, and the mean free path was calculated. Using these data, the number of particles able to overcome the potential barrier in a collision was calculated. Different approaches to modelling the kinetics of the sol-gel transition were studied.
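The barrier-crossing estimate mentioned above follows from the Boltzmann factor; a minimal sketch with hypothetical barrier heights (the approximation exp(-E_b/kT) for the high-energy tail fraction, not the paper's full calculation):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
EV = 1.602176634e-19  # 1 eV in joules

def barrier_fraction(barrier_ev, temperature_k=298.0):
    """Approximate fraction of particles with enough thermal energy to
    overcome a potential barrier, from the Maxwell-Boltzmann tail."""
    return math.exp(-barrier_ev * EV / (K_B * temperature_k))

for eb in (0.1, 0.3, 0.5):   # hypothetical barrier heights in eV
    print(eb, barrier_fraction(eb))
```

At room temperature kT is about 0.026 eV, so the crossing fraction drops by orders of magnitude for each extra tenth of an eV of barrier height, which is what makes the barrier the controlling parameter for aggregation kinetics.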
Microarrays for rapid identification of plant viruses.
Boonham, Neil; Tomlinson, Jenny; Mumford, Rick
2007-01-01
Many factors affect the development and application of diagnostic techniques. Plant viruses are an inherently diverse group that, unlike cellular pathogens, possess no nucleotide sequence type (e.g., ribosomal RNA sequences) in common. Detection of plant viruses is becoming more challenging as globalization of trade, particularly in ornamentals, and the potential effects of climate change enhance the movement of viruses and their vectors, transforming the diagnostic landscape. Techniques for assessing seed, other propagation materials and field samples for the presence of specific viruses include biological indexing, electron microscopy, antibody-based detection, including enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), and microarray detection. Of these, microarray detection provides the greatest capability for parallel yet specific testing, and can be used to detect individual, or combinations of viruses and, using current approaches, to do so with a sensitivity comparable to ELISA. Methods based on PCR provide the greatest sensitivity among the listed techniques but are limited in parallel detection capability even in "multiplexed" applications. Various aspects of microarray technology, including probe development, array fabrication, assay target preparation, hybridization, washing, scanning, and interpretation are presented and discussed, for both current and developing technology.
Linking microarray reporters with protein functions
Gaj Stan
2007-09-01
Background: The analysis of microarray experiments requires accurate and up-to-date functional annotation of the microarray reporters to optimize the interpretation of the biological processes involved. Pathway visualization tools are used to connect gene expression data with existing biological pathways by using specific database identifiers that link reporters with elements in the pathways. Results: This paper proposes a novel method that aims to improve microarray reporter annotation by BLASTing the original reporter sequences against a species-specific EMBL subset that was derived from, and crosslinked back to, the highly curated UniProt database. The resulting alignments were filtered using high-quality alignment criteria and further compared with the outcome of a more traditional approach, where reporter sequences were BLASTed against EnsEMBL, followed by locating the corresponding protein (UniProt) entry for the high-quality hits. Combining the results of both methods resulted in successful annotation of >58% of all reporter sequences with UniProt IDs on two commercial array platforms, increasing the number of Incyte reporters that could be coupled to Gene Ontology terms from 32.7% to 58.3%, and to a local GenMAPP pathway from 9.6% to 16.7%. For Agilent, 35.3% of the total reporters are now linked to GO nodes and 7.1% to local pathways. Conclusion: Our methods increased the annotation quality of microarray reporter sequences and allowed us to visualize more reporters using pathway visualization tools. Even in cases where the original reporter annotation showed the correct description, the new identifiers often allowed improved pathway and Gene Ontology linking. These methods are freely available at http://www.bigcat.unimaas.nl/public/publications/Gaj_Annotation/.
Singh, A; Saini, B S; Singh, D
2016-05-01
This study presents an alternative approach to approximate entropy (ApEn) threshold value (r) selection. There are two limitations of the traditional ApEn algorithm: (1) the occurrence of an undefined conditional probability (CPu) where no template match is found, and (2) the use of a crisp tolerance (radius) threshold r. To overcome these limitations, CPu is substituted with an optimum bias setting ɛ opt, which is found by varying ɛ from 1/(N - m) to 1 in increments of 0.05, where N is the length of the series and m is the embedding dimension. Furthermore, an alternative approach for the selection of r, based on binning the distance values obtained by template matching to calculate ApEnbin, is presented. It is observed that ApEnmax, ApEnchon and ApEnbin converge for ɛ opt = 0.6 in 50 realizations (n = 50) of random number series of N = 300. Similar analysis suggests ɛ opt = 0.65 and ɛ opt = 0.45 for 50 realizations each of fractional Brownian motion and MIX(P) series (Lu et al. in J Clin Monit Comput 22(1):23-29, 2008). ɛ opt = 0.5 is suggested for heart rate variability (HRV) and systolic blood pressure variability (SBPV) signals obtained from 50 young healthy subjects in supine and upright positions. It is observed that (1) ApEnbin of HRV is lower than that of SBPV, (2) ApEnbin of HRV increases from supine to upright due to vagal inhibition and (3) ApEnbin of BPV decreases from supine to upright due to sympathetic activation. Moreover, a merit of ApEnbin is that it provides an alternative to the cumbersome ApEnmax procedure.
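For reference, the classical ApEn with a crisp tolerance r, the baseline this study modifies, can be sketched as follows (standard formulation with self-matches included; the binned variant ApEnbin is not reproduced here):

```python
import numpy as np

def apen(x, m=2, r_factor=0.2):
    """Classical approximate entropy with tolerance r = r_factor * SD(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev (max-norm) distance between all template pairs
        dist = np.abs(templates[:, None, :] - templates[None, :, :]).max(-1)
        counts = (dist <= r).mean(axis=1)   # self-match keeps counts > 0
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0.0, 20.0 * np.pi, 300))   # predictable series
irregular = rng.standard_normal(300)                     # white noise
print(apen(regular), apen(irregular))  # the regular series scores lower
```

Counting the self-match is exactly what hides the undefined-conditional-probability problem the abstract describes: without it, `counts` could be zero and the logarithm undefined.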
Microarray Technologies in Fungal Diagnostics.
Rupp, Steffen
2017-01-01
Microarray technologies have been a major research tool over the last decades. In addition, they have been introduced into several fields of diagnostics, including the diagnostics of infectious diseases. Microarrays are highly parallelized assay systems that were initially developed for multiparametric nucleic acid detection. From there they rapidly developed towards a tool for the detection of all kinds of biological compounds (DNA, RNA, proteins, cells, carbohydrates, etc.) or their modifications (methylation, phosphorylation, etc.). The combination of closed-tube systems and lab-on-a-chip devices with microarrays further enabled a higher degree of automation with a reduced contamination risk. Microarray-based diagnostic applications currently complement, and may in the future replace, classical methods in clinical microbiology such as blood cultures, resistance determination, microscopic and metabolic analyses, as well as biochemical or immunohistochemical assays. In addition, novel diagnostic markers are appearing, like noncoding RNAs and miRNAs, providing additional room for novel nucleic acid based biomarkers. Here I focus on microarray technologies in diagnostics and as research tools, based on nucleic acid arrays.
Casida, Mark E.; Salahub, Dennis R.
2000-11-01
The time-dependent density functional theory (TD-DFT) calculation of excitation spectra places certain demands on the DFT exchange-correlation potential, vxc, that are not met by the functionals normally used in molecular calculations. In particular, for high-lying excitations, it is crucial that the asymptotic behavior of vxc be correct. In a previous paper, we introduced a novel asymptotic-correction approach which we used with the local density approximation (LDA) to yield an asymptotically corrected LDA (AC-LDA) potential [Casida, Casida, and Salahub, Int. J. Quantum Chem. 70, 933 (1998)]. The present paper details the theory underlying this asymptotic correction approach, which involves a constant shift to incorporate the effect of the derivative discontinuity (DD) in the bulk region of finite systems, and a spliced asymptotic correction in the large r region. This is done without introducing any adjustable parameters. We emphasize that correcting the asymptotic behavior of vxc is not by itself sufficient to improve the overall form of the potential unless the effect of the derivative discontinuity is taken into account. The approach could be used to correct vxc from any of the commonly used gradient-corrected functionals. It is here applied to the LDA, using the asymptotically correct potential of van Leeuwen and Baerends (LB94) in the large r region. The performance of our AC-LDA vxc is assessed for the calculation of TD-DFT excitation energies for a large number of excitations, including both valence and Rydberg states, for each of four small molecules: N2, CO, CH2O, and C2H4. The results show a significant improvement over those from either the LB94 or the LDA functionals. This confirms that the DD is indeed an important element in the design of functionals. The quality of TDLDA/LB94 and TDLDA/AC-LDA oscillator strengths was also assessed in what we believe to be the first rigorous assessment of TD-DFT molecular oscillator strengths in comparison with
Carbohydrate microarrays in plant science.
Fangel, Jonatan U; Pedersen, Henriette L; Vidal-Melgosa, Silvia; Ahl, Louise I; Salmean, Armando Asuncion; Egelund, Jack; Rydahl, Maja Gro; Clausen, Mads H; Willats, William G T
2012-01-01
Almost all plant cells are surrounded by glycan-rich cell walls, which form much of the plant body and collectively are the largest source of biomass on earth. Plants use polysaccharides for support, defense, signaling, cell adhesion, and as energy storage, and many plant glycans are also important industrially and nutritionally. Understanding the biological roles of plant glycans and the effective exploitation of their useful properties requires a detailed understanding of their structures, occurrence, and molecular interactions. Microarray technology has revolutionized the massively high-throughput analysis of nucleotides, proteins, and increasingly carbohydrates. Using microarrays, the abundance of and interactions between hundreds and thousands of molecules can be assessed simultaneously using very small amounts of analytes. Here we show that carbohydrate microarrays are multifunctional tools for plant research and can be used to map glycan populations across large numbers of samples to screen antibodies, carbohydrate binding proteins, and carbohydrate binding modules and to investigate enzyme activities.
Glass slides to DNA microarrays
Samuel D Conzone
2004-03-01
Full Text Available A tremendous interest in deoxyribonucleic acid (DNA) characterization tools was spurred by the mapping and sequencing of the human genome. New tools were needed, beginning in the early 1990s, to cope with the unprecedented amount of genomic information that was being discovered. Such needs led to the development of DNA microarrays: tiny gene-based sensors traditionally prepared on coated glass microscope slides. The following review is intended to provide historical insight into the advent of the DNA microarray, followed by a description of the technology from both the application and fabrication points of view. Finally, the unmet challenges and needs associated with DNA microarrays will be described to define areas of potential future developments for the materials researcher.
Performance comparison of SLFN training algorithms for DNA microarray classification.
Huynh, Hieu Trung; Kim, Jung-Ja; Won, Yonggwan
2011-01-01
The classification of biological samples measured by DNA microarrays has been a major topic of interest in the last decade, and several approaches to this topic have been investigated. However, classifying the high-dimensional data of microarrays still presents a challenge to researchers. In this chapter, we focus on evaluating the performance of training algorithms for single hidden layer feedforward neural networks (SLFNs) in classifying DNA microarrays. The training algorithms include backpropagation (BP), the extreme learning machine (ELM), regularized least squares ELM (RLS-ELM), and a recently proposed effective algorithm called neural-SVD. We also compare the performance of the neural network approaches with popular classifiers such as the support vector machine (SVM), principal component analysis (PCA) and Fisher discriminant analysis (FDA).
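The ELM training scheme compared above can be sketched in a few lines: the input-to-hidden weights are drawn at random and only the output weights are solved for analytically, which is why ELM trains far faster than backpropagation. A minimal NumPy sketch (function names are illustrative, not from the chapter):

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Train a single-hidden-layer feedforward network the ELM way:
    random input weights/biases, output weights by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

RLS-ELM differs only in adding a ridge penalty to the least-squares step, which stabilizes the solve when the hidden-layer matrix is ill-conditioned, as it often is for high-dimensional microarray inputs.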
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2011-01-01
Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing, double square-root (DSR) and common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal, based on polynomial expansions in terms of the reflection and dip angles, in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.
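For orientation, in the textbook constant-velocity special case (not the general inhomogeneous form the abstract analyzes), the DSR relation reduces to the sum of two one-way traveltimes from the source $x_s$ and receiver $x_r$ to a scatterer at lateral position $y$ and depth $z$ in a medium of velocity $v$:

```latex
t(x_s, x_r) \;=\; \frac{\sqrt{z^{2} + (x_s - y)^{2}}}{v}
           \;+\; \frac{\sqrt{z^{2} + (x_r - y)^{2}}}{v}
```

The singularity for horizontally traveling waves mentioned in the abstract arises because the corresponding eikonal form degenerates as the ray directions approach the horizontal.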
Approach of EEG detection based on ELM and approximate entropy
袁琦; 周卫东; 李淑芳; 蔡冬梅
2012-01-01
The automatic detection and classification of epileptic EEG are significant in the diagnosis of epilepsy. This paper presents a new EEG detection approach based on the extreme learning machine (ELM) and approximate entropy (ApEn). First, the ApEn value is calculated as a non-linear feature, and the wavelet transform is used to compute fluctuation indices as linear features. The combined feature vector is then used to train a single-hidden-layer feedforward neural network (SLFN) with the ELM algorithm. Finally, the SLFN is tested. Experiments demonstrate that, compared with the BP algorithm and support vector machine (SVM), ELM performs best in terms of training time and classification accuracy, achieving a classification rate of more than 98% for interictal and ictal EEGs.
Some results in Diophantine approximation
the basic concepts on which the papers build. Among other things, it introduces metric Diophantine approximation, Mahler's approach on algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion on Mahler's problem when considered...
Combining microarrays and genetic analysis
Alberts, Rudi; Fu, Jingyuan; Swertz, Morris A.; Lubbers, L. Alrik; Albers, Casper J.; Jansen, Ritsert C.
2005-01-01
Gene expression can be studied at a genome-wide scale with the aid of modern microarray technologies. Expression profiling of tens to hundreds of individuals in a genetic population can reveal the consequences of genetic variation. In this paper it is argued that the design and analysis of such a
Picky: oligo microarray design for large genomes
Chou, Hui-Hsien; Hsia, An-Ping; Mooney, Denise L; Schnable, Patrick S
2004-01-01
Many large genomes are getting sequenced nowadays. Biologists are eager to start microarray analysis taking advantage of all known genes of a species, but existing microarray design tools were very inefficient for large genomes...
Mallet, Vivien; Sportisse, Bruno
2006-01-01
This paper estimates the uncertainty in the outputs of a chemistry-transport model due to physical parameterizations and numerical approximations. An ensemble of 20 simulations is generated from a reference simulation in which one key parameterization (chemical mechanism, dry deposition parameterization, turbulent closure, etc.) or one numerical approximation (grid size, splitting method, etc.) is changed at a time. Intercomparisons of the simulations and comparisons w...
Eils Roland
2005-11-01
Full Text Available Abstract Background The extensive use of DNA microarray technology in the characterization of the cell transcriptome is leading to an ever increasing amount of microarray data from cancer studies. Although similar questions for the same type of cancer are addressed in these different studies, a comparative analysis of their results is hampered by the use of heterogeneous microarray platforms and analysis methods. Results In contrast to a meta-analysis approach, where results of different studies are combined on an interpretative level, we investigate here how to directly integrate raw microarray data from different studies for the purpose of supervised classification analysis. We use median rank scores and quantile discretization to derive numerically comparable measures of gene expression from different platforms. These transformed data are then used for training of classifiers based on support vector machines. We apply this approach to six publicly available cancer microarray gene expression data sets, which consist of three pairs of studies, each examining the same type of cancer, i.e. breast cancer, prostate cancer or acute myeloid leukemia. For each pair, one study was performed by means of cDNA microarrays and the other by means of oligonucleotide microarrays. In each pair, high classification accuracies (>85%) were achieved with training and testing on data instances randomly chosen from both data sets in a cross-validation analysis. To exemplify the potential of this cross-platform classification analysis, we use two leukemia microarray data sets to show that important genes with regard to the biology of leukemia are selected in an integrated analysis, which are missed in either single-set analysis. Conclusion Cross-platform classification of multiple cancer microarray data sets yields discriminative gene expression signatures that are found and validated on a large number of microarray samples, generated by different laboratories and
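The quantile-discretization idea can be sketched roughly as follows: within each sample, expression values are replaced by quantile bin indices, so that measurements from different platforms land on the same discrete scale. This is a simplified illustration under my own assumptions, not the paper's exact procedure (ties and the separate median-rank-score transformation are ignored).

```python
import numpy as np

def quantile_discretize(expr, n_bins=8):
    """Map each sample's expression values to quantile bin indices.

    expr: genes x samples array. Within each sample (column), genes are
    ranked and the ranks are collapsed into n_bins equal-frequency bins,
    yielding platform-independent integer codes 0..n_bins-1.
    """
    ranks = expr.argsort(axis=0).argsort(axis=0)          # per-sample gene ranks
    return (ranks * n_bins // expr.shape[0]).astype(int)  # equal-frequency bins
```

Because only within-sample ranks survive the transformation, platform-specific intensity scales and non-linearities drop out, which is what makes joint training of a classifier across cDNA and oligonucleotide data sets feasible.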
Direct calibration of PICKY-designed microarrays
Ronald Pamela C
2009-10-01
Full Text Available Abstract Background Few microarrays have been quantitatively calibrated to identify optimal hybridization conditions because it is difficult to precisely determine the hybridization characteristics of a microarray using biologically variable cDNA samples. Results Using synthesized samples with known concentrations of specific oligonucleotides, a series of microarray experiments was conducted to evaluate microarrays designed by PICKY, an oligo microarray design software tool, and to test a direct microarray calibration method based on the PICKY-predicted, thermodynamically closest nontarget information. The complete set of microarray experiment results is archived in the GEO database with series accession number GSE14717. Additional data files and Perl programs described in this paper can be obtained from the website http://www.complex.iastate.edu under the PICKY Download area. Conclusion PICKY-designed microarray probes are highly reliable over a wide range of hybridization temperatures and sample concentrations. The microarray calibration method reported here allows researchers to experimentally optimize their hybridization conditions. Because this method is straightforward, uses existing microarrays and relatively inexpensive synthesized samples, it can be used by any lab that uses microarrays designed by PICKY. In addition, other microarrays can be reanalyzed by PICKY to obtain the thermodynamically closest nontarget information for calibration.
Smith Maria W
2006-05-01
Full Text Available Abstract Background Many model systems of human viral disease involve human-mouse chimeric tissue. One such system is the recently developed SCID-beige/Alb-uPA mouse model of hepatitis C virus (HCV) infection, which involves a human-mouse chimeric liver. The use of functional genomics to study HCV infection in these chimeric tissues is complicated by the potential cross-hybridization of mouse mRNA on human oligonucleotide microarrays. To identify genes affected by mouse liver mRNA hybridization, mRNA from identical human liver samples labeled with either Cy3 or Cy5 was compared in the presence and absence of known amounts of mouse liver mRNA labeled in only one dye. Results The results indicate that hybridization of mouse mRNA to the corresponding human gene probe on the Agilent Human 22 K oligonucleotide microarray does occur. The number of genes affected by such cross-hybridization was subsequently reduced to approximately 300 genes both by increasing the hybridization temperature and using liver samples which contain at least 80% human tissue. In addition, real-time quantitative RT-PCR using human-specific probes was shown to be a valid method to verify the expression level in human cells of known cross-hybridizing genes. Conclusion The identification of genes affected by cross-hybridization of mouse liver RNA on human oligonucleotide microarrays makes it feasible to use functional genomics approaches to study the chimeric SCID-beige/Alb-uPA mouse model of HCV infection. This approach used to study cross-species hybridization on oligonucleotide microarrays can be adapted to other chimeric systems of viral disease to facilitate selective analysis of human gene expression.
Approximate Implicitization Using Linear Algebra
Oliver J. D. Barrowclough
2012-01-01
Full Text Available We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.
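The SVD-based core of approximate implicitization can be sketched for the simplest case, a conic fitted to a sampled parametric curve: evaluate the monomial basis at points of the curve, and take the right singular vector belonging to the smallest singular value as the implicit coefficient vector. This is a generic "weak" collocation sketch, not any of the paper's specific algorithms.

```python
import numpy as np

def implicitize_conic(x, y):
    """Fit an implicit conic c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2 = 0
    to sampled curve points: build the collocation matrix of basis values
    and take the right singular vector of the smallest singular value,
    i.e. the unit-norm c minimizing ||M c||."""
    M = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]

# unit circle parametrized by t; the recovered conic should be x^2 + y^2 - 1 = 0
t = np.linspace(0, 2 * np.pi, 100)
c = implicitize_conic(np.cos(t), np.sin(t))
```

The smallest singular value itself measures how well the point set is matched by any conic, which is what makes the SVD a natural tool for *approximate* implicitization.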
Low Rank Approximation Algorithms, Implementation, Applications
Markovsky, Ivan
2012-01-01
Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
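The unstructured case of the problem treated in the book has a closed-form solution: by the Eckart-Young theorem, the best rank-r approximation in Frobenius (and spectral) norm is obtained by truncating the SVD. A minimal sketch (the structured Toeplitz/Hankel/Sylvester variants discussed in the book do not admit such a direct solution):

```python
import numpy as np

def low_rank(A, r):
    """Best rank-r approximation of A (Eckart-Young) via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]
```

The Frobenius-norm error of the truncation equals the root-sum-square of the discarded singular values, which gives an a priori bound on how well a given data matrix can be modelled at rank r.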
Microarray-based Identification of Novel Biomarkers in Asthma
Kenji Izuhara
2006-01-01
Full Text Available Bronchial asthma is a complicated and diverse disorder affected by genetic and environmental factors. It is widely accepted that it is a Th2-type inflammation originating in lung and caused by inhalation of ubiquitous allergens. The complicated and diverse pathogenesis of this disease yet to be clarified. Functional genomics is the analysis of whole gene expression profiling under given condition, and microarray technology is now the most powerful tool for functional genomics. Several attempts to clarify the pathogenesis of bronchial asthma have been carried out using microarray technology, providing us some novel biomarkers for diagnosis, therapeutic targets or understanding pathogenic mechanisms of bronchial asthma. In this article, we review the outcomes of these analyses by the microarray approach as applied to this disease by focusing on the identification of novel biomarkers.
Comparative analysis of genomic signal processing for microarray data clustering.
Istepanian, Robert S H; Sungoor, Ala; Nebel, Jean-Christophe
2011-12-01
Genomic signal processing is a new area of research that combines advanced digital signal processing methodologies for enhanced genetic data analysis. It has many promising applications in bioinformatics and the next generation of healthcare systems, in particular in the field of microarray data clustering. In this paper we present a comparative performance analysis of enhanced digital spectral analysis methods for robust clustering of gene expression across multiple microarray data samples. Three digital signal processing methods (linear predictive coding, wavelet decomposition, and fractal dimension) are studied to provide a comparative evaluation of the clustering performance of these methods on several microarray datasets. The results of this study show that the fractal approach provides the best clustering accuracy compared to other digital signal processing and well known statistical methods.
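Of the three feature-extraction methods compared above, linear predictive coding is the simplest to sketch: each point of an expression profile is predicted from its predecessors, and the resulting prediction coefficients form a compact feature vector for clustering. A minimal least-squares sketch (the paper's exact formulation and parameters are not specified here):

```python
import numpy as np

def lpc_features(signal, order=4):
    """Linear predictive coding by least squares: predict each sample
    from its `order` predecessors; the coefficients summarize the
    profile's spectral shape and can be fed to a clustering algorithm."""
    s = np.asarray(signal, dtype=float)
    # column k holds the lag-(k+1) predecessors of the targets
    X = np.column_stack([s[order - k - 1:len(s) - k - 1] for k in range(order)])
    y = s[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

Profiles with similar dynamics map to nearby coefficient vectors, so ordinary distance-based clustering on these features groups genes by expression dynamics rather than by raw intensity.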
Pablo Aires Araujo; Maristela da Silva Souza; João Francisco Magno Ribas
2014-01-01
The present study aims to propose an approach within the field theoretical and methodological between the approach and critical-surpassing and the conceptual elements of motor praxeology in order to...
Kubo, Hiroko; Shibato, Junko; Saito, Tomomi; Ogawa, Tetsuo; Rakwal, Randeep; Shioda, Seiji
2015-01-01
The use of lavender oil (LO)--a commonly, used oil in aromatherapy, with well-defined volatile components linalool and linalyl acetate--in non-traditional medicine is increasing globally. To understand and demonstrate the potential positive effects of LO on the body, we have established an animal model in this current study, investigating the orally administered LO effects genome wide in the rat small intestine, spleen, and liver. The rats were administered LO at 5 mg/kg (usual therapeutic dose in humans) followed by the screening of differentially expressed genes in the tissues, using a 4×44-K whole-genome rat chip (Agilent microarray platform; Agilent Technologies, Palo Alto, CA, USA) in conjunction with a dye-swap approach, a novelty of this study. Fourteen days after LO treatment and compared with a control group (sham), a total of 156 and 154 up (≧ 1.5-fold)- and down (≦ 0.75-fold)-regulated genes, 174 and 66 up- (≧ 1.5-fold)- and down (≦ 0.75-fold)-regulated genes, and 222 and 322 up- (≧ 1.5-fold)- and down (≦ 0.75-fold)-regulated genes showed differential expression at the mRNA level in the small intestine, spleen and liver, respectively. The reverse transcription-polymerase chain reaction (RT-PCR) validation of highly up- and down-regulated genes confirmed the regulation of the Papd4, Lrp1b, Alb, Cyr61, Cyp2c, and Cxcl1 genes by LO as examples in these tissues. Using bioinformatics, including Ingenuity Pathway Analysis (IPA), differentially expressed genes were functionally categorized by their Gene Ontology (GO) and biological function and network analysis, revealing their diverse functions and potential roles in LO-mediated effects in rat. Further IPA analysis in particular unraveled the presence of novel genes, such as Papd4, Or8k5, Gprc5b, Taar5, Trpc6, Pld2 and Onecut3 (up-regulated top molecules) and Tnf, Slc45a4, Slc25a23 and Samt4 (down-regulated top molecules), to be influenced by LO treatment in the small intestine, spleen and liver
Biclustering of time series microarray data.
Meng, Jia; Huang, Yufei
2012-01-01
Clustering is a popular data exploration technique widely used in microarray data analysis. In this chapter, we review the ideas and algorithms of biclustering and its applications in time series microarray analysis. We first introduce the concept and importance of biclustering and its different variations. We then focus our discussion on the popular iterative signature algorithm (ISA) for searching for biclusters in a microarray dataset. Next, we discuss in detail the enrichment constraint time-dependent ISA (ECTDISA) for identifying biologically meaningful temporal transcription modules from time series microarray datasets. In the end, we provide an example of ECTDISA applied to time series microarray data of Kaposi's Sarcoma-associated Herpesvirus (KSHV) infection.
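The core ISA update alternates between two scoring steps until a fixed point is reached. The sketch below is a deliberately stripped-down illustration with ad hoc thresholds and a user-supplied seed gene set; the published algorithm normalizes genes and conditions separately and runs from many random seeds.

```python
import numpy as np

def isa_bicluster(E, seed_genes, t_g=1.0, t_c=1.0, iters=20):
    """Simplified iterative signature algorithm: alternate between scoring
    conditions over the current gene set and genes over the current
    condition set, keeping entries whose mean z-score exceeds a threshold."""
    Z = (E - E.mean(axis=0)) / E.std(axis=0)        # standardize each condition
    genes = seed_genes.copy()
    conds = np.zeros(E.shape[1], dtype=bool)
    for _ in range(iters):
        conds = Z[genes].mean(axis=0) > t_c         # conditions scored over genes
        if not conds.any():
            break
        new_genes = Z[:, conds].mean(axis=1) > t_g  # genes scored over conditions
        if (new_genes == genes).all():              # fixed point reached
            break
        genes = new_genes
    return genes, conds
```

A seed set that merely overlaps a coherent transcription module is typically pulled onto the full module within a few iterations, which is the "signature" behavior ECTDISA builds on by adding time-dependence and enrichment constraints.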
KUDRYAVTSEV Pavel Gennadievich
2015-08-01
Full Text Available The paper deals with possibilities to use quasi-homogeneous approximation for describing properties of dispersed systems. The authors applied the statistical polymer method, based on consideration of averaged structures of all possible macromolecules of the same weight. Equations were deduced which allow evaluation of many additive parameters of macromolecules and of systems containing them. The statistical polymer method makes it possible to model branched, cross-linked macromolecules and the systems containing them, whether in equilibrium or non-equilibrium states. Fractal analysis of the statistical polymer allows modeling of different types of random fractals and other objects studied by the methods of fractal theory. The statistical polymer method is applicable not only to polymers but also to composites, gels, associates in polar liquids and other packed systems. There is also a description of the states of colloid solutions of silicon dioxide from the point of view of statistical physics. This approach is based on the idea that a colloid solution of silicon dioxide (a silica sol) consists of an enormous number of interacting particles in constant motion. The paper studies an ideal system of colliding but non-interacting sol particles. The behavior of the silica sol was analyzed using the Maxwell-Boltzmann distribution, and the mean free path was calculated. Using these data, the number of particles able to overcome the potential barrier in a collision was calculated. Different approaches to modeling the kinetics of the sol-gel transition were studied.
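The kinetic-theory quantities mentioned in the abstract follow from textbook formulas: the mean free path of hard spheres, the Maxwell-Boltzmann mean speed, and the Boltzmann factor for the fraction of collisions energetic enough to cross a barrier. The particle parameters below are illustrative placeholders, not values from the paper.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0            # temperature, K
n = 1e20             # particle number density, m^-3 (assumed)
d = 1e-8             # effective particle diameter, m (assumed)
m = 1e-21            # particle mass, kg (assumed)

# hard-sphere mean free path: lambda = 1 / (sqrt(2) * pi * d^2 * n)
mean_free_path = 1.0 / (np.sqrt(2.0) * np.pi * d**2 * n)

# Maxwell-Boltzmann mean speed: <v> = sqrt(8 k_B T / (pi m))
mean_speed = np.sqrt(8.0 * k_B * T / (np.pi * m))

# fraction of collisions with energy above a barrier E_a (Boltzmann factor)
E_a = 5.0 * k_B * T
barrier_fraction = np.exp(-E_a / (k_B * T))
```

Dividing the collision rate (mean speed over mean free path) by the inverse Boltzmann factor gives the rate of barrier-crossing collisions, which is the quantity that governs aggregation kinetics in the sol-gel transition picture.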
The Current Status of DNA Microarrays
Shi, Leming; Perkins, Roger G.; Tong, Weida
DNA microarray technology that allows simultaneous assay of thousands of genes in a single experiment has steadily advanced to become a mainstream method used in research, and has reached a stage that envisions its use in medical applications and personalized medicine. Many different strategies have been developed for manufacturing DNA microarrays. In this chapter, we discuss the manufacturing characteristics of seven microarray platforms that were used in a recently completed large study by the MicroArray Quality Control (MAQC) consortium, which evaluated the concordance of results across these platforms. The platforms can be grouped into three categories: (1) in situ synthesis of oligonucleotide probes on microarrays (Affymetrix GeneChip® arrays based on photolithography synthesis and Agilent's arrays based on inkjet synthesis); (2) spotting of presynthesized oligonucleotide probes on microarrays (GE Healthcare's CodeLink system, Applied Biosystems' Genome Survey Microarrays, and the custom microarrays printed with Operon's oligonucleotide set); and (3) deposition of presynthesized oligonucleotide probes on bead-based microarrays (Illumina's BeadChip microarrays). We conclude this chapter with our views on the challenges and opportunities toward acceptance of DNA microarray data in clinical and regulatory settings.
Chatterjee, Koushik; Pastorczak, Ewa; Jawulski, Konrad; Pernal, Katarzyna
2016-06-01
A perfect-pairing generalized valence bond (GVB) approximation is known to be one of the simplest approximations, which allows one to capture the essence of static correlation in molecular systems. In spite of its attractive feature of being relatively computationally efficient, this approximation misses a large portion of dynamic correlation and does not offer sufficient accuracy to be generally useful for studying electronic structure of molecules. We propose to correct the GVB model and alleviate some of its deficiencies by amending it with the correlation energy correction derived from the recently formulated extended random phase approximation (ERPA). On the examples of systems of diverse electronic structures, we show that the resulting ERPA-GVB method greatly improves upon the GVB model. ERPA-GVB recovers most of the electron correlation and it yields energy barrier heights of excellent accuracy. Thanks to a balanced treatment of static and dynamic correlation, ERPA-GVB stays reliable when one moves from systems dominated by dynamic electron correlation to those for which the static correlation comes into play.
The porcelain crab transcriptome and PCAD, the porcelain crab microarray and sequence database.
Abderrahmane Tagmount
Full Text Available BACKGROUND: With the emergence of a completed genome sequence of the freshwater crustacean Daphnia pulex, construction of genomic-scale sequence databases for additional crustacean sequences is important for comparative genomics and annotation. Porcelain crabs, genus Petrolisthes, have been powerful crustacean models for environmental and evolutionary physiology with respect to thermal adaptation and understanding responses of marine organisms to climate change. Here, we present a large-scale EST sequencing and cDNA microarray database project for the porcelain crab Petrolisthes cinctipes. METHODOLOGY/PRINCIPAL FINDINGS: A set of approximately 30K unique sequences (UniSeqs) representing approximately 19K clusters were generated from approximately 98K high quality ESTs from a set of tissue specific non-normalized and mixed-tissue normalized cDNA libraries from the porcelain crab Petrolisthes cinctipes. Homology for each UniSeq was assessed using BLAST, InterProScan, GO and KEGG database searches. Approximately 66% of the UniSeqs had homology in at least one of the databases. All EST and UniSeq sequences along with annotation results and coordinated cDNA microarray datasets have been made publicly accessible at the Porcelain Crab Array Database (PCAD), a feature-enriched version of the Stanford and Longhorn Array Databases. CONCLUSIONS/SIGNIFICANCE: The EST project presented here represents the third largest sequencing effort for any crustacean, and the largest effort for any crab species. Our assembly and clustering results suggest that our porcelain crab EST data set is equally diverse to the much larger EST set generated in the Daphnia pulex genome sequencing project, and thus will be an important resource to the Daphnia research community. Our homology results support the pancrustacea hypothesis and suggest that Malacostraca may be ancestral to Branchiopoda and Hexapoda. Our results also suggest that our cDNA microarrays cover as much of the
Diophantine approximation and badly approximable sets
Kristensen, S.; Thorn, R.; Velani, S.
2006-01-01
Let (X,d) be a metric space and (Omega, d) a compact subspace of X which supports a non-atomic finite measure m. We consider `natural' classes of badly approximable subsets of Omega. Loosely speaking, these consist of points in Omega which `stay clear' of some given set of points in X. The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework, as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension
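In standard notation, the classical set referred to in the abstract is

```latex
\mathrm{Bad} \;=\; \Bigl\{\, x \in \mathbb{R} \;:\; \exists\, c(x) > 0
\ \text{such that}\ \Bigl| x - \frac{p}{q} \Bigr| > \frac{c(x)}{q^{2}}
\ \ \text{for all}\ \frac{p}{q} \in \mathbb{Q} \,\Bigr\},
```

and, for weights $i, j \ge 0$ with $i + j = 1$, its simultaneous analogue is

```latex
\mathrm{Bad}(i,j) \;=\; \Bigl\{\, (x,y) \in \mathbb{R}^{2} \;:\; \exists\, c > 0
\ \text{such that}\ \max\bigl( \|qx\|^{1/i},\, \|qy\|^{1/j} \bigr) > \frac{c}{q}
\ \ \text{for all}\ q \in \mathbb{N} \,\Bigr\},
```

where $\|\cdot\|$ denotes the distance to the nearest integer.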
Weighted approximation with varying weight
Totik, Vilmos
1994-01-01
A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form $w^n P_n$. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some $L^p$ extremal problems on the real line with exponential weights, which, for the case p = 2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some $L^p$ extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.
The Porcelain Crab Transcriptome and PCAD, the Porcelain Crab Microarray and Sequence Database
Tagmount, Abderrahmane; Wang, Mei; Lindquist, Erika; Tanaka, Yoshihiro; Teranishi, Kristen S.; Sunagawa, Shinichi; Wong, Mike; Stillman, Jonathon H.
2010-01-27
Background: With the emergence of a completed genome sequence of the freshwater crustacean Daphnia pulex, construction of genomic-scale sequence databases for additional crustacean sequences is important for comparative genomics and annotation. Porcelain crabs, genus Petrolisthes, have been powerful crustacean models for environmental and evolutionary physiology with respect to thermal adaptation and understanding responses of marine organisms to climate change. Here, we present a large-scale EST sequencing and cDNA microarray database project for the porcelain crab Petrolisthes cinctipes. Methodology/Principal Findings: A set of ~30K unique sequences (UniSeqs) representing ~19K clusters were generated from ~98K high quality ESTs from a set of tissue specific non-normalized and mixed-tissue normalized cDNA libraries from the porcelain crab Petrolisthes cinctipes. Homology for each UniSeq was assessed using BLAST, InterProScan, GO and KEGG database searches. Approximately 66% of the UniSeqs had homology in at least one of the databases. All EST and UniSeq sequences along with annotation results and coordinated cDNA microarray datasets have been made publicly accessible at the Porcelain Crab Array Database (PCAD), a feature-enriched version of the Stanford and Longhorn Array Databases. Conclusions/Significance: The EST project presented here represents the third largest sequencing effort for any crustacean, and the largest effort for any crab species. Our assembly and clustering results suggest that our porcelain crab EST data set is equally diverse to the much larger EST set generated in the Daphnia pulex genome sequencing project, and thus will be an important resource to the Daphnia research community. Our homology results support the pancrustacea hypothesis and suggest that Malacostraca may be ancestral to Branchiopoda and Hexapoda. Our results also suggest that our cDNA microarrays cover as much of the transcriptome as can reasonably be captured in
(author not listed)
2010-01-01
In this paper, the geometric properties of a pair of line contact surfaces are investigated. Then, based on the observation that the cutter envelope surface contacts with the cutter surface and design surface along the characteristic curve and cutter contact (CC) path, respectively, a mathematical model describing the third-order approximation of the cutter envelope surface according to just one given cutter location (CL) is developed. It is shown that at the CC point both the normal curvature of the normal section of the cutter envelope surface and its derivative with respect to the arc length of the normal section can be determined by those of the cutter surface and design surface. This model characterizes the intrinsic relationship among the cutter surface, cutter envelope surface and design surface in the neighborhood of the CC point, and yields the mathematical foundation for optimally approximating the cutter envelope surface to the design surface by adjusting the cutter location.
Matveev, Alexei V; Rösch, Notker
2008-06-28
We suggest an approximate relativistic model for economical all-electron calculations on molecular systems that exploits an atomic ansatz for the relativistic projection transformation. With such a choice, the projection transformation matrix is by definition both transferable and independent of the geometry. The formulation is flexible with regard to the level at which the projection transformation is approximated; we employ the free-particle Foldy-Wouthuysen and the second-order Douglas-Kroll-Hess variants. The (atomic) infinite-order decoupling scheme shows little effect on structural parameters in scalar-relativistic calculations; also, the use of a screened nuclear potential in the definition of the projection transformation shows hardly any effect in the context of the present work. Applications to structural and energetic parameters of various systems (diatomics AuH, AuCl, and Au(2), two structural isomers of Ir(4), and uranyl dication UO(2) (2+) solvated by 3-6 water ligands) show that the atomic approximation to the conventional second-order Douglas-Kroll-Hess projection (ADKH) transformation yields highly accurate results at substantial computational savings, in particular, when calculating energy derivatives of larger systems. The size-dependence of the intrinsic error of the ADKH method in extended systems of heavy elements is analyzed for the atomization energies of Pd(n) clusters (n
VIPR: A probabilistic algorithm for analysis of microbial detection microarrays
Holbrook Michael R
2010-07-01
Full Text Available Abstract Background All infectious disease oriented clinical diagnostic assays in use today focus on detecting the presence of a single, well defined target agent or a set of agents. In recent years, microarray-based diagnostics have been developed that greatly facilitate the highly parallel detection of multiple microbes that may be present in a given clinical specimen. While several algorithms have been described for interpretation of diagnostic microarrays, none of the existing approaches is capable of incorporating training data generated from positive control samples to improve performance. Results To specifically address this issue we have developed a novel interpretive algorithm, VIPR (Viral Identification using a PRobabilistic algorithm), which uses Bayesian inference to capitalize on empirical training data to optimize detection sensitivity. To illustrate this approach, we have focused on the detection of viruses that cause hemorrhagic fever (HF) using a custom HF-virus microarray. VIPR was used to analyze 110 empirical microarray hybridizations generated from 33 distinct virus species. An accuracy of 94% was achieved as measured by leave-one-out cross validation. Conclusions VIPR outperformed previously described algorithms for this dataset. The VIPR algorithm has potential to be broadly applicable to clinical diagnostic settings, wherein positive controls are typically readily available for generation of training data.
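The core idea the abstract describes, fitting per-virus likelihoods from training hybridizations and then applying Bayes' rule at test time, can be sketched minimally as follows. The Gaussian intensity model, the virus names, the prior, and all numbers here are illustrative assumptions, not details from the VIPR paper.

```python
import math

# Hypothetical per-virus intensity models, as if learned from positive-control
# training hybridizations (means/stds and names are made up for illustration).
models = {"virus_A": (8.0, 1.0), "virus_B": (3.0, 1.0)}
prior = {"virus_A": 0.5, "virus_B": 0.5}

def log_gauss(x, mean, std):
    # Log density of N(mean, std^2) at x.
    return -0.5 * math.log(2 * math.pi * std**2) - (x - mean)**2 / (2 * std**2)

def posterior(intensity):
    # Bayes' rule: combine the prior with the training-derived likelihoods,
    # working in log space and normalizing at the end for stability.
    logp = {v: math.log(prior[v]) + log_gauss(intensity, m, s)
            for v, (m, s) in models.items()}
    top = max(logp.values())
    w = {v: math.exp(l - top) for v, l in logp.items()}
    z = sum(w.values())
    return {v: w[v] / z for v in w}

post = posterior(7.5)  # a probe intensity close to virus_A's trained profile
```

With training-derived likelihoods in place, the posterior concentrates on the virus whose trained profile best matches the observed intensity.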
Leike, Reimar H
2016-01-01
In Bayesian statistics probability distributions express beliefs. However, for many problems the beliefs cannot be computed analytically and approximations of beliefs are needed. We seek a ranking function that quantifies how "embarrassing" it is to communicate a given approximation. We show that there is only one ranking under the requirements that (1) the best ranked approximation is the non-approximated belief and (2) that the ranking judges approximations only by their predictions for actual outcomes. We find that this ranking is equivalent to the Kullback-Leibler divergence that is frequently used in the literature. However, there seems to be confusion about the correct order in which its functional arguments, the approximated and non-approximated beliefs, should be used. We hope that our elementary derivation settles the apparent confusion. We show for example that when approximating beliefs with Gaussian distributions the optimal approximation is given by moment matching. This is in contrast to many su...
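The asymmetry of the Kullback-Leibler divergence, and the moment-matching result for Gaussian approximations, can both be checked numerically. The sketch below uses the closed-form KL between Gaussians and a toy two-point belief with a grid search; all numbers are illustrative, not from the paper.

```python
import math

def log_gauss(x, m, s):
    # Log density of N(m, s^2) at x.
    return -0.5 * math.log(2 * math.pi * s * s) - (x - m) ** 2 / (2 * s * s)

def kl_gauss(m1, s1, m2, s2):
    # KL( N(m1, s1^2) || N(m2, s2^2) ), closed form.
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# The divergence is not symmetric, so the order of its arguments matters.
forward = kl_gauss(0.0, 1.0, 1.0, 2.0)   # KL(p || q)
reverse = kl_gauss(1.0, 2.0, 0.0, 1.0)   # KL(q || p)

# Moment matching: for a two-point belief p on {-1, +1} (mean 0, variance 1),
# the Gaussian q maximizing E_p[log q] -- i.e. minimizing KL(p || q) -- is
# found by grid search at exactly the moment-matched parameters (m=0, s=1).
points = [-1.0, 1.0]
best = max(((m / 10, s / 10) for m in range(-10, 11) for s in range(5, 21)),
           key=lambda ms: sum(log_gauss(x, ms[0], ms[1]) for x in points))
```

The grid search lands on the mean and standard deviation of the belief itself, illustrating the moment-matching claim for this simple case.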
TANG Gong-you; WANG Hai-hong
2005-01-01
A successive approximation approach for designing optimal controllers is presented for discrete linear time-delay systems with a quadratic performance index. By using the successive approximation approach, the original optimal control problem is transformed into a sequence of nonhomogeneous linear two-point boundary value (TPBV) problems without time-delay and time-advance terms. The optimal control law obtained consists of an accurate feedback term and a time-delay compensation term which is the limit of the solution sequence of the adjoint equations. By using a finite-step iteration of the time-delay compensation term of the optimal solution sequence, a suboptimal control law is obtained. Simulation examples are employed to test the validity of the proposed approach.
Subtype Identification of Avian Influenza Virus on DNA Microarray
WANG Xiu-rong; YU Kang-zhen; DENG Guo-hua; SHI Rui; LIU Li-ling; QIAO Chuan-ling; BAO Hong-mei; KONG Xian-gang; CHEN Hua-lan
2005-01-01
We have developed a rapid microarray-based assay for the reliable detection of the H5, H7 and H9 subtypes of avian influenza virus (AIV). The strains used in the experiment were A/Goose/Guangdong/1/96 (H5N1), A/African starling/983/79 (H7N1) and A/Turkey/Wisconsin/1/66 (H9N2). The capture DNA clones, encoding approximately 500-bp avian influenza virus gene fragments obtained by RT-PCR, were spotted on a slide-bound microarray. Cy5-labeled fluorescent cDNAs, generated from virus RNA during reverse transcription, were hybridized to these capture DNAs. The capture DNAs contained multiple fragments of the hemagglutinin and matrix protein genes of AIV, for subtyping and typing AIV respectively. The arrays were scanned to determine the probe binding sites. The hybridization pattern agreed approximately with the known grid location of each target. The results show that DNA microarray technology provides a useful diagnostic method for AIV.
Surface characterization of carbohydrate microarrays.
Scurr, David J; Horlacher, Tim; Oberli, Matthias A; Werz, Daniel B; Kroeck, Lenz; Bufali, Simone; Seeberger, Peter H; Shard, Alexander G; Alexander, Morgan R
2010-11-16
Carbohydrate microarrays are essential tools to determine the biological function of glycans. Here, we analyze a glycan array by time-of-flight secondary ion mass spectrometry (ToF-SIMS) to gain a better understanding of the physicochemical properties of the individual spots and to improve carbohydrate microarray quality. The carbohydrate microarray is prepared by piezo printing of thiol-terminated sugars onto a maleimide functionalized glass slide. The hyperspectral ToF-SIMS imaging data are analyzed by multivariate curve resolution (MCR) to discern secondary ions from regions of the array containing saccharide, linker, salts from the printing buffer, and the background linker chemistry. Analysis of secondary ions from the linker common to all of the sugar molecules employed reveals a relatively uniform distribution of the sugars within the spots formed from solutions with saccharide concentration of 0.4 mM and less, whereas a doughnut shape is often formed at higher-concentration solutions. A detailed analysis of individual spots reveals that in the larger spots the phosphate buffered saline (PBS) salts are heterogeneously distributed, apparently resulting in saccharide concentrated at the rim of the spots. A model of spot formation from the evaporating sessile drop is proposed to explain these observations. Saccharide spot diameters increase with saccharide concentration due to a reduction in surface tension of the saccharide solution compared to PBS. The multivariate analytical partial least squares (PLS) technique identifies ions from the sugars that in the complex ToF-SIMS spectra correlate with the binding of galectin proteins.
Bamgbose, M. K.; Adebambo, P. O.; Badmus, B. S.; Dare, E. O.; Akinlami, J. O.; Adebayo, G. A.
2016-08-01
Detailed first-principles calculations of properties of the zinc blende quaternary alloy BxAlyIn1-x-yN at various concentrations are investigated using density functional theory (DFT) within the virtual crystal approximation (VCA) implemented via alchemical mixing. The calculated bandgaps show direct transitions at Γ-Γ and indirect transitions at Γ-X, which are opened by increasing boron concentration. The density of states (DOS) revealed that the upper valence band (VB1) is dominated by p states, while s states dominate the lower valence band (VB2); the DOS also shows the contribution of d states to the conduction band. The first critical point in the dielectric constant ranges between 0.07-4.47 eV and is due to the first threshold optical transitions in the energy bandgap. The calculated static dielectric function (DF) 𝜖1(0) is between 5.15 and 10.35, an indication that small energy bandgaps yield large static DFs. The present results indicate that ZB-BxAlyIn1-x-yN alloys are suitable candidates for deep-ultraviolet light-emitting diodes (LEDs), laser diodes (LDs) and modern solar cells, since the concentrations x and y make the bandgap and lattice constant of ZB-BxAlyIn1-x-yN quaternary alloys tunable to desirable values.
Spotting effect in microarray experiments
Mary-Huard Tristan
2004-05-01
Full Text Available Abstract Background Microarray data must be normalized because they suffer from multiple biases. We have identified a source of spatial experimental variability that significantly affects data obtained with Cy3/Cy5 spotted glass arrays. It yields a periodic pattern altering both signal (Cy3/Cy5 ratio) and intensity across the array. Results Using the variogram, a geostatistical tool, we characterized the observed variability, called here the spotting effect because it most probably arises during steps in the array printing procedure. Conclusions The spotting effect is not appropriately corrected by current normalization methods, even by those addressing spatial variability. Importantly, the spotting effect may alter differential and clustering analyses.
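The variogram the authors use is a standard geostatistical diagnostic: it plots the semivariance of paired measurements against their separation distance, so spatially structured variability shows up as distance-dependent behaviour. A minimal sketch on synthetic data (the spot layout, values, and bin choices here are illustrative, not the paper's code):

```python
import numpy as np

def empirical_variogram(values, coords, bin_edges):
    # Semivariance gamma(h) = 0.5 * mean[(z_i - z_j)^2] over point pairs
    # whose separation distance falls in each distance bin.
    n = len(values)
    dists, halfsq = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(np.linalg.norm(coords[i] - coords[j]))
            halfsq.append(0.5 * (values[i] - values[j]) ** 2)
    dists, halfsq = np.array(dists), np.array(halfsq)
    idx = np.digitize(dists, bin_edges)
    return np.array([halfsq[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(1, len(bin_edges))])

# A linear spatial trend yields a monotonically increasing variogram.
coords = np.arange(10.0).reshape(-1, 1)   # 10 spots along a line
values = coords.ravel()                   # intensity follows position
gamma = empirical_variogram(values, coords, [0, 2, 4, 6, 8, 10])
```

A flat variogram indicates spatially unstructured noise; a rising or periodic one, as reported in the paper, indicates spatial structure that normalization should account for.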
Ontology-Based Analysis of Microarray Data.
Giuseppe, Agapito; Milano, Marianna
2016-01-01
The importance of semantic-based methods and algorithms for the analysis and management of biological data is growing for two main reasons. From a biological side, the knowledge contained in ontologies is increasingly accurate and complete; from a computational side, recent algorithms make valuable use of such knowledge. Here we focus on semantic-based management and analysis of protein interaction networks, referring to all the approaches for analyzing protein-protein interaction data that use knowledge encoded in biological ontologies. Semantic approaches for studying high-throughput data have been largely used in the past to mine genomic and expression data. Recently, the emergence of network approaches for investigating molecular machineries has in parallel stimulated the introduction of semantic-based techniques for the analysis and management of network data. Applying these computational approaches to the study of microarray data can broaden their application scope and simultaneously help the understanding of disease development and progression.
Rašin, Andrija
1994-01-01
We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.
On Element SDD Approximability
Avron, Haim; Toledo, Sivan
2009-01-01
This short communication shows that in some cases scalar elliptic finite element matrices cannot be approximated well by an SDD matrix. We also give a theoretical analysis of a simple heuristic method for approximating an element by an SDD matrix.
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a
FPGA based system for automatic cDNA microarray image processing.
Belean, Bogdan; Borda, Monica; Le Gal, Bertrand; Terebes, Romulus
2012-07-01
Automation is an open subject in DNA microarray image processing, aiming at reliable gene expression estimation. The paper presents a novel shock-filter-based approach for automatic microarray grid alignment. The proposed method brings significantly reduced computational complexity compared to state-of-the-art approaches, while similar results in terms of accuracy are achieved. Based on this approach, we also propose an FPGA-based system for microarray image analysis that eliminates the shortcomings of existing software platforms: user intervention, increased computational time and cost. Our system includes application-specific architectures which involve algorithm parallelization, aiming at fast and automated cDNA microarray image processing. The proposed automated image processing chain is implemented both on a general-purpose processor and using the developed hardware architectures as co-processors in an FPGA-based system. The comparative results included in the last section show that an important gain in terms of computational time is obtained using hardware-based implementations.
Living Cell Microarrays: An Overview of Concepts
Rebecca Jonczyk
2016-05-01
Full Text Available Living cell microarrays are a highly efficient cellular screening system. Due to the low number of cells required per spot, cell microarrays enable the use of primary and stem cells and provide resolution close to the single-cell level. Apart from a variety of conventional static designs, microfluidic microarray systems have also been established. An alternative format is a microarray consisting of three-dimensional cell constructs ranging from cell spheroids to cells encapsulated in hydrogel. These systems provide an in vivo-like microenvironment and are preferably used for the investigation of cellular physiology, cytotoxicity, and drug screening. Thus, many different high-tech microarray platforms are currently available. Disadvantages of many systems include their high cost, the requirement of specialized equipment for their manufacture, and the poor comparability of results between different platforms. In this article, we provide an overview of static, microfluidic, and 3D cell microarrays. In addition, we describe a simple method for the printing of living cell microarrays on modified microscope glass slides using standard DNA microarray equipment available in most laboratories. Applications in research and diagnostics are discussed, e.g., the selective and sensitive detection of biomarkers. Finally, we highlight current limitations and the future prospects of living cell microarrays.
Dimension reduction methods for microarray data: a review
Rabia Aziz
2017-03-01
Full Text Available Dimension reduction has become inevitable for pre-processing of high-dimensional data. “Gene expression microarray data” is an instance of such high-dimensional data. Gene expression microarray data displays the maximum number of genes (features) simultaneously at a molecular level with a very small number of samples. The copious numbers of genes are usually provided to a learning algorithm for producing a complete characterization of the classification task. However, most of the time the majority of the genes are irrelevant or redundant to the learning task. This deteriorates the learning accuracy and training speed and leads to the problem of overfitting. Thus, dimension reduction of microarray data is a crucial preprocessing step for prediction and classification of disease. Various feature selection and feature extraction techniques have been proposed in the literature to identify the genes that have a direct impact on the various machine learning algorithms for classification and to eliminate the remaining ones. This paper describes the taxonomy of dimension reduction methods with their characteristics, evaluation criteria, advantages and disadvantages. It also presents a review of numerous dimension reduction approaches for microarray data, mainly those methods that have been proposed over the past few years.
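As a concrete instance of the feature-extraction branch of dimension reduction, here is a minimal PCA sketch on a synthetic expression matrix. The dimensions, the latent-factor construction, and all numbers are made up for illustration; this is not code from the review.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "microarray": 20 samples x 1000 genes driven by 2 latent factors,
# mimicking many features but few samples.
factors = rng.normal(size=(20, 2))
loadings = rng.normal(size=(2, 1000))
X = factors @ loadings + 0.1 * rng.normal(size=(20, 1000))

# PCA via SVD: project samples onto the top principal components.
Xc = X - X.mean(axis=0)                  # center each gene
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)          # variance explained per component
Z = Xc @ Vt[:2].T                        # 20 samples, now only 2 features
```

Because the synthetic data has two dominant latent factors, the first two components capture nearly all the variance, reducing 1000 gene features to 2 per sample before any classifier is trained.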
A methodology for global validation of microarray experiments
Sladek Robert
2006-07-01
Full Text Available Abstract Background DNA microarrays are popular tools for measuring gene expression of biological samples. This ever increasing popularity is ensuring that a large number of microarray studies are conducted, many of which with data publicly available for mining by other investigators. Under most circumstances, validation of differential expression of genes is performed on a gene to gene basis. Thus, it is not possible to generalize validation results to the remaining majority of non-validated genes or to evaluate the overall quality of these studies. Results We present an approach for the global validation of DNA microarray experiments that will allow researchers to evaluate the general quality of their experiment and to extrapolate validation results of a subset of genes to the remaining non-validated genes. We illustrate why the popular strategy of selecting only the most differentially expressed genes for validation generally fails as a global validation strategy and propose random-stratified sampling as a better gene selection method. We also illustrate shortcomings of often-used validation indices such as overlap of significant effects and the correlation coefficient and recommend the concordance correlation coefficient (CCC as an alternative. Conclusion We provide recommendations that will enhance validity checks of microarray experiments while minimizing the need to run a large number of labour-intensive individual validation assays.
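The concordance correlation coefficient (CCC) the authors recommend has a simple closed form; a minimal sketch of the standard Lin's CCC formula (not code from the paper), with data invented to show why it is stricter than Pearson's r:

```python
import numpy as np

def ccc(x, y):
    # Lin's concordance correlation coefficient: unlike Pearson's r, it
    # penalizes systematic location and scale shifts, not just scatter.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()           # population covariance
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

a = np.array([1.0, 2.0, 3.0, 4.0])
perfect = ccc(a, a)          # exact agreement
shifted = ccc(a, a + 1.0)    # perfectly correlated but systematically biased
```

A constant shift leaves Pearson's r at 1 but pulls the CCC below 1, which is exactly the property that makes it a better validation index when microarray and validation-assay measurements disagree in level, not just in ordering.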
Electrostatic readout of DNA microarrays with charged microspheres
Clack, Nathan G. [Univ. of California, Berkeley, CA (United States). Biophysics Graduate Group; Salaita, Khalid [Univ. of California, Berkeley, CA (United States). Department of Chemistry; Groves, Jay T. [Univ. of California, Berkeley, CA (United States). Biophysics Graduate Group and Department of Chemistry
2008-06-29
DNA microarrays are used for gene-expression profiling, single-nucleotide polymorphism detection and disease diagnosis. A persistent challenge in this area is the lack of microarray screening technology suitable for integration into routine clinical care. In this paper, we describe a method for sensitive and label-free electrostatic readout of DNA or RNA hybridization on microarrays. The electrostatic properties of the microarray are measured from the position and motion of charged microspheres randomly dispersed over the surface. We demonstrate nondestructive electrostatic imaging with 10-μm lateral resolution over centimeter-length scales, which is four orders of magnitude larger than that achievable with conventional scanning electrostatic force microscopy. Changes in surface charge density as a result of specific hybridization can be detected and quantified with 50-pM sensitivity, single base-pair mismatch selectivity and in the presence of complex background. Lastly, because the naked eye is sufficient to read out hybridization, this approach may facilitate broad application of multiplexed assays.
Approximation of distributed delays
Lu, Hao; Eberard, Damien; Simon, Jean-Pierre
2010-01-01
We address in this paper the approximation problem of distributed delays. Such elements are convolution operators with kernel having bounded support, and appear in the control of time-delay systems. From the rich literature on this topic, we propose a general methodology to achieve such an approximation. For this, we enclose the approximation problem in the graph topology, and work with the norm defined over the convolution Banach algebra. The class of rational approximates is described, and a constructive approximation is proposed. Analysis in time and frequency domains is provided. This methodology is illustrated on the stabilization control problem, for which simulation results show the effectiveness of the proposed methodology.
Sparse approximation with bases
2015-01-01
This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications. The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...
Reinforcement Learning via AIXI Approximation
Veness, Joel; Hutter, Marcus; Silver, David
2010-01-01
This paper introduces a principled approach for the design of a scalable general reinforcement learning agent. This approach is based on a direct approximation of AIXI, a Bayesian optimality notion for general reinforcement learning agents. Previously, it has been unclear whether the theory of AIXI could motivate the design of practical algorithms. We answer this hitherto open question in the affirmative, by providing the first computationally feasible approximation to the AIXI agent. To develop our approximation, we introduce a Monte Carlo Tree Search algorithm along with an agent-specific extension of the Context Tree Weighting algorithm. Empirically, we present a set of encouraging results on a number of stochastic, unknown, and partially observable domains.
Approximating Graphic TSP by Matchings
Mömke, Tobias
2011-01-01
We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), the approach yields a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted to a class of graphs that contains degree three bounded and claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4/3. The framework allows for generalizations in a natural way and also leads to a 1.586-approximation algorithm for the traveling salesman path problem on graphic metrics where the start and end vertices are prespecified.
Differential splicing using whole-transcript microarrays
Robinson Mark D
2009-05-01
Full Text Available Abstract Background The latest generation of Affymetrix microarrays is designed to interrogate expression over the entire length of every locus, thus giving the opportunity to study alternative splicing genome-wide. The Exon 1.0 ST (sense target) platform, with versions for Human, Mouse and Rat, is designed primarily to probe every known or predicted exon. The smaller Gene 1.0 ST array is designed as an expression microarray but still interrogates expression with probes along the full length of each well-characterized transcript. We explore the possibility of using the Gene 1.0 ST platform to identify differential splicing events. Results We propose a strategy to score differential splicing by using the auxiliary information from fitting the statistical model, RMA (robust multichip analysis). RMA partitions the probe-level data into probe effects and expression levels, operating robustly so that if a small number of probes behave differently than the rest, they are downweighted in the fitting step. We argue that adjacent poorly fitting probes for a given sample can be evidence of differential splicing and have designed a statistic to search for this behaviour. Using a public tissue panel dataset, we show many examples of tissue-specific alternative splicing. Furthermore, we show that evidence for putative alternative splicing has a strong correspondence between the Gene 1.0 ST and Exon 1.0 ST platforms. Conclusion We propose a new approach, FIRMAGene, to search for differentially spliced genes using the Gene 1.0 ST platform. Such an analysis complements the search for differential expression. We validate the method by illustrating several known examples and we note some of the challenges in interpreting the probe-level data. Software implementing our methods is freely available as an R package.
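The partition of probe-level data into probe effects and expression levels that RMA performs is classically done by median polish; a simplified sketch of that additive fit (illustrative, not the FIRMAGene or RMA implementation):

```python
import numpy as np

def median_polish(mat, iters=10):
    # Robustly fit: log-intensity = overall + probe effect + sample effect.
    # Large leftover residuals flag probes that do not fit the model --
    # the kind of auxiliary information a splicing score can exploit.
    resid = np.array(mat, dtype=float)
    row = np.zeros(resid.shape[0])      # probe effects
    col = np.zeros(resid.shape[1])      # sample (chip) effects
    overall = 0.0
    for _ in range(iters):
        rm = np.median(resid, axis=1)   # sweep out row medians
        row += rm
        resid -= rm[:, None]
        cm = np.median(resid, axis=0)   # sweep out column medians
        col += cm
        resid -= cm[None, :]
    for eff in (row, col):              # move shared level into overall term
        m = np.median(eff)
        overall += m
        eff -= m
    return overall, row, col, resid

# Purely additive data is recovered exactly.
probe = np.array([-1.0, 0.0, 1.0])
sample = np.array([-1.0, 0.0, 0.0, 1.0])
data = 5.0 + probe[:, None] + sample[None, :]
overall, row, col, resid = median_polish(data)
```

On real probe-level data the residual matrix is not zero, and a run of large residuals for adjacent probes in one sample is precisely the pattern the paper's statistic searches for.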
Approximate Reanalysis in Topology Optimization
Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole
2009-01-01
In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2012-05-01
Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multifocusing, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approaches. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle and, thus, derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms include limitations for horizontally traveling waves; however, we can readily enforce stability constraints to avoid such singularities. In fact, another expansion over reflection angle can help us avoid these singularities by requiring the source and receiver velocities to be different. On the other hand, expansions in terms of reflection angles result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.
Romualdi, Chiara; Trevisan, Silvia; Celegato, Barbara; Costa, Germano; Lanfranchi, Gerolamo
2003-01-01
The variability of results in microarray technology is in part due to the fact that independent scans of a single hybridised microarray give spot images that are not quite the same. To solve this problem and turn it to our advantage, we introduced the approach of multiple scanning and of image integration of microarrays. To this end, we have developed specific software that creates a virtual image that statistically summarises a series of consecutive scans of a microarray. We provide evidence that the use of multiple imaging (i) enhances the detection of differentially expressed genes; (ii) increases the image homogeneity; and (iii) reveals false-positive results such as differentially expressed genes that are detected by a single scan but not confirmed by successive scanning replicates. The increase in the final number of differentially expressed genes detected in a microarray experiment with this approach is remarkable; 50% more for microarrays hybridised with targets labelled by reverse transcriptase, and 200% more for microarrays developed with the tyramide signal amplification (TSA) technique. The results have been confirmed by semi-quantitative RT–PCR tests. PMID:14627839
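One simple way to build a "virtual image" that statistically summarises consecutive scans is a robust pixel-wise median across the scan stack; the sketch below illustrates the idea on synthetic scans and is not the authors' software, whose exact summary statistic is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.uniform(100.0, 1000.0, size=(64, 64))   # underlying spot image
# Five consecutive scans: same scene plus scan-to-scan noise, and a
# single-scan artifact (a "false positive" present in one scan only).
scans = [truth + rng.normal(0.0, 20.0, truth.shape) for _ in range(5)]
scans[2][10, 10] += 5000.0                          # hot pixel in scan 3 only

# Virtual image: robust pixel-wise summary across the scan stack.
virtual = np.median(np.stack(scans), axis=0)
```

The artifact dominates the single scan it appears in but is suppressed in the virtual image, which mirrors the paper's observation that multiple imaging reveals false positives detected by one scan and not confirmed by its replicates.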
Mallén, Maria; Díaz-González, María; Bonilla, Diana; Salvador, Juan P; Marco, María P; Baldi, Antoni; Fernández-Sánchez, César
2014-06-17
Low-density protein microarrays are emerging tools in diagnostics whose deployment could be primarily limited by the cost of fluorescence detection schemes. This paper describes an electrical readout system of microarrays comprising an array of gold interdigitated microelectrodes and an array of polydimethylsiloxane microwells, which enabled multiplexed detection of up to thirty-six biological events on the same substrate. Like its fluorescent readout counterparts, the microarray can be developed on disposable glass slide substrates. However, unlike them, the presented approach is compact and requires simple and inexpensive instrumentation. The system makes use of urease-labeled affinity reagents for developing the microarrays and is based on the detection of conductivity changes taking place when ionic species are generated in solution by the catalytic hydrolysis of urea. The use of a polydimethylsiloxane microwell array facilitates the positioning of the measurement solution on every spot of the microarray. It also keeps each well liquid-tight and isolated from the surrounding ones during the microarray readout process, thereby avoiding the evaporation and chemical cross-talk effects that were shown to affect the sensitivity and reliability of the system. The performance of the system is demonstrated by carrying out the readout of a microarray for the anabolic androgenic steroid hormone boldenone. Analytical results are comparable to those obtained by fluorescent scanner detection approaches. The estimated detection limit is 4.0 ng mL(-1), below the threshold value set by the World Anti-Doping Agency and the European Community.
Investigating amoebic pathogenesis using Entamoeba histolytica DNA microarrays
Upinder Singh; Preetam Shah
2002-11-01
Entamoeba histolytica, a protozoan parasite, causes diarrhea and liver abscesses resulting in 50 million cases of infection worldwide annually. Elucidation of parasite virulence determinants has recently been investigated using genetic approaches. We have undertaken a genomics approach to identify novel virulence determinants in the parasite. A DNA microarray of E. histolytica is being developed based on sequenced genomic clones from the genome sequencing efforts of The Institute of Genomic Research (TIGR) and the Sanger Center. Hybridization of the slides with samples labelled differentially using fluorescent dyes allows the characterization of transcriptional profiles of genes under the biological conditions tested. Additionally, a genome-wide comparison of E. histolytica and E. dispar can be undertaken. The development of an E. histolytica microarray will be outlined and its uses in identifying novel virulence determinants and characterizing amoebic biology will be discussed.
Beaudoing Emmanuel
2006-09-01
Full Text Available Abstract Background High throughput gene expression profiling (GEP) is becoming a routine technique in life science laboratories. With experimental designs that repeatedly span thousands of genes and hundreds of samples, relying on a dedicated database infrastructure is no longer an option. GEP technology is a fast moving target, with new approaches constantly broadening the field diversity. This technology heterogeneity, compounded by the informatics complexity of GEP databases, means that software developments have so far focused on mainstream techniques, leaving less typical yet established techniques such as Nylon microarrays at best partially supported. Results MAF (MicroArray Facility) is the laboratory database system we have developed for managing the design, production and hybridization of spotted microarrays. Although it can support the widely used glass microarrays and oligo-chips, MAF was designed with the specific idiosyncrasies of Nylon based microarrays in mind. Notably single channel radioactive probes, microarray stripping and reuse, vector control hybridizations and spike-in controls are all natively supported by the software suite. MicroArray Facility is MIAME supportive and dynamically provides feedback on missing annotations to help users estimate effective MIAME compliance. Genomic data such as clone identifiers and gene symbols are also directly annotated by MAF software using standard public resources. The MAGE-ML data format is implemented for full data export. Journalized database operations (audit tracking), data anonymization, material traceability and user/project level confidentiality policies are also managed by MAF. Conclusion MicroArray Facility is a complete data management system for microarray producers and end-users. Particular care has been devoted to adequately model Nylon based microarrays. The MAF system, developed and implemented in both private and academic environments, has proved a robust solution for
唐功友; 孙亮
2005-01-01
The optimal control problem for nonlinear interconnected large-scale dynamic systems is considered. A successive approximation approach for designing the optimal controller is proposed with respect to quadratic performance indexes. By using the approach, the high-order, coupled, nonlinear two-point boundary value (TPBV) problem is transformed into a sequence of linear, decoupled TPBV problems. It is proven that the TPBV problem sequence uniformly converges to the optimal control for nonlinear interconnected large-scale systems. A suboptimal control law is obtained by using a finite iterative result of the optimal control sequence.
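As a sketch of the setting (notation is mine, not from the abstract), a quadratic performance index for such a system typically takes the form

```latex
J = \frac{1}{2}\, x^{T}(t_f)\, Q_f\, x(t_f)
  + \frac{1}{2} \int_{t_0}^{t_f} \left[ x^{T}(t)\, Q\, x(t) + u^{T}(t)\, R\, u(t) \right] \mathrm{d}t,
\qquad Q,\, Q_f \succeq 0, \quad R \succ 0,
```

and applying the minimum principle to the nonlinear interconnected dynamics yields the coupled nonlinear TPBV problem that the successive approximation scheme replaces with a sequence of linear, decoupled ones.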
Gel-forming reagents and uses thereof for preparing microarrays
Golova, Julia; Chernov, Boris; Perov, Alexander
2010-11-09
New gel-forming reagents including monomers and cross-linkers, which can be applied to gel-drop microarray manufacturing by using co-polymerization approaches are disclosed. Compositions for the preparation of co-polymerization mixtures with new gel-forming monomers and cross-linker reagents are described herein. New co-polymerization compositions and cross-linkers with variable length linker groups between unsaturated C=C bonds that participate in the formation of gel networks are disclosed.
Feldman, G.; Fulton, T.; Liaw, S. S.; Lindgren, I.
1990-02-01
The results from two approaches, the coupled cluster expansion (CCA) and the Green's function formalism (GFF), are compared in perturbation theory. The atoms discussed consist of a nondegenerate core, plus or minus two electrons (two-particle (2P) and two-hole (2H) systems), such that the resulting atoms also have nondegenerate ground states. The specific cases considered are the He++-He pair through third order and, very briefly, the Be++-Be pair in second order. The corresponding 2-electron non-relativistic (NR) Bethe-Salpeter (BS) Green's functions are 0-, 2-, or 4-electron (rather than just 0-electron, i.e., vacuum) expectation values. The general equivalence of the various approaches is demonstrated in detail for He: On the one hand, the 2P and 2H versions of the CCA are shown to give the same result for the ground state energy in these orders, provided the same atom is described in both versions. On the other hand, the CCA and the GFF are shown to yield equal results.
Pineal function: impact of microarray analysis
Klein, David C; Bailey, Michael J; Carter, David A
2009-01-01
Microarray analysis has provided a new understanding of pineal function by identifying genes that are highly expressed in this tissue relative to other tissues and also by identifying over 600 genes that are expressed on a 24-h schedule. This effort has highlighted surprising similarity...... foundation that microarray analysis has provided will broadly support future research on pineal function....
The EADGENE Microarray Data Analysis Workshop
Koning, de D.J.; Jaffrezic, F.; Lund, M.S.; Watson, M.; Channing, C.; Hulsegge, B.; Pool, M.H.; Buitenhuis, B.; Hedegaard, J.; Hornshoj, H.; Sorensen, P.; Marot, G.; Delmas, C.; Lê Cao, K.A.; San Cristobal, M.; Baron, M.D.; Malinverni, R.; Stella, A.; Brunner, R.M.; Seyfert, H.M.; Jensen, K.; Mouzaki, D.; Waddington, D.; Jiménez-Marín, A.; Perez-Alegre, M.; Perez-Reinado, E.; Closset, R.; Detilleux, J.C.; Dovc, P.; Lavric, M.; Nie, H.; Janss, L.
2007-01-01
Microarray analyses have become an important tool in animal genomics. While their use is becoming widespread, there is still a lot of ongoing research regarding the analysis of microarray data. In the context of a European Network of Excellence, 31 researchers representing 14 research groups from 10
Approximation techniques for engineers
Komzsik, Louis
2006-01-01
Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data or continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.
Achieser, N I
2004-01-01
A pioneer of many modern developments in approximation theory, N. I. Achieser designed this graduate-level text from the standpoint of functional analysis. The first two chapters address approximation problems in linear normalized spaces and the ideas of P. L. Tchebysheff. Chapter III examines the elements of harmonic analysis, and Chapter IV, integral transcendental functions of the exponential type. The final two chapters explore the best harmonic approximation of functions and Wiener's theorem on approximation. Professor Achieser concludes this exemplary text with an extensive section of pr
Seeded Bayesian Networks: Constructing genetic networks from microarray data
Quackenbush John
2008-07-01
Full Text Available Abstract Background DNA microarrays and other genomics-inspired technologies provide large datasets that often include hidden patterns of correlation between genes reflecting the complex processes that underlie cellular metabolism and physiology. The challenge in analyzing large-scale expression data has been to extract biologically meaningful inferences regarding these processes – often represented as networks – in an environment where the datasets are often imperfect and biological noise can obscure the actual signal. Although many techniques have been developed in an attempt to address these issues, to date their ability to extract meaningful and predictive network relationships has been limited. Here we describe a method that draws on prior information about gene-gene interactions to infer biologically relevant pathways from microarray data. Our approach consists of using preliminary networks derived from the literature and/or protein-protein interaction data as seeds for a Bayesian network analysis of microarray results. Results Through a bootstrap analysis of gene expression data derived from a number of leukemia studies, we demonstrate that seeded Bayesian Networks have the ability to identify high-confidence gene-gene interactions which can then be validated by comparison to other sources of pathway data. Conclusion The use of network seeds greatly improves the ability of Bayesian Network analysis to learn gene interaction networks from gene expression data. We demonstrate that the use of seeds derived from the biomedical literature or high-throughput protein-protein interaction data, or the combination, provides improvement over a standard Bayesian Network analysis, allowing networks involving dynamic processes to be deduced from the static snapshots of biological systems that represent the most common source of microarray data. Software implementing these methods has been included in the widely used TM4 microarray analysis package.
Workflows for microarray data processing in the Kepler environment
Stropp Thomas
2012-05-01
traditional shell scripting or R/BioConductor scripting approaches to pipeline design. Finally, we suggest that microarray data processing task workflows may provide a basis for future example-based comparison of different workflow systems. Conclusions We provide a set of tools and complete workflows for microarray data analysis in the Kepler environment, which has the advantages of offering graphical, clear display of conceptual steps and parameters and the ability to easily integrate other resources such as remote data and web services.
High quality protein microarray using in situ protein purification
Fleischmann Robert D
2009-08-01
protein solubility and denaturation problems caused by buffer exchange steps and freeze-thaw cycles, which are associated with resin-based purification, intermittent protein storage and deposition on microarrays. Conclusion An optimized platform for in situ protein purification on microarray slides using His-tagged recombinant proteins is a desirable tool for the screening of novel protein functions and protein-protein interactions. In the context of immunoproteomics, such protein microarrays are complementary to approaches using non-recombinant methods to discover and characterize bacterial antigens.
Rational approximation of vertical segments
Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte
2007-08-01
In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
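The interval condition described above can be sketched as follows (notation is mine): one seeks a rational function $r = p/q$ whose graph passes through every vertical uncertainty segment,

```latex
\ell_i \;\le\; r(x_i) \;=\; \frac{p(x_i)}{q(x_i)} \;\le\; u_i, \qquad i = 0, \dots, n,
```

which, under the normalization $q(x_i) > 0$, becomes the linear constraints $p(x_i) - \ell_i\, q(x_i) \ge 0$ and $u_i\, q(x_i) - p(x_i) \ge 0$ in the coefficients of $p$ and $q$; minimizing a strictly convex quadratic objective subject to these linear constraints is what reduces the problem to the quadratic program with a unique solution mentioned in the abstract.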
Statistical tests for differential expression in cDNA microarray experiments
Cui, Xiangqin; Churchill, Gary A.
2003-01-01
Extracting biological information from microarray data requires appropriate statistical methods. The simplest statistical method for detecting differential expression is the t test, which can be used to compare two conditions when there is replication of samples. With more than two conditions, analysis of variance (ANOVA) can be used, and the mixed ANOVA model is a general and powerful approach for microarray experiments with multiple factors and/or several sources of variation.
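As a minimal illustration of the per-gene t test described above (the simulated data and threshold are my own, not from the article), the two-condition comparison can be sketched as:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated expression matrix: 100 genes x 10 samples (5 per condition).
# Gene 0 is given a large artificial shift between the two groups.
expr = rng.normal(0.0, 1.0, size=(100, 10))
expr[0, 5:] += 5.0

group_a, group_b = expr[:, :5], expr[:, 5:]

# Two-sample t test applied gene by gene (axis=1 runs one test per row).
t, p = stats.ttest_ind(group_a, group_b, axis=1)

# Genes with small p-values are candidates for differential expression.
candidates = np.where(p < 0.01)[0]
print(0 in candidates)
```

With replication in each condition this yields one statistic per gene; for more than two conditions, the same matrix layout feeds directly into an ANOVA instead.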
Ontology-based, Tissue MicroArray oriented, image centered tissue bank
Viti Federica; Merelli Ivan; Caprera Andrea; Lazzari Barbara; Stella Alessandra; Milanesi Luciano
2008-01-01
Abstract Background Tissue MicroArray technique is becoming increasingly important in pathology for the validation of experimental data from transcriptomic analysis. This approach produces many images which need to be properly managed, if possible with an infrastructure able to support tissue sharing between institutes. Moreover, the available frameworks oriented to Tissue MicroArray provide good storage for clinical patient, sample treatment and block construction information, but their util...
Expectation Consistent Approximate Inference
Opper, Manfred; Winther, Ole
2005-01-01
We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...
Approximate Modified Policy Iteration
Scherrer, Bruno; Ghavamzadeh, Mohammad; Geist, Matthieu
2012-01-01
Modified policy iteration (MPI) is a dynamic programming (DP) algorithm that contains the two celebrated policy and value iteration methods. Despite its generality, MPI has not been thoroughly studied, especially its approximation form, which is used when the state and/or action spaces are large or infinite. In this paper, we propose three approximate MPI (AMPI) algorithms that are extensions of the well-known approximate DP algorithms: fitted-value iteration, fitted-Q iteration, and classification-based policy iteration. We provide an error propagation analysis for AMPI that unifies those for approximate policy and value iteration. We also provide a finite-sample analysis for the classification-based implementation of AMPI (CBMPI), which is more general than (and in a sense subsumes) the analyses of the other presented AMPI algorithms. An interesting observation is that the MPI's parameter allows us to control the balance of errors (in value function approximation and in estimating the greedy policy) in the fina...
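A toy sketch of the exact MPI scheme the abstract builds on (the 2-state MDP and parameter values here are mine, for illustration only): each iteration takes a greedy step and then applies the Bellman operator for the current policy m times, so m=1 recovers value iteration and large m approaches policy iteration.

```python
import numpy as np

# Tiny 2-state, 2-action MDP: P[a, s, s'] transition probabilities, R[a, s] rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0
              [[0.1, 0.9], [0.7, 0.3]]])  # action 1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9

def modified_policy_iteration(m, iters=200):
    V = np.zeros(2)
    for _ in range(iters):
        # Greedy step: Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] V[s'].
        Q = R + gamma * np.einsum('ast,t->as', P, V)
        pi = Q.argmax(axis=0)
        # Partial evaluation: apply the Bellman operator for pi exactly m times.
        for _ in range(m):
            V = np.array([R[pi[s], s] + gamma * P[pi[s], s] @ V for s in range(2)])
    return V, pi

# m=1 is value iteration, m=20 is close to full policy iteration;
# both converge to the same optimal value function.
V1, pi1 = modified_policy_iteration(m=1)
V20, pi20 = modified_policy_iteration(m=20)
print(np.allclose(V1, V20, atol=1e-6))
```

The approximate variants discussed in the paper replace the exact greedy and evaluation steps with fitted (regression- or classification-based) counterparts.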
Interpreting microarray data to build models of microbial genetic regulation networks
Sokhansanj, Bahrad A.; Garnham, Janine B.; Fitch, J. Patrick
2002-06-01
Microarrays and DNA chips are an efficient, high-throughput technology for measuring temporal changes in the expression of messenger RNA (mRNA) from thousands of genes (often the entire genome of an organism) in a single experiment. A crucial drawback of microarray experiments is that results are inherently qualitative: data are generally neither quantitatively repeatable, nor may microarray spot intensities be calibrated to in vivo mRNA concentrations. Nevertheless, microarrays represent by far the cheapest and fastest way to obtain information about a cell's global genetic regulatory networks. Besides poor signal characteristics, the massive amount of data produced by microarray experiments poses challenges for visualization, interpretation and model building. Towards initial model development, we have developed a Java tool for visualizing the spatial organization of gene expression in bacteria. We are also developing an approach to inferring and testing qualitative fuzzy logic models of gene regulation using microarray data. Because we are developing and testing qualitative hypotheses that do not require quantitative precision, our statistical evaluation of experimental data is limited to checking for validity and consistency. Our goals are to maximize the impact of inexpensive microarray technology, bearing in mind that biological models and hypotheses are typically qualitative.
Interpreting Microarray Data to Build Models of Microbial Genetic Regulation Networks
Sokhansanj, B; Garnham, J B; Fitch, J P
2002-01-23
Microarrays and DNA chips are an efficient, high-throughput technology for measuring temporal changes in the expression of messenger RNA (mRNA) from thousands of genes (often the entire genome of an organism) in a single experiment. A crucial drawback of microarray experiments is that results are inherently qualitative: data are generally neither quantitatively repeatable, nor may microarray spot intensities be calibrated to in vivo mRNA concentrations. Nevertheless, microarrays represent by far the cheapest and fastest way to obtain information about a cell's global genetic regulatory networks. Besides poor signal characteristics, the massive amount of data produced by microarray experiments poses challenges for visualization, interpretation and model building. Towards initial model development, we have developed a Java tool for visualizing the spatial organization of gene expression in bacteria. We are also developing an approach to inferring and testing qualitative fuzzy logic models of gene regulation using microarray data. Because we are developing and testing qualitative hypotheses that do not require quantitative precision, our statistical evaluation of experimental data is limited to checking for validity and consistency. Our goals are to maximize the impact of inexpensive microarray technology, bearing in mind that biological models and hypotheses are typically qualitative.
Universal ligation-detection-reaction microarray applied for compost microbes
Romantschuk Martin
2008-12-01
Full Text Available Abstract Background Composting is one of the methods utilised in recycling organic communal waste. The composting process is dependent on aerobic microbial activity and proceeds through a succession of different phases, each dominated by certain microorganisms. In this study, a ligation-detection-reaction (LDR) based microarray method was adapted for species-level detection of compost microbes characteristic of each stage of the composting process. LDR utilises the specificity of the ligase enzyme to covalently join two adjacently hybridised probes. A zip-oligo is attached to the 3'-end of one probe and a fluorescent label to the 5'-end of the other probe. Upon ligation, the probes are combined in the same molecule and can be detected in a specific location on a universal microarray with complementary zip-oligos, enabling equivalent hybridisation conditions for all probes. The method was applied to samples from Nordic composting facilities after testing and optimisation with fungal pure cultures and environmental clones. Results Probes targeted for fungi were able to detect 0.1 fmol of target ribosomal PCR product in an artificial reaction mixture containing 100 ng of competing fungal ribosomal internal transcribed spacer (ITS) area or herring sperm DNA. The detection level was therefore approximately 0.04% of total DNA. Clone libraries were constructed from eight compost samples. The LDR microarray results were in concordance with the clone library sequencing results. In addition, a control probe was used to monitor the per-spot hybridisation efficiency on the array. Conclusion This study demonstrates that the LDR microarray method is capable of sensitive and accurate species-level detection from a complex microbial community. The method can detect key species from compost samples, making it a basis for a tool for compost process monitoring in industrial facilities.
Approximate and renormgroup symmetries
Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling
2009-07-01
''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)
Approximating Stationary Statistical Properties
Xiaoming WANG
2009-01-01
It is well-known that physical laws for large chaotic dynamical systems are revealed statistically. Many times these statistical properties of the system must be approximated numerically. The main contribution of this manuscript is to provide simple and natural criteria on numerical methods (temporal and spatial discretization) that are able to capture the stationary statistical properties of the underlying dissipative chaotic dynamical systems asymptotically. The result on temporal approximation is a recent finding of the author, and the result on spatial approximation is a new one. Applications to the infinite Prandtl number model for convection and the barotropic quasi-geostrophic model are also discussed.
Use of non-amplified RNA samples for microarray analysis of gene expression.
Hiroko Sudo
Full Text Available Demand for high quality gene expression data has driven the development of revolutionary microarray technologies. The quality of the data is affected by the performance of the microarray platform as well as by how the nucleic acid targets are prepared. The most common method for target nucleic acid preparation includes in vitro transcription amplification of the sample RNA. Although this method requires a small amount of starting material and is reported to have high reproducibility, it also has technical disadvantages such as amplification bias and a long, laborious protocol. Using RNA derived from human brain, breast and colon, we demonstrate that a non-amplification method, which was previously shown to be inferior, could be transformed into a highly quantitative method with a dynamic range of five orders of magnitude. Furthermore, the correlation coefficient calculated by comparing microarray assays using non-amplified samples with qRT-PCR assays was approximately 0.9, a value much higher than when samples were prepared using amplification methods. Our results were also compared with data from various microarray platforms studied in the MicroArray Quality Control (MAQC) project. In combination with the micro-columnar 3D-Gene™ microarray, this non-amplification method is applicable to a variety of genetic analyses, including biomarker screening and diagnostic tests for cancer.
Review: DNA microarray technology and drug development
Sana Khan
2010-01-01
Full Text Available In contrast to slow and non-specific traditional drug discovery methods, DNA microarray technology could accelerate the identification of potential drugs for treating diseases like cancer and AIDS and provide fruitful results in drug discovery. The technique provides efficient automation and maximum flexibility to researchers and can test thousands of compounds at a time. Scientists find DNA microarrays useful in disease diagnosis, in monitoring desired and adverse outcomes of therapeutic interventions, as well as in the selection, assessment and quality control of potential drugs. In the current scenario, where new pathogens are expected every year, DNA microarrays promise to be an efficient technology to detect new organisms in a short time. Classification of carcinomas at the molecular level and prediction of how various types of tumor respond to different therapeutic agents can be made possible with the use of microarray analysis. Also, the microarray technique can prove instrumental in the development of personalized medicines by providing microarray data of a patient which could be used for identifying diseases, tailoring treatment to the individual and tracking disease prognosis. Microarray analysis could be beneficial in the area of molecular medicine for the analysis of genetic variations and functions of genes in normal individuals and diseased conditions. The technique can give satisfactory results in single nucleotide polymorphism (SNP) analysis and pharmacogenomics studies. The challenges that arise with the technology are a high degree of variability in the data obtained, frequent upgrading of methods and machines, and a lack of trained manpower. Despite this, DNA microarray promises to be the next generation sequencer which could explain how organisms evolve and adapt by looking at the whole genome. In a nutshell, microarray technology makes it possible for molecular biologists to analyze simultaneously thousands of DNA samples and monitor their
Malvina Baica
1985-01-01
Full Text Available The author uses a new modification of the Jacobi-Perron Algorithm which holds for complex fields of any degree (abbr. ACF), and defines it as the Generalized Euclidean Algorithm (abbr. GEA) to approximate irrationals.
Approximations in Inspection Planning
Engelund, S.; Sørensen, John Dalsgaard; Faber, M. H.
2000-01-01
Planning of inspections of civil engineering structures may be performed within the framework of Bayesian decision analysis. The effort involved in a full Bayesian decision analysis is relatively large. Therefore, the actual inspection planning is usually performed using a number of approximations....... One of the more important of these approximations is the assumption that all inspections will reveal no defects. Using this approximation the optimal inspection plan may be determined on the basis of conditional probabilities, i.e. the probability of failure given no defects have been found...... by the inspection. In this paper the quality of this approximation is investigated. The inspection planning is formulated both as a full Bayesian decision problem and on the basis of the assumption that the inspection will reveal no defects....
Gautschi, Walter; Rassias, Themistocles M
2011-01-01
Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research, including theoretic developments, new computational alg
Approximation Behooves Calibration
da Silva Ribeiro, André Manuel; Poulsen, Rolf
2013-01-01
Calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005–2009.
Diagnostic and analytical applications of protein microarrays
Dufva, Hans Martin; Christensen, C.B.V.
2005-01-01
years. A genome-scale protein microarray has been demonstrated for identifying protein-protein interactions as well as for rapid identification of protein binding to a particular drug. Furthermore, protein microarrays have been shown as an efficient tool in cancer profiling, detection of bacteria...... and toxins, identification of allergen reactivity and autoantibodies. They have also demonstrated the ability to measure the absolute concentration of small molecules. Besides their capacity for parallel diagnostics, microarrays can be more sensitive than traditional methods such as enzyme...... and be amenable to automation or integrated into easy-to-use systems, such as micrototal analysis systems or point-of-care devices....
Microarrays--analysis of signaling pathways.
Ramachandran, Anassuya; Black, Michael A; Shelling, Andrew N; Love, Donald R
2008-01-01
Microarrays provide a powerful means of analyzing the expression levels of multiple transcripts in two sample populations. In this study, we have used microarray technology to identify genes that are differentially regulated in activin-treated ovarian cancer cells. We find that a number of biologically relevant genes involved in regulating activin signaling, as well as genes potentially contributing to activin-mediated growth arrest, appear to be differentially regulated. Thus, microarrays are an important tool for dissecting gene expression changes in normal physiological processes and disease.
DNA Microarrays in Herbal Drug Research
Preeti Chavan
2006-01-01
Full Text Available Natural products are gaining increased applications in drug discovery and development. Being chemically diverse, they are able to modulate several targets simultaneously in a complex system. Analysis of gene expression becomes necessary for a better understanding of molecular mechanisms. Conventional strategies for expression profiling are optimized for single-gene analysis. DNA microarrays serve as a suitable high-throughput tool for the simultaneous analysis of multiple genes. The major practical applicability of DNA microarrays remains in DNA mutation and polymorphism analysis. This review highlights applications of DNA microarrays in pharmacodynamics, pharmacogenomics, toxicogenomics and quality control of herbal drugs and extracts.
Instance-based concept learning from multiclass DNA microarray data
Dubitzky Werner
2006-02-01
Full Text Available Abstract Background Various statistical and machine learning methods have been successfully applied to the classification of DNA microarray data. Simple instance-based classifiers such as nearest neighbor (NN) approaches perform remarkably well in comparison to more complex models, and are currently experiencing a renaissance in the analysis of data sets from biology and biotechnology. While binary classification of microarray data has been extensively investigated, studies involving multiclass data are rare. The question remains open whether there exists a significant difference in performance between NN approaches and more complex multiclass methods. Comparative studies in this field commonly assess different models based on their classification accuracy only; however, this approach lacks the rigor needed to draw reliable conclusions and is inadequate for testing the null hypothesis of equal performance. Comparing novel classification models to existing approaches requires focusing on the significance of differences in performance. Results We investigated the performance of instance-based classifiers, including a NN classifier able to assign a degree of class membership to each sample. This model alleviates a major problem of conventional instance-based learners, namely the lack of confidence values for predictions. The model translates the distances to the nearest neighbors into 'confidence scores'; the higher the confidence score, the closer the considered instance is to a pre-defined class. We applied the models to three real gene expression data sets and compared them with state-of-the-art methods for classifying microarray data of multiple classes, assessing performance using a statistical significance test that took into account the data resampling strategy. Simple NN classifiers performed as well as, or significantly better than, their more intricate competitors. Conclusion Given its highly intuitive underlying principles – simplicity
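The distance-to-confidence translation described in the abstract can be sketched as follows (an illustrative reconstruction, not the authors' exact scoring rule; all names below are invented):

```python
import numpy as np

def nn_predict_with_confidence(X_train, y_train, x, k=3):
    """Classify sample x by its k nearest neighbours and report the
    winning class's normalised inverse-distance weight as a confidence
    score in [0, 1]. Sketch only; the paper's score may differ."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]                      # k nearest neighbours
    w = 1.0 / (d[idx] + 1e-12)                   # closer => larger weight
    classes = np.unique(y_train)
    scores = np.array([w[y_train[idx] == c].sum() for c in classes])
    scores /= scores.sum()                       # degrees of membership
    best = int(np.argmax(scores))
    return classes[best], float(scores[best])

# toy two-class "expression" data
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
label, conf = nn_predict_with_confidence(X, y, np.array([0.05, 0.05]))
```

A score near 1 means the query sits close to a single class's neighbours; a score near 1/(number of classes) flags an ambiguous, low-confidence call.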
Boesten, R.J.; Schuren, F.H.; Vos, de W.M.
2009-01-01
A genomic DNA-based microarray was constructed containing over 6000 randomly cloned genomic fragments of approximately 1-2 kb from six mammalian intestinal Bifidobacterium spp. including B. adolescentis, B. animalis, B. bifidum, B. catenulatum, B. longum and B. pseudolongum. This Bifidobacterium Mix
Microarray analysis of p-anisaldehyde-induced transcriptome of Saccharomyces cerevisiae.
Yu, Lu; Guo, Na; Yang, Yi; Wu, Xiuping; Meng, Rizeng; Fan, Junwen; Ge, Fa; Wang, Xuelin; Liu, Jingbo; Deng, Xuming
2010-03-01
p-Anisaldehyde (4-methoxybenzaldehyde), an extract from Pimpinella anisum L. seeds, is a potential novel preservative. To reveal the possible action mechanism of p-anisaldehyde against microorganisms, yeast-based commercial oligonucleotide microarrays were used to analyze the genome-wide transcriptional changes in response to p-anisaldehyde. Quantitative real-time RT-PCR was performed for selected genes to verify the microarray results. We interpreted our microarray data with the clustering tool, T-profiler. Analysis of microarray data revealed that p-anisaldehyde induced the expression of genes related to sulphur assimilation, aromatic aldehydes metabolism, and secondary metabolism, which demonstrated that the addition of p-anisaldehyde may influence the normal metabolism of aromatic aldehydes. This genome-wide transcriptomics approach revealed first insights into the response of Saccharomyces cerevisiae (S. cerevisiae) to p-anisaldehyde challenge.
3D Biomaterial Microarrays for Regenerative Medicine
Gaharwar, Akhilesh K.; Arpanaei, Ayyoob; Andresen, Thomas Lars;
2015-01-01
Three dimensional (3D) biomaterial microarrays hold enormous promise for regenerative medicine because of their ability to accelerate the design and fabrication of biomimetic materials. Such tissue-like biomaterials can provide an appropriate microenvironment for stimulating and controlling stem...
Quality Visualization of Microarray Datasets Using Circos
Martin Koch
2012-08-01
Full Text Available Quality control and normalization are considered the most important steps in the analysis of microarray data. At present there are various methods available for quality assessment of microarray datasets. However, there seems to be no standard visualization routine which also depicts individual microarray quality. Here we present a convenient method for visualizing the results of standard quality control tests using Circos plots. In these plots various quality measurements are drawn in a circular fashion, thus allowing for visualization of the quality and all outliers of each distinct array within a microarray dataset. The proposed method is intended for use with the Affymetrix Human Genome platform (i.e., GPL96, GPL570 and GPL571). Circos quality measurement plots are a convenient way to obtain an initial quality estimate of Affymetrix datasets that are stored in publicly available databases.
Bernie J Daigle
2010-03-01
Full Text Available Although they have become a widely used experimental technique for identifying differentially expressed (DE) genes, DNA microarrays are notorious for generating noisy data. A common strategy for mitigating the effects of noise is to perform many experimental replicates. This approach is often costly and sometimes impossible given limited resources; thus, analytical methods are needed which increase accuracy at no additional cost. One inexpensive source of microarray replicates comes from prior work: to date, data from hundreds of thousands of microarray experiments are in the public domain. Although these data assay a wide range of conditions, they cannot be used directly to inform any particular experiment and are thus ignored by most DE gene methods. We present the SVD Augmented Gene expression Analysis Tool (SAGAT), a mathematically principled, data-driven approach for identifying DE genes. SAGAT increases the power of a microarray experiment by using observed coexpression relationships from publicly available microarray datasets to reduce uncertainty in individual genes' expression measurements. We tested the method on three well-replicated human microarray datasets and demonstrate that use of SAGAT increased effective sample sizes by as many as 2.72 arrays. We applied SAGAT to unpublished data from a microarray study investigating transcriptional responses to insulin resistance, resulting in a 50% increase in the number of significant genes detected. We evaluated 11 (58%) of these genes experimentally using qPCR, confirming the directions of expression change for all 11 and statistical significance for three. Use of SAGAT revealed coherent biological changes in three pathways: inflammation, differentiation, and fatty acid synthesis, furthering our molecular understanding of a type 2 diabetes risk factor. We envision SAGAT as a means to maximize the potential for biological discovery from subtle transcriptional responses, and we provide it as a
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
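In schematic form, the AMA estimator the abstract describes combines a few exact measurements with many cheap approximate ones averaged over covariant transformations g in a set G (the notation below follows the standard all-mode-averaging construction and is not copied from this paper):

```latex
% exact-minus-approximate term corrects the bias of the cheap average
\mathcal{O}^{(\mathrm{imp})}
  = \mathcal{O} - \mathcal{O}^{(\mathrm{appx})}
  + \frac{1}{N_G} \sum_{g \in G} \mathcal{O}^{(\mathrm{appx}),\,g}
```

Because each \(\mathcal{O}^{(\mathrm{appx}),g}\) is related to \(\mathcal{O}^{(\mathrm{appx})}\) by a symmetry of the theory, the estimator stays unbiased while the inexpensive \(N_G\)-fold average supplies most of the statistical precision.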
Diophantine approximations on fractals
Einsiedler, Manfred; Shapira, Uri
2009-01-01
We exploit dynamical properties of diagonal actions to derive results in Diophantine approximations. In particular, we prove that the continued fraction expansion of almost any point on the middle third Cantor set (with respect to the natural measure) contains all finite patterns (hence is well approximable). Similarly, we show that for a variety of fractals in [0,1]^2, possessing some symmetry, almost any point is not Dirichlet improvable (hence is well approximable) and has property C (after Cassels). We then settle by similar methods a conjecture of M. Boshernitzan saying that there are no irrational numbers x in the unit interval such that the continued fraction expansions of {nx mod 1 : n is a natural number} are uniformly eventually bounded.
Monotone Boolean approximation
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
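The best monotone bounds described above have a simple closed form on the truth table: the least monotone increasing upper bound is g(x) = max over y <= x of f(y), and the greatest monotone lower bound is h(x) = min over y >= x of f(y). A minimal sketch (brute force over all 2^n points, illustrative only; not the report's algorithms, which target large formulas):

```python
from itertools import product

def monotone_bounds(f, n):
    """Best monotone-increasing upper/lower bounds of a Boolean f on n
    variables, built directly from the truth table (exponential in n)."""
    pts = list(product([0, 1], repeat=n))
    leq = lambda a, b: all(ai <= bi for ai, bi in zip(a, b))
    upper = {x: max(f(y) for y in pts if leq(y, x)) for x in pts}
    lower = {x: min(f(y) for y in pts if leq(x, y)) for x in pts}
    return upper, lower

# noncoherent example: XOR is not monotone
f = lambda x: x[0] ^ x[1]
up, lo = monotone_bounds(f, 2)
```

By construction lo[x] <= f(x) <= up[x] everywhere and both bounds are monotone, mirroring the report's claim that any Boolean function admits best possible monotone upper and lower approximations.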
PATMA: parser of archival tissue microarray
Lukasz Roszkowiak
2016-12-01
Full Text Available Tissue microarrays are commonly used in modern pathology for cancer tissue evaluation, as it is a very potent technique. Tissue microarray slides are often scanned to perform computer-aided histopathological analysis of the tissue cores. For processing the image, splitting the whole virtual slide into images of individual cores is required. The only way to distinguish cores corresponding to specimens in the tissue microarray is through their arrangement. Unfortunately, distinguishing the correct order of cores is not a trivial task as they are not labelled directly on the slide. The main aim of this study was to create a procedure capable of automatically finding and extracting cores from archival images of the tissue microarrays. This software supports the work of scientists who want to perform further image processing on single cores. The proposed method is an efficient and fast procedure, working in fully automatic or semi-automatic mode. A total of 89% of punches were correctly extracted with automatic selection. With an addition of manual correction, it is possible to fully prepare the whole slide image for extraction in 2 min per tissue microarray. The proposed technique requires minimum skill and time to parse big array of cores from tissue microarray whole slide image into individual core images.
PATMA: parser of archival tissue microarray.
Roszkowiak, Lukasz; Lopez, Carlos
2016-01-01
Tissue microarrays are commonly used in modern pathology for cancer tissue evaluation, as it is a very potent technique. Tissue microarray slides are often scanned to perform computer-aided histopathological analysis of the tissue cores. For processing the image, splitting the whole virtual slide into images of individual cores is required. The only way to distinguish cores corresponding to specimens in the tissue microarray is through their arrangement. Unfortunately, distinguishing the correct order of cores is not a trivial task as they are not labelled directly on the slide. The main aim of this study was to create a procedure capable of automatically finding and extracting cores from archival images of the tissue microarrays. This software supports the work of scientists who want to perform further image processing on single cores. The proposed method is an efficient and fast procedure, working in fully automatic or semi-automatic mode. A total of 89% of punches were correctly extracted with automatic selection. With an addition of manual correction, it is possible to fully prepare the whole slide image for extraction in 2 min per tissue microarray. The proposed technique requires minimum skill and time to parse big array of cores from tissue microarray whole slide image into individual core images.
A Glance at DNA Microarray Technology and Applications
Amir-Ata Saei
2011-08-01
Full Text Available Introduction: Because of the huge impact of "OMICS" technologies in the life sciences, many researchers aim to implement such high-throughput approaches to address cellular and/or molecular functions at the genomics, proteomics, or metabolomics level. However, in many cases, the use of such technologies encounters cybernetic difficulties in extracting knowledge from large volumes of data using the related software. In fact, there is little guidance on data mining for novices. The main goal of this article is to provide a brief review of the different steps of microarray data handling and mining for novices, and finally to introduce different PC- and/or web-based software tools that can be used in preprocessing and/or data mining of microarray data. Methods: To pursue this aim, recently published papers and microarray software tools were reviewed. Results: It was found that defining the true place of genes in cell networks is the main phase in our understanding of the programming and functioning of living cells. This can be obtained with global/selected gene expression profiling. Conclusion: Studying the regulation patterns of genes in groups, using clustering and classification methods, helps us understand different pathways in the cell, their functions, their regulation, and the way one component in the system affects another. These networks can act as starting points for data mining and hypothesis generation, helping us reverse-engineer them.
Fecal source tracking in water using a mitochondrial DNA microarray.
Vuong, Nguyet-Minh; Villemur, Richard; Payment, Pierre; Brousseau, Roland; Topp, Edward; Masson, Luke
2013-01-01
A mitochondrial-based microarray (mitoArray) was developed for rapid identification of the presence of 28 animals and one family (cervidae) potentially implicated in fecal pollution in mixed-activity watersheds. Oligonucleotide probes for genus- or subfamily-level identification were targeted within the 12S rRNA - Val tRNA - 16S rRNA region in the mitochondrial genome. This region, called MI-50, was selected based on three criteria: 1) it can be amplified by universal primers, 2) these universal primer sequences are present in most commercial and domestic animals of interest in source tracking, and 3) sufficient sequence variation exists within this region to meet the minimal requirements for microarray probe discrimination. To quantify the overall level of mitochondrial DNA (mtDNA) in samples, a quantitative-PCR (Q-PCR) universal primer pair was also developed. Probe validation was performed using DNA extracted from animal tissues and, in many cases, animal-specific fecal samples. To reduce the amplification of potentially interfering fish mtDNA sequences during the MI-50 enrichment step, a clamping PCR method was designed using a fish-specific peptide nucleic acid. DNA extracted from 19 water samples was subjected to both array and independent PCR analyses. Our results confirm that the mitochondrial microarray approach could accurately detect the dominant animals present in water samples, emphasizing the potential of this methodology for the parallel scanning of the large variety of animals normally monitored in fecal source tracking.
Coupled Two-Way Clustering Analysis of Gene Microarray Data
Getz, G; Domany, E
2000-01-01
We present a novel coupled two-way clustering approach to gene microarray data analysis. The main idea is to identify subsets of the genes and samples, such that when one of these is used to cluster the other, stable and significant partitions emerge. The search for such subsets is a computationally complex task: we present an algorithm, based on iterative clustering, which performs such a search. This analysis is especially suitable for gene microarray data, where the contributions of a variety of biological mechanisms to the gene expression levels are entangled in a large body of experimental data. The method was applied to two gene microarray data sets, on colon cancer and leukemia. By identifying relevant subsets of the data and focusing on them we were able to discover partitions and correlations that were masked and hidden when the full dataset was used in the analysis. Some of these partitions have clear biological interpretation; others can serve to identify possible directions for future research.
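One way to picture the coupled iteration is the alternation below (a deliberately simplified sketch; the published algorithm uses superparamagnetic clustering and a stability-based search over submatrices, which is not reproduced here):

```python
import numpy as np

def kmeans(X, k, iters=25):
    """Plain k-means with deterministic farthest-point initialisation."""
    C = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(1) for c in C], axis=0)
        C.append(X[int(np.argmax(d))])
    C = np.array(C, dtype=float)
    for _ in range(iters):
        lbl = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lbl == j].mean(0) if (lbl == j).any() else C[j]
                      for j in range(k)])
    return lbl

def coupled_two_way(M, k=2, rounds=3):
    """Alternately cluster genes (rows) and samples (columns), each time
    summarising the other axis by its current cluster means."""
    row_lbl = kmeans(M, k)
    for _ in range(rounds):
        row_means = np.array([M[row_lbl == j].mean(0) for j in range(k)])
        col_lbl = kmeans(row_means.T, k)   # columns described by row clusters
        col_means = np.array([M[:, col_lbl == j].mean(1) for j in range(k)])
        row_lbl = kmeans(col_means.T, k)   # rows described by column clusters
    return row_lbl, col_lbl

# toy matrix with two gene groups and two sample groups
M = np.array([[5., 5, 0, 0], [5, 5, 0, 0], [0, 0, 5, 5], [0, 0, 5, 5]])
genes, samples = coupled_two_way(M)
```

Using each axis's clusters as the features for the other is what lets structure emerge that a single global clustering of the full matrix can mask.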
Coupled two-way clustering analysis of gene microarray data
Getz, Gad; Levine, Erel; Domany, Eytan
2000-10-01
We present a coupled two-way clustering approach to gene microarray data analysis. The main idea is to identify subsets of the genes and samples, such that when one of these is used to cluster the other, stable and significant partitions emerge. The search for such subsets is a computationally complex task. We present an algorithm, based on iterative clustering, that performs such a search. This analysis is especially suitable for gene microarray data, where the contributions of a variety of biological mechanisms to the gene expression levels are entangled in a large body of experimental data. The method was applied to two gene microarray data sets, on colon cancer and leukemia. By identifying relevant subsets of the data and focusing on them we were able to discover partitions and correlations that were masked and hidden when the full dataset was used in the analysis. Some of these partitions have clear biological interpretation; others can serve to identify possible directions for future research.
DNA microarray technique for detecting food-borne pathogens
Xing GAO
2012-08-01
Full Text Available Objective To study the application of DNA microarray technique for screening and identifying multiple food-borne pathogens. Methods The oligonucleotide probes were designed with Clustal X and Oligo 6.0 against the conserved regions of specific genes of multiple food-borne pathogens, and then validated by bioinformatic analyses. The 5' end of each probe was modified by an amino group and 10 poly-T, and the optimized probes were synthesized and spotted on aldehyde-coated slides. The bacterial DNA template incubated with Klenow enzyme was amplified by arbitrarily primed PCR, and PCR products incorporating aminoallyl-dUTP were coupled with fluorescent dye. After hybridization of the purified PCR products with the DNA microarray, hybridization images and fluorescence intensity analyses were acquired by ScanArray and GenePix Pro 5.1 software. A series of detection conditions such as arbitrarily primed PCR and microarray hybridization were optimized. The specificity of this approach was evaluated with DNA from 16 different bacteria, and the sensitivity and reproducibility were verified with DNA from 4 food-borne pathogens. Samples of multiple bacterial DNA and simulated water samples of Shigella dysenteriae were tested. Results Nine different food-borne bacteria were successfully discriminated under the same condition. The sensitivity for genomic DNA was 10(2)-10(3) pg/μl, and the coefficient of variation (CV) for the reproducibility of the assay was less than 15%. The corresponding specific hybridization maps of the multiple bacterial DNA samples were obtained, and the detection limit for the simulated water sample of Shigella dysenteriae was 3.54×10(5) cfu/ml. Conclusions The DNA microarray detection system based on arbitrarily primed PCR can be employed for effective detection of multiple food-borne pathogens, and this assay may offer a new high-throughput platform for detecting bacteria.
Segmentation and intensity estimation for microarray images with saturated pixels
Yang Yan
2011-11-01
holes, fuzzy edges and blank spots that are common in microarray images. The approach is independent of microarray platform and applicable to both single- and dual-channel microarrays.
Approximation properties of haplotype tagging
Dreiseitl Stephan
2006-01-01
Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n^2 - n)/2) for n haplotypes, but not approximable within (1 - ε) ln(n/2) for any ε > 0 unless NP ⊂ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m - p + 1)(n^2 - n)/2) ≤ O(m(n^2 - n)/2), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in computational efforts expended can only be expected if the computational effort is distributed and done in parallel.
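The kind of greedy algorithm this abstract alludes to can be sketched by treating every pair of distinct haplotypes as an element to cover and every SNP as the set of pairs it distinguishes (a set-cover reading of the problem; names and data below are invented for illustration, and this is not claimed to be the paper's exact algorithm):

```python
from itertools import combinations

def greedy_tag_snps(haplotypes):
    """Pick tagging SNPs greedily until every pair of distinct
    haplotypes differs at some chosen SNP (greedy set cover)."""
    n, m = len(haplotypes), len(haplotypes[0])
    uncovered = {(i, j) for i, j in combinations(range(n), 2)
                 if haplotypes[i] != haplotypes[j]}
    chosen = []
    while uncovered:
        # SNP that separates the largest number of still-confusable pairs
        best = max(range(m), key=lambda s: sum(
            haplotypes[i][s] != haplotypes[j][s] for i, j in uncovered))
        chosen.append(best)
        uncovered = {(i, j) for i, j in uncovered
                     if haplotypes[i][best] == haplotypes[j][best]}
    return chosen

haps = ["0011", "0101", "1001", "1111"]
tags = greedy_tag_snps(haps)   # projections onto tags identify each haplotype
```

Each greedy choice shrinks the uncovered set by the largest possible amount, which is the standard mechanism behind logarithmic set-cover approximation guarantees of the kind quoted in the abstract.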
Prestack wavefield approximations
Alkhalifah, Tariq
2013-09-01
The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised highly accurate approximations free of such singularities. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
On Convex Quadratic Approximation
den Hertog, D.; de Klerk, E.; Roos, J.
2000-01-01
In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of
Norton, Andrew H.
1991-01-01
Local spline approximants offer a means for constructing finite difference formulae for numerical solution of PDEs. These formulae seem particularly well suited to situations in which the use of conventional formulae leads to non-linear computational instability of the time integration. This is explained in terms of frequency responses of the FDF.
Approximation by Cylinder Surfaces
Randrup, Thomas
1997-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...
Photopatterning of Hydrogel Microarrays in Closed Microchips.
Gumuscu, Burcu; Bomer, Johan G; van den Berg, Albert; Eijkel, Jan C T
2015-12-14
To date, optical lithography has been extensively used for in situ patterning of hydrogel structures in a scale range from hundreds of microns to a few millimeters. The two main limitations which prevent smaller feature sizes of hydrogel structures are (1) the upper glass layer of a microchip maintains a large spacing (typically 525 μm) between the photomask and hydrogel precursor, leading to diffraction of UV light at the edges of mask patterns, (2) diffusion of free radicals and monomers results in irregular polymerization near the illumination interface. In this work, we present a simple approach to enable the use of optical lithography to fabricate hydrogel arrays with a minimum feature size of 4 μm inside closed microchips. To achieve this, we combined two different techniques. First, the upper glass layer of the microchip was thinned by mechanical polishing to reduce the spacing between the photomask and hydrogel precursor, and thereby the diffraction of UV light at the edges of mask patterns. The polishing process reduces the upper layer thickness from ∼525 to ∼100 μm, and the mean surface roughness from 20 to 3 nm. Second, we developed an intermittent illumination technique consisting of short illumination periods followed by relatively longer dark periods, which decrease the diffusion of monomers. Combination of these two methods allows for fabrication of 0.4 × 10(6) sub-10 μm sized hydrogel patterns over large areas (cm(2)) with high reproducibility (∼98.5% patterning success). The patterning method is tested with two different types of photopolymerizing hydrogels: polyacrylamide and polyethylene glycol diacrylate. This method enables in situ fabrication of well-defined hydrogel patterns and presents a simple approach to fabricate 3-D hydrogel matrices for biomolecule separation, biosensing, tissue engineering, and immobilized protein microarray applications.
Kastrin, Andrej
2010-01-01
Class prediction is an important application of microarray gene expression data analysis. The high-dimensionality of microarray data, where the number of genes (variables) is very large compared to the number of samples (observations), makes the application of many prediction techniques (e.g., logistic regression, discriminant analysis) difficult. An efficient way to solve this problem is by using dimension reduction statistical techniques. Increasingly used in psychology-related applications, the Rasch model (RM) provides an appealing framework for handling high-dimensional microarray data. In this paper, we study the potential of RM-based modeling in dimensionality reduction with binarized microarray gene expression data and investigate its prediction accuracy in the context of class prediction using linear discriminant analysis. Two different publicly available microarray data sets are used to illustrate a general framework of the approach. Performance of the proposed method is assessed by re-randomization s...
A Method of Microarray Data Storage Using Array Data Type
Tsoi, Lam C.; Zheng, W Jim
2007-01-01
A well-designed microarray database can provide valuable information on gene expression levels. However, designing an efficient microarray database with minimum space usage is not an easy task since designers need to integrate the microarray data with the information of genes, probe annotation, and the descriptions of each microarray experiment. Developing better methods to store microarray data can greatly improve the efficiency and usefulness of such data. A new schema is proposed to store ...
Hydrogen Beyond the Classic Approximation
Scivetti, I
2003-01-01
The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position.
Chromosomal microarray versus karyotyping for prenatal diagnosis.
Wapner, Ronald J; Martin, Christa Lese; Levy, Brynn; Ballif, Blake C; Eng, Christine M; Zachary, Julia M; Savage, Melissa; Platt, Lawrence D; Saltzman, Daniel; Grobman, William A; Klugman, Susan; Scholl, Thomas; Simpson, Joe Leigh; McCall, Kimberly; Aggarwal, Vimla S; Bunke, Brian; Nahum, Odelia; Patel, Ankita; Lamb, Allen N; Thom, Elizabeth A; Beaudet, Arthur L; Ledbetter, David H; Shaffer, Lisa G; Jackson, Laird
2012-12-06
Chromosomal microarray analysis has emerged as a primary diagnostic tool for the evaluation of developmental delay and structural malformations in children. We aimed to evaluate the accuracy, efficacy, and incremental yield of chromosomal microarray analysis as compared with karyotyping for routine prenatal diagnosis. Samples from women undergoing prenatal diagnosis at 29 centers were sent to a central karyotyping laboratory. Each sample was split in two; standard karyotyping was performed on one portion and the other was sent to one of four laboratories for chromosomal microarray. We enrolled a total of 4406 women. Indications for prenatal diagnosis were advanced maternal age (46.6%), abnormal result on Down's syndrome screening (18.8%), structural anomalies on ultrasonography (25.2%), and other indications (9.4%). In 4340 (98.8%) of the fetal samples, microarray analysis was successful; 87.9% of samples could be used without tissue culture. Microarray analysis of the 4282 nonmosaic samples identified all the aneuploidies and unbalanced rearrangements identified on karyotyping but did not identify balanced translocations and fetal triploidy. In samples with a normal karyotype, microarray analysis revealed clinically relevant deletions or duplications in 6.0% with a structural anomaly and in 1.7% of those whose indications were advanced maternal age or positive screening results. In the context of prenatal diagnostic testing, chromosomal microarray analysis identified additional, clinically significant cytogenetic information as compared with karyotyping and was equally efficacious in identifying aneuploidies and unbalanced rearrangements but did not identify balanced translocations and triploidies. (Funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development and others; ClinicalTrials.gov number, NCT01279733.).
John S. Furey
2005-04-01
Full Text Available In an effort towards adapting new and defensible methods for assessing and managing the risk posed by microbial pollution, we evaluated the utility of oligonucleotide microarrays for bacterial source tracking (BST) of environmental Enterococcus sp. isolates derived from various host sources. Current bacterial source tracking approaches rely on various phenotypic and genotypic methods to identify sources of bacterial contamination resulting from point or non-point pollution. For this study, Enterococcus sp. isolates originating from deer, bovine, gull, and human sources were examined using microarrays. Isolates were subjected to BOX-PCR amplification and the resulting amplification products were labeled with Cy5. Fluorescent-labeled templates were hybridized to in-house constructed nonamer oligonucleotide microarrays consisting of 198 probes. Microarray hybridization profiles were obtained using the ArrayPro image analysis software. Principal Components Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were compared for their ability to visually cluster microarray hybridization profiles based on the environmental source from which the Enterococcus sp. isolates originated. PCA was visually superior at separating origin-specific clusters, even with as few as 3 factors. A Soft Independent Modeling (SIM) classification confirmed the PCA, resulting in zero misclassifications using 5 factors for each class. The implication of these results for the application of random oligonucleotide microarrays for BST is that, given the reproducibility issues, factor-based variable selection such as in PCA and SIM greatly outperforms dendrogram-based similarity measures such as in HCA and K-Nearest Neighbor (KNN).
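The PCA step described above can be sketched as a centered SVD projection. The data here are randomly generated stand-ins for the 198-probe hybridization profiles; the array sizes and variable names are illustrative assumptions, not values from the study:

```python
import numpy as np

# Hypothetical data: rows = isolates, columns = intensities for 198 probes.
rng = np.random.default_rng(0)
profiles = rng.random((12, 198))

# Center the data and project onto the first 3 principal components via SVD,
# mirroring the 3-factor PCA separation reported in the study.
centered = profiles - profiles.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[:3].T  # shape (12, 3): one 3-factor score per isolate

print(scores.shape)
```

In practice, the 3-factor scores would be plotted and inspected for origin-specific clusters, as the study reports.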
Topology, calculus and approximation
Komornik, Vilmos
2017-01-01
Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...
Li, Zhenhua; Zhao, Bin; Wang, Dongfang; Wen, Yanli; Liu, Gang; Dong, Haoqing; Song, Shiping; Fan, Chunhai
2014-10-22
Microarrays of biomolecules have greatly promoted the development of the fields of genomics, proteomics, and clinical assays because of their remarkably parallel and high-throughput assay capability. Immobilization strategies for biomolecules on a solid support surface play a crucial role in the fabrication of high-performance biological microarrays. In this study, rationally designed DNA tetrahedra carrying three amino groups and one single-stranded DNA extension were synthesized by the self-assembly of four oligonucleotides, followed by high-performance liquid chromatography purification. We fabricated DNA tetrahedron-based microarrays by covalently coupling the DNA tetrahedron onto glass substrates. After their biorecognition capability was evaluated, DNA tetrahedron microarrays were utilized for the analysis of different types of bioactive molecules. The gap hybridization strategy, the sandwich configuration, and the engineering aptamer strategy were employed for the assay of miRNA biomarkers, protein cancer biomarkers, and small molecules, respectively. The arrays showed good capability to anchor capture biomolecules for improving biorecognition. Addressable and high-throughput analysis with improved sensitivity and specificity had been achieved. The limit of detection for let-7a miRNA, prostate specific antigen, and cocaine were 10 fM, 40 pg/mL, and 100 nM, respectively. More importantly, we demonstrated that the microarray platform worked well with clinical serum samples and showed good relativity with conventional chemical luminescent immunoassay. We have developed a novel approach for the fabrication of DNA tetrahedron-based microarrays and a universal DNA tetrahedron-based microarray platform for the detection of different types of bioactive molecules. The microarray platform shows great potential for clinical diagnosis.
Optimization and approximation
Pedregal, Pablo
2017-01-01
This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.
Topics in Metric Approximation
Leeb, William Edward
This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.
Low Rank Approximation in $G_0W_0$ Approximation
Shao, Meiyue; Yang, Chao; Liu, Fang; da Jornada, Felipe H; Deslippe, Jack; Louie, Steven G
2016-01-01
The single particle energies obtained in a Kohn--Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The $G_0W_0$ approximation is a widely used technique in which the self energy is expressed as the convolution of a non-interacting Green's function ($G_0$) and a screened Coulomb interaction ($W_0$) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating $W_0$ at multiple frequencies. In this paper, we discuss how the cos...
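For reference, the self-energy named in this abstract is, in standard many-body notation, the frequency-domain convolution of $G_0$ and $W_0$. This is a textbook sketch of the $G_0W_0$ expression, not the authors' particular formulation:

```latex
\Sigma(\mathbf{r},\mathbf{r}';\omega)
  = \frac{i}{2\pi}\int \mathrm{d}\omega'\,
    e^{i\omega'\eta}\,
    G_0(\mathbf{r},\mathbf{r}';\omega+\omega')\,
    W_0(\mathbf{r},\mathbf{r}';\omega'),
\qquad \eta \to 0^{+}.
```

Evaluating $W_0$ at the many frequencies this convolution requires is precisely the cost the paper targets.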
On Galerkin approximations for the quasigeostrophic equations
Rocha, Cesar B; Grooms, Ian
2015-01-01
We study the representation of approximate solutions of the three-dimensional quasigeostrophic (QG) equations using Galerkin series with standard vertical modes. In particular, we show that standard modes are compatible with nonzero buoyancy at the surfaces and can be used to solve the Eady problem. We extend two existing Galerkin approaches (A and B) and develop a new Galerkin approximation (C). Approximation A, due to Flierl (1978), represents the streamfunction as a truncated Galerkin series and defines the potential vorticity (PV) that satisfies the inversion problem exactly. Approximation B, due to Tulloch and Smith (2009b), represents the PV as a truncated Galerkin series and calculates the streamfunction that satisfies the inversion problem exactly. Approximation C, the true Galerkin approximation for the QG equations, represents both streamfunction and PV as truncated Galerkin series, but does not satisfy the inversion equation exactly. The three approximations are fundamentally different unless the b...
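The standard vertical modes referred to above can be made concrete. Assuming the common flat-bottom, constant-stratification setting (a sketch of the generic setup, not the paper's exact notation), a truncated Galerkin series for the streamfunction reads:

```latex
\psi(x,y,z,t) \;\approx\; \sum_{n=0}^{N} \hat{\psi}_n(x,y,t)\,\phi_n(z),
\qquad \phi_0(z) = 1,\quad
\phi_n(z) = \sqrt{2}\,\cos\!\left(\frac{n\pi z}{H}\right).
```

Approximations A, B, and C then differ in whether the streamfunction, the PV, or both are represented by such a truncated series.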
A 7872 cDNA microarray and its use in bovine functional genomics.
Everts, Robin E; Band, Mark R; Liu, Z Lewis; Kumar, Charu G; Liu, Lei; Loor, Juan J; Oliveira, Rosane; Lewin, Harris A
2005-05-15
The strategy used to create and annotate a 7872 cDNA microarray from cattle placenta and spleen cDNA sequences is described. This microarray contains approximately 6300 unique genes, as determined by BLASTN and TBLASTX similarity search against the human and mouse UniGene and draft human genome sequence databases (build 34). Sequences on the array were annotated with gene ontology (GO) terms, thereby facilitating data analysis and interpretation. A total of 3244 genes were annotated with GO terms. The array is rich in sequences encoding transcription factors, signal transducers and cell cycle regulators. Current research being conducted with this array is described, and an overview of planned improvements in our microarray platform for cattle functional genomics is presented.
Computing Functions by Approximating the Input
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
[Modified method of constructing tissue microarray which contains keloid and normal skin].
Zhang, Zhenyu; Chen, Junjie; Cen, Ying; Zhao, Sha; Liao, Dianying; Gong, Jing
2010-08-01
To seek a method of constructing a tissue microarray containing keloid, skin around keloid, and normal skin. The specimens were obtained by voluntary donation from patients between March and May 2009, including tissues of keloid (27 cases), skin around keloid (13 cases), and normal skin (27 cases). The specimens were embedded in paraffin as donor blocks. The traditional method of constructing the tissue microarray and its sectioning were modified according to the histological characteristics of the keloid and skin tissue and the experimental requirements. Tissue cores were drilled from the donor blocks and attached securely to a prepared adhesive platform. The adhesive platform with the tissue cores in situ was placed into an embedding mold, which was then preheated briefly. Paraffin at approximately 70 degrees C was injected to fill the mold and then cooled to room temperature. HE staining and immunohistochemistry staining were then performed and the results observed by microscope. The constructed tissue microarray block contained 67 cores as designed and displayed a smooth surface with no cracks. All cores were distributed regularly, with no disintegration or obvious shift. HE staining of the tissue microarray section showed that all cores had equal thickness, distinct layers, clear contrast, and well-defined edges, consistent with the original pathological diagnosis. Immunohistochemistry staining results demonstrated that all cores contained enough tissue for group comparison. However, in a tissue microarray made by the traditional method, many cores were lost and a few shifted obviously. The modified method can successfully construct a tissue microarray composed of keloid, skin around keloid, and normal skin. This tissue microarray will be an effective tool for researching the pathogenesis of keloid.
Review: DNA Microarray Technology and Drug Development
Sushma Drabu
2010-01-01
An overview on Approximate Bayesian computation*
Baragatti Meïli
2014-01-01
Full Text Available Approximate Bayesian computation techniques, also called likelihood-free methods, are one of the most satisfactory approaches to intractable likelihood problems. This overview presents recent results since their introduction about ten years ago in population genetics.
Approximate Deconvolution Reduced Order Modeling
Xie, Xuping; Wang, Zhu; Iliescu, Traian
2015-01-01
This paper proposes a large eddy simulation reduced order model (LES-ROM) framework for the numerical simulation of realistic flows. In this LES-ROM framework, the proper orthogonal decomposition (POD) is used to define the ROM basis and a POD differential filter is used to define the large ROM structures. An approximate deconvolution (AD) approach is used to solve the ROM closure problem and develop a new AD-ROM. This AD-ROM is tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient (10^{-3}).
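The POD basis construction referred to above is commonly computed as the SVD of a snapshot matrix. A minimal sketch with synthetic data follows; the matrix sizes and variable names are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Hypothetical snapshot matrix: columns are solution snapshots u(x, t_k)
# of a 1D PDE sampled on 64 grid points at 20 time steps.
rng = np.random.default_rng(1)
snapshots = rng.random((64, 20))

# POD basis = left singular vectors of the mean-subtracted snapshot matrix;
# the ROM keeps only the r most energetic modes.
mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)
r = 4
pod_basis = U[:, :r]              # (64, r) reduced basis
energy = s[:r]**2 / (s**2).sum()  # fraction of variance captured per mode

print(pod_basis.shape)
```

The ROM then evolves r coefficients instead of 64 grid values; the closure problem the paper addresses arises from the modes discarded here.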
Accurate detection of carcinoma cells by use of a cell microarray chip.
Shohei Yamamura
Full Text Available BACKGROUND: Accurate detection and analysis of circulating tumor cells plays an important role in the diagnosis and treatment of metastatic cancer. METHODS AND FINDINGS: A cell microarray chip was used to detect spiked carcinoma cells among leukocytes. The chip, with 20,944 microchambers (105 µm width and 50 µm depth), was made from polystyrene, and the formation of monolayers of leukocytes in the microchambers was observed. Cultured human T lymphoblastoid leukemia (CCRF-CEM) cells were used to examine the potential of the cell microarray chip for the detection of spiked carcinoma cells. A T lymphoblastoid leukemia suspension was dispersed on the chip surface, followed by 15 min standing to allow the leukocytes to settle down into the microchambers. Approximately 29 leukocytes were found in each microchamber when about 600,000 leukocytes in total were dispersed onto a cell microarray chip. Similarly, when leukocytes isolated from human whole blood were used, approximately 89 leukocytes entered each microchamber when about 1,800,000 leukocytes in total were placed onto the cell microarray chip. After washing the chip surface, PE-labeled anti-cytokeratin monoclonal antibody and APC-labeled anti-CD326 (EpCAM) monoclonal antibody solution were dispersed onto the chip surface and allowed to react for 15 min; a microarray scanner was then employed to detect any fluorescence-positive cells within 20 min. In the experiments using spiked carcinoma cells (NCI-H1650, 0.01 to 0.0001%), accurate detection of carcinoma cells was achieved with PE-labeled anti-cytokeratin monoclonal antibody. Furthermore, verification of carcinoma cells in the microchambers was performed by double staining with the above monoclonal antibodies. CONCLUSION: The potential application of the cell microarray chip for the detection of CTCs was shown, demonstrating accurate detection by double staining for cytokeratin and EpCAM at the single carcinoma cell level.
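The per-chamber counts quoted above are roughly what one gets by dividing the dispersed cells evenly among the 20,944 microchambers; the reported figures are observed averages, so they deviate slightly from this back-of-envelope estimate:

```python
chambers = 20_944  # microchambers per chip, from the abstract

# ~600,000 cultured leukocytes dispersed over the chip
print(round(600_000 / chambers))    # 29, matching the reported ~29 per chamber

# ~1,800,000 whole-blood leukocytes dispersed over the chip
print(round(1_800_000 / chambers))  # 86, close to the reported ~89 per chamber
```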
Pipeline for macro- and microarray analyses
R. Vicentini
2007-05-01
Full Text Available The pipeline for macro- and microarray analyses (PMmA) is a set of scripts with a web interface developed to analyze DNA array data generated by array image quantification software. PMmA is designed for use with single- or double-color array data and to work as a pipeline in five classes (data format, normalization, data analysis, clustering, and array maps). It can also be used as a plugin in the BioArray Software Environment, an open-source database for array analysis, or used in a local version of the web service. All scripts in PMmA were developed in the PERL programming language and statistical analysis functions were implemented in the R statistical language. Consequently, our package is platform-independent software. Our algorithms can correctly select almost 90% of the differentially expressed genes, showing a superior performance compared to other methods of analysis. The pipeline software has been applied to public macroarray data of 1536 expressed sequence tags from sugarcane exposed to cold for 3 to 48 h. PMmA identified thirty cold-responsive genes previously unidentified in this public dataset. Fourteen genes were up-regulated, two had a variable expression and the other fourteen were down-regulated in the treatments. These new findings certainly were a consequence of using a superior statistical analysis approach, since the original study did not take into account the dependence of data variability on the average signal intensity of each gene. The web interface, supplementary information, and the package source code are available, free, to non-commercial users at http://ipe.cbmeg.unicamp.br/pub/PMmA.
Development of a genotyping microarray for Usher syndrome
Cremers, Frans P M; Kimberling, William J; Külm, Maigi; de Brouwer, Arjan P; van Wijk, Erwin; te Brinke, Heleen; Cremers, Cor W R J; Hoefsloot, Lies H; Banfi, Sandro; Simonelli, Francesca; Fleischhauer, Johannes C; Berger, Wolfgang; Kelley, Phil M; Haralambous, Elene; Bitner‐Glindzicz, Maria; Webster, Andrew R; Saihan, Zubin; De Baere, Elfride; Leroy, Bart P; Silvestri, Giuliana; McKay, Gareth J; Koenekoop, Robert K; Millan, Jose M; Rosenberg, Thomas; Joensuu, Tarja; Sankila, Eeva‐Marja; Weil, Dominique; Weston, Mike D; Wissinger, Bernd; Kremer, Hannie
2007-01-01
Background: Usher syndrome, a combination of retinitis pigmentosa (RP) and sensorineural hearing loss with or without vestibular dysfunction, displays a high degree of clinical and genetic heterogeneity. Three clinical subtypes can be distinguished, based on the age of onset and severity of the hearing impairment, and the presence or absence of vestibular abnormalities. Thus far, eight genes have been implicated in the syndrome, together comprising 347 protein-coding exons. Methods: To improve DNA diagnostics for patients with Usher syndrome, we developed a genotyping microarray based on the arrayed primer extension (APEX) method. Allele-specific oligonucleotides corresponding to all 298 Usher syndrome-associated sequence variants known to date, 76 of which are novel, were arrayed. Results: Approximately half of these variants were validated using original patient DNAs, which yielded an accuracy of >98%. The efficiency of the Usher genotyping microarray was tested using DNAs from 370 unrelated European and American patients with Usher syndrome. Sequence variants were identified in 64/140 (46%) patients with Usher syndrome type I, 45/189 (24%) patients with Usher syndrome type II, 6/21 (29%) patients with Usher syndrome type III and 6/20 (30%) patients with atypical Usher syndrome. The chip also identified two novel sequence variants, c.400C>T (p.R134X) in PCDH15 and c.1606T>C (p.C536S) in USH2A. Conclusion: The Usher genotyping microarray is a versatile and affordable screening tool for Usher syndrome. Its efficiency will improve with the addition of novel sequence variants with minimal extra costs, making it a very useful first-pass screening tool. PMID:16963483
Chalasani, P.; Saias, I. [Los Alamos National Lab., NM (United States); Jha, S. [Carnegie Mellon Univ., Pittsburgh, PA (United States)
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
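The binomial model described here can be illustrated in the simplest, path-independent case, where the expectation over price paths collapses to a sum over the binomially distributed number of up-moves. The parameter values are illustrative; a path-dependent option such as an Asian would instead require summing over entire paths, which is the source of the hardness the paper studies:

```python
import math

def european_put_binomial(S0, K, r, sigma, T, n):
    """Price a European put in the n-period Cox-Ross-Rubinstein binomial model."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor per period
    d = 1 / u                             # down factor per period
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    # Expected discounted payoff, grouped by the number of up-moves k,
    # which is binomially distributed under the risk-neutral measure.
    value = 0.0
    for k in range(n + 1):
        prob = math.comb(n, k) * q**k * (1 - q)**(n - k)
        ST = S0 * u**k * d**(n - k)       # terminal stock price after k up-moves
        value += prob * max(K - ST, 0.0)
    return math.exp(-r * T) * value

price = european_put_binomial(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=200)
print(round(price, 2))  # close to the Black-Scholes value of about 5.57
```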
SNP Microarray in FISH Negative Clinically Suspected 22q11.2 Microdeletion Syndrome
Ashutosh Halder
2016-01-01
Full Text Available The present study evaluated the role of SNP microarray in 101 cases of clinically suspected, FISH-negative (noninformative/normal) 22q11.2 microdeletion syndrome. SNP microarray was carried out using the 300 K HumanCytoSNP-12 BeadChip array or the CytoScan 750 K array. SNP microarray identified 8 cases of 22q11.2 microdeletions and/or microduplications in addition to cases of chromosomal abnormalities and other pathogenic/likely pathogenic CNVs. The clinically suspected specific deletions (22q11.2) were detectable in approximately 8% of cases by SNP microarray, mostly from FISH-noninformative cases. This study also identified several LOH/AOH loci associated with known and well-defined UPD (uniparental disomy) disorders. In conclusion, this study suggests stricter clinical criteria for FISH analysis. However, if clinical criteria are few or doubtful, in particular for a newborn/neonate in intensive care, SNP microarray should be the first screening test ordered. FISH is the ideal test for detecting mosaicism, screening family members, and prenatal diagnosis in proven families.
Carter, Mark G.; Hamatani, Toshio; Sharov, Alexei A.; Carmack, Condie E.; Qian, Yong; Aiba, Kazuhiro; Ko, Naomi T.; Dudekula, Dawood B.; Brzoska, Pius M.; Hwang, S. Stuart; Ko, Minoru S.H.
2003-01-01
Applications of microarray technologies to mouse embryology/genetics have been limited, due to the nonavailability of microarrays containing large numbers of embryonic genes and the gap between microgram quantities of RNA required by typical microarray methods and the miniscule amounts of tissue available to researchers. To overcome these problems, we have developed a microarray platform containing in situ-synthesized 60-mer oligonucleotide probes representing approximately 22,000 unique mouse transcripts, assembled primarily from sequences of stem cell and embryo cDNA libraries. We have optimized RNA labeling protocols and experimental designs to use as little as 2 ng total RNA reliably and reproducibly. At least 98% of the probes contained in the microarray correspond to clones in our publicly available collections, making cDNAs readily available for further experimentation on genes of interest. These characteristics, combined with the ability to profile very small samples, make this system a resource for stem cell and embryogenomics research. [Supplemental material is available online at www.genome.org and at the NIA Mouse cDNA Project Web site, http://lgsun.grc.nia.nih.gov/cDNA/cDNA.html.] PMID:12727912
An event-specific DNA microarray to identify genetically modified organisms in processed foods.
Kim, Jae-Hwan; Kim, Su-Youn; Lee, Hyungjae; Kim, Young-Rok; Kim, Hae-Yeong
2010-05-26
We developed an event-specific DNA microarray system to identify 19 genetically modified organisms (GMOs), including two GM soybeans (GTS-40-3-2 and A2704-12), thirteen GM maizes (Bt176, Bt11, MON810, MON863, NK603, GA21, T25, TC1507, Bt10, DAS59122-7, TC6275, MIR604, and LY038), three GM canolas (GT73, MS8xRF3, and T45), and one GM cotton (LLcotton25). The microarray included 27 oligonucleotide probes optimized to identify endogenous reference targets, event-specific targets, screening targets (35S promoter and nos terminator), and an internal target (18S rRNA gene). Thirty-seven maize-containing food products purchased from South Korean and US markets were tested for the presence of GM maize using this microarray system. Thirteen GM maize events were simultaneously detected using multiplex PCR coupled with microarray on a single chip, at a limit of detection of approximately 0.5%. Using the system described here, we detected GM maize in 11 of the 37 food samples tested. These results suggest that an event-specific DNA microarray system can reliably detect GMOs in processed foods.
Finite elements and approximation
Zienkiewicz, O C
2006-01-01
A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o
Approximate Counting of Graphical Realizations.
Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos
2015-01-01
In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations.
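A realization of a degree sequence exists exactly when the sequence is graphical. Before sampling or counting realizations as the paper does, this is usually checked with the Erdős–Gallai conditions; the sketch below is that standard test, not an algorithm from the paper:

```python
def is_graphical(degrees):
    """Erdos-Gallai test: can this degree sequence be realized by a simple graph?"""
    seq = sorted(degrees, reverse=True)
    n = len(seq)
    # Degree sum must be even, and no degree may be negative or exceed n - 1.
    if sum(seq) % 2 or (seq and (seq[-1] < 0 or seq[0] >= n)):
        return False
    # Erdos-Gallai inequalities for every prefix length k.
    for k in range(1, n + 1):
        lhs = sum(seq[:k])
        rhs = k * (k - 1) + sum(min(d, k) for d in seq[k:])
        if lhs > rhs:
            return False
    return True

print(is_graphical([3, 3, 2, 2, 2]))  # True: realized e.g. by the "house" graph
print(is_graphical([4, 4, 4, 1, 1]))  # False: three vertices of degree 4 need
                                      # more high-degree neighbors than exist
```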
Posttranslational Modification Assays on Functional Protein Microarrays.
Neiswinger, Johnathan; Uzoma, Ijeoma; Cox, Eric; Rho, HeeSool; Jeong, Jun Seop; Zhu, Heng
2016-10-03
Protein microarray technology provides a straightforward yet powerful strategy for identifying substrates of posttranslational modifications (PTMs) and studying the specificity of the enzymes that catalyze these reactions. Protein microarray assays can be designed for individual enzymes or a mixture to establish connections between enzymes and substrates. Assays for four well-known PTMs-phosphorylation, acetylation, ubiquitylation, and SUMOylation-have been developed and are described here for use on functional protein microarrays. Phosphorylation and acetylation require a single enzyme and are easily adapted for use on an array. The ubiquitylation and SUMOylation cascades are very similar, and the combination of the E1, E2, and E3 enzymes plus ubiquitin or SUMO protein and ATP is sufficient for in vitro modification of many substrates.
Approximate Bayesian computation.
Mikael Sunnåker
Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support the data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over recent years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
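The likelihood bypass described above can be sketched with the simplest ABC variant, rejection sampling: draw a parameter from the prior, simulate data under it, and keep the draw only if a summary statistic of the simulated data lands within a tolerance of the observed one. The prior, summary statistic, and tolerance below are illustrative choices for inferring a normal mean:

```python
import random

def abc_rejection(observed_mean, n_samples=200, n_draws=5000, eps=0.1):
    """Likelihood-free inference of a normal mean via ABC rejection sampling."""
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(-5, 5)                  # draw from the prior
        sim = [random.gauss(theta, 1) for _ in range(n_samples)]
        summary = sum(sim) / n_samples                 # summary statistic
        if abs(summary - observed_mean) < eps:         # distance check
            accepted.append(theta)
    return accepted  # approximate posterior sample

random.seed(42)
posterior = abc_rejection(observed_mean=1.0)
estimate = sum(posterior) / len(posterior)
print(round(estimate, 1))  # close to the true mean of 1.0
```

The accepted draws approximate the posterior without ever evaluating a likelihood; shrinking `eps` improves the approximation at the cost of a lower acceptance rate.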
魏珂; 任建华; 孟祥福
2012-01-01
Based on relaxation of XML twig query fragments, this paper proposes an approximate querying and result-ranking approach for XML documents. The method gathers the query history to infer the user's preferences, which are used to calculate the importance of each fragment of the twig query; the original query is then relaxed in order of fragment importance. The relaxation strategy depends on the number of query fragments: if there are more than two, the query is relaxed at the granularity of fragments; if fewer, it is relaxed at the granularity of query nodes, with numerical and non-numerical queries relaxed in different ways, yielding the most relevant query results. Finally, the relevant query results are ranked by their degree of satisfaction of the original query and the user preferences. Experiments show that the approximate querying and result-ranking approach can effectively meet the user's needs and preferences, with high recall and precision.
Canonical Sets of Best L1-Approximation
Dimiter Dryanov
2012-01-01
Full Text Available In mathematics, the term approximation usually means either interpolation on a point set or approximation with respect to a given distance. There is a concept, which joins the two approaches together, and this is the concept of characterization of the best approximants via interpolation. It turns out that for some large classes of functions the best approximants with respect to a certain distance can be constructed by interpolation on a point set that does not depend on the choice of the function to be approximated. Such point sets are called canonical sets of best approximation. The present paper summarizes results on canonical sets of best L1-approximation with emphasis on multivariate interpolation and best L1-approximation by blending functions. The best L1-approximants are characterized as transfinite interpolants on canonical sets. The notion of a Haar-Chebyshev system in the multivariate case is discussed also. In this context, it is shown that some multivariate interpolation spaces share properties of univariate Haar-Chebyshev systems. We study also the problem of best one-sided multivariate L1-approximation by sums of univariate functions. Explicit constructions of best one-sided L1-approximants give rise to well-known and new inequalities.
Hybridization and Selective Release of DNA Microarrays
Beer, N R; Baker, B; Piggott, T; Maberry, S; Hara, C M; DeOtte, J; Benett, W; Mukerjee, E; Dzenitis, J; Wheeler, E K
2011-11-29
DNA microarrays contain sequence specific probes arrayed in distinct spots numbering from 10,000 to over 1,000,000, depending on the platform. This tremendous degree of multiplexing gives microarrays great potential for environmental background sampling, broad-spectrum clinical monitoring, and continuous biological threat detection. In practice, their use in these applications is not common due to limited information content, long processing times, and high cost. The work focused on characterizing the phenomena of microarray hybridization and selective release that will allow these limitations to be addressed. This will revolutionize the ways that microarrays can be used for LLNL's Global Security missions. The goals of this project were two-fold: faster, automated hybridizations and selective release of hybridized features. The first study area involves hybridization kinetics and mass-transfer effects. The standard hybridization protocol uses an overnight incubation to achieve the best possible signal for any sample type, as well as for convenience in manual processing. There is potential to significantly shorten this time based on better understanding and control of the rate-limiting processes and knowledge of the progress of the hybridization. In the hybridization work, a custom microarray flow cell was used to manipulate the chemical and thermal environment of the array and autonomously image the changes over time during hybridization. The second study area is selective release. Microarrays easily generate hybridization patterns and signatures, but there is still an unmet need for methodologies enabling rapid and selective analysis of these patterns and signatures. Detailed analysis of individual spots by subsequent sequencing could potentially yield significant information for rapidly mutating and emerging (or deliberately engineered) pathogens. In the selective release work, optical energy deposition with coherent light quickly provides the thermal energy
Kennan, Avril; Aherne, Aileen; Palfi, Arpad
2002-01-01
Comparative analysis of the transcriptional profiles of approximately 6000 genes in the retinas of wild-type mice with those carrying a targeted disruption of the rhodopsin gene was undertaken by microarray analysis. This revealed a series of transcripts, of which some were derived from genes known...... is not present in healthy, unrelated individuals of European origin. These data provide strong evidence that mutations within the IMPDH1 gene cause adRP, and validate approaches to mutation detection involving comparative analysis of global transcription profiles in normal and degenerating retinal tissues. Other...
The use of microarrays in microbial ecology
Andersen, G.L.; He, Z.; DeSantis, T.Z.; Brodie, E.L.; Zhou, J.
2009-09-15
Microarrays have proven to be a useful and high-throughput method to provide targeted DNA sequence information for up to many thousands of specific genetic regions in a single test. A microarray consists of multiple DNA oligonucleotide probes that, under high stringency conditions, hybridize only to specific complementary nucleic acid sequences (targets). A fluorescent signal indicates the presence and, in many cases, the abundance of genetic regions of interest. In this chapter we will look at how microarrays are used in microbial ecology, especially with the recent increase in microbial community DNA sequence data. Of particular interest to microbial ecologists, phylogenetic microarrays are used for the analysis of phylotypes in a community and functional gene arrays are used for the analysis of functional genes, and, by inference, phylotypes in environmental samples. A phylogenetic microarray that has been developed by the Andersen laboratory, the PhyloChip, will be discussed as an example of a microarray that targets the known diversity within the 16S rRNA gene to determine microbial community composition. Using multiple, confirmatory probes to increase the confidence of detection and a mismatch probe for every perfect match probe to minimize the effect of cross-hybridization by non-target regions, the PhyloChip is able to simultaneously identify any of thousands of taxa present in an environmental sample. The PhyloChip is shown to reveal greater diversity within a community than rRNA gene sequencing due to the placement of the entire gene product on the microarray compared with the analysis of up to thousands of individual molecules by traditional sequencing methods. A functional gene array that has been developed by the Zhou laboratory, the GeoChip, will be discussed as an example of a microarray that dynamically identifies functional activities of multiple members within a community. The recent version of GeoChip contains more than 24,000 50mer
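The perfect-match/mismatch probe logic described for the PhyloChip can be made concrete with a simplified presence call. The cutoff values and the ratio test below are illustrative stand-ins, not the published PhyloChip scoring rules:

```python
def taxon_present(probe_pairs, ratio_cutoff=1.3, frac_cutoff=0.9):
    """Simplified PhyloChip-style call: a taxon is scored present when a
    large fraction of its perfect-match (PM) probes out-shine their paired
    mismatch (MM) probes, which guards against cross-hybridization.
    Cutoffs here are illustrative, not the published defaults."""
    positive = sum(1 for pm, mm in probe_pairs
                   if mm > 0 and pm / mm >= ratio_cutoff)
    return positive / len(probe_pairs) >= frac_cutoff

# 11 probe pairs for one hypothetical taxon: (PM, MM) intensities.
pairs = [(850, 300), (900, 310), (780, 290), (820, 305), (760, 295),
         (910, 320), (840, 300), (800, 310), (770, 290), (880, 315),
         (450, 400)]
print(taxon_present(pairs))  # True: 10 of 11 pairs pass the ratio cutoff
```

Requiring most of many confirmatory probe pairs to agree is what lets an array distinguish a genuine target from a single cross-hybridizing probe.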
Prominent feature selection of microarray data
Yihui Liu
2009-01-01
In the wavelet transform, a set of orthogonal wavelet basis functions is used to detect the localized changing features contained in microarray data. In this research, we investigate the performance of features selected from the wavelet detail coefficients at the second and third levels. A genetic algorithm is performed to optimize the wavelet detail coefficients and select the best discriminant features. Experiments are carried out on four microarray datasets to evaluate classification performance. Experimental results show that wavelet features optimized from detail coefficients efficiently characterize the differences between normal tissues and cancer tissues.
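The localized features that wavelet detail coefficients capture can be seen in a small pure-Python Haar transform. The profile values and the choice of the Haar basis are illustrative assumptions, and the genetic-algorithm selection step the study applies on top of the coefficients is omitted here:

```python
import math

def haar_step(signal):
    """One Haar DWT level: returns (approximation, detail) coefficients."""
    s = math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def detail_features(expression, level):
    """Detail coefficients at the requested decomposition level, used here
    as candidate localized features of an expression profile."""
    approx = list(expression)
    for _ in range(level):
        approx, detail = haar_step(approx)
    return detail

profile = [1.0, 1.1, 0.9, 1.0, 5.0, 5.2, 1.0, 0.9]  # a localized jump mid-profile
print([round(d, 2) for d in detail_features(profile, 2)])  # [0.1, 4.15]
```

The large second coefficient flags exactly the region where the profile jumps, which is the kind of localized change the abstract argues detail coefficients expose.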
Diagnostic and analytical applications of protein microarrays
Dufva, Hans Martin; Christensen, C.B.V.
2005-01-01
-linked immunosorbent assay, mass spectrometry or high-performance liquid chromatography-based assays. However, for protein and antibody arrays to be successfully introduced into diagnostics, the biochemistry of immunomicroarrays must be better characterized and simplified, they must be validated in a clinical setting...... years. A genome-scale protein microarray has been demonstrated for identifying protein-protein interactions as well as for rapid identification of protein binding to a particular drug. Furthermore, protein microarrays have been shown as an efficient tool in cancer profiling, detection of bacteria...
Microarrays - A Key Technology for Glycobiology
Liu, Yan; Feizi, Ten
Carbohydrate chains of glycoproteins, glycolipids, and proteoglycans can mediate processes of biological and medical importance through their interactions with complementary proteins. The unraveling of these interactions is therefore a priority in the biomedical sciences. Carbohydrate microarray technology is a new development at the frontiers of glycomics that has revolutionized the study of carbohydrate-protein interactions and the elucidation of their specificities in endogenous biological processes, immune defense mechanisms, and microbe-host interactions. In this chapter we briefly touch upon the principles of the numerous platforms introduced since carbohydrate microarrays first appeared in 2002, and we highlight platforms that are beyond proof-of-concept and have provided new biological information.
Development and application of a microarray meter tool to optimize microarray experiments
Rouse Richard JD
2008-07-01
Full Text Available Abstract Background Successful microarray experimentation requires a complex interplay between the slide chemistry, the printing pins, the nucleic acid probes and targets, and the hybridization milieu. Optimization of these parameters and a careful evaluation of emerging slide chemistries are a prerequisite to any large scale array fabrication effort. We have developed a 'microarray meter' tool which assesses the inherent variations associated with microarray measurement prior to embarking on large scale projects. Findings The microarray meter consists of nucleic acid targets (reference and dynamic range control and probe components. Different plate designs containing identical probe material were formulated to accommodate different robotic and pin designs. We examined the variability in probe quality and quantity (as judged by the amount of DNA printed and remaining post-hybridization using three robots equipped with capillary printing pins. Discussion The generation of microarray data with minimal variation requires consistent quality control of the (DNA microarray manufacturing and experimental processes. Spot reproducibility is a measure primarily of the variations associated with printing. The microarray meter assesses array quality by measuring the DNA content for every feature. It provides a post-hybridization analysis of array quality by scoring probe performance using three metrics: (a) a measure of variability in the signal intensities, (b) a measure of the signal dynamic range, and (c) a measure of variability of the spot morphologies.
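The three post-hybridization metrics can be sketched with plausible stand-in formulas (coefficients of variation and a max/min ratio); the authors' exact definitions may differ:

```python
import statistics

def array_quality_metrics(intensities, spot_areas):
    """Three illustrative quality scores in the spirit of the 'microarray
    meter': (a) variability of signal intensities, (b) signal dynamic
    range, and (c) variability of spot morphologies (here: spot areas).
    The formulas are plausible stand-ins, not the authors' definitions."""
    cv_signal = statistics.stdev(intensities) / statistics.fmean(intensities)
    dynamic_range = max(intensities) / min(intensities)
    cv_morphology = statistics.stdev(spot_areas) / statistics.fmean(spot_areas)
    return cv_signal, dynamic_range, cv_morphology

# Hypothetical per-spot measurements for one printed feature across replicates.
cv_s, dr, cv_m = array_quality_metrics(
    intensities=[1200, 1150, 1300, 1250, 1180],
    spot_areas=[100, 98, 103, 101, 99],
)
print(round(dr, 2))  # 1300 / 1150 -> 1.13
```

A slide chemistry or pin that inflates any of the three scores would be flagged before committing to a large-scale fabrication run.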
Approximate Qualitative Temporal Reasoning
2001-01-01
Roy, Swapnoneel; Thakur, Ashok Kumar
2008-01-01
Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such genome rearrangement primitive, strip exchange. Given a permutation, the strip exchange problem is to sort it using the minimum number of strip exchanges, where a strip-exchanging move interchanges the positions of two chosen strips so that they merge with other strips. We present the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly, we discuss an application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.
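The strip-exchange primitive itself is easy to make concrete. This sketch only illustrates the move (maximal increasing runs and swapping two of them), not the paper's 2-approximation algorithm:

```python
def strips(perm):
    """Maximal runs of consecutive increasing-by-one elements."""
    out, cur = [], [perm[0]]
    for x in perm[1:]:
        if x == cur[-1] + 1:
            cur.append(x)
        else:
            out.append(cur)
            cur = [x]
    out.append(cur)
    return out

def strip_exchange(perm, i, j):
    """Swap the i-th and j-th maximal strips of the permutation."""
    s = strips(perm)
    s[i], s[j] = s[j], s[i]
    return [x for strip in s for x in strip]

p = [4, 5, 6, 1, 2, 3]          # two strips: [4,5,6] and [1,2,3]
print(strip_exchange(p, 0, 1))  # one exchange sorts it: [1, 2, 3, 4, 5, 6]
```

Sorting by such moves means repeatedly choosing the pair of strips whose exchange most reduces the strip count, which is where the approximation analysis comes in.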
Approximation by Cylinder Surfaces
Randrup, Thomas
1997-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal projection of the surface onto this plane, a reference curve is determined by use of methods for thinning of binary images. Finally, the cylinder surface is constructed as follows: the directrix of the cylinder surface is determined by a least squares method minimizing the distance to the points in the projection within a tolerance given by the reference curve, and the rulings are lines perpendicular to the projection plane. An application of the method in ship design is given.
KUDRYAVTSEV Pavel Gennadievich
2015-04-01
Full Text Available The paper deals with possibilities of using the quasi-homogeneous approximation to describe the properties of dispersed systems. The authors applied the statistical polymer method, based on consideration of the average structures of all possible macromolecules of the same weight. Equations were derived that allow many additive parameters of macromolecules, and of the systems containing them, to be evaluated. The statistical polymer method makes it possible to model branched, cross-linked macromolecules and systems containing them in equilibrium or non-equilibrium states. Fractal analysis of a statistical polymer allows modeling of different types of random fractals and other objects examined by the methods of fractal theory. The statistical polymer method can be applied not only to polymers but also to composites, gels, associates in polar liquids and other packed systems. There is also a description of the states of colloidal solutions of silica from the standpoint of statistical physics. This approach is based on the idea that a colloidal solution of silicon dioxide – a silica sol – consists of an enormous number of interacting particles in continuous motion. The paper is devoted to the research of an idealized system of colliding but non-interacting sol particles. The behavior of the silica sol was analyzed using the Maxwell-Boltzmann distribution, and the mean free path was calculated. From these data, the number of particles able to overcome the potential barrier in a collision was calculated. Different approaches to modeling the kinetics of the sol-gel transition were studied.
KUDRYAVTSEV Pavel Gennadievich
2015-06-01
Full Text Available The paper deals with possibilities of using the quasi-homogeneous approximation to describe the properties of dispersed systems. The authors applied the statistical polymer method, based on consideration of the average structures of all possible macromolecules of the same weight. Equations were derived that allow many additive parameters of macromolecules, and of the systems containing them, to be evaluated. The statistical polymer method makes it possible to model branched, cross-linked macromolecules and systems containing them in equilibrium or non-equilibrium states. Fractal analysis of a statistical polymer allows modeling of different types of random fractals and other objects examined by the methods of fractal theory. The statistical polymer method can be applied not only to polymers but also to composites, gels, associates in polar liquids and other packed systems. There is also a description of the states of colloidal solutions of silica from the standpoint of statistical physics. This approach is based on the idea that a colloidal solution of silicon dioxide – a silica sol – consists of an enormous number of interacting particles in continuous motion. The paper is devoted to the research of an idealized system of colliding but non-interacting sol particles. The behavior of the silica sol was analyzed using the Maxwell-Boltzmann distribution, and the mean free path was calculated. From these data, the number of particles able to overcome the potential barrier in a collision was calculated. Different approaches to modeling the kinetics of the sol-gel transition were studied.
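The barrier-crossing estimate mentioned above follows from the survival function of the 3-D Maxwell-Boltzmann energy distribution. The sketch below evaluates it for an illustrative barrier height; the paper's actual parameter values are not given here:

```python
import math

def fraction_over_barrier(e_a, k_t):
    """Fraction of ideal-gas particles whose kinetic energy exceeds the
    activation barrier e_a, from the 3-D Maxwell-Boltzmann energy
    distribution:  P(E > Ea) = erfc(sqrt(x)) + 2*sqrt(x/pi)*exp(-x),
    with x = Ea/kT.  This is the standard survival function of the
    Gamma(3/2) energy law, used here to estimate how many sol particles
    can overcome the potential barrier in a collision."""
    x = e_a / k_t
    return math.erfc(math.sqrt(x)) + 2 * math.sqrt(x / math.pi) * math.exp(-x)

# Illustrative case: a barrier five times the thermal energy kT, so only a
# small fraction of particles is energetic enough to aggregate on collision.
print(fraction_over_barrier(5.0, 1.0))
```

As the barrier grows relative to kT the fraction decays essentially as exp(-Ea/kT), which is why sol-gel kinetics are so sensitive to temperature.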
Gene expression studies using microarrays
Burgess, Janette
2001-01-01
1. The rapid progress of the collaborative sequencing programmes that are unravelling the complete genome sequences of many organisms is opening pathways for new approaches to gene analysis. As the sequence data become available, the bottleneck in biological research will shift to understanding
Dai Yilin
2012-06-01
Full Text Available Abstract Background Microarray data analysis presents a significant challenge to researchers who are unable to use the powerful Bioconductor and its numerous tools due to their lack of knowledge of the R language. Among the few existing software programs that offer a graphical user interface to Bioconductor packages, none have implemented a comprehensive strategy to address the accuracy and reliability issues of microarray data analysis due to the well-known probe design problems associated with many widely used microarray chips. There is also a lack of tools that would expedite the functional analysis of microarray results. Findings We present Microarray Я US, an R-based graphical user interface that implements over a dozen popular Bioconductor packages to offer researchers a streamlined workflow for routine differential microarray expression data analysis without the need to learn the R language. In order to enable a more accurate analysis and interpretation of microarray data, we incorporated the latest custom probe re-definition and re-annotation for Affymetrix and Illumina chips. A versatile microarray results output utility tool was also implemented for easy and fast generation of input files for over 20 of the most widely used functional analysis software programs. Conclusion Coupled with a well-designed user interface, Microarray Я US leverages cutting edge Bioconductor packages for researchers with no knowledge of the R language. It also enables a more reliable and accurate microarray data analysis and expedites downstream functional analysis of microarray results.
Bioinformatics/biostatistics: microarray analysis.
Eichler, Gabriel S
2012-01-01
The quantity and complexity of the molecular-level data generated in both research and clinical settings require the use of sophisticated, powerful computational interpretation techniques. It is for this reason that bioinformatic analysis of complex molecular profiling data has become a fundamental technology in the development of personalized medicine. This chapter provides a high-level overview of the field of bioinformatics and outlines several classic bioinformatic approaches. The highlighted approaches can be aptly applied to nearly any sort of high-dimensional genomic, proteomic, or metabolomic experiment. Reviewed technologies in this chapter include traditional clustering analysis, the Gene Expression Dynamics Inspector (GEDI), GoMiner, Gene Set Enrichment Analysis (GSEA), and the Learner of Functional Enrichment (LeFE).
Kinship Testing Based on SNPs Using Microarray System
Cho, Sohee; Seo, Hee Jin; Lee, Jihyun; Yu, Hyung Jin; Lee, Soong Deok
2016-01-01
Background Kinship testing using biallelic SNP markers has been demonstrated to be a promising approach as a supplement to standard STR typing, and several systems, such as pyrosequencing and microarray, have been introduced and utilized in real forensic cases. The Affymetrix microarray containing 169 autosomal SNPs developed for forensic application was applied to our practical case for kinship analysis that had remained inconclusive due to partial STR profiles from degraded DNA and the possibility of inbreeding within the population. Case Report The 169 autosomal SNPs were typed on the array with severely degraded DNA from two bone samples, and kinship was assessed by comparing the genotypes with a reference database of putative family members. Results The two bone samples had remained unidentified through traditional STR typing, which yielded partial profiles of 10 or 14 of 16 alleles. Because these samples originated from a geographically isolated population, a cautious approach was required when analyzing and declaring true paternity based on PI values alone. In the supplementary SNP typing, 106 and 78 SNPs were obtained, and a match candidate was found in each case, with PI values improved over those obtained using STRs alone and with no discrepant SNPs in the comparison. Conclusion Our case shows that multiple SNPs typed on an array can be useful in practical forensic casework once a reference database is established. PMID:27994531
Kinship Testing Based on SNPs Using Microarray System.
Cho, Sohee; Seo, Hee Jin; Lee, Jihyun; Yu, Hyung Jin; Lee, Soong Deok
2016-11-01
Kinship testing using biallelic SNP markers has been demonstrated to be a promising approach as a supplement to standard STR typing, and several systems, such as pyrosequencing and microarray, have been introduced and utilized in real forensic cases. The Affymetrix microarray containing 169 autosomal SNPs developed for forensic application was applied to our practical case for kinship analysis that had remained inconclusive due to partial STR profiles from degraded DNA and the possibility of inbreeding within the population. The 169 autosomal SNPs were typed on the array with severely degraded DNA from two bone samples, and kinship was assessed by comparing the genotypes with a reference database of putative family members. The two bone samples had remained unidentified through traditional STR typing, which yielded partial profiles of 10 or 14 of 16 alleles. Because these samples originated from a geographically isolated population, a cautious approach was required when analyzing and declaring true paternity based on PI values alone. In the supplementary SNP typing, 106 and 78 SNPs were obtained, and a match candidate was found in each case, with PI values improved over those obtained using STRs alone and with no discrepant SNPs in the comparison. Our case shows that multiple SNPs typed on an array can be useful in practical forensic casework once a reference database is established.
Single-species microarrays and comparative transcriptomics.
Frédéric J J Chain
Full Text Available BACKGROUND: Prefabricated expression microarrays are currently available for only a few species but methods have been proposed to extend their application to comparisons between divergent genomes. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that the hybridization intensity of genomic DNA is a poor basis on which to select unbiased probes on Affymetrix expression arrays for studies of comparative transcriptomics, and that doing so produces spurious results. We used the Affymetrix Xenopus laevis microarray to evaluate expression divergence between X. laevis, X. borealis, and their F1 hybrids. When data are analyzed with probes that interrogate only sequences with confirmed identity in both species, we recover results that differ substantially from analyses that use genomic DNA hybridizations to select probes. CONCLUSIONS/SIGNIFICANCE: Our findings have implications for the experimental design of comparative expression studies that use single-species microarrays, and for our understanding of divergent expression in hybrid clawed frogs. These findings also highlight important limitations of single-species microarrays for studies of comparative transcriptomics of polyploid species.
Microarray Assisted Gene Discovery in Ulcerative Colitis
Brusgaard, Klaus
), and microarray based expression studies. In IBD, the increased production of chemoattractants from the inflamed microenvironment results in the recruitment of activated CD4+ T lymphocytes, leading to tissue damage. Th1 cell-derived cytokines have been reported to be essential mediators in CD, with high (IFN...
Pineal function : Impact of microarray analysis
Klein, David C.; Bailey, Michael J.; Carter, David A.; Kim, Jong-so; Shi, Qiong; Ho, Anthony K.; Chik, Constance L.; Gaildrat, Pascaline; Morin, Fabrice; Ganguly, Surajit; Rath, Martin F.; Moller, Morten; Sugden, David; Rangel, Zoila G.; Munson, Peter J.; Weller, Joan L.; Coon, Steven L.
2010-01-01
Microarray analysis has provided a new understanding of pineal function by identifying genes that are highly expressed in this tissue relative to other tissues and also by identifying over 600 genes that are expressed on a 24-h schedule. This effort has highlighted surprising similarity to the retin
Design of a covalently bonded glycosphingolipid microarray
Arigi, Emma; Blixt, Klas Ola; Buschard, Karsten
2012-01-01
-mercaptoethylamine, was also tested. Underivatized or linker-derivatized lyso-GSL were then immobilized on N-hydroxysuccinimide- or epoxide-activated glass microarray slides and probed with carbohydrate binding proteins of known or partially known specificities (i.e., cholera toxin B-chain; peanut agglutinin...
赵荣珍; 于昊; 徐继刚
2012-01-01
A new fault diagnosis approach for rotor systems is proposed based on local mean decomposition (LMD), approximate entropy, and hidden Markov models (HMM). The fine localization property of LMD and approximate entropy are used to quantify fault features, which are then combined with an HMM to identify the fault type. The LMD method decomposes the rotor vibration signal into a sum of product function (PF) components whose instantaneous frequencies have physical meaning. The approximate entropies of the first three PF components are taken as the feature vector of the signal, and this feature vector is input to an HMM classifier to recognize the fault type. Simulation results show that this method effectively extracts fault characteristics and that, combined with the dynamic statistical properties of the HMM, the rotor fault type can be identified intelligently.
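Approximate entropy, used above to quantify the PF components, has a standard definition that fits in a few lines. The parameters m = 2 and r = 0.2 are common illustrative choices, not necessarily those of the paper:

```python
import math

def approximate_entropy(series, m=2, r=0.2):
    """Approximate entropy ApEn(m, r): a regularity statistic. Low values
    indicate repetitive dynamics; higher values, more irregularity."""
    def phi(m):
        n = len(series) - m + 1
        templates = [series[i:i + m] for i in range(n)]
        total = 0.0
        for t in templates:
            # Count templates within Chebyshev distance r (self-match included).
            matches = sum(
                1 for u in templates
                if max(abs(a - b) for a, b in zip(t, u)) <= r
            )
            total += math.log(matches / n)
        return total / n
    return phi(m) - phi(m + 1)

regular = [0.0, 1.0] * 20               # perfectly repetitive signal
print(approximate_entropy(regular))     # close to 0: highly regular
```

A healthy rotor's vibration components tend to be more regular (lower ApEn) than those of a faulted rotor, which is what makes the entropies usable as HMM features.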
A Method of Microarray Data Storage Using Array Data Type
Tsoi, Lam C.; Zheng, W. Jim
2009-01-01
A well-designed microarray database can provide valuable information on gene expression levels. However, designing an efficient microarray database with minimum space usage is not an easy task, since designers need to integrate the microarray data with information on genes, probe annotation, and descriptions of each microarray experiment. Developing better methods to store microarray data can greatly improve the efficiency and usefulness of such data. A new schema is proposed for storing microarray data using the array data type in an object-relational database management system, PostgreSQL. With PostgreSQL's variable-length array data type, the implemented database can store all the microarray data from the same chip in a single array structure. The implementation of our schema helps to increase data retrieval and space efficiency. PMID:17392028
Genome-scale cluster analysis of replicated microarrays using shrinkage correlation coefficient.
Yao, Jianchao; Chang, Chunqi; Salmi, Mari L; Hung, Yeung Sam; Loraine, Ann; Roux, Stanley J
2008-06-18
This study shows that the shrinkage correlation coefficient (SCC) is an alternative to the Pearson correlation coefficient and the SD-weighted correlation coefficient, and is particularly useful for clustering replicated microarray data. This computational approach should also be generally useful for proteomic data and other high-throughput analyses.
Genome-scale cluster analysis of replicated microarrays using shrinkage correlation coefficient
Loraine Ann
2008-06-01
This study shows that SCC is an alternative to the Pearson correlation coefficient and the SD-weighted correlation coefficient, and is particularly useful for clustering replicated microarray data. This computational approach should also be generally useful for proteomic data and other high-throughput analyses.
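The general idea of shrinking a noisy sample correlation can be sketched as follows. The fixed shrinkage intensity and the zero target are illustrative simplifications, not the SCC estimator itself, which derives its intensity from the replicate data:

```python
import statistics

def pearson(x, y):
    """Plain sample Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def shrunk_correlation(x, y, lam=0.3):
    """Shrink the sample correlation toward 0 (the 'no correlation' target).
    With few replicates, raw correlations are noisy; shrinkage trades a
    little bias for much lower variance. lam=0.3 is an arbitrary
    illustration; a data-driven intensity would be estimated from the
    replicates themselves."""
    return (1 - lam) * pearson(x, y)

gene_a = [2.1, 2.4, 1.9, 2.2]   # expression of one gene across 4 replicates
gene_b = [2.0, 2.5, 1.8, 2.3]
print(shrunk_correlation(gene_a, gene_b))  # pulled toward 0 relative to Pearson
```

Clustering on shrunk correlations damps spurious high similarities that arise purely from small replicate counts, which is the motivation the abstract gives for preferring SCC.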
Examining microarray slide quality for the EPA using SNL's hyperspectral microarray scanner.
Rohde, Rachel M.; Timlin, Jerilyn Ann
2005-11-01
This report summarizes research performed at Sandia National Laboratories (SNL) in collaboration with the Environmental Protection Agency (EPA) to assess microarray quality on arrays from two platforms of interest to the EPA. Custom microarrays from two novel, commercially produced array platforms were imaged with SNL's unique hyperspectral imaging technology, and multivariate data analysis was performed to investigate sources of emission on the arrays. No extraneous sources of emission were evident in any of the array areas scanned. This led to the conclusion that either of these array platforms could produce high-quality, reliable microarray data for the EPA toxicology programs. Hyperspectral imaging results are presented and recommendations for microarray analyses using these platforms are detailed within the report.
Facilitating functional annotation of chicken microarray data
Gresham Cathy R
2009-10-01
Full Text Available Abstract Background Modeling results from chicken microarray studies is challenging for researchers due to the little functional annotation associated with these arrays. The Affymetrix GeneChip chicken genome array, one of the largest arrays serving as a key research tool for the study of chicken functional genomics, is among the few arrays that link gene products to Gene Ontology (GO). However, the GO annotation data presented by Affymetrix are incomplete; for example, they do not show references linked to manually annotated functions. In addition, there is no tool that facilitates microarray researchers to directly retrieve functional annotations for their datasets from the annotated arrays. This costs researchers a significant amount of time searching multiple GO databases for functional information. Results We have improved the breadth of functional annotations of the gene products associated with probesets on the Affymetrix chicken genome array by 45% and the quality of annotation by 14%. We have also identified the most significant diseases and disorders, different types of genes, and known drug targets represented on the Affymetrix chicken genome array. To facilitate functional annotation of other arrays and microarray experimental datasets we developed an Array GO Mapper (AGOM) tool to help researchers quickly retrieve corresponding functional information for their datasets. Conclusion Results from this study will directly facilitate annotation of other chicken arrays and microarray experimental datasets. Researchers will be able to quickly model their microarray datasets into more reliable biological functional information by using the AGOM tool. The diseases, disorders, gene types and drug targets revealed in the study will allow researchers to learn more about how genes function in complex biological systems and may lead to new drug discovery and development of therapies. The GO annotation data generated will be available for public use via AgBase website and
Design of a covalently bonded glycosphingolipid microarray.
Arigi, Emma; Blixt, Ola; Buschard, Karsten; Clausen, Henrik; Levery, Steven B
2012-01-01
Glycosphingolipids (GSLs) are well known ubiquitous constituents of all eukaryotic cell membranes, yet their normal biological functions are not fully understood. As with other glycoconjugates and saccharides, solid phase display on microarrays potentially provides an effective platform for in vitro study of their functional interactions. However, with few exceptions, the most widely used microarray platforms display only the glycan moiety of GSLs, which not only ignores potential modulating effects of the lipid aglycone, but inherently limits the scope of application, excluding, for example, the major classes of plant and fungal GSLs. In this work, a prototype "universal" GSL-based covalent microarray has been designed, and preliminary evaluation of its potential utility in assaying protein-GSL binding interactions investigated. An essential step in development involved the enzymatic release of the fatty acyl moiety of the ceramide aglycone of selected mammalian GSLs with sphingolipid N-deacylase (SCDase). Derivatization of the free amino group of a typical lyso-GSL, lyso-G(M1), with a prototype linker assembled from succinimidyl-[(N-maleimidopropionamido)-diethyleneglycol] ester and 2-mercaptoethylamine, was also tested. Underivatized or linker-derivatized lyso-GSL were then immobilized on N-hydroxysuccinimide- or epoxide-activated glass microarray slides and probed with carbohydrate binding proteins of known or partially known specificities (i.e., cholera toxin B-chain; peanut agglutinin, a monoclonal antibody to sulfatide, Sulph 1; and a polyclonal antiserum reactive to asialo-G(M2)). Preliminary evaluation of the method indicated successful immobilization of the GSLs, and selective binding of test probes. The potential utility of this methodology for designing covalent microarrays that incorporate GSLs for serodiagnosis is discussed.
MIDClass: microarray data classification by association rules and gene expression intervals.
Rosalba Giugno
Full Text Available We present a new classification method for expression profiling data, called MIDClass (Microarray Interval Discriminant CLASSifier), based on association rules. It classifies expression profiles by exploiting the idea that transcript expression intervals better discriminate subtypes in the same class. A wide experimental analysis shows the effectiveness of MIDClass compared to the most prominent classification approaches.
Determination of strongly overlapping signaling activity from microarray data
Bidaut Ghislain
2006-02-01
Full Text Available Abstract Background As numerous diseases involve errors in signal transduction, modern therapeutics often target proteins involved in cellular signaling. Interpretation of the activity of signaling pathways during disease development or therapeutic intervention would assist in drug development, design of therapy, and target identification. Microarrays provide a global measure of cellular response; however, linking these responses to signaling pathways requires an analytic approach tuned to the underlying biology. An ongoing issue in pattern recognition in microarrays has been how to determine the number of patterns (or clusters) to use for data interpretation, and this is a critical issue as measures of statistical significance in gene ontology or pathways rely on proper separation of genes into groups. Results Here we introduce a method relying on gene annotation coupled to decompositional analysis of global gene expression data that allows us to estimate specific activity on strongly coupled signaling pathways and, in some cases, activity of specific signaling proteins. We demonstrate the technique using the Rosetta yeast deletion mutant data set, decompositional analysis by Bayesian Decomposition, and annotation analysis using ClutrFree. We determined from measurements of gene persistence in patterns across multiple potential dimensionalities that 15 basis vectors provide the correct dimensionality for interpreting the data. Using gene ontology and data on gene regulation in the Saccharomyces Genome Database, we identified the transcriptional signatures of several cellular processes in yeast, including cell wall creation, ribosomal disruption, chemical blocking of protein synthesis, and, critically, individual signatures of the strongly coupled mating and filamentation pathways. Conclusion This work demonstrates that microarray data can provide downstream indicators of pathway activity either through use of gene ontology or transcription
Visualization methods for statistical analysis of microarray clusters
Li Kai
2005-05-01
Full Text Available Abstract Background The most common method of identifying groups of functionally related genes in microarray data is to apply a clustering algorithm. However, it is impossible to determine which clustering algorithm is most appropriate to apply, and it is difficult to verify the results of any algorithm due to the lack of a gold-standard. Appropriate data visualization tools can aid this analysis process, but existing visualization methods do not specifically address this issue. Results We present several visualization techniques that incorporate meaningful statistics that are noise-robust for the purpose of analyzing the results of clustering algorithms on microarray data. This includes a rank-based visualization method that is more robust to noise, a difference display method to aid assessments of cluster quality and detection of outliers, and a projection of high dimensional data into a three dimensional space in order to examine relationships between clusters. Our methods are interactive and are dynamically linked together for comprehensive analysis. Further, our approach applies to both protein and gene expression microarrays, and our architecture is scalable for use on both desktop/laptop screens and large-scale display devices. This methodology is implemented in GeneVAnD (Genomic Visual ANalysis of Datasets) and is available at http://function.princeton.edu/GeneVAnD. Conclusion Incorporating relevant statistical information into data visualizations is key for analysis of large biological datasets, particularly because of high levels of noise and the lack of a gold-standard for comparisons. We developed several new visualization techniques and demonstrated their effectiveness for evaluating cluster quality and relationships between clusters.
Mathematical design of prokaryotic clone-based microarrays
Quirijns Elisabeth J
2005-09-01
Full Text Available Abstract Background Clone-based microarrays, on which each spot represents a random genomic fragment, are a good alternative to open reading frame-based microarrays, especially for microorganisms for which the complete genome sequence is not available. Since the generation of a genomic DNA library is a random process, it is beforehand uncertain which genes are represented. Nevertheless, the genome coverage of such an array, which depends on different variables like the insert size and the number of clones in the library, can be predicted by mathematical approaches. When applying the classical formulas that determine the probability that a certain sequence is represented in a DNA library at the nucleotide level, massive amounts of clones would be necessary to obtain a proper coverage of the genome. Results This paper describes the development of two complementary equations for determining the genome coverage at the gene level. The first equation predicts the fraction of genes that are represented on the array in a detectable way and that cover at least a set part (the minimal insert coverage) of the genomic fragment by which these genes are represented. The higher this minimal insert coverage, the larger the chance that changes in expression of a specific gene can be detected and attributed to that gene. The second equation predicts the fraction of genes that are represented in spots on the array that only represent genes from a single transcription unit; such information can be interpreted in a quantitative way. Conclusion Validation of these equations shows that they form reliable tools supporting optimal design of prokaryotic clone-based microarrays.
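The classical nucleotide-level calculation that the paper contrasts with can be sketched with the Clarke-Carbon formula, which relates library size to the probability that a given sequence is represented. The genome size, insert size, and coverage probability below are illustrative choices, not values from the paper:

```python
import math

def clones_required(genome_size, insert_size, prob):
    """Clarke-Carbon estimate: number of random clones needed so that a
    given sequence is represented in the library with probability `prob`."""
    f = insert_size / genome_size      # chance a single random clone covers it
    return math.ceil(math.log(1.0 - prob) / math.log(1.0 - f))

# Illustrative numbers: 4 Mb genome, 2 kb inserts, 99% coverage probability
n = clones_required(4_000_000, 2_000, 0.99)
```

With these numbers the formula calls for roughly nine thousand clones, which illustrates the paper's point that nucleotide-level coverage demands massive libraries compared with gene-level coverage.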
Fields, Matthew W.
2005-06-01
One of the major goals of the project is to construct whole-genome microarrays for Desulfovibrio vulgaris. Previous whole-genome microarrays constructed at ORNL have been PCR-amplimer based, and we wanted to re-evaluate the type of microarrays being built because oligonucleotide probes have several advantages. Microarrays have generally been constructed with two types of probes: PCR-generated probes that typically range in size between 200 and 2000 bp, and oligonucleotide probes with typical sizes of 20-70 nt. Producing PCR product-based DNA arrays can be a time-consuming procedure that includes PCR primer design, amplification, size verification, product purification, and product quantification. Also, some ORFs are difficult to amplify, and thus the construction of comprehensive arrays can be a challenge. Recently, to alleviate some of the problems associated with PCR product-based microarrays, oligonucleotide microarrays that contain probes longer than 40 nt have been evaluated and used for whole-genome expression studies. These microarrays should have higher specificity and are easier to construct, and can thus provide an important alternative approach to monitor gene expression. However, due to the smaller probe size, it is expected that the detection sensitivity of oligonucleotide arrays will be lower than that of PCR product-based probes.
Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.
Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben
2017-06-06
Modeling complex time-course patterns is a challenging issue in microarray study due to complex gene expression patterns in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering the heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature in the generalized correlation analysis could be a useful and efficient tool for analyzing microarray time-course data and for exploring the complex relationships in the omics data for studying their association with disease and health.
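The paper's generalized correlation coefficient is not reproduced here, but a simpler rank-based correlation (Spearman's, a classical special case of generalized correlation frameworks) illustrates why rank methods capture nonlinear yet monotone time-course patterns; the time points and expression values below are invented for illustration:

```python
def rank(xs):
    """Ranks of xs (1-based), with ties receiving their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation computed on ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

times = [0, 1, 2, 3, 4, 5]
expr = [0.1, 0.5, 2.0, 6.1, 15.8, 40.0]   # strongly nonlinear but monotone
# spearman(times, expr) == 1.0 even though the trend is far from linear
```

A linear (Pearson) correlation on these values would be well below 1, while the rank correlation detects the monotone time-course pattern perfectly, which is the intuition behind non-parametric time-course analysis.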
Ontology-based retrieval of bio-medical information based on microarray text corpora
Hansen, Kim Allan; Zambach, Sine; Have, Christian Theil
Microarray technology is often used in gene expression experiments. Information retrieval in the context of microarrays has mainly been concerned with the analysis of the numeric data produced; however, the experiments are often annotated with textual metadata. Although biomedical resources...... are exponentially growing, the text corpora are sparse and inconsistent in spite of attempts to standardize the format. Ordinary keyword search may in some cases be insufficient to find relevant information and the potential benefit of using a semantic approach in this context has only been investigated to a limited......
Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data
Tan, Qihua; Thomassen, Mads; Burton, Mark
2017-01-01
Modeling complex time-course patterns is a challenging issue in microarray study due to complex gene expression patterns in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering...... the heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature in the generalized correlation analysis could be a useful and efficient tool for analyzing microarray...... time-course data and for exploring the complex relationships in the omics data for studying their association with disease and health....
Jianping Hua
2004-01-01
Full Text Available This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate the segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measurement for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.
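As a rough sketch of how a Mann-Whitney-type statistic can separate spot foreground from background pixels (this is a simplified illustration, not BASICA's actual segmentation algorithm), consider the U statistic computed on two pixel samples from a toy spot window:

```python
def mann_whitney_u(sample_a, sample_b):
    """Rank-sum U statistic: counts pairs where a > b; ties count 0.5.
    U near len(a)*len(b) means sample_a dominates sample_b."""
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Toy spot window: candidate foreground vs. surrounding background pixels
fg = [210, 190, 205, 220, 198]
bg = [40, 55, 38, 60, 47]
u = mann_whitney_u(fg, bg)   # 25.0: every fg pixel exceeds every bg pixel
```

A U value at its maximum (here 5 x 5 = 25) indicates complete separation of the two pixel populations, so the candidate region can be accepted as foreground; intermediate values would indicate an unreliable spot boundary.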
Rational approximations to fluid properties
Kincaid, J. M.
1990-05-01
The purpose of this report is to summarize some results that were presented at the Spring AIChE meeting in Orlando, Florida (20 March 1990). We report on recent attempts to develop a systematic method, based on the technique of rational approximation, for creating mathematical models of real-fluid equations of state and related properties. Equation-of-state models for real fluids are usually created by selecting a function p~(T,rho) that contains a set of parameters {gamma_i}; the {gamma_i} are chosen such that p~(T,rho) provides a good fit to the experimental data. (Here p is the pressure, T the temperature and rho the density.) In most cases, a nonlinear least-squares numerical method is used to determine {gamma_i}. There are several drawbacks to this method: one has essentially to guess what p~(T,rho) should be; the critical region is seldom fit very well; and nonlinear numerical methods are time consuming and sometimes not very stable. The rational approximation approach we describe may eliminate all of these drawbacks. In particular, it lets the data choose the function p~(T,rho) and its numerical implementation involves only linear algorithms.
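The "only linear algorithms" point can be illustrated by fitting a low-order rational function through data via a linearized system: writing y = (a0 + a1*x)/(1 + b1*x) as y = a0 + a1*x - b1*x*y makes the unknowns appear linearly. The function form and sample points below are hypothetical, not from the report:

```python
def fit_rational(xs, ys):
    """Fit y = (a0 + a1*x) / (1 + b1*x) through exactly three data points
    by solving the linearized system y = a0 + a1*x - b1*x*y."""
    n = 3
    # Augmented matrix rows: [1, x, -x*y | y] for unknowns (a0, a1, b1)
    A = [[1.0, x, -x * y, y] for x, y in zip(xs, ys)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    # Back-substitution
    sol = [0.0] * n
    for r in reversed(range(n)):
        sol[r] = (A[r][n] - sum(A[r][c] * sol[c] for c in range(r + 1, n))) / A[r][r]
    return sol

# Recover a known rational function y = (2 + 3x)/(1 + 0.5x) from samples
xs = [0.0, 1.0, 2.0]
ys = [(2 + 3 * x) / (1 + 0.5 * x) for x in xs]
a0, a1, b1 = fit_rational(xs, ys)
# a0 = 2, a1 = 3, b1 = 0.5 recovered exactly
```

Because the unknowns enter linearly after the rearrangement, the fit reduces to ordinary linear algebra (a least-squares version with more data points works the same way), avoiding the iterative nonlinear optimization the report criticizes.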
Post-normalization quality assessment visualization of microarray data
McClure, John; Wit, Ernst
2003-01-01
Post-normalization checking of microarrays rarely occurs, despite the problems that using unreliable data for inference can cause. This paper considers a number of different ways to check microarrays after normalization for a variety of potential problems. Four types of problem with microarray data
SIMAGE : simulation of DNA-microarray gene expression data
Albers, Casper J.; Jansen, Ritsert C.; Kok, Jan; Kuipers, Oscar P.; Hijum, Sacha A.F.T. van
2006-01-01
Simulation of DNA-microarray data serves at least three purposes: (i) optimizing the design of an intended DNA microarray experiment, (ii) comparing existing pre-processing and processing methods for best analysis of a given DNA microarray experiment, (iii) educating students, lab-workers and other
Improved microarray-based decision support with graph encoded interactome data.
Anneleen Daemen
Full Text Available In the past, microarray studies have been criticized due to noise and the limited overlap between gene signatures. Prior biological knowledge should therefore be incorporated as side information in models based on gene expression data to improve the accuracy of diagnosis and prognosis in cancer. As prior knowledge, we investigated interaction and pathway information from the human interactome on different aspects of biological systems. By exploiting the properties of kernel methods, relations between genes with similar functions but active in alternative pathways could be incorporated in a support vector machine classifier based on spectral graph theory. Using 10 microarray data sets, we first reduced the number of data sources relevant for multiple cancer types and outcomes. Three sources on metabolic pathway information (KEGG), protein-protein interactions (OPHID) and miRNA-gene targeting (microRNA.org) outperformed the other sources with regard to the considered class of models. Both fixed and adaptive approaches were subsequently considered to combine the three corresponding classifiers. Averaging the predictions of these classifiers performed best and was significantly better than the model based on microarray data only. These results were confirmed on 6 validation microarray sets, with a significantly improved performance in 4 of them. Integrating interactome data thus improves classification of cancer outcome for the investigated microarray technologies and cancer types. Moreover, this strategy can be incorporated in any kernel method or non-linear version of a non-kernel method.
Norihisa Ishii
2013-10-01
Full Text Available The microflora in environmental water consists of a high density and diversity of bacterial species that form the foundation of the water ecosystem. Because the majority of these species cannot be cultured in vitro, a different approach is needed to identify prokaryotes in environmental water. A novel DNA microarray was developed as a simplified detection protocol. Multiple DNA probes were designed against each of the 97,927 sequences in the DNA Data Bank of Japan and mounted on a glass chip in duplicate. Evaluation of the microarray was performed using the DNA extracted from one liter of environmental water samples collected from seven sites in Japan. The extracted DNA was uniformly amplified using whole genome amplification (WGA, labeled with Cy3-conjugated 16S rRNA specific primers and hybridized to the microarray. The microarray successfully identified soil bacteria and environment-specific bacteria clusters. The DNA microarray described herein can be a useful tool in evaluating the diversity of prokaryotes and assessing environmental changes such as global warming.
An efficient algorithm for the stochastic simulation of the hybridization of DNA to microarrays
Laurenzi Ian J
2009-12-01
Full Text Available Abstract Background Although oligonucleotide microarray technology is ubiquitous in genomic research, reproducibility and standardization of expression measurements still concern many researchers. Cross-hybridization between microarray probes and non-target ssDNA has been implicated as a primary factor in sensitivity and selectivity loss. Since hybridization is a chemical process, it may be modeled at a population-level using a combination of material balance equations and thermodynamics. However, the hybridization reaction network may be exceptionally large for commercial arrays, which often possess at least one reporter per transcript. Quantification of the kinetics and equilibrium of exceptionally large chemical systems of this type is numerically infeasible with customary approaches. Results In this paper, we present a robust and computationally efficient algorithm for the simulation of hybridization processes underlying microarray assays. Our method may be utilized to identify the extent to which nucleic acid targets (e.g. cDNA) will cross-hybridize with probes, and by extension, characterize probe robustness, using the information specified by MAGE-TAB. Using this algorithm, we characterize cross-hybridization in a modified commercial microarray assay. Conclusions By integrating stochastic simulation with thermodynamic prediction tools for DNA hybridization, one may robustly and rapidly characterize the selectivity of a proposed microarray design at the probe and "system" levels. Our code is available at http://www.laurenzi.net.
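A minimal Gillespie-style stochastic simulation of a single reversible hybridization reaction T + P <-> TP gives the flavor of the population-level approach; the paper's algorithm handles vastly larger reaction networks, and the rate constants and copy numbers below are arbitrary illustrative values:

```python
import random

def gillespie_hybridization(n_target, n_probe, kf, kr, t_end, seed=0):
    """Minimal Gillespie SSA for one reversible reaction T + P <-> TP."""
    rng = random.Random(seed)
    t, T, P, TP = 0.0, n_target, n_probe, 0
    while t < t_end:
        a_fwd = kf * T * P                  # propensity of hybridization
        a_rev = kr * TP                     # propensity of dissociation
        a_total = a_fwd + a_rev
        if a_total == 0.0:
            break                           # no reaction can fire
        t += rng.expovariate(a_total)       # exponential waiting time
        if rng.random() < a_fwd / a_total:  # choose which reaction fires
            T, P, TP = T - 1, P - 1, TP + 1
        else:
            T, P, TP = T + 1, P + 1, TP - 1
    return T, P, TP

T, P, TP = gillespie_hybridization(100, 100, kf=0.01, kr=0.1, t_end=50.0)
# Mass conservation: T + TP and P + TP keep their initial values
```

Extending this sketch to a full array means one propensity per probe-target pair, which is exactly the combinatorial explosion that motivates the paper's specialized algorithm.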
Generation of a non-small cell lung cancer transcriptome microarray
Johnston Patrick G
2008-05-01
Full Text Available Abstract Background Non-small cell lung cancer (NSCLC) is the leading cause of cancer mortality worldwide. At present no reliable biomarkers are available to guide the management of this condition. Microarray technology may allow appropriate biomarkers to be identified, but present platforms lack disease focus and are thus likely to miss potentially vital information contained in patient tissue samples. Methods A combination of large-scale in-house sequencing, gene expression profiling and public sequence and gene expression data mining were used to characterise the transcriptome of NSCLC and the data used to generate a disease-focused microarray – the Lung Cancer DSA research tool. Results Built on the Affymetrix GeneChip platform, the Lung Cancer DSA research tool allows for interrogation of ~60,000 transcripts relevant to lung cancer, tens of thousands of which are unavailable on leading commercial microarrays. Conclusion We have developed the first high-density disease-specific transcriptome microarray. We present the array design process and the results of experiments carried out to demonstrate the array's utility. This approach serves as a template for the development of other disease transcriptome microarrays, including those for non-neoplastic diseases.
MIGS-GPU: Microarray Image Gridding and Segmentation on the GPU.
Katsigiannis, Stamos; Zacharia, Eleni; Maroulis, Dimitris
2016-03-03
cDNA microarray is a powerful tool for simultaneously studying the expression level of thousands of genes. Nevertheless, the analysis of microarray images remains an arduous and challenging task due to the poor quality of the images, which often suffer from noise, artifacts, and uneven background. In this work, the MIGS-GPU (Microarray Image Gridding and Segmentation on GPU) software for gridding and segmenting microarray images is presented. MIGS-GPU's computations are performed on the graphics processing unit (GPU) by means of the CUDA architecture in order to achieve fast performance and increase the utilization of available system resources. Evaluation on both real and synthetic cDNA microarray images showed that MIGS-GPU provides better performance than state-of-the-art alternatives, while the proposed GPU implementation achieves significantly lower computational times compared to the respective CPU approaches. Consequently, MIGS-GPU can be an advantageous and useful tool for biomedical laboratories, offering a user-friendly interface that requires minimal input in order to run.
Improved statistical analysis of budding yeast TAG microarrays revealed by defined spike-in pools.
Peyser, Brian D; Irizarry, Rafael A; Tiffany, Carol W; Chen, Ou; Yuan, Daniel S; Boeke, Jef D; Spencer, Forrest A
2005-09-15
Saccharomyces cerevisiae knockout collection TAG microarrays are an emergent platform for rapid, genome-wide functional characterization of yeast genes. TAG arrays report abundance of unique oligonucleotide 'TAG' sequences incorporated into each deletion mutation of the yeast knockout collection, allowing measurement of relative strain representation across experimental conditions for all knockout mutants simultaneously. One application of TAG arrays is to perform genome-wide synthetic lethality screens, known as synthetic lethality analyzed by microarray (SLAM). We designed a fully defined spike-in pool to resemble typical SLAM experiments and performed TAG microarray hybridizations. We describe a method for analyzing two-color array data to efficiently measure the differential knockout strain representation across two experimental conditions, and use the spike-in pool to show that the sensitivity and specificity of this method exceed typical current approaches.
Detecting Outlier Microarray Arrays by Correlation and Percentage of Outliers Spots
Song Yang
2006-01-01
Full Text Available We developed a quality assurance (QA) tool, namely the microarray outlier filter (MOF), and have applied it to our microarray datasets for the identification of problematic arrays. Our approach is based on the comparison of the arrays using the correlation coefficient and the number of outlier spots generated on each array to reveal outlier arrays. For a human universal reference (HUR) dataset, which is used as a technical control in our standard hybridization procedure, 3 outlier arrays were identified out of 35 experiments. For a human blood dataset, 12 outlier arrays were identified from 185 experiments. In general, arrays from human blood samples displayed greater variation in their gene expression profiles than arrays from HUR samples. As a result, MOF identified two distinct patterns in the occurrence of outlier arrays. These results demonstrate that this methodology is a valuable QA practice for identifying questionable microarray data prior to downstream analysis.
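A simplified version of MOF's correlation criterion might look like the following; the median-correlation cutoff and the toy arrays are illustrative assumptions, not the paper's exact procedure (which also counts outlier spots per array):

```python
from statistics import median

def pearson(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def flag_outlier_arrays(arrays, threshold=0.8):
    """Flag arrays whose median correlation with the remaining arrays
    falls below `threshold` (the cutoff here is an illustrative choice)."""
    flagged = []
    for i, a in enumerate(arrays):
        cors = [pearson(a, b) for j, b in enumerate(arrays) if j != i]
        if median(cors) < threshold:
            flagged.append(i)
    return flagged

arrays = [
    [1.0, 2.0, 3.0, 4.0],
    [1.1, 2.1, 2.9, 4.2],
    [0.9, 1.8, 3.2, 3.9],
    [4.0, 1.0, 3.5, 0.5],   # expression profile unlike the others
]
# flag_outlier_arrays(arrays) == [3]
```

Using the median rather than the mean keeps one bad array from dragging down the scores of the good arrays it is compared against.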
Noma, Hisashi; Matsui, Shigeyuki
2013-05-20
The main purpose of microarray studies is screening of differentially expressed genes as candidates for further investigation. Because of limited resources in this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates the differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.
Fuzzy Logic for Elimination of Redundant Information of Microarray Data
Edmundo Bonilla Huerta; Béatrice Duval; Jin-Kao Hao
2008-01-01
Gene subset selection is essential for classification and analysis of microarray data. However, gene selection is known to be a very difficult task since gene expression data not only have high dimensionalities, but also contain redundant information and noise. To cope with these difficulties, this paper introduces a fuzzy logic based pre-processing approach composed of two main steps. First, we use fuzzy inference rules to transform the gene expression levels of a given dataset into fuzzy values. Then we apply a similarity relation to these fuzzy values to define fuzzy equivalence groups, each group containing strongly similar genes. Dimension reduction is achieved by considering for each group of similar genes a single representative based on mutual information. To assess the usefulness of this approach, extensive experiments were carried out on three well-known public datasets with a combined classification model using three statistical filters and three classifiers.
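A crisp (non-fuzzy) analogue of the redundancy-elimination idea can be sketched by greedily grouping genes whose profiles correlate strongly and keeping one representative per group; the similarity threshold and gene profiles are invented, and the paper's actual method uses fuzzy values and mutual information rather than plain correlation:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def reduce_redundancy(genes, threshold=0.95):
    """Greedy sketch: a gene joins an existing group when it correlates
    above `threshold` with that group's representative; returns the
    representatives, one per group."""
    groups = []   # list of (representative_name, representative_profile)
    for name, profile in genes.items():
        for rep_name, rep_profile in groups:
            if pearson(profile, rep_profile) >= threshold:
                break                    # redundant: covered by this group
        else:
            groups.append((name, profile))
    return [name for name, _ in groups]

genes = {
    "g1": [1.0, 2.0, 3.0, 4.0],
    "g2": [2.0, 4.0, 6.0, 8.0],   # same shape as g1 -> redundant
    "g3": [5.0, 1.0, 4.0, 2.0],   # distinct profile -> kept
}
# reduce_redundancy(genes) == ["g1", "g3"]
```

The dimension of the dataset drops from the number of genes to the number of groups, which is the pre-processing effect the paper aims for before classification.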
Asynchronous stochastic approximation with differential inclusions
David S. Leslie
2012-01-01
Full Text Available The asymptotic pseudo-trajectory approach to stochastic approximation of Benaïm, Hofbauer and Sorin is extended for asynchronous stochastic approximations with a set-valued mean field. The asynchronicity of the process is incorporated into the mean field to produce convergence results which remain similar to those of an equivalent synchronous process. In addition, this allows many of the restrictive assumptions previously associated with asynchronous stochastic approximation to be removed. The framework is extended for a coupled asynchronous stochastic approximation process with set-valued mean fields. Two-timescale arguments are used here in a similar manner to the original work in this area by Borkar. The applicability of this approach is demonstrated through learning in a Markov decision process.
Semiclassical approximations to quantum time correlation functions
Egorov, S. A.; Skinner, J. L.
1998-09-01
Over the last 40 years several ad hoc semiclassical approaches have been developed in order to obtain approximate quantum time correlation functions, using as input only the corresponding classical time correlation functions. The accuracy of these approaches has been tested for several exactly solvable gas-phase models. In this paper we test the accuracy of these approaches by comparing to an exactly solvable many-body condensed-phase model. We show that in the frequency domain the Egelstaff approach is the most accurate, especially at high frequencies, while in the time domain one of the other approaches is more accurate.
Extraction and labeling methods for microarrays using small amounts of plant tissue.
Stimpson, Alexander J; Pereira, Rhea S; Kiss, John Z; Correll, Melanie J
2009-03-01
Procedures were developed to maximize the yield of high-quality RNA from small amounts of plant biomass for microarrays. Two disruption techniques (bead milling and pestle and mortar) were compared for the yield and the quality of RNA extracted from 1-week-old Arabidopsis thaliana seedlings (approximately 0.5-30 mg total biomass). The pestle and mortar method of extraction showed enhanced RNA quality at the smaller biomass samples compared with the bead milling technique, although the quality in the bead milling could be improved with additional cooling steps. The RNA extracted from the pestle and mortar technique was further tested to determine if the small quantity of RNA (500 ng-7 microg) was appropriate for microarray analyses. A new method of low-quantity RNA labeling for microarrays (NuGEN Technologies, Inc.) was used on five 7-day-old seedlings (approximately 2.5 mg fresh weight total) of Arabidopsis that were grown in the dark and exposed to 1 h of red light or continued dark. Microarray analyses were performed on a small plant sample (five seedlings; approximately 2.5 mg) using these methods and compared with extractions performed with larger biomass samples (approximately 500 roots). Many well-known light-regulated genes between the small plant samples and the larger biomass samples overlapped in expression changes, and the relative expression levels of selected genes were confirmed with quantitative real-time polymerase chain reaction, suggesting that these methods can be used for plant experiments where the biomass is extremely limited (i.e. spaceflight studies).
Operators of Approximations and Approximate Power Set Spaces
ZHANG Xian-yong; MO Zhi-wen; SHU Lan
2004-01-01
Boundary inner and outer operators are introduced, and the union, intersection, and complement operators of approximations are redefined. The approximation operators preserve union, intersection, and complement, so rough set theory is enriched from both the operator-oriented and set-oriented views. Approximate power set spaces are defined, and it is proved that the approximation operators are epimorphisms from the power set space to the approximate power set spaces. Some basic properties of approximate power set spaces are obtained via these epimorphisms, in contrast to the power set space.
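The classical lower and upper approximation operators that the redefined operators build on can be sketched directly; the universe and equivalence partition below are toy examples:

```python
def approximations(partition, target):
    """Classical rough-set lower/upper approximations of `target` with
    respect to an equivalence partition of the universe."""
    target = set(target)
    lower, upper = set(), set()
    for block in partition:
        block = set(block)
        if block <= target:
            lower |= block      # block lies entirely inside the target
        if block & target:
            upper |= block      # block intersects the target
    return lower, upper

blocks = [{1, 2}, {3, 4}, {5, 6}]   # equivalence classes of U = {1,...,6}
low, up = approximations(blocks, {1, 2, 3})
# low == {1, 2}; up == {1, 2, 3, 4}
```

The target set {1, 2, 3} is "rough" here: it sits strictly between its lower and upper approximations because the block {3, 4} straddles its boundary, which is exactly the situation the boundary operators in the abstract describe.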
A systematic sequence of relativistic approximations.
Dyall, Kenneth G
2002-06-01
An approach to the development of a systematic sequence of relativistic approximations is reviewed. The approach depends on the atomically localized nature of relativistic effects, and is based on the normalized elimination of the small component in the matrix modified Dirac equation. Errors in the approximations are assessed relative to four-component Dirac-Hartree-Fock calculations or other reference points. Projection onto the positive energy states of the isolated atoms provides an approximation in which the energy-dependent parts of the matrices can be evaluated in separate atomic calculations and implemented in terms of two sets of contraction coefficients. The errors in this approximation are extremely small, of the order of 0.001 pm in bond lengths and tens of microhartrees in absolute energies. From this approximation it is possible to partition the atoms into relativistic and nonrelativistic groups and to treat the latter with the standard operators of nonrelativistic quantum mechanics. This partitioning is shared with the relativistic effective core potential approximation. For atoms in the second period, errors in the approximation are of the order of a few hundredths of a picometer in bond lengths and less than 1 kJ mol(-1) in dissociation energies; for atoms in the third period, errors are a few tenths of a picometer and a few kilojoule/mole, respectively. A third approximation for scalar relativistic effects replaces the relativistic two-electron integrals with the nonrelativistic integrals evaluated with the atomic Foldy-Wouthuysen coefficients as contraction coefficients. It is similar to the Douglas-Kroll-Hess approximation, and is accurate to about 0.1 pm and a few tenths of a kilojoule/mole. The integrals in all the approximations are no more complicated than the integrals in the full relativistic methods, and their derivatives are correspondingly easy to formulate and evaluate.
Approximation of free-discontinuity problems
Braides, Andrea
1998-01-01
Functionals involving both volume and surface energies have a number of applications ranging from Computer Vision to Fracture Mechanics. In order to tackle numerical and dynamical problems linked to such functionals many approximations by functionals defined on smooth functions have been proposed (using high-order singular perturbations, finite-difference or non-local energies, etc.) The purpose of this book is to present a global approach to these approximations using the theory of gamma-convergence and of special functions of bounded variation. The book is directed to PhD students and researchers in calculus of variations, interested in approximation problems with possible applications.
Approximation of Aggregate Losses Using Simulation
Mohamed A. Mohamed
2010-01-01
Full Text Available Problem statement: The modeling of aggregate losses is one of the main objectives in actuarial theory and practice, especially in the process of making important business decisions regarding various aspects of insurance contracts. Aggregate losses over a fixed time period are often modeled by mixing the distributions of loss frequency and severity, and the distribution resulting from this approach is called a compound distribution. However, in many cases, realistic probability distributions for loss frequency and severity cannot be combined mathematically to derive the compound distribution of aggregate losses. Approach: This study aimed to approximate the aggregate loss distribution using a simulation approach. In particular, the approximation of aggregate losses was based on a compound Poisson-Pareto distribution. The effects of deductible and policy limit on the individual loss as well as the aggregate losses were also investigated. Results: Based on the results, the approximation of the compound Poisson-Pareto distribution via simulation agreed with the theoretical mean and variance of each of the loss frequency, loss severity and aggregate losses. Conclusion: This study approximated the compound distribution of aggregate losses using a simulation approach. The investigation of retained losses and insurance claims allows an insured or a company to select an insurance contract that fulfills its requirements. In particular, if a company wants additional risk reduction, it can compare alternative policies by considering the worthiness of the additional expected total cost, which can be estimated via simulation.
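The compound Poisson-Pareto simulation described above can be sketched in a few lines of Python. This is an illustrative sketch, not the study's code; the function names and the parameter values (lambda = 5, alpha = 3, scale = 100) are invented for demonstration:

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw N ~ Poisson(lam) with Knuth's multiplication method
    (adequate for the small claim frequencies used here)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

def simulate_aggregate_losses(lam, alpha, scale, n_sims=20000, seed=42):
    """Monte Carlo approximation of compound Poisson-Pareto aggregate losses:
    frequency N ~ Poisson(lam), severities X_i = scale * Pareto(alpha)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        n = poisson_sample(lam, rng)
        totals.append(sum(scale * rng.paretovariate(alpha) for _ in range(n)))
    return totals

# Sanity check against theory: E[S] = lam * E[X], where for a Pareto severity
# with minimum `scale` and shape alpha > 1, E[X] = scale * alpha / (alpha - 1);
# lam=5, alpha=3, scale=100 gives E[S] = 5 * 150 = 750.
```

A deductible d and a policy limit u would simply replace each simulated severity x by min(max(x - d, 0), u) inside the inner sum, which is how the retained-loss comparison mentioned in the abstract could be explored.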
Mortazavi, Atiyeh; Moattar, Mohammad Hossein
2016-01-01
High dimensionality of microarray data sets may lead to low efficiency and overfitting. In this paper, a multiphase cooperative game theoretic feature selection approach is proposed for microarray data classification. In the first phase, due to high dimension of microarray data sets, the features are reduced using one of the two filter-based feature selection methods, namely, mutual information and Fisher ratio. In the second phase, Shapley index is used to evaluate the power of each feature. The main innovation of the proposed approach is to employ Qualitative Mutual Information (QMI) for this purpose. The idea of Qualitative Mutual Information causes the selected features to have more stability and this stability helps to deal with the problem of data imbalance and scarcity. In the third phase, a forward selection scheme is applied which uses a scoring function to weight each feature. The performance of the proposed method is compared with other popular feature selection algorithms such as Fisher ratio, minimum redundancy maximum relevance, and previous works on cooperative game based feature selection. The average classification accuracy on eleven microarray data sets shows that the proposed method improves both average accuracy and average stability compared to other approaches.
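The first-phase filter step can be illustrated for the Fisher-ratio option. This is a generic sketch under a standard two-class definition of the Fisher ratio, not the authors' implementation, and the toy expression matrix in the usage example is invented:

```python
from statistics import mean, variance

def fisher_ratio(values, labels):
    """Fisher ratio of a single feature for a two-class problem:
    squared difference of class means over the sum of within-class variances."""
    g0 = [v for v, y in zip(values, labels) if y == 0]
    g1 = [v for v, y in zip(values, labels) if y == 1]
    # small constant guards against zero variance in degenerate features
    return (mean(g0) - mean(g1)) ** 2 / (variance(g0) + variance(g1) + 1e-12)

def rank_features(X, labels, top_k):
    """Filter phase: keep the top_k columns of X ranked by Fisher ratio."""
    scores = [(fisher_ratio([row[j] for row in X], labels), j)
              for j in range(len(X[0]))]
    return [j for _, j in sorted(scores, reverse=True)[:top_k]]
```

For example, with a 6-sample, 2-feature matrix where only column 0 separates the classes, `rank_features(X, labels, 1)` returns `[0]`. The mutual-information filter mentioned in the abstract would follow the same pattern with a different scoring function.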
Albrecht, Valérie; Chevallier, Anne; Magnone, Virginie; Barbry, Pascal; Vandenbos, Fanny; Bongain, André; Lefebvre, Jean-Claude; Giordanengo, Valérie
2006-11-01
Persistent cervical high-risk human papillomavirus (HPV) infection is correlated with an increased risk of developing a high-grade cervical intraepithelial lesion. A two-step method was developed for detection and genotyping of high-risk HPV. DNA was firstly amplified by asymmetrical PCR in the presence of Cy3-labelled primers and dUTP. Labelled DNA was then genotyped using DNA microarray hybridization. The current study evaluated the technical efficacy of laboratory-designed HPV DNA microarrays for high-risk HPV genotyping on 57 malignant and non-malignant cervical smears. The approach was evaluated for a broad range of cytological samples: high-grade squamous intraepithelial lesions (HSIL), low-grade squamous intraepithelial lesions (LSIL) and atypical squamous cells of high-grade (ASC-H). High-risk HPV was also detected in six atypical squamous cells of undetermined significance (ASC-US) samples; among them only one cervical specimen was found uninfected, associated with no histological lesion. The HPV oligonucleotide DNA microarray genotyping detected 36 infections with a single high-risk HPV type and 5 multiple infections with several high-risk types. Taken together, these results demonstrate the sensitivity and specificity of the HPV DNA microarray approach. This approach could improve clinical management of patients with cervical cytological abnormalities.
Biocompatible Hydrogels for Microarray Cell Printing and Encapsulation
Akshata Datar
2015-10-01
Full Text Available Conventional drug screening processes are a time-consuming and expensive endeavor, but highly rewarding when they are successful. To identify promising lead compounds, millions of compounds are traditionally screened against therapeutic targets on human cells grown on the surface of 96-wells. These two-dimensional (2D) cell monolayers are physiologically irrelevant, thus often providing false-positive or false-negative results when compared to cells grown in three-dimensional (3D) structures such as hydrogel droplets. However, 3D cell culture systems are not easily amenable to high-throughput screening (HTS), being inherently low throughput and requiring relatively large volumes for cell-based assays. In addition, it is difficult to control cellular microenvironments and hard to obtain reliable cell images due to focus position and transparency issues. To overcome these problems, miniaturized 3D cell cultures in hydrogels were developed via cell printing techniques, where cell spots in hydrogels can be arrayed on the surface of glass slides or plastic chips by microarray spotters and cultured in growth media to form cell-encapsulated 3D droplets for various cell-based assays. These approaches can dramatically reduce assay volume, provide accurate control over cellular microenvironments, and allow us to obtain clear 3D cell images for high-content imaging (HCI). In this review, several hydrogels that are compatible with microarray printing robots are discussed for miniaturized 3D cell cultures.
Single-Round Patterned DNA Library Microarray Aptamer Lead Identification
Jennifer A. Martin
2015-01-01
Full Text Available A method for identifying an aptamer in a single round was developed using custom DNA microarrays containing computationally derived patterned libraries incorporating no information on the sequences of previously reported thrombin binding aptamers. The DNA library was specifically designed to increase the probability of binding by enhancing structural complexity in a sequence-space confined environment, much like generating lead compounds in a combinatorial drug screening library. The sequence demonstrating the highest fluorescence intensity upon target addition was confirmed to bind the target molecule thrombin with specificity by surface plasmon resonance, and a novel imino proton NMR/2D NOESY combination was used to screen the structure for G-quartet formation. We propose that the lack of G-quartet structure in microarray-derived aptamers may highlight differences in binding mechanisms between surface-immobilized and solution based strategies. This proof-of-principle study highlights the use of a computational driven methodology to create a DNA library rather than a SELEX based approach. This work is beneficial to the biosensor field where aptamers selected by solution based evolution have proven challenging to retain binding function when immobilized on a surface.
Protein microarrays using liquid phase fractionation of cell lysates.
Yan, Fang; Sreekumar, Arun; Laxman, Bharathi; Chinnaiyan, Arul M; Lubman, David M; Barder, Timothy J
2003-07-01
We describe an approach in which protein microarrays are produced using a two-dimensional (2-D) liquid phase fractionation of cell lysates. The method involves a pI-based fractionation using chromatofocusing in the first dimension followed by nonporous reversed-phase high-performance liquid chromatography (HPLC) of each pI fraction in the second dimension. This allows fractionation of cellular proteins in the liquid phase that could then be arrayed on nitrocellulose slides and used to study humoral response in cancer. Protein microarrays have been used to identify potential serum biomarkers for prostate cancer. It is shown that specific fractions are immunoreactive against prostate cancer serum but not against serum from healthy individuals. These proteins could serve as sero-diagnostic markers for prostate cancer. Importantly, this method allows for use of post-translationally modified proteins as baits for detection of humoral response. Proteins eliciting an immune response are identified using the molecular mass and peptide sequence data obtained using mass spectrometric analysis of the liquid fractions. The fractionation of proteins in the liquid phase make this method amenable to automation.
Is there a niche for DNA microarrays in molecular diagnostics?
Jordan, Bertrand R
2010-10-01
DNA microarrays, 15 years after their appearance, have achieved presence in a number of medical settings. Several tests have been introduced and have obtained regulatory approval, mostly in the fields of bacterial identification, mutation detection and the global assessment of genome alterations, a particularly successful case being the whole-genome assay of copy-number variations. Gene-expression applications have been less successful because of technical issues (e.g., reproducibility, platform-to-platform consistency and statistical issues in data analysis) and difficulties in demonstrating the clinical utility of expression signatures. In their different applications, DNA arrays have faced competition from PCR-based assays for low and intermediate multiplicity. Now they have a new competitor, new-generation sequencing, that can provide a wealth of direct sequence information, or digital gene-expression data, at a constantly decreasing cost. In this article we evaluate the strengths and weaknesses of the DNA microarray approach to diagnostics, and highlight the fields in which it is most likely to achieve a durable presence.
Cluster stability scores for microarray data in cancer studies
Ghosh Debashis
2003-09-01
Full Text Available Abstract Background A potential benefit of profiling of tissue samples using microarrays is the generation of molecular fingerprints that will define subtypes of disease. Hierarchical clustering has been the primary analytical tool used to define disease subtypes from microarray experiments in cancer settings. Assessing cluster reliability poses a major complication in analyzing output from clustering procedures. While most work has focused on estimating the number of clusters in a dataset, the question of stability of individual-level clusters has not been addressed. Results We address this problem by developing cluster stability scores using subsampling techniques. These scores exploit the redundancy in biologically discriminatory information on the chip. Our approach is generic and can be used with any clustering method. We propose procedures for calculating cluster stability scores for situations involving both known and unknown numbers of clusters. We also develop cluster-size adjusted stability scores. The method is illustrated by application to data from three cancer studies: one involving childhood cancers, the second involving B-cell lymphoma, and the third a malignant melanoma study. Availability Code implementing the proposed analytic method can be obtained at the second author's website.
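The subsampling idea behind such stability scores can be sketched generically: repeatedly cluster random subsamples and record, for each pair of samples, how often they land in the same cluster. The toy clustering rule, subsample fraction, and data below are illustrative assumptions, not the authors' method:

```python
import random
from itertools import combinations

def threshold_cluster(values):
    """Toy two-cluster rule for 1-D data: split at the midpoint of the range.
    Any clustering method (e.g. hierarchical clustering) could be plugged in."""
    cut = (min(values) + max(values)) / 2.0
    return [0 if v <= cut else 1 for v in values]

def stability_scores(data, cluster_fn, n_sub=200, frac=0.8, seed=0):
    """Pairwise co-membership stability: for each pair of samples, the
    fraction of subsamples containing both in which they were assigned
    to the same cluster."""
    rng = random.Random(seed)
    n = len(data)
    together = {pair: 0 for pair in combinations(range(n), 2)}
    seen = dict(together)
    for _ in range(n_sub):
        idx = rng.sample(range(n), int(frac * n))
        labels = dict(zip(idx, cluster_fn([data[i] for i in idx])))
        for i, j in combinations(sorted(idx), 2):
            seen[(i, j)] += 1
            together[(i, j)] += labels[i] == labels[j]
    return {pair: together[pair] / seen[pair]
            for pair in together if seen[pair] > 0}
```

On two well-separated groups of samples, within-group pairs score near 1 and cross-group pairs near 0, which is the individual-level reliability signal the abstract describes.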
Viral diagnosis in Indian livestock using customized microarray chips.
Yadav, Brijesh S; Pokhriyal, Mayank; Ratta, Barkha; Kumar, Ajay; Saxena, Meeta; Sharma, Bhaskar
2015-01-01
Viral diagnosis in Indian livestock using customized microarray chips has gained momentum in recent years, and it is now possible to design customized microarray chips for the viruses infecting livestock in India. Customized microarray chips identified Bovine herpes virus-1 (BHV-1), Canine Adeno Virus-1 (CAV-1), and Canine Parvo Virus-2 (CPV-2) in clinical samples. Probes identified by the microarray were further confirmed using RT-PCR in all clinical and known samples. Therefore, microarray chips can be applied during viral disease outbreaks in Indian livestock where conventional methods are unsuitable, although such customized application requires a detailed cost-efficiency calculation.
International Conference Approximation Theory XV
Schumaker, Larry
2017-01-01
These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...
Immobilization Techniques for Microarray: Challenges and Applications
Satish Balasaheb Nimse
2014-11-01
Full Text Available The highly programmable positioning of molecules (biomolecules, nanoparticles, nanobeads, nanocomposite materials) on surfaces has potential applications in the fields of biosensors, biomolecular electronics, and nanodevices. However, conventional techniques, including self-assembled monolayers, fail to position the molecules on the nanometer scale to produce highly organized monolayers on the surface. The present article elaborates different techniques for the immobilization of biomolecules on the surface to produce microarrays and their diagnostic applications. The advantages and drawbacks of various methods are compared. This article also sheds light on the applications of the different technologies for the detection and discrimination of viral/bacterial genotypes and the detection of biomarkers. A brief survey with 115 references covering the last 10 years on the biological applications of microarrays in various fields is also provided.
Protein microarrays: applications and future challenges.
Stoll, Dieter; Templin, Markus F; Bachmann, Jutta; Joos, Thomas O
2005-03-01
Within the last decade protein microarray technology has been successfully applied to the simultaneous identification, quantification and functional analysis of proteins in basic and applied proteome research. These miniaturized and parallelized assay systems have the potential to replace state-of-the-art singleplex analysis systems. However, prior to their general adoption in routine, high-throughput settings, they must demonstrate robustness, sensitivity, automation and appropriate pricing. In this review, the current state of protein microarray technology is summarized. Recent applications for the simultaneous determination of a variety of parameters using only minute amounts of sample are described, and future challenges of this cutting-edge technology are discussed.
Plasmonically amplified fluorescence bioassay with microarray format
Gogalic, S.; Hageneder, S.; Ctortecka, C.; Bauch, M.; Khan, I.; Preininger, Claudia; Sauer, U.; Dostalek, J.
2015-05-01
Plasmonic amplification of the fluorescence signal in bioassays with a microarray detection format is reported. A crossed relief diffraction grating was designed to couple an excitation laser beam to surface plasmons at a wavelength overlapping with the absorption and emission bands of the fluorophore Dy647 that was used as a label. The surface of the periodically corrugated sensor chip was coated with a surface plasmon-supporting gold layer and a thin SU8 polymer film carrying epoxy groups. These groups were employed for the covalent immobilization of capture antibodies at arrays of spots. The plasmonic amplification of the fluorescence signal on the developed microarray chip was tested using an interleukin 8 sandwich immunoassay. The readout was performed ex situ after drying the chip, using a commercial scanner with a high-numerical-aperture collecting lens. The results reveal a five-fold enhancement of the fluorescence signal compared to a regular glass chip.
PMD: A Resource for Archiving and Analyzing Protein Microarray data.
Xu, Zhaowei; Huang, Likun; Zhang, Hainan; Li, Yang; Guo, Shujuan; Wang, Nan; Wang, Shi-Hua; Chen, Ziqing; Wang, Jingfang; Tao, Sheng-Ce
2016-01-27
Protein microarray is a powerful technology for both basic research and clinical study. However, because there is no database specifically tailored for protein microarrays, the majority of the valuable original protein microarray data is still not publicly accessible. To address this issue, we constructed Protein Microarray Database (PMD), which is specifically designed for archiving and analyzing protein microarray data. In PMD, users can easily browse and search the entire database by experimental name, protein microarray type, and sample information. Additionally, PMD integrates several data analysis tools and provides an automated data analysis pipeline for users. With just one click, users can obtain a comprehensive analysis report for their protein microarray data. The report includes preliminary data analysis, such as data normalization, candidate identification, and an in-depth bioinformatics analysis of the candidates, which includes functional annotation, pathway analysis, and protein-protein interaction network analysis. PMD is now freely available at www.proteinmicroarray.cn.
Undetected sex chromosome aneuploidy by chromosomal microarray.
Markus-Bustani, Keren; Yaron, Yuval; Goldstein, Myriam; Orr-Urtreger, Avi; Ben-Shachar, Shay
2012-11-01
We report on a case of a female fetus found to be mosaic for Turner syndrome (45,X) and trisomy X (47,XXX). Chromosomal microarray analysis (CMA) failed to detect the aneuploidy because of a normal average dosage of the X chromosome. This case represents an unusual instance in which CMA may not detect chromosomal aberrations. Such a possibility should be taken into consideration in similar cases where CMA is used in a clinical setting.
Hybridization thermodynamics of NimbleGen Microarrays
Posekany Alexandra
2010-01-01
Full Text Available Abstract Background While microarrays are the predominant method for gene expression profiling, probe signal variation is still an area of active research. Probe signal is sequence dependent and affected by probe-target binding strength and the competing formation of probe-probe dimers and secondary structures in probes and targets. Results We demonstrate the benefits of an improved model for microarray hybridization and assess the relative contributions of the probe-target binding strength and the different competing structures. Remarkably, specific and unspecific hybridization were apparently driven by different energetic contributions: For unspecific hybridization, the melting temperature Tm was the best predictor of signal variation. For specific hybridization, however, the effective interaction energy that fully considered competing structures was twice as powerful a predictor of probe signal variation. We show that this was largely due to the effects of secondary structures in the probe and target molecules. The predictive power of the strength of these intramolecular structures was already comparable to that of the melting temperature or the free energy of the probe-target duplex. Conclusions This analysis illustrates the importance of considering both the effects of probe-target binding strength and the different competing structures. For specific hybridization, the secondary structures of probe and target molecules turn out to be at least as important as the probe-target binding strength for an understanding of the observed microarray signal intensities. Besides their relevance for the design of new arrays, our results demonstrate the value of improving thermodynamic models for the read-out and interpretation of microarray signals.
Weighted analysis of general microarray experiments
Kristiansson Erik
2007-10-01
Full Text Available Abstract Background In DNA microarray experiments, measurements from different biological samples are often assumed to be independent and to have identical variance. For many datasets these assumptions have been shown to be invalid and typically lead to too optimistic p-values. A method called WAME has been proposed where a variance is estimated for each sample and a covariance is estimated for each pair of samples. The current version of WAME is, however, limited to experiments with paired design, e.g. two-channel microarrays. Results The WAME procedure is extended to general microarray experiments, making it capable of handling both one- and two-channel datasets. Two public one-channel datasets are analysed and WAME detects both unequal variances and correlations. WAME is compared to other common methods: fold-change ranking, ordinary linear model with t-tests, LIMMA and weighted LIMMA. The p-value distributions are shown to differ greatly between the examined methods. In a resampling-based simulation study, the p-values generated by WAME are found to be substantially more correct than the alternatives when a relatively small proportion of the genes is regulated. WAME is also shown to have higher power than the other methods. WAME is available as an R-package. Conclusion The WAME procedure is generalized and the limitation to paired-design microarray datasets is removed. The examined other methods produce invalid p-values in many cases, while WAME is shown to produce essentially valid p-values when a relatively small proportion of genes is regulated. WAME is also shown to have higher power than the examined alternative methods.
Microarray for serotyping of Bartonella species
Raoult Didier
2007-06-01
Full Text Available Abstract Background Bacteria of the genus Bartonella are responsible for a large variety of human and animal diseases. Serological typing of Bartonella is a method that can be used for differentiation and identification of Bartonella subspecies. Results We have developed a novel multiple antigenic microarray to serotype Bartonella strains and to select poly- and monoclonal antibodies. It was validated using mouse polyclonal antibodies against 29 Bartonella strains. We then tested the microarray for serotyping of Bartonella strains and defining the profile of monoclonal antibodies. Bartonella strains gave a strong positive signal and all were correctly identified. Screening of monoclonal antibodies towards the GroEL protein of B. clarridgeiae identified 3 groups of antibodies, which were observed with variable affinities against Bartonella strains. Conclusion We demonstrated that a microarray of spotted bacteria can be a practical tool for serotyping of unidentified strains or species (and also for affinity determination by polyclonal and monoclonal antibodies). This could be used in research and for identification of bacterial strains.
Chicken sperm transcriptome profiling by microarray analysis.
Singh, R P; Shafeeque, C M; Sharma, S K; Singh, R; Mohan, J; Sastry, K V H; Saxena, V K; Azeez, P A
2016-03-01
It has been confirmed that mammalian sperm contain thousands of functional RNAs, some of which have vital roles in fertilization and early embryonic development. Therefore, we attempted to characterize the transcriptome of the sperm of fertile chickens using microarray analysis. Spermatozoal RNA was pooled from 10 fertile males and used for RNA preparation. Prior to performing the microarray, RNA quality was assessed using a bioanalyzer, and gDNA and somatic cell RNA contamination was assessed by CD4 and PTPRC gene amplification. The chicken sperm transcriptome was cross-examined by analysing sperm and testes RNA on a 4 × 44K chicken array, and results were verified by RT-PCR. Microarray analysis identified 21,639 predominantly nuclear-encoded transcripts in chicken sperm. The majority (66.55%) of the sperm transcripts were shared with the testes, while surprisingly, 33.45% of transcripts were detected (raw signal intensity greater than 50) only in the sperm and not in the testes. The greatest proportion of up-regulated transcripts was associated with signal transduction (63.20%), followed by embryonic development (56.76%) and cell structure (56.25%). Of the 20 most abundant transcripts, 18 remain uncharacterized, whereas the least abundant genes were mostly associated with the ribosome. These findings lay a foundation for more detailed investigations of sperm RNAs in chickens to identify sperm-based biomarkers for fertility.
Nanoparticle probes and mid-infrared chemical imaging for DNA microarray detection.
Mossoba, Magdi M; Al-Khaldi, Sufian F; Schoen, Brianna; Yakes, Betsy Jean
2010-11-01
To date most mid-infrared spectroscopic studies have been limited, due to lack of sensitivity, to the structural characterization of a single oligonucleotide probe immobilized over the entire surface of a gold-coated slide or other infrared substrate. By contrast, widely used and commercially available glass slides and a microarray spotter that prints approximately 120-μm-diameter DNA spots were employed in the present work. To our knowledge, mid-infrared chemical imaging (IRCI) in the external reflection mode has been applied in the present study for the first time to the detection of nanostructure-based DNA microarrays spotted on glass slides. Alkyl amine-modified oligonucleotide probes were immobilized on glass slides that had been prefunctionalized with succinimidyl ester groups. This molecular fluorophore-free method entailed the binding of gold-nanoparticle-streptavidin conjugates to biotinylated DNA targets. Hybridization was visualized by the silver enhancement of gold nanoparticles. The adlayer of silver, selectively bound only to hybridized spots in a microarray, formed the external reflective infrared substrate that was necessary for the detection of DNA hybridization by IRCI in the present proof-of-concept study. IRCI made it possible to discriminate between diffuse and specular external reflection modes. The promising qualitative results are presented herein, and the implications for quantitative determination of DNA microarrays are discussed.
Detection and correction of probe-level artefacts on microarrays
Petri Tobias
2012-05-01
Full Text Available Abstract Background A recent large-scale analysis of Gene Expression Omnibus (GEO) data found frequent evidence for spatial defects in a substantial fraction of Affymetrix microarrays in the GEO. Nevertheless, in contrast to quality assessment, artefact detection is not widely used in standard gene expression analysis pipelines. Furthermore, although approaches have been proposed to detect diverse types of spatial noise on arrays, the correction of these artefacts is mostly left to either summarization methods or the corresponding arrays are completely discarded. Results We show that state-of-the-art robust summarization procedures are vulnerable to artefacts on arrays and cannot appropriately correct for these. To address this problem, we present a simple approach to detect artefacts with high recall and precision, which we further improve by taking into account the spatial layout of arrays. Finally, we propose two correction methods for these artefacts that either substitute values of defective probes using probeset information or filter corrupted probes. We show that our approach can identify and correct defective probe measurements appropriately and outperforms existing tools. Conclusions While summarization is insufficient to correct for defective probes, this problem can be addressed in a straightforward way by the methods we present for identification and correction of defective probes. As these methods output CEL files with corrected probe values that serve as input to standard normalization and summarization procedures, they can be easily integrated into existing microarray analysis pipelines as an additional pre-processing step. An R package is freely available from http://www.bio.ifi.lmu.de/artefact-correction.
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-07
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for approximation in the hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
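The basic building block of such rank-structured methods, the best low-rank approximation of a matrix (in tensor settings, of a matricization/unfolding), can be sketched with plain power iteration. This is an illustrative toy for the rank-1 case, not one of the algorithms presented in the talk:

```python
import math

def matvec(A, x):
    """Multiply a list-of-lists matrix A by a vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def rank1_approx(A, iters=100):
    """Best rank-1 approximation sigma * u * v^T via power iteration on A^T A.
    Assumes a small dense matrix with a nonzero dominant singular value."""
    At = transpose(A)
    v = [1.0] * len(A[0])
    for _ in range(iters):
        w = matvec(At, matvec(A, v))            # one power-iteration step
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]               # dominant right singular vector
    Av = matvec(A, v)
    sigma = math.sqrt(sum(x * x for x in Av))   # dominant singular value
    u = [x / sigma for x in Av]                 # dominant left singular vector
    return [[sigma * ui * vj for vj in v] for ui in u]
```

For a matrix that is exactly rank 1, the reconstruction reproduces the matrix; truncating at higher ranks (and applying this recursively to unfoldings) is the idea behind hierarchical tensor approximations.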
Ferhatosmanoglu Nilgun
2009-09-01
Full Text Available Abstract Background Dual-channel microarray experiments are commonly employed for inference of differential gene expressions across varying organisms and experimental conditions. The design of dual-channel microarray experiments that can help minimize the errors in the resulting inferences has recently received increasing attention. However, a general and scalable search tool and a corresponding database of optimal designs were still missing. Description An efficient and scalable search method for finding near-optimal dual-channel microarray designs, based on a greedy hill-climbing optimization strategy, has been developed. It is empirically shown that this method can successfully and efficiently find near-optimal designs. Additionally, an improved interwoven loop design construction algorithm has been developed to provide an easily computable general class of near-optimal designs. Finally, in order to make the best results readily available to biologists, a continuously evolving catalog of near-optimal designs is provided. Conclusion A new search algorithm and database for near-optimal microarray designs have been developed. The search tool and the database are accessible via the World Wide Web at http://db.cse.ohio-state.edu/MicroarrayDesigner. Source code and binary distributions are available for academic use upon request.
Statistical Methods for Comparative Phenomics Using High-Throughput Phenotype Microarrays*
Sturino, Joseph; Zorych, Ivan; Mallick, Bani; Pokusaeva, Karina; Chang, Ying-Ying; Carroll, Raymond J.; Bliznuyk, Nikolay
2010-01-01
We propose statistical methods for comparing phenomics data generated by the Biolog Phenotype Microarray (PM) platform for high-throughput phenotyping. Instead of the routinely used visual inspection of data with no sound inferential basis, we develop two approaches. The first approach is based on quantifying the distance between mean or median curves from two treatments and then applying a permutation test; we also consider a permutation test applied to areas under mean curves. The second ap...
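The first approach can be sketched as follows: compute the squared L2 distance between the two group mean curves, then compare it against the distances obtained after randomly permuting group labels. The distance and permutation scheme here are illustrative choices, not the paper's exact test:

```python
import numpy as np

def curve_permutation_test(group_a, group_b, n_perm=2000, seed=0):
    """Permutation test on the squared L2 distance between mean curves
    of two treatment groups (rows = replicates, columns = time points).
    Sketch of the mean-curve-distance approach described above."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([group_a, group_b])
    n_a = len(group_a)

    def stat(data):
        return np.sum((data[:n_a].mean(axis=0) - data[n_a:].mean(axis=0)) ** 2)

    observed = stat(pooled)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        if stat(pooled[perm]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # permutation p-value
```

Two growth-curve groups shifted by a constant offset yield a very small p-value.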
Nonlinear Approximation Using Gaussian Kernels
Hangelbroek, Thomas
2009-01-01
It is well known that nonlinear approximation has an advantage over linear schemes in the sense that it provides comparable approximation rates to those of the linear schemes, but for a larger class of approximands. This was established for spline approximations and for wavelet approximations, and more recently for homogeneous radial basis function (surface spline) approximations. However, no such results are known for the Gaussian function. The crux of the difficulty lies in the necessity to vary the tension parameter in the Gaussian function spatially, according to local information about the approximand: error analysis of Gaussian approximation schemes with varying tension is, by and large, an elusive target for approximators. In this paper we introduce and analyze a new algorithm for approximating functions using translates of Gaussian functions with varying tension parameters. Our scheme is sophisticated in that it employs Gaussians whose tensions vary even locally, and it resolves local ...
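A basic version of approximation by translates of Gaussians with per-centre tension parameters can be written as a regularized least-squares fit; the width-selection rule, which is the hard part analysed in the paper, is not reproduced here:

```python
import numpy as np

def gaussian_rbf_fit(x, y, centers, widths, ridge=1e-8):
    """Least-squares fit of y ~ sum_k c_k exp(-((x - t_k)/s_k)^2),
    i.e. translates of Gaussians whose tension (width) can vary per
    centre. Sketch only: the widths are taken as given."""
    Phi = np.exp(-((x[:, None] - centers[None, :]) / widths[None, :]) ** 2)
    # small ridge term stabilises the nearly singular Gram matrix
    coef = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(len(centers)),
                           Phi.T @ y)

    def f(xq):
        B = np.exp(-((xq[:, None] - centers[None, :]) / widths[None, :]) ** 2)
        return B @ coef

    return f
```

With centres overlapping well, a smooth target such as a sine wave is reproduced closely.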
Forms of Approximate Radiation Transport
Brunner, G
2002-01-01
Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.
Approximation by Multivariate Singular Integrals
Anastassiou, George A
2011-01-01
Approximation by Multivariate Singular Integrals is the first monograph to illustrate the approximation of multivariate singular integrals to the identity-unit operator. The basic approximation properties of the general multivariate singular integral operators are presented quantitatively; in particular, special cases such as the multivariate Picard, Gauss-Weierstrass, Poisson-Cauchy and trigonometric singular integral operators are examined thoroughly. This book studies the rate of convergence of these operators to the unit operator as well as the related simultaneous approximation. The last cha
Approximations of fractional Brownian motion
Li, Yuqiang; 10.3150/10-BEJ319
2012-01-01
Approximations of fractional Brownian motion using Poisson processes whose parameter sets have the same dimensions as the approximated processes have been studied in the literature. In this paper, a special approximation to the one-parameter fractional Brownian motion is constructed using a two-parameter Poisson process. The proof involves the tightness and identification of finite-dimensional distributions.
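For context, the one-parameter fBm being approximated can itself be simulated exactly from its covariance function; the sketch below uses the standard Cholesky method, which is a different construction from the paper's Poisson-process approximation:

```python
import numpy as np

def fbm_sample(n, hurst, seed=0):
    """Sample a fractional Brownian motion path on t = 1/n, ..., 1 by
    Cholesky factorisation of its covariance
    Cov(B_s, B_t) = (s^2H + t^2H - |s - t|^2H) / 2.
    Standard exact simulation, shown for context only."""
    t = np.arange(1, n + 1) / n
    cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal(n)
```

Since Var(B_1) = 1 for any Hurst index, the endpoint variance over many sampled paths should be close to 1.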
Approximation by planar elastic curves
Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge
2016-01-01
We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.
Whole genome microarray analysis, from neonatal blood cards
Hogan Michael E
2009-07-01
Background: Neonatal blood, obtained from a heel stick and stored dry on paper cards, has been the standard for birth-defects screening for 50 years. Such dried blood samples are used primarily for analysis of small-molecule analytes. More recently, the DNA complement of such dried blood cards has been used for targeted genetic testing, such as for single nucleotide polymorphisms in cystic fibrosis. Expansion of such testing to include polygenic traits, and perhaps whole genome scanning, has been discussed as a formal possibility. However, until now the amount of DNA that might be obtained from such dried blood cards has been limiting, due to inefficient DNA recovery technology. Results: A new technology is employed for efficient DNA release from a standard neonatal blood card. Using standard Guthrie cards, stored an average of ten years post-collection, about 1/40th of the air-dried neonatal blood specimen (two 3 mm punches) was processed to obtain DNA that was sufficient in mass and quality for direct use in microarray-based whole genome scanning. Using that same DNA release technology, it is also shown that approximately 1/250th of the original purified DNA (about 1 ng) could be subjected to whole genome amplification, thus yielding an additional microgram of amplified DNA product. That amplified DNA product was then used in microarray analysis and yielded statistical concordance of 99% or greater to the primary, unamplified DNA sample. Conclusion: Together, these data suggest that DNA obtained from less than 10% of a standard neonatal blood specimen, stored dry for several years on a Guthrie card, can support a program of genome-wide neonatal genetic testing.
Rowland, Jessica M.; Grau, Frederic R.; McIntosh, Michael T.
2017-01-01
Several RT-PCR and genome sequencing strategies exist for the resolution of Foot-and-Mouth Disease virus (FMDV). While these approaches are relatively straightforward, they can be vulnerable to failure due to the unpredictable nature of FMDV genome sequence variations. Sequence independent single primer amplification (SISPA) followed by genotyping microarray offers an attractive unbiased approach to FMDV characterization. Here we describe a custom FMDV microarray and a companion feature and template-assisted assembler software (FAT-assembler) capable of resolving virus genome sequence using a moderate number of conserved microarray features. The results demonstrate that this approach may be used to rapidly characterize naturally occurring FMDV as well as an engineered chimeric strain of FMDV. The FAT-assembler, while applied to resolving FMDV genomes, represents a new bioinformatics approach that should be broadly applicable to interpreting microarray genotyping data for other viruses or target organisms. PMID:28045937
Boopathi, Pon Arunachalam; Subudhi, Amit Kumar; Middha, Sheetal; Acharya, Jyoti; Mugasimangalam, Raja Chinnadurai; Kochar, Sanjay Kumar; Kochar, Dhanpat Kumar; Das, Ashis
2016-12-01
High-density oligonucleotide microarrays have been used on Plasmodium vivax field isolates to estimate whole genome expression. However, no microarray platform has been experimentally optimized for studying the transcriptome of field isolates. In the present study, we adopted both bioinformatics and experimental testing approaches to select the best optimized probes suitable for detecting parasite transcripts from field samples and included them in designing a custom 15K P. vivax microarray. This microarray has long oligonucleotide probes (60mer) that were synthesized in situ onto glass slides using Agilent SurePrint technology and has been developed into an 8X15K format (8 identical arrays on a single slide). Probes in this array were experimentally validated and represent 4180 P. vivax genes in sense orientation, of which 1219 genes also have probes in antisense orientation. Validation of the 15K array using field samples (n=14) showed detection of 99% of parasite transcripts from any of the samples. Correlation analysis between duplicate probes (n=85) present on the arrays showed near-perfect correlation (r²=0.98), indicating reproducibility. Multiple probes representing the same gene exhibited similar expression patterns across the samples (positive correlation, r≥0.6). Comparison of the hybridization data with previous studies and quantitative real-time PCR experiments was performed to validate the microarray. This array is unique in its design, and the results indicate that it is sensitive and reproducible. Hence, this microarray could be a valuable functional genomics tool for generating reliable expression data from P. vivax field isolates.
Woodward Martin J
2008-01-01
Background: Microarray-based comparative genomic hybridisation (CGH) experiments have been used to study numerous biological problems, including understanding genome plasticity in pathogenic bacteria. Typically such experiments produce large data sets that are difficult for biologists to handle. Although there are some programmes available for interpretation of bacterial transcriptomics data and of CGH microarray data for looking at genetic stability in oncogenes, there are none specifically designed to understand the mosaic nature of bacterial genomes. Consequently a bottleneck still persists in accurate processing and mathematical analysis of these data. To address this shortfall we have produced a simple and robust CGH microarray data analysis process, which may be automated in the future, to understand bacterial genomic diversity. Results: The process involves five steps: cleaning, normalisation, estimating gene presence and absence or divergence, validation, and analysis of data from test strains against three reference strains simultaneously. Each stage of the process is described, and we have compared a number of methods available for characterising bacterial genomic diversity and for calculating the cut-off between gene presence and absence or divergence, showing that a simple dynamic approach using a kernel density estimator performed better than both established methods and a more sophisticated mixture modelling technique. We have also shown that current methods commonly used for CGH microarray analysis in tumour and cancer cell lines are not appropriate for analysing our data. Conclusion: After carrying out the analysis and validation for three sequenced Escherichia coli strains, CGH microarray data from 19 E. coli O157 pathogenic test strains were used to demonstrate the benefits of applying this simple and robust process to CGH microarray studies using bacterial genomes.
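The kernel-density cut-off favoured above can be sketched as: estimate the density of the CGH log-ratios with a Gaussian kernel and place the presence/absence threshold at the density minimum between the two modes. The bandwidth and peak-selection rule here are illustrative assumptions:

```python
import numpy as np

def kde_cutoff(log_ratios, bandwidth=0.2, grid_size=512):
    """Dynamic presence/absence threshold: the density minimum between
    the two modes (present vs absent/divergent) of the log-ratio
    distribution. Bandwidth is an illustrative fixed choice."""
    grid = np.linspace(log_ratios.min(), log_ratios.max(), grid_size)
    # unnormalised Gaussian kernel density estimate on the grid
    dens = np.exp(-0.5 * ((grid[:, None] - log_ratios[None, :])
                          / bandwidth) ** 2).sum(axis=1)
    peaks = [i for i in range(1, grid_size - 1)
             if dens[i] > dens[i - 1] and dens[i] > dens[i + 1]]
    if len(peaks) < 2:
        raise ValueError("expected a bimodal log-ratio distribution")
    lo, hi = sorted(sorted(peaks, key=lambda i: dens[i])[-2:])
    return grid[lo + np.argmin(dens[lo:hi + 1])]
```

On a simulated mixture of "present" genes near 0 and "absent" genes near -2, the cut-off lands between the two modes.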
Exploring the use of internal and external controls for assessing microarray technical performance
Game Laurence
2010-12-01
Background: The maturing of gene expression microarray technology and interest in the use of microarray-based applications for clinical and diagnostic purposes call for quantitative measures of quality. This manuscript presents a retrospective study characterizing several approaches to assessing the technical performance of microarray data measured on the Affymetrix GeneChip platform, including whole-array metrics and information from a standard mixture of external spike-in and endogenous internal controls. Spike-in controls were found to carry the same information about technical performance as whole-array metrics and endogenous "housekeeping" genes. These results support the use of spike-in controls as general tools for performance assessment across time, experimenters and array batches, suggesting that they have potential for comparison of microarray data generated across species using different technologies. Results: A layered PCA modeling methodology that uses data from a number of classes of controls (spike-in hybridization, spike-in polyA+, internal RNA degradation, endogenous or "housekeeping" genes) was used for the assessment of microarray data quality. The controls provide information on multiple stages of the experimental protocol (e.g., hybridization, RNA amplification). External spike-in, hybridization and RNA labeling controls provide information related to both assay and hybridization performance, whereas internal endogenous controls provide quality information on the biological sample. We find that the variance of the data generated from the external and internal controls carries critical information about technical performance; the PCA dissection of this variance is consistent with whole-array quality assessment based on a number of quality assurance/quality control (QA/QC) metrics. Conclusions: These results provide support for the use of both external and internal RNA control data to assess the technical quality of microarray data.
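The PCA step can be illustrated generically: compute principal-component scores of the control-probe intensities across arrays and inspect arrays with extreme scores. This is a plain PCA sketch, not the layered model described in the study:

```python
import numpy as np

def control_pca_scores(control_log_intensities, n_components=2):
    """PCA of control intensities (rows = arrays, columns = control
    probes). Arrays with extreme scores on the leading components are
    candidates for technical failure. Generic PCA-based QC sketch."""
    X = control_log_intensities - control_log_intensities.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :n_components] * s[:n_components]  # PC scores per array
```

An array with a shifted overall control level stands out on the first principal component.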
On the Statistics for Microarray Data Analysis
Urushibara, Tomoko; Akasaka, Shizu; Ito, Makiko; Suzuki, Tomonori; Miyazaki, Satoru
2010-01-01
Now that the human genome sequence has been determined almost completely, more and more researchers are studying genes in detail. Accumulated gene information for humans will become even more important in the near future for developing customized medicine and clarifying gene interactions. Among the available technologies, the microarray may be one of the most important analysis methods for genes, because it can capture expression data for large numbers of genes in a single experiment and can also be used for DNA isolation. To extract novel knowledge from microarray data, we need a rich set of statistical tools for its analysis. Many mathematical theories and definitions have been proposed, but many of these proposals are tested under strict conditions or are customized to data from specific species. In this paper, we review existing statistical methods for microarray analysis and discuss the repeatability of the analysis and the construction of a guideline with a more general procedure. First we analyzed microarray data for TG rats with the family-wise error rate (FWER) control approach and the False Discovery Rate (FDR) control approach. As in existing reports, no significantly different gene could be detected with the FWER control approach. On the other hand, we could find several significant genes with the FDR control approach, even at q=0.5. To assess the reliability of the FDR control approach under microarray conditions, we analyzed two more data sets from the Gene Expression Omnibus (GEO) public database with SAM, in addition to the FWER and FDR control approaches. We found a certain number of significantly different genes with the BH method and SAM in the case of q=0.05. However, the number and kinds of detected genes differed when we compared our results with those of the published paper, even when the same approach was used to analyze the same microarray data.
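The FDR-controlling Benjamini-Hochberg (BH) procedure discussed above is short enough to state in full; contrast it with Bonferroni-style FWER control, which uses the far stricter per-test threshold q/m:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """BH step-up procedure: with sorted p-values p_(1) <= ... <= p_(m),
    reject hypotheses 1..k where k is the largest i with
    p_(i) <= i*q/m. Controls the false discovery rate at level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest index passing the line
        rejected[order[:k + 1]] = True
    return rejected
```

On the example below, BH rejects four hypotheses while Bonferroni (p <= q/m = 0.005) rejects only three, illustrating why FDR control detects more genes at microarray scale.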
Nobumasa Hitoshi
2007-04-01
Background: Mycotoxins are fungal secondary metabolites commonly present in feed and food, and are widely regarded as hazardous contaminants. Citrinin, one of the best-known mycotoxins, was first isolated from Penicillium citrinum, is produced by more than 10 kinds of fungi, and is possibly spread all over the world. However, information on the toxin's mechanism of action is limited. We therefore investigated the citrinin-induced genomic response in order to evaluate its toxicity. Results: Citrinin inhibited growth of yeast cells at concentrations higher than 100 ppm. We monitored the citrinin-induced mRNA expression profiles in yeast using the ORF DNA microarray and the Oligo DNA microarray, and the expression profiles were compared with those of other stress-inducing agents. Results obtained from both microarray experiments clustered together, but were different from those of the mycotoxin patulin. The oxidative stress response genes AADs, FLR1, OYE3, GRE2, and MET17 were significantly induced. Among the functional categories, expression of genes involved in "metabolism", "cell rescue, defense and virulence", and "energy" was significantly activated. In the category of "metabolism", genes involved in the glutathione synthesis pathway were activated, and in the category of "cell rescue, defense and virulence", the ABC transporter genes were induced. To alleviate the induced stress, these cells might pump out the citrinin after modification with glutathione. By contrast, citrinin treatment did not induce the genes involved in DNA repair. Conclusion: Results from both microarray studies suggest that citrinin treatment induced oxidative stress in yeast cells. The genotoxicity was less severe than that of patulin, suggesting that citrinin is less toxic than patulin. The reproducibility of the expression profiles was much better with the Oligo DNA microarray. However, the Oligo DNA microarray did not completely overcome cross-hybridization.
International Conference Approximation Theory XIV
Schumaker, Larry
2014-01-01
This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contain surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.
Exact constants in approximation theory
Korneichuk, N
1991-01-01
This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base
Exhaustive Search for Fuzzy Gene Networks from Microarray Data
Sokhansanj, B A; Fitch, J P; Quong, J N; Quong, A A
2003-07-07
Recent technological advances in high-throughput data collection allow for the study of increasingly complex systems on the scale of the whole cellular genome and proteome. Gene network models are required to interpret large and complex data sets. Rationally designed system perturbations (e.g., gene knock-outs or metabolite removal) can be used to iteratively refine hypothetical models, leading to a modeling-experiment cycle for high-throughput biological system analysis. We use fuzzy logic gene network models because they have greater resolution than Boolean logic models and do not require the precise parameter measurement needed for chemical kinetics-based modeling. The fuzzy gene network approach is tested by exhaustive search for network models describing cyclin gene interactions in yeast cell-cycle microarray data, with preliminary success in recovering interactions predicted by previous biological knowledge and other analysis techniques. Our goal is to further develop this method in combination with experiments we are performing on bacterial regulatory networks.
Hira, Zena M; Trigeorgis, George; Gillies, Duncan F
2014-01-01
Microarray databases are a large source of genetic data which, upon proper analysis, could enhance our understanding of biology and medicine. Many microarray experiments have been designed to investigate the genetic mechanisms of cancer, and analytical approaches have been applied to classify different types of cancer or distinguish between cancerous and non-cancerous tissue. However, microarrays are high-dimensional datasets with high levels of noise, and this causes problems when using machine learning methods. A popular approach to this problem is to search for a set of features that will simplify the structure and to some degree remove the noise from the data. The most widely used approach to feature extraction is principal component analysis (PCA), which assumes a multivariate Gaussian model of the data. More recently, non-linear methods have been investigated. Among these, manifold learning algorithms, for example Isomap, aim to project the data from a higher-dimensional space onto a lower-dimensional one. We have proposed a priori manifold learning for finding a manifold in which a representative set of microarray data is fused with relevant data taken from the KEGG pathway database. Once the manifold has been constructed, the raw microarray data are projected onto it, and clustering and classification can take place. In contrast to earlier fusion-based methods, the prior knowledge from the KEGG database is not used in, and does not bias, the classification process; it merely acts as an aid to find the best space in which to search the data. In our experiments we have found that using our new manifold method gives better classification results than using either PCA or conventional Isomap.
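A minimal from-scratch Isomap (k-NN graph, geodesic distances, classical MDS) makes the projection step concrete; in practice a library implementation such as scikit-learn's Isomap would be used, and the a priori fusion with KEGG data is not shown:

```python
import numpy as np

def isomap(X, n_neighbors=8, n_components=2):
    """Minimal Isomap: symmetric k-NN graph -> geodesic distances
    (Floyd-Warshall) -> classical MDS. Assumes the k-NN graph is
    connected; a sketch, not a production implementation."""
    n = len(X)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    idx = np.argsort(d, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):                       # symmetric k-NN graph
        G[i, idx[i]] = d[i, idx[i]]
        G[idx[i], i] = d[i, idx[i]]
    for k in range(n):                       # all-pairs shortest paths
        G = np.minimum(G, G[:, k, None] + G[None, k, :])
    H = np.eye(n) - 1.0 / n                  # classical MDS on G**2
    B = -0.5 * H @ (G ** 2) @ H
    w, V = np.linalg.eigh(B)
    return V[:, -n_components:] * np.sqrt(np.maximum(w[-n_components:], 0))
```

On points lying on a semicircle, the one-dimensional embedding recovers the arc-length ordering that plain PCA would distort.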
Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model
Ge-Jin Chu
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods involving nonconvex penalty functions have been proposed. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 &lt; q &lt; 1) penalties, for variable selection in Cox's proportional hazards model applied to microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path-seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-series methods.
Seismic wave extrapolation using lowrank symbol approximation
Fomel, Sergey
2012-04-30
We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction involves Fourier transforms in space combined with the help of a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
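Selecting a small set of representative rows (spatial locations) and columns (wavenumbers) amounts to a cross/CUR-type skeleton approximation. A sketch, assuming the selected skeleton submatrix has full rank; how the representative sets are chosen in the paper is not reproduced:

```python
import numpy as np

def lowrank_from_samples(A, rows, cols):
    """Cross/CUR-style reconstruction from a few representative rows
    and columns: A_hat = C @ pinv(W) @ R with C = A[:, cols],
    R = A[rows, :], W = A[rows, cols]. Exact when rank(W) = rank(A)."""
    C = A[:, cols]
    R = A[rows, :]
    U = np.linalg.pinv(A[np.ix_(rows, cols)])
    return C @ U @ R
```

For a matrix of exact rank r, any r rows and r columns whose intersection is nonsingular reconstruct it exactly.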
Optimization of cDNA microarrays procedures using criteria that do not rely on external standards
Beisvag Vidar
2007-10-01
Background: The measurement of gene expression using microarray technology is a complicated process in which a large number of factors can be varied. Due to the lack of standard calibration samples such as are used in traditional chemical analysis, it may be difficult to evaluate whether changes made to the microarray procedure actually improve the identification of truly differentially expressed genes. The purpose of the present work is to report the optimization of several steps in the microarray process, both in laboratory practice and in data processing, using criteria that do not rely on external standards. Results: We performed a cDNA microarray experiment including RNA from samples with high expected differential gene expression, termed "high contrasts" (rat cell lines AR42J and NRK52E), compared to self-self hybridization, and optimized a pipeline to maximize the number of genes found to be differentially expressed in the "high contrasts" RNA samples by estimating the false discovery rate (FDR) using a null distribution obtained from the self-self experiment. The proposed high-contrast versus self-self method (HCSSM) requires only four microarrays per evaluation. The effects of blocking reagent dose, filtering, and background correction methodologies were investigated. In our experiments a dose of 250 ng LNA (locked nucleic acid) dT blocker, no background correction and weight-based filtering gave the largest number of differentially expressed genes. The choice of background correction method had a stronger impact on the estimated number of differentially expressed genes than the choice of filtering method. Cross-platform microarray (Illumina) analysis was used to validate that the increase in the number of differentially expressed genes found by HCSSM was real. Conclusion: The results show that HCSSM can be a useful and simple approach to optimize microarray procedures without including external standards. Our optimizing method is highly
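The core of the high-contrast versus self-self idea is to read the expected number of false positives off the self-self (null) experiment. A simplified sketch, omitting the pipeline's other steps:

```python
import numpy as np

def empirical_fdr(contrast_stats, null_stats, threshold):
    """Estimate the FDR at a given |statistic| threshold using a
    self-self experiment as the null: the expected number of false
    positives is read off the null distribution, scaled to the size
    of the contrast experiment. Simplified sketch of the idea."""
    n_called = np.sum(np.abs(contrast_stats) >= threshold)
    expected_false = (np.sum(np.abs(null_stats) >= threshold)
                      * len(contrast_stats) / len(null_stats))
    return min(1.0, expected_false / max(n_called, 1))
```

At a high threshold that the null scores rarely exceed, the estimated FDR is small; at threshold zero everything is called and the estimate saturates at 1.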
杨钊; 杜俊; 胡郁; 刘庆峰; 戴礼荣
2011-01-01
In real-world speech recognition applications, the diversity of noise creates a mismatch between training and testing conditions that degrades system performance. Feature compensation, an important method for robust speech recognition, studies the differences between training and testing environments and corrects the speech features in feature space, so that the corrected test features are closer to the training features. This paper presents a practical feature compensation algorithm based on a Vector Taylor Series (VTS) approximation with an explicit model of environmental distortion. We first verify the effectiveness of the traditional offline VTS algorithm in a real in-car environment. Because the offline algorithm itself is computationally expensive, we improve it to make it practical, increasing efficiency while keeping performance comparable to the offline case. Experimental results show that the recognition performance of the proposed practical VTS algorithm closely approaches the best performance of the offline condition.
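The VTS distortion model itself is compact: in the log-spectral domain, noisy features follow y = x + h + log(1 + exp(n - x - h)) for clean features x, channel h and noise n, and compensation linearizes this relation around the current estimates. A textbook sketch of the model and its Jacobian, not the paper's full algorithm:

```python
import numpy as np

def vts_noisy_mean(clean_mean, noise_mean, channel=0.0):
    """First-order VTS model of noisy log-spectral features:
    y = x + h + log(1 + exp(n - x - h)). Returns the predicted noisy
    mean and the Jacobian dy/dx used to propagate model variances."""
    g = np.log1p(np.exp(noise_mean - clean_mean - channel))
    y = clean_mean + channel + g
    jac = 1.0 / (1.0 + np.exp(noise_mean - clean_mean - channel))  # dy/dx
    return y, jac
```

The two limiting regimes behave as expected: when speech dominates, y ≈ x + h with Jacobian near 1; when noise dominates, y ≈ n with Jacobian near 0.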
Normalization for triple-target microarray experiments
Magniette Frederic
2008-04-01
Background: Most microarray studies are made using labelling with one or two dyes, which allows the hybridization of one or two samples on the same slide. In such experiments, the most frequently used dyes are Cy3 and Cy5. Recent improvements in the technology (dye-labelling, scanners and image analysis) allow hybridization of up to four samples simultaneously. The two additional dyes are Alexa488 and Alexa494. The triple-target or four-target technology is very promising, since it allows more flexibility in the design of experiments, an increase in statistical power when comparing gene expression induced by different conditions, and a scaled-down number of slides. However, few methods have been proposed for statistical analysis of such data. Moreover, the lowess correction of the global dye effect is available only for two-color experiments, and even if its application can be derived, it does not allow simultaneous correction of the raw data. Results: We propose a two-step normalization procedure for triple-target experiments. First the dye bleeding is evaluated and corrected if necessary. Then the signal in each channel is normalized using a generalized lowess procedure to correct a global dye bias. The normalization procedure is validated using triple-self experiments and by comparing the results of triple-target and two-color experiments. Although the focus is on triple-target microarrays, the proposed method can be used to normalize p differently labelled targets co-hybridized on the same array, for any value of p greater than 2. Conclusion: The proposed normalization procedure is effective: the technical biases are reduced, the number of false positives is under control in the analysis of differentially expressed genes, and the triple-target experiments are more powerful than the corresponding two-color experiments. There is room for improving microarray experiments by simultaneously hybridizing more than two samples.
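Intensity-dependent dye-bias correction in the familiar two-colour case can be sketched with a running median of M over A as a cheap stand-in for the lowess fit (the paper generalizes the lowess step to p > 2 channels):

```python
import numpy as np

def ma_normalise(ch1, ch2, window=101):
    """Two-colour dye-bias correction sketch: M = log2(ch1/ch2),
    A = mean log2 intensity; estimate the dye bias as a running median
    of M ordered by A (a simple stand-in for lowess) and subtract it."""
    M = np.log2(ch1) - np.log2(ch2)
    A = 0.5 * (np.log2(ch1) + np.log2(ch2))
    order = np.argsort(A)
    half = window // 2
    Ms = M[order]
    bias = np.empty_like(M)
    for j in range(len(M)):
        lo, hi = max(0, j - half), min(len(M), j + half + 1)
        bias[order[j]] = np.median(Ms[lo:hi])
    return M - bias
```

On simulated data with an intensity-dependent dye effect and no true differential expression, normalization removes most of the spread in M.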
Extended -Regular Sequence for Automated Analysis of Microarray Images
Jin Hee-Jeong
2006-01-01
Microarray studies enable us to obtain hundreds of thousands of gene expression measurements or genotypes at once, making microarrays an indispensable technology for genome research. The first step is the analysis of the scanned microarray images; this is the most important procedure for obtaining biologically reliable data. Currently, most microarray image-processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image-analysis software is becoming important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing of a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.
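The paper's extended regular-sequence indexing is not reproduced here, but the basic idea behind automatic gridding, locating spot rows and columns without manual indexing, can be sketched by projecting the image intensities onto each axis and keeping the local maxima. This is a common simplified approach, not the authors' algorithm.

```python
def grid_lines(image):
    """Locate candidate spot rows and columns in a 2-D intensity array by
    projecting onto each axis and keeping above-average local maxima."""
    n_rows, n_cols = len(image), len(image[0])
    row_profile = [sum(image[r]) for r in range(n_rows)]
    col_profile = [sum(image[r][c] for r in range(n_rows)) for c in range(n_cols)]

    def peaks(profile):
        mean = sum(profile) / len(profile)
        return [i for i in range(1, len(profile) - 1)
                if profile[i] > mean
                and profile[i] >= profile[i - 1]
                and profile[i] >= profile[i + 1]]

    return peaks(row_profile), peaks(col_profile)

# synthetic 7x7 image with a 3x3 grid of bright spots
image = [[9 if r in (1, 3, 5) and c in (1, 3, 5) else 0
          for c in range(7)] for r in range(7)]
rows, cols = grid_lines(image)
print(rows, cols)
```

On real scans the profiles are noisy and the grid may be rotated, which is exactly why more robust indexing schemes such as the one proposed in the paper are needed.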
Formation and characterization of DNA microarrays at silicon nitride substrates.
Manning, Mary; Redmond, Gareth
2005-01-01
A versatile method for direct, covalent attachment of DNA microarrays at silicon nitride layers, previously deposited by chemical vapor deposition at silicon wafer substrates, is reported. Each microarray fabrication process step, from silicon nitride substrate deposition, surface cleaning, amino-silanation, and attachment of a homobifunctional cross-linking molecule to covalent immobilization of probe oligonucleotides, is defined, characterized, and optimized to yield consistent probe microarray quality, homogeneity, and probe-target hybridization performance. The developed microarray fabrication methodology provides excellent (high signal-to-background ratio) and reproducible responsivity to target oligonucleotide hybridization with a rugged chemical stability that permits exposure of arrays to stringent pre- and posthybridization wash conditions through many sustained cycles of reuse. Overall, the achieved performance features compare very favorably with those of more mature glass based microarrays. It is proposed that this DNA microarray fabrication strategy has the potential to provide a viable route toward the successful realization of future integrated DNA biochips.
Human brain evolution: insights from microarrays.
Preuss, Todd M; Cáceres, Mario; Oldham, Michael C; Geschwind, Daniel H
2004-11-01
Several recent microarray studies have compared gene-expression patterns in humans, chimpanzees and other non-human primates to identify evolutionary changes that contribute to the distinctive cognitive and behavioural characteristics of humans. These studies support the surprising conclusion that the evolution of the human brain involved an upregulation of gene expression relative to non-human primates, a finding that could be relevant to understanding human cerebral physiology and function. These results show how genetic and genomic methods can shed light on the basis of human neural and cognitive specializations, and have important implications for neuroscience, anthropology and medicine.
Miniaturised Spotter-Compatible Multicapillary Stamping Tool for Microarray Printing
Drobyshev, Alexei L.; Verkhodanov, Nikolai N.; Zasedatelev, Alexander S.
2007-01-01
A novel microstamping tool for microarray printing is proposed. The tool can spot up to 127 droplets of different solutions in a single touch and is easily compatible with commercially available microarray spotters. It is based on a multichannel funnel with polypropylene capillaries inserted into its channels; superior flexibility is achieved through the ability to replace any printing capillary of the tool. As a practical implementation, hydrogel-based microarrays were stamped and successfully applied to identify Mycobacterium tuberculosis drug resistance.
Identification of candidate genes in osteoporosis by integrated microarray analysis
Li, J J; Wang, B. Q.; Fei, Q.; Yang, Y; Li, D.
2017-01-01
Objectives In order to screen the altered gene expression profile in peripheral blood mononuclear cells of patients with osteoporosis, we performed an integrated analysis of the online microarray studies of osteoporosis. Methods We searched the Gene Expression Omnibus (GEO) database for microarray studies of peripheral blood mononuclear cells in patients with osteoporosis. Subsequently, we integrated gene expression data sets from multiple microarray studies to obtain differentially expressed...
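The truncated abstract does not spell out the integration procedure, so purely as a hedged illustration, one standard way to integrate differential-expression evidence for a gene across several microarray studies is Fisher's method for combining p-values. The per-study p-values below are invented for the example.

```python
import math

def fisher_combined(pvalues):
    """Fisher's method: X = -2 * sum(ln p_i) is chi-square with 2k d.f.
    under the null that the gene is non-differential in every study."""
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    return stat, 2 * len(pvalues)

def chi2_sf_even_df(x, df):
    """Chi-square survival function, exact closed form for even df = 2k."""
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

# hypothetical per-study p-values for one gene across three GEO data sets
stat, df = fisher_combined([0.04, 0.01, 0.10])
print(stat, chi2_sf_even_df(stat, df))
```

Three individually modest p-values combine to strong overall evidence, which is the point of integrating multiple small microarray studies.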
Novel R Pipeline for Analyzing Biolog Phenotypic Microarray Data
Vehkala, Minna; Shubin, Mikhail; Connor, Thomas Richard; Thomson, Nicholas R.; Corander, Jukka
2015-01-01
Data produced by Biolog Phenotype MicroArrays are longitudinal measurements of cells' respiration on distinct substrates. We introduce a three-step pipeline to analyze phenotypic microarray data with novel procedures for grouping, normalization and effect identification. Grouping and normalization are standard problems in the analysis of phenotype microarrays defined as categorizing bacterial responses into active and non-active, and removing systematic errors from the experimental data, resp...
BDD Minimization for Approximate Computing
Soeken, Mathias; Grosse, Daniel; Chandrasekharan, Arun; Drechsler, Rolf
2016-01-01
We present Approximate BDD Minimization (ABM) as a problem that has application in approximate computing. Given a BDD representation of a multi-output Boolean function, ABM asks whether there exists another function that has a smaller BDD representation but meets a threshold w.r.t. an error metric. We present operators to derive approximated functions and present algorithms to exactly compute the error metrics directly on the BDD representation. An experimental evaluation demonstrates the app...
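The paper computes error metrics directly on the BDD representation; as a minimal illustration of one such metric, the error rate, the sketch below compares an exact function against a cheaper candidate by exhaustive truth-table enumeration, which is feasible only for small variable counts (unlike the BDD-based algorithms the paper describes).

```python
from itertools import product

def error_rate(f, g, n_vars):
    """Fraction of input assignments on which f and g disagree."""
    mismatches = sum(f(*x) != g(*x) for x in product((0, 1), repeat=n_vars))
    return mismatches / 2 ** n_vars

def exact(a, b, c):      # 3-input majority function
    return int(a + b + c >= 2)

def approx(a, b, c):     # cheap candidate: just pass the first input through
    return a

print(error_rate(exact, approx, 3))
```

An ABM-style search would accept `approx` only if this rate (here 2 of 8 assignments) stays under the given error threshold while its BDD is smaller.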
Tree wavelet approximations with applications
XU Yuesheng; ZOU Qingsong
2005-01-01
We construct a tree wavelet approximation by using a constructive greedy scheme (CGS). We define a function class which contains the functions whose piecewise polynomial approximations generated by the CGS have a prescribed global convergence rate and establish embedding properties of this class. We provide sufficient conditions on a tree index set and on bi-orthogonal wavelet bases which ensure optimal order of convergence for the wavelet approximations encoded on the tree index set using the bi-orthogonal wavelet bases. We then show that if we use the tree index set associated with the partition generated by the CGS to encode a wavelet approximation, it gives optimal order of convergence.
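The paper works with bi-orthogonal wavelet bases and a constructive greedy scheme; as a toy sketch of the tree-constrained greedy idea only, the code below computes a Haar decomposition and then greedily keeps the largest detail coefficients, admitting a coefficient only after its parent in the coefficient tree, so the kept index set is always a tree.

```python
def haar(signal):
    """Full Haar decomposition: returns the coarsest average plus detail
    coefficients grouped by level, coarse to fine."""
    levels, s = [], list(signal)
    while len(s) > 1:
        levels.append([(s[2*i] - s[2*i+1]) / 2 for i in range(len(s) // 2)])
        s = [(s[2*i] + s[2*i+1]) / 2 for i in range(len(s) // 2)]
    return s[0], levels[::-1]

def greedy_tree(levels, k):
    """Greedily keep up to k detail coefficients, largest magnitude first,
    subject to the tree constraint: a node may enter only after its parent."""
    kept = set()
    for _ in range(k):
        best = None
        for lvl, dets in enumerate(levels):
            for i, d in enumerate(dets):
                if (lvl, i) in kept:
                    continue
                if lvl > 0 and (lvl - 1, i // 2) not in kept:
                    continue
                if best is None or abs(d) > abs(best[2]):
                    best = (lvl, i, d)
        if best is None:
            break
        kept.add((best[0], best[1]))
    return kept

mean, levels = haar([0, 0, 0, 0, 8, 8, 8, 8])   # single jump mid-signal
kept = greedy_tree(levels, 2)
print(mean, kept)
```

The single jump concentrates energy in one coarse coefficient, which the greedy scheme picks first; this adaptivity around singularities is what drives the convergence rates studied in the paper.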
Factorial and time course designs for cDNA microarray experiments.
Glonek, G F V; Solomon, P J
2004-01-01
Microarrays are powerful tools for surveying the expression levels of many thousands of genes simultaneously. They belong to the new genomics technologies which have important applications in the biological, agricultural and pharmaceutical sciences. There are myriad sources of uncertainty in microarray experiments, and rigorous experimental design is essential for fully realizing the potential of these valuable resources. Two questions frequently asked by biologists on the brink of conducting cDNA or two-colour, spotted microarray experiments are 'Which mRNA samples should be competitively hybridized together on the same slide?' and 'How many times should each slide be replicated?' Early experience has shown that whilst the field of classical experimental design has much to offer this emerging multi-disciplinary area, new approaches which accommodate features specific to the microarray context are needed. In this paper, we propose optimal designs for factorial and time course experiments, which are special designs arising quite frequently in microarray experimentation. Our criterion for optimality is statistical efficiency based on a new notion of admissible designs; our approach enables efficient designs to be selected subject to the information available on the effects of most interest to biologists, the number of arrays available for the experiment, and other resource or practical constraints, including limitations on the amount of mRNA probe. We show that our designs are superior to both the popular reference designs, which are highly inefficient, and to designs incorporating all possible direct pairwise comparisons. Moreover, our proposed designs represent a substantial practical improvement over classical experimental designs which work in terms of standard interactions and main effects. The latter do not provide a basis for meaningful inference on the effects of most interest to biologists, nor make the most efficient use of valuable and limited resources.
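The inefficiency of reference designs can be made concrete with a small fixed-effects calculation (my simplification, not the paper's admissibility machinery): with three slides, unit error variance and treatment C as baseline, the reference design estimates A - B as (A-R) - (B-R) with variance 2, whereas the loop design A-B, B-C, C-A achieves variance 2/3 for the same contrast.

```python
def pairwise_variance_loop():
    """Variance of the least-squares estimate of the contrast A - B under
    the loop design A-B, B-C, C-A (C taken as baseline, unit variance)."""
    # each slide measures a dye-balanced difference: row = coefficients of (a, b)
    X = [(1, -1), (0, 1), (-1, 0)]
    # normal matrix X'X and its 2x2 inverse
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(2)] for i in range(2)]
    det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
    inv = [[xtx[1][1] / det, -xtx[0][1] / det],
           [-xtx[1][0] / det, xtx[0][0] / det]]
    c = (1, -1)  # contrast A - B
    return sum(c[i] * inv[i][j] * c[j] for i in range(2) for j in range(2))

v = pairwise_variance_loop()
print(v)   # compare with variance 2 under the reference design
```

The threefold gain with the same number of arrays illustrates why the authors find reference designs highly inefficient for comparisons among the non-reference samples.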
Design of an Enterobacteriaceae Pan-genome Microarray Chip
Lukjancenko, Oksana; Ussery, David
2010-01-01
A high-density microarray chip has been designed, using 116 Enterobacteriaceae genome sequences, taking into account the enteric pan-genome. Probes for the microarray were checked in silico, and the performance of the chip, based on experimental strains from four different genera, demonstrates a relatively high ability to distinguish those strains at the genus, species, and pathotype/serovar levels. Additionally, the microarray performed well when investigating which genes were found in a given strain of interest. The Enterobacteriaceae pan-genome microarray, based on 116 genomes, provides a valuable tool for determination...
Chemiluminescence microarrays in analytical chemistry: a critical review.
Seidel, Michael; Niessner, Reinhard
2014-09-01
Multi-analyte immunoassays on microarrays and on multiplex DNA microarrays have been described for quantitative analysis of small organic molecules (e.g., antibiotics, drugs of abuse, small molecule toxins), proteins (e.g., antibodies or protein toxins), and microorganisms, viruses, and eukaryotic cells. In analytical chemistry, multi-analyte detection by use of analytical microarrays has become an innovative research topic because of the possibility of generating several sets of quantitative data for different analyte classes in a short time. Chemiluminescence (CL) microarrays are powerful tools for rapid multiplex analysis of complex matrices. A wide range of applications for CL microarrays is described in the literature dealing with analytical microarrays. The motivation for this review is to summarize the current state of CL-based analytical microarrays. Combining analysis of different compound classes on CL microarrays reduces analysis time, cost of reagents, and use of laboratory space. Applications are discussed, with examples from food safety, water safety, environmental monitoring, diagnostics, forensics, toxicology, and biosecurity. The potential and limitations of research on multiplex analysis by use of CL microarrays are discussed in this review.
A statistical framework for differential network analysis from microarray data
Datta Somnath
2010-02-01
Background: It has long been well known that genes do not act alone; rather, groups of genes act in concert during a biological process. Consequently, the expression levels of genes are dependent on each other. Experimental techniques to detect such interacting pairs of genes have been in place for quite some time. With the advent of microarray technology, newer computational techniques to detect such interaction or association between gene expressions are being proposed, which lead to an association network. While most microarray analyses look for genes that are differentially expressed, it is of potentially greater significance to identify how entire association network structures change between two or more biological settings, say normal versus diseased cell types. Results: We provide a recipe for conducting a differential analysis of networks constructed from microarray data under two experimental settings. At the core of our approach lies a connectivity score that represents the strength of genetic association or interaction between two genes. We use this score to propose formal statistical tests for each of the following queries: (i) whether the overall modular structures of the two networks are different, (ii) whether the connectivity of a particular set of "interesting genes" has changed between the two networks, and (iii) whether the connectivity of a given single gene has changed between the two networks. A number of examples of this score are provided. We carried out our method on two types of simulated data: Gaussian networks and networks based on differential equations. We show that, for appropriate choices of the connectivity scores and tuning parameters, our method works well on simulated data. We also analyze a real data set involving normal versus heavy mice and identify an interesting set of genes that may play key roles in obesity. Conclusions: Examining changes in network structure can provide valuable information about the
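The framework leaves the choice of connectivity score open; taking Pearson correlation as the score, a per-gene differential-connectivity statistic in the spirit of query (iii) can be sketched as the summed change in a gene's correlation with every partner gene between the two conditions. The gene names and expression profiles below are illustrative only.

```python
def pearson(x, y):
    """Pearson correlation, used here as the connectivity score."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def differential_connectivity(cond1, cond2, gene):
    """Total change in the gene's connectivity between two conditions."""
    return sum(abs(pearson(cond1[gene], cond1[g]) - pearson(cond2[gene], cond2[g]))
               for g in cond1 if g != gene)

# toy profiles: B flips from co-expressed with A to anti-correlated
normal  = {"A": [1, 2, 3, 4], "B": [1, 2, 3, 4], "C": [4, 3, 2, 1]}
disease = {"A": [1, 2, 3, 4], "B": [4, 3, 2, 1], "C": [4, 3, 2, 1]}
d = differential_connectivity(normal, disease, "A")
print(d)
```

Gene A's score is driven entirely by its flipped relationship with B; the paper's formal tests would calibrate such a statistic against a null distribution rather than read it off directly.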
Exponential Approximations Using Fourier Series Partial Sums
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of the Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the kth derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
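The initial estimates in the first step rely on properties of the Gibbs phenomenon; the sketch below reproduces the effect for the ±1 square wave, whose partial sums overshoot near the jump by roughly 9% of the jump height (peak near 1.179) no matter how many terms are taken. The sampling grid is arbitrary.

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier sum of the +/-1 square wave:
    (4/pi) * sum over odd k of sin(k x) / k."""
    return (4 / math.pi) * sum(math.sin((2 * j + 1) * x) / (2 * j + 1)
                               for j in range(n_terms))

# scan just to the right of the jump at x = 0 with 50 nonzero harmonics
peak = max(square_wave_partial_sum(k * math.pi / 5000, 50) for k in range(1, 500))
print(peak)
```

Adding more terms narrows the overshoot region but does not shrink its height, which is exactly the non-uniform convergence the paper's singular basis functions are designed to remove.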
Boundary control of parabolic systems - Finite-element approximation
Lasiecka, I.
1980-01-01
The finite element approximation of a Dirichlet type boundary control problem for parabolic systems is considered. An approach based on the direct approximation of an input-output semigroup formula is applied. Error estimates are derived for optimal state and optimal control, and it is noted that these estimates are actually optimal with respect to the approximation theoretic properties.
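The paper's semigroup-based finite-element analysis is not reproduced here; as a toy stand-in for a parabolic system with Dirichlet boundary control, the sketch below advances the 1-D heat equation by explicit finite differences, with the control entering as the prescribed boundary values at each step. The grid size, step ratio and target state are all arbitrary choices for the demonstration.

```python
def heat_step(u, left_control, right_control, r):
    """One explicit finite-difference step of u_t = u_xx on a rod, with
    Dirichlet boundary values supplied by the control; stable for r <= 0.5."""
    inner = [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
             for i in range(1, len(u) - 1)]
    return [left_control] + inner + [right_control]

# steer an initially cold rod toward the state 1.0 by clamping both
# boundaries at 1.0 (a constant-in-time Dirichlet control)
u = [0.0] * 11
for _ in range(400):
    u = heat_step(u, 1.0, 1.0, 0.4)
print(u[5])
```

An optimal-control computation would instead choose time-varying boundary values minimizing a cost functional; the discretization error of such schemes is what the paper's estimates quantify.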