WorldWideScience

Sample records for accurate svm-based gene

  1. A New SVM-Based Modeling Method of Cabin Path Loss Prediction

    Xiaonan Zhao; Chunping Hou; Qing Wang

    2013-01-01

    A new modeling method of cabin path loss prediction based on support vector machine (SVM) is proposed in this paper. The method is trained with the path loss values of measured points inside the cabin and can be used to predict the path loss values of the unmeasured points. The experimental results demonstrate that our modeling method is more accurate than the curve fitting method. This SVM-based path loss prediction method makes the prediction much easier and more accurate, which covers perf...
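
As a rough illustration of the idea in record 1 (training an SVM regressor on measured points and querying it at unmeasured ones), a scikit-learn sketch could look as follows; the coordinates, path-loss values and hyperparameters are made-up placeholders, not data from the paper.

```python
# Minimal sketch of SVM-based path-loss regression: train on measured
# points, predict at unmeasured ones. All coordinates, loss values and
# hyperparameters are made-up placeholders, not data from the paper.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical (x, y, z) positions of measured cabin points and the
# path loss (dB) observed at each.
X_measured = np.array([[0.5, 1.0, 1.2], [1.5, 1.0, 1.2],
                       [2.5, 2.0, 1.2], [3.5, 2.0, 1.2]])
loss_db = np.array([62.1, 65.4, 70.3, 73.8])

# RBF-kernel SVR; scaling the coordinates first is standard practice.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_measured, loss_db)

# Predict the path loss at an unmeasured point.
print(model.predict(np.array([[2.0, 1.5, 1.2]])))
```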

  2. SVM-based glioma grading. Optimization by feature reduction analysis

    We investigated the predictive power of feature reduction analysis approaches in support vector machine (SVM)-based classification of glioma grade. In 101 untreated glioma patients, three analytic approaches were evaluated to derive an optimal reduction in features: (i) Pearson's correlation coefficients (PCC), (ii) principal component analysis (PCA) and (iii) independent component analysis (ICA). Tumor grading was performed using a previously reported SVM approach including whole-tumor cerebral blood volume (CBV) histograms and patient age. Best classification accuracy was found using PCA at 85% (sensitivity = 89%, specificity = 84%) when reducing the feature vector from 101 (100-bins rCBV histogram + age) to 3 principal components. In comparison, classification accuracy by PCC was 82% (89%, 77%, 2 dimensions) and 79% by ICA (87%, 75%, 9 dimensions). For improved speed (up to 30%) and simplicity, feature reduction by all three methods provided similar classification accuracy to literature values (≈87%) while reducing the number of features by up to 98%. (orig.)
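
The PCA-then-SVM workflow described above (a 100-bin rCBV histogram plus age reduced to three principal components before classification) maps directly onto a short scikit-learn pipeline. This sketch uses synthetic stand-in data; the component count and kernel are the only details taken from the abstract.

```python
# Sketch of the PCA -> SVM workflow described above, on synthetic data:
# a 100-bin rCBV histogram plus patient age, reduced to 3 components
# before RBF-SVM classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((101, 101))        # 101 patients x (100-bin histogram + age)
y = rng.integers(0, 2, size=101)  # hypothetical low-grade/high-grade labels

clf = make_pipeline(PCA(n_components=3), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())
```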

  3. SVM based layout retargeting for fast and regularized inverse lithography

    Kai-sheng LUO; Zheng SHI; Xiao-lang YAN; Zhen GENG

    2014-01-01

    Inverse lithography technology (ILT), also known as pixel-based optical proximity correction (PB-OPC), has shown promising capability in pushing the current 193 nm lithography to its limit. By treating the mask optimization process as an inverse problem in lithography, ILT provides a more complete exploration of the solution space and better pattern fidelity than the traditional edge-based OPC. However, the existing methods of ILT are extremely time-consuming due to the slow convergence of the optimization process. To address this issue, in this paper we propose a support vector machine (SVM) based layout retargeting method for ILT, which is designed to generate a good initial input mask for the optimization process and promote the convergence speed. Supervised by optimized masks of training layouts generated by conventional ILT, SVM models are learned and used to predict the initial pixel values in the 'undefined areas' of the new layout. By this process, an initial input mask close to the final optimized mask of the new layout is generated, which reduces iterations needed in the following optimization process. Manufacturability is another critical issue in ILT; however, the mask generated by our layout retargeting method is quite irregular due to the prediction inaccuracy of the SVM models. To compensate for this drawback, a spatial filter is employed to regularize the retargeted mask for complexity reduction. We implemented our layout retargeting method with a regularized level-set based ILT (LSB-ILT) algorithm under partially coherent illumination conditions. Experimental results show that with an initial input mask generated by our layout retargeting method, the number of iterations needed in the optimization process and runtime of the whole process in ILT are reduced by 70.8% and 69.0%, respectively.

  4. Combinatorial Approaches to Accurate Identification of Orthologous Genes

    Shi, Guanqun

    2011-01-01

    The accurate identification of orthologous genes across different species is a critical and challenging problem in comparative genomics and has a wide spectrum of biological applications including gene function inference, evolutionary studies and systems biology. During the past several years, many methods have been proposed for ortholog assignment based on sequence similarity, phylogenetic approaches, synteny information, and genome rearrangement. Although these methods share many commonly a...

  5. PredSTP: a highly accurate SVM based model to predict sequential cystine stabilized peptides

    Islam, S. M. Ashiqul; Sajed, Tanvir; Kearney, Christopher Michel; Baker, Erich J.

    2015-01-01

    Background Numerous organisms have evolved a wide range of toxic peptides for self-defense and predation. Their effective interstitial and macro-environmental use requires energetic and structural stability. One successful group of these peptides includes a tri-disulfide domain arrangement that offers toxicity and high stability. Sequential tri-disulfide connectivity variants create highly compact disulfide folds capable of withstanding a variety of environmental stresses. Their combination o...

  6. svmPRAT: SVM-based Protein Residue Annotation Toolkit

    Rangwala, Huzefa; Kauffman, Christopher; Karypis, George

    2009-01-01

    Background Over the last decade several prediction methods have been developed for determining the structural and functional properties of individual protein residues using sequence and sequence-derived information. Most of these methods are based on support vector machines as they provide accurate and generalizable prediction models. Results We present a general purpose protein residue annotation toolkit (svmPRAT) to allow biologists to formulate residue-wise prediction problems. svmPRAT f...

  7. Advances in SVM-Based System Using GMM Super Vectors for Text-Independent Speaker Verification

    ZHAO Jian; DONG Yuan; ZHAO Xianyu; YANG Hao; LU Liang; WANG Haila

    2008-01-01

    For text-independent speaker verification, the Gaussian mixture model (GMM) using a universal background model strategy and the GMM using support vector machines are the two most commonly used methodologies. Recently, a new SVM-based speaker verification method using GMM super vectors has been proposed. This paper describes the construction of a new speaker verification system and investigates the use of nuisance attribute projection and test normalization to further enhance performance. Experiments were conducted on the core test of the 2006 NIST speaker recognition evaluation corpus. The experimental results indicate that an SVM-based speaker verification system using GMM super vectors can achieve appealing performance. With the use of nuisance attribute projection and test normalization, the system performance can be significantly improved, with improvements in the equal error rate from 7.78% to 4.92% and detection cost function from 0.0376 to 0.0251.
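
A compact sketch of the GMM super vector construction mentioned above: MAP-adapt the means of a universal background model (UBM) to each utterance and stack them into one fixed-length vector for an SVM. The feature dimension, component count and relevance factor below are illustrative assumptions, not values from the paper.

```python
# Sketch of GMM super vectors for SVM-based speaker verification: adapt
# the means of a universal background model (UBM) to each utterance and
# stack them into a single feature vector. Dimensions and the relevance
# factor are illustrative, not taken from the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

def supervector(ubm: GaussianMixture, frames: np.ndarray, r: float = 16.0):
    post = ubm.predict_proba(frames)            # (n_frames, n_components)
    n_k = post.sum(axis=0)                      # soft counts per component
    # First-order statistics: posterior-weighted mean of the frames.
    f_k = (post.T @ frames) / np.maximum(n_k, 1e-10)[:, None]
    alpha = (n_k / (n_k + r))[:, None]          # MAP adaptation weights
    adapted = alpha * f_k + (1.0 - alpha) * ubm.means_
    return adapted.ravel()                      # stacked "super vector"

# Hypothetical usage: train the UBM on pooled background frames, then
# build one super vector per utterance and feed them all to an SVM.
rng = np.random.default_rng(0)
background = rng.standard_normal((2000, 13))    # e.g. 13-dim MFCC frames
ubm = GaussianMixture(n_components=8, covariance_type="diag").fit(background)
sv = supervector(ubm, rng.standard_normal((300, 13)))
print(sv.shape)                                 # (8 * 13,) = (104,)
```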

  8. Diagnosis of Elevator Faults with LS-SVM Based on Optimization by K-CV

    Zhou Wan

    2015-01-01

    Full Text Available Several common elevator malfunctions were diagnosed with a least squares support vector machine (LS-SVM). After acquiring vibration signals of various elevator functions, their energy characteristics and time-domain indicators were extracted by theoretical analysis of the optimal wavelet packet, in order to construct a feature vector of malfunctions that serves as the input of the LS-SVM for identifying the causes of the malfunctions. Meanwhile, the parameters of the LS-SVM were optimized by K-fold cross validation (K-CV). After diagnosing a deviated elevator guide rail, a deviated guide shoe, abnormal running of the tractor, an erroneous rope groove of the traction sheave, a deviated guide wheel, and tension of the wire rope, the results suggested that the LS-SVM based on K-CV optimization is an effective method for diagnosing elevator malfunctions.
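
For context, a least squares SVM replaces the standard SVM's inequality constraints with equalities, so training reduces to solving a single linear system (the usual Suykens formulation). A minimal numpy sketch of the classifier follows; in the study above the hyperparameters (here gamma and sigma) would be tuned by K-fold cross validation rather than fixed.

```python
# Minimal LS-SVM classifier sketch (Suykens formulation): training
# reduces to solving one linear system instead of a QP. Hyperparameters
# are placeholders, not values from the paper.
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    omega = np.outer(y, y) * rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]              # bias b, dual coefficients alpha

def lssvm_predict(X_train, y, alpha, b, X_new, sigma=1.0):
    return np.sign(rbf(X_new, X_train, sigma) @ (alpha * y) + b)

# Tiny smoke test on separable toy data (labels must be +/-1).
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [1.2, 0.9]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
b, alpha = lssvm_train(X, y)
print(lssvm_predict(X, y, alpha, b, X))
```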

  9. Accurate and unambiguous tag-to-gene mapping in serial analysis of gene expression

    Melo Francisco

    2006-11-01

    Full Text Available Abstract Background In this study, we present a robust and reliable computational method for tag-to-gene assignment in serial analysis of gene expression (SAGE). The method relies on current genome information and annotation, incorporation of several new features, and key improvements over alternative methods, all of which are important to determine gene expression levels more accurately. The method provides a complete annotation of potential virtual SAGE tags within a genome, along with an estimation of their confidence for experimental observation that ranks tags that present multiple matches in the genome. Results We applied this method to the Saccharomyces cerevisiae genome, producing the most thorough and accurate annotation of potential virtual SAGE tags that is available today for this organism. The usefulness of this method is exemplified by the significant reduction of ambiguous cases in existing experimental SAGE data. In addition, we report new insights from the analysis of existing SAGE data. First, we found that experimental SAGE tags mapping onto introns, intron-exon boundaries, and non-coding RNA elements are observed in all available SAGE data. Second, a significant fraction of experimental SAGE tags was found to map onto genomic regions currently annotated as intergenic. Third, a significant number of existing experimental SAGE tags for yeast has been derived from truncated cDNAs, which are synthesized through oligo-d(T) priming to internal poly-(A) regions during reverse transcription. Conclusion We conclude that an accurate and unambiguous tag mapping process is essential to increase the quality and the amount of information that can be extracted from SAGE experiments. This is supported by the results obtained here and also by the large impact that the erroneous interpretation of these data could have on downstream applications.
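
Conceptually, virtual SAGE tag extraction finds the anchoring-enzyme site closest to the 3' end of a transcript and takes the short sequence immediately downstream. The toy sketch below assumes the common NlaIII protocol (CATG site, 10-bp tags); the full method above additionally ranks tags with multiple genomic matches.

```python
# Toy sketch of virtual SAGE tag extraction, assuming the common NlaIII
# protocol: the tag is the 10 bp immediately 3' of the CATG site closest
# to the transcript's 3' end. Real pipelines, like the method above, must
# also rank tags that match multiple genomic locations.
def virtual_tag(transcript: str, site: str = "CATG", tag_len: int = 10):
    pos = transcript.rfind(site)                 # 3'-most anchoring site
    start = pos + len(site)
    if pos == -1 or start + tag_len > len(transcript):
        return None                              # no usable tag
    return transcript[start:start + tag_len]

print(virtual_tag("GGGCATGAAATTTCCCGGGTTTAAACATGACGTACGTAGCAA"))
# -> 'ACGTACGTAG' (the 10 bp after the last CATG)
```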

  10. Accurate prediction of secondary metabolite gene clusters in filamentous fungi

    Andersen, Mikael Rørdam; Nielsen, Jakob Blæsbjerg; Klitgaard, Andreas; Petersen, Lene Maj; Zachariasen, Mia; Hansen, Tilde J; Holberg Blicher, Lene; Gotfredsen, Charlotte Held; Larsen, Thomas Ostenfeld; Nielsen, Kristian Fog; Mortensen, Uffe Hasbro

    2013-01-01

    Biosynthetic pathways of secondary metabolites from fungi are currently subject to an intense effort to elucidate the genetic basis for these compounds due to their large potential within pharmaceutics and synthetic biochemistry. The preferred method is methodical gene deletions to identify...... used A. nidulans for our method development and validation due to the wealth of available biochemical data, but the method can be applied to any fungus with a sequenced and assembled genome, thus supporting further secondary metabolite pathway elucidation in the fungal kingdom....

  11. PCA-SVM based CAD System for Focal Liver Lesions using B-Mode Ultrasound Images

    Jitendra Virmani

    2013-09-01

    Full Text Available The contribution made by texture of regions inside and outside of the lesions in classification of focal liver lesions (FLLs) is investigated in the present work. In order to design an efficient computer-aided diagnostic (CAD) system for FLLs, a representative database consisting of images with (1) typical and atypical cases of cyst, hemangioma (HEM) and metastatic carcinoma (MET) lesions, (2) small as well as large hepatocellular carcinoma (HCC) lesions and (3) normal (NOR) liver tissue is used. Texture features are computed from regions inside and outside of the lesions. A feature set consisting of 208 texture features (i.e., 104 texture features and 104 texture ratio features) is subjected to principal component analysis (PCA) for finding the optimal number of principal components to train a support vector machine (SVM) classifier for the classification task. The proposed PCA-SVM based CAD system yielded a classification accuracy of 87.2% with individual class accuracies of 85%, 96%, 90%, 87.5% and 82.2% for NOR, Cyst, HEM, HCC and MET cases, respectively. The accuracy for typical, atypical, small HCC and large HCC cases is 87.5%, 86.8%, 88.8%, and 87%, respectively. The promising results indicate the usefulness of the CAD system for assisting radiologists in diagnosis of FLLs. Defence Science Journal, 2013, 63(5), pp. 478-486, DOI: http://dx.doi.org/10.14429/dsj.63.3951

  12. PCA-SVM based CAD System for Focal Liver Lesions using B-mode Ultrasound Images

    Jitendra Virmani

    2013-09-01

    Full Text Available The contribution made by texture of regions inside and outside of the lesions in classification of focal liver lesions (FLLs) is investigated in the present work. In order to design an efficient computer-aided diagnostic (CAD) system for FLLs, a representative database consisting of images with (1) typical and atypical cases of cyst, hemangioma (HEM) and metastatic carcinoma (MET) lesions, (2) small as well as large hepatocellular carcinoma (HCC) lesions and (3) normal (NOR) liver tissue is used. Texture features are computed from regions inside and outside of the lesions. A feature set consisting of 208 texture features (i.e., 104 texture features and 104 texture ratio features) is subjected to principal component analysis (PCA) for finding the optimal number of principal components to train a support vector machine (SVM) classifier for the classification task. The proposed PCA-SVM based CAD system yielded a classification accuracy of 87.2% with individual class accuracies of 85%, 96%, 90%, 87.5% and 82.2% for NOR, Cyst, HEM, HCC and MET cases, respectively. The accuracy for typical, atypical, small HCC and large HCC cases is 87.5%, 86.8%, 88.8%, and 87%, respectively. The promising results indicate the usefulness of the CAD system for assisting radiologists in diagnosis of FLLs.

  13. PSO-SVM-Based Online Locomotion Mode Identification for Rehabilitation Robotic Exoskeletons.

    Long, Yi; Du, Zhi-Jiang; Wang, Wei-Dong; Zhao, Guang-Yu; Xu, Guo-Qiang; He, Long; Mao, Xi-Wang; Dong, Wei

    2016-01-01

    Locomotion mode identification is essential for the control of robotic rehabilitation exoskeletons. This paper proposes an online support vector machine (SVM) optimized by particle swarm optimization (PSO) to identify different locomotion modes to realize a smooth and automatic locomotion transition. A PSO algorithm is used to obtain the optimal parameters of the SVM for a better overall performance. Signals measured by the foot pressure sensors integrated in the insoles of wearable shoes and the MEMS-based attitude and heading reference systems (AHRS) attached on the shoes and shanks of leg segments are fused together as the input information of the SVM. Based on the chosen window whose size is 200 ms (with a sampling frequency of 40 Hz), a three-layer wavelet packet analysis (WPA) is used for feature extraction, after which kernel principal component analysis (kPCA) is utilized to reduce the dimension of the feature set and thereby the computation cost of the SVM. Since the signals come from two different types of sensors, normalization is conducted to scale the input into the interval of [0, 1]. Five-fold cross-validation is adopted to train the classifier, which prevents over-fitting. Based on the SVM model obtained offline in MATLAB, an online SVM algorithm is constructed for locomotion mode identification. Experiments are performed for different locomotion modes and the experimental results show the effectiveness of the proposed algorithm, with an accuracy of 96.00% ± 2.45%. To further improve the accuracy, a majority vote algorithm (MVA) is used for post-processing, with which the identification accuracy is better than 98.35% ± 1.65%. The proposed algorithm can be extended and employed in the field of robotic rehabilitation and assistance. PMID:27598160
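
Setting the PSO step aside, the preprocessing chain described above (scale to [0, 1], kernel PCA, SVM, five-fold cross-validation) maps naturally onto a scikit-learn pipeline. The sketch below uses random stand-ins for the wavelet-packet features; dimensions and hyperparameters are illustrative.

```python
# Sketch of the feature-processing chain described above: scale inputs
# to [0, 1], reduce dimensionality with kernel PCA, classify with an
# SVM, and evaluate with five-fold cross-validation. The features are
# random stand-ins for the real wavelet-packet coefficients.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 48))   # e.g. WPA features per 200 ms window
y = rng.integers(0, 5, size=500)     # hypothetical locomotion-mode labels

pipe = make_pipeline(MinMaxScaler(),                 # scale to [0, 1]
                     KernelPCA(n_components=10, kernel="rbf"),
                     SVC(kernel="rbf", C=10.0))
print(cross_val_score(pipe, X, y, cv=5).mean())
```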

  14. Evaluation of new reference genes in papaya for accurate transcript normalization under different experimental conditions.

    Xiaoyang Zhu

    Full Text Available Real-time reverse transcription PCR (RT-qPCR) is a preferred method for rapid and accurate quantification in gene expression studies. Appropriate application of RT-qPCR requires accurate normalization through the use of reference genes. As no single reference gene is universally suitable for all experiments, validation of reference gene(s) under different experimental conditions is crucial for RT-qPCR analysis. To date, only a few studies on reference genes have been done in other plants and none in papaya. In the present work, we selected 21 candidate reference genes and evaluated their expression stability in 246 papaya fruit samples using three algorithms: geNorm, NormFinder and RefFinder. The samples consisted of 13 sets collected under different experimental conditions, including various tissues, different storage temperatures, different cultivars, developmental stages, postharvest ripening, modified atmosphere packaging, 1-methylcyclopropene (1-MCP) treatment, hot water treatment, biotic stress and hormone treatment. Our results demonstrated that expression stability varied greatly between reference genes and that suitable reference gene(s) or combinations of reference genes for normalization should be validated according to the experimental conditions. In general, the internal reference genes EIF (eukaryotic initiation factor 4A), TBP1 (TATA binding protein 1) and TBP2 (TATA binding protein 2) performed well under most experimental conditions, whereas the most widely used reference genes, ACTIN (actin 2), 18S rRNA (18S ribosomal RNA) and GAPDH (glyceraldehyde-3-phosphate dehydrogenase), were not suitable in many experimental conditions. In addition, two commonly used programs, geNorm and NormFinder, proved sufficient for the validation. This work provides the first systematic analysis for the selection of superior reference genes for accurate transcript normalization in papaya under different experimental
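
For reference, geNorm's stability measure M for a candidate gene is the average, over all other candidates, of the standard deviation of their pairwise log-ratio across samples; lower M means more stable. A compact sketch of that calculation on hypothetical relative expression values:

```python
# Compact sketch of geNorm's stability measure M: for each candidate
# reference gene, average the standard deviation of its log2 expression
# ratio against every other candidate across samples (lower M = more
# stable). Input values here are hypothetical relative quantities.
import numpy as np

def genorm_m(expr: np.ndarray) -> np.ndarray:
    """expr: (n_samples, n_genes) matrix of relative expression values."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        ratios = log_expr[:, [j]] - log_expr     # log-ratios vs all genes
        sds = ratios.std(axis=0, ddof=1)         # pairwise variation V_jk
        m[j] = np.delete(sds, j).mean()          # average over k != j
    return m

rng = np.random.default_rng(1)
expr = rng.lognormal(mean=0.0, sigma=0.3, size=(24, 5))  # 24 samples, 5 genes
print(genorm_m(expr))   # pick the genes with the smallest M values
```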

  15. Fast and Accurate Large-Scale Detection of β-Lactamase Genes Conferring Antibiotic Resistance

    Lee, Jae Jin; Lee, Jung Hun; Kwon, Dae Beom; Jeon, Jeong Ho; Park, Kwang Seung; Lee, Chang-Ro; Lee, Sang Hee

    2015-01-01

    Fast detection of β-lactamase (bla) genes allows improved surveillance studies and infection control measures, which can minimize the spread of antibiotic resistance. Although several molecular diagnostic methods have been developed to detect limited bla gene types, these methods have significant limitations, such as their failure to detect almost all clinically available bla genes. We developed a fast and accurate molecular method to overcome these limitations using 62 primer pairs, which we...

  16. Modeling the milling tool wear by using an evolutionary SVM-based model from milling runs experimental data

    Nieto, Paulino José García; García-Gonzalo, Esperanza; Vilán, José Antonio Vilán; Robleda, Abraham Segade

    2015-12-01

    The main aim of this research work is to build a new practical hybrid regression model to predict the milling tool wear in a regular cut as well as entry cut and exit cut of a milling tool. The model was based on Particle Swarm Optimization (PSO) in combination with support vector machines (SVMs). This optimization mechanism involved kernel parameter setting in the SVM training procedure, which significantly influences the regression accuracy. Bearing this in mind, a PSO-SVM-based model, grounded in statistical learning theory, was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. To accomplish the objective of this study, the experimental dataset represents experiments from runs on a milling machine under various operating conditions. In this way, data sampled by three different types of sensors (acoustic emission sensor, vibration sensor and current sensor) were acquired at several positions. A second aim is to determine the factors with the greatest bearing on the milling tool flank wear with a view to proposing improvements to the milling machine. Firstly, this hybrid PSO-SVM-based regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence between the flank wear (output variable) and the input variables (time, depth of cut, feed, etc.). Indeed, regression with optimal hyperparameters was performed and a determination coefficient of 0.95 was obtained. The agreement of this model with experimental data confirmed its good performance. Secondly, the main advantages of this PSO-SVM-based model are its capacity to produce a simple, easy-to-interpret model, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, the main conclusions of this study are presented.
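
The PSO-SVM coupling used here and in several of the records above amounts to searching the SVM hyperparameter space with a particle swarm, scoring each particle by cross-validated accuracy or R². A bare-bones sketch of that loop over (log10 C, log10 gamma) follows; the swarm constants and data are arbitrary illustrations, not the paper's setup.

```python
# Bare-bones PSO over SVM hyperparameters (log10 C, log10 gamma), with
# each particle scored by cross-validated R^2. Swarm constants and data
# are arbitrary illustrations of the hybrid PSO-SVM idea.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 6))               # stand-in sensor features
y = X @ rng.standard_normal(6) + 0.1 * rng.standard_normal(120)

def fitness(p):                                 # p = (log10 C, log10 gamma)
    svr = SVR(kernel="rbf", C=10.0 ** p[0], gamma=10.0 ** p[1])
    return cross_val_score(svr, X, y, cv=5, scoring="r2").mean()

n_particles, n_iter, w, c1, c2 = 10, 20, 0.7, 1.5, 1.5
pos = rng.uniform(-2, 2, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (log10 C, log10 gamma):", gbest, "CV R^2:", pbest_val.max())
```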

  17. An SVM-Based Classifier for Estimating the State of Various Rotating Components in Agro-Industrial Machinery with a Vibration Signal Acquired from a Single Point on the Machine Chassis

    Ruben Ruiz-Gonzalez

    2014-11-01

    Full Text Available The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels.

  18. An SVM-based classifier for estimating the state of various rotating components in agro-industrial machinery with a vibration signal acquired from a single point on the machine chassis.

    Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor

    2014-01-01

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels. PMID:25372618

  19. An accurate DNA marker assay for stem rust resistance gene Sr2 in wheat

    The stem rust resistance gene Sr2 has provided broad-spectrum protection against stem rust (Puccinia graminis) since its widespread deployment in wheat from the 1940s. Because Sr2 confers partial resistance which is difficult to select under field conditions, a DNA marker is desirable that accurate...

  20. Hybrid PSO–SVM-based method for forecasting of the remaining useful life for aircraft engines and evaluation of its reliability

    The present paper describes a hybrid PSO–SVM-based model for the prediction of the remaining useful life of aircraft engines. The proposed hybrid model combines support vector machines (SVMs), which have been successfully adopted for regression problems, with the particle swarm optimization (PSO) technique. This optimization technique involves kernel parameter setting in the SVM training procedure, which significantly influences the regression accuracy. However, its use in reliability applications has not yet been widely explored. Bearing this in mind, remaining useful life values have been successfully predicted here using the hybrid PSO–SVM-based model from the measured parameters (input variables) for aircraft engines. A coefficient of determination equal to 0.9034 was obtained when this hybrid PSO–RBF–SVM-based model was applied to experimental data. The agreement of this model with experimental data confirmed its good performance. One of the main advantages of this predictive model is that it does not require information about the previous operation states of the engine. Finally, the main conclusions of this study are presented. - Highlights: • A hybrid PSO–SVM-based model is built as a predictive model of the RUL values for aircraft engines. • The remaining physical–chemical variables in this process are studied in depth. • The obtained regression accuracy of our method is about 95%. • The results show that the PSO–SVM-based model can assist in the diagnosis of the RUL values with accuracy

  1. Coding exon-structure aware realigner (CESAR) utilizes genome alignments for accurate comparative gene annotation.

    Sharma, Virag; Elghafari, Anas; Hiller, Michael

    2016-06-20

    Identifying coding genes is an essential step in genome annotation. Here, we utilize existing whole genome alignments to detect conserved coding exons and then map gene annotations from one genome to many aligned genomes. We show that genome alignments contain thousands of spurious frameshifts and splice site mutations in exons that are truly conserved. To overcome these limitations, we have developed CESAR (Coding Exon-Structure Aware Realigner), which realigns coding exons while considering the reading frame and splice sites of each exon. CESAR effectively avoids spurious frameshifts in conserved genes and detects 91% of shifted splice sites. This results in the identification of thousands of additional conserved exons; 99% of the exons that lack inactivating mutations match real exons. Finally, to demonstrate the potential of using CESAR for comparative gene annotation, we applied it to 188 788 exons of 19 865 human genes to annotate human genes in 99 other vertebrates. These comparative gene annotations are available as a resource (http://bds.mpi-cbg.de/hillerlab/CESAR/). CESAR (https://github.com/hillerlab/CESAR/) can readily be applied to other alignments to accurately annotate coding genes in many other vertebrate and invertebrate genomes. PMID:27016733
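
The two exon properties CESAR is built around are easy to state: indels whose lengths are not multiples of three shift the reading frame, and canonical introns start with GT and end with AG. The following toy checker for a gapped pairwise exon alignment illustrates just those two tests; it is a didactic sketch, not CESAR's algorithm.

```python
# Toy illustration of the two exon-level checks discussed above: indels
# with lengths not divisible by 3 shift the reading frame, and canonical
# splice sites are GT (donor) / AG (acceptor). Didactic sketch only.
import re

def has_frameshift(ref_exon: str, query_exon: str) -> bool:
    """Pairwise exon alignment with '-' gaps; True if any indel % 3 != 0."""
    for seq in (ref_exon, query_exon):
        for gap in re.finditer(r"-+", seq):
            if (gap.end() - gap.start()) % 3 != 0:
                return True
    return False

def splice_sites_intact(intron: str) -> bool:
    """Canonical splice sites: intron starts with GT, ends with AG."""
    return intron[:2] == "GT" and intron[-2:] == "AG"

print(has_frameshift("ATGGTT---CCA", "ATGGTTAAACCA"))    # False: 3-bp gap
print(has_frameshift("ATGGTT--TCCA", "ATGGTTAATCCA"))    # True: 2-bp gap
print(splice_sites_intact("GTAAGT" + "TTTT" + "TACAG"))  # True
```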

  2. In-Vivo Imaging of Cell Migration Using Contrast Enhanced MRI and SVM Based Post-Processing.

    Christian Weis

    Full Text Available The migration of cells within a living organism can be observed with magnetic resonance imaging (MRI) in combination with iron oxide nanoparticles as an intracellular contrast agent. This method, however, suffers from low sensitivity and specificity. Here, we developed a quantitative non-invasive in-vivo cell localization method using contrast enhanced multiparametric MRI and support vector machine (SVM)-based post-processing. Imaging phantoms consisting of agarose with compartments containing different concentrations of cancer cells labeled with iron oxide nanoparticles were used to train and evaluate the SVM for cell localization. From the magnitude and phase data acquired with a series of T2*-weighted gradient-echo scans at different echo times, we extracted features that are characteristic for the presence of superparamagnetic nanoparticles, in particular hyper- and hypointensities, relaxation rates, short-range phase perturbations, and perturbation dynamics. High detection quality was achieved by SVM analysis of the multiparametric feature space. The in-vivo applicability was validated in animal studies. The SVM detected the presence of iron oxide nanoparticles in the imaging phantoms with high specificity and sensitivity, with a detection limit of 30 labeled cells per mm³, corresponding to 19 μM of iron oxide. As proof of concept, we applied the method to follow the migration of labeled cancer cells injected in rats. The combination of iron oxide labeled cells, multiparametric MRI and SVM-based post-processing provides high spatial resolution, specificity, and sensitivity, and is therefore suitable for non-invasive in-vivo cell detection and cell migration studies over prolonged time periods.

  3. svmPRAT: SVM-based Protein Residue Annotation Toolkit

    Kauffman Christopher; Rangwala Huzefa; Karypis George

    2009-01-01

    Abstract Background Over the last decade several prediction methods have been developed for determining the structural and functional properties of individual protein residues using sequence and sequence-derived information. Most of these methods are based on support vector machines as they provide accurate and generalizable prediction models. Results We present a general purpose protein residue annotation toolkit (svmPRAT) to allow biologists to formulate residue-wise prediction problems. sv...

  4. Validation of Reference Genes for Accurate Normalization of Gene Expression in Lilium davidii var. unicolor for Real Time Quantitative PCR.

    XueYan Li

    Full Text Available Lilium is an important commercial market flower bulb. qRT-PCR is an extremely important technique to track gene expression levels. The requirement of suitable reference genes for normalization has become increasingly significant and exigent. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For the economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes, including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs, has been evaluated. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate, whereas ACT together with AP4, or ACT along with GAPDH, is suitable for normalization of leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively, showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes.

  5. Gene expression signatures of radiation response are specific, durable and accurate in mice and humans.

    Sarah K Meadows

    Full Text Available BACKGROUND: Previous work has demonstrated the potential for peripheral blood (PB) gene expression profiling for the detection of disease or environmental exposures. METHODS AND FINDINGS: We have sought to determine the impact of several variables on the PB gene expression profile of an environmental exposure, ionizing radiation, and to determine the specificity of the PB signature of radiation versus other genotoxic stresses. Neither genotype differences nor the time of PB sampling caused any lessening of the accuracy of PB signatures to predict radiation exposure, but sex difference did influence the accuracy of the prediction of radiation exposure at the lowest level (50 cGy). A PB signature of sepsis was also generated, and both the PB signature of radiation and the PB signature of sepsis were found to be 100% specific at distinguishing irradiated from septic animals. We also identified human PB signatures of radiation exposure and chemotherapy treatment which distinguished irradiated patients and chemotherapy-treated individuals within a heterogeneous population with accuracies of 90% and 81%, respectively. CONCLUSIONS: We conclude that PB gene expression profiles can be identified in mice and humans that are accurate in predicting medical conditions, are specific to each condition and remain highly accurate over time.

  6. A PSO-SVM-based 24 Hours Power Load Forecasting Model

    Yu Xiaoxu

    2015-01-01

    Full Text Available In order to overcome the over-fitting and local-extremum drawbacks of the back propagation (BP) neural network, a new power load forecasting model combining the wavelet transform and PSO-SVM (Particle Swarm Optimization-Support Vector Machine) is proposed. By employing the wavelet transform, the authors decompose the time sequence of power load into high-frequency and low-frequency parts: the low-frequency part is forecast with this model and the high-frequency part with a weighted average method. With PSO, a heuristic bionic optimization algorithm, the authors determine the preferable parameters of the SVM, and the model proposed in this paper is shown to forecast the 24 h power load more accurately than the BP model.
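
The wavelet split described above can be sketched with PyWavelets: decompose the load series, rebuild the low-frequency part from the approximation coefficients alone, and take the remainder as the high-frequency part. The wavelet family, level and synthetic load series below are arbitrary choices.

```python
# Sketch of the wavelet split described above, using PyWavelets: the
# low-frequency part is rebuilt from the approximation coefficients and
# the high-frequency part is the remainder. Wavelet/level are arbitrary.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(24 * 7)                              # a week of hourly loads
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + 5 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(load, "db4", level=3)
approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
low_freq = pywt.waverec(approx_only, "db4")[: load.size]
high_freq = load - low_freq

# Per the model above: forecast low_freq with PSO-SVM and high_freq with
# a weighted average, then sum the two forecasts.
print(low_freq[:3], high_freq[:3])
```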

  7. A Statistical Parameter Analysis and SVM Based Fault Diagnosis Strategy for Dynamically Tuned Gyroscopes

    2007-01-01

    Gyro fault diagnosis plays a critical role in inertial navigation systems for higher reliability and precision. A new fault diagnosis strategy based on statistical parameter analysis (SPA) and a support vector machine (SVM) classification model was proposed for dynamically tuned gyroscopes (DTG). The SPA, a kind of time-domain analysis approach, was introduced to compute a set of statistical parameters of the vibration signal as the state features of the DTG, with which the SVM model, a novel learning machine based on statistical learning theory (SLT), was applied and constructed to train and identify the working state of the DTG. The experimental results verify that the proposed diagnostic strategy can simply and effectively extract the state features of the DTG, that it outperforms the radial-basis function (RBF) neural network based diagnostic method, and that it can more reliably and accurately diagnose the working state of the DTG.
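
The statistical-parameter step above amounts to collapsing each vibration record into a handful of time-domain numbers before SVM classification. Below is a sketch with typical features (RMS, kurtosis, skewness, crest factor); the exact feature set in the paper may differ.

```python
# Sketch of time-domain statistical features for vibration-signal
# classification: each record is collapsed into a few scalar parameters
# (RMS, kurtosis, skewness, crest factor) and an SVM is trained on them.
# The feature choice is typical but illustrative.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def time_domain_features(signal: np.ndarray) -> np.ndarray:
    rms = np.sqrt(np.mean(signal ** 2))
    crest = np.max(np.abs(signal)) / rms          # crest factor
    return np.array([rms, kurtosis(signal), skew(signal), crest])

rng = np.random.default_rng(0)
signals = rng.standard_normal((200, 1024))        # 200 vibration records
labels = rng.integers(0, 2, size=200)             # normal vs faulty (toy)

X = np.vstack([time_domain_features(s) for s in signals])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
print(clf.score(X, labels))
```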

  8. Optimal training dataset composition for SVM-based, age-independent, automated epileptic seizure detection.

    Bogaarts, J G; Gommer, E D; Hilkman, D M W; van Kranen-Mastenbroek, V H J M; Reulen, J P H

    2016-08-01

    -independent seizure detection is possible by training one classifier on EEG data from both neonatal and adult patients. Furthermore, our results indicate that for accurate age-independent seizure detection, it is important that EEG data from each age category are used for classifier training. This is particularly important for neonatal seizure detection. Our results underline the under-appreciated importance of training dataset composition with respect to accurate age-independent seizure detection. PMID:27032931

  9. l2 Multiple Kernel Fuzzy SVM-Based Data Fusion for Improving Peptide Identification.

    Jian, Ling; Xia, Zhonghang; Niu, Xinnan; Liang, Xijun; Samir, Parimal; Link, Andrew J

    2016-01-01

    SEQUEST is a database-searching engine that calculates the correlation score between the observed spectrum and theoretical spectra deduced from protein sequences stored in a flat text file, even though it is not a relational and object-oriented repository. Nevertheless, the SEQUEST score functions fail to discriminate between true and false PSMs accurately. Some approaches, such as PeptideProphet and Percolator, have been proposed to address the task of distinguishing true and false PSMs. However, most of these methods employ time-consuming learning algorithms to validate peptide assignments [1]. In this paper, we propose a fast algorithm for validating peptide identification by incorporating heterogeneous information from SEQUEST scores and peptide digestion knowledge. To automate the peptide identification process and incorporate additional information, we employ l2 multiple kernel learning (MKL) to implement the current peptide identification task. Results on experimental datasets indicate that, compared with state-of-the-art methods, i.e., PeptideProphet and Percolator, our data fusion strategy has comparable performance but reduces the running time significantly. PMID:26394437

  10. Online Adaptive Error Compensation SVM-Based Sliding Mode Control of an Unmanned Aerial Vehicle

    Kaijia Xue

    2016-01-01

    Full Text Available An Unmanned Aerial Vehicle (UAV) is a nonlinear dynamic system with uncertainties and noises. Therefore, an appropriate control system is needed to ensure the stabilization and navigation of the UAV. This paper mainly discusses the control problem of a quad-rotor UAV system, which is influenced by unknown parameters and noises. A sliding mode control based on an online adaptive error compensation support vector machine (SVM) is proposed for stabilizing the quad-rotor UAV system. The sliding mode controller is established by analyzing the quad-rotor dynamics model, in which the unknown parameters are computed by an offline SVM. During this process, the online adaptive error compensation SVM method is applied. As modeling errors and noises both exist in the process of flight, the offline SVM one-time mode cannot predict the uncertainties and noises accurately. The control law is adjusted in real time by introducing new training sample data to the online adaptive SVM in the control process, so that the stability and robustness of flight are ensured. The simulation experiments demonstrate that the UAV with the online adaptive SVM can track the changing path faster according to its dynamic model. Consequently, the proposed method is proven to have a better control effect in the UAV system.

  11. A Genetic Algorithm Optimized Decision Tree-SVM based Stock Market Trend Prediction System

    Binoy B. Nair

    2010-12-01

    Full Text Available Prediction of stock market trends has been an area of great interest both to researchers attempting to uncover the information hidden in the stock market data and to those who wish to profit by trading stocks. The extremely nonlinear nature of stock market data makes it very difficult to design a system that can predict the future direction of the stock market with sufficient accuracy. This work presents a data mining based stock market trend prediction system, which produces highly accurate stock market forecasts. The proposed system is a genetic algorithm optimized decision tree-support vector machine (SVM) hybrid, which can predict one-day-ahead trends in stock markets. The uniqueness of the proposed system lies in the use of the hybrid system, which can adapt itself to changing market conditions, and in the fact that while most of the attempts at stock market trend prediction have approached it as a regression problem, the present study converts the trend prediction task into a classification problem, thus improving the prediction accuracy significantly. Performance of the proposed hybrid system is validated on the historical time series data from the Bombay Stock Exchange sensitive index (BSE-Sensex). The system performance is then compared to that of an artificial neural network (ANN) based system and a naïve Bayes based system. It is found that the trend prediction accuracy is highest for the hybrid system, and the genetic algorithm optimized decision tree-SVM hybrid system outperforms both the artificial neural network and naïve Bayes based trend prediction systems.
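
The classification framing mentioned above simply labels each day by the sign of the next day's move and trains on lagged features. A minimal sketch follows, with a plain SVM standing in for the paper's genetic algorithm optimized decision tree-SVM hybrid; the price series is simulated.

```python
# Minimal sketch of framing trend prediction as classification: label
# each day by the sign of the next day's move and train on lagged
# returns. A plain SVM stands in for the paper's GA-optimized decision
# tree-SVM hybrid; the price series is simulated.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(500)))
returns = np.diff(prices) / prices[:-1]

lags = 5
X = np.stack([returns[i:i + lags] for i in range(len(returns) - lags)])
y = (returns[lags:] > 0).astype(int)     # 1 = up-move next day, 0 = down

split = int(0.8 * len(X))                # keep the time ordering
clf = SVC(kernel="rbf").fit(X[:split], y[:split])
print("out-of-sample directional accuracy:", clf.score(X[split:], y[split:]))
```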

  12. LS-SVM Based AGC of an Asynchronous Power System with Dynamic Participation from DFIG Based Wind Turbines

    Gulshan Sharma

    2014-08-01

    Full Text Available Modern power systems are large and interconnected, with growing trends to integrate wind energy into the power system and meet the ever-rising energy demand in an economical manner. The penetration of wind energy has motivated power engineers and researchers to investigate the dynamic participation of Doubly Fed Induction Generator (DFIG) based wind turbines in Automatic Generation Control (AGC) services. However, with dynamic participation of the DFIG, the AGC problem becomes more complex, and under these conditions classical AGC regulators are not suitable. Therefore, a new non-linear Least Squares Support Vector Machine (LS-SVM) based regulator for the solution of the AGC problem is proposed in this study. The proposed AGC regulator is trained for a wide range of operating conditions and load changes using an off-line data set generated from the robust control technique. A two-area power system connected via parallel AC/DC tie-lines, with DFIG based wind turbines in each area, is considered to demonstrate the effectiveness of the proposed AGC regulator, and the results are compared with those obtained using Multi-Layer Perceptron (MLP) neural networks and conventional PI regulators under various operating conditions and load changes.

  13. Gene expression anti-profiles as a basis for accurate universal cancer signatures

    Corrada Bravo Héctor

    2012-10-01

    Full Text Available Abstract Background Early screening for cancer is arguably one of the greatest public health advances over the last fifty years. However, many cancer screening tests are invasive (digital rectal exams), expensive (mammograms, imaging) or both (colonoscopies). This has spurred growing interest in developing genomic signatures that can be used for cancer diagnosis and prognosis. However, progress has been slowed by heterogeneity in cancer profiles and the lack of effective computational prediction tools for this type of data. Results We developed anti-profiles as a first step towards translating experimental findings suggesting that stochastic across-sample hyper-variability in the expression of specific genes is a stable and general property of cancer into predictive and diagnostic signatures. Using single-chip microarray normalization and quality assessment methods, we developed an anti-profile for colon cancer in tissue biopsy samples. To demonstrate the translational potential of our findings, we applied the signature developed in the tissue samples, without any further retraining or normalization, to screen patients for colon cancer based on genomic measurements from peripheral blood in an independent study (AUC of 0.89). This method achieved higher accuracy than the signature underlying commercially available peripheral blood screening tests for colon cancer (AUC of 0.81). We also confirmed the existence of hyper-variable genes across a range of cancer types and found that a significant proportion of tissue-specific genes are hyper-variable in cancer. Based on these observations, we developed a universal cancer anti-profile that accurately distinguishes cancer from normal regardless of tissue type (ten-fold cross-validation AUC > 0.92). Conclusions We have introduced anti-profiles as a new approach for developing cancer genomic signatures that specifically takes advantage of gene expression heterogeneity. We have demonstrated that anti
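
The anti-profile idea reduces to two steps: select genes that are far more variable across cancer samples than across normals, then score a new sample by counting how many of those genes fall outside the normal range. A toy numpy sketch of that scoring scheme (thresholds and data are illustrative only):

```python
# Toy sketch of the anti-profile idea: pick genes that are hyper-variable
# in cancer relative to normal samples, then score a new sample by
# counting how many of those genes fall outside the normal range.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(50, 1000))    # samples x genes
cancer = rng.normal(0.0, 1.0, size=(50, 1000))
cancer[:, :100] *= 4.0                            # 100 hyper-variable genes

# 1) Select hyper-variable genes by the cancer/normal variance ratio.
ratio = cancer.var(axis=0) / normal.var(axis=0)
hyper = np.argsort(ratio)[-100:]

# 2) Anti-profile score: count hyper-variable genes outside the normal
#    mean +/- 2 SD envelope; higher scores suggest cancer.
mu, sd = normal[:, hyper].mean(axis=0), normal[:, hyper].std(axis=0)
def score(sample):
    return int(np.sum(np.abs(sample[hyper] - mu) > 2.0 * sd))

print("normal sample score:", score(normal[0]))
print("cancer sample score:", score(cancer[0]))
```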

  14. Importance of Housekeeping gene selection for accurate RT-qPCR in a wound healing model

    Turabelidze, Anna; Guo, Shujuan; DiPietro, Luisa A.

    2010-01-01

    Studies in the field of wound healing have utilized a variety of different housekeeping genes for RT-qPCR analysis. However, nearly all of these studies assume that the selected normalization gene is stably expressed throughout the course of the repair process. The purpose of our current investigation was to identify the most stable housekeeping genes for studying gene expression in mouse wound healing using RT-qPCR. To identify which housekeeping genes are optimal for studying gene expressio...

  15. A simple and accurate two-step long DNA sequences synthesis strategy to improve heterologous gene expression in pichia.

    Jiang-Ke Yang

    Full Text Available In vitro gene chemical synthesis is a powerful tool to improve the expression of a gene in a heterologous system. In this study, a two-step gene synthesis strategy that combines an assembly PCR and an overlap extension PCR (AOE) was developed. In this strategy, the chemically synthesized oligonucleotides were assembled into several 200-500 bp fragments with 20-25 bp overlap at each end by assembly PCR, and then an overlap extension PCR was conducted to assemble all these fragments into a full length DNA sequence. Using this method, we de novo designed and optimized the codon of the Rhizopus oryzae lipase gene ROL (810 bp) and the Aspergillus niger phytase gene phyA (1404 bp). Compared with the original ROL gene and phyA gene, the codon-optimized genes expressed at a significantly higher level in yeasts after methanol induction. We believe this AOE method to be of special interest as it is simple, accurate and has no limitation with respect to the size of the gene to be synthesized. Combined with de novo design, this method allows the rapid synthesis of a gene optimized for expression in the system of choice and production of sufficient biological material for molecular characterization and biotechnological application.

  16. Remotely sensed imagery classification by SVM-based Infinite Ensemble Learning method

    Yang Na; Qin Zhiyuan; Zhang Jun

    2013-01-01

    The Support-Vector-Machine-based Infinite Ensemble Learning method (SVM-based IEL) is an ensemble learning method that has recently emerged in the field of machine learning. In this paper, SVM-based IEL is applied to the classification of remotely sensed imagery, alongside SVM and the classic ensemble learning methods Bagging and AdaBoost, with SVM taken as the base classifier in Bagging and AdaBoost. The experiments showed that the classic ensemble learning methods perform differently from SVM: Bagging improves the classification accuracy of remotely sensed imagery, whereas AdaBoost decreases it. Furthermore, the experiments suggested that, compared with SVM and the finite ensemble learning methods, SVM-based IEL has the merits of significantly improving both classification accuracy and classification efficiency.

  17. Normalization with genes encoding ribosomal proteins but not GAPDH provides an accurate quantification of gene expressions in neuronal differentiation of PC12 cells

    Lim Qing-En

    2010-01-01

    Full Text Available Abstract Background Gene regulation at the transcript level can provide a good indication of the complex signaling mechanisms underlying physiological and pathological processes. Transcriptomic methods such as microarray and quantitative real-time PCR require stable reference genes for accurate normalization of gene expression. Some but not all studies have shown that the housekeeping genes (HKGs) β-actin (ACTB) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH), which are routinely used for normalization, may vary significantly depending on the cell/tissue type and experimental conditions. It is currently unclear if these genes are stably expressed in cells undergoing drastic morphological changes during neuronal differentiation. Recent meta-analysis of microarray datasets showed that some but not all of the ribosomal protein genes are stably expressed. To test the hypothesis that some ribosomal protein genes can serve as reference genes for neuronal differentiation, a genome-wide analysis was performed and putative reference genes were identified based on stability of expression. The stability of these potential reference genes was then analyzed by reverse transcription quantitative real-time PCR in six differentiation conditions. Results Twenty stably expressed genes, including thirteen ribosomal protein genes, were selected from microarray analysis of the gene expression profiles of GDNF- and NGF-induced differentiation of PC12 cells. The expression levels of these candidate genes as well as ACTB and GAPDH were further analyzed by reverse transcription quantitative real-time PCR in PC12 cells differentiated with a variety of stimuli including NGF, GDNF, Forskolin, KCl and the ROCK inhibitor Y27632. The performance of these candidate genes as stable reference genes was evaluated with two independent statistical approaches, geNorm and NormFinder. Conclusions The ribosomal protein genes, RPL19 and RPL29, were identified as suitable reference genes

  18. Comprehensive and accurate mutation scanning of the CFTR gene by two-dimensional DNA electrophoresis

    Wu, Y; Hofstra, RMW; Scheffer, H; Uitterlinden, AG; Mullaart, E; Buys, CHCM; Vijg, J

    1996-01-01

    The large number of possible disease-causing mutations in the 27 exons of the cystic fibrosis transmembrane conductance regulator (CFTR) gene has severely limited direct diagnosis of cystic fibrosis (CF) patients and carriers by mutation detection. Here we show that in principle testing for mutation

  19. Robust TLR4-induced gene expression patterns are not an accurate indicator of human immunity

    Davidson Donald J

    2010-01-01

    Full Text Available Abstract Background Activation of Toll-like receptors (TLRs) is widely accepted as an essential event for defence against infection. Many TLRs utilize a common signalling pathway that relies on activation of the kinase IRAK4 and the transcription factor NFκB for the rapid expression of immunity genes. Methods 21 K DNA microarray technology was used to evaluate LPS-induced (TLR4) gene responses in blood monocytes from a child with an IRAK4 deficiency. In vitro responsiveness to LPS was confirmed by real-time PCR and ELISA and compared to the clinical predisposition of the child and of IRAK4-deficient mice to Gram-negative infection. Results We demonstrated that the vast majority of LPS-responsive genes in IRAK4-deficient monocytes were greatly suppressed, an observation that is consistent with the described role of IRAK4 as an essential component of TLR4 signalling. The severely impaired response to LPS, however, is inconsistent with the remarkably low incidence of Gram-negative infections observed in this child and other children with IRAK4 deficiency. This unpredicted clinical phenotype was validated by demonstrating that IRAK4-deficient mice had a similar resistance to infection with the Gram-negative S. typhimurium as wildtype mice. A number of immunity genes, such as chemokines, were expressed at normal levels in human IRAK4-deficient monocytes, indicating that particular IRAK4-independent elements within the repertoire of TLR4-induced responses are expressed. Conclusions Sufficient defence against Gram-negative infection does not require IRAK4 or a robust, 'classic' inflammatory and immune response.

  20. Validation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in strawberry fruits using different cultivars and osmotic stresses.

    Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor

    2015-01-10

    The increasing demand for strawberry (Fragaria×ananassa Duch) fruits is associated mainly with their sensorial characteristics and the content of antioxidant compounds. Nevertheless, strawberry production has been hampered due to its sensitivity to abiotic stresses. Therefore, understanding the molecular mechanisms underlying the stress response is of great importance to enable genetic engineering approaches aiming to improve strawberry tolerance. However, the study of gene expression in strawberry requires the use of suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper and the comparative delta-Ct method. The results indicate that the expression stability is dependent on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was considered the most suitable to normalize expression data in samples of strawberry cultivars and under drought stress conditions, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic stresses and salt stress. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were considered the most unstable genes in all conditions. The expression of the phenylalanine ammonia lyase (PAL) and 9-cis epoxycarotenoid dioxygenase (NCED1) genes was used to further confirm the validated candidate reference genes, showing that the use of an inappropriate reference gene may induce erroneous results. This study is the first survey on the stability of reference genes in strawberry cultivars and under osmotic stresses, and provides guidelines to obtain more accurate RT-qPCR results for future breeding efforts. PMID:25445290

  1. Identification and evaluation of new reference genes in Gossypium hirsutum for accurate normalization of real-time quantitative RT-PCR data

    Alves-Ferreira Marcio; Grossi-de-Sa Maria; Brilhante Osmundo; Nardeli Sarah M; Artico Sinara

    2010-01-01

    Abstract Background Normalizing through reference genes, or housekeeping genes, can make results from reverse transcription real-time quantitative polymerase chain reaction (qPCR) more accurate and reliable. Recent studies have shown that no single housekeeping gene is universal for all experiments. Thus, the selection of suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out on plants. Therefore qPCR studies on i...

  2. Selection of suitable reference genes for accurate normalization of gene expression profile studies in non-small cell lung cancer

    In real-time RT quantitative PCR (qPCR) the accuracy of normalized data is highly dependent on the reliability of the reference genes (RGs). Failure to use an appropriate control gene for normalization of qPCR data may result in biased gene expression profiles, as well as low precision, so that only gross changes in expression level are declared statistically significant or patterns of expression are erroneously characterized. Therefore, it is essential to determine whether potential RGs are appropriate for specific experimental purposes. Aim of this study was to identify and validate RGs for use in the differentiation of normal and tumor lung expression profiles. A meta-analysis of lung cancer transcription profiles generated with the GeneChip technology was used to identify five putative RGs. Their consistency and that of seven commonly used RGs was tested by using Taqman probes on 18 paired normal-tumor lung snap-frozen specimens obtained from non-small-cell lung cancer (NSCLC) patients during primary curative resection. The 12 RGs displayed a wide range of Ct values: except for rRNA18S (mean 9.8), the mean values of all the commercial RGs and ESD ranged from 19 to 26, whereas those of the microarray-selected RGs (BTF-3, YAP1, HIST1H2BC, RPL30) exceeded 26. RG expression stability within sample populations and under the experimental conditions (tumour versus normal lung specimens) was evaluated by: (1) descriptive statistics; (2) equivalence test; (3) GeNorm applet. All these approaches indicated that the most stable RGs were POLR2A, rRNA18S, YAP1 and ESD. These data suggest that POLR2A, rRNA18S, YAP1 and ESD are the most suitable RGs for gene expression profile studies in NSCLC. Furthermore, they highlight the limitations of commercial RGs and indicate that meta-data analysis of genome-wide transcription profiling studies may identify new RGs

  3. Selection of suitable reference genes for accurate normalization of gene expression profile studies in non-small cell lung cancer

    Novello Silvia

    2006-07-01

    Abstract Background In real-time RT quantitative PCR (qPCR) the accuracy of normalized data is highly dependent on the reliability of the reference genes (RGs). Failure to use an appropriate control gene for normalization of qPCR data may result in biased gene expression profiles, as well as low precision, so that only gross changes in expression level are declared statistically significant or patterns of expression are erroneously characterized. Therefore, it is essential to determine whether potential RGs are appropriate for specific experimental purposes. The aim of this study was to identify and validate RGs for use in the differentiation of normal and tumor lung expression profiles. Methods A meta-analysis of lung cancer transcription profiles generated with the GeneChip technology was used to identify five putative RGs. Their consistency and that of seven commonly used RGs was tested by using Taqman probes on 18 paired normal-tumor lung snap-frozen specimens obtained from non-small-cell lung cancer (NSCLC) patients during primary curative resection. Results The 12 RGs displayed a wide range of Ct values: except for rRNA18S (mean 9.8), the mean values of all the commercial RGs and ESD ranged from 19 to 26, whereas those of the microarray-selected RGs (BTF-3, YAP1, HIST1H2BC, RPL30) exceeded 26. RG expression stability within sample populations and under the experimental conditions (tumour versus normal lung specimens) was evaluated by: (1) descriptive statistics; (2) an equivalence test; (3) the geNorm applet. All these approaches indicated that the most stable RGs were POLR2A, rRNA18S, YAP1 and ESD. Conclusion These data suggest that POLR2A, rRNA18S, YAP1 and ESD are the most suitable RGs for gene expression profile studies in NSCLC. Furthermore, they highlight the limitations of commercial RGs and indicate that meta-analysis of genome-wide transcription profiling studies may identify new RGs.

  4. VICMpred: An SVM-based Method for the Prediction of Functional Proteins of Gram-negative Bacteria Using Amino Acid Patterns and Composition

    Sudipto Saha; G.P.S. Raghava

    2006-01-01

    In this study, an attempt has been made to predict the major functions of gram-negative bacterial proteins from their amino acid sequences. The dataset used for training and testing consists of 670 non-redundant gram-negative bacterial proteins (255 cellular process, 60 information molecule, 285 metabolism, and 70 virulence factor proteins). First we developed an SVM-based method using amino acid and dipeptide composition and achieved overall accuracies of 52.39% and 47.01%, respectively. We introduced a new concept for the classification of proteins based on tetrapeptides, in which we identified the unique tetrapeptides significantly enriched in each class of proteins. These tetrapeptides were used as the input feature for predicting the function of a protein and achieved an overall accuracy of 68.66%. We also developed a hybrid method in which the tetrapeptide information was used together with amino acid composition and achieved an overall accuracy of 70.75%. Five-fold cross-validation was used to evaluate the performance of these methods. The web server VICMpred has been developed for predicting the function of gram-negative bacterial proteins (http://www.imtech.res.in/raghava/vicmpred/).
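The amino acid composition feature used here is simply the percentage of each residue in the sequence, fed to an SVM. A hedged sketch follows, with scikit-learn's SVC standing in for the authors' SVM and with invented toy sequences and labels:

```python
# Minimal sketch of SVM classification from amino acid composition,
# in the spirit of VICMpred; sequences and class labels are made up.
from sklearn.svm import SVC
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """20-dimensional vector of amino acid percentages."""
    seq = seq.upper()
    return np.array([100.0 * seq.count(a) / len(seq) for a in AMINO_ACIDS])

# Toy training data: two fake protein sequences per class
seqs = ["MKKLLPTAAAGLLLLAAQPAMA", "MKQSTIALALLPLLFTPVTKA",
        "MSDNGPQNQRNAPRITFGGPSD", "MGSSHHHHHHSSGLVPRGSHMA"]
labels = [0, 0, 1, 1]  # e.g., 0 = metabolism, 1 = virulence (hypothetical)

X = np.vstack([aa_composition(s) for s in seqs])
clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
print(clf.predict(aa_composition("MKKLAALLPLLAAQPAMAGGSD").reshape(1, -1)))
```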

  5. An SVM-Based Classifier for Estimating the State of Various Rotating Components in Agro-Industrial Machinery with a Vibration Signal Acquired from a Single Point on the Machine Chassis

    Ruben Ruiz-Gonzalez; Jaime Gomez-Gil; Francisco Javier Gomez-Gil; Víctor Martínez-Martínez

    2014-01-01

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two m...

  6. Selection of accurate reference genes in mouse trophoblast stem cells for reverse transcription-quantitative polymerase chain reaction.

    Motomura, Kaori; Inoue, Kimiko; Ogura, Atsuo

    2016-06-17

    Mouse trophoblast stem cells (TSCs) form colonies of different sizes and morphologies, which might reflect their degrees of differentiation. Therefore, each colony type can have a characteristic gene expression profile; however, the expression levels of internal reference genes may also change, causing fluctuations in their estimated gene expression levels. In this study, we validated seven housekeeping genes by using a geometric averaging method and identified Gapdh as the most stable gene across different colony types. Indeed, when Gapdh was used as the reference, expression levels of Elf5, a TSC marker gene, stringently classified TSC colonies into two groups: a high expression group consisting of type 1 and 2 colonies, and a lower expression group consisting of type 3 and 4 colonies. This clustering was consistent with our putative classification of undifferentiated/differentiated colonies based on their time-dependent colony transitions. By contrast, use of an unstable reference gene (Rn18s) allowed no such clear classification. Cdx2, another TSC marker, did not show any significant colony type-specific expression pattern irrespective of the reference gene. Selection of stable reference genes for quantitative gene expression analysis might be critical, especially when cell lines consisting of heterogeneous cell populations are used. PMID:26853688

  7. Accurate breast cancer diagnosis through real-time PCR her-2 gene quantification using immunohistochemically-identified biopsies

    MENDOZA, GRETEL; PORTILLO, AMELIA; Olmos-Soto, Jorge

    2012-01-01

    her-2 gene amplification and its overexpression in breast cancer cells is directly associated with aggressive clinical behavior. The her-2 gene and its Her-2 protein have been utilized for disease diagnosis and as a predictive marker for treatment response to the antibody herceptin. Fluorescent in situ hybridization (FISH) and immunohistochemistry (IHC) are the most common FDA-approved methodologies involving gene and protein quantification, respectively. False positive or negative her-2/Her-...

  8. Identification and evaluation of new reference genes in Gossypium hirsutum for accurate normalization of real-time quantitative RT-PCR data

    Alves-Ferreira Marcio

    2010-03-01

    Abstract Background Normalization to reference genes, or housekeeping genes, can produce more accurate and reliable results from reverse transcription real-time quantitative polymerase chain reaction (qPCR). Recent studies have shown that no single housekeeping gene is universal for all experiments. Thus, the selection of suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out in plants. qPCR studies on important crops such as cotton have therefore been hampered by the lack of suitable reference genes. Results Using two distinct algorithms, implemented in geNorm and NormFinder, we assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1α5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhβTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development and the floral verticils. The expression of the GhPP2A1 and GhUBQ14 genes was the most stable across all samples and also when distinct plant organs were examined. GhACT4 and GhUBQ14 presented the most stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils, and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested as internal controls for reliable qPCR data normalization. In addition, to illustrate the use of cotton reference genes, we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. Conclusion We have tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. As a result of this evaluation, we recommend the use of the GhUBQ14 and GhPP2A1 housekeeping genes as superior references.

  9. Identification of a 251 gene expression signature that can accurately detect M. tuberculosis in patients with and without HIV co-infection.

    Noor Dawany

    BACKGROUND: Co-infection with tuberculosis (TB) is the leading cause of death in HIV-infected individuals. However, diagnosis of TB, especially in the presence of an HIV co-infection, can be limited by the high inaccuracy associated with conventional diagnostic methods. Here we report a gene signature that can identify a tuberculosis infection in patients co-infected with HIV as well as in the absence of HIV. METHODS: We analyzed global gene expression data from peripheral blood mononuclear cell (PBMC) samples of patients that were either mono-infected with HIV or co-infected with HIV/TB and used support vector machines to identify a gene signature that can distinguish between the two classes. We then validated our results using publicly available gene expression data from patients mono-infected with TB. RESULTS: Our analysis successfully identified a 251-gene signature that accurately distinguishes patients co-infected with HIV/TB from those infected with HIV only, with an overall accuracy of 81.4% (sensitivity = 76.2%, specificity = 86.4%). Furthermore, we show that our 251-gene signature can also accurately distinguish patients with active TB in the absence of an HIV infection from both patients with a latent TB infection and healthy controls (88.9-94.7% accuracy; 69.2-90% sensitivity and 90.3-100% specificity). We also demonstrate that the expression levels of the 251-gene signature diminish as a correlate of the length of TB treatment. CONCLUSIONS: A 251-gene signature is described to (a) detect TB in the presence or absence of an HIV co-infection, and (b) assess response to treatment following anti-TB therapy.
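For reference, the headline figures above follow directly from a confusion matrix. The counts in this short sketch are invented but chosen to reproduce the reported 81.4% accuracy, 76.2% sensitivity and 86.4% specificity:

```python
# How accuracy, sensitivity and specificity relate to confusion-matrix
# counts; the numbers below are illustrative, not the study's raw data.
def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# e.g., 16 of 21 HIV/TB co-infections and 19 of 22 HIV-only cases correct
print(diagnostic_metrics(tp=16, fn=5, tn=19, fp=3))
# -> (0.762, 0.864, 0.814)
```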

  10. A scalable and accurate targeted gene assembly tool (SAT-Assembler) for next-generation sequencing data.

    Yuan Zhang

    2014-08-01

    Gene assembly, which recovers gene segments from short reads, is an important step in functional analysis of next-generation sequencing data. Lacking quality reference genomes, de novo assembly is commonly used for RNA-Seq data of non-model organisms and metagenomic data. However, heterogeneous sequence coverage caused by heterogeneous expression or species abundance, similarity between isoforms or homologous genes, and large data size all pose challenges to de novo assembly. As a result, existing assembly tools tend to output fragmented contigs or chimeric contigs, or have high memory footprint. In this work, we introduce a targeted gene assembly program SAT-Assembler, which aims to recover gene families of particular interest to biologists. It addresses the above challenges by conducting family-specific homology search, homology-guided overlap graph construction, and careful graph traversal. It can be applied to both RNA-Seq and metagenomic data. Our experimental results on an Arabidopsis RNA-Seq data set and two metagenomic data sets show that SAT-Assembler has smaller memory usage, comparable or better gene coverage, and lower chimera rate for assembling a set of genes from one or multiple pathways compared with other assembly tools. Moreover, the family-specific design and rapid homology search allow SAT-Assembler to be naturally compatible with parallel computing platforms. The source code of SAT-Assembler is available at https://sourceforge.net/projects/sat-assembler/. The data sets and experimental settings can be found in supplementary material.

  11. Integrative structural annotation of de novo RNA-Seq provides an accurate reference gene set of the enormous genome of the onion (Allium cepa L.)

    Kim, Seungill; Kim, Myung-Shin; Kim, Yong-Min; Yeom, Seon-In; Cheong, Kyeongchae; Kim, Ki-Tae; Jeon, Jongbum; Kim, Sunggil; Kim, Do-Sun; Sohn, Seong-Han; Lee, Yong-Hwan; Choi, Doil

    2014-01-01

    The onion (Allium cepa L.) is one of the most widely cultivated and consumed vegetable crops in the world. Although a considerable amount of onion transcriptome data has been deposited into public databases, the sequences of the protein-coding genes are not accurate enough to be used, owing to non-coding sequences intermixed with the coding sequences. We generated a high-quality, annotated onion transcriptome from de novo sequence assembly and intensive structural annotation using the integra...

  12. MiRPara: a SVM-based software tool for prediction of most probable microRNA coding regions in genome scale sequences

    2011-01-01

    Background MicroRNAs are a family of ~22 nt small RNAs that can regulate gene expression at the post-transcriptional level. Identification of these molecules and their targets can aid understanding of regulatory processes. Recently, high-throughput sequencing (HTS) has become a common identification method, but there are two major limitations associated with the technique. Firstly, the method has low efficiency, with typically less than 1 in 10,000 sequences representing miRNA reads, and secondly the method preferentially targets highly expressed miRNAs. If sequences are available, computational methods can provide a screening step to investigate the value of an HTS study and aid interpretation of results. However, current methods can only predict miRNAs for short fragments and have usually been trained against small datasets which do not always reflect the diversity of these molecules. Results We have developed a software tool, miRPara, that predicts most probable mature miRNA coding regions from genome scale sequences in a species-specific manner. We classified sequences from miRBase into animal, plant and overall categories and used a support vector machine to train three models based on an initial set of 77 parameters related to the physical properties of the pre-miRNA and its miRNAs. By applying parameter filtering we found that a subset of ~25 parameters produced higher prediction ability than the full set. Our software achieves an accuracy of up to 80% against experimentally verified mature miRNAs, making it one of the most accurate methods available. Conclusions miRPara is an effective tool for locating miRNA coding regions in genome sequences and can be used as a screening step prior to HTS experiments. It is available at http://www.whiov.ac.cn/bioinformatics/mirpara PMID:21504621

  13. Identification and validation of reference genes for accurate normalization of real-time quantitative PCR data in kiwifruit.

    Ferradás, Yolanda; Rey, Laura; Martínez, Óscar; Rey, Manuel; González, M Victoria

    2016-05-01

    Identification and validation of reference genes are required for the normalization of qPCR data. We studied the expression stability produced by eight primer pairs amplifying four common genes used as references for normalization. Samples representing different tissues, organs and developmental stages in kiwifruit (Actinidia chinensis var. deliciosa (A. Chev.) A. Chev.) were used. A total of 117 kiwifruit samples were divided into five sample sets (mature leaves, axillary buds, stigmatic arms, fruit flesh and seeds). All samples were also analysed as a single set. The expression stability of the candidate primer pairs was tested using three algorithms (geNorm, NormFinder and BestKeeper). The minimum number of reference genes necessary for normalization was also determined. A unique primer pair was selected for amplifying the 18S rRNA gene. The primer pair selected for amplifying the ACTIN gene differed depending on the sample set. 18S 2 and ACT 2 were the candidate primer pairs selected for normalization in three sample sets (mature leaves, fruit flesh and stigmatic arms). 18S 2 and ACT 3 were the primer pairs selected for normalization in axillary buds. No primer pair could be selected for use as the reference for the seed sample set. The analysis of all samples as a single set did not yield any stably expressed primer pair. Considering data previously reported in the literature, we validated the selected primer pairs using the FLOWERING LOCUS T gene for the normalization of gene expression in kiwifruit. PMID:26897117

  14. Single-Copy Green Fluorescent Protein Gene Fusions Allow Accurate Measurement of Salmonella Gene Expression In Vitro and during Infection of Mammalian Cells

    Hautefort, Isabelle; Proença, Maria José; Hinton, Jay C. D.

    2003-01-01

    We developed a reliable and flexible green fluorescent protein (GFP)-based system for measuring gene expression in individual bacterial cells. Until now, most systems have relied upon plasmid-borne gfp gene fusions, risking problems associated with plasmid instability. We show that a recently developed GFP variant, GFP+, is suitable for assessing bacterial gene expression. Various gfp+ transcriptional fusions were constructed and integrated as single copies into the chromosome of Salmonella e...

  15. ICGA-PSO-ELM approach for accurate multiclass cancer classification resulting in reduced gene sets in which genes encoding secreted proteins are highly represented.

    Saraswathi, Saras; Sundaram, Suresh; Sundararajan, Narasimhan; Zimmermann, Michael; Nilsen-Hamilton, Marit

    2011-01-01

    A combination of Integer-Coded Genetic Algorithm (ICGA) and Particle Swarm Optimization (PSO), coupled with the neural-network-based Extreme Learning Machine (ELM), is used for gene selection and cancer classification. ICGA is used with PSO-ELM to select an optimal set of genes, which is then used to build a classifier to develop an algorithm (ICGA_PSO_ELM) that can handle sparse data and sample imbalance. We evaluate the performance of ICGA-PSO-ELM and compare our results with existing methods in the literature. An investigation into the functions of the selected genes, using a systems biology approach, revealed that many of the identified genes are involved in cell signaling and proliferation. An analysis of these gene sets shows a larger representation of genes that encode secreted proteins than found in randomly selected gene sets. Secreted proteins constitute a major means by which cells interact with their surroundings. Mounting biological evidence has identified the tumor microenvironment as a critical factor that determines tumor survival and growth. Thus, the genes identified by this study that encode secreted proteins might provide important insights to the nature of the critical biological features in the microenvironment of each tumor type that allow these cells to thrive and proliferate. PMID:21233525

  16. Global cDNA Amplification Combined with Real-Time RT–PCR: Accurate Quantification of Multiple Human Potassium Channel Genes at the Single Cell Level

    Al-Taher, A.; Bashein, A.; Nolan, T.; Hollingsworth, M.; Brady, G.

    2000-01-01

    We have developed a sensitive quantitative RT–PCR procedure suitable for the analysis of small samples, including single cells, and have used it to measure levels of potassium channel mRNAs in a panel of human tissues and small numbers of cells grown in culture. The method involves an initial global amplification of cDNA derived from all added polyadenylated mRNA followed by quantitative RT–PCR of individual genes using specific primers. In order to facilitate rapid and accurate processing of...

  17. Gene expression profiling of whole blood: Comparison of target preparation methods for accurate and reproducible microarray analysis

    Choi Dongseok

    2009-01-01

    Abstract Background Peripheral blood is an accessible and informative source of transcriptomal information for many human disease and pharmacogenomic studies. While there can be significant advantages to analyzing RNA isolated from whole blood, particularly in clinical studies, the preparation of samples for microarray analysis is complicated by the need to minimize artifacts associated with highly abundant globin RNA transcripts. The impact of globin RNA transcripts on expression profiling data can potentially be reduced by using RNA preparation and labeling methods that remove or block globin RNA during the microarray assay. We compared four different methods for preparing microarray hybridization targets from human whole blood collected in PAXGene tubes. Three of the methods utilized the Affymetrix one-cycle cDNA synthesis/in vitro transcription protocol but varied treatment of input RNA as follows: i. no treatment; ii. treatment with GLOBINclear; or iii. treatment with globin PNA oligos. In the fourth method cDNA targets were prepared with the Ovation amplification and labeling system. Results We find that microarray targets generated with labeling methods that reduce globin mRNA levels or minimize the impact of globin transcripts during hybridization detect more transcripts in the microarray assay compared with the standard Affymetrix method. Comparison of microarray results with quantitative PCR analysis of a panel of genes from the NF-kappa B pathway shows good correlation of transcript measurements produced with all four target preparation methods, although method-specific differences in overall correlation were observed. The impact of freezing blood collected in PAXGene tubes on data reproducibility was also examined. Expression profiles show little or no difference when RNA is extracted from either fresh or frozen blood samples. Conclusion RNA preparation and labeling methods designed to reduce the impact of globin mRNA transcripts can

  18. Accurate Profiling of Gene Expression and Alternative Polyadenylation with Whole Transcriptome Termini Site Sequencing (WTTS-Seq).

    Zhou, Xiang; Li, Rui; Michal, Jennifer J; Wu, Xiao-Lin; Liu, Zhongzhen; Zhao, Hui; Xia, Yin; Du, Weiwei; Wildung, Mark R; Pouchnik, Derek J; Harland, Richard M; Jiang, Zhihua

    2016-06-01

    Construction of next-generation sequencing (NGS) libraries involves RNA manipulation, which often creates noisy, biased, and artifactual data that contribute to errors in transcriptome analysis. In this study, a total of 19 whole transcriptome termini site sequencing (WTTS-seq) and seven RNA sequencing (RNA-seq) libraries were prepared from Xenopus tropicalis adult and embryo samples to determine the most effective library preparation method to maximize transcriptomics investigation. We strongly suggest that appropriate primers/adaptors are designed to inhibit amplification detours and that PCR overamplification is minimized to maximize transcriptome coverage. Furthermore, genome annotation must be improved so that missing data can be recovered. In addition, a complete understanding of sequencing platforms is critical to limit the formation of false-positive results. Technically, the WTTS-seq method enriches both poly(A)+ RNA and complementary DNA, adds 5'- and 3'-adaptors in one step, pursues strand sequencing and mapping, and profiles both gene expression and alternative polyadenylation (APA). Although RNA-seq is cost prohibitive, tends to produce false-positive results, and fails to detect APA diversity and dynamics, its combination with WTTS-seq is necessary to validate transcriptome-wide APA. PMID:27098915

  19. Accurate Profiling of Gene Expression and Alternative Polyadenylation with Whole Transcriptome Termini Site Sequencing (WTTS-Seq)

    Zhou, Xiang; Li, Rui; Michal, Jennifer J.; Wu, Xiao-Lin; Liu, Zhongzhen; Zhao, Hui; Xia, Yin; Du, Weiwei; Wildung, Mark R.; Pouchnik, Derek J.; Harland, Richard M.; Jiang, Zhihua

    2016-01-01

    Construction of next-generation sequencing (NGS) libraries involves RNA manipulation, which often creates noisy, biased, and artifactual data that contribute to errors in transcriptome analysis. In this study, a total of 19 whole transcriptome termini site sequencing (WTTS-seq) and seven RNA sequencing (RNA-seq) libraries were prepared from Xenopus tropicalis adult and embryo samples to determine the most effective library preparation method to maximize transcriptomics investigation. We strongly suggest that appropriate primers/adaptors are designed to inhibit amplification detours and that PCR overamplification is minimized to maximize transcriptome coverage. Furthermore, genome annotation must be improved so that missing data can be recovered. In addition, a complete understanding of sequencing platforms is critical to limit the formation of false-positive results. Technically, the WTTS-seq method enriches both poly(A)+ RNA and complementary DNA, adds 5′- and 3′-adaptors in one step, pursues strand sequencing and mapping, and profiles both gene expression and alternative polyadenylation (APA). Although RNA-seq is cost prohibitive, tends to produce false-positive results, and fails to detect APA diversity and dynamics, its combination with WTTS-seq is necessary to validate transcriptome-wide APA. PMID:27098915

  20. Accurate binning of metagenomic contigs via automated clustering sequences using information of genomic signatures and marker genes.

    Lin, Hsin-Hung; Liao, Yu-Chieh

    2016-01-01

    Metagenomics, the application of shotgun sequencing, facilitates the reconstruction of the genomes of individual species from natural environments. A major challenge in the genome recovery domain is to agglomerate or 'bin' sequences assembled from metagenomic reads into individual groups. Metagenomic binning without consideration of reference sequences enables the comprehensive discovery of new microbial organisms and aids in the microbial genome reconstruction process. Here we present MyCC, an automated binning tool that combines genomic signatures, marker genes and optional contig coverages within one or multiple samples, in order to visualize the metagenomes and to identify the reconstructed genomic fragments. We demonstrate the superior performance of MyCC compared to other binning tools including CONCOCT, GroopM, MaxBin and MetaBAT on both synthetic and real human gut communities with a small sample size (one to 11 samples), as well as on a large metagenome dataset (over 250 samples). Moreover, we demonstrate the visualization of metagenomes in MyCC to aid in the reconstruction of genomes from distinct bins. MyCC is freely available at http://sourceforge.net/projects/sb2nhri/files/MyCC/. PMID:27067514
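The "genomic signature" that binning tools such as MyCC cluster on is typically a tetranucleotide frequency vector computed per contig. A minimal illustration of such a vector follows; MyCC's actual feature set (and its combination with marker genes and coverage) is more elaborate:

```python
# Tetranucleotide frequency signature for one contig (illustrative only).
from itertools import product
import numpy as np

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]  # 256 tetramers

def tetra_signature(contig):
    counts = np.array([sum(1 for i in range(len(contig) - 3)
                           if contig[i:i + 4] == k) for k in KMERS], float)
    return counts / counts.sum()  # normalized frequency vector

sig = tetra_signature("ACGTACGTGGCCAATTACGTACGT")
print(sig.shape, sig.sum())  # (256,) ~1.0
```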

  1. Active learning algorithm for SVM based on QBC

    徐海龙; 别晓峰; 冯卉; 吴天爱

    2015-01-01

    To address the problems that large-scale labeled samples are difficult to acquire and that datasets are often class-unbalanced in the course of support vector machine (SVM) training, an active learning algorithm for SVM based on query by committee (QBC-ASVM) is proposed, which efficiently combines improved QBC active learning with weighted SVM. In this method, QBC active learning is used to select the samples that are most valuable to the current SVM classifier, and weighted SVM is used to reduce the impact of unbalanced datasets on SVM active learning. The experimental results show that the proposed approach considerably reduces the number of labeled samples and the labeling cost compared with passive SVM while maintaining the same classification accuracy, and that it improves generalization performance and also speeds up SVM training.
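A hedged sketch of the query-by-committee idea described above: train a small committee of SVMs on bootstrap replicates of the labeled set and query the pool samples on which the committee disagrees most. The data are synthetic, and scikit-learn's plain SVC stands in for the paper's weighted SVM:

```python
# Query-by-committee sample selection for SVM active learning (sketch).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_lab = rng.normal(size=(40, 2)); y_lab = (X_lab[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(200, 2))            # unlabeled pool

committee = []
for _ in range(5):                            # bootstrap committee of 5 SVMs
    idx = rng.integers(0, len(X_lab), len(X_lab))
    committee.append(SVC(kernel="linear").fit(X_lab[idx], y_lab[idx]))

votes = np.array([m.predict(X_pool) for m in committee])
disagreement = votes.var(axis=0)              # high variance = disagreement
query_idx = np.argsort(disagreement)[-10:]    # 10 most informative samples
print(query_idx)                              # send these for labeling
```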

  2. The detection of the methylated Wif-1 gene is more accurate than a fecal occult blood test for colorectal cancer screening.

    Aurelien Amiot

    The clinical benefit of guaiac fecal occult blood tests (FOBT) is now well established for colorectal cancer screening. Growing evidence has demonstrated that epigenetic modifications and fecal microbiota changes, also known as dysbiosis, are associated with CRC pathogenesis and might be used as surrogate markers of CRC. We performed a cross-sectional study that included all consecutive subjects that were referred (from 2003 to 2007) for screening colonoscopies. Prior to colonoscopy, effluents (fresh stools, sera-S and urine-U) were harvested and FOBTs performed. Methylation levels were measured in stools, S and U for 3 genes (Wif1, ALX-4, and Vimentin) selected from a panel of 63 genes; Kras mutations and seven dominant and subdominant bacterial populations in stools were quantified. Calibration was assessed with the Hosmer-Lemeshow chi-square, and discrimination was determined by calculating the C-statistic (Area Under Curve) and Net Reclassification Improvement index. There were 247 individuals (mean age 60.8±12.4 years, 52% of males) in the study group, and 90 (36%) of these individuals were patients with advanced polyps or invasive adenocarcinomas. A multivariate model adjusted for age and FOBT led to a C-statistic of 0.83 [0.77-0.88]. After supplementary sequential (one-by-one) adjustment, Wif-1 methylation (S or U) and fecal microbiota dysbiosis led to increases of the C-statistic to 0.90 [0.84-0.94] (p = 0.02) and 0.81 [0.74-0.86] (p = 0.49), respectively. When adjusted jointly for FOBT and Wif-1 methylation or fecal microbiota dysbiosis, the increase of the C-statistic was even more significant (0.91 and 0.85, p<0.001 and p = 0.10, respectively). The detection of methylated Wif-1 in either S or U has a higher performance accuracy compared to guaiac FOBT for advanced colorectal neoplasia screening. Conversely, fecal microbiota dysbiosis detection was not more accurate. Blood and urine testing could be used in those individuals reluctant to

  3. The detection of the methylated Wif-1 gene is more accurate than a fecal occult blood test for colorectal cancer screening

    Amiot, Aurelien

    2014-07-15

    Background: The clinical benefit of guaiac fecal occult blood tests (FOBT) is now well established for colorectal cancer screening. Growing evidence has demonstrated that epigenetic modifications and fecal microbiota changes, also known as dysbiosis, are associated with CRC pathogenesis and might be used as surrogate markers of CRC. Patients and Methods: We performed a cross-sectional study that included all consecutive subjects that were referred (from 2003 to 2007) for screening colonoscopies. Prior to colonoscopy, effluents (fresh stools, sera-S and urine-U) were harvested and FOBTs performed. Methylation levels were measured in stools, S and U for 3 genes (Wif1, ALX-4, and Vimentin) selected from a panel of 63 genes; Kras mutations and seven dominant and subdominant bacterial populations in stools were quantified. Calibration was assessed with the Hosmer-Lemeshow chi-square, and discrimination was determined by calculating the C-statistic (Area Under Curve) and Net Reclassification Improvement index. Results: There were 247 individuals (mean age 60.8±12.4 years, 52% of males) in the study group, and 90 (36%) of these individuals were patients with advanced polyps or invasive adenocarcinomas. A multivariate model adjusted for age and FOBT led to a C-statistic of 0.83 [0.77-0.88]. After supplementary sequential (one-by-one) adjustment, Wif-1 methylation (S or U) and fecal microbiota dysbiosis led to increases of the C-statistic to 0.90 [0.84-0.94] (p = 0.02) and 0.81 [0.74-0.86] (p = 0.49), respectively. When adjusted jointly for FOBT and Wif-1 methylation or fecal microbiota dysbiosis, the increase of the C-statistic was even more significant (0.91 and 0.85, p<0.001 and p = 0.10, respectively). Conclusion: The detection of methylated Wif-1 in either S or U has a higher performance accuracy compared to guaiac FOBT for advanced colorectal neoplasia screening. Conversely, fecal microbiota dysbiosis detection was not more accurate. Blood and urine testing could be

  4. The Rice Enhancer of Zeste [E(z)] Genes SDG711 and SDG718 are respectively involved in Long Day and Short Day Signaling to Mediate the Accurate Photoperiod Control of Flowering time

    Xiaoyun Liu; Chao Zhou; Yu Zhao; Shaoli Zhou; Wentao Wang; Dao-Xiu Zhou

    2014-01-01

    Recent advances in rice flowering studies have shown that the accurate control of flowering by photoperiod is regulated by key mechanisms that involve the regulation of flowering genes including Hd1, Ehd1, Hd3a, and RFT1. The chromatin mechanism involved in the regulation of rice flowering genes is presently not well known. Here we show that the rice E(z) genes SDG711 and SDG718, which encode the Polycomb Repressive Complex2 (PRC2) key subunit that is required for trimethylation of histone H3...

  5. The rice enhancer of zeste [E(z)] genes SDG711 and SDG718 are respectively involved in long day and short day signaling to mediate the accurate photoperiod control of flowering time

    Liu, Xiaoyun; Zhou, Chao; Zhao, Yu; Zhou, Shaoli; Wang, Wentao; Zhou, Dao-Xiu

    2014-01-01

    Recent advances in rice flowering studies have shown that the accurate control of flowering by photoperiod is regulated by key mechanisms that involve the regulation of flowering genes including Heading date1 (Hd1), Early hd1 (Ehd1), Hd3a, and RFT1. The chromatin mechanism involved in the regulation of rice flowering genes is presently not well known. Here we show that the rice enhancer of zeste [E(z)] genes SDG711 and SDG718, which encode the polycomb repressive complex2 (PRC2) key subunit t...

  6. The effect of multiple primer-template mismatches on quantitative PCR accuracy and development of a multi-primer set assay for accurate quantification of pcrA gene sequence variants.

    Ledeker, Brett M; De Long, Susan K

    2013-09-01

    Quantitative PCR (qPCR) is a critical tool for quantifying the abundance of specific organisms and the level or expression of target genes in medically and environmentally relevant systems. However, often the power of this tool has been limited because primer-template mismatches, due to sequence variations of targeted genes, can lead to inaccuracies in measured gene quantities, detection failures, and spurious conclusions. Currently available primer design guidelines for qPCR were developed for pure culture applications, and available primer design strategies for mixed cultures were developed for detection rather than accurate quantification. Furthermore, past studies examining the impact of mismatches have focused only on single mismatches while instances of multiple mismatches are common. There are currently no appropriate solutions to overcome the challenges posed by sequence variations. Here, we report results that provide a comprehensive, quantitative understanding of the impact of multiple primer-template mismatches on qPCR accuracy and demonstrate a multi-primer set approach to accurately quantify a model gene pcrA (encoding perchlorate reductase) that has substantial sequence variation. Results showed that for multiple mismatches (up to 3 mismatches) in primer regions where mismatches were previously considered tolerable (middle and 5' end), quantification accuracies could be as low as ~0.1%. Furthermore, tests were run using a published pcrA primer set with mixtures of genomic DNA from strains known to harbor the target gene, and for some mixtures quantification accuracy was as low as ~0.8% or was non-detect. To overcome these limitations, a multiple primer set assay including minimal degeneracies was developed for pcrA genes. This assay resulted in nearly 100% accurate detection for all mixed microbial communities tested. The multi-primer set approach demonstrated herein can be broadly applied to other genes with known sequences. PMID:23806694

  7. MotorPlex provides accurate variant detection across large muscle genes both in single myopathic patients and in pools of DNA samples

    Savarese, Marco; Di Fruscio, Giuseppina; Mutarelli, Margherita; Torella, Annalaura; Magri, Francesca; Santorelli, Filippo Maria; Comi, Giacomo Pietro; Bruno, Claudio; Nigro, Vincenzo

    2014-01-01

    Mutations in ~100 genes cause muscle diseases with complex and often unexplained genotype/phenotype correlations. Next-generation sequencing studies identify a greater-than-expected number of genetic variations in the human genome. This suggests that existing clinical monogenic testing systematically miss very relevant information. We have created a core panel of genes that cause all known forms of nonsyndromic muscle disorders (MotorPlex). It comprises 93 loci, among which are the largest an...

  8. Immediate-Early Gene Transcriptional Activation in Hippocampus CA1 and CA3 Does Not Accurately Reflect Rapid, Pattern Completion-Based Retrieval of Context Memory

    Pevzner, Aleksandr; Guzowski, John F.

    2015-01-01

    No studies to date have examined whether immediate-early gene (IEG) activation is driven by context memory recall. To address this question, we utilized the context preexposure facilitation effect (CPFE) paradigm. In CPFE, animals acquire contextual fear conditioning through hippocampus-dependent rapid retrieval of a previously formed contextual…

  9. Genes optimized by evolution for accurate and fast translation encode in Archaea and Bacteria a broad and characteristic spectrum of protein functions

    Merkl Rainer

    2010-11-01

    Abstract Background In many microbial genomes, a strong preference for a small number of codons can be observed in genes whose products are needed by the cell in large quantities. This codon usage bias (CUB) improves translational accuracy and speed and is one of several factors optimizing cell growth. Whereas CUB and the overrepresentation of individual proteins have been studied in detail, it is still unclear which high-level metabolic categories are subject to translational optimization in different habitats. Results In a systematic study of 388 microbial species, we have identified for each genome a specific subset of genes characterized by a marked CUB, which we named the effectome. As expected, gene products related to protein synthesis are abundant in both archaeal and bacterial effectomes. In addition, enzymes contributing to energy production and gene products involved in protein folding and stabilization are overrepresented. The comparison of genomes from eleven habitats shows that the environment has only a minor effect on the composition of the effectomes. As a paradigmatic example, we detailed the effectome content of 37 bacterial genomes that are most likely exposed to the strongest selective pressure towards translational optimization. These effectomes accommodate a broad range of protein functions, such as enzymes related to glycolysis/gluconeogenesis and the TCA cycle, ATP synthases, aminoacyl-tRNA synthetases, chaperones, proteases that degrade misfolded proteins, protectants against oxidative damage, as well as cold shock and outer membrane proteins. Conclusions We made clear that effectomes consist of specific subsets of the proteome being involved in several cellular functions. As expected, some functions are related to cell growth and affect speed and quality of protein synthesis. Additionally, the effectomes contain enzymes of central metabolic pathways and cellular functions sustaining microbial life under stress situations. These

  10. Accurate Finite Difference Algorithms

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
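To illustrate what a higher order of accuracy buys, here are generic 2nd- and 4th-order central-difference stencils (not the record's specific algorithms) applied to d/dx sin(x) at x = 1:

```python
# Accuracy gain from a higher-order finite difference stencil (sketch).
import numpy as np

def d1_2nd(f, x, h):   # 2nd-order central difference
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_4th(f, x, h):   # 4th-order central difference
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x, h = 1.0, 1e-2
exact = np.cos(x)
print(abs(d1_2nd(np.sin, x, h) - exact))  # ~9e-6  (error ~ h^2)
print(abs(d1_4th(np.sin, x, h) - exact))  # ~2e-10 (error ~ h^4)
```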

  11. Automatic Parameters Selection for SVM Based on PSO

    ZHANG Mingfeng; ZHU Yinghua; ZHENG Xu; LIU Yu

    2007-01-01

    Motivated by the fact that automatic parameter selection for Support Vector Machines (SVM) is an important issue in making SVM practically useful, and that the commonly used Leave-One-Out (LOO) method is computationally complex and time-consuming, an effective strategy for automatic parameter selection for SVM using Particle Swarm Optimization (PSO) is proposed in this paper. Simulation results on a practical data model demonstrate the effectiveness and high efficiency of the proposed approach.
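A minimal sketch of PSO-driven SVM parameter selection as described above: particles move through (log10 C, log10 gamma) space and are scored by cross-validation. The synthetic data, the deliberately tiny swarm, and scikit-learn's SVC are illustrative assumptions, not the paper's setup:

```python
# PSO search over SVM hyperparameters, scored by 3-fold CV (sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
rng = np.random.default_rng(0)

def fitness(p):  # p = (log10 C, log10 gamma)
    clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(clf, X, y, cv=3).mean()

n, dim = 8, 2
pos = rng.uniform(-3, 3, (n, dim)); vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
for _ in range(10):
    gbest = pbest[pbest_f.argmax()]            # best position found so far
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f                       # update personal bests
    pbest[better], pbest_f[better] = pos[better], f[better]
print("best (log10 C, log10 gamma):", pbest[pbest_f.argmax()], pbest_f.max())
```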

  12. Incremental Training for SVM-Based Classification with Keyword Adjusting

    SUN Jin-wen; YANG Jian-wu; LU Bin; XIAO Jian-guo

    2004-01-01

    This paper analyzes the theory of incremental learning for SVM (support vector machine) and points out a shortage of present research on SVM incremental learning: only support vector optimization is considered. According to the significance of keywords in training, a new incremental training method considering keyword adjusting is proposed, which eliminates the difference between incremental learning and batch learning through the keyword adjusting. The experimental results show that the improved method outperforms the method without keyword adjusting and achieves the same precision as the batch method.

  13. Oil spill detection from SAR image using SVM based classification

    A. A. Matkan

    2013-09-01

    In this paper, the potential of fully polarimetric L-band SAR data for detecting sea oil spills is investigated using polarimetric decompositions and texture analysis based on an SVM classifier. First, power and magnitude measurements of the HH and VV polarization modes and the Pauli, Freeman and Krogager decompositions are computed and applied in the SVM classifier. Texture analysis is then used for identification with the SVM method, extracting the texture features Mean, Variance, Contrast and Dissimilarity from these measurements. Experiments are conducted on fully polarimetric SAR data acquired by the PALSAR sensor of the ALOS satellite on August 25, 2006. In the first step, an accuracy assessment indicated overall accuracies of 78.92% and 96.46% for the power measurement of the VV polarization and the Krogager decomposition, respectively. With texture analysis, the results improved to 96.44% for the mean of the power and magnitude measurements of the HH and VV polarizations, and to 96.65% for the Krogager decomposition. The results show that the Krogager polarimetric decomposition method gives satisfactory results for the detection of oil spills on the sea surface and that texture analysis improves them further.

  14. Question Categorization Using SVM Based on Different Term Weighting Methods

    Priyanka G Pillai

    2012-05-01

    This paper deals with the performance of Question Categorization based on four different term weighting methods. Term weighting methods such as tf*idf, qf*icf, iqf*qf*icf and vrf together with SVM classifier were used for categorization. From the experiments conducted using both linear and nonlinear SVM, term weighting method iqf*qf*icf showed better performance in question categorization than other methods.

  15. Question Categorization Using SVM Based on Different Term Weighting Methods

    Priyanka G Pillai; Jayasree Narayanan

    2012-01-01

    This paper deals with the performance of Question Categorization based on four different term weighting methods. Term weighting methods such as tf*idf, qf*icf, iqf*qf*icf and vrf together with SVM classifier were used for categorization. From the experiments conducted using both linear and nonlinear SVM, term weighting method iqf*qf*icf showed better performance in question categorization than other methods.
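Of the four weightings compared in these two records, tf*idf is the one with a stock implementation; below is a hedged sketch of a tf*idf + SVM question-categorization pipeline. The qf*icf-style question-specific weightings are not reproduced, and the questions and labels are invented:

```python
# tf*idf-weighted linear SVM for question categorization (sketch).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

questions = ["Who wrote Hamlet?", "What is the capital of France?",
             "When did World War II end?", "Who painted the Mona Lisa?"]
labels = ["person", "place", "time", "person"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(questions, labels)
print(clf.predict(["Who discovered penicillin?"]))  # likely ['person']
```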

  16. An SVM Based Approach for the Analysis Of Mammography Images

    Mammography is among the most popular imaging techniques used in the diagnosis of breast cancer. Nevertheless distinguishing between healthy and ill images is hard even for an experienced radiologist, because a single image usually includes several regions of interest (ROIs). The hardness of this classification problem along with the substantial amount of data, gathered from patients' medical history, motivates the use of a machine learning approach as part of a CAD (Computer Aided Detection) tool, aiming to assist radiologists in the characterization of mammography images. Specifically, our approach involves: i) the ROI extraction, ii) the Feature Vector extraction, iii) the Support Vector Machine (SVM) classification of ROIs and iv) the characterization of the whole image. We evaluate the performance of our approach in terms of the SVM's training and testing error and in terms of ROI specificity - sensitivity. The results show a relation between the number of features used and the SVM's performance

  17. SVM-based prediction of caspase substrate cleavage sites

    Wee, Lawrence JK; Tan, Tin Wee; Ranganathan, Shoba

    2006-01-01

    Background Caspases belong to a class of cysteine proteases which function as critical effectors in apoptosis and inflammation by cleaving substrates immediately after unique sites. Prediction of such cleavage sites will complement structural and functional studies on substrates cleavage as well as discovery of new substrates. Recently, different computational methods have been developed to predict the cleavage sites of caspase substrates with varying degrees of success. As the support vector...

  18. A NEW SVM BASED EMOTIONAL CLASSIFICATION OF IMAGE

    Wang Weining; Yu Yinglin; Zhang Jianchao

    2005-01-01

    This paper presents how high-level emotional representations of art paintings can be inferred from perceptual-level features suited to the particular classes (dynamic vs. static classification). The key points are feature selection and classification. Based on the strong relationship between the notable lines of an image and human sensations, a novel feature vector WLDLV (Weighted Line Direction-Length Vector) is proposed, which includes both orientation and length information of the lines in an image. Classification is performed by SVM (Support Vector Machine), and images can be classified into dynamic and static. Experimental results demonstrate the effectiveness and superiority of the algorithm.

  19. Accurate molecular classification of cancer using simple rules

    Gotoh Osamu; Wang Xiaosheng

    2009-01-01

    Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often ...

  20. Gene

    U.S. Department of Health & Human Services — Gene integrates information from a wide range of species. A record may include nomenclature, Reference Sequences (RefSeqs), maps, pathways, variations, phenotypes,...

  1. Towards accurate emergency response behavior

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail

  2. Accurate determination of antenna directivity

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power...

  3. Circadian transitions in radiation dose-dependent augmentation of mRNA levels for DNA damage-induced genes elicited by accurate real-time RT-PCR quantification

    Molecular mechanisms of the intracellular response to DNA damage caused by exposure to ionizing radiation have been studied. For cells isolated from the living bodies of humans and experimental animals, alteration of responsiveness by physiological oscillations such as the circadian rhythm must be considered. To examine the circadian variation in the response of the p53-responsive genes p21, mdm2, bax, and puma, we established a method to quantify their mRNA levels with high reproducibility and accuracy based on real-time reverse transcription polymerase chain reaction (RT-PCR) and compared the levels of responsiveness in mouse hemocytes after diurnal irradiation to those after nocturnal irradiation. Augmentation of p21 and mdm2 mRNA levels accompanying growth arrest, and of puma mRNA before apoptosis, was confirmed by a time-course experiment in RAW264.7 cells, and dose-dependent increases in the peak levels of all the mRNAs were shown. Similarly, the relative mRNA levels of p21, mdm2, bax, and puma per glyceraldehyde-3-phosphate dehydrogenase (GAPDH) also increased dose-dependently in peripheral blood and bone marrow cells isolated from whole-body-irradiated mice. Induction levels of all messages were reduced by half after nighttime irradiation as compared with daytime irradiation in blood cells. In marrow cells, nighttime irradiation enhanced the p21 and mdm2 mRNA levels more than daytime irradiation did. No significant difference in bax or puma mRNA levels was observed between nighttime and daytime irradiation in marrow cells. This suggests that early-stage cellular responsiveness of DNA damage-induced genes differs between diurnal and nocturnal irradiation. (author)
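The relative quantification implied here (mRNA levels per GAPDH, induction vs. unirradiated controls) is the standard 2^-ΔΔCt calculation. A small sketch with invented Ct values:

```python
# 2^-ddCt relative quantification (sketch; Ct numbers are invented).
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct = ct_target - ct_ref                  # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct - d_ct_ctrl)            # fold change vs. control

# e.g., p21 after irradiation: Ct 24.1 (GAPDH 18.0) vs. control 26.5 (18.1)
print(relative_expression(24.1, 18.0, 26.5, 18.1))  # ~4.9-fold induction
```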

  4. Accurate molecular classification of cancer using simple rules

    Gotoh Osamu

    2009-10-01

    Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.
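A minimal sketch of the paper's central point, that one well-chosen gene with a threshold rule can classify samples, evaluated by LOOCV. The synthetic expression values, the midpoint threshold rule and the labels are illustrative assumptions, not the rough-set "depended degree" criterion of the paper:

```python
# One-gene threshold rule evaluated by leave-one-out cross-validation.
import numpy as np
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(2)
expr = np.concatenate([rng.normal(5, 1, 20), rng.normal(9, 1, 20)])
y = np.array([0] * 20 + [1] * 20)   # e.g., ALL vs. AML (hypothetical)

correct = 0
for train, test in LeaveOneOut().split(expr):
    thr = (expr[train][y[train] == 0].mean()
           + expr[train][y[train] == 1].mean()) / 2   # midpoint rule
    correct += int((expr[test] > thr).astype(int)[0] == y[test][0])
print("LOOCV accuracy:", correct / len(y))   # ~1.0 for well-separated data
```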

  5. Accurate ab initio spin densities

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  6. Accurate Modeling of Advanced Reflectarrays

    Zhou, Min

    of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical NearField Antenna Test Facility, it was concluded that the three latter factors are particularly important...... to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  7. Accurate thickness measurement of graphene

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  8. A More Accurate Fourier Transform

    Courtney, Elya

    2015-01-01

    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
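
    To make the comparison concrete, here is a small, hedged sketch (not the paper's code; the sample rate, signal parameters and frequency grid below are invented) that contrasts an FFT peak estimate with a rectangle-rule evaluation of the explicit Fourier integral on a finer frequency grid:

```python
import numpy as np

# Sketch: one off-bin cosine, analyzed by FFT peak-picking and by direct
# rectangle-rule evaluation of the Fourier integral.
fs, n = 100.0, 1000                      # sample rate (Hz), number of samples
t = np.arange(n) / fs
f_true, a_true = 3.234, 1.7              # true frequency (off the FFT grid) and amplitude
x = a_true * np.cos(2 * np.pi * f_true * t)

# FFT estimate: peak bin of the magnitude spectrum (resolution limited to fs/n = 0.1 Hz).
spec = np.fft.rfft(x) / (n / 2)
freqs = np.fft.rfftfreq(n, d=1 / fs)
k = np.argmax(np.abs(spec))
print("FFT: f = %.4f Hz, |A| = %.4f" % (freqs[k], abs(spec[k])))

# Explicit integral: X(f) = (2/T) * integral of x(t) exp(-2*pi*i*f*t) dt,
# evaluated by the rectangle rule on an arbitrarily fine frequency grid.
f_grid = np.linspace(3.0, 3.4, 4001)
X = np.array([np.sum(x * np.exp(-2j * np.pi * f * t)) for f in f_grid]) * 2 / n
j = np.argmax(np.abs(X))
print("EI:  f = %.4f Hz, |A| = %.4f" % (f_grid[j], abs(X[j])))
```

    Because the explicit integral can be evaluated at arbitrary frequencies, the peak location is not locked to the fs/n bin spacing that limits the raw FFT estimate, which is consistent with the accuracy gap the abstract reports.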

  9. 38 CFR 4.46 - Accurate measurement.

    2010-07-01

    38 CFR 4.46, from Title 38 (Pensions, Bonuses, and Veterans' Relief), Schedule for Rating Disabilities, The Musculoskeletal System: Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  10. Identification of Gene-Expression Signatures and Protein Markers for Breast Cancer Grading and Staging.

    Fang Yao

    The grade of a cancer is a measure of the cancer's malignancy level, and the stage of a cancer refers to the size and the extent to which the cancer has spread. Here we present a computational method for prediction of gene signatures and blood/urine protein markers for breast cancer grades and stages based on RNA-seq data, which are retrieved from the TCGA breast cancer dataset and cover 111 pairs of disease and matching adjacent noncancerous tissues with pathologist-assigned stages and grades. By applying a differential expression and an SVM-based classification approach, we found that 324 and 227 genes in cancer have their expression levels consistently up-regulated vs. their matching controls in a grade- and stage-dependent manner, respectively. By using these genes, we predicted a 9-gene panel as a gene signature for distinguishing poorly differentiated from moderately and well differentiated breast cancers, and a 19-gene panel as a gene signature for discriminating between the moderately and well differentiated breast cancers. Similarly, a 30-gene panel and a 21-gene panel are predicted as gene signatures for distinguishing advanced stage (stages III-IV) from early stage (stages I-II) cancer samples and for distinguishing stage II from stage I samples, respectively. We expect these gene panels can be used as gene-expression signatures for cancer grade and stage classification. In addition, of the 324 grade-dependent genes, 188 and 66 encode proteins that are predicted to be blood-secretory and urine-excretory, respectively; and of the 227 stage-dependent genes, 123 and 51 encode proteins predicted to be blood-secretory and urine-excretory, respectively. We anticipate that some combinations of these blood and urine proteins could serve as markers for monitoring breast cancer at specific grades and stages through blood and urine tests.
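
    As an illustration of the two-step pipeline described above (differential expression filtering followed by SVM classification), the following sketch uses synthetic expression data in place of the TCGA matrices; the gene count, panel size and labels are stand-ins only:

```python
import numpy as np
from scipy import stats
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the expression matrix: rows = samples, columns = genes;
# y = 0 (well/moderately differentiated) vs. 1 (poorly differentiated).
rng = np.random.default_rng(0)
X = rng.lognormal(mean=2.0, sigma=1.0, size=(120, 500))
y = rng.integers(0, 2, size=120)
X[y == 1, :9] *= 3.0                      # pretend a 9-gene panel is up-regulated

# Step 1: differential-expression filter (two-sample t-test per gene on log counts).
logX = np.log2(X + 1)
_, p = stats.ttest_ind(logX[y == 1], logX[y == 0], axis=0)
panel = np.argsort(p)[:9]                 # keep the 9 most significant genes

# Step 2: SVM classification on the selected panel, scored by cross-validation.
# (In a real study the filter would sit inside the CV loop to avoid selection bias.)
acc = cross_val_score(LinearSVC(dual=False), logX[:, panel], y, cv=5)
print("panel:", sorted(panel), "CV accuracy: %.2f" % acc.mean())
```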

  11. Laboratory Building for Accurate Determination of Plutonium

    2008-01-01

    The accurate determination of plutonium is one of the most important assay techniques of nuclear fuel; it is also the key to chemical measurement transfer and the basis of the nuclear material balance. An...

  12. Invariant Image Watermarking Using Accurate Zernike Moments

    Ismail A. Ismail

    2010-01-01

    Problem statement: Digital image watermarking is the most popular method for image authentication, copyright protection and content description. Zernike moments are the most widely used moments in image processing and pattern recognition. The magnitudes of Zernike moments are rotation invariant, so they can be used just as a watermark signal or be further modified to carry embedded data. Zernike moments computed in Cartesian coordinates are not accurate due to geometrical and numerical errors. Approach: In this study, we employed a robust image-watermarking algorithm using accurate Zernike moments. These moments are computed in polar coordinates, where both approximation and geometric errors are removed. Accurate Zernike moments are used in image watermarking and proved to be robust against different kinds of geometric attacks. The performance of the proposed algorithm is evaluated using standard images. Results: Experimental results show that accurate Zernike moments achieve a higher degree of robustness than approximated ones against rotation, scaling, flipping, shearing and affine transformation. Conclusion: By computing accurate Zernike moments, the embedded watermark bits can be extracted at a low error rate.

  13. Accurate atomic data for industrial plasma applications

    Griesmann, U.; Bridges, J.M.; Roberts, J.R.; Wiese, W.L.; Fuhr, J.R. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)

    1997-12-31

    Reliable branching fraction, transition probability and transition wavelength data for radiative dipole transitions of atoms and ions in plasma are important in many industrial applications. Optical plasma diagnostics and modeling of the radiation transport in electrical discharge plasmas (e.g. in electrical lighting) depend on accurate basic atomic data. NIST has an ongoing experimental research program to provide accurate atomic data for radiative transitions. The new NIST UV-vis-IR high resolution Fourier transform spectrometer has become an excellent tool for accurate and efficient measurements of numerous transition wavelengths and branching fractions in a wide wavelength range. Recently, the authors have also begun to employ photon counting techniques for very accurate measurements of branching fractions of weaker spectral lines with the intent to improve the overall accuracy for experimental branching fractions to better than 5%. They have now completed their studies of transition probabilities of Ne I and Ne II. The results agree well with recent calculations and for the first time provide reliable transition probabilities for many weak intercombination lines.

  14. More accurate picture of human body organs

    Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they allow a more accurate image of human body organs to be obtained. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)

  15. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  16. A gene expression biomarker accurately predicts estrogen receptor α modulation in a human gene expression compendium

    The EPA’s vision for the Endocrine Disruptor Screening Program (EDSP) in the 21st Century (EDSP21) includes utilization of high-throughput screening (HTS) assays coupled with computational modeling to prioritize chemicals with the goal of eventually replacing current Tier 1...

  17. How Accurate is inv(A)*b?

    Druinsky, Alex

    2012-01-01

    Several widely-used textbooks lead the reader to believe that solving a linear system of equations Ax = b by multiplying the vector b by a computed inverse inv(A) is inaccurate. Virtually all other textbooks on numerical analysis and numerical linear algebra advise against using computed inverses without stating whether this is accurate or not. In fact, under reasonable assumptions on how the inverse is computed, x = inv(A)*b is as accurate as the solution computed by the best backward-stable solvers. This fact is not new, but obviously obscure. We review the literature on the accuracy of this computation and present a self-contained numerical analysis of it.
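
    The claim is easy to probe numerically. A minimal sketch (using an arbitrary well-conditioned test matrix, not one from the paper) compares the forward error and normwise residual of x = inv(A)*b against a backward-stable solve:

```python
import numpy as np

# Compare the "discouraged" inverse route with a backward-stable LU solve.
rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n)) + 10.0 * np.eye(n)   # keep A comfortably invertible
x_true = rng.standard_normal(n)
b = A @ x_true

x1 = np.linalg.inv(A) @ b             # multiply by a computed inverse
x2 = np.linalg.solve(A, b)            # LU-based backward-stable solver

for name, x in (("inv(A)*b", x1), ("solve", x2)):
    ferr = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    rres = np.linalg.norm(b - A @ x) / (np.linalg.norm(A) * np.linalg.norm(x))
    print("%-9s forward error %.2e, normwise residual %.2e" % (name, ferr, rres))
```

    On well-behaved systems like this one the two routes typically agree to within a small factor, which is the paper's point; the inverse route mainly costs extra arithmetic, not accuracy.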

  18. Accurate guitar tuning by cochlear implant musicians.

    Thomas Lu

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  19. Accurate guitar tuning by cochlear implant musicians.

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
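
    A toy illustration of the beat cue reported above: two simultaneous tones separated by 0.5 Hz produce an amplitude envelope oscillating at |f1 - f2|, which can be detected without fine spectral pitch resolution. All signal parameters below are invented for the demonstration:

```python
import numpy as np

# Two simultaneous tones 0.5 Hz apart: the rectified-and-smoothed envelope
# oscillates at |f1 - f2|, so the mistuning can be read off temporally.
fs = 2000.0
t = np.arange(0, 8.0, 1 / fs)
f1, f2 = 110.0, 110.5                     # string vs. slightly sharp reference
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

env = np.convolve(np.abs(x), np.ones(200) / 200, mode="same")  # envelope
env -= env.mean()                          # drop the DC component
spec = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(env.size, 1 / fs)
print("beat rate ~ %.3f Hz (expected %.3f)" % (freqs[np.argmax(spec)], abs(f1 - f2)))
```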

  20. Accurate Finite Difference Methods for Option Pricing

    Persson, Jonas

    2006-01-01

    Stock options are priced numerically using space- and time-adaptive finite difference methods. European options on one and several underlying assets are considered. These are priced with adaptive numerical algorithms including a second order method and a more accurate method. For American options we use the adaptive technique to price options on one stock with and without stochastic volatility. In all these methods emphasis is put on the control of errors to fulfill predefined tolerance level...
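
    As a minimal illustration of the finite difference approach (a plain explicit scheme on a uniform grid, not the space- and time-adaptive higher-order methods of the thesis), the following sketch prices a European call under Black-Scholes; all market parameters are invented:

```python
import numpy as np

# Explicit finite differences for a European call, marching backwards from expiry.
r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
s_max, m, n = 300.0, 300, 20000              # grid bounds and resolution
ds, dt = s_max / m, T / n                    # dt small enough for explicit stability
s = np.linspace(0.0, s_max, m + 1)
v = np.maximum(s - K, 0.0)                   # payoff at expiry

for step in range(n):
    tau = (step + 1) * dt                    # time remaining after this step
    d1 = (v[2:] - v[:-2]) / (2 * ds)         # central first derivative in S
    d2 = (v[2:] - 2 * v[1:-1] + v[:-2]) / ds ** 2
    v[1:-1] += dt * (0.5 * sigma ** 2 * s[1:-1] ** 2 * d2
                     + r * s[1:-1] * d1 - r * v[1:-1])
    v[0] = 0.0                               # call is worthless at S = 0
    v[-1] = s_max - K * np.exp(-r * tau)     # deep in-the-money asymptote

print("price at S = 100: %.4f" % np.interp(100.0, s, v))
```

    The adaptive methods in the thesis refine ds and dt where the error indicator is large instead of using the fixed uniform grid above; the discretization idea is otherwise the same.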

  1. Accurate, reproducible measurement of blood pressure.

    Campbell, N. R.; Chockalingam, A; Fodor, J. G.; McKay, D. W.

    1990-01-01

    The diagnosis of mild hypertension and the treatment of hypertension require accurate measurement of blood pressure. Blood pressure readings are altered by various factors that influence the patient, the techniques used and the accuracy of the sphygmomanometer. The variability of readings can be reduced if informed patients prepare in advance by emptying their bladder and bowel, by avoiding over-the-counter vasoactive drugs the day of measurement and by avoiding exposure to cold, caffeine con...

  2. Accurate variational forms for multiskyrmion configurations

    Jackson, A.D.; Weiss, C.; Wirzba, A.; Lande, A.

    1989-04-17

    Simple variational forms are suggested for the fields of a single skyrmion on a hypersphere, $S_3(L)$, and of a face-centered cubic array of skyrmions in flat space, $R_3$. The resulting energies are accurate at the level of 0.2%. These approximate field configurations provide a useful alternative to brute-force solutions of the corresponding Euler equations.

  3. Efficient Accurate Context-Sensitive Anomaly Detection

    2007-01-01

    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, which is based on static binary executable analysis. The CPDA model incorporates the optimized call stack walk and code instrumentation technique to gain complete context information. Thereby the proposed method can detect more attacks, while retaining good performance.

  4. Towards accurate modeling of moving contact lines

    Holmgren, Hanna

    2015-01-01

    The present thesis treats the numerical simulation of immiscible incompressible two-phase flows with moving contact lines. The conventional Navier–Stokes equations combined with a no-slip boundary condition lead to a non-integrable stress singularity at the contact line. The singularity in the model can be avoided by allowing the contact line to slip. Implementing slip conditions in an accurate way is not straightforward and different regularization techniques exist where ad-hoc procedures ...

  5. Accurate phase-shift velocimetry in rock

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  6. Genes and Gene Therapy

    ...correctly, a child can have a genetic disorder. Gene therapy is an experimental technique that uses genes to treat or prevent disease. The most common form of gene therapy involves inserting a normal gene to replace an ...

  7. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥ 40 points and ≥ 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  8. SVM-Based CAC System for B-Mode Kidney Ultrasound Images.

    Subramanya, M B; Kumar, Vinod; Mukherjee, Shaktidev; Saini, Manju

    2015-08-01

    The present study proposes a computer-aided classification (CAC) system for three kidney classes, viz. normal, medical renal disease (MRD) and cyst using B-mode ultrasound images. Thirty-five B-mode kidney ultrasound images consisting of 11 normal images, 8 MRD images and 16 cyst images have been used. Regions of interest (ROIs) have been marked by the radiologist from the parenchyma region of the kidney in case of normal and MRD cases and from regions inside lesions for cyst cases. To evaluate the contribution of texture features extracted from de-speckled images for the classification task, original images have been pre-processed by eight de-speckling methods. Six categories of texture features are extracted. One-against-one multi-class support vector machine (SVM) classifier has been used for the present work. Based on overall classification accuracy (OCA), features from ROIs of original images are concatenated with the features from ROIs of pre-processed images. On the basis of OCA, few feature sets are considered for feature selection. Differential evolution feature selection (DEFS) has been used to select optimal features for the classification task. DEFS process is repeated 30 times to obtain 30 subsets. Run-length matrix features from ROIs of images pre-processed by Lee's sigma concatenated with that of enhanced Lee method have resulted in an average accuracy (in %) and standard deviation of 86.3 ± 1.6. The results obtained in the study indicate that the performance of the proposed CAC system is promising, and it can be used by the radiologists in routine clinical practice for the classification of renal diseases. PMID:25537457
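
    A skeletal version of the classifier setup named in the abstract, a one-against-one multi-class SVM over texture feature vectors, can be sketched as follows; the feature matrix here is a random stand-in for the run-length and other texture descriptors computed from the ROIs:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Random stand-ins for the ROI texture descriptors; the point is the
# one-against-one multi-class SVM pipeline, not the score (which is near
# chance on random features).
rng = np.random.default_rng(0)
X = rng.standard_normal((35, 24))             # 35 ROIs x 24 texture features
y = np.array([0] * 11 + [1] * 8 + [2] * 16)   # normal / MRD / cyst labels

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", decision_function_shape="ovo"))
print("OCA (5-fold CV): %.2f" % cross_val_score(clf, X, y, cv=5).mean())
```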

  9. Component Content Soft-Sensor of SVM Based on Ions Color Characteristics

    Zhang Kunpeng

    2012-10-01

    In consideration of the different characteristic colors of ions in the P507-HCl Pr/Nd extraction separation system, the ion color image features H, S, I that are closely related to the element component contents are extracted using image processing methods. A Principal Component Analysis algorithm is employed to determine the statistical means of H, S, I that have the strongest correlation with the element component content, and the auxiliary variables are obtained. With the support vector machine algorithm, a component content soft-sensor model for the Pr/Nd extraction process is established. Finally, simulations and tests verify the rationality and feasibility of the proposed method. The research results provide a theoretical foundation for the online measurement of the component content in the Pr/Nd countercurrent extraction separation process.
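
    The pipeline described above (H, S, I statistics, PCA for auxiliary variables, then an SVM regressor) can be sketched as follows with mock data; the feature dimensions and the synthetic content relation are assumptions for illustration only:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Mock soft-sensor data: H, S, I statistics of solution images -> component content.
rng = np.random.default_rng(0)
hsi_stats = rng.uniform(0, 1, size=(200, 6))          # e.g. mean/variance of H, S, I
content = (0.7 * hsi_stats[:, 0] + 0.2 * hsi_stats[:, 2]
           + 0.05 * rng.standard_normal(200))         # mock component content

model = make_pipeline(StandardScaler(), PCA(n_components=3),
                      SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(hsi_stats[:150], content[:150])
pred = model.predict(hsi_stats[150:])
print("RMSE on held-out samples: %.4f" %
      np.sqrt(np.mean((pred - content[150:]) ** 2)))
```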

  10. Component Content Soft-Sensor of SVM Based on Ions Color Characteristics

    Zhang Kunpeng; Yang Hui; Lu Rongxiu

    2012-01-01

    In consideration of the different characteristic colors of ions in the P507-HCl Pr/Nd extraction separation system, the ion color image features H, S, I that are closely related to the element component contents are extracted using image processing methods. A Principal Component Analysis algorithm is employed to determine the statistical means of H, S, I that have the strongest correlation with the element component content, and the auxiliary variables are obtained. With the algorithm of support vector machine, a co...

  11. Restoring the Generalizability of SVM Based Decoding in High Dimensional Neuroimage Data

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Variance inflation is caused by a mismatch between linear projections of test and training data when projections are estimated on training sets smaller than the dimensionality of the feature space. We demonstrate that variance inflation can lead to an increased neuroimage decoding error rate for ...

  12. SVM-based feature extraction and classification of aflatoxin contaminated corn using fluorescence hyperspectral data

    Support Vector Machine (SVM) was used in the Genetic Algorithms (GA) process to select and classify a subset of hyperspectral image bands. The method was applied to fluorescence hyperspectral data for the detection of aflatoxin contamination in Aspergillus flavus infected single corn kernels. In the...

  13. SVM Based Recognition of Facial Expressions Used In Indian Sign Language

    Daleesha M Viswanathan

    2015-02-01

    In sign language systems, facial expressions are an intrinsic component that usually accompanies hand gestures. The facial expressions can modify or change the meaning of a hand gesture into a statement or a question, or improve the meaning and understanding of hand gestures. The scientific literature available in Indian Sign Language (ISL) on facial expression recognition is scanty. Contrary to American Sign Language (ASL), head movements are less conspicuous in ISL and the answers to questions such as yes or no are signed by hand. The purpose of this paper is to present our work in recognizing facial expression changes in isolated ISL sentences. Facial gesture patterns result in changes of skin texture by forming wrinkles and furrows. The Gabor wavelet method is well known for capturing subtle textural changes on surfaces. Therefore, a unique approach was developed to model facial expression changes with Gabor wavelet parameters chosen from partitioned face areas. These parameters were incorporated with a Euclidean distance measure. A multi-class SVM classifier was used in this recognition system to identify facial expressions in isolated facial expression sequences in ISL. An accuracy of 92.12% was achieved by our proposed system.
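
    A hedged sketch of the feature step described above: convolving face-region patches with a small Gabor filter bank and keeping a per-filter response statistic as the texture descriptor fed to the SVM. The bank parameters and patch size are illustrative, not the paper's:

```python
import cv2
import numpy as np

# Convolve a face-region patch with a small Gabor filter bank and keep the
# mean response magnitude per filter as the texture descriptor.
def gabor_features(gray_patch):
    feats = []
    for theta in np.arange(0.0, np.pi, np.pi / 4):   # 4 orientations
        for lambd in (4.0, 8.0):                     # 2 wavelengths
            # args: ksize, sigma, theta, lambda, gamma (aspect), psi (phase)
            kern = cv2.getGaborKernel((21, 21), 4.0, theta, lambd, 0.5, 0.0)
            resp = cv2.filter2D(gray_patch, cv2.CV_32F, kern)
            feats.append(float(np.abs(resp).mean()))
    return np.array(feats)                           # 8-dimensional descriptor

patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in face patch
print(gabor_features(patch))
```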

  14. SVM-Based Spectrum Mobility Prediction Scheme in Mobile Cognitive Radio Networks

    Yao Wang; Zhongzhao Zhang; Lin Ma; Jiamei Chen

    2014-01-01

    Spectrum mobility as an essential issue has not been fully investigated in mobile cognitive radio networks (CRNs). In this paper, a novel support vector machine based spectrum mobility prediction (SVM-SMP) scheme is presented considering time-varying and space-varying characteristics simultaneously in mobile CRNs. The mobility of cognitive users (CUs) and the working activities of primary users (PUs) are analyzed in theory. And a joint feature vector extraction (JFVE) method is proposed based...

  15. A SVM-based method for sentiment analysis in Persian language

    Hajmohammadi, Mohammad Sadegh; Ibrahim, Roliana

    2013-03-01

    Persian is the official language of Iran, Tajikistan and Afghanistan. Local online users often express their opinions and experiences on the web in written Persian. Although the information in those reviews is valuable to potential consumers and sellers, the huge number of web reviews makes it difficult to give an unbiased evaluation of a product. In this paper, the standard machine learning techniques SVM and naive Bayes are applied to the domain of online Persian movie reviews to automatically classify user reviews as positive or negative, and the performance of the two classifiers is compared for this language. The effects of feature presentations on classification performance are discussed. We find that accuracy is influenced by the interaction between the classification models and the feature options. The SVM classifier achieves accuracy as good as or better than naive Bayes on Persian movie reviews. Unigrams prove to be better features than bigrams and trigrams in capturing Persian sentiment orientation.
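
    The comparison has a standard skeleton, sketched below with toy English stand-ins for the Persian reviews (which are not reproduced here): unigram features, then SVM versus naive Bayes under cross-validation:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

# Toy corpus standing in for the Persian movie reviews; 1 = positive, 0 = negative.
docs = ["great film loved it", "terrible plot boring acting",
        "wonderful direction superb", "awful waste of time",
        "loved the soundtrack great", "boring terrible awful"] * 10
labels = [1, 0, 1, 0, 1, 0] * 10

for name, clf in (("SVM", LinearSVC(dual=False)), ("NB", MultinomialNB())):
    pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 1)), clf)  # unigrams
    print(name, "accuracy: %.2f" % cross_val_score(pipe, docs, labels, cv=5).mean())
```

    Switching ngram_range to (1, 2) or (1, 3) reproduces the unigram-versus-bigram/trigram comparison the abstract describes.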

  16. Linear SVM-Based Android Malware Detection for Reliable IoT Services

    Hyo-Sik Ham; Hwan-Hee Kim; Myung-Sup Kim; Mi-Jung Choi

    2014-01-01

    Many current Internet of Things (IoT) services are monitored and controlled through smartphone applications. By combining IoT with smartphones, many convenient IoT services have been provided to users. However, there are adverse underlying effects in such services including invasion of privacy and information leakage. In most cases, mobile devices have become cluttered with important personal user information as various services and contents are provided through them. Accordingly, attackers a...

  17. SVM-based spectrum mobility prediction scheme in mobile cognitive radio networks.

    Wang, Yao; Zhang, Zhongzhao; Ma, Lin; Chen, Jiamei

    2014-01-01

    Spectrum mobility as an essential issue has not been fully investigated in mobile cognitive radio networks (CRNs). In this paper, a novel support vector machine based spectrum mobility prediction (SVM-SMP) scheme is presented considering time-varying and space-varying characteristics simultaneously in mobile CRNs. The mobility of cognitive users (CUs) and the working activities of primary users (PUs) are analyzed in theory. And a joint feature vector extraction (JFVE) method is proposed based on the theoretical analysis. Then spectrum mobility prediction is executed through the classification of SVM with a fast convergence speed. Numerical results validate that SVM-SMP gains better short-time prediction accuracy rate and miss prediction rate performance than the two algorithms just depending on the location and speed information. Additionally, a rational parameter design can remedy the prediction performance degradation caused by high speed SUs with strong randomness movements. PMID:25143975

  18. SVM-based classification of LV wall motion in cardiac MRI with the assessment of STE

    Mantilla, Juan; Garreau, Mireille; Bellanger, Jean-Jacques; Paredes, José Luis

    2015-01-01

    In this paper, we propose an automated method to classify normal/abnormal wall motion in Left Ventricle (LV) function in cardiac cine-Magnetic Resonance Imaging (MRI), taking as reference strain information obtained from 2D Speckle Tracking Echocardiography (STE). Without the need for pre-processing, and by exploiting all the images acquired during a cardiac cycle, spatio-temporal profiles are extracted from a subset of radial lines from the ventricle centroid to points outside the epicardial border. Classical Support Vector Machines (SVM) are used to classify features extracted from gray levels of the spatio-temporal profiles as well as their representations in the Wavelet domain, under the assumption that the data may be sparse in that domain. Based on information obtained from radial strain curves in 2D-STE studies, we label all the spatio-temporal profiles that belong to a particular segment as normal if the peak systolic radial strain curve of this segment presents normal kinesis, or abnormal if it presents hypokinesis or akinesis. For this study, short-axis cine-MR images are collected from 9 patients with cardiac dyssynchrony, for whom we have the radial strain tracings at the mid-papillary muscle obtained by 2D STE, and from a control group of 9 healthy subjects. The best classification performance is obtained with the gray level information of the spatio-temporal profiles using an RBF kernel, with 91.88% accuracy, 92.75% sensitivity and 91.52% specificity.

  19. Settlement Prediction of Road Soft Foundation Using a Support Vector Machine (SVM) Based on Measured Data

    Yu Huiling

    2016-01-01

    The support vector machine (SVM) is a relatively new artificial intelligence technique which is increasingly being applied to geotechnical problems and is yielding encouraging results. SVM is a new machine learning method based on statistical learning theory. A case study based on a road foundation engineering project shows that the forecast results are in good agreement with the measured data. The SVM model is also compared with a BP artificial neural network model and the traditional hyperbola method. The prediction results indicate that the SVM model has a better prediction ability than the BP neural network model and the hyperbola method. Therefore, settlement prediction based on the SVM model can reflect the actual settlement process more correctly. The results indicate that it is effective and feasible to use this method and that the nonlinear mapping relation between foundation settlement and its influencing factors can be expressed well. It will provide a new method to predict foundation settlement.
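
    A minimal sketch of the idea, with a synthetic hyperbolic-like settlement series standing in for the measured project data and invented SVR hyperparameters:

```python
import numpy as np
from sklearn.svm import SVR

# Mock settlement record (mm) over 40 weeks, roughly hyperbolic plus noise.
t = np.arange(1.0, 41.0)                        # weeks since loading
s = (120.0 * t / (t + 15.0)
     + np.random.default_rng(0).normal(0.0, 1.0, t.size))

train = (np.arange(t.size) % 2) == 0            # fit on alternate measurements
svr = SVR(kernel="rbf", C=100.0, epsilon=0.5, gamma=0.01)
svr.fit(t[train].reshape(-1, 1), s[train])

pred = svr.predict(t[~train].reshape(-1, 1))
print("mean abs. error on held-out weeks: %.2f mm"
      % np.mean(np.abs(pred - s[~train])))
```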

  20. SVM-based posture identification with a single waist-located triaxial accelerometer

    Rodríguez Martín, Daniel Manuel; Samà Monsonís, Albert; Pérez López, Carlos; Català Mallofré, Andreu; Cabestany Moncusí, Joan; Rodríguez Molinero, Alejandro

    2013-01-01

    Analysis of human body movement is an important research area, specially for health applications. In order to assess the quality of life of people with mobility problems like Parkinson’s disease o stroke patients, it is crucial to monitor and assess their daily life activities. The main goal of this work is the characterization of basic activities using a single triaxial accelerometer located at the waist. This paper presents a novel postural detection algorithm based in SVM methods which is ...

  1. Diagnosis of Elevator Faults with LS-SVM Based on Optimization by K-CV

    Zhou Wan; Shilin Yi; Kun Li; Ran Tao; Min Gou; Xinshi Li; Shu Guo

    2015-01-01

    Several common elevator malfunctions were diagnosed with a least square support vector machine (LS-SVM). After acquiring vibration signals of various elevator functions, their energy characteristics and time domain indicators were extracted by theoretically analyzing the optimal wavelet packet, in order to construct a feature vector of malfunctions for identifying causes of the malfunctions as input of LS-SVM. Meanwhile, parameters about LS-SVM were optimized by K-fold cross validation (K-CV)...

  2. SVM-based Multiview Face Recognition by Generalization of Discriminant Analysis

    Kisku, Dakshina Ranjan; Sing, Jamuna Kanta; Gupta, Phalguni

    2010-01-01

    Identity verification of authentic persons by their multiview faces is a real-valued problem in machine vision. Multiview faces are difficult to handle due to their non-linear representation in the feature space. This paper illustrates the usability of the generalization of LDA in the form of canonical covariates for face recognition with multiview faces. In the proposed work, the Gabor filter bank is used to extract facial features that are characterized by spatial frequency, spatial locality and orientation. The Gabor face representation captures a substantial amount of the variation in face instances that often occurs due to illumination, pose and facial expression changes. Convolution of the Gabor filter bank with face images of rotated profile views produces Gabor faces with high-dimensional feature vectors. Canonical covariates are then applied to the Gabor faces to reduce the high-dimensional feature spaces to low-dimensional subspaces. Finally, support vector machines are trained with canonical sub-spaces that contain reduced set o...

  3. SVM-based Multiview Face Recognition by Generalization of Discriminant Analysis

    Kisku, Dakshina Ranjan; Mehrotra, Hunny; Sing, Jamuna Kanta; Gupta, Phalguni

    2010-01-01

    Identity verification of authentic persons by their multiview faces is a real-valued problem in machine vision. Multiview faces are difficult to handle due to their non-linear representation in the feature space. This paper illustrates the usability of the generalization of LDA in the form of canonical covariates for face recognition with multiview faces. In the proposed work, the Gabor filter bank is used to extract facial features that are characterized by spatial frequency, spatial locality and orienta...

  4. SVM-based base-metal prospectivity modeling of the Aravalli Orogen, Northwestern India

    Porwal, Alok; Yu, Le; Gessner, Klaus

    2010-05-01

    The Proterozoic Aravalli orogen in the state of Rajasthan, northwestern India, constitutes the most important metallogenic province for base-metal deposits in India and hosts the entire economically viable lead-zinc resource-base of the country. The orogen evolved through near-orderly Wilson cycles of repeated extensional and compressional tectonics resulting in sequential opening and closing of intracratonic rifts and amalgamation of crustal domains during a circa 1.0-Ga geological history from 2.2 Ga to 1.0 Ga. This study develops a conceptual tectonostratigraphic model of the orogen based on a synthesis of the available geological, geophysical and geochronological data followed by deep-seismic-reflectivity-constrained 2-D forward gravity modeling, and links it to the Proterozoic base-metal metallogeny in the orogen in order to identify key geological controls on the base-metal mineralization. These controls are translated into exploration criteria for base-metal deposits, validated using empirical spatial analysis, and used to derive input spatial variables for model-based base-metal prospectivity mapping of the orogen. A support vector machine (SVM) algorithm augmented by incorporating a feature selection procedure is used in a GIS environment to implement the prospectivity mapping. A comparison of the SVM-derived prospectivity map with the ones derived using other established models such as neural-networks, logistic regression, and Bayesian weights-of-evidence indicates that the SVM outperforms other models, which is attributed to the capability of the SVM to return robust classification based on small training datasets.

  5. Fast SVM-based Feature Elimination Utilizing Data Radius, Hard-Margin, Soft-Margin

    Aksu, Yaman

    2012-01-01

    Margin maximization in the hard-margin sense, proposed as feature elimination criterion by the MFE-LO method, is combined here with data radius utilization to further aim to lower generalization error, as several published bounds and bound-related formulations pertaining to lowering misclassification risk (or error) pertain to radius e.g. product of squared radius and weight vector squared norm. Additionally, we propose additional novel feature elimination criteria that, while instead being i...
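
    For orientation, here is a standard margin-based SVM-RFE baseline (not the MFE-LO or radius-utilizing criteria of the paper), shown only to make the SVM feature elimination setting concrete on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# Recursive feature elimination: repeatedly drop the features with the
# smallest linear-SVM weights, a margin-based elimination criterion.
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)
selector = RFE(LinearSVC(dual=False), n_features_to_select=5, step=5)
selector.fit(X, y)
print("kept features:", [i for i, k in enumerate(selector.support_) if k])
```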

  6. SVM Based Navigation Patterns for Efficient Relevance Feedback in Content Based Image Retrieval

    Vaishali Meshram

    2012-07-01

    Image retrieval based on image content has become a hot topic in the fields of image processing and computer vision. Content-based image retrieval (CBIR) is the basis of image retrieval systems. Unlike traditional database queries, content-based multimedia retrieval queries are imprecise in nature, which makes it difficult for users to express their exact information need as a single precise query. To address this, relevance feedback techniques were incorporated into CBIR so that more precise results can be obtained by taking users' feedback into account. However, existing relevance-feedback-based CBIR methods usually require a number of feedback iterations to produce refined search results, especially in a large-scale image database, which is impractical and inefficient in real applications. Since research on ways to extend and improve query methods for image databases is widespread, we have developed the QBIC (Query by Image Content) system to explore content-based retrieval methods. To achieve high efficiency and effectiveness in CBIR, we use two kinds of methods: SVM (support vector machine) and NPRF (navigation-pattern-based relevance feedback). An SVM classifier serving as a category predictor of query and database images is exploited first to filter out irrelevant images by their different low-level, concept-based and keypoint-based features, thereby reducing the size of the query search in the database; the NPRF algorithm and refinement strategies are then applied for further retrieval. In terms of efficiency, the iterations of feedback are reduced by using the navigation patterns discovered from the user query log. In terms of effectiveness, the proposed search algorithm NPRF-Search makes use of the discovered navigation patterns and three kinds of query refinement strategies, Query Point Movement (QPM), Query Reweighting (QR), and Query Expansion (QEX), to converge the search space toward the user's intention effectively. By using the NPRF method, high-quality image retrieval with relevance feedback can be achieved within a small number of feedback iterations.

  7. SVM-Based Synthetic Fingerprint Discrimination Algorithm and Quantitative Optimization Strategy

    Chen, Suhang; Chang, Sheng; Huang, Qijun; He, Jin; Wang, Hao; Huang, Qiangui

    2014-01-01

    Synthetic fingerprints are a potential threat to automatic fingerprint identification systems (AFISs). In this paper, we propose an algorithm to discriminate synthetic fingerprints from real ones. First, four typical characteristic factors—the ridge distance features, global gray features, frequency feature and Harris Corner feature—are extracted. Then, a support vector machine (SVM) is used to distinguish synthetic fingerprints from real fingerprints. The experiments demonstrate that this method can achieve a recognition accuracy rate of over 98% for two discrete synthetic fingerprint databases as well as a mixed database. Furthermore, a performance factor that can evaluate the SVM's accuracy and efficiency is presented, and a quantitative optimization strategy is established for the first time. After the optimization of our synthetic fingerprint discrimination task, the polynomial kernel with a training sample proportion of 5% is the optimized value when the minimum accuracy requirement is 95%. The radial basis function (RBF) kernel with a training sample proportion of 15% is a more suitable choice when the minimum accuracy requirement is 98%. PMID:25347063

  8. Performance Analysis of a DTC and SVM Based Field-Orientation Control Induction Motor Drive

    Md. Rashedul Islam; Pintu Kumar Sadhu; Md. Maruful Islam; Md. Kamal Hossain

    2015-01-01

    This study presents a performance analysis of the two most popular control strategies for Induction Motor (IM) drives: direct torque control (DTC) and space vector modulation (SVM) strategies. The performance analysis is done by applying the field-orientation control (FOC) technique because of its good dynamic response. The theoretical principle and simulation results are discussed to study the dynamic performance of the drive system for the individual control strategies using actual parameters of inductio...

  9. Performance Analysis of a DTC and SVM Based Field-Orientation Control Induction Motor Drive

    Md. Rashedul Islam

    2015-02-01

    This study presents a performance analysis of the two most popular control strategies for Induction Motor (IM) drives: direct torque control (DTC) and space vector modulation (SVM) strategies. The performance analysis is done by applying the field-orientation control (FOC) technique because of its good dynamic response. The theoretical principle and simulation results are discussed to study the dynamic performance of the drive system for the individual control strategies using actual parameters of an induction motor. A closed-loop PI controller scheme has been used. The main purpose of this study is to minimize ripple in the torque response curve and to achieve quick speed response, as well as to investigate the conditions for optimum performance of the induction motor drive. Based on the simulation results, this study also presents a detailed comparison between direct torque control and space vector modulation based field-orientation control for the induction motor drive.

  10. [Application of optimized parameters SVM based on photoacoustic spectroscopy method in fault diagnosis of power transformer].

    Zhang, Yu-xin; Cheng, Zhi-feng; Xu, Zheng-ping; Bai, Jing

    2015-01-01

    In order to solve problems of the traditional power transformer fault diagnosis approach based on dissolved gas analysis (DGA), such as complex operation, consumption of carrier gas and long test periods, this paper proposes a new method which detects the content of 5 characteristic gases in transformer oil (CH4, C2H2, C2H4, C2H6 and H2) by photoacoustic spectroscopy and then calculates the three ratios C2H2/C2H4, CH4/H2 and C2H4/C2H6. Support vector machine models were constructed using cross-validation over five SVM formulations and four kernel functions; heuristic algorithms were used to optimize the penalty factor c and kernel parameter g so as to establish the SVM model with the highest fault diagnosis accuracy and the fastest computing speed. Two types of heuristic algorithms, particle swarm optimization and a genetic algorithm, were comparatively studied for optimization accuracy and speed. The simulation results show that the SVM model composed of C-SVC, the RBF kernel function and the genetic algorithm obtains 97.5% accuracy on the test sample set and 98.3333% accuracy on the training sample set, and that the genetic algorithm was about two times faster than particle swarm optimization. The method described in this paper has many advantages, such as simple operation, non-contact measurement, no consumption of carrier gas, short test periods, and high stability and sensitivity. The results show that the method can replace traditional transformer fault diagnosis by gas chromatography and meets actual project needs in transformer fault diagnosis. PMID:25993810
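
    The parameter search itself is easy to sketch. The paper optimizes c and g with heuristic algorithms; the plain cross-validated grid search below over the same two parameters is the simplest stand-in, run on synthetic data rather than DGA gas ratios:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Cross-validated search over the penalty factor C and RBF width gamma.
X, y = make_classification(n_samples=300, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]},
                    cv=5)
grid.fit(X, y)
print("best (C, gamma):", grid.best_params_,
      "CV accuracy: %.3f" % grid.best_score_)
```

    A GA or PSO replaces the exhaustive grid with a guided search over the same (C, gamma) space, using the cross-validated accuracy as the fitness function.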

  11. SVM Based Recognition of Facial Expressions Used In Indian Sign Language

    Daleesha M Viswanathan; Sumam Mary Idicula

    2015-01-01

    In sign language systems, facial expressions are an intrinsic component that usually accompanies hand gestures. The facial expressions would modify or change the meaning of hand gesture into a statement, a question or improve the meaning and understanding of hand gestures. The scientific literature available in Indian Sign Language (ISL) on facial expression recognition is scanty. Contrary to American Sign Language (ASL), head movements are less conspicuous in ISL and the answers to questions...

  12. Linear SVM-Based Android Malware Detection for Reliable IoT Services

    Hyo-Sik Ham

    2014-01-01

    Many current Internet of Things (IoT) services are monitored and controlled through smartphone applications. By combining IoT with smartphones, many convenient IoT services have been provided to users. However, there are adverse underlying effects in such services including invasion of privacy and information leakage. In most cases, mobile devices have become cluttered with important personal user information as various services and contents are provided through them. Accordingly, attackers are expanding the scope of their attacks beyond the existing PC and Internet environment into mobile devices. In this paper, we apply a linear support vector machine (SVM) to detect Android malware and compare the malware detection performance of the SVM with that of other machine learning classifiers. Through experimental validation, we show that the SVM outperforms the other machine learning classifiers.

  13. An SVM-Based Solution for Fault Detection in Wind Turbines

    Santos, Pedro; Villa, Luisa F.; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-01-01

    Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines for the diagnosis of mechanical faults in their mechanical transmission chain is insufficient. A successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, in which vibration signals are processed using angular resampling techniques and electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that linear kernel SVM outperforms other kernels and ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of linear SVM is also experimentally analyzed, to conclude that this data acquisition technique generates linearly separable datasets. PMID:25760051

  14. Fast and Accurate Construction of Confidence Intervals for Heritability.

    Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

    2016-06-01

    Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
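
    The parametric-bootstrap idea behind accurate heritability intervals can be sketched in a toy single-variance-component model (a simplified stand-in, not the ALBI algorithm or the GCTA/FaST-LMM pipeline):

```python
import numpy as np

# Toy model: y ~ N(0, h2*K + (1 - h2)*I) with a mock relatedness matrix K.
rng = np.random.default_rng(0)
n = 500
G = rng.standard_normal((n, 2 * n))
K = G @ G.T / (2 * n)                       # mock genetic relatedness matrix
d, U = np.linalg.eigh(K)                    # work in the eigenbasis of K

def estimate_h2(y, grid=np.linspace(0.0, 1.0, 101)):
    """Grid-search ML estimate of h2 (eigen-variances are h2*d_i + 1 - h2)."""
    z2 = (U.T @ y) ** 2
    nll = [np.sum(np.log(h * d + 1 - h) + z2 / (h * d + 1 - h)) for h in grid]
    return grid[int(np.argmin(nll))]

def simulate(h2):
    """Draw a phenotype vector with the given true heritability."""
    return U @ (np.sqrt(h2 * d + 1 - h2) * rng.standard_normal(n))

y = simulate(0.5)                           # "observed" data, true h2 = 0.5
h2_hat = estimate_h2(y)
boots = np.array([estimate_h2(simulate(h2_hat)) for _ in range(200)])
lo, hi = np.percentile(boots, [2.5, 97.5])
print("h2_hat = %.2f, 95%% bootstrap CI = [%.2f, %.2f]" % (h2_hat, lo, hi))
```

    The naive percentile interval above is the crude baseline; the paper's contribution is estimating the distribution of the estimator accurately and efficiently, including the boundary pile-up at h2 = 0 and 1 that breaks asymptotic intervals.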

  15. Niche Genetic Algorithm with Accurate Optimization Performance

    LIU Jian-hua; YAN De-kun

    2005-01-01

    Based on a crowding mechanism, a novel niche genetic algorithm was proposed which can record the evolutionary direction dynamically during evolution. After evolution, the solutions' precision can be greatly improved by means of local search along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although this method takes more time than the standard GA, it is well worth applying in cases that demand high solution precision.

  16. How accurately can we calculate thermal systems?

    The objective was to determine how accurately simple reactor lattice integral parameters can be determined, considering user input, differences in the methods, source data and the data processing procedures and assumptions. Three simple square lattice test cases with different fuel-to-moderator ratios were defined. The effect of the thermal scattering models was shown to be important and much bigger than the spread in the results. Nevertheless, differences of up to 0.4% in the K-eff calculated by continuous energy Monte Carlo codes were observed even when the same source data were used. (author)

  17. Accurate diagnosis is essential for amebiasis

    2004-01-01

    Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is the most widely distributed parasite in the world. In particular, Entamoeba histolytica infection in the developing countries is a significant health problem in amebiasis-endemic areas, with a significant impact on infant mortality[1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improving the quality of parasitological methods and widespread use of accurate techniques have improved our knowledge about the disease.

  18. Investigations on Accurate Analysis of Microstrip Reflectarrays

    Zhou, Min; Sørensen, S. B.; Kim, Oleksiy S.;

    2011-01-01

    An investigation on accurate analysis of microstrip reflectarrays is presented. Sources of error in reflectarray analysis are examined and solutions to these issues are proposed. The focus is on two sources of error, namely the determination of the equivalent currents to calculate the radiation pattern, and the inaccurate mutual coupling between array elements due to the lack of periodicity. To serve as reference, two offset reflectarray antennas have been designed, manufactured and measured at the DTU-ESA Spherical Near-Field Antenna Test Facility. Comparisons of simulated and measured data are...

  19. Accurate phylogenetic classification of DNA fragments based onsequence composition

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
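
    Composition-based classification can be illustrated in miniature: the sketch below trains a linear SVM on 4-mer frequency profiles of synthetic fragments from two mock clades with different GC content (PhyloPythia itself uses multi-class SVMs over richer oligonucleotide features at several taxonomic ranks):

```python
from itertools import product
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]
INDEX = {km: i for i, km in enumerate(KMERS)}

def kmer_profile(seq, k=4):
    """Normalized 4-mer frequency vector of a DNA fragment."""
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - k + 1):
        v[INDEX[seq[i:i + k]]] += 1
    return v / max(v.sum(), 1.0)

rng = np.random.default_rng(0)
def fragment(gc, length=1000):
    """Random fragment with a given GC content (mock 'clade' signature)."""
    p = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]   # A, C, G, T
    return "".join(rng.choice(list("ACGT"), size=length, p=p))

X = np.array([kmer_profile(fragment(gc)) for gc in [0.35] * 50 + [0.60] * 50])
y = np.array([0] * 50 + [1] * 50)                      # two mock clades
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, stratify=y,
                                      random_state=0)
clf = LinearSVC(dual=False).fit(Xtr, ytr)
print("held-out accuracy:", (clf.predict(Xte) == yte).mean())
```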

  20. Accurate radiative transfer calculations for layered media.

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  1. Accurate basis set truncation for wavefunction embedding

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012); doi:10.1021/ct300544e] to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  2. Accurate pose estimation for forensic identification

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  3. Accurate determination of characteristic relative permeability curves

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
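
    For context, the "standard interpretation methods" mentioned above reduce, for each phase in a steady-state test, to Darcy's law applied across the core. A minimal sketch (Python; the numbers are illustrative placeholders, not data from the paper):

        def effective_relative_permeability(q, mu, L, k, A, dp):
            """Standard steady-state interpretation of a coreflood: effective
            relative permeability of one phase from Darcy's law,
            kr = q * mu * L / (k * A * dp). SI units assumed throughout."""
            return q * mu * L / (k * A * dp)

        # Example: water phase at one injection rate (illustrative numbers)
        kr_w = effective_relative_permeability(
            q=1e-8,    # m^3/s injection rate of the phase
            mu=1e-3,   # Pa.s phase viscosity
            L=0.1,     # m core length
            k=1e-13,   # m^2 absolute permeability
            A=1.1e-3,  # m^2 core cross-section
            dp=9.1e4,  # Pa pressure drop across the core
        )
        print(kr_w)  # ~0.1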

  4. Accurate shear measurement with faint sources

    Zhang, Jun; Foucaud, Sebastien [Center for Astronomy and Astrophysics, Department of Physics and Astronomy, Shanghai Jiao Tong University, 955 Jianchuan road, Shanghai, 200240 (China); Luo, Wentao, E-mail: betajzhang@sjtu.edu.cn, E-mail: walt@shao.ac.cn, E-mail: foucaud@sjtu.edu.cn [Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Nandan Road 80, Shanghai, 200030 (China)

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  5. Evidence Based Selection of Housekeeping Genes

    de Jonge, Hendrik J.M.; Fehrmann, Rudolf S. N.; Eveline S. J. M. de Bont; Hofstra, Robert M. W.; Gerbens, Frans; Kamps, Willem A.; Vries, Elisabeth G. E.; van der Zee, Ate G.J.; te Meerman, Gerard J.; ter Elst, Arja

    2007-01-01

    For accurate and reliable gene expression analysis, normalization of gene expression data against housekeeping genes (reference or internal control genes) is required. It is known that commonly used housekeeping genes (e.g. ACTB, GAPDH, HPRT1, and B2M) vary considerably under different experimental conditions and therefore their use for normalization is limited. We performed a meta-analysis of 13,629 human gene array samples in order to identify the most stable expressed genes. Here we show n...

  6. Microarray results: how accurate are they?

    Mane Shrikant

    2002-08-01

    Full Text Available Abstract Background DNA microarray technology is a powerful technique that was recently developed in order to analyze thousands of genes in a short time. Presently, microarrays, or chips, of the cDNA type and oligonucleotide type are available from several sources. The number of publications in this area is increasing exponentially. Results In this study, microarray data obtained from two different commercially available systems were critically evaluated. Our analysis revealed several inconsistencies in the data obtained from the two different microarrays. Problems encountered included inconsistent sequence fidelity of the spotted microarrays, variability of differential expression, low specificity of cDNA microarray probes, discrepancy in fold-change calculation and lack of probe specificity for different isoforms of a gene. Conclusions In view of these pitfalls, data from microarray analysis need to be interpreted cautiously.

  7. Accurate Telescope Mount Positioning with MEMS Accelerometers

    Mészáros, László; Pál, András; Csépány, Gergely

    2014-01-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate and stateless positioning of telescope mounts. This provides a completely independent method from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the sub-arcminute range, which is well smaller than the field-of-view of conventional imaging telescope systems. Here we present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented in order to form part of a telescope control system.
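
    As a rough illustration of the sensing principle, the sketch below (Python) converts a static 3-axis accelerometer sample into pitch and roll angles, under the assumption that gravity is the only acceleration acting on the sensor; the function name and the sample reading are ours, not the paper's:

        import math

        def tilt_from_accel(ax, ay, az):
            """Estimate pitch and roll (radians) from a static 3-axis
            accelerometer reading, assuming gravity is the only acceleration."""
            pitch = math.atan2(-ax, math.hypot(ay, az))
            roll = math.atan2(ay, az)
            return pitch, roll

        # Example: sensor tilted slightly about the x-axis (readings in g)
        print([math.degrees(a) for a in tilt_from_accel(0.0, 0.17, 0.98)])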

  8. Accurate estimation of indoor travel times

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan;

    2014-01-01

    We present the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. InTraTime allows specifying temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include a minimal-effort setup and self-improving operations due to unsupervised learning---as it is able to adapt implicitly to factors influencing indoor travel times such as elevators, rotating doors or changes in building layout. We evaluate and compare the proposed InTraTime method to indoor adaptions...

  9. Accurate sky background modelling for ESO facilities

    Full text: Ground-based measurements like e.g. high-resolution spectroscopy are heavily influenced by several physical processes. Amongst others, line absorption/emission, airglow by OH molecules, and scattering of photons within the Earth's atmosphere make observations, in particular from facilities like the future European Extremely Large Telescope, a challenge. Additionally, emission from unresolved extrasolar objects, the zodiacal light, the moon and even thermal emission from the telescope and the instrument contribute significantly to the broad-band background over a wide wavelength range. In our talk we review these influences and give an overview of how they can be accurately modeled to increase the overall precision of spectroscopic and imaging measurements. (author)

  10. Toward Accurate and Quantitative Comparative Metagenomics.

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  11. Accurate valence band width of diamond

    An accurate width is determined for the valence band of diamond by imaging photoelectron momentum distributions for a variety of initial- and final-state energies. The experimental result of 23.0±0.2 eV agrees well with first-principles quasiparticle calculations (23.0 and 22.88 eV) and significantly exceeds the local-density-functional width, 21.5±0.2 eV. This difference quantifies effects of creating an excited hole state (with associated many-body effects) in a band measurement vs studying ground-state properties treated by local-density-functional calculations. copyright 1997 The American Physical Society

  12. Accurate Weather Forecasting for Radio Astronomy

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/~rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
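
    A toy version of the forecast pipeline described above (Python; the layer values and the effective atmospheric temperature are invented placeholders -- the real system evaluates Liebe's absorption model over the 60 model layers for 30 wavelengths):

        import math

        def zenith_opacity(layer_absorption):
            """Total zenith opacity (nepers): sum over layers of the layer's
            absorption coefficient times its thickness."""
            return sum(alpha * dz for alpha, dz in layer_absorption)

        def sky_brightness(tau, t_atm=260.0):
            """Single-slab radiative-transfer approximation to the atmospheric
            contribution to Tsys (K): T_sky = T_atm * (1 - exp(-tau)).
            t_atm is an assumed effective atmospheric temperature."""
            return t_atm * (1.0 - math.exp(-tau))

        # (alpha [nepers/km], thickness [km]) pairs for a toy 3-layer atmosphere
        layers = [(0.005, 1.0), (0.003, 2.0), (0.001, 5.0)]
        tau = zenith_opacity(layers)
        print(tau, sky_brightness(tau))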

  13. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: travelers prefer the route reported to be in the best condition, while delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, reducing capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
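
    The boundedly rational choice rule is compact enough to state as code; a minimal sketch (Python, with hypothetical travel-time inputs):

        import random

        def choose_route(t1, t2, br):
            """Boundedly rational route choice: if the (possibly delayed)
            travel-time difference is below the threshold br, pick a route
            at random; otherwise take the faster route."""
            if abs(t1 - t2) < br:
                return random.choice([0, 1])
            return 0 if t1 < t2 else 1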

  14. Many accurate small-discriminatory feature subsets exist in microarray transcript data: biomarker discovery

    Grate Leslie R

    2005-04-01

    Full Text Available Abstract Background Molecular profiling generates abundance measurements for thousands of gene transcripts in biological samples such as normal and tumor tissues (data points). Given such two-class high-dimensional data, many methods have been proposed for classifying data points into one of the two classes. However, finding very small sets of features able to correctly classify the data is problematic, as the underlying combinatorial problem is hard. Existing methods can find "small" feature sets, but give no hint how close this is to the true minimum size. Without fundamental mathematical advances, finding true minimum-size sets will remain elusive, and more importantly for the microarray community there will be no methods for finding them. Results We use the brute force approach of exhaustive search through all genes, gene pairs (and for some data sets, gene triples). Each unique gene combination is analyzed with a few-parameter linear-hyperplane classification method, looking for those combinations that form training-error-free classifiers. All 10 published data sets studied are found to contain predictive small feature sets. Four contain thousands of gene pairs and six have single genes that perfectly discriminate. Conclusion This technique discovered small sets of genes (3 or fewer) in published data that form accurate classifiers, yet were not reported in the prior publications. This could be a common characteristic of microarray data, thus making the search worth the computational cost. Such small gene sets could indicate biomarkers and portend simple medical diagnostic tests. We recommend checking for small gene sets routinely. We find 4 gene pairs and many gene triples in the large hepatocellular carcinoma (HCC, liver cancer) data set of Chen et al. The key component of these is the "placental gene of unknown function", PLAC8. Our HMM modeling indicates PLAC8 might have a domain like part of lP59's crystal structure (a Non
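
    The exhaustive pair search is straightforward to sketch. Below, linear discriminant analysis stands in for the paper's "few-parameter linear-hyperplane" classifier; the toy data and the classifier choice are our assumptions (Python):

        from itertools import combinations

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def error_free_pairs(X, y):
            """Exhaustively test all gene pairs; keep pairs for which a simple
            linear classifier attains zero training error.
            X: samples x genes expression matrix, y: binary class labels."""
            hits = []
            for i, j in combinations(range(X.shape[1]), 2):
                clf = LinearDiscriminantAnalysis().fit(X[:, [i, j]], y)
                if clf.score(X[:, [i, j]], y) == 1.0:
                    hits.append((i, j))
            return hits

        # Toy data: 20 samples, 50 genes, labels driven by genes 3 and 7
        rng = np.random.default_rng(0)
        X = rng.normal(size=(20, 50))
        y = (X[:, 3] + X[:, 7] > 0).astype(int)
        print(error_free_pairs(X, y)[:5])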

  15. Fast and Provably Accurate Bilateral Filtering.

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
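
    For reference, the brute-force O(S)-per-pixel computation that the proposed approximation replaces looks as follows (a Python sketch of the textbook definition, not the paper's fast algorithm; parameter values are placeholders):

        import numpy as np

        def bilateral_direct(img, sigma_s=3.0, sigma_r=0.1, radius=6):
            """Direct bilateral filter of a 2-D float image in [0, 1]:
            each output pixel is a Gaussian-spatial times Gaussian-range
            weighted average over a (2*radius+1)^2 window."""
            h, w = img.shape
            pad = np.pad(img, radius, mode="reflect")
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
            out = np.empty_like(img)
            for i in range(h):
                for j in range(w):
                    win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    rng = np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))
                    wgt = spatial * rng
                    out[i, j] = (wgt * win).sum() / wgt.sum()
            return out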

  16. Accurate adiabatic correction in the hydrogen molecule

    Pachucki, Krzysztof, E-mail: krp@fuw.edu.pl [Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw (Poland); Komasa, Jacek, E-mail: komasa@man.poznan.pl [Faculty of Chemistry, Adam Mickiewicz University, Umultowska 89b, 61-614 Poznań (Poland)

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  17. Accurate fission data for nuclear safety

    Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S

    2013-01-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high-precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high-current light-ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron-induced fission yields. Thanks to a series of mass separating elements, culminating in the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...

  18. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    Farmerie William G

    2006-08-01

    Full Text Available Abstract Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae). Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy

  19. Towards a more accurate concept of fuels

    Full text: The introduction of LEU in Atucha and the approval of CARA show an advance in Argentine power station fuels, which stimulates and points to a direction to follow. In the first case, the use of enriched U fuel relaxes an important restriction related to neutronic economy; this means that it is possible to design less penalized fuels using more Zry. The second case allows a decrease in the linear power of the rods, enabling better performance of the fuel in both normal and accident conditions. In this work we wish to emphasize this last point, seeking a design in which the surface power of the rod is diminished. Hence, in accident conditions caused by loss of coolant, the cladding tube will not reach temperatures that produce oxidation, with the corresponding H2 formation, nor enough plasticity to form blisters that would obstruct reflooding, nor the hydriding that produces embrittlement and rupture of the cladding tube, with the corresponding dispersion of radioactive material. This work aims to find rod designs with quasi-rectangular geometry that lower the surface power of the rods, in order to obtain a lower central rod temperature. Thus, critical temperatures will not be reached in case of loss of coolant. This design is becoming a reality after PPFAE's efforts toward fabricating cladding tubes with different circumferential profiles, rectangular in particular. This geometry, with an appropriate pellet design, can minimize pellet-cladding interaction and, through an accurate choice of width, allow the use of non-rectified pellets. This means an important economy in pellet production, as well as an advance in the future fabrication of fuels in glove boxes and hot cells. The sequence used to determine critical geometrical parameters is described and some rod configurations are explored

  20. Accurate orbit propagation with planetary close encounters

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both from the point of view of the dynamical stability of the formulation and of the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).

  1. Accurate paleointensities - the multi-method approach

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  2. Towards Accurate Application Characterization for Exascale (APEX)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  3. Accurate hydrocarbon estimates attained with radioactive isotope

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample
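
    The resistivity-to-saturation step alluded to above is conventionally carried out with Archie's equation (the standard petrophysical relation, quoted here as background rather than from this record):

        S_w = \left( \frac{a\,R_w}{\phi^{m}\,R_t} \right)^{1/n}

    Here S_w is the connate water saturation, R_w the connate water resistivity (which is why contamination by mud filtrate matters), R_t the measured formation resistivity, \phi the porosity, and a, m, n empirical constants; an error in R_w propagates directly into the oil-in-place estimate.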

  4. Fast, accurate standardless XRF analysis with IQ+

    Full text: Due to both chemical and physical effects, the most accurate XRF data are derived from calibrations set up using in-type standards, necessitating some prior knowledge of the samples being analysed. Whilst this is often the case for routine samples, particularly in production control, for completely unknown samples the identification and availability of in-type standards can be problematic. Under these circumstances standardless analysis can offer a viable solution. Successful analysis of completely unknown samples requires a complete chemical overview of the specimen together with the flexibility of a fundamental parameters (FP) algorithm to handle wide-ranging compositions. Although FP algorithms are improving all the time, most still require set-up samples to define the spectrometer response to a particular element. Whilst such materials may be referred to as standards, the emphasis in this kind of analysis is that only a single calibration point is required per element and that the standard chosen does not have to be in-type. The high sensitivities of modern XRF spectrometers, together with recent developments in detector counting electronics that possess a large dynamic range and high-speed data processing capacity, bring significant advances to fast, standardless analysis. Illustrated with a tantalite-columbite heavy-mineral concentrate grading use-case, this paper will present the philosophy behind the semi-quantitative IQ+ software and the required hardware. This combination can give a rapid scan-based overview and quantification of the sample in less than two minutes, together with the ability to define channels for specific elements of interest where higher accuracy and lower levels of quantification are required. The accuracy, precision and limitations of standardless analysis will be assessed using certified reference materials of widely differing chemical and physical composition. Copyright (2002) Australian X-ray Analytical Association Inc

  5. An accurate and efficient experimental approach for characterization of the complex oral microbiota

    Zheng, Wei; Tsompana, Maria; Ruscitto, Angela; Sharma, Ashu; Genco, Robert; Sun, Yijun; Buck, Michael J.

    2015-01-01

    Background Currently, taxonomic interrogation of microbiota is based on amplification of 16S rRNA gene sequences in clinical and scientific settings. Accurate evaluation of the microbiota depends heavily on the primers used, and genus/species resolution bias can arise with amplification of non-representative genomic regions. The latest Illumina MiSeq sequencing chemistry has extended the read length to 300 bp, enabling deep profiling of a large number of samples in a single paired-end reaction ...

  6. Accurate Jones Matrix of the Practical Faraday Rotator

    王林斗; 祝昇翔; 李玉峰; 邢文烈; 魏景芝

    2003-01-01

    The Jones matrix of practical Faraday rotators is often used in the engineering calculation of non-reciprocal optical fields. Nevertheless, only the approximate Jones matrix of practical Faraday rotators has been available until now. Based on the theory of polarized light, this paper presents the accurate Jones matrix of practical Faraday rotators. In addition, an experiment has been carried out to verify the validity of the accurate Jones matrix. This matrix accurately describes the optical characteristics of practical Faraday rotators, including rotation, loss and depolarization of the polarized light. The accurate Jones matrix can be used to obtain accurate results when a practical Faraday rotator transforms polarized light, which paves the way for the accurate analysis and calculation of practical Faraday rotators in relevant engineering applications.
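
    For orientation, the familiar idealized form (a standard textbook sketch, not the paper's full matrix) combines a rotation by the Faraday angle \theta with a scalar loss factor k; the paper's accurate matrix additionally captures depolarization:

        J(\theta) = \sqrt{k}\,\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}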

  7. Selective pressures for accurate altruism targeting: evidence from digital evolution for difficult-to-test aspects of inclusive fitness theory

    Clune, Jeff; Goldsby, Heather J.; Ofria, Charles; Pennock, Robert T

    2010-01-01

    Inclusive fitness theory predicts that natural selection will favour altruist genes that are more accurate in targeting altruism only to copies of themselves. In this paper, we provide evidence from digital evolution in support of this prediction by competing multiple altruist-targeting mechanisms that vary in their accuracy in determining whether a potential target for altruism carries a copy of the altruist gene. We compete altruism-targeting mechanisms based on (i) kinship (kin targeting),...

  8. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depends critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  9. Accurate formulas for the penalty caused by interferometric crosstalk

    Rasmussen, Christian Jørgen; Liu, Fenghai; Jeppesen, Palle

    2000-01-01

    New simple formulas for the penalty caused by interferometric crosstalk in PIN receiver systems and optically preamplified receiver systems are presented. They are more accurate than existing formulas.

  10. A new, accurate predictive model for incident hypertension

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.

  11. 78 FR 34604 - Submitting Complete and Accurate Information

    2013-06-10

    ... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license.'' DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...

  12. A Novel Method for Accurate Operon Predictions in All SequencedProkaryotes

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.

    2004-12-01

    We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
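
    A toy distance-only scorer in the spirit of the method (Python; the histograms, bin width, and names are illustrative -- the actual method self-trains these distributions per genome and folds in comparative genomic measures):

        import math

        def operon_log_odds(distance, same_hist, diff_hist, bin_width=10):
            """Distance-only operon score: log-likelihood ratio of the
            intergenic distance under genome-specific histograms of distances
            for same-operon vs. different-operon adjacent gene pairs.
            Histograms map bin index -> probability (assumed estimated from
            the genome itself, mirroring the paper's self-training idea)."""
            b = int(distance) // bin_width
            p_same = same_hist.get(b, 1e-6)
            p_diff = diff_hist.get(b, 1e-6)
            return math.log(p_same / p_diff)

        # Adjacent genes 12 bp apart; short distances enriched within operons
        same_hist = {0: 0.30, 1: 0.25, 2: 0.15}
        diff_hist = {0: 0.02, 1: 0.04, 2: 0.06}
        print(operon_log_odds(12, same_hist, diff_hist))  # > 0: same operon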

  13. Selective pressures for accurate altruism targeting: evidence from digital evolution for difficult-to-test aspects of inclusive fitness theory.

    Clune, Jeff; Goldsby, Heather J; Ofria, Charles; Pennock, Robert T

    2011-03-01

    Inclusive fitness theory predicts that natural selection will favour altruist genes that are more accurate in targeting altruism only to copies of themselves. In this paper, we provide evidence from digital evolution in support of this prediction by competing multiple altruist-targeting mechanisms that vary in their accuracy in determining whether a potential target for altruism carries a copy of the altruist gene. We compete altruism-targeting mechanisms based on (i) kinship (kin targeting), (ii) genetic similarity at a level greater than that expected of kin (similarity targeting), and (iii) perfect knowledge of the presence of an altruist gene (green beard targeting). Natural selection always favoured the most accurate targeting mechanism available. Our investigations also revealed that evolution did not increase the altruism level when all green beard altruists used the same phenotypic marker. The green beard altruism levels stably increased only when mutations that changed the altruism level also changed the marker (e.g. beard colour), such that beard colour reliably indicated the altruism level. For kin- and similarity-targeting mechanisms, we found that evolution was able to stably adjust altruism levels. Our results confirm that natural selection favours altruist genes that are increasingly accurate in targeting altruism to only their copies. Our work also emphasizes that the concept of targeting accuracy must include both the presence of an altruist gene and the level of altruism it produces. PMID:20843843

  14. A Novel PCR-Based Approach for Accurate Identification of Vibrio parahaemolyticus.

    Li, Ruichao; Chiou, Jiachi; Chan, Edward Wai-Chi; Chen, Sheng

    2016-01-01

    A PCR-based assay was developed for more accurate identification of Vibrio parahaemolyticus through targeting the blaCARB-17-like element, an intrinsic β-lactamase gene that may also be regarded as a novel species-specific genetic marker of this organism. Homologous analysis showed that blaCARB-17-like genes were more conserved than the tlh, toxR and atpA genes, the genetic markers commonly used as detection targets in identification of V. parahaemolyticus. Our data showed that this blaCARB-17-specific PCR-based detection approach consistently achieved 100% specificity, whereas PCR targeting the tlh and atpA genes occasionally produced false positive results. Furthermore, a positive result of this test is consistently associated with an intrinsic ampicillin resistance phenotype of the test organism, presumably conferred by the products of blaCARB-17-like genes. We envision that combined analysis of the unique genetic and phenotypic characteristics conferred by blaCARB-17 shall further enhance the detection specificity of this novel yet easy-to-use detection approach to a level superior to the conventional methods used in V. parahaemolyticus detection and identification. PMID:26858713

  15. Accurate calculation of diffraction-limited encircled and ensquared energy.

    Andersen, Torben B

    2015-09-01

    Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that falls outside a square or rectangular large detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
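
    For reference, the classical closed form for the encircled energy of the aberration-free Airy pattern (Rayleigh's result for an unobstructed circular aperture, quoted here as background rather than taken from the paper) is

        E(v) = 1 - J_0^{2}(v) - J_1^{2}(v), \qquad v = \frac{\pi D r}{\lambda f},

    where r is the radius in the focal plane, D the aperture diameter, f the focal length, and J_0, J_1 Bessel functions of the first kind.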

  16. Accurately bearing measurement in non-cooperative passive location system

    A system of non-cooperative passive location based on an array is proposed. In the system, the target is detected by beamforming and Doppler matched filtering, and the bearing is measured by a long-baseline interferometer composed of widely separated sub-arrays. With a long baseline, the bearing is measured accurately but ambiguously. To realize unambiguous, accurate bearing measurement, a beamwidth and multiple-constraint adaptive beamforming technique is used to resolve the azimuth ambiguity. Theory and simulation results show that this method is effective for accurate bearing measurement in non-cooperative passive location systems. (authors)
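
    The trade-off driving this design follows from the standard two-element interferometer phase relation (background, not quoted from the record): for sub-array separation d and wavelength \lambda,

        \phi = \frac{2\pi d}{\lambda}\,\sin\theta \pmod{2\pi}, \qquad \sin\theta = \frac{\lambda\,(\phi + 2\pi m)}{2\pi d},

    so a long baseline d shrinks the bearing error for a given phase error, but admits several integer ambiguities m; resolving m is exactly what the adaptive beamforming step is for.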

  17. Gene finding in novel genomes

    Korf Ian

    2004-05-01

    Full Text Available Abstract Background Computational gene prediction continues to be an important problem, especially for genomes with little experimental data. Results I introduce the SNAP gene finder which has been designed to be easily adaptable to a variety of genomes. In novel genomes without an appropriate gene finder, I demonstrate that employing a foreign gene finder can produce highly inaccurate results, and that the most compatible parameters may not come from the nearest phylogenetic neighbor. I find that foreign gene finders are more usefully employed to bootstrap parameter estimation and that the resulting parameters can be highly accurate. Conclusion Since gene prediction is sensitive to species-specific parameters, every genome needs a dedicated gene finder.

  18. RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome

    Dewey Colin N

    2011-08-01

    Full Text Available Abstract Background RNA-Seq is revolutionizing the way transcript abundances are measured. A key challenge in transcript quantification from RNA-Seq data is the handling of reads that map to multiple genes or isoforms. This issue is particularly important for quantification with de novo transcriptome assemblies in the absence of sequenced genomes, as it is difficult to determine which transcripts are isoforms of the same gene. A second significant issue is the design of RNA-Seq experiments, in terms of the number of reads, read length, and whether reads come from one or both ends of cDNA fragments. Results We present RSEM, a user-friendly software package for quantifying gene and isoform abundances from single-end or paired-end RNA-Seq data. RSEM outputs abundance estimates, 95% credibility intervals, and visualization files and can also simulate RNA-Seq data. In contrast to other existing tools, the software does not require a reference genome. Thus, in combination with a de novo transcriptome assembler, RSEM enables accurate transcript quantification for species without sequenced genomes. On simulated and real data sets, RSEM has superior or comparable performance to quantification methods that rely on a reference genome. Taking advantage of RSEM's ability to effectively use ambiguously-mapping reads, we show that accurate gene-level abundance estimates are best obtained with large numbers of short single-end reads. On the other hand, estimates of the relative frequencies of isoforms within single genes may be improved through the use of paired-end reads, depending on the number of possible splice forms for each gene. Conclusions RSEM is an accurate and user-friendly software tool for quantifying transcript abundances from RNA-Seq data. As it does not rely on the existence of a reference genome, it is particularly useful for quantification with de novo transcriptome assemblies. In addition, RSEM has enabled valuable guidance for cost

  19. Accurate backgrounds to Higgs production at the LHC

    Kauer, N

    2007-01-01

    Corrections of 10-30% for backgrounds to the H --> WW --> l+l- + missing-pT search in vector boson and gluon fusion at the LHC are reviewed to make the case for precise and accurate theoretical background predictions.

  20. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    2009-01-01

    In this paper, a second-order linear differential equation is considered, and an accurate method for estimating its characteristic exponents is presented. Finally, we give some examples to verify the feasibility of our result.

  1. Accurate wall thickness measurement using autointerference of circumferential Lamb wave

    In this paper, a method of accurately measuring pipe wall thickness using a noncontact air-coupled ultrasonic transducer (NAUT) is presented. In this method, accurate measurement of the angular wave number (AWN) is a key technique, because the AWN changes minutely with the wall thickness. An autointerference of the circumferential (C-) Lamb wave is used for accurate measurement of the AWN. The principle of the method is first explained. A modified method for measuring the wall thickness near a butt weld line is also proposed, and its accuracy was evaluated to be within a 6 μm error. It is also shown in the paper that wall thickness measurement can be carried out accurately, independent of differences among sensors, by calibrating the frequency response of the sensors. (author)

  2. Highly Accurate Sensor for High-Purity Oxygen Determination Project

    National Aeronautics and Space Administration — In this STTR effort, Los Gatos Research (LGR) and the University of Wisconsin (UW) propose to develop a highly-accurate sensor for high-purity oxygen determination....

  3. Accurately measuring dynamic coefficient of friction in ultraform finishing

    Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.

    2013-09-01

    UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
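
    The bridge from μ to removal rate runs through Preston's equation, which the abstract invokes; in its standard form

        \frac{dz}{dt} = K_p \, P \, V,

    where dz/dt is the material removal rate, P the applied pressure, V the relative velocity between belt and part, and K_p the Preston coefficient. Reading the friction measurements as tracking how K_p (and hence the removal function) drifts with belt wear is our gloss on the abstract.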

  4. Simple and accurate analytical calculation of shortest path lengths

    Melnik, Sergey

    2016-01-01

    We present an analytical approach to calculating the distribution of shortest path lengths (also called intervertex distances, or geodesic paths) between nodes in unweighted undirected networks. We obtain very accurate results for synthetic random networks with specified degree distribution (the so-called configuration model networks). Our method allows us to accurately predict the distribution of shortest path lengths on real-world networks using their degree distribution, or joint degree-degree distribution. Compared to some other methods, our approach is simpler and yields more accurate results. In order to obtain the analytical results, we use the analogy between an infection reaching a node in $n$ discrete time steps (i.e., as in the susceptible-infected epidemic model) and that node being at a distance $n$ from the source of the infection.
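
    For checking the analytical predictions, the target distribution can be computed exactly by breadth-first search; a small sketch (Python with networkx; the regular degree sequence is an arbitrary example):

        from collections import Counter

        import networkx as nx

        def shortest_path_length_distribution(G):
            """Empirical distribution of shortest path lengths over all
            connected node pairs of an unweighted, undirected graph (the
            quantity the analytical approach approximates)."""
            counts = Counter()
            for _, dists in nx.all_pairs_shortest_path_length(G):
                counts.update(d for d in dists.values() if d > 0)
            total = sum(counts.values())
            return {d: c / total for d, c in sorted(counts.items())}

        # Configuration-model network with a fixed degree sequence
        deg = [3] * 1000
        G = nx.Graph(nx.configuration_model(deg, seed=1))
        G.remove_edges_from(nx.selfloop_edges(G))
        print(shortest_path_length_distribution(G))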

  5. Accurate and Simple Calibration of DLP Projector Systems

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively, and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP projector systems based on phase shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here does not rely on an initial camera calibration, and so does not carry the error over into projector calibration. A radial interpolation scheme is used to convert feature coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination of...

  6. Accurate level set method for simulations of liquid atomization☆

    Changxiao Shao; Kun Luo; Jianshan Yang; Song Chen; Jianren Fan

    2015-01-01

    Computational fluid dynamics is an efficient numerical approach for spray atomization study, but it is challenging to accurately capture the gas–liquid interface. In this work, an accurate conservative level set method is introduced to accurately track the gas–liquid interfaces in liquid atomization. To validate the capability of this method, binary drop collision and drop impact on a liquid film are investigated. The results are in good agreement with experimental observations. In addition, primary atomization (swirling sheet atomization) is studied using this method. For the swirling sheet atomization, it is found that Rayleigh–Taylor instability in the azimuthal direction causes the primary breakup of the liquid sheet, and complex vortex structures cluster around the rim of the liquid sheet. The effects of central gas velocity and liquid–gas density ratio on atomization are also investigated. This work lays a solid foundation for further studying the mechanism of spray atomization.

  7. Accurate nuclear radii and binding energies from a chiral interaction

    Ekstrom, A; Wendt, K A; Hagen, G; Papenbrock, T; Carlsson, B D; Forssen, C; Hjorth-Jensen, M; Navratil, P; Nazarewicz, W

    2015-01-01

    The accurate reproduction of nuclear radii and binding energies is a long-standing challenge in nuclear theory. To address this problem two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective 3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  8. On the Use of Time–Frequency Reassignment and SVM-Based Classifier for Audio Surveillance Applications

    Souli S. Sameh

    2014-11-01

    Full Text Available In this paper, we propose a robust environmental sound spectrogram classification approach intended for surveillance and security applications, based on the reassignment method and log-Gabor filters. The reassignment method is applied to the spectrogram to improve the readability of the time-frequency representation and to ensure better localization of the signal components. Our approach includes three methods. In the first two, the reassigned spectrograms are passed through appropriate log-Gabor filter banks; the outputs are averaged and undergo an optimal feature selection procedure based on a mutual information criterion. The third method uses the same steps but applies them only to three patches extracted from each reassigned spectrogram. The proposed approach is tested on a large database consisting of 1000 sounds belonging to ten classes. The recognition is based on multiclass support vector machines.
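
    A skeletal version of the classification chain (Python; plain spectrogram statistics stand in for the reassigned-spectrogram/log-Gabor features, and all names are ours):

        import numpy as np
        from scipy.signal import spectrogram
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def spectrogram_features(signal, fs=16000):
            """Crude stand-in for the paper's features: mean and standard
            deviation of log-power in each frequency bin."""
            _, _, sxx = spectrogram(signal, fs=fs, nperseg=256)
            log_sxx = np.log(sxx + 1e-10)
            return np.hstack([log_sxx.mean(axis=1), log_sxx.std(axis=1)])

        # sounds: list of 1-D arrays; labels: one of ten class ids per sound
        def train_classifier(sounds, labels):
            X = np.vstack([spectrogram_features(s) for s in sounds])
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
            return clf.fit(X, labels)  # SVC handles multiclass one-vs-one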

  9. DTC-SVM Based on PI Torque and PI Flux Controllers to Achieve High Performance of Induction Motor

    Hassan Farhan Rashag

    2014-01-01

    Full Text Available The fundamental idea of direct torque control of induction machines is investigated in order to emphasize the effect that a given voltage vector produces on stator flux and torque variations. The proposed control system is based on space vector modulation (SVM) of electrical machines, an improved model reference adaptive system, real-time estimation of stator resistance, and estimation of stator flux. The purpose of this control is to minimize electromagnetic torque and flux ripple and to minimize distortion of the stator current. In the proposed method, PI torque and PI flux controllers are designed to track the reference torque and flux with fast response and no steady-state error. In addition, the PI torque and PI flux controllers are used to optimize the voltages in the d-q reference frame that are applied to the SVM. The simulation results of the proposed DTC-SVM show excellent performance in steady and transient states compared with classical DTC-SVM.

  10. Binary classification SVM-based algorithms with interval-valued training data using triangular and Epanechnikov kernels.

    Utkin, Lev V; Chekh, Anatoly I; Zhuk, Yulia A

    2016-08-01

    Classification algorithms based on different forms of support vector machines (SVMs) for dealing with interval-valued training data are proposed in the paper. L2-norm and L∞-norm SVMs are used for constructing the algorithms. The main idea allowing us to represent the complex optimization problems as a set of simple linear or quadratic programming problems is to approximate the Gaussian kernel by the well-known triangular and Epanechnikov kernels. The minimax strategy is used to choose an optimal probability distribution from the set and to construct optimal separating functions. Numerical experiments illustrate the algorithms. PMID:27179616
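
    Both replacement kernels are easy to state and to plug into a standard SVM through a precomputed Gram matrix; a sketch (Python; the bandwidth h and toy data are assumptions, and positive semi-definiteness of the resulting Gram matrix should be checked for the chosen h):

        import numpy as np
        from scipy.spatial.distance import cdist
        from sklearn.svm import SVC

        def triangular_gram(X, Y, h=1.0):
            """Triangular kernel K(x, y) = max(0, 1 - ||x - y|| / h)."""
            return np.maximum(0.0, 1.0 - cdist(X, Y) / h)

        def epanechnikov_gram(X, Y, h=1.0):
            """Epanechnikov kernel K(x, y) = max(0, 1 - ||x - y||^2 / h^2)."""
            return np.maximum(0.0, 1.0 - cdist(X, Y, "sqeuclidean") / h**2)

        # Toy linearly structured data
        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 2))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)

        clf = SVC(kernel="precomputed").fit(triangular_gram(X, X, h=2.0), y)
        print(clf.score(triangular_gram(X, X, h=2.0), y))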

  11. Classification of Convective and Stratiform Cells in Meteorological Radar Images Using SVM Based on a Textural Analysis

    Abdenasser Djafri; Boualem Haddad

    2014-01-01

    This contribution deals with the discrimination between stratiform and convective cells in meteorological radar images. The study is based on a textural analysis of these images and their classification using a support vector machine (SVM). First, we compute different textural parameters such as energy, entropy, inertia, and local homogeneity. Through these experiments, we identify the textural features of both stratiform and convective cells. Then, we use an SVM to find the parameter that best discriminates between the two types of clouds. The main goal of this work is to better apply the Palmer and Marshall Z-R relations specific to each type of precipitation.
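
    A minimal sketch of such a texture pipeline (the patch preparation, grey-level count and SVM kernel are illustrative assumptions): the four parameters named above can be read off a grey-level co-occurrence matrix, e.g. with scikit-image, where the 'contrast' property corresponds to inertia.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops   # 'greycomatrix' in older scikit-image
        from sklearn.svm import SVC

        def texture_features(img, levels=32):
            """Energy, entropy, inertia (contrast) and local homogeneity from a GLCM."""
            q = (img.astype(float) / img.max() * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                                symmetric=True, normed=True)
            p = glcm[:, :, 0, 0]
            entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
            return [graycoprops(glcm, 'energy')[0, 0],
                    entropy,
                    graycoprops(glcm, 'contrast')[0, 0],
                    graycoprops(glcm, 'homogeneity')[0, 0]]

        # X = np.array([texture_features(patch) for patch in radar_patches])
        # clf = SVC(kernel='rbf').fit(X, labels)   # 0 = stratiform, 1 = convective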

  12. SVM-Based Classification of Segmented Airborne LiDAR Point Clouds in Urban Areas

    Xiaogang Ning; Xiangguo Lin; Jixian Zhang

    2013-01-01

    Object-based point cloud analysis (OBPA) is useful for information extraction from airborne LiDAR point clouds. An object-based classification method is proposed herein for classifying airborne LiDAR point clouds in urban areas. In the classification process, a surface growing algorithm is employed to cluster the point clouds without outliers, thirteen features covering geometry, radiometry, topology and echo characteristics are calculated, and a support vector machine (SVM) is ...

  13. SVM-based prediction of propeptide cleavage sites in spider toxins identifies toxin innovation in an Australian tarantula.

    Emily S W Wong

    Spider neurotoxins are commonly used as pharmacological tools and are a popular source of novel compounds with therapeutic and agrochemical potential. Since venom peptides are inherently toxic, the host spider must employ strategies to avoid adverse effects prior to venom use. It is partly for this reason that most spider toxins encode a protective proregion that upon enzymatic cleavage is excised from the mature peptide. In order to identify the mature toxin sequence directly from toxin transcripts, without resorting to protein sequencing, the propeptide cleavage site in the toxin precursor must be predicted bioinformatically. We evaluated different machine learning strategies (support vector machines, hidden Markov model and decision tree) and developed an algorithm (SpiderP) for prediction of propeptide cleavage sites in spider toxins. Our strategy uses a support vector machine (SVM) framework that combines both local and global sequence information. Our method is superior or comparable to current tools for prediction of propeptide sequences in spider toxins. Evaluation of the SVM method on an independent test set of known toxin sequences yielded 96% sensitivity and 100% specificity. Furthermore, we sequenced five novel peptides (not used to train the final predictor) from the venom of the Australian tarantula Selenotypus plumipes to test the accuracy of the predictor and found 80% sensitivity and 99.6% 8-mer specificity. Finally, we used the predictor together with homology information to predict and characterize seven groups of novel toxins from the deeply sequenced venom gland transcriptome of S. plumipes, which revealed structural complexity and innovations in the evolution of the toxins. The precursor prediction tool (SpiderP) is freely available on ArachnoServer (http://www.arachnoserver.org/spiderP.html), a web portal to a comprehensive relational database of spider toxins. All training data, test data, and scripts used are available from the SpiderP website.

  14. SVM-based prediction of propeptide cleavage sites in spider toxins identifies toxin innovation in an Australian tarantula.

    Wong, Emily S W; Hardy, Margaret C; Wood, David; Bailey, Timothy; King, Glenn F

    2013-01-01

    Spider neurotoxins are commonly used as pharmacological tools and are a popular source of novel compounds with therapeutic and agrochemical potential. Since venom peptides are inherently toxic, the host spider must employ strategies to avoid adverse effects prior to venom use. It is partly for this reason that most spider toxins encode a protective proregion that upon enzymatic cleavage is excised from the mature peptide. In order to identify the mature toxin sequence directly from toxin transcripts, without resorting to protein sequencing, the propeptide cleavage site in the toxin precursor must be predicted bioinformatically. We evaluated different machine learning strategies (support vector machines, hidden Markov model and decision tree) and developed an algorithm (SpiderP) for prediction of propeptide cleavage sites in spider toxins. Our strategy uses a support vector machine (SVM) framework that combines both local and global sequence information. Our method is superior or comparable to current tools for prediction of propeptide sequences in spider toxins. Evaluation of the SVM method on an independent test set of known toxin sequences yielded 96% sensitivity and 100% specificity. Furthermore, we sequenced five novel peptides (not used to train the final predictor) from the venom of the Australian tarantula Selenotypus plumipes to test the accuracy of the predictor and found 80% sensitivity and 99.6% 8-mer specificity. Finally, we used the predictor together with homology information to predict and characterize seven groups of novel toxins from the deeply sequenced venom gland transcriptome of S. plumipes, which revealed structural complexity and innovations in the evolution of the toxins. The precursor prediction tool (SpiderP) is freely available on ArachnoServer (http://www.arachnoserver.org/spiderP.html), a web portal to a comprehensive relational database of spider toxins. All training data, test data, and scripts used are available from the SpiderP website. PMID:23894279
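
    SpiderP combines local and global sequence information; a minimal local-window sketch of the general idea (the window width, the negative-example scheme and all names are illustrative assumptions, not the published feature set):

        import numpy as np
        from sklearn.svm import SVC

        AA = 'ACDEFGHIKLMNPQRSTVWY'
        IDX = {a: i for i, a in enumerate(AA)}

        def encode_window(seq, site, w=5):
            """One-hot encode w residues on each side of a candidate cleavage site."""
            vec = np.zeros((2 * w, len(AA)))
            for i, a in enumerate(seq[site - w: site + w]):
                vec[i, IDX[a]] = 1.0
            return vec.ravel()

        # precursors: list of (sequence, true_cleavage_position) pairs
        # X = [encode_window(s, p) for s, p in precursors]        # positive examples
        # X += [encode_window(s, p + 3) for s, p in precursors]   # shifted negatives
        # y = [1] * len(precursors) + [0] * len(precursors)
        # clf = SVC(kernel='rbf', class_weight='balanced').fit(np.array(X), y)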

  15. SVM-Based Prediction of Propeptide Cleavage Sites in Spider Toxins Identifies Toxin Innovation in an Australian Tarantula

    Wong, Emily S. W.; Hardy, Margaret C.; Wood, David; Bailey, Timothy; King, Glenn F.

    2013-01-01

    Spider neurotoxins are commonly used as pharmacological tools and are a popular source of novel compounds with therapeutic and agrochemical potential. Since venom peptides are inherently toxic, the host spider must employ strategies to avoid adverse effects prior to venom use. It is partly for this reason that most spider toxins encode a protective proregion that upon enzymatic cleavage is excised from the mature peptide. In order to identify the mature toxin sequence directly from toxin tran...

  16. Detection of subsurface metallic utilities by means of a SAP technique: Comparing MUSIC- and SVM-based approaches

    Meschino, Simone; Pajewski, Lara; Pastorino, Matteo; Randazzo, Andrea; Schettini, Giuseppe

    2013-10-01

    The identification of buried cables, pipes, conduits, and other cylindrical utilities is a very important task in civil engineering. In recent years, several methods have been proposed in the literature for tackling this problem. The most commonly employed approaches are based on the use of ground penetrating radar, i.e., they extract the needed information about the unknown scenario from the electromagnetic field collected by a set of antennas. In the present paper, a statistical method based on smart antenna techniques is used for the localization of a single buried object. In particular, two efficient algorithms for estimating the directions of arrival of the electromagnetic waves scattered by the targets, namely MUltiple SIgnal Classification (MUSIC) and support vector regression, are considered and their performances are compared.
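
    Of the two algorithms compared, MUSIC has a particularly compact form. A minimal sketch for a uniform linear array (the array geometry and snapshot model are illustrative assumptions): the directions of arrival appear as peaks of a pseudospectrum built from the noise subspace of the sample covariance matrix.

        import numpy as np

        def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
            """MUSIC pseudospectrum for a uniform linear array.
            X: (n_antennas, n_snapshots) complex snapshots; d: spacing in wavelengths."""
            R = X @ X.conj().T / X.shape[1]              # sample covariance
            w, V = np.linalg.eigh(R)                     # eigenvalues in ascending order
            En = V[:, : X.shape[0] - n_sources]          # noise subspace
            m = np.arange(X.shape[0])
            P = []
            for th in np.deg2rad(angles):
                a = np.exp(2j * np.pi * d * m * np.sin(th))      # steering vector
                P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
            return angles, np.array(P)                   # peaks mark directions of arrival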

  17. Full-polarization radar remote sensing and data mining for tropical crops mapping: a successful SVM-based classification model

    Denize, J.; Corgne, S.; Todoroff, P.; LE Mezo, L.

    2015-12-01

    In Reunion, a tropical island of 2,512 km² located 700 km east of Madagascar in the Indian Ocean and constrained by a rugged relief, agricultural sectors compete for highly fragmented agricultural land composed of heterogeneous farming systems ranging from corporate to small-scale farming. Policymakers, planners and institutions are in dire need of reliable and updated land use references. In practice, conventional land use mapping methods are inefficient in the tropics, with frequent cloud cover and loosely synchronized vegetative crop cycles due to a constant temperature. This study aims to provide an appropriate method for the identification and mapping of tropical crops by remote sensing. For this purpose, we assess the potential of polarimetric SAR imagery associated with machine learning algorithms. The method was developed and tested on a 25 x 25 km study area using six full-polarization RADARSAT-2 images acquired in 2014. A set of radar indicators (backscatter coefficients, band ratios, indices, polarimetric decompositions (Freeman-Durden, van Zyl, Yamaguchi, Cloude and Pottier, Krogager), texture, etc.) was calculated from the coherency matrix. A random forest procedure allowed the selection of the most important variables in each image, reducing the dimension of the dataset and the processing time. Support vector machines (SVM) then classified these indicators using a training database created from field observations in 2013. The method shows an overall accuracy of 88% with a Kappa index of 0.82 for the identification of four major crops.
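
    A minimal sketch of the two-stage scheme described above (the number of retained indicators and all names are illustrative assumptions): random forest importances rank the radar indicators, and an SVM is trained on the retained subset.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def rf_then_svm(X, y, keep=20):
            """X: (n_samples, n_radar_indicators); y: crop labels from field plots."""
            rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
            top = np.argsort(rf.feature_importances_)[::-1][:keep]   # best indicators
            svm = make_pipeline(StandardScaler(), SVC(kernel='rbf')).fit(X[:, top], y)
            return top, svm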

  18. A COMPARISON STUDY OF DIFFERENT KERNEL FUNCTIONS FOR SVM-BASED CLASSIFICATION OF MULTI-TEMPORAL POLARIMETRY SAR DATA

    B. Yekkehkhany

    2014-10-01

    In this paper, a framework based on support vector machines (SVM) is developed for crop classification using polarimetric features extracted from multi-temporal synthetic aperture radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates than single-date data. Several kernel functions, including linear, polynomial and radial basis function (RBF) kernels, are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the overall accuracy (OA) by up to 3% compared with a linear kernel function, and by up to 1% compared with a third-degree polynomial kernel function.
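
    A minimal sketch of such a kernel comparison with cross-validation (the feature matrix, labels and fold count are illustrative assumptions):

        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def compare_kernels(X, y):
            """X: temporal polarimetric features (e.g. alpha angles for three dates)."""
            kernels = {'linear': SVC(kernel='linear'),
                       'poly3':  SVC(kernel='poly', degree=3),
                       'rbf':    SVC(kernel='rbf')}
            return {name: cross_val_score(clf, X, y, cv=5).mean()
                    for name, clf in kernels.items()}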

  19. Accurate Measurement of the Relative Abundance of Different DNA Species in Complex DNA Mixtures

    Jeong, Sangkyun; Yu, Hyunjoo; Pfeifer, Karl

    2012-01-01

    A molecular tool that can compare the abundances of different DNA sequences is necessary for comparing intergenic or interspecific gene expression. We devised and verified such a tool using a quantitative competitive polymerase chain reaction approach. For this approach, we adapted a competitor array (an artificially constructed plasmid DNA in which all the competitor templates for the target DNAs are arranged at defined ratios) and melting analysis for allele quantitation, to accurately quantify the fractional ratios of competitively amplified DNAs. Assays on two sets of DNA mixtures with explicitly known compositional structures of the test sequences were performed. The resulting average relative errors of 0.059 and 0.021 emphasize the highly accurate nature of this method. Furthermore, the method's capability of obtaining biological data is demonstrated by the fact that it can illustrate the tissue-specific quantitative expression signatures of the three housekeeping genes G6pdx, Ubc, and Rps27, in the form of the relative abundances of their transcripts, and the differential preferences of Igf2 enhancers for each of the multiple Igf2 promoters during transcription. PMID:22334570

  20. Equivalent method for accurate solution to linear interval equations

    王冲; 邱志平

    2013-01-01

    Based on linear interval equations, an accurate interval finite element method for solving structural static problems with uncertain parameters in terms of optimization is discussed. On the premise of ensuring the consistency of solution sets, the original interval equations are equivalently transformed into deterministic inequalities. On this basis, calculating the structural displacement response with interval parameters is reduced to a number of deterministic linear optimization problems. The results are proved to be accurate solutions of the governing interval equations. Finally, a numerical example is given to demonstrate the feasibility and efficiency of the proposed method.

  1. Accurate upwind-monotone (nonoscillatory) methods for conservation laws

    Huynh, Hung T.

    1992-01-01

    The well-known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second-order accurate in the smooth parts of the solution except at extrema, where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes, which are upwind monotone and of uniform second- or third-order accuracy, are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state-of-the-art methods.
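
    A minimal sketch of a MUSCL-type update for linear advection (periodic boundaries; this is the generic minmod-limited scheme, not the paper's upwind-monotone construction): the limiter returns a zero slope at extrema, which is exactly where the accuracy drops to first order.

        import numpy as np

        def minmod(a, b):
            return np.where(a * b > 0.0,
                            np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        def muscl_step(u, c):
            """One step for u_t + a u_x = 0 with a > 0 and c = a*dt/dx <= 1.
            Piecewise-linear reconstruction with a minmod-limited slope."""
            s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
            uL = u + 0.5 * (1.0 - c) * s                        # upwind interface values
            return u - c * (uL - np.roll(uL, 1))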

  2. Is Expressive Language Disorder an Accurate Diagnostic Category?

    Leonard, Laurence B.

    2009-01-01

    Purpose: To propose that the diagnostic category of "expressive language disorder" as distinct from a disorder of both expressive and receptive language might not be accurate. Method: Evidence that casts doubt on a pure form of this disorder is reviewed from several sources, including the literature on genetic findings, theories of language…

  3. Accurate momentum transfer cross section for the attractive Yukawa potential

    Khrapak, S. A., E-mail: Sergey.Khrapak@dlr.de [Forschungsgruppe Komplexe Plasmen, Deutsches Zentrum für Luft- und Raumfahrt, Oberpfaffenhofen (Germany)

    2014-04-15

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  4. Is a Writing Sample Necessary for "Accurate Placement"?

    Sullivan, Patrick; Nielsen, David

    2009-01-01

    The scholarship about assessment for placement is extensive and notoriously ambiguous. Foremost among the questions that continue to be unresolved in this scholarship is this one: Is a writing sample necessary for "accurate placement"? Using a robust data sample of student assessment essays and ACCUPLACER test scores, we put this question to the…

  5. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  6. Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks

    Bahrepour, Majid; Meratnia, Nirvana; Havinga, Paul J.M.

    2010-01-01

    Prompt and accurate residential fire detection is important for on-time fire extinguishing and, consequently, reducing damage and loss of life. To detect fire, sensors are needed to measure the environmental parameters and algorithms are required to decide about the occurrence of fire. Recently, wireless s

  7. Efficient and accurate sound propagation using adaptive rectangular decomposition.

    Raghuvanshi, Nikunj; Narain, Rahul; Lin, Ming C

    2009-01-01

    Accurate sound rendering can add significant realism to complement visual display in interactive applications, as well as facilitate acoustic predictions for many engineering applications, like accurate acoustic analysis for architectural design. Numerical simulation can provide this realism most naturally by modeling the underlying physics of wave propagation. However, wave simulation has traditionally posed a tough computational challenge. In this paper, we present a technique which relies on an adaptive rectangular decomposition of 3D scenes to enable efficient and accurate simulation of sound propagation in complex virtual environments. It exploits the known analytical solution of the Wave Equation in rectangular domains, and utilizes an efficient implementation of the Discrete Cosine Transform on Graphics Processors (GPU) to achieve at least a 100-fold performance gain compared to a standard Finite-Difference Time-Domain (FDTD) implementation with comparable accuracy, while also being 10-fold more memory efficient. Consequently, we are able to perform accurate numerical acoustic simulation on large, complex scenes in the kilohertz range. To the best of our knowledge, it was not previously possible to perform such simulations on a desktop computer. Our work thus enables acoustic analysis on large scenes and auditory display for complex virtual environments on commodity hardware. PMID:19590105
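
    The "known analytical solution of the Wave Equation in rectangular domains" that the method exploits reduces, mode by mode, to an exact two-step recurrence on cosine-transform coefficients. A one-dimensional sketch (interface handling between partitions and source forcing are omitted; names are illustrative assumptions):

        import numpy as np
        from scipy.fft import dct, idct

        def ard_update(p_prev, p_curr, dt, L, c=343.0):
            """Exact modal update of the 1-D wave equation on a rigid-walled
            interval of length L: each cosine mode advances analytically."""
            k = np.arange(len(p_curr))
            omega = c * np.pi * k / L                    # mode eigenfrequencies
            M_prev = dct(p_prev, norm='ortho')
            M_curr = dct(p_curr, norm='ortho')
            M_next = 2.0 * np.cos(omega * dt) * M_curr - M_prev   # exact per mode
            return idct(M_next, norm='ortho')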

  8. Accurate Period Approximation for Any Simple Pendulum Amplitude

    XUE De-Sheng; ZHOU Zhao; GAO Mei-Zhen

    2012-01-01

    Accurate approximate analytical formulae of the pendulum period, composed of a few elementary functions, are constructed for any amplitude. Based on an approximation of the elliptic integral, two new logarithmic formulae for large amplitudes close to 180° are obtained. Considering the trigonometric function modulation that results from the dependence of the relative error on the amplitude, we realize accurate approximate period expressions for any amplitude between 0 and 180°. A relative error of less than 0.02% is achieved for any amplitude. This kind of modulation is also effective for other large-amplitude logarithmic approximation expressions.
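
    The reference against which such approximations are judged is the exact elliptic-integral period. A minimal sketch of that benchmark (the paper's own logarithmic formulae are not reproduced here):

        import numpy as np
        from scipy.special import ellipk

        def pendulum_period(theta0_deg, L=1.0, g=9.81):
            """Exact period T = 4*sqrt(L/g)*K(m) with m = sin^2(theta0/2);
            scipy's ellipk takes the parameter m, not the modulus k."""
            m = np.sin(np.deg2rad(theta0_deg) / 2.0) ** 2
            return 4.0 * np.sqrt(L / g) * ellipk(m)

        T0 = 2.0 * np.pi * np.sqrt(1.0 / 9.81)           # small-angle period
        for amp in (10, 90, 170):
            print(amp, pendulum_period(amp) / T0)        # ratio grows toward 180 deg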

  9. Second-order accurate nonoscillatory schemes for scalar conservation laws

    Huynh, Hung T.

    1989-01-01

    Explicit finite difference schemes for the computation of weak solutions of nonlinear scalar conservation laws are presented and analyzed. These schemes are uniformly second-order accurate and nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time.

  10. Accurate segmentation of dense nanoparticles by partially discrete electron tomography

    Roelandts, T., E-mail: tom.roelandts@ua.ac.be [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Batenburg, K.J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, 1098 XG Amsterdam (Netherlands); Biermans, E. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Kuebel, C. [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Sijbers, J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)

    2012-03-15

    Accurate segmentation of nanoparticles within various matrix materials is a difficult problem in electron tomography. Due to artifacts related to image series acquisition and reconstruction, global thresholding of reconstructions computed by established algorithms, such as weighted backprojection or SIRT, may result in unreliable and subjective segmentations. In this paper, we introduce the Partially Discrete Algebraic Reconstruction Technique (PDART) for computing accurate segmentations of dense nanoparticles of constant composition. The particles are segmented directly by the reconstruction algorithm, while the surrounding regions are reconstructed using continuously varying gray levels. As no properties are assumed for the other compositions of the sample, the technique can be applied to any sample where dense nanoparticles must be segmented, regardless of the surrounding compositions. For both experimental and simulated data, it is shown that PDART yields significantly more accurate segmentations than those obtained by optimal global thresholding of the SIRT reconstruction. Highlights: • We present a novel reconstruction method for partially discrete electron tomography. • It accurately segments dense nanoparticles directly during reconstruction. • The gray level to use for the nanoparticles is determined objectively. • The method expands the set of samples for which discrete tomography can be applied.

  11. Accurate momentum transfer cross section for the attractive Yukawa potential

    Khrapak, Sergey

    2014-01-01

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than 2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  12. Accurate momentum transfer cross section for the attractive Yukawa potential

    Khrapak, S. A.

    2014-01-01

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  13. On the importance of having accurate data for astrophysical modelling

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data, and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundances in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  14. Occupancy by key transcription factors is a more accurate predictor of enhancer activity than histone modifications or chromatin accessibility

    Dogan, Nergiz; Wu, Weisheng; Morrissey, Christapher S.; Chen, Kuan-Bei; Stonestrom, Aaron; Long, Maria; Keller, Cheryl A.; Cheng, Yong; Jain, Deepti; Visel, Axel; Pennacchio, Len A.; Weiss, Mitchell J.; Blobel, Gerd A.; Hardison, Ross C.

    2015-01-01

    Background Regulated gene expression controls organismal development, and variation in regulatory patterns has been implicated in complex traits. Thus accurate prediction of enhancers is important for further understanding of these processes. Genome-wide measurement of epigenetic features, such as histone modifications and occupancy by transcription factors, is improving enhancer predictions, but the contribution of these features to prediction accuracy is not known. Given the importance of t...

  15. On the relation between gene flow theory and genetic gain

    Bijma, P; Woolliams, John

    2000-01-01

    In conventional gene flow theory the rate of genetic gain is calculated as the summed products of genetic selection differential and asymptotic proportion of genes deriving from sex-age groups. Recent studies have shown that asymptotic proportions of genes predicted from conventional gene flow theory may deviate considerably from true proportions. However, the rate of genetic gain predicted from conventional gene flow theory was accurate. The current note shows that the connection between asy...

  16. A simplified and accurate detection of the genetically modified wheat MON71800 with one calibrator plasmid.

    Kim, Jae-Hwan; Park, Saet-Byul; Roh, Hyo-Jeong; Park, Sunghoon; Shin, Min-Ki; Moon, Gui Im; Hong, Jin-Hwan; Kim, Hae-Yeong

    2015-06-01

    With the increasing number of genetically modified (GM) events, unauthorized GMO releases into the food market have increased dramatically, and many countries have developed detection tools for them. This study describes qualitative and quantitative detection methods for the unauthorized GM wheat MON71800 using a reference plasmid (pGEM-M71800). The wheat acetyl-CoA carboxylase (acc) gene was used as the endogenous gene. The plasmid pGEM-M71800, which contains both the acc gene and the event-specific target MON71800, was constructed as a positive control for the qualitative and quantitative analyses. The limit of detection in the qualitative PCR assay was approximately 10 copies. In the quantitative PCR assay, the standard deviation and relative standard deviation repeatability values ranged from 0.06 to 0.25 and from 0.23% to 1.12%, respectively. This study thus supplies a powerful, simple and accurate detection strategy for the unauthorized GM wheat MON71800 that utilizes a single calibrator plasmid. PMID:25624198

  17. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers, where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.

  18. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    Jianhua Zhang

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on a human head, and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between the views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, and it can be further applied to EEG source localization applications on the human brain.

  19. Accurate multireference study of Si3 electronic manifold

    Goncalves, Cayo Emilio Monteiro; Braga, Joao Pedro

    2016-01-01

    Since it has been shown that the silicon trimer has a highly multi-reference character, accurate multi-reference configuration interaction calculations are performed to elucidate its electronic manifold. Emphasis is given to the long-range part of the potential, aiming to understand the dynamical aspects of atom-diatom collisions, and to describe conical intersections and important saddle points along the reactive path. Analyses of the main features of the potential energy surface are performed for benchmarking, and highly accurate values for structures, vibrational constants and energy gaps are reported, as well as the previously unpublished spin-orbit coupling magnitude. The results predict that inter-system crossings will play an important role in dynamical simulations, especially in triplet state quenching, making the problem of constructing a precise potential energy surface more complicated and multi-layer dependent. The ground state is predicted to be the singlet one, but since the singlet-triplet gap is rather small (2.448 kJ/mol) bo...

  20. Simple and High-Accurate Schemes for Hyperbolic Conservation Laws

    Renzhong Feng

    2014-01-01

    The paper constructs a class of simple high-accurate (SHA) schemes with third-order approximation accuracy in both space and time for solving linear hyperbolic equations, using linear data reconstruction and the Lax-Wendroff scheme. The schemes can be made fourth-order accurate with a special choice of parameter. In order to avoid spurious oscillations in the vicinity of strong gradients, we make the SHA schemes total variation diminishing (TVD for short) by incorporating a flux limiter into their numerical fluxes, and then extend these schemes to solve the nonlinear Burgers' equation and the Euler equations. The numerical examples show that these schemes give high order of accuracy and high-resolution results. The advantages of these schemes are their simplicity and high order of accuracy.

  1. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components, to which we integrate an open-source autopilot and customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  2. Accurate Development of Thermal Neutron Scattering Cross Section Libraries

    Hawari, Ayman; Dunn, Michael

    2014-06-10

    The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.

  3. Accurate Load Modeling Based on Analytic Hierarchy Process

    Zhenshu Wang

    2016-01-01

    Establishing an accurate load model is a critical problem in power system modeling, with significant implications for power system digital simulation and dynamic security analysis. The synthesis load model (SLM) considers the impact of the power distribution network and compensation capacitors, while the randomness of power loads is more precisely described by the traction power system load model (TPSLM). On the basis of these two load models, a load modeling method that combines synthesis load with traction power load is proposed in this paper. This method uses the analytic hierarchy process (AHP) to combine the two load models. Weight coefficients of the two models can be calculated after formulating criteria and judgment matrices, and a synthesis model is then established from the weight coefficients. The effectiveness of the proposed method was examined through simulation. The results show that accurate load modeling based on AHP can effectively improve the accuracy of the load model and prove the validity of the method.
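
    A minimal sketch of the AHP weighting step (the 2x2 judgment matrix is an illustrative assumption): the weights are the normalized principal eigenvector of the judgment matrix, and the consistency index follows from its largest eigenvalue.

        import numpy as np

        def ahp_weights(J):
            """Weights from a reciprocal AHP judgment matrix J."""
            w, V = np.linalg.eig(J)
            i = np.argmax(w.real)                        # principal eigenvalue
            v = np.abs(V[:, i].real)
            ci = (w[i].real - len(J)) / (len(J) - 1)     # consistency index
            return v / v.sum(), ci

        # e.g. SLM judged three times as important as TPSLM:
        J = np.array([[1.0, 3.0],
                      [1.0 / 3.0, 1.0]])
        print(ahp_weights(J))   # -> weights approx [0.75, 0.25], ci = 0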

  4. Accurate adjoint design sensitivities for nano metal optics.

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature, we obtain highly accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483

  5. Efficient and Accurate Robustness Estimation for Large Complex Networks

    Wandelt, Sebastian

    2016-01-01

    Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks in a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate, metrics require considerable computational effort, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
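
    A minimal sketch of metric-induced robustness estimation (not the authors' sub-quadratic algorithm): nodes are removed in order of a chosen centrality, and robustness is the average fraction of nodes remaining in the largest connected component.

        import networkx as nx

        def robustness(G, metric=nx.degree_centrality):
            """Schneider-style robustness R under a static, metric-ordered attack."""
            scores = metric(G)
            order = sorted(scores, key=scores.get, reverse=True)
            H = G.copy()
            n = H.number_of_nodes()
            total = 0.0
            for v in order[:-1]:
                H.remove_node(v)
                total += len(max(nx.connected_components(H), key=len)) / n
            return total / n

        # G = nx.barabasi_albert_graph(500, 3)
        # print(robustness(G))                                  # fast, degree-based
        # print(robustness(G, nx.betweenness_centrality))       # slower, often stronger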

  6. The FLUKA code: An accurate simulation tool for particle therapy

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  7. A novel automated image analysis method for accurate adipocyte quantification

    Osman, Osman S.; Selway, Joanne L; Kępczyńska, Małgorzata A; Stocker, Claire J.; O’Dowd, Jacqueline F; Cawthorne, Michael A.; Arch, Jonathan RS; Jassim, Sabah; Langlands, Kenneth

    2013-01-01

    Increased adipocyte size and number are associated with many of the adverse effects observed in metabolic disease states. While methods to quantify such changes in the adipocyte are of scientific and clinical interest, manual methods to determine adipocyte size are both laborious and intractable to large scale investigations. Moreover, existing computational methods are not fully automated. We, therefore, developed a novel automatic method to provide accurate measurements of the cross-section...

  8. Strategy Guideline. Accurate Heating and Cooling Load Calculations

    Burdick, Arlan [IBACOS, Inc., Pittsburgh, PA (United States)

    2011-06-01

    This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.

  9. Strategy Guideline: Accurate Heating and Cooling Load Calculations

    Burdick, A.

    2011-06-01

    This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.

  10. Evaluation of accurate eye corner detection methods for gaze estimation

    Bengoechea, Jose Javier; Cerrolaza, Juan J.; Villanueva, Arantxa; Cabeza, Rafael

    2014-01-01

    Accurate detection of iris center and eye corners appears to be a promising approach for low cost gaze estimation. In this paper we propose novel eye inner corner detection methods. Appearance and feature based segmentation approaches are suggested. All these methods are exhaustively tested on a realistic dataset containing images of subjects gazing at different points on a screen. We have demonstrated that a method based on a neural network presents the best performance even in light changin...

  11. Building with Drones: Accurate 3D Facade Reconstruction using MAVs

    Daftry, Shreyansh; Hoppe, Christof; Bischof, Horst

    2015-01-01

    Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion methods has been one of the most fruitful outcomes of computer vision. These advances combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous for large number of Architecture, Engineering and Construction applications among audiences, mostly unskilled in computer vision. However, to obtain high-resolution and accurate reconstruc...

  12. Mouse models of human AML accurately predict chemotherapy response

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to co...

  13. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Zhang Mingheng; Zhen Yaobao; Hui Ganglong; Chen Gang

    2013-01-01

    Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement for intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, the mul...

  14. Accurate calibration of stereo cameras for machine vision

    Li, Liangfu; Feng, Zuren; Feng, Yuanjing

    2004-01-01

    Camera calibration is an important task for machine vision, whose goal is to obtain the internal and external parameters of each camera. With these parameters, the 3D positions of a scene point, which is identified and matched in two stereo images, can be determined by the triangulation theory. This paper presents a new accurate estimation of CCD camera parameters for machine vision. We present a fast technique to estimate the camera center with special arrangement of calibration target and t...

  15. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Mark Shortis

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation a...

  16. Fast and Accurate Bilateral Filtering using Gauss-Polynomial Decomposition

    Chaudhury, Kunal N.

    2015-01-01

    The bilateral filter is a versatile non-linear filter that has found diverse applications in image processing, computer vision, computer graphics, and computational photography. A widely-used form of the filter is the Gaussian bilateral filter in which both the spatial and range kernels are Gaussian. A direct implementation of this filter requires O(σ²) operations per pixel, where σ is the standard deviation of the spatial Gaussian. In this paper, we propose an accurate approxi...

  17. Accurate Insertion Loss Measurements of the Juno Patch Array Antennas

    Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John

    2010-01-01

    This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements, while the other is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1 dB) with separate radiometric measurements.

  18. Dejavu: An Accurate Energy-Efficient Outdoor Localization System

    Aly, Heba; Youssef, Moustafa

    2013-01-01

    We present Dejavu, a system that uses standard cell-phone sensors to provide accurate and energy-efficient outdoor localization suitable for car navigation. Our analysis shows that different road landmarks have a unique signature on cell-phone sensors; For example, going inside tunnels, moving over bumps, going up a bridge, and even potholes all affect the inertial sensors on the phone in a unique pattern. Dejavu employs a dead-reckoning localization approach and leverages these road landmark...

  19. Accurate Parameter Estimation for Unbalanced Three-Phase System

    Yuan Chen; Hing Cheung So

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, a nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newt...
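
    A minimal sketch of the αβ-transformation step (amplitude-invariant Clarke form; the NLS fit itself is only indicated in comments, as the paper's estimator is not reproduced here):

        import numpy as np

        def clarke(a, b, c):
            """Amplitude-invariant alpha-beta (Clarke) transformation of
            three-phase samples."""
            alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
            beta = (b - c) / np.sqrt(3.0)
            return alpha, beta

        # For a balanced system (alpha, beta) form a quadrature pair, so the
        # parameters can be fitted by nonlinear least squares, schematically:
        #   minimize over (V, w, phi):
        #       sum_t (alpha_t - V*cos(w*t + phi))**2 + (beta_t - V*sin(w*t + phi))**2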

  20. Accurate, inexpensive testing of laser pointer power for safe operation

    An accurate, inexpensive test-bed for the measurement of optical power emitted from handheld lasers is described. The setup consists of a power meter, optical bandpass filters, an adjustable iris and self-centering lens mounts. We demonstrate this test-bed by evaluating the output power of 23 laser pointers with respect to the limits imposed by the US Code of Federal Regulations. We find a compliance rate of only 26%. A discussion of potential laser pointer hazards is included. (paper)

  1. DOMAC: an accurate, hybrid protein domain prediction server

    Cheng, Jianlin

    2007-01-01

    Protein domain prediction is important for protein structure prediction, structure determination, function annotation, mutagenesis analysis and protein engineering. Here we describe an accurate protein domain prediction server (DOMAC) combining both template-based and ab initio methods. The preliminary version of the server was ranked among the top domain prediction servers in the seventh edition of Critical Assessment of Techniques for Protein Structure Prediction (CASP7), 2006. DOMAC server...

  2. A multiple more accurate Hardy-Littlewood-Polya inequality

    Qiliang Huang

    2012-11-01

    By introducing multi-parameters and conjugate exponents and using Euler-Maclaurin's summation formula, we estimate the weight coefficient and prove a multiple more accurate Hardy-Littlewood-Polya (H-L-P) inequality, which is an extension of some earlier published results. We also prove that the constant factor in the new inequality is the best possible, and obtain its equivalent forms.

  3. Shock Emergence in Supernovae: Limiting Cases and Accurate Approximations

    Ro, Stephen

    2013-01-01

    We examine the dynamics of accelerating normal shocks in stratified planar atmospheres, providing accurate fitting formulae for the scaling index relating shock velocity to the initial density and for the post-shock acceleration factor as functions of the polytropic and adiabatic indices which parameterize the problem. In the limit of a uniform initial atmosphere there are analytical formulae for these quantities. In the opposite limit of a very steep density gradient the solutions match the outcome of shock acceleration in exponential atmospheres.

  4. Shock Emergence in Supernovae: Limiting Cases and Accurate Approximations

    Ro, Stephen; Matzner, Christopher D.

    2013-08-01

    We examine the dynamics of accelerating normal shocks in stratified planar atmospheres, providing accurate fitting formulae for the scaling index relating shock velocity to the initial density and for the post-shock acceleration factor as functions of the polytropic and adiabatic indices which parameterize the problem. In the limit of a uniform initial atmosphere, there are analytical formulae for these quantities. In the opposite limit of a very steep density gradient, the solutions match the outcome of shock acceleration in exponential atmospheres.

  5. SHOCK EMERGENCE IN SUPERNOVAE: LIMITING CASES AND ACCURATE APPROXIMATIONS

    Ro, Stephen; Matzner, Christopher D. [Department of Astronomy and Astrophysics, University of Toronto, 50 St. George St., Toronto, ON M5S 3H4 (Canada)

    2013-08-10

    We examine the dynamics of accelerating normal shocks in stratified planar atmospheres, providing accurate fitting formulae for the scaling index relating shock velocity to the initial density and for the post-shock acceleration factor as functions of the polytropic and adiabatic indices which parameterize the problem. In the limit of a uniform initial atmosphere, there are analytical formulae for these quantities. In the opposite limit of a very steep density gradient, the solutions match the outcome of shock acceleration in exponential atmospheres.

  6. An accurate and robust gyroscope-based pedometer.

    Lim, Yoong P; Brown, Ian T; Khoo, Joshua C T

    2008-01-01

    Pedometers are known to have step estimation issues. This is mainly attributed to their innate acceleration-based sensing. A pedometer based on a micro-machined gyroscope (which has better immunity to acceleration) is proposed. Through syntactic data recognition of a priori knowledge of the human shank's dynamics, and temporally precise detection of heel strikes permitted by wavelet decomposition, an accurate and robust pedometer is obtained. PMID:19163737

  7. Accurate calculation of thermal noise in multilayer coating

    Gurkovsky, Alexey; Vyatchanin, Sergey

    2010-01-01

    We derive accurate formulas for thermal fluctuations in multilayer interferometric coating taking into account light propagation inside the coating. In particular, we calculate the reflected wave phase as a function of small displacements of the boundaries between the layers using transmission line model for interferometric coating and derive formula for spectral density of reflected phase in accordance with Fluctuation-Dissipation Theorem. We apply the developed approach for calculation of t...

  8. Novel multi-beam radiometers for accurate ocean surveillance

    Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.

    2014-01-01

    Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions, conical scanners and push-broom antennas are compared. The comparison covers reflector optics and focal plane array configurations.

  9. Strategy for accurate liver intervention by an optical tracking system

    Lin, Qinyong; Yang, Rongqian; Cai, Ken; Guan, Peifeng; Xiao, Weihu; Wu, Xiaoming

    2015-01-01

    Image-guided navigation for radiofrequency ablation of liver tumors requires the accurate guidance of needle insertion into a tumor target. The main challenge of image-guided navigation for radiofrequency ablation of liver tumors is the occurrence of liver deformations caused by respiratory motion. This study reports a strategy of real-time automatic registration to track custom fiducial markers glued onto the surface of a patient’s abdomen to find the respiratory phase, in which the static p...

  10. Efficient and Accurate Path Cost Estimation Using Trajectory Data

    Dai, Jian; Yang, Bin; Guo, Chenjuan; Jensen, Christian S.

    2015-01-01

    Using the growing volumes of vehicle trajectory data, it becomes increasingly possible to capture time-varying and uncertain travel costs in a road network, including travel time and fuel consumption. The current paradigm represents a road network as a graph, assigns weights to the graph's edges by fragmenting trajectories into small pieces that fit the underlying edges, and then applies a routing algorithm to the resulting graph. We propose a new paradigm that targets more accurate and more ...

  11. Accurate Identification of Fear Facial Expressions Predicts Prosocial Behavior

    Marsh, Abigail A.; Kozak, Megan N.; Ambady, Nalini

    2007-01-01

    The fear facial expression is a distress cue that is associated with the provision of help and prosocial behavior. Prior psychiatric studies have found deficits in the recognition of this expression by individuals with antisocial tendencies. However, no prior study has shown accuracy for recognition of fear to predict actual prosocial or antisocial behavior in an experimental setting. In 3 studies, the authors tested the prediction that individuals who recognize fear more accurately will beha...

  12. Continuous glucose monitors prove highly accurate in critically ill children

    Bridges, Brian C.; Preissig, Catherine M; Maher, Kevin O.; Rigby, Mark R

    2010-01-01

    Introduction Hyperglycemia is associated with increased morbidity and mortality in critically ill patients and strict glycemic control has become standard care for adults. Recent studies have questioned the optimal targets for such management and reported increased rates of iatrogenic hypoglycemia in both critically ill children and adults. The ability to provide accurate, real-time continuous glucose monitoring would improve the efficacy and safety of this practice in critically ill patients...

  13. Accurate quantum state estimation via "Keeping the experimentalist honest"

    Blume-Kohout, Robin; Hayden, Patrick

    2006-01-01

    In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.

  14. A highly accurate method to solve Fisher’s equation

    Mehdi Bastani; Davod Khojasteh Salkuyeh

    2012-03-01

    In this study, we present a new and very accurate numerical method to approximate Fisher-type equations. Firstly, the spatial derivative in the proposed equation is approximated by a sixth-order compact finite difference (CFD6) scheme. Secondly, we solve the obtained system of differential equations using a third-order total variation diminishing Runge-Kutta (TVD-RK3) scheme. Numerical examples are given to illustrate the efficiency of the proposed method.
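
    A minimal sketch of the time-stepping loop (a second-order central difference stands in for the paper's CFD6 spatial scheme; the grid, parameters and names are illustrative assumptions): the Shu-Osher TVD-RK3 stages are convex combinations of forward-Euler steps.

        import numpy as np

        def rhs(u, dx, r=1.0):
            """Semi-discrete Fisher equation u_t = u_xx + r*u*(1 - u), periodic grid."""
            uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
            return uxx + r * u * (1.0 - u)

        def tvd_rk3_step(u, dt, dx):
            """Third-order TVD Runge-Kutta (Shu-Osher) step."""
            u1 = u + dt * rhs(u, dx)
            u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, dx))
            return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2, dx))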

  15. Accurate Method for Determining Adhesion of Cantilever Beams

    Michalske, T.A.; de Boer, M.P.

    1999-01-08

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.
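
    As a concrete instance (hedged: the prefactor depends on the beam configuration; this is the standard relation for long, S-shaped adhered beams of the kind described), the adhesion energy follows from the measured detachment length s as

        \[ \Gamma = \frac{3}{2}\, \frac{E\, t^3 h^2}{s^4}, \]

    for Young's modulus E, beam thickness t and support height h, so locating the fracture equilibrium yields \Gamma directly.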

  16. A robust and accurate formulation of molecular and colloidal electrostatics

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.
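
    Schematically (a sketch of the idea only, up to sign conventions; the paper's exact equations are not reproduced here), the conventional boundary integral equation for the Debye-Hückel potential \psi with Green's function G,

        \[ c(\mathbf{x}_0)\,\psi(\mathbf{x}_0) + \int_S \Big[ \psi\,\frac{\partial G}{\partial n} - G\,\frac{\partial \psi}{\partial n} \Big]\, dS = 0, \]

    is modified by subtracting an auxiliary solution \psi^* of the same equation, chosen to match \psi and its normal derivative at \mathbf{x}_0, so that the combined integrand stays finite as \mathbf{x} \to \mathbf{x}_0 and the solid-angle factor c(\mathbf{x}_0) drops out.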

  17. Robust Small Sample Accurate Inference in Moment Condition Models

    Serigne N. Lo; Elvezio Ronchetti

    2006-01-01

    Procedures based on the Generalized Method of Moments (GMM) (Hansen, 1982) are basic tools in modern econometrics. In most cases, the theory available for making inference with these procedures is based on first order asymptotic theory. It is well-known that the (first order) asymptotic distribution does not provide accurate p-values and confidence intervals in moderate to small samples. Moreover, in the presence of small deviations from the assumed model, p-values and confidence intervals ba...
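
    For context, the GMM estimator referred to above minimizes a quadratic form in the sample moment conditions:

        \[ \hat{\theta} = \arg\min_\theta\; \bar{g}_n(\theta)^\top W\, \bar{g}_n(\theta), \qquad \bar{g}_n(\theta) = \frac{1}{n} \sum_{i=1}^n g(x_i, \theta), \]

    where W is a positive-definite weighting matrix; the first-order asymptotics criticized above come from the usual normal approximation to the distribution of \hat{\theta}.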

  18. Is bioelectrical impedance accurate for use in large epidemiological studies?

    Merchant Anwar T

    2008-09-01

    Percentage of body fat is strongly associated with the risk of several chronic diseases, but its accurate measurement is difficult. Bioelectrical impedance analysis (BIA) is a relatively simple, quick and non-invasive technique to measure body composition. It measures body fat accurately in controlled clinical conditions, but its performance in the field is inconsistent. In large epidemiologic studies, simpler surrogate techniques such as body mass index (BMI), waist circumference, and waist-hip ratio are frequently used instead of BIA to measure body fatness. We reviewed the rationale, theory, and technique of recently developed systems such as foot (or hand)-to-foot BIA measurement, and the elements that could influence its results in large epidemiologic studies. BIA results are influenced by factors such as the environment, ethnicity, phase of menstrual cycle, and underlying medical conditions. We concluded that BIA measurements validated for specific ethnic groups, populations and conditions can accurately measure body fat in those populations, but not others, and suggest that for large epidemiological studies with diverse populations BIA may not be the appropriate choice for body composition measurement unless specific calibration equations are developed for the different groups participating in the study.

  19. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry

  20. Can blind persons accurately assess body size from the voice?

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264

  1. An accurate determination of the flux within a slab

    During the past decade, several articles have been written concerning accurate solutions to the monoenergetic neutron transport equation in infinite and semi-infinite geometries. The numerical formulations found in these articles were based primarily on the extensive theoretical investigations performed by the "transport greats" such as Chandrasekhar, Busbridge, Sobolev, and Ivanov, to name a few. The development of numerical solutions in infinite and semi-infinite geometries represents an example of how mathematical transport theory can be utilized to provide highly accurate and efficient numerical transport solutions. These solutions, or analytical benchmarks, are useful as "industry standards," which provide guidance to code developers and promote learning in the classroom. The high accuracy of these benchmarks is directly attributable to the rapid advancement of the state of computing and computational methods. Transport calculations that were beyond the capability of the "supercomputers" of just a few years ago are now possible at one's desk. In this paper, we again build upon the past to tackle the slab problem, which is of the next level of difficulty in comparison to infinite media problems. The formulation is based on the monoenergetic Green's function, which is the most fundamental transport solution. This method of solution requires a fast and accurate evaluation of the Green's function, which, with today's computational power, is now readily available.

  2. Accurate pose estimation using single marker single camera calibration system

    Pati, Sarthak; Erat, Okan; Wang, Lejing; Weidert, Simon; Euler, Ekkehard; Navab, Nassir; Fallavollita, Pascal

    2013-03-01

    Visual marker based tracking is one of the most widely used tracking techniques in Augmented Reality (AR) applications. Generally, multiple square markers are needed to perform robust and accurate tracking. Various marker based methods for calibrating relative marker poses have already been proposed. However, the calibration accuracy of these methods relies on the order of the image sequence and pre-evaluation of pose-estimation errors, making the method offline. Several studies have shown that the accuracy of pose estimation for an individual square marker depends on camera distance and viewing angle. We propose a method to accurately model the error in the estimated pose and translation of a camera using a single marker via an online method based on the Scaled Unscented Transform (SUT). Thus, the pose of each marker can be estimated with highly accurate calibration results, independent of the order of image sequences, compared to cases where this knowledge is not used. This removes the need for having multiple markers and an offline estimation system to calculate camera pose in an AR application.
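
    A minimal sketch of the scaled unscented transform underlying the method, in its generic textbook form (the paper's marker-specific measurement model is not reproduced; the example function f is a placeholder):

        import numpy as np

        def scaled_unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
            """Propagate (mean, cov) through a nonlinear function f via sigma points."""
            n = mean.size
            lam = alpha**2 * (n + kappa) - n
            S = np.linalg.cholesky((n + lam) * cov)            # matrix square root
            sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points

            wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
            wc = wm.copy()
            wm[0] = lam / (n + lam)
            wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)

            y = np.array([f(p) for p in sigma])
            y_mean = wm @ y
            d = y - y_mean
            y_cov = (wc[:, None] * d).T @ d
            return y_mean, y_cov

        # Example: push a noisy 2D parameter (range, bearing) through a nonlinear map.
        m = np.array([1.0, 0.1])
        P = np.diag([0.01, 0.001])
        f = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
        print(scaled_unscented_transform(m, P, f))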

  3. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  4. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  5. Interacting with image hierarchies for fast and accurate object segmentation

    Beard, David V.; Eberly, David H.; Hemminger, Bradley M.; Pizer, Stephen M.; Faith, R. E.; Kurak, Charles; Livingston, Mark

    1994-05-01

    Object definition is an increasingly important area of medical image research. Accurate and fairly rapid object definition is essential for measuring the size and, perhaps more importantly, the change in size of anatomical objects such as kidneys and tumors. Rapid and fairly accurate object definition is essential for 3D real-time visualization, including both surgery planning and radiation oncology treatment planning. One approach to object definition involves the use of 3D image hierarchies, such as Eberly's Ridge Flow. However, the image hierarchy segmentation approach requires user interaction in selecting regions and subtrees. Further, visualizing and comprehending the anatomy and the selected portions of the hierarchy can be problematic. In this paper we will describe the Magic Crayon tool, which allows a user to define various anatomical objects rapidly and accurately by interacting with image hierarchies such as those generated with Eberly's Ridge Flow algorithm as well as other 3D image hierarchies. Preliminary results suggest that fairly complex anatomical objects can be segmented in under a minute with sufficient accuracy for 3D surgery planning, 3D radiation oncology treatment planning, and similar applications. Potential modifications to the approach for improved accuracy are summarized.

  6. Can clinicians accurately assess esophageal dilation without fluoroscopy?

    Bailey, A D; Goldner, F

    1990-01-01

    This study questioned whether clinicians could determine the success of esophageal dilation accurately without the aid of fluoroscopy. Twenty patients were enrolled with the diagnosis of distal esophageal stenosis, including benign peptic stricture (17), Schatzki's ring (2), and squamous cell carcinoma of the esophagus (1). Dilation attempts using only Maloney dilators were monitored fluoroscopically by the principal investigator, the physician and patient being unaware of the findings. Physicians then predicted whether or not their dilations were successful, and they examined various features to determine their usefulness in predicting successful dilation. They were able to predict successful dilation accurately in 97% of the cases studied; however, their predictions of unsuccessful dilation were correct only 60% of the time. Features helpful in predicting passage included easy passage of the dilator (98%) and the patient feeling the dilator in the stomach (95%). Excessive resistance suggesting unsuccessful passage was an unreliable feature and was often due to the dilator curling in the stomach. When Maloney dilators are used to dilate simple distal strictures, if the physician predicts successful passage, he is reliably accurate without the use of fluoroscopy; however, if unsuccessful passage is suspected, fluoroscopy must be used for confirmation. PMID:2210278

  7. Flexible and accurate detection of genomic copy-number changes from aCGH.

    Oscar M Rueda

    2007-06-01

    Genomic DNA copy-number alterations (CNAs) are associated with complex diseases, including cancer: CNAs are indeed related to tumoral grade, metastasis, and patient survival. CNAs discovered from array-based comparative genomic hybridization (aCGH) data have been instrumental in identifying disease-related genes and potential therapeutic targets. To be immediately useful in both clinical and basic research scenarios, aCGH data analysis requires accurate methods that do not impose unrealistic biological assumptions and that provide direct answers to the key question, "What is the probability that this gene/region has CNAs?" Current approaches fail, however, to meet these requirements. Here, we introduce reversible jump aCGH (RJaCGH), a new method for identifying CNAs from aCGH; we use a nonhomogeneous hidden Markov model fitted via reversible jump Markov chain Monte Carlo; and we incorporate model uncertainty through Bayesian model averaging. RJaCGH provides an estimate of the probability that a gene/region has CNAs while incorporating interprobe distance and the capability to analyze data on a chromosome or genome-wide basis. RJaCGH outperforms alternative methods, and the performance difference is even larger with noisy data and highly variable interprobe distance, both commonly found features in aCGH data. Furthermore, our probabilistic method allows us to identify minimal common regions of CNAs among samples and can be extended to incorporate expression data. In summary, we provide a rigorous statistical framework for locating genes and chromosomal regions with CNAs with potential applications to cancer and other complex human diseases.
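
    As a simplified illustration of the core question ("what is the probability this probe has a CNA?"), here is a fixed-dimension three-state Gaussian HMM posterior via the forward-backward algorithm. The actual RJaCGH method additionally samples the number of states by reversible jump MCMC and averages over models; all numbers below are toy values.

        import numpy as np

        def state_posteriors(obs, means, sd, trans, init):
            """P(hidden state | all observations) for a Gaussian-emission HMM."""
            from scipy.stats import norm
            T, K = len(obs), len(means)
            e = norm.pdf(np.asarray(obs)[:, None], loc=means, scale=sd)  # emissions
            fwd = np.zeros((T, K)); bwd = np.zeros((T, K))
            fwd[0] = init * e[0]; fwd[0] /= fwd[0].sum()
            for t in range(1, T):
                fwd[t] = (fwd[t - 1] @ trans) * e[t]; fwd[t] /= fwd[t].sum()
            bwd[-1] = 1.0
            for t in range(T - 2, -1, -1):
                bwd[t] = trans @ (e[t + 1] * bwd[t + 1]); bwd[t] /= bwd[t].sum()
            post = fwd * bwd
            return post / post.sum(axis=1, keepdims=True)

        # States: loss, normal, gain (toy log2-ratio means -0.5, 0, 0.5).
        obs = [0.02, -0.01, 0.48, 0.55, 0.51, 0.03]
        trans = np.full((3, 3), 0.025) + np.eye(3) * 0.925   # sticky transitions
        post = state_posteriors(obs, np.array([-0.5, 0.0, 0.5]), 0.15,
                                trans, np.ones(3) / 3)
        print(post[:, 2])  # probability of "gain" at each probe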

  8. Fast, accurate and easy-to-pipeline methods for amplicon sequence processing

    Antonielli, Livio; Sessitsch, Angela

    2016-04-01

    Next generation sequencing (NGS) technologies have been established for years as an essential resource in microbiology. While metagenomic studies benefit from the continuously increasing throughput of the Illumina (Solexa) technology, the spread of third generation sequencing technologies (PacBio, Oxford Nanopore) is taking whole genome sequencing beyond the assembly of fragmented draft genomes, making it now possible to finish bacterial genomes even without short read correction. Besides (meta)genomic analysis, next-gen amplicon sequencing is still fundamental for microbial studies. Amplicon sequencing of the 16S rRNA gene and ITS (Internal Transcribed Spacer) remains a well-established, widespread method for a multitude of different purposes concerning the identification and comparison of archaeal/bacterial (16S rRNA gene) and fungal (ITS) communities occurring in diverse environments. Numerous different pipelines have been developed to process NGS-derived amplicon sequences, among which Mothur, QIIME and USEARCH are the most well-known and cited ones. The entire process from initial raw sequence data through read error correction, paired-end read assembly, primer stripping, quality filtering, clustering, OTU taxonomic classification and BIOM table rarefaction as well as alternative "normalization" methods will be addressed. An effective and accurate strategy will be presented using state-of-the-art bioinformatic tools, and the example of a straightforward one-script pipeline for 16S rRNA gene or ITS MiSeq amplicon sequencing will be provided. Finally, instructions on how to automatically retrieve nucleotide sequences from NCBI and therefore apply the pipeline to targets other than 16S rRNA gene (Greengenes, SILVA) and ITS (UNITE) will be discussed.
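
    As a small, self-contained example of one stage in such a pipeline, quality filtering, here is a plain-Python FASTQ filter by expected errors. Real pipelines delegate this to tools such as USEARCH/VSEARCH; the file name is illustrative.

        import gzip

        def expected_errors(qual, offset=33):
            """Sum of per-base error probabilities from Phred-encoded qualities."""
            return sum(10 ** (-(ord(c) - offset) / 10) for c in qual)

        def filter_fastq(path, max_ee=1.0):
            """Yield (header, seq) for reads whose expected error count <= max_ee."""
            opener = gzip.open if path.endswith(".gz") else open
            with opener(path, "rt") as fh:
                while True:
                    header = fh.readline().rstrip()
                    if not header:
                        break
                    seq = fh.readline().rstrip()
                    fh.readline()                      # '+' separator line
                    qual = fh.readline().rstrip()
                    if expected_errors(qual) <= max_ee:
                        yield header, seq

        # for h, s in filter_fastq("run1_16S_R1.fastq.gz"):  # illustrative file name
        #     print(h, len(s))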

  9. Selection of low-variance expressed Malus x domestica (apple) genes for use as quantitative PCR reference genes (housekeepers)

    To accurately measure gene expression using PCR-based approaches, there is a need for reference genes with low variance in expression (housekeeping genes) to normalise the data for RNA quantity and quality. For non-model species such as Malus x domestica (apples), previously, the selection of...
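
    A minimal sketch of this kind of low-variance screen (gene names and data are hypothetical; real studies use more refined stability measures, such as geNorm's M-value):

        import numpy as np

        def rank_reference_genes(expr, gene_names):
            """Rank genes by coefficient of variation across samples (low = stable).

            expr: (n_genes, n_samples) array of normalized expression values.
            """
            cv = expr.std(axis=1) / expr.mean(axis=1)
            order = np.argsort(cv)
            return [(gene_names[i], float(cv[i])) for i in order]

        expr = np.array([
            [100, 102,  99, 101],   # stable candidate
            [ 50, 120,  30,  90],   # highly variable
        ])
        print(rank_reference_genes(expr, ["stable-candidate", "stress-induced"]))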

  10. Gene expression

    We prepared probes for isolating functional pieces of the metallothionein locus. The probes enabled a variety of experiments, eventually revealing two mechanisms for metallothionein gene expression, the order of the DNA coding units at the locus, and the location of the gene site in its chromosome. Once the switch regulating metallothionein synthesis was located, it could be joined by recombinant DNA methods to other, unrelated genes, then reintroduced into cells by gene-transfer techniques. The expression of these recombinant genes could then be induced by exposing the cells to Zn2+ or Cd2+. We would thus take advantage of the clearly defined switching properties of the metallothionein gene to manipulate the expression of other, perhaps normally constitutive, genes. Already, despite an incomplete understanding of how the regulatory switch of the metallothionein locus operates, such experiments have been performed successfully

  11. Accurate LAI retrieval method based on PROBA/CHRIS data

    W. Fan

    2009-11-01

    Leaf area index (LAI) is one of the key structural variables in terrestrial vegetation ecosystems. Remote sensing offers a chance to derive LAI accurately at regional scales. Variations of background, atmospheric conditions and the anisotropy of canopy reflectance are three factors that can strongly restrain the accuracy of retrieved LAI. Based on the hybrid canopy reflectance model, a new hyperspectral directional second derivative method (DSD) is proposed in this paper. This method can estimate LAI accurately through analyzing the canopy anisotropy. The effect of the background can also be effectively removed. So the inversion precision and the dynamic range can be improved remarkably, which has been proved by numerical simulations. As the derivative method is very sensitive to random noise, we put forward an innovative filtering approach, by which the data can be de-noised in spectral and spatial dimensions synchronously. It shows that the filtering method can remove the random noise effectively; therefore, the method can be applied to remotely sensed hyperspectral images. The study region is situated in Zhangye, Gansu Province, China; the hyperspectral and multi-angular image of the study region was acquired from the Compact High-Resolution Imaging Spectrometer/Project for On-Board Autonomy (CHRIS/PROBA) on 4 and 14 June 2008. After the pre-processing procedures, the DSD method was applied, and the retrieved LAI was validated against ground truth from 11 sites. It shows that, with the innovative filtering applied, the new LAI inversion method is accurate and effective.
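
    Illustratively (not the authors' exact DSD operator or filter), a smoothed spectral second derivative can be computed with a Savitzky-Golay filter, which mitigates the noise sensitivity noted above:

        import numpy as np
        from scipy.signal import savgol_filter

        wavelengths = np.linspace(400, 1000, 121)   # nm, 5 nm sampling (toy spectrum)
        reflectance = (np.exp(-((wavelengths - 680) / 40) ** 2)
                       + 0.01 * np.random.randn(121))

        # Second derivative w.r.t. wavelength, with polynomial smoothing built in.
        d2 = savgol_filter(reflectance, window_length=11, polyorder=3,
                           deriv=2, delta=wavelengths[1] - wavelengths[0])
        print(d2.shape)  # (121,)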

  12. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  13. Fast and accurate estimation for astrophysical problems in large databases

    Richards, Joseph W.

    2010-10-01

    A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems

  14. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  15. An accurate RLGC circuit model for dual tapered TSV structure

    A fast RLGC circuit model with analytical expressions is proposed for the dual tapered through-silicon via (TSV) structure in three-dimensional integrated circuits, valid for different slope angles over a wide frequency region. By describing the electrical characteristics of the dual tapered TSV structure, the RLGC parameters are extracted based on a numerical integration method. The RLGC model includes metal resistance, metal inductance, substrate resistance, and outer inductance, with skin effect and eddy effect taken into account. The proposed analytical model is verified to be nearly as accurate as the Q3D extractor but more efficient.

  16. Accurate strand-specific quantification of viral RNA.

    Nicole E Plaskon

    The presence of full-length complements of viral genomic RNA is a hallmark of RNA virus replication within an infected cell. As such, methods for detecting and measuring specific strands of viral RNA in infected cells and tissues are important in the study of RNA viruses. Strand-specific quantitative real-time PCR (ssqPCR) assays are increasingly being used for this purpose, but the accuracy of these assays depends on the assumption that the amount of cDNA measured during the quantitative PCR (qPCR) step accurately reflects the amount of a specific viral RNA strand present in the RT reaction. To specifically test this assumption, we developed multiple ssqPCR assays for the positive-strand RNA virus o'nyong-nyong (ONNV) that were based upon the most prevalent ssqPCR assay design types in the literature. We then compared various parameters of the ONNV-specific assays. We found that an assay employing standard unmodified virus-specific primers failed to discern the difference between cDNAs generated from virus-specific primers and those generated through false priming. Further, we were unable to accurately measure levels of ONNV (-) strand RNA with this assay when higher levels of cDNA generated from the (+) strand were present. Taken together, these results suggest that assays of this type do not accurately quantify levels of the anti-genomic strand present during RNA virus infectious cycles. However, an assay permitting the use of a tag-specific primer was able to distinguish cDNAs transcribed from ONNV (-) strand RNA from other cDNAs present, thus allowing accurate quantification of the anti-genomic strand. We also report the sensitivities of two different detection strategies and chemistries, SYBR® Green and DNA hydrolysis probes, used with our tagged ONNV-specific ssqPCR assays. Finally, we describe the development, design and validation of ssqPCR assays for chikungunya virus (CHIKV), the recent cause of large outbreaks of disease in the Indian Ocean

  17. Accurately Determining the Risks of Rising Sea Level

    Marbaix, Philippe; Nicholls, Robert J.

    2007-10-01

    With the highest density of people and the greatest concentration of economic activity located in the coastal regions, sea level rise is an important concern as the climate continues to warm. Subsequent flooding may potentially disrupt industries, populations, and livelihoods, particularly in the long term if the climate is not quickly stabilized [McGranahan et al., 2007; Tol et al., 2006]. To help policy makers understand these risks, a more accurate description of hazards posed by rising sea levels is needed at the global scale, even though the impacts in specific regions are better known.

  18. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  19. Accurate analysis of EBSD data for phase identification

    Palizdar, Y; Cochrane, R C; Brydson, R; Leary, R; Scott, A J, E-mail: preyp@leeds.ac.u [Institute for Materials Research, University of Leeds, Leeds LS2 9JT UK (United Kingdom)

    2010-07-01

    This paper aims to investigate the reliability of software default settings in the analysis of EBSD results. To study the effect of software settings on the EBSD results, the presence of different phases in high Al steel has been investigated by EBSD. The results show the importance of appropriate automated analysis parameters for valid and reliable phase discrimination. Specifically, the importance of the minimum number of indexed bands and the maximum solution error have been investigated, with values of 7-9 and 1.0-1.5°, respectively, found to be needed for accurate analysis.

  20. Accurate characterization of OPVs: Device masking and different solar simulators

    Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.;

    2013-01-01

    One of the prime objects of organic solar cell research has been to improve the power conversion efficiency. Unfortunately, the accurate determination of this property is not straightforward and has led to the recommendation that record devices be tested and certified at a few accredited laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area...

  1. Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA.

    Robinson, David

    2014-12-01

    A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all valence occupied orbitals and half of the virtual orbitals included but for some molecules even fewer orbitals. Efficiency gains of between 15 and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited state geometry optimization calculation. PMID:26583218

  2. Accurate method of modeling cluster scaling relations in modified gravity

    He, Jian-hua; Li, Baojiu

    2016-06-01

    We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the x-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.

  3. Accurate Programming: Thinking about programs in terms of properties

    Walid Taha

    2011-09-01

    Accurate programming is a practical approach to producing high quality programs. It combines ideas from test automation, test-driven development, agile programming, and other state-of-the-art software development methods. In addition to building on approaches that have proven effective in practice, it emphasizes concepts that help programmers sharpen their understanding of both the problems they are solving and the solutions they come up with. This is achieved by encouraging programmers to think about programs in terms of properties.
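
    In that spirit, a small example of stating a property rather than enumerating individual cases, using the Hypothesis library for Python (one possible tool; the abstract does not prescribe it):

        from hypothesis import given, strategies as st

        def run_length_encode(s):
            """Collapse runs of equal characters into (char, count) pairs."""
            out = []
            for ch in s:
                if out and out[-1][0] == ch:
                    out[-1] = (ch, out[-1][1] + 1)
                else:
                    out.append((ch, 1))
            return out

        def run_length_decode(pairs):
            return "".join(ch * n for ch, n in pairs)

        @given(st.text())
        def test_decode_inverts_encode(s):
            # The property: decoding an encoding returns the original string.
            assert run_length_decode(run_length_encode(s)) == s

        test_decode_inverts_encode()  # Hypothesis generates and checks many inputs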

  4. Accurate studies on dissociation energies of diatomic molecules

    SUN; WeiGuo; FAN; QunChao

    2007-01-01

    The molecular dissociation energies of some electronic states of hydride and N2 molecules were studied using a parameter-free analytical formula suggested in this study and the algebraic method (AM) proposed recently. The results show that the accurate AM dissociation energies De(AM) agree excellently with the experimental dissociation energies De(expt), and that the dissociation energy of an electronic state such as the 2³Δg state of ⁷Li₂, whose experimental value is not available, can be predicted using the new formula.
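
    For orientation (the parameter-free AM formula itself is not given in the abstract), the familiar Morse-oscillator estimate relates the dissociation energy to the spectroscopic constants:

        \[ D_e \approx \frac{\omega_e^2}{4\,\omega_e x_e}, \]

    with harmonic frequency \omega_e and anharmonicity \omega_e x_e; formulas of the kind reported aim to improve substantially on such low-order estimates.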

  5. Pink-Beam, Highly-Accurate Compact Water Cooled Slits

    Advanced Design Consulting, Inc. (ADC) has designed accurate compact slits for applications where high precision is required. The system consists of vertical and horizontal slit mechanisms, a vacuum vessel which houses them, water cooling lines with vacuum guards connected to the individual blades, stepper motors with linear encoders, limit (home position) switches and electrical connections including internal wiring for a drain current measurement system. The total slit size is adjustable from 0 to 15 mm both vertically and horizontally. Each of the four blades are individually controlled and motorized. In this paper, a summary of the design and Finite Element Analysis of the system are presented

  6. Accurate laboratory boresight alignment of transmitter/receiver optical axes

    Martinek, Stephen J.

    1986-01-01

    An apparatus and procedure for the boresight alignment of the transmitter and receiver optical axes of a laser radar system are described. This accurate technique is applicable to both shared and dual aperture systems. A laser autostigmatic cube interferometer (LACI) is utilized to align a paraboloid in autocollimation. The LACI pinhole located at the paraboloid center of curvature becomes the far field receiver track and transmit reference point when illuminated by the transmit beam via a fiber optic pick-off/delay line. Boresight alignment accuracy better than 20 microrad is achievable.

  7. Fast and accurate methods of independent component analysis: A survey

    Tichavský, Petr; Koldovský, Zbyněk

    2011-01-01

    Vol. 47, No. 3 (2011), pp. 426-438. ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords: Blind source separation * artifact removal * electroencephalogram * audio signal processing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf

  8. Simple, Accurate, and Robust Nonparametric Blind Super-Resolution

    Shao, Wen-Ze; Elad, Michael

    2015-01-01

    This paper proposes a simple, accurate, and robust approach to single image nonparametric blind Super-Resolution (SR). This task is formulated as a functional to be minimized with respect to both an intermediate super-resolved image and a nonparametric blur-kernel. The proposed approach includes a convolution consistency constraint which uses a non-blind learning-based SR result to better guide the estimation process. Another key component is the unnatural bi-ℓ0-ℓ2-norm regularization imposed...

  9. Accurate Image Super-Resolution Using Very Deep Convolutional Networks

    Kim, Jiwon; Lee, Jung Kwon; Lee, Kyoung Mu

    2015-01-01

    We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by the VGG-net used for ImageNet classification. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, ho...

  10. Optimized pulse sequences for the accurate measurement of aortic compliance

    Aortic compliance is potentially an important cardiovascular diagnostic parameter by virtue of a proposed correlation with cardiovascular fitness. Measurement requires cross-sectional images of the ascending and descending aorta in systole and diastole for measurement of aortic lumen areas. Diastolic images have poor vessel-wall delineation due to signal from slow-flowing blood. A comparison has been carried out using presaturation (SAT) RF pulses, transparent RF pulses, and flow-compensated gradients in standard pulse sequences to improve vessel-wall delineation in diastole. Properly timed SAT pulses provide the most consistent vessel-wall delineation and the most accurate measurement of aortic compliance.

  11. In vitro transcription accurately predicts lac repressor phenotype in vivo in Escherichia coli

    Matthew Almond Sochor

    2014-07-01

    A multitude of studies have looked at the in vivo and in vitro behavior of the lac repressor binding to DNA and effector molecules in order to study transcriptional repression; however, these studies are not always reconcilable. Here we use in vitro transcription to directly mimic the in vivo system in order to build a self-consistent set of experiments to directly compare in vivo and in vitro genetic repression. A thermodynamic model of the lac repressor binding to operator DNA and effector is used to link DNA occupancy to either normalized in vitro mRNA product or normalized in vivo fluorescence of a regulated gene, YFP. Accurate measurements of repressor, DNA and effector concentrations were made both in vivo and in vitro, allowing for direct modeling of the entire thermodynamic equilibrium. In vivo repression profiles are accurately predicted from the given in vitro parameters when molecular crowding is considered. Interestingly, our measured repressor-operator DNA affinity differs significantly from previous in vitro measurements. The literature values are unable to replicate the in vivo binding data. We therefore conclude that the repressor-DNA affinity is much weaker than previously thought. This finding suggests that in vitro techniques specifically designed to mimic the in vivo process may be necessary to replicate the native system.
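
    A schematic of the thermodynamic linkage involved (the simple-repression limit only; the study's full model also treats effector binding and molecular crowding):

        \[ \frac{\text{expression}}{\text{expression}_{\max}} = \Big( 1 + \frac{[R]_{\mathrm{active}}}{K_{RO}} \Big)^{-1}, \]

    where [R]_active is the concentration of operator-binding-competent repressor (set by effector occupancy) and K_RO is the repressor-operator dissociation constant.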

  12. On the relation between gene flow theory and genetic gain

    Woolliams John A

    2000-01-01

    In conventional gene flow theory the rate of genetic gain is calculated as the summed products of genetic selection differential and asymptotic proportion of genes deriving from sex-age groups. Recent studies have shown that asymptotic proportions of genes predicted from conventional gene flow theory may deviate considerably from true proportions. However, the rate of genetic gain predicted from conventional gene flow theory was accurate. The current note shows that the connection between asymptotic proportions of genes and rate of genetic gain that is embodied in conventional gene flow theory is invalid, even though genetic gain may be predicted correctly from it.

  13. Evidence based selection of housekeeping genes.

    Hendrik J M de Jonge

    For accurate and reliable gene expression analysis, normalization of gene expression data against housekeeping genes (reference or internal control genes) is required. It is known that commonly used housekeeping genes (e.g. ACTB, GAPDH, HPRT1, and B2M) vary considerably under different experimental conditions and therefore their use for normalization is limited. We performed a meta-analysis of 13,629 human gene array samples in order to identify the most stably expressed genes. Here we show novel candidate housekeeping genes (e.g. RPS13, RPL27, RPS20 and OAZ1) with enhanced stability among a multitude of different cell types and varying experimental conditions. None of the commonly used housekeeping genes were present in the top 50 of the most stably expressed genes. In addition, using 2,543 diverse mouse gene array samples we were able to confirm the enhanced stability of the candidate novel housekeeping genes in another mammalian species. Therefore, the identified novel candidate housekeeping genes seem to be the most appropriate choice for normalizing gene expression data.

  14. Isomerism of Cyanomethanimine: Accurate Structural, Energetic, and Spectroscopic Characterization.

    Puzzarini, Cristina

    2015-11-25

    The structures, relative stabilities, and rotational and vibrational parameters of the Z-C-, E-C-, and N-cyanomethanimine isomers have been evaluated using state-of-the-art quantum-chemical approaches. Equilibrium geometries have been calculated by means of a composite scheme based on coupled-cluster calculations that accounts for the extrapolation to the complete basis set limit and core-correlation effects. The latter approach is proved to provide molecular structures with an accuracy of 0.001-0.002 Å and 0.05-0.1° for bond lengths and angles, respectively. Systematically extrapolated ab initio energies, accounting for electron correlation through coupled-cluster theory, including up to single, double, triple, and quadruple excitations, and corrected for core-electron correlation and anharmonic zero-point vibrational energy, have been used to accurately determine relative energies and the Z-E isomerization barrier with an accuracy of about 1 kJ/mol. Vibrational and rotational spectroscopic parameters have been investigated by means of hybrid schemes that allow us to obtain rotational constants accurate to about a few megahertz and vibrational frequencies with a mean absolute error of ∼1%. Where available, for all properties considered, a very good agreement with experimental data has been observed. PMID:26529434
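
    Schematically, composite energies of the kind described combine a complete-basis-set (CBS) extrapolation with additive corrections (notation illustrative, not the paper's):

        \[ E = E_{\mathrm{CBS}}^{\mathrm{CCSD(T)}} + \Delta E_{\mathrm{core}} + \Delta E_{\mathrm{T}} + \Delta E_{\mathrm{Q}} + \Delta E_{\mathrm{ZPE}}, \qquad E_X^{\mathrm{corr}} \approx E_{\mathrm{CBS}}^{\mathrm{corr}} + A\,X^{-3}, \]

    where X is the cardinal number of the basis set and \Delta E_{\mathrm{T}}, \Delta E_{\mathrm{Q}} account for full triple and quadruple excitations.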

  15. Canadian consumer issues in accurate and fair electricity metering

    The Public Interest Advocacy Centre (PIAC), located in Ottawa, participates in regulatory proceedings concerning electricity and natural gas to support the public and consumer interest. PIAC provides legal representation, research and policy support, and public advocacy. A study aimed at determining the issues at stake for residential electricity consumers in the provision of fair and accurate electricity metering was commissioned by Measurement Canada in consultation with Industry Canada's Consumer Affairs. The metering of electricity must be carried out in a fair and efficient manner for all residential consumers. The Electricity and Gas Inspection Act was developed to ensure compliance with standards for measuring instrumentation. The accurate metering of electricity through the distribution systems for electricity in Canada represents the main focus of this study and report. The role played by Measurement Canada, the increased efficiencies of service delivery by Measurement Canada, and the changing electricity market conditions are of special interest. The role of Measurement Canada was explained, as were the concerns of residential consumers. A comparison was then made between the interests of residential consumers and those of commercial and industrial electricity consumers in electricity metering. Selected American and Commonwealth jurisdictions were reviewed in light of their electricity metering practices. A section on compliance and conflict resolution was included, in addition to a section on the use of voluntary codes for compliance and conflict resolution.

  16. How accurately can 21cm tomography constrain cosmology?

    Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver

    2008-07-01

    There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions, focusing on the 21 cm signal from 6 < z < 20: sensitivity to noise, to uncertainties in the reionization history, and to the level of contamination from astrophysical foregrounds. We derive simple analytic estimates for how various assumptions affect an experiment's sensitivity, and we find that the modeling of reionization is the most important, followed by the array layout. We present an accurate yet robust method for measuring cosmological parameters that exploits the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. We find that for future experiments, marginalizing over these nuisance parameters may provide constraints almost as tight on the cosmology as if 21 cm tomography measured the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩk ≈ 0.0002 and Δmν ≈ 0.007 eV, and give a 4σ detection of the spectral index running predicted by the simplest inflation models.
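
    The forecasts quoted rest on Fisher-matrix sensitivity estimates of the generic form (standard formalism, not this paper's specific notation):

        \[ F_{ij} = \sum_{\mathbf{k}} \frac{1}{\sigma^2[P(\mathbf{k})]} \frac{\partial P(\mathbf{k})}{\partial \theta_i} \frac{\partial P(\mathbf{k})}{\partial \theta_j}, \qquad \Delta\theta_i \gtrsim \sqrt{(F^{-1})_{ii}}, \]

    with P the 21 cm power spectrum and \theta the cosmological plus nuisance parameters (here, the 7 phenomenological ionization parameters) marginalized over.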

  17. Accurate measurement of streamwise vortices using dual-plane PIV

    Waldman, Rye M.; Breuer, Kenneth S. [Brown University, School of Engineering, Providence, RI (United States)

    2012-11-15

    Low Reynolds number aerodynamic experiments with flapping animals (such as bats and small birds) are of particular interest due to their application to micro air vehicles which operate in a similar parameter space. Previous PIV wake measurements described the structures left by bats and birds and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions based on said measurements. The highly three-dimensional and unsteady nature of the flows associated with flapping flight are major challenges for accurate measurements. The challenge of animal flight measurements is finding small flow features in a large field of view at high speed with limited laser energy and camera resolution. Cross-stream measurement is further complicated by the predominately out-of-plane flow that requires thick laser sheets and short inter-frame times, which increase noise and measurement uncertainty. Choosing appropriate experimental parameters requires compromise between the spatial and temporal resolution and the dynamic range of the measurement. To explore these challenges, we do a case study on the wake of a fixed wing. The fixed model simplifies the experiment and allows direct measurements of the aerodynamic forces via load cell. We present a detailed analysis of the wake measurements, discuss the criteria for making accurate measurements, and present a solution for making quantitative aerodynamic load measurements behind free-flyers.

  18. Accurate 3D quantification of the bronchial parameters in MDCT

    Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.

    2005-08-01

    The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding such a disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of the bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchi contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically-modeled phantom of a bronchus-vessel pair which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error less than 5.1%.

  19. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules
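
    A generic sketch of the supervised-regression ingredient in such ML-SQC schemes, here illustrated as learning corrections to computed enthalpies from molecular descriptors (the paper learns corrections to the SQC parameters themselves; all data and names below are toy placeholders, with kernel ridge regression as one common model choice):

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        # Toy stand-ins: X holds molecular descriptors, y the error of the
        # semiempirical atomization enthalpy w.r.t. accurate reference values.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 30))               # e.g., Coulomb-matrix features
        y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + 0.1 * rng.normal(size=500)

        model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.05)
        model.fit(X[:400], y[:400])

        # Corrected prediction = baseline SQC value + learned correction.
        corrections = model.predict(X[400:])
        print(float(np.abs(corrections - y[400:]).mean()))  # held-out MAE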

  20. Cerebral fat embolism: Use of MR spectroscopy for accurate diagnosis

    Laxmi Kokatnur

    2015-01-01

    Cerebral fat embolism (CFE) is an uncommon but serious complication following orthopedic procedures. It usually presents with altered mental status, and can be a part of fat embolism syndrome (FES) if associated with cutaneous and respiratory manifestations. Because of the presence of other common factors affecting the mental status, particularly in the postoperative period, the diagnosis of CFE can be challenging. Magnetic resonance imaging (MRI) of the brain typically shows multiple lesions distributed predominantly in the subcortical region, which appear as hyperintense lesions on T2 and diffusion weighted images. Although the location offers a clue, the MRI findings are not specific for CFE. Watershed infarcts, hypoxic encephalopathy, disseminated infections, demyelinating disorders, and diffuse axonal injury can also show similar changes on brain MRI. The presence of fat in these hyperintense lesions, identified by MR spectroscopy as raised lipid peaks, helps in the accurate diagnosis of CFE. Normal brain tissue and conditions producing similar MRI changes will not show any lipid peak on MR spectroscopy. We present a case of CFE initially misdiagnosed as brain stem stroke based on clinical presentation and cranial computed tomography (CT) scan, in which MR spectroscopy later elucidated the accurate diagnosis.

  1. The economic value of accurate wind power forecasting to utilities

    Watson, S.J. [Rutherford Appleton Lab., Oxfordshire (United Kingdom); Giebel, G.; Joensen, A. [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

    With increasing penetrations of wind power, the need for accurate forecasting is becoming ever more important. Wind power is by its very nature intermittent. For utility schedulers this presents its own problems, particularly when the penetration of wind power capacity in a grid reaches a significant level (>20%). However, using accurate forecasts of wind power at wind farm sites, schedulers are able to plan the operation of conventional power capacity to accommodate the fluctuating demands of consumers and wind farm output. The results of a study to assess the value of forecasting at several potential wind farm sites in the UK and in the US state of Iowa using the Reading University/Rutherford Appleton Laboratory National Grid Model (NGM) are presented. The results are assessed for different types of wind power forecasting, namely: persistence, optimised numerical weather prediction, or perfect forecasting. In particular, it will be shown how the NGM has been used to assess the value of numerical weather prediction forecasts from the Danish Meteorological Institute model, HIRLAM, and the US Nested Grid Model, which have been 'site tailored' by the use of the linearized flow model WAsP and by various Model Output Statistics (MOS) and autoregressive techniques. (au)
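
    A toy comparison of the forecast types discussed above, persistence against a crude MOS-style bias correction of a synthetic NWP forecast (all series and numbers are invented; operational MOS regresses on many predictors):

      import numpy as np

      rng = np.random.default_rng(2)
      t = np.arange(500)
      power = np.clip(0.5 + 0.35 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 0.05, t.size), 0, 1)
      nwp = np.clip(power + 0.08 + rng.normal(0, 0.07, t.size), 0, 1)  # biased, noisy NWP forecast

      persistence = np.roll(power, 1)            # forecast(t) = observation(t-1)
      bias = (nwp[:250] - power[:250]).mean()    # "site tailoring": fit bias on a training window
      mos = np.clip(nwp - bias, 0, 1)            # bias-corrected NWP

      for name, f in [("persistence", persistence), ("raw NWP", nwp), ("MOS NWP", mos)]:
          rmse = np.sqrt(((f[250:] - power[250:]) ** 2).mean())
          print(f"{name:12s} RMSE: {rmse:.3f}")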

  2. Atomic spectroscopy and highly accurate measurement: determination of fundamental constants

    This document reviews the theoretical and experimental achievements of the author concerning highly accurate atomic spectroscopy applied to the determination of fundamental constants. A pure optical frequency measurement of the 2S-12D two-photon transitions in atomic hydrogen and deuterium has been performed. The experimental setup is described, as well as the data analysis. Optimized values for the Rydberg constant and Lamb shifts have been deduced (R = 109737.31568516(84) cm^-1). An experiment devoted to the determination of the fine structure constant with a target relative uncertainty of 10^-9 began in 1999. This experiment is based on the fact that Bloch oscillations in a frequency-chirped optical lattice are a powerful tool to coherently transfer many photon momenta to the atoms. We have used this method to measure accurately the ratio h/m(Rb). The measured value of the fine structure constant is α^-1 = 137.03599884(91), with a relative uncertainty of 6.7 x 10^-9. The future and perspectives of this experiment are presented. This document, presented before an academic board, will allow its author to manage research work and particularly to tutor thesis students. (A.C.)
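
    The standard route from the measured h/m(Rb) ratio to the fine structure constant combines it with the Rydberg constant and the relative atomic masses of the electron and rubidium, all known to higher precision:

    $$ \alpha^{2} = \frac{2 R_{\infty}}{c}\,\frac{A_r(\mathrm{Rb})}{A_r(e)}\,\frac{h}{m_{\mathrm{Rb}}} $$

    so the relative uncertainty of α is essentially half that of h/m(Rb), plus the smaller contributions of R∞ and the mass ratios.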

  3. KFM: a homemade yet accurate and dependable fallout meter

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The step-by-step illustrated instructions for making and using a KFM are presented. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM.

  4. Accurate location estimation of moving objects in wireless sensor networks

    Vinay Bhaskar Semwal

    2011-12-01

    One of the central issues in wireless sensor networks is tracking the location of a moving object, which carries the overhead of storing data and demands an accurate estimation of the target location under energy constraints. There is no mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information is fed back to central air conditioning and ventilation. In this research paper, we propose a protocol based on a prediction and adaptation algorithm that reduces the number of sensor nodes used, by means of an accurate estimation of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network while using fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.
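
    A hypothetical sketch of the prediction-based idea (grid layout, sensing range, and motion model are all invented for illustration): a constant-velocity predictor wakes only the nodes near the expected target position, so the remaining nodes can sleep and save energy.

      import numpy as np

      nodes = np.array([[x, y] for x in range(0, 100, 10) for y in range(0, 100, 10)], float)
      sense_range = 12.0

      pos = np.array([5.0, 5.0])                 # true target position (unknown to the network)
      vel = np.array([1.5, 1.0])
      est, prev_est = pos.copy(), pos - vel      # position estimates built from detections
      awake_counts = []
      for _ in range(40):
          pos = pos + vel                        # true target motion
          predicted = est + (est - prev_est)     # constant-velocity prediction
          awake = np.linalg.norm(nodes - predicted, axis=1) < sense_range
          awake_counts.append(awake.sum())
          detecting = awake & (np.linalg.norm(nodes - pos, axis=1) < sense_range)
          if detecting.any():                    # new estimate = centroid of detecting nodes
              prev_est, est = est, nodes[detecting].mean(axis=0)
      print("mean nodes awake per step:", np.mean(awake_counts), "of", len(nodes))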

  5. Accurate interlaminar stress recovery from finite element analysis

    Tessler, Alexander; Riggs, H. Ronald

    1994-01-01

    The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing strains and first strain gradients of superior accuracy. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply-supported rectangular plate under a doubly sinusoidal load. The problem has an exact analytic solution which serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.
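
    Schematically, such a formulation blends a discrete least-squares fit of the element-level strains with a penalty coupling the smoothed field to an independently interpolated gradient (notation illustrative, not the paper's exact functional):

    $$ \Phi(\hat{\varepsilon},\boldsymbol{\theta}) = \frac{1}{2}\sum_{i=1}^{N}\bigl[\hat{\varepsilon}(\mathbf{x}_i)-\varepsilon_i\bigr]^{2} + \frac{\lambda}{2}\int_{\Omega}\bigl\lVert \nabla\hat{\varepsilon}-\boldsymbol{\theta} \bigr\rVert^{2}\,d\Omega $$

    where the ε_i are the strains sampled at the optimal points, and a large penalty weight λ drives ∇ε̂ toward θ, yielding the practically C1-continuous field and the directly available gradients described above.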

  6. Accurate rest frequencies of methanol maser and dark cloud lines

    Müller, H S P; Maeder, H

    2004-01-01

    We report accurate laboratory measurements of selected methanol transition frequencies between 0.834 and 230 GHz in order to facilitate astronomical velocity analyses. New data have been obtained between 10 and 27 GHz and between 60 and 119 GHz. Emphasis has been put on known or potential interstellar maser lines as well as on transitions suitable for the investigation of cold dark clouds. Because of the narrow line widths (< 0.5 km s^-1) of maser lines and lines detected in dark molecular clouds, accurate frequencies are needed for comparison of the velocities of different methanol lines with each other as well as with lines from other species. In particular, frequencies for a comprehensive set of transitions are given which, because of their low level energies (< 20 cm^-1 or 30 K), are potentially detectable in cold clouds. Global Hamiltonian fits generally do not yet yield the required accuracy. Additionally, we report transition frequencies for other lines that may be used to test and to improve existing...

  7. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Zhang Mingheng

    2013-01-01

    Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement for intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multistep prediction can forecast the traffic state trend over a certain period in the future; from the perspective of dynamic decision making, this is far more important than knowledge of the current traffic condition alone. Thus, in this paper, an accurate multistep traffic flow prediction model based on SVM is proposed, in which the input vectors comprise actual traffic volumes; four different types of input vectors were compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model had a good ability for traffic flow prediction, with the SVM-HPT model outperforming the other three models.
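
    A minimal sketch of recursive multistep prediction with an SVM regressor trained on lagged volumes (synthetic data and hyperparameters; the paper's SVM-HPT input design is not reproduced here):

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(3)
      t = np.arange(600)
      volume = 400 + 150 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 20, t.size)  # synthetic daily cycle

      lags = 8                                         # past observations per input vector
      X = np.array([volume[i:i + lags] for i in range(len(volume) - lags)])
      y = volume[lags:]

      model = SVR(kernel="rbf", C=100.0, epsilon=5.0)
      model.fit(X[:500], y[:500])                      # train on the first 500 samples

      history = list(volume[500:500 + lags])           # most recent observed window
      preds = []
      for _ in range(12):                              # 12-step-ahead recursive forecast
          nxt = model.predict(np.array(history[-lags:])[None, :])[0]
          preds.append(nxt)
          history.append(nxt)                          # feed each prediction back as input
      print(np.round(preds, 1))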

  8. Ultrasound mediated gene transfection

    Williamson, Rene G.; Apfel, Robert E.; Brandsma, Janet L.

    2002-05-01

    Gene therapy is a promising modality for the treatment of a variety of human diseases both inherited and acquired, such as cystic fibrosis and cancer. The lack of an effective, safe method for the delivery of foreign genes into the cells, a process known as transfection, limits this effort. Ultrasound mediated gene transfection is an attractive method for gene delivery since it is a noninvasive technique, does not introduce any viral particles into the host and can offer very good temporal and spatial control. Previous investigators have shown that sonication increases transfection efficiency with and without ultrasound contrast agents. The mechanism is believed to be via a cavitation process where collapsing bubble nuclei permeabilize the cell membrane leading to increased DNA transfer. The research is focused on the use of pulsed wave high frequency focused ultrasound to transfect DNA into mammalian cells in vitro and in vivo. A better understanding of the mechanism behind the transfection process is also sought. A summary of some in vitro results to date will be presented, which includes the design of a sonication chamber that allows us to model the in vivo case more accurately.

  9. Reranking candidate gene models with cross-species comparison for improved gene prediction

    Pereira Fernando CN

    2008-10-01

    Abstract Background Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely-related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.
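
    A toy sketch of the reranking step (fields, scores, and weights are invented; Evigan's features and the paper's scoring rules differ):

      from dataclasses import dataclass

      @dataclass
      class GeneModel:
          name: str
          finder_score: float         # e.g., log-probability from the gene finder
          ortholog_similarity: float  # e.g., protein identity to a putative ortholog

      def rerank(candidates, w_finder=1.0, w_ortholog=4.0):
          """Sort candidates by a weighted global score combining both evidence sources."""
          return sorted(candidates,
                        key=lambda m: w_finder * m.finder_score
                                    + w_ortholog * m.ortholog_similarity,
                        reverse=True)

      k_best = [GeneModel("model_A", -10.2, 0.35),
                GeneModel("model_B", -11.0, 0.92),  # weaker finder score, strong ortholog support
                GeneModel("model_C", -12.5, 0.40)]
      print([m.name for m in rerank(k_best)])       # model_B overtakes model_A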

  10. Accurate Measurement of the Effects of All Amino-Acid Mutations on Influenza Hemagglutinin

    Doud, Michael B.; Bloom, Jesse D.

    2016-01-01

    Influenza genes evolve mostly via point mutations, and so knowing the effect of every amino-acid mutation provides information about evolutionary paths available to the virus. We and others have combined high-throughput mutagenesis with deep sequencing to estimate the effects of large numbers of mutations to influenza genes. However, these measurements have suffered from substantial experimental noise due to a variety of technical problems, the most prominent of which is bottlenecking during the generation of mutant viruses from plasmids. Here we describe advances that ameliorate these problems, enabling us to measure with greatly improved accuracy and reproducibility the effects of all amino-acid mutations to an H1 influenza hemagglutinin on viral replication in cell culture. The largest improvements come from using a helper virus to reduce bottlenecks when generating viruses from plasmids. Our measurements confirm at much higher resolution the results of previous studies suggesting that antigenic sites on the globular head of hemagglutinin are highly tolerant of mutations. We also show that other regions of hemagglutinin—including the stalk epitopes targeted by broadly neutralizing antibodies—have a much lower inherent capacity to tolerate point mutations. The ability to accurately measure the effects of all influenza mutations should enhance efforts to understand and predict viral evolution. PMID:27271655

  11. HMM-FRAME: accurate protein domain classification for metagenomic sequences containing frameshift errors

    Sun Yanni

    2011-05-01

    Abstract Background Protein domain classification is an important step in metagenomic annotation. The state-of-the-art method for protein domain classification is profile HMM-based alignment. However, the relatively high rates of insertions and deletions in homopolymer regions of pyrosequencing reads create frameshifts, causing conventional profile HMM alignment tools to generate alignments with marginal scores. This makes error-containing gene fragments unclassifiable with conventional tools. Thus, there is a need for an accurate domain classification tool that can detect and correct sequencing errors. Results We introduce HMM-FRAME, a protein domain classification tool based on an augmented Viterbi algorithm that can incorporate error models from different sequencing platforms. HMM-FRAME corrects sequencing errors and classifies putative gene fragments into domain families. It achieved high error detection sensitivity and specificity in a data set with annotated errors. We applied HMM-FRAME in Targeted Metagenomics and a published metagenomic data set. The results showed that our tool can correct frameshifts in error-containing sequences, generate much longer alignments with significantly smaller E-values, and classify more sequences into their native families. Conclusions HMM-FRAME provides a complementary protein domain classification tool to conventional profile HMM-based methods for data sets containing frameshifts. Its current implementation is best used for small-scale metagenomic data sets. The source code of HMM-FRAME can be downloaded at http://www.cse.msu.edu/~zhangy72/hmmframe/ and at https://sourceforge.net/projects/hmm-frame/.

  12. Gene therapy

    2005-01-01

    2005147 CNHK200-hA-a gene-viral therapeutic system and its antitumor effect on lung cancer. WANG Wei-guo (王伟国), et al. Viral & Gene Ther Center, Eastern Hepatobilli Surg Instit, 2nd Milit Univ, Shanghai 200438. Chin J Oncol, 2005; 27(2): 69-72. Objective: To develop a novel vector system, which combines the advantages of the gene therapy,

  13. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these

  14. The importance of accurate meteorological input fields and accurate planetary boundary layer parameterizations, tested against ETEX-1

    Atmospheric transport of air pollutants is, in principle, a well understood process. If information about the state of the atmosphere were given in all details (infinitely accurate information about wind speed, etc.) and infinitely fast computers were available, then the advection equation could in principle be solved exactly. This is, however, not the case: discretization of the equations and of the input data introduces uncertainties and errors in the results. Therefore many different issues have to be carefully studied in order to diminish these uncertainties and to develop an accurate transport model. Among them are, e.g., the numerical treatment of the transport equation, the accuracy of the mean meteorological input fields, and the parameterization of sub-grid scale phenomena (such as parameterizations of the 2nd and higher order turbulence terms needed to reach closure in the perturbation equation). A tracer model for studying transport and dispersion of air pollution caused by a single but strong source is under development. The model simulations from the first ETEX release illustrate the differences caused by using various analyzed fields directly in the tracer model or using a meteorological driver. Also, different parameterizations of the mixing height and the vertical exchange are compared. (author)
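
    As a minimal illustration of how discretization introduces errors, a first-order upwind scheme for 1-D advection visibly diffuses an initially sharp puff even though the exact solution only translates it (a generic example, not the tracer model described above):

      import numpy as np

      nx, L, a = 200, 100.0, 1.0
      dx = L / nx
      dt = 0.4 * dx / a                               # CFL number 0.4 keeps the scheme stable
      x = np.linspace(0, L, nx, endpoint=False)
      u = np.exp(-0.5 * ((x - 20) / 3.0) ** 2)        # initial pollutant puff

      for _ in range(300):                            # u_t + a u_x = 0, periodic domain
          u = u - a * dt / dx * (u - np.roll(u, 1))   # upwind difference for a > 0

      print("peak after transport:", u.max().round(3), "at x =", x[u.argmax()].round(1))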

  15. A new accurate pill recognition system using imprint information

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine benefit human beings. They have also brought about an explosive growth of the pharmaceuticals currently on the market. In daily life, pharmaceuticals sometimes confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It functions mainly on the basis of the imprint feature of the pills, which is extracted by the proposed MSWT (modified stroke width transform) and described by WSC (weighted shape context). Experiments show that our proposed pill recognition method can reach an accuracy of up to 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2000 categories.

  16. Spectropolarimetrically accurate magnetohydrostatic sunspot model for forward modelling in helioseismology

    Przybylski, D; Cally, P S

    2015-01-01

    We present a technique to construct a spectropolarimetrically accurate magneto-hydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion and absorption in the solar interior and photosphere with the sunspot embedded into it. With the $6173\\mathrm{\\AA}$ magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as full Stokes vector for the simulation at various positions at the solar disk, and analyse the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterised. An increase in acoustic power in the simulated observ...

  17. Analytical method to accurately predict LMFBR core flow distribution

    An accurate and detailed representation of the flow distribution in LMFBR cores is very important as the starting point and basis of the thermal and structural core design. Previous experience indicated that the steady state and transient core design is only as good as the core orificing; thus, a new orificing philosophy satisfying a priori all design constraints was developed. However, optimized orificing is a necessary, but not sufficient, condition for achieving the optimum core flow distribution, which is affected by the hydraulic characteristics of the remainder of the primary system. Consequently, an analytical model of the overall primary system was developed, resulting in the CATFISH computer code, which, even though specifically written for LMFBRs, can be used for any reactor employing ducted assemblies.

  18. Accurate performance analysis of opportunistic decode-and-forward relaying

    Tourki, Kamel

    2011-07-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and the destination may use a selection combining technique. We first derive the exact statistics of each hop, in terms of probability density function (PDF). Then, the PDFs are used to determine accurate closed form expressions for end-to-end outage probability for a transmission rate R. Furthermore, we evaluate the asymptotical performance analysis and the diversity order is deduced. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.
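
    For half-duplex DF relaying with relay selection and selection combining at the destination, the end-to-end outage at target rate R takes the generic product form below (illustrative; the paper derives the exact expressions from the per-hop PDFs):

    $$ P_{\mathrm{out}}(R) = \Pr\bigl[\gamma_{sd}<\gamma_{\mathrm{th}}\bigr]\,\Pr\Bigl[\max_{k}\min(\gamma_{sk},\gamma_{kd})<\gamma_{\mathrm{th}}\Bigr], \qquad \gamma_{\mathrm{th}} = 2^{2R}-1 $$

    where the factor of 2 in the exponent reflects the two time slots of half-duplex transmission, and independence of the direct and relayed branches is assumed.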

  19. Accurate numerical solution of compressible, linear stability equations

    Malik, M. R.; Chuang, S.; Hussaini, M. Y.

    1982-01-01

    The present investigation is concerned with a fourth order accurate finite difference method and its application to the study of the temporal and spatial stability of the three-dimensional compressible boundary layer flow on a swept wing. This method belongs to the class of compact two-point difference schemes discussed by White (1974) and Keller (1974). The method was apparently first used for solving the two-dimensional boundary layer equations. Attention is given to the governing equations, the solution technique, and the search for eigenvalues. A general purpose subroutine is employed for solving a block tridiagonal system of equations. The computer time can be reduced significantly by exploiting the special structure of two matrices.
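
    A representative member of this class is the fourth-order compact approximation of the second derivative on a uniform grid (a generic example, not necessarily the exact stencil used in the study):

    $$ \tfrac{1}{12}\bigl(f''_{i-1} + 10 f''_{i} + f''_{i+1}\bigr) = \frac{f_{i-1}-2f_{i}+f_{i+1}}{h^{2}} + O(h^{4}) $$

    which couples the unknown derivative values at neighbouring points and is solved as a tridiagonal system, achieving fourth-order accuracy on a three-point stencil.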

  20. Accurate volume measurement system for plutonium nitrate solution

    An accurate volume measurement system for a large amount of plutonium nitrate solution stored in a reprocessing or a conversion plant has been developed at the Plutonium Conversion Development Facility (PCDF) in the Power Reactor and Nuclear Fuel Development Corp. (PNC) Tokai Works. A pair of differential digital quartz pressure transducers is utilized in the volume measurement system. To obtain high accuracy, it is important that the non-linearity of the transducer is minimized within the measurement range, the zero point is stabilized, and the damping property of the pneumatic line is designed to minimize pressure oscillation. The accuracy of the pressure measurement can always be within 2Pa with re-calibration once a year. In the PCDF, the overall uncertainty of the volume measurement has been evaluated to be within 0.2 %. This system has been successfully applied to the Japanese government's and IAEA's routine inspection since 1984. (author)

  1. Accurate bond dissociation energies (D0) for FHF- isotopologues

    Stein, Christopher; Oswald, Rainer; Sebald, Peter; Botschwina, Peter; Stoll, Hermann; Peterson, Kirk A.

    2013-09-01

    Accurate bond dissociation energies (D0) are determined for three isotopologues of the bifluoride ion (FHF-). While the zero-point vibrational contributions are taken from our previous work (P. Sebald, A. Bargholz, R. Oswald, C. Stein, P. Botschwina, J. Phys. Chem. A, DOI: 10.1021/jp3123677), the equilibrium dissociation energy (De) of the dissociation reaction was obtained by a composite method including frozen-core (fc) CCSD(T) calculations with basis sets up to cardinal number n = 7, followed by extrapolation to the complete basis set limit. Smaller terms beyond fc-CCSD(T) cancel each other almost completely. The D0 values of FHF-, FDF-, and FTF- are predicted to be 15,176, 15,191, and 15,198 cm^-1, respectively, with an uncertainty of ca. 15 cm^-1.
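
    A common two-point form of such a complete-basis-set extrapolation assumes the correlation energy converges as the inverse cube of the cardinal number n (generic textbook form; the exact scheme used in this study may differ):

    $$ E_{\mathrm{corr}}(n) = E_{\mathrm{corr}}^{\mathrm{CBS}} + \frac{A}{n^{3}} \quad\Longrightarrow\quad E_{\mathrm{corr}}^{\mathrm{CBS}} = \frac{n^{3}E_{n}-(n-1)^{3}E_{n-1}}{n^{3}-(n-1)^{3}} $$

    with the Hartree-Fock contribution usually extrapolated separately.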

  2. Efficient and Accurate Indoor Localization Using Landmark Graphs

    Gu, F.; Kealy, A.; Khoshelham, K.; Shang, J.

    2016-06-01

    Indoor localization is important for a variety of applications such as location-based services, mobile social networks, and emergency response. Fusing spatial information is an effective way to achieve accurate indoor localization with little or no need for extra hardware. However, existing indoor localization methods that make use of spatial information are either too computationally expensive or too sensitive to the completeness of landmark detection. In this paper, we solve this problem by using the proposed landmark graph. The landmark graph is a directed graph where nodes are landmarks (e.g., doors, staircases, and turns) and edges are accessible paths with heading information. We compared the proposed method with two common Dead Reckoning (DR)-based methods (namely, Compass + Accelerometer + Landmarks and Gyroscope + Accelerometer + Landmarks) in a series of experiments. Experimental results show that the proposed method achieves 73% accuracy with a positioning error of less than 2.5 meters, which outperforms the two DR-based methods.
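
    A hypothetical sketch of the idea behind landmark-corrected dead reckoning (all coordinates, noise levels, and landmark names are invented): step-and-heading propagation accumulates drift, and encountering a node of the landmark graph resets the estimate.

      import numpy as np

      rng = np.random.default_rng(4)
      landmarks = {"door_12": np.array([10.0, 0.0]), "stairs_A": np.array([20.0, 5.0])}

      pos_true = np.array([0.0, 0.0])
      pos_est = pos_true.copy()
      heading, step = 0.0, 0.7                   # radians, meters per detected step
      for k in range(60):
          if k == 20:
              heading = np.pi / 4                # a turn along the corridor
          pos_true = pos_true + step * np.array([np.cos(heading), np.sin(heading)])
          noisy_step = step + rng.normal(0, 0.05)
          noisy_heading = heading + rng.normal(0, 0.05)
          pos_est = pos_est + noisy_step * np.array([np.cos(noisy_heading), np.sin(noisy_heading)])
          for lm in landmarks.values():          # landmark encounter => hard position reset
              if np.linalg.norm(pos_true - lm) < 0.5:
                  pos_est = lm.copy()
      print("final dead-reckoning error [m]:", np.linalg.norm(pos_est - pos_true).round(2))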

  3. An Integrative Approach to Accurate Vehicle Logo Detection

    Hao Pan

    2013-01-01

    Accurate vehicle logo detection is required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed following the visual attention mechanism of human vision. Two pre-logo detection steps, that is, vehicle region detection and small RoI segmentation, rapidly focalize a small logo target. An enhanced Adaboost algorithm, together with two types of features (Haar and HOG), is proposed to detect vehicles. An RoI that covers logos is segmented based on our prior knowledge about the logos' position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier processes the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.

  4. Accurate characterisation of post moulding shrinkage of polymer parts

    Neves, L. C.; De Chiffre, L.; González-Madruga, D.;

    2015-01-01

    The work deals with experimental determination of the shrinkage of polymer parts after injection moulding. A fixture for length measurements on 8 parts at the same time was designed and manufactured in Invar, mounted with 8 electronic gauges, and provided with 3 temperature sensors. The fixture was used to record the length at a well-defined position on each part continuously, starting from approximately 10 minutes after moulding and covering a time period of 7 days. Two series of shrinkage curves were analysed and length values after stabilisation extracted and compared for all 16 parts. Values were compensated with respect to the effect of temperature variations during the measurements. Prediction of the length after stabilisation was carried out by fitting data at different stages of shrinkage. Uncertainty estimations were carried out, and a procedure for the accurate characterisation of

  5. Accurate finite difference methods for time-harmonic wave propagation

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Padé approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.

  6. Accurate Modeling of Buck Converters with Magnetic-Core Inductors

    Astorino, Antonio; Antonini, Giulio; Swaminathan, Madhavan

    2015-01-01

    In this paper, a modeling approach for buck converters with magnetic-core inductors is presented. Due to the high nonlinearity of magnetic materials, frequency domain analysis of such circuits is not suitable for an accurate description of their behaviour. Hence, in this work, a time-domain model of buck converters with magnetic-core inductors in a Simulink environment is proposed. As an example, the presented approach is used to simulate an eight-phase buck converter. The simulation results show that an unexpected system behaviour in terms of current ripple amplitude needs the inductor core

  7. Accurate Parallel Algorithm for Adini Nonconforming Finite Element

    罗平; 周爱辉

    2003-01-01

    Multi-parameter asymptotic expansions are interesting since they justify the use of multi-parameter extrapolation, which can be implemented in parallel, and they are well studied in many papers for conforming finite element methods. For nonconforming finite element methods, however, work on multi-parameter asymptotic expansions and extrapolation has seldom been reported in the literature. This paper considers the solution of the biharmonic equation using Adini nonconforming finite elements and reports new results for multi-parameter asymptotic expansions and extrapolation. The Adini nonconforming finite element solution of the biharmonic equation is shown to have a multi-parameter asymptotic error expansion and extrapolation. This expansion and a multi-parameter extrapolation technique were used to develop an accurate approximation parallel algorithm for the biharmonic equation. Finally, numerical results have verified the extrapolation theory.

  8. Accurate object tracking system by integrating texture and depth cues

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

    A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, in order to reduce the increased risk of drifting caused by textureless depth templates, an update mechanism is proposed to select more precise tracking results and avoid incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system can provide the best success rate and more accurate tracking results than other well-known algorithms.

  9. Accurate Derivative Evaluation for any Grad-Shafranov Solver

    Ricketson, L F; Rachh, M; Freidberg, J P

    2015-01-01

    We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad-Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.

  10. Accurate derivative evaluation for any Grad-Shafranov solver

    Ricketson, L. F.; Cerfon, A. J.; Rachh, M.; Freidberg, J. P.

    2016-01-01

    We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad-Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.

  11. Accurate monitoring of large aligned objects with videometric techniques

    Klumb, F; Grussenmeyer, P

    1999-01-01

    This paper describes a new videometric technique designed to monitor the deformations and misalignments of large vital components in the centre of a future particle detector. It relies on a geometrical principle called "reciprocal collimation" of two CCD cameras: combining the video devices in pairs gives rise to a network of well-located reference lines that surround the object to be surveyed. Each observed point, which in practice is a bright point-like light-source, is accurately located with respect to this network of neighbouring axes. Adjustment calculations provide the three-dimensional position of the object fitted with various light-sources. An experimental test-bench, equipped with four cameras, has corroborated the precision predicted by previous simulations of the system. (11 refs).

  12. Methods for Accurate Free Flight Measurement of Drag Coefficients

    Courtney, Elya; Courtney, Michael

    2015-01-01

    This paper describes experimental methods for free flight measurement of drag coefficients to an accuracy of approximately 1%. There are two main methods of determining free flight drag coefficients, or equivalent ballistic coefficients: 1) measuring near and far velocities over a known distance and 2) measuring a near velocity and time of flight over a known distance. Atmospheric conditions must also be known and nearly constant over the flight path. A number of tradeoffs are important when designing experiments to accurately determine drag coefficients. The flight distance must be large enough so that the projectile's loss of velocity is significant compared with its initial velocity and much larger than the uncertainty in the near and/or far velocity measurements. On the other hand, since drag coefficients and ballistic coefficients both depend on velocity, the change in velocity over the flight path should be small enough that the average drag coefficient over the path (which is what is really determined)...

  13. Natural orbital expansions of highly accurate three-body wavefunctions

    Natural orbital expansions are considered for highly accurate three-body wavefunctions written in the relative coordinates r32, r31 and r21. Our present method is applied to the ground S(L = 0) -state wavefunctions of the Ps- and ∞H- ions. Our best variational energies computed herein for these systems are E(Ps-) = -0.262 005 070 232 980 107 7666 au and E(∞H-) = -0.5277510165443771965865 au, respectively. The variational wavefunctions determined for these systems contain between 2000 and 4200 exponential basis functions. In general, the natural orbital expansions of these functions are compact and rapidly convergent functions, which are represented as linear combinations of some relatively simple functions. The natural orbitals can be very useful in various applications, including photodetachment and scattering problems.

  14. Fast and accurate automated cell boundary determination for fluorescence microscopy

    Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider

    2013-07-01

    Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.

  15. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454

  16. Accurate macroscale modelling of spatial dynamics in multiple dimensions

    Roberts, A ~J; Bunder, J ~E

    2011-01-01

    Developments in dynamical systems theory provide new support for the macroscale modelling of PDEs and other microscale systems such as Lattice Boltzmann, Monte Carlo or Molecular Dynamics simulators. By systematically resolving subgrid microscale dynamics, the dynamical systems approach constructs accurate closures of macroscale discretisations of the microscale system. Here we specifically explore reaction-diffusion problems in two spatial dimensions as a prototype of generic systems in multiple dimensions. Our approach unifies into one the modelling of systems by a type of finite elements, and the 'equation free' macroscale modelling of microscale simulators efficiently executing only on small patches of the spatial domain. Centre manifold theory ensures that a closed model exists on the macroscale grid, is emergent, and is systematically approximated. Dividing space either into overlapping finite elements or into spatially separated small patches, the specially crafted inter-element/patch coupling als...

  17. How accurately can digital images depict conventional radiographs

    The purpose of this paper is to investigate how accurately the video image of a digitized chest radiograph can depict the normal anatomic configurations of thoracic organs seen on a conventional radiograph. These configurations are important to the diagnosis of diseases of the chest. Chest radiographs of 50 individuals diagnosed as normal were analyzed. Three chest physicians and one radiologist reviewed 50 pairs of images: digitized images (0.125-mm pixel size, 10-bit gray scale, displayed at 1,024 x 1,536 with 8-bit gray scale) and the conventional films from which they were constructed. The visibility of eight structures (spinous process, trachea, right and left main bronchus, anterior tip of right fourth rib, vessels behind diaphragm and cardiac shadow, and descending aorta behind heart) was graded into five levels of confidence.

  18. MRI Reporter Genes for Noninvasive Molecular Imaging

    Caixia Yang

    2016-05-01

    Magnetic resonance imaging (MRI) is one of the most important imaging technologies used in clinical diagnosis. Reporter genes for MRI can be applied to accurately track the delivery of cells in cell therapy, evaluate the therapeutic effect of gene delivery, and monitor tissue/cell-specific microenvironments. Commonly used reporter genes for MRI include genes encoding enzymes (e.g., tyrosinase and β-galactosidase), receptors on the cells (e.g., the transferrin receptor), and endogenous reporter genes (e.g., the ferritin reporter gene). However, low sensitivity limits the application of MRI, so reporter gene-based multimodal imaging strategies, combining MRI with optical imaging or radionuclide imaging, are common. These can significantly improve diagnostic efficiency and accelerate the development of new therapies.

  19. Accurate, fully-automated NMR spectral profiling for metabolomics.

    Siamak Ravanbakhsh

    Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's nuclear magnetic resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures, and realistic computer-generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentrations of NMR-detectable metabolites accurately (~90% correct identification and ~10% quantification error) in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of

  20. Can a surgeon drill accurately at a specified angle?

    Brioschi, Valentina; Cook, Jodie; Arthurs, Gareth I

    2016-01-01

    Objectives To investigate whether a surgeon can drill accurately a specified angle and whether surgeon experience, task repetition, drill bit size and perceived difficulty influence drilling angle accuracy. Methods The sample population consisted of final-year students (n=25), non-specialist veterinarians (n=22) and board-certified orthopaedic surgeons (n=8). Each participant drilled a hole twice in a horizontal oak plank at 30°, 45°, 60°, 80°, 85° and 90° angles with either a 2.5 or a 3.5 mm drill bit. Participants then rated the perceived difficulty to drill each angle. The true angle of each hole was measured using a digital goniometer. Results Greater drilling accuracy was achieved at angles closer to 90°. An error of ≤±4° was achieved by 84.5 per cent of participants drilling a 90° angle compared with approximately 20 per cent of participants drilling a 30–45° angle. There was no effect of surgeon experience, task repetition or drill bit size on the mean error for intended versus achieved angle. Increased perception of difficulty was associated with the more acute angles and decreased accuracy, but not experience level. Clinical significance This study shows that surgeon ability to drill accurately (within ±4° error) is limited, particularly at angles ≤60°. In situations where drill angle is critical, use of computer-assisted navigation or custom-made drill guides may be preferable. PMID:27547423

  1. Bayesian calibration of power plant models for accurate performance prediction

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard of uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.
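
    A minimal sketch of the calibration idea with a random-walk Metropolis sampler and a toy one-parameter efficiency model (not the Kennedy-O'Hagan implementation; model, prior, and noise level are invented):

      import numpy as np

      rng = np.random.default_rng(5)
      amb = np.linspace(0, 30, 40)                              # ambient temperature [deg C]
      def model(theta, amb):                                    # toy part-load efficiency model
          return 0.55 - theta * amb / 1000.0
      y_obs = model(2.0, amb) + rng.normal(0, 0.004, amb.size)  # plant "measurements", true theta = 2.0

      def log_post(theta, sigma=0.004):
          if not 0.0 < theta < 10.0:
              return -np.inf                                    # flat prior on (0, 10)
          r = y_obs - model(theta, amb)
          return -0.5 * np.sum((r / sigma) ** 2)

      theta, lp = 5.0, log_post(5.0)
      samples = []
      for _ in range(5000):
          prop = theta + rng.normal(0, 0.2)                     # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:              # Metropolis accept/reject
              theta, lp = prop, lp_prop
          samples.append(theta)
      post = np.array(samples[1000:])                           # discard burn-in
      print(f"calibrated theta: {post.mean():.2f} +/- {post.std():.2f}")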

  2. Population variability complicates the accurate detection of climate change responses.

    McCain, Christy; Szewczyk, Tim; Bracy Knight, Kevin

    2016-06-01

    The rush to assess species' responses to anthropogenic climate change (CC) has underestimated the importance of interannual population variability (PV). Researchers assume sampling rigor alone will lead to an accurate detection of response regardless of the underlying population fluctuations of the species under consideration. Using population simulations across a realistic, empirically based gradient in PV, we show that moderate to high PV can lead to opposite and biased conclusions about CC responses. Between pre- and post-CC sampling bouts of modeled populations as in resurvey studies, there is: (i) A 50% probability of erroneously detecting the opposite trend in population abundance change and nearly zero probability of detecting no change. (ii) Across multiple years of sampling, it is nearly impossible to accurately detect any directional shift in population sizes with even moderate PV. (iii) There is up to 50% probability of detecting a population extirpation when the species is present, but in very low natural abundances. (iv) Under scenarios of moderate to high PV across a species' range or at the range edges, there is a bias toward erroneous detection of range shifts or contractions. Essentially, the frequency and magnitude of population peaks and troughs greatly impact the accuracy of our CC response measurements. Species with moderate to high PV (many small vertebrates, invertebrates, and annual plants) may be inaccurate 'canaries in the coal mine' for CC without pertinent demographic analyses and additional repeat sampling. Variation in PV may explain some idiosyncrasies in CC responses detected so far and urgently needs more careful consideration in design and analysis of CC responses. PMID:26725404

  3. An accurate and portable solid state neutron rem meter

    Accurately resolving the ambient neutron dose equivalent spanning the thermal to 15 MeV energy range with a single configuration and lightweight instrument is desirable. This paper presents the design of a portable, high intrinsic efficiency, and accurate neutron rem meter whose energy-dependent response is electronically adjusted to a chosen neutron dose equivalent standard. The instrument may be classified as a moderating type neutron spectrometer, based on an adaptation to the classical Bonner sphere and position sensitive long counter, which simultaneously counts thermalized neutrons with high thermal efficiency solid state neutron detectors. The use of multiple detectors and moderator arranged along an axis of symmetry (e.g., the long axis of a cylinder) with known neutron-slowing properties allows for the construction of a linear combination of responses that approximates the ambient neutron dose equivalent. Variations on the detector configuration are investigated via Monte Carlo N-Particle simulations to minimize the total instrument mass while maintaining acceptable response accuracy—a dose error less than 15% for bare 252Cf, bare AmBe, and epi-thermal and mixed monoenergetic sources is found at less than 4.5 kg moderator mass in all studied cases. A comparison of the energy-dependent dose equivalent response and resultant energy-dependent dose equivalent error of the present dosimeter to commercially-available portable rem meters and the prior art is presented. Finally, the present design is assessed by comparing the simulated output resulting from applications of several known neutron sources and dose rates.
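
    A sketch of the linear-combination idea: choose per-detector weights so that the weighted sum of detector responses approximates a target dose-equivalent curve across energy (all response functions and the dose curve below are made up for illustration):

      import numpy as np
      from scipy.optimize import nnls

      energies = np.logspace(-8, 1.2, 60)                  # MeV, roughly thermal to 15 MeV
      depths = np.array([1.0, 3.0, 6.0, 10.0])             # detector positions along the moderator
      # Toy detector responses: deeper detectors respond to higher-energy neutrons.
      R = np.array([np.exp(-0.5 * ((np.log10(energies) + 7 - d) / 1.5) ** 2) for d in depths]).T
      h = 0.1 + np.log10(energies / energies[0])           # toy fluence-to-dose conversion curve

      w, _ = nnls(R, h)                                    # nonnegative least-squares weights
      print("weights:", np.round(w, 3))
      print("max relative error:", np.round(np.max(np.abs(R @ w - h) / h), 2))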

  4. Accurate thermodynamic characterization of a synthetic coal mine methane mixture

    Highlights: • Accurate density data of a 10 components synthetic coal mine methane mixture are presented. • Experimental data are compared with the densities calculated from the GERG-2008 equation of state. • Relative deviations in density were within a 0.2% band at temperatures above 275 K. • Densities at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations. -- Abstract: In the last few years, coal mine methane (CMM) has gained significance as a potential non-conventional gas fuel. The progressive depletion of common fossil fuels reserves and, on the other hand, the positive estimates of CMM resources as a by-product of mining promote this fuel gas as a promising alternative fuel. The increasing importance of its exploitation makes it necessary to check the capability of the present-day models and equations of state for natural gas to predict the thermophysical properties of gases with a considerably different composition, like CMM. In this work, accurate density measurements of a synthetic CMM mixture are reported in the temperature range from (250 to 400) K and pressures up to 15 MPa, as part of the research project EMRP ENG01 of the European Metrology Research Program for the characterization of non-conventional energy gases. Experimental data were compared with the densities calculated with the GERG-2008 equation of state. Relative deviations between experimental and estimated densities were within a 0.2% band at temperatures above 275 K, while data at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations

  5. Gene therapy.

    Mota Biosca, Anna

    1992-01-01

    Applications of gene therapy have been evaluated in virtually every oral tissue, and many of these have proved successful at least in animal models. While gene therapy will not be used routinely in the next decade, practitioners of oral medicine should be aware of the potential of this novel type of treatment that doubtless will benefit many patients with oral diseases.

  6. Trichoderma genes

    Foreman, Pamela; Goedegebuur, Frits; Van Solingen, Pieter; Ward, Michael

    2012-06-19

    Described herein are novel gene sequences isolated from Trichoderma reesei: two genes encoding proteins that comprise a cellulose binding domain, one encoding an arabinofuranosidase, and one encoding an acetyl xylan esterase. The sequences CIP1 and CIP2 contain a cellulose binding domain. These proteins are especially useful in the textile and detergent industries and in the pulp and paper industry.

  7. Improving gene expression similarity measurement using pathway-based analytic dimension

    2009-01-01

    Background Gene expression similarity measuring methods have been developed and applied to search rapidly growing public microarray databases. However, current expression similarity measuring methods need to be improved to accurately measure similarity between gene expression profiles from different platforms or different experiments. Results We devised a new gene expression similarity measuring method based on pathway information. In short, the newly devised method measures similarity between gene expre...

  8. Validation of reference genes for gene expression analysis in chicory (Cichorium intybus) using quantitative real-time PCR

    Van Bockstaele Erik; Maroufi Asad; De Loose Marc

    2010-01-01

    Abstract Background Quantitative real-time reverse transcriptase polymerase chain reaction (qRT-PCR) is a sensitive technique for quantifying gene expression levels. One or more appropriate reference genes must be selected to accurately compare mRNA transcripts across different samples and tissues. Thus far, only actin-2 has been used as a reference gene for qRT-PCR in chicory, and a full comparison of several candidate reference genes in chicory has not yet been reported. Results Seven candi...

  9. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER− patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic
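
    Kaplan-Meier curves of the kind reported above are straightforward to compute with the product-limit estimator. The sketch below is a self-contained illustration on hypothetical follow-up data, not the study's analysis code:

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier product-limit survival estimate.
    time:  follow-up time for each patient
    event: 1 if the outcome (e.g., relapse/death) occurred, 0 if censored."""
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    surv, s = [], 1.0
    for t in np.unique(time[event == 1]):
        at_risk = np.sum(time >= t)                 # still under observation
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk                 # product-limit step
        surv.append((t, s))
    return surv

# Hypothetical follow-up data for a 'poor-prognosis signature' group (years).
times  = [2, 3, 4, 4, 6, 7, 9, 10, 10, 12]
events = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0]
for t, s in kaplan_meier(times, events):
    print(f"t = {t:>2}: S(t) = {s:.2f}")
```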

  10. A Complete and Accurate Ab Initio Repeat Finding Algorithm.

    Lian, Shuaibin; Chen, Xinwu; Wang, Peng; Zhang, Xiaoli; Dai, Xianhua

    2016-03-01

    It has become clear that repetitive sequences have played multiple roles in eukaryotic genome evolution, including increasing genetic diversity through mutation, changing gene expression, and facilitating the generation of novel genes. However, identification of repetitive elements can be difficult in an ab initio manner. Several classical ab initio repeat-finding tools have been presented and compared, but their completeness and accuracy in detecting repeats remain poor. To this end, we propose a new ab initio repeat finding tool, named HashRepeatFinder, which is based on a hash index and word counting. Furthermore, we assessed the performance of HashRepeatFinder against two other well-known tools, RepeatScout and Repeatfinder, on human genome data hg19. The results indicate the following three conclusions: (1) the completeness of HashRepeatFinder is the best among the three compared tools in almost all chromosomes, especially in chr9 (8 times that of RepeatScout, 10 times that of Repeatfinder); (2) in terms of detecting large repeats, HashRepeatFinder also performed best in all chromosomes, especially in chr3 (24 times that of RepeatScout and 250 times that of Repeatfinder) and chr19 (12 times that of RepeatScout and 60 times that of Repeatfinder); (3) in terms of accuracy, HashRepeatFinder can merge abundant repeats with high accuracy. PMID:26272474
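
    Although HashRepeatFinder itself is not described in detail here, the hash-index/word-counting idea it names can be illustrated in a few lines: index every k-mer in a hash table and report those occurring at least twice. This toy sketch is an assumption-level illustration, not the published algorithm:

```python
from collections import defaultdict

def find_repeats(genome, k=8, min_copies=2):
    """Toy ab initio repeat detection by exact k-mer hashing, in the spirit
    of hash-index/word-counting approaches (not HashRepeatFinder itself)."""
    index = defaultdict(list)
    for i in range(len(genome) - k + 1):
        index[genome[i:i + k]].append(i)     # hash table: k-mer -> positions
    return {kmer: pos for kmer, pos in index.items() if len(pos) >= min_copies}

seq = "ACGTACGTTTACGTACGGGACGTACGT"
for kmer, positions in find_repeats(seq).items():
    print(kmer, positions)                   # e.g. ACGTACGT [0, 19]
```

    A production tool must additionally merge overlapping seeds into full repeat elements and handle mismatches, which is where the completeness and accuracy differences cited above arise.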

  11. Using Ontology Fingerprints to disambiguate gene name entities in the biomedical literature

    Chen, Guocai; Zhao, Jieyi; Cohen, Trevor; Tao, Cui; Sun, Jingchun; Xu, Hua; Bernstam, Elmer V.; Lawson, Andrew; Zeng, Jia; Johnson, Amber M.; Holla, Vijaykumar; Bailey, Ann M.; Lara-Guerra, Humberto; Litzenburger, Beate; Meric-Bernstam, Funda

    2015-01-01

    Ambiguous gene names in the biomedical literature are a barrier to accurate information extraction. To overcome this hurdle, we generated Ontology Fingerprints for selected genes that are relevant for personalized cancer therapy. These Ontology Fingerprints were used to evaluate the association between genes and biomedical literature to disambiguate gene names. We obtained 93.6% precision for the test gene set and an area under the receiver-operating characteristic curve of 80.4% for gene an...

  12. Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry

    Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.

    2013-09-01

    Monitoring sulphur chemistry is thought to be of great importance for exoplanets. Doing this requires detailed knowledge of the spectroscopic properties of sulphur-containing molecules such as hydrogen sulphide (H2S) [1], sulphur dioxide (SO2), and sulphur trioxide (SO3). Each of these molecules can be found in terrestrial environments, produced in volcano emissions on Earth, and analysis of their spectroscopic data can prove useful for the characterisation of exoplanets, as well as the study of planets in our own solar system, with both having a possible presence on Venus. A complete, high-temperature list of line positions and intensities for H2 32S is presented. The DVR3D program suite is used to calculate the bound ro-vibration energy levels, wavefunctions, and dipole transition intensities using Radau coordinates. The calculations are based on a newly determined, spectroscopically refined potential energy surface (PES) and a new, high-accuracy, ab initio dipole moment surface (DMS). Tests show that the PES enables us to calculate the line positions accurately and the DMS gives satisfactory results for line intensities. Comparisons with experiment as well as with previous theoretical spectra will be presented. The results of this study will form an important addition to the databases which are considered as sources of information for space applications; especially in analysing the spectra of extrasolar planets, and in remote sensing studies for Venus and Earth, as well as laboratory investigations and pollution studies. An ab initio line list for SO3 was previously computed using the variational nuclear motion program TROVE [2], and was suitable for modelling room-temperature SO3 spectra. The calculations considered transitions in the region of 0-4000 cm-1 with rotational states up to J = 85, and include 174,674,257 transitions. A list of 10,878 experimental transitions had relative intensities placed on an absolute scale, and were provided in a form suitable

  13. A spectroscopic transfer standard for accurate atmospheric CO measurements

    Nwaboh, Javis A.; Li, Gang; Serdyukov, Anton; Werhahn, Olav; Ebert, Volker

    2016-04-01

    Atmospheric carbon monoxide (CO) is a precursor of essential climate variables and has an indirect enhancing effect on global warming. Accurate and reliable measurements of atmospheric CO concentration are becoming indispensable. WMO-GAW reports state a compatibility goal of ±2 ppb for atmospheric CO concentration measurements. Therefore, the EMRP-HIGHGAS (European metrology research program - high-impact greenhouse gases) project aims at developing spectroscopic transfer standards for CO concentration measurements to meet this goal. A spectroscopic transfer standard would provide results that are directly traceable to the SI, can be very useful for the calibration of devices operating in the field, and could complement classical gas standards in the field, where calibration gas mixtures in bottles often are not accurate, available, or stable enough [1][2]. Here, we present our new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor capable of performing absolute ("calibration-free") CO concentration measurements and being operated as a spectroscopic transfer standard. To achieve the compatibility goal stated by WMO for CO concentration measurements and to ensure the traceability of the final concentration results, traceable spectral line data, especially line intensities with appropriate uncertainties, are needed. Therefore, we utilize our new high-resolution Fourier-transform infrared (FTIR) spectroscopy CO line data for the 2-0 band, with significantly reduced uncertainties, for the dTDLAS data evaluation. Further, we demonstrate the capability of our sensor for atmospheric CO measurements, discuss the uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) principles, and show that CO concentrations derived using the sensor, based on the TILSAM (traceable infrared laser spectroscopic amount fraction measurement) method, are in excellent agreement with gravimetric values. Acknowledgement Parts of this work have been
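
    In calibration-free dTDLAS, the amount fraction follows from the Beer-Lambert law once the fitted line area, line intensity, path length, pressure, and temperature are known. A minimal sketch of that TILSAM-style evaluation (all input values hypothetical):

```python
KB = 1.380649e-23      # Boltzmann constant, J/K

def amount_fraction(area, S, L, p, T):
    """Calibration-free amount fraction from a fitted absorption line area.
    area : integrated absorbance of the CO line [cm^-1]
    S    : line intensity at temperature T [cm^-1 / (molecule cm^-2)]
    L    : absorption path length [cm]
    p    : total gas pressure [Pa]
    T    : gas temperature [K]"""
    n_total = p / (KB * T) * 1e-6          # total number density, molecule/cm^3
    return area / (S * n_total * L)        # Beer-Lambert: area = S * n * x * L

# Hypothetical numbers for a CO 2-0 band line in a multipass cell:
x = amount_fraction(area=3.0e-7, S=2.0e-23, L=3000.0, p=101325.0, T=296.0)
print(f"CO amount fraction ≈ {x*1e9:.0f} ppb")   # ≈ 202 ppb
```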

  14. Accurate in-line CD metrology for nanometer semiconductor manufacturing

    Perng, Baw-Ching; Shieh, Jyu-Horng; Jang, S.-M.; Liang, M.-S.; Huang, Renee; Chen, Li-Chien; Hwang, Ruey-Lian; Hsu, Joe; Fong, David

    2006-03-01

    The need for absolute accuracy is increasing as semiconductor-manufacturing technologies advance to sub-65nm nodes, since device sizes are shrinking to sub-50nm while offsets ranging from 5nm to 20nm are often encountered. While TEM is well recognized as the most accurate CD metrology, direct comparison between TEM data and in-line CD data can sometimes be misleading due to different statistical sampling and interference from sidewall roughness. In this work we explore the capability of CD-AFM as an accurate in-line CD reference metrology. As a member of the scanning profiling metrologies, CD-AFM has the advantages of avoiding e-beam damage and minimizing sample-damage-induced CD changes, in addition to the capability of more statistical sampling than typical cross-section metrologies. While AFM has already gained a reputation for the accuracy of its depth measurement, little data has been reported on the accuracy of CD-AFM for CD measurement. Our main focus here is to prove the accuracy of CD-AFM and show its measuring capability for semiconductor-related materials and patterns. In addition to the typical precision check, we spent intensive effort on examining the bias performance of this CD metrology, defined as the difference between the CD-AFM data and the best-known CD value of the prepared samples. We first examine line edge roughness (LER) behavior for line patterns of various materials, including polysilicon, photoresist, and a porous low-k material. Based on the LER characteristics of each patterning, a method is proposed to reduce its influence on CD measurement. Application of our method to a VLSI nanoCD standard is then performed, and agreement with less than 1nm bias is achieved between the CD-AFM data and the standard's value. With very careful sample preparation and TEM tool calibration, we also obtained excellent correlation between CD-AFM and TEM for poly-CDs ranging from 70nm to 400nm. CD measurements of poly ADI and low k trenches are also

  15. Passive samplers accurately predict PAH levels in resident crayfish.

    Paulik, L Blair; Smith, Brian W; Bergmann, Alan J; Sower, Greg J; Forsberg, Norman D; Teeguarden, Justin G; Anderson, Kim A

    2016-02-15

    Contamination of resident aquatic organisms is a major concern for environmental risk assessors. However, collecting organisms to estimate risk is often prohibitively time- and resource-intensive. Passive sampling accurately estimates resident organism contamination, and it saves time and resources. This study used low density polyethylene (LDPE) passive water samplers to predict polycyclic aromatic hydrocarbon (PAH) levels in signal crayfish, Pacifastacus leniusculus. Resident crayfish were collected at 5 sites within and outside of the Portland Harbor Superfund Megasite (PHSM) in the Willamette River in Portland, Oregon. LDPE deployment was spatially and temporally paired with crayfish collection. Crayfish visceral and tail tissue, as well as water-deployed LDPE, were extracted and analyzed for 62 PAHs using GC-MS/MS. Freely-dissolved concentrations (Cfree) of PAHs in water were calculated from concentrations in LDPE. Carcinogenic risks were estimated for all crayfish tissues, using benzo[a]pyrene equivalent concentrations (BaPeq). ∑PAH were 5-20 times higher in viscera than in tails, and ∑BaPeq were 6-70 times higher in viscera than in tails. Eating only tail tissue of crayfish would therefore significantly reduce carcinogenic risk compared to also eating viscera. Additionally, PAH levels in crayfish were compared to levels in crayfish collected 10 years earlier. PAH levels in crayfish were higher upriver of the PHSM and unchanged within the PHSM after the 10-year period. Finally, a linear regression model predicted levels of 34 PAHs in crayfish viscera with an associated R-squared value of 0.52 (and a correlation coefficient of 0.72), using only the Cfree PAHs in water. On average, the model predicted PAH concentrations in crayfish tissue within a factor of 2.4±1.8 of measured concentrations. This affirms that passive water sampling accurately estimates PAH contamination in crayfish. Furthermore, the strong predictive ability of this simple model suggests
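
    The paper's single-predictor model maps Cfree in water to tissue concentration. A minimal sketch of such a fit on hypothetical paired data (log-log least squares, reporting the factor-of-N prediction error):

```python
import numpy as np

# Hypothetical paired data: freely dissolved water concentration (ng/L)
# and crayfish viscera concentration (ng/g) for individual PAHs.
c_free   = np.array([0.5, 1.2, 3.4, 8.1, 15.0, 40.0])
c_tissue = np.array([2.1, 4.0, 9.5, 18.0, 30.0, 95.0])

# Fit log10(tissue) = a*log10(Cfree) + b, mirroring a one-predictor model.
a, b = np.polyfit(np.log10(c_free), np.log10(c_tissue), 1)
pred = 10 ** (a * np.log10(c_free) + b)
fold = np.maximum(pred / c_tissue, c_tissue / pred)   # factor-of-N accuracy
print(f"slope = {a:.2f}, intercept = {b:.2f}, mean fold-error = {fold.mean():.2f}")
```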

  16. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Cecilia Noecker

    2015-03-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the "standard" mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
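
    The modified model described above adds an eclipse-style compartment to the standard viral dynamics equations. A minimal sketch of that structure (illustrative parameter values, not those fitted in the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard viral dynamics model extended with a pre-production compartment:
# target cells T, eclipse-phase cells E (infected, not yet producing),
# productively infected cells I, and free virus V. Parameters are illustrative.
beta, k, delta, p, c = 1e-7, 1.0, 0.5, 1000.0, 10.0

def rhs(t, y):
    T, E, I, V = y
    return [-beta * T * V,                 # infection of target cells
            beta * T * V - k * E,          # transition into production mode
            k * E - delta * I,             # death of producing cells
            p * I - c * V]                 # virus production and clearance

sol = solve_ivp(rhs, (0.0, 30.0), [1e7, 0.0, 0.0, 1e-3],
                t_eval=np.linspace(0.0, 30.0, 7))
for t, V in zip(sol.t, sol.y[3]):
    print(f"day {t:4.1f}: V = {V:10.3e}")
```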

  17. Automatic classification and accurate size measurement of blank mask defects

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    A blank mask and its preparation stages, such as cleaning or resist coating, play an important role in the eventual yield obtained by using it. Blank mask defect impact analysis directly depends on the amount of available information, such as the number of defects observed and their accurate locations and sizes. Mask usability qualification at the start of the preparation process is crudely based on the number of defects. Similarly, defect information such as size is sought to estimate eventual defect printability on the wafer. Tracking of defect characteristics, specifically size and shape, across multiple stages can further be indicative of process-related information such as cleaning or coating process efficiencies. At the first level, inspection machines address the requirement of defect characterization by detecting and reporting relevant defect information. The analysis of this information, though, is still largely a manual process. With advancing technology nodes and shrinking half-pitch sizes, a large number of defects are observed, and the detailed knowledge involved makes the manual defect review process an arduous task, in addition to adding sensitivity to human error. In cases where the defect information reported by the inspection machine is not sufficient, mask shops rely on other tools; use of CD-SEM tools is one such option. However, these additional steps translate into increased costs. The Calibre NxDAT based MDPAutoClassify tool provides an automated software alternative to the manual defect review process. Working on defect images generated by inspection machines, the tool extracts and reports additional information such as defect location, useful for defect avoidance[4][5]; defect size, useful in estimating defect printability; and defect nature, e.g., particle, scratch, resist void, etc., useful for process monitoring. The tool makes use of smart and elaborate post-processing algorithms to achieve this. Their elaborateness is a consequence of the variety and

  18. Accurate Complex Systems Design: Integrating Serious Games with Petri Nets

    Kirsten Sinclair

    2016-03-01

    Difficulty in understanding the large number of interactions involved in complex systems makes their successful engineering a problem. Petri Nets are one graphical modelling technique used to describe and check proposed designs of complex systems thoroughly. While the automatic analysis capabilities of Petri Nets are useful, their visual form is less so, particularly for communicating the design they represent. In engineering projects, this can lead to a gap in communication between people with different areas of expertise, negatively impacting the achievement of accurate designs. In contrast, although capable of representing a variety of real and imaginary objects effectively, the behaviour of serious games can only be analysed manually through interactive simulation. This paper examines combining the complementary strengths of Petri Nets and serious games. The novel contribution of this work is a serious game prototype of a complex system design that has been checked thoroughly. Underpinned by Petri Net analysis, the serious game can be used as a high-level interface to communicate and refine the design. Improvement of a complex system design is demonstrated by applying the integration to a proof-of-concept case study.
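
    The token-based semantics that make Petri Nets analyzable are simple to state in code: a transition is enabled when its input places hold enough tokens, and firing moves tokens from inputs to outputs. A minimal interpreter sketch (the toy net is illustrative, not the paper's case study):

```python
# Minimal Petri net interpreter: a transition is enabled when every input
# place holds enough tokens; firing consumes input tokens and produces outputs.
marking = {"idle": 1, "request": 1, "busy": 0, "done": 0}
transitions = {
    "start":  ({"idle": 1, "request": 1}, {"busy": 1}),
    "finish": ({"busy": 1},               {"done": 1, "idle": 1}),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[p] >= n for p, n in inputs.items())

def fire(name):
    assert enabled(name), f"{name} is not enabled"
    inputs, outputs = transitions[name]
    for p, n in inputs.items():
        marking[p] -= n
    for p, n in outputs.items():
        marking[p] = marking.get(p, 0) + n

fire("start"); fire("finish")
print(marking)   # {'idle': 1, 'request': 0, 'busy': 0, 'done': 1}
```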

  19. An accurate δf method for neoclassical transport calculation

    A δf method, solving drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of marker density g in weight calculation. A general and accurate weighting scheme is developed without using some assumed g in weight equation for advancing particle weights, unlike the previous schemes. This scheme employs an additional weight function to directly solve g from its kinetic equation using the idea of δf method. Therefore the severe constraint that the real marker distribution must be consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is presented. By performing compensation for momentum, energy and particle losses arising from numerical errors, the conservations of all the three quantities are greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined under finite banana case as well as zero banana limit. A solution with zero particle and zero energy flux (in case of no temperature gradient) over whole poloidal section is obtained. With the improvement in both like-particle collision scheme and weighting scheme, the δf simulation shows a significantly upgraded performance for neoclassical transport study. (author)

  20. Progress in Fast, Accurate Multi-scale Climate Simulations

    Collins, William D [Lawrence Berkeley National Laboratory (LBNL); Johansen, Hans [Lawrence Berkeley National Laboratory (LBNL); Evans, Katherine J [ORNL; Woodward, Carol S. [Lawrence Livermore National Laboratory (LLNL); Caldwell, Peter [Lawrence Livermore National Laboratory (LLNL)

    2015-01-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  1. Faster and More Accurate Sequence Alignment with SNAP

    Zaharia, Matei; Curtis, Kristal; Fox, Armando; Patterson, David; Shenker, Scott; Stoica, Ion; Karp, Richard M; Sittler, Taylor

    2011-01-01

    We present the Scalable Nucleotide Alignment Program (SNAP), a new short and long read aligner that is both more accurate (i.e., aligns more reads with fewer errors) and 10-100x faster than state-of-the-art tools such as BWA. Unlike recent aligners based on the Burrows-Wheeler transform, SNAP uses a simple hash index of short seed sequences from the genome, similar to BLAST's. However, SNAP greatly reduces the number and cost of local alignment checks performed through several measures: it uses longer seeds to reduce the false positive locations considered, leverages larger memory capacities to speed index lookup, and excludes most candidate locations without fully computing their edit distance to the read. The result is an algorithm that scales well for reads from one hundred to thousands of bases long and provides a rich error model that can match classes of mutations (e.g., longer indels) that today's fast aligners ignore. We calculate that SNAP can align a dataset with 30x coverage of a human genome in le...
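
    SNAP's central data structure, as described above, is a hash index of fixed-length seeds. A toy sketch of seed indexing and candidate-location voting (greatly simplified; a real aligner verifies candidates with an edit-distance check):

```python
from collections import defaultdict

def build_seed_index(reference, seed_len=8):
    """Hash table from every fixed-length seed to its positions in the
    reference, in the spirit of SNAP's hash index (a simplified sketch)."""
    index = defaultdict(list)
    for i in range(len(reference) - seed_len + 1):
        index[reference[i:i + seed_len]].append(i)
    return index

def candidate_locations(read, index, seed_len=8):
    """Collect alignment start positions implied by non-overlapping seeds;
    a full aligner would then verify only the best-voted candidates with a
    (banded) edit-distance check instead of scoring every location."""
    votes = defaultdict(int)
    for off in range(0, len(read) - seed_len + 1, seed_len):
        for pos in index.get(read[off:off + seed_len], ()):
            votes[pos - off] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])

ref = "TTGACCGTAAGGCTTACCGGATCCGTAAGGCTTAGCA" * 3   # toy repetitive reference
idx = build_seed_index(ref)
# The read matches at offset 4 of each 37-base repeat unit (both seeds agree).
print(candidate_locations("CCGTAAGGCTTACCGGATCC", idx))
```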

  2. Accurate Detection of Rifampicin-Resistant Mycobacterium Tuberculosis Strains

    Keum-Soo Song

    2016-03-01

    In 2013 alone, the death rate among the 9.0 million people infected with Mycobacterium tuberculosis (TB) worldwide was around 14%, which is unacceptably high. Empiric treatment of patients infected with TB or a multidrug-resistant Mycobacterium tuberculosis (MDR-TB) strain can also result in the spread of MDR-TB. Diagnostic tools that are rapid and reliable and have simple experimental protocols can significantly help in decreasing the prevalence of the MDR-TB strain. We report the evaluation of the 9G technology based 9G DNAChips, which allow accurate detection and discrimination of TB and MDR-TB-RIF. One hundred and thirteen known cultured samples were used to evaluate the ability of the 9G DNAChip in the detection and discrimination of TB and MDR-TB-RIF strains. Hybridization of immobilized probes with the PCR products of TB and MDR-TB-RIF strains allows their detection and discrimination. The accuracy of the 9G DNAChip was determined by comparing its results with sequencing analysis and drug susceptibility testing. Sequencing analysis showed 100% agreement with the results of the 9G DNAChip. The 9G DNAChip showed very high sensitivity (95.4%) and specificity (100%).
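
    Sensitivity and specificity follow directly from a confusion matrix. With 113 samples, one split consistent with the reported figures (the exact split is an assumption, not given above) would be:

```python
# Sensitivity and specificity from a confusion matrix; the counts below are a
# hypothetical split of the 113 samples that reproduces the reported figures.
tp, fn = 62, 3        # resistant strains: detected / missed
tn, fp = 48, 0        # susceptible strains: correctly cleared / false alarms
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```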

  3. The place of highly accurate methods by RNAA in metrology

    With the introduction of physical metrological concepts to chemical analysis, which require that the result should be accompanied by an uncertainty statement written down in terms of SI units, several researchers started to consider ID-MS as the only method fulfilling this requirement. However, recent publications revealed that in certain cases even expert laboratories using ID-MS and analyzing the same material produced results whose uncertainty statements did not overlap, which theoretically should not happen. This shows that no monopoly is good in science, and it would be desirable to widen the set of methods acknowledged as primary in inorganic trace analysis. Moreover, ID-MS cannot be used for monoisotopic elements. The need to search for other methods of similar metrological quality to ID-MS seems obvious. In this paper, our long-term experience in devising highly accurate ('definitive') methods by RNAA for the determination of selected trace elements in biological materials is reviewed. The general idea of definitive methods, based on the combination of neutron activation with highly selective and quantitative isolation of the indicator radionuclide by column chromatography followed by gamma spectrometric measurement, is recalled and illustrated by examples of the performance of such methods when determining Cd, Co, Mo, etc. It is demonstrated that such methods are able to provide very reliable results with very low levels of uncertainty traceable to SI units.

  4. Accurate quantification of cells recovered by bronchoalveolar lavage.

    Saltini, C; Hance, A J; Ferrans, V J; Basset, F; Bitterman, P B; Crystal, R G

    1984-10-01

    Quantification of the differential cell count and total number of cells recovered from the lower respiratory tract by bronchoalveolar lavage is a valuable technique for evaluating the alveolitis of patients with inflammatory disorders of the lower respiratory tract. The most commonly used technique for the evaluation of cells recovered by lavage has been to concentrate cells by centrifugation and then to determine total cell number using a hemocytometer and the differential cell count from a Wright-Giemsa-stained cytocentrifuge preparation. However, we have noted that the percentage of small cells present in the original cell suspension recovered by lavage is greater than the percentage of lymphocytes identified on cytocentrifuge preparations. Therefore, we developed procedures for determining differential cell counts on lavage cells collected on Millipore filters and stained with hematoxylin-eosin (filter preparations) and compared the results of differential cell counts performed on filter preparations with those obtained using cytocentrifuge preparations. When cells recovered by lavage were collected on filter preparations, accurate differential cell counts were obtained, as confirmed by performing differential cell counts on cell mixtures of known composition, and by comparing differential cell counts obtained using filter preparations stained with hematoxylin-eosin with those obtained using filter preparations stained with a peroxidase cytochemical stain. The morphology of cells displayed on filter preparations was excellent, and interobserver variability in quantitating cell types recovered by lavage was less than 3%.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:6385789

  5. Accurate measurement of liquid transport through nanoscale conduits.

    Alibakhshi, Mohammad Amin; Xie, Quan; Li, Yinxiao; Duan, Chuanhua

    2016-01-01

    Nanoscale liquid transport governs the behaviour of a wide range of nanofluidic systems, yet remains poorly characterized and understood due to the enormous hydraulic resistance associated with the nanoconfinement and the resulting minuscule flow rates in such systems. To overcome this problem, here we present a new measurement technique based on capillary flow and a novel hybrid nanochannel design and use it to measure water transport through single 2-D hydrophilic silica nanochannels with heights down to 7 nm. Our results show that silica nanochannels exhibit increased mass flow resistance compared to the classical hydrodynamics prediction. This difference increases with decreasing channel height and reaches 45% in the case of 7 nm nanochannels. This resistance increase is attributed to the formation of a 7-angstrom-thick stagnant hydration layer on the hydrophilic surfaces. By avoiding the use of any pressure and flow sensors or any theoretical estimations, the hybrid nanochannel scheme enables facile and precise flow measurement through single nanochannels, nanotubes, or nanoporous media and opens the prospect for accurate characterization of both hydrophilic and hydrophobic nanofluidic systems. PMID:27112404

  6. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    Greer, John B.; Flake, J. C.

    2013-05-01

    The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher, 2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al., 2008] and Dual Disperser (CASSI-DD) [Gehm et al., 2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes - an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.

  7. Study of accurate volume measurement system for plutonium nitrate solution

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

    It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, in which the air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure contains many uncertain errors, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the front of the dip-tubes, non-uniformity of temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures at the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system is developed with a quartz-oscillation-type transducer which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
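
    The two differential pressures translate into density, level, and finally volume through elementary hydrostatics. A minimal sketch with hypothetical, already-corrected readings and a hypothetical constant tank cross-section:

```python
G = 9.80665   # standard gravity, m/s^2

def density(dp_density, tube_separation):
    """Solution density from the differential pressure between two dip tubes
    separated vertically by tube_separation (m)."""
    return dp_density / (G * tube_separation)

def level(dp_level, rho):
    """Liquid level above the lower dip tube from the second differential pressure."""
    return dp_level / (rho * G)

# Hypothetical readings (after the excess-pressure corrections discussed above):
rho = density(dp_density=1960.0, tube_separation=0.150)   # Pa, m
h = level(dp_level=18500.0, rho=rho)                      # Pa
area = 0.50                                               # assumed tank cross-section, m^2
print(f"rho = {rho:.1f} kg/m^3, level = {h:.3f} m, volume = {area*h*1000:.1f} L")
```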

  8. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    Miao Liu

    2015-10-01

    In structured light measurement systems or 3D printing systems, the errors caused by the optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by a curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems.
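
    A polynomial distortion representation can be fitted by plain least squares on a monomial design matrix. The sketch below (synthetic data, hypothetical distortion) shows the general shape of such a fit; it is not the paper's implementation:

```python
import numpy as np

def poly_design(x, y, order=3):
    """Design matrix with all monomials x^i * y^j, i + j <= order."""
    cols = [x**i * y**j for i in range(order + 1)
                        for j in range(order + 1 - i)]
    return np.column_stack(cols)

def fit_distortion(ideal, measured, order=3):
    """Least-squares polynomial model mapping ideal projector pixel
    coordinates to measured ones (one coefficient set per axis)."""
    A = poly_design(ideal[:, 0], ideal[:, 1], order)
    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return coeffs

def apply_distortion(coeffs, pts, order=3):
    return poly_design(pts[:, 0], pts[:, 1], order) @ coeffs

# Hypothetical calibration points detected by the photodiode module:
rng = np.random.default_rng(1)
ideal = rng.uniform(0, 1024, size=(200, 2))
measured = ideal + 0.002 * (ideal - 512)     # synthetic mild distortion
c = fit_distortion(ideal, measured)
residual = apply_distortion(c, ideal) - measured
print("RMS residual (pixels):", np.sqrt((residual**2).mean()))
```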

  9. A New Path Generation Algorithm Based on Accurate NURBS Curves

    Sawssen Jalel

    2016-04-01

    The process of finding an optimal, smooth, and feasible global path for mobile robot navigation usually involves determining the shortest polyline path, which is subsequently smoothed to satisfy the requirements. Within this context, this paper deals with a novel roadmap algorithm for generating an optimal path in terms of Non-Uniform Rational B-Spline (NURBS) curves. The generated path is well constrained within the curvature limit by exploiting the influence of the weight parameter of NURBS and/or the control points' locations. The novelty of this paper lies in the fact that NURBS curves are not used only as a means of smoothing, but are also involved in meeting the system's constraints via a suitable parameterization of the weights and the locations of the control points. The accurate parameterization of the weights allows a greater benefit to be derived from the influence and geometrical effect of this factor, which has not been well investigated in previous works. The effectiveness of the proposed algorithm is demonstrated through extensive MATLAB computer simulations.
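
    The weight parameter's geometric effect is visible in a direct NURBS evaluation: a rational curve can be evaluated by running an ordinary B-spline through homogeneous coordinates and projecting back. A small sketch (Python with SciPy rather than the paper's MATLAB; the control polygon, knots, and weights are illustrative):

```python
import numpy as np
from scipy.interpolate import BSpline

def nurbs_points(u, knots, ctrl, weights, degree=3):
    """Evaluate a NURBS curve by running a B-spline through homogeneous
    coordinates (w*x, w*y, w) and projecting back to the plane."""
    homo = np.column_stack([ctrl * weights[:, None], weights])
    x = BSpline(knots, homo, degree)(u)
    return x[..., :2] / x[..., 2:]

# Hypothetical path: increasing a weight pulls the curve toward that control
# point, which is how the weight parameter can help respect a curvature limit.
ctrl = np.array([[0., 0.], [1., 2.], [3., 2.5], [5., 0.5], [6., 0.]])
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], dtype=float)  # clamped, cubic
for w2 in (1.0, 5.0):                       # weight of the middle control point
    w = np.array([1.0, 1.0, w2, 1.0, 1.0])
    pts = nurbs_points(np.linspace(0, 1, 5), knots, ctrl, w)
    print(f"w2 = {w2}: {np.round(pts, 2).tolist()}")
```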

  10. Accurate ab initio vibrational energies of methyl chloride

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH335Cl and CH337Cl. The respective PESs, CBS-35 HL, and CBS-37 HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY 3Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35 HL and CBS-37 HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm−1, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH3Cl without empirical refinement of the respective PESs

  11. Accurate ab initio vibrational energies of methyl chloride

    Owens, Alec, E-mail: owens@mpi-muelheim.mpg.de [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan [Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Thiel, Walter [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany)

    2015-06-28

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH335Cl and CH337Cl. The respective PESs, CBS-35 HL, and CBS-37 HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY 3Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35 HL and CBS-37 HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm−1, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH3Cl without empirical refinement of the respective PESs.

  12. An accurate δf method for neoclassical transport calculation

    Wang, W.X.; Nakajima, N.; Murakami, S.; Okamoto, M. [National Inst. for Fusion Science, Toki, Gifu (Japan)

    1999-03-01

    A δf method, solving drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of marker density g in weight calculation. A general and accurate weighting scheme is developed without using some assumed g in weight equation for advancing particle weights, unlike the previous schemes. This scheme employs an additional weight function to directly solve g from its kinetic equation using the idea of δf method. Therefore the severe constraint that the real marker distribution must be consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is presented. By performing compensation for momentum, energy and particle losses arising from numerical errors, the conservations of all the three quantities are greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined under finite banana case as well as zero banana limit. A solution with zero particle and zero energy flux (in case of no temperature gradient) over whole poloidal section is obtained. With the improvement in both like-particle collision scheme and weighting scheme, the δf simulation shows a significantly upgraded performance for neoclassical transport study. (author)

  13. Iterative feature refinement for accurate undersampled MR image reconstruction

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scanning is of great significance for clinical, research, and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, existing CSMRI approaches still have limitations such as fine-structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CSMRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement, and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.

  14. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Gagandeep Singh

    2013-05-01

    This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting based approach, in which every pixel votes, is adopted for potential eye center candidates. The votes are distributed over a subset of pixels which lie in the direction opposite to the gradient direction, and the weighting of votes is distributed according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using a Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye center. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state of the art. The proposed approach is robust against illumination variation, small pose variations, presence of eye glasses and partial occlusion of eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763

  15. Reusable, robust, and accurate laser-generated photonic nanosensor.

    Yetisen, Ali K; Montelongo, Yunuen; da Cruz Vasconcellos, Fernando; Martinez-Hurtado, J L; Neupane, Sankalpa; Butt, Haider; Qasim, Malik M; Blyth, Jeffrey; Burling, Keith; Carmody, J Bryan; Evans, Mark; Wilkinson, Timothy D; Kubota, Lauro T; Monteiro, Michael J; Lowe, Christopher R

    2014-06-11

    Developing noninvasive and accurate diagnostics that are easily manufactured, robust, and reusable will provide monitoring of high-risk individuals in any clinical or point-of-care environment. We have developed a clinically relevant optical glucose nanosensor that can be reused at least 400 times without a compromise in accuracy. The use of a single 6 ns laser (λ = 532 nm, 200 mJ) pulse rapidly produced off-axis Bragg diffraction gratings consisting of ordered silver nanoparticles embedded within a phenylboronic acid-functionalized hydrogel. This sensor exhibited reversible large wavelength shifts and diffracted the spectrum of narrow-band light over the wavelength range λpeak ≈ 510-1100 nm. The experimental sensitivity of the sensor permits diagnosis of glucosuria in the urine samples of diabetic patients with an improved performance compared to commercial high-throughput urinalysis devices. The sensor response was achieved within 5 min, reset to baseline in ∼10 s. It is anticipated that this sensing platform will have implications for the development of reusable, equipment-free colorimetric point-of-care diagnostic devices for diabetes screening. PMID:24844116

  16. An Accurate ANFIS-based MPPT for Solar PV System

    Ahmed Bin-Halabi

    2014-06-01

    It has been found from the literature review that ANFIS-based maximum power point tracking (MPPT) techniques are very fast and accurate in tracking the MPP under any weather conditions, and they have smaller power losses if trained well. Unfortunately, this is true in simulation, but in practice they do not work very well because they do not take the aging of solar cells or the effects of dust and shading into account. In other words, the solar irradiance measured by an irradiance sensor is not always the same irradiance that influences the PV module. The main objective of this work is to design and practically implement an MPPT system for solar PV with high speed, high efficiency, and relatively easy implementation in order to improve the efficiency of solar energy conversion. This MPPT system is based on the ANFIS technique. The contribution of this research is eliminating the need for an irradiance sensor while achieving the same adequate performance obtained by ANFIS with an irradiance sensor, both in simulation and in experimental implementation. The proposed technique has been validated by comparing the practical results of the implemented setup to simulations. Experimental results have shown good agreement with simulation results.

  17. AUTOMATED, HIGHLY ACCURATE VERIFICATION OF RELAP5-3D

    George L Mesina; David Aumiller; Francis Buschman

    2014-07-01

    Computer programs that analyze light water reactor safety solve complex systems of governing, closure, and special process equations to model the underlying physics. In addition, these programs incorporate many other features and are quite large. RELAP5-3D[1] has over 300,000 lines of code for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. Verification ensures that a program is built right by checking that it meets its design specifications. Recently, there has been increased emphasis on the development of automated verification processes that compare coding against its documented algorithms and equations and compare its calculations against analytical solutions and the method of manufactured solutions[2]. For the first time, the ability exists to ensure that the data transfer operations associated with timestep advancement/repeating and writing/reading a solution to a file have no unintended consequences. To ensure that the code performs as intended over its extensive list of applications, an automated and highly accurate verification method has been modified and applied to RELAP5-3D. Furthermore, mathematical analysis of the adequacy of the checks used in the comparisons is provided.
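
    The method of manufactured solutions mentioned above picks an analytic solution first and derives the source term that forces the governing equation to admit it. A minimal symbolic sketch for a 1-D heat equation (illustrative only, not the RELAP5-3D field equations):

```python
import sympy as sp

# Method of manufactured solutions for u_t = a*u_xx + q: choose a smooth u,
# derive the source q symbolically, and the code under test must then
# reproduce u to discretization accuracy.
x, t, a = sp.symbols("x t a")
u = sp.sin(sp.pi * x) * sp.exp(-t)                 # manufactured solution
q = sp.simplify(sp.diff(u, t) - a * sp.diff(u, x, 2))
print("required source term q(x,t) =", q)          # (pi**2*a - 1)*exp(-t)*sin(pi*x)
```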

  18. Accurate and efficient waveforms for compact binaries on eccentric orbits

    Huerta, E A; McWilliams, Sean T; O'Shaughnessy, Richard; Yunes, Nicolas

    2014-01-01

    Compact binaries that emit gravitational waves in the sensitivity band of ground-based detectors can have non-negligible eccentricities just prior to merger, depending on the formation scenario. We develop a purely analytic, frequency-domain model for gravitational waves emitted by compact binaries on orbits with small eccentricity, which reduces to the quasi-circular post-Newtonian approximant TaylorF2 at zero eccentricity and to the post-circular approximation of Yunes et al. (2009) at small eccentricity. Our model uses a spectral approximation to the (post-Newtonian) Kepler problem to model the orbital phase as a function of frequency, accounting for eccentricity effects up to ${\\cal{O}}(e^8)$ at each post-Newtonian order. Our approach accurately reproduces an alternative time-domain eccentric waveform model for eccentricities $e\\in [0, 0.4]$ and binaries with total mass less than 12 solar masses. As an application, we evaluate the signal amplitude that eccentric binaries produce in different networks of e...

  19. Accurate measurement of streamwise vortices in low speed aerodynamic flows

    Waldman, Rye M.; Kudo, Jun; Breuer, Kenneth S.

    2010-11-01

    Low Reynolds number experiments with flapping animals (such as bats and small birds) are of current interest in understanding biological flight mechanics, and due to their application to Micro Air Vehicles (MAVs) which operate in a similar parameter space. Previous PIV wake measurements have described the structures left by bats and birds, and provided insight to the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions due to significant experimental challenges associated with the highly three-dimensional and unsteady nature of the flows, and the low wake velocities associated with lifting bodies that only weigh a few grams. This requires the high-speed resolution of small flow features in a large field of view using limited laser energy and finite camera resolution. Cross-stream measurements are further complicated by the high out-of-plane flow which requires thick laser sheets and short interframe times. To quantify and address these challenges we present data from a model study on the wake behind a fixed wing at conditions comparable to those found in biological flight. We present a detailed analysis of the PIV wake measurements, discuss the criteria necessary for accurate measurements, and present a new dual-plane PIV configuration to resolve these issues.

  20. Accurate analysis of multicomponent fuel spray evaporation in turbulent flow

    Rauch, Bastian; Calabria, Raffaela; Chiariello, Fabio; Le Clercq, Patrick; Massoli, Patrizio; Rachner, Michael

    2012-04-01

    The aim of this paper is to perform an accurate analysis of the evaporation of single-component and binary-mixture fuel sprays in a hot, weakly turbulent pipe flow by means of experimental measurement and numerical simulation. This gives a deeper insight into the relationship between fuel composition and spray evaporation. The turbulence intensity in the test section is equal to 10%, and the integral length scale is three orders of magnitude larger than the droplet size, while the turbulence microscale (the Kolmogorov scale) is of the same order as the droplet diameter. The spray, produced by means of a calibrated droplet generator, was injected into an electrically preheated gas flow. N-nonane, isopropanol, and their mixtures were used in the tests. The generalized scattering imaging technique was applied to simultaneously determine the size, velocity, and spatial location of the droplets carried by the turbulent flow in the quartz tube. The spray evaporation was computed using a Lagrangian particle solver coupled to a gas-phase solver. Computations of the spray mean diameter and droplet size distributions at different locations along the pipe compare very favorably with the measurement results. This combined research tool enabled further investigation of the influencing parameters of the evaporation process, such as turbulence, droplet internal mixing, and liquid-phase thermophysical properties.

  1. Accurate measurement of RF exposure from emerging wireless communication systems

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation, but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), or for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known, in which case the measurement errors are shown to be systematic and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
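
    The root of the problem is that a broadband probe must report an RMS level for a bursty emission. For an idealized on-off transmission the relationship between burst level, duty cycle, and RMS is elementary, which is what makes a correction factor possible once the signal statistics are known:

```python
import numpy as np

# For an idealized on-off keyed emission with duty cycle D, the RMS field
# sits below the burst (peak-envelope) level by a factor sqrt(D), so a
# known D yields a systematic correction factor. Values are illustrative.
duty_cycle = 0.25                      # e.g., a WiFi-like transmission pattern
burst_level_vpm = 2.0                  # field strength during bursts, V/m
rms = burst_level_vpm * np.sqrt(duty_cycle)
crest_factor = burst_level_vpm / rms   # = 1/sqrt(D) for this idealized burst
print(f"RMS = {rms:.2f} V/m, crest factor = {crest_factor:.2f}")
```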

  2. Accurate reading with sequential presentation of single letters

    Nicholas Seow Chiang Price

    2012-10-01

    Rapid, accurate reading is possible when isolated single words from a sentence are sequentially presented at a fixed spatial location. We investigated whether reading of words and sentences is possible when single letters are rapidly presented at the fovea under user-controlled or automatically controlled rates. When tested with complete sentences, trained participants achieved reading rates of over 60 words/minute and accuracies of over 90% with the single letter reading (SLR) method, and naive participants achieved average reading rates of over 30 wpm with >90% accuracy. Accuracy declined as individual letters were presented for shorter periods of time, even when the overall reading rate was maintained by increasing the duration of spaces between words. Words in the lexicon that occur more frequently were identified with higher accuracy and more quickly, demonstrating that trained participants have lexical access. In combination, our data strongly suggest that comprehension is possible and that SLR is a practicable form of reading under conditions in which normal scanning of text is not possible, or in scenarios with limited spatial and temporal resolution, such as for patients with low vision or prostheses.

  3. Accurate stereochemistry for two related 22,26-epiminocholestene derivatives

    Regioselective opening of ring E of solasodine under various conditions afforded (25R)-22,26-epiminocholesta-5,22(N)-diene-3β,16β-diyl diacetate (previously known as 3,16-diacetyl pseudosolasodine B), C31H47NO4, or (22S,25R)-16β-hydroxy-22,26-epiminocholesta-5-en-3β-yl acetate (a derivative of the naturally occurring alkaloid oblonginine), C29H47NO3. In both cases, the reactions are carried out with retention of chirality at the C16, C20 and C25 stereogenic centers, which are found to be S, S and R, respectively. Although pseudosolasodine was synthesized 50 years ago, these accurate assignments clarify some controversial points about the actual stereochemistry for these alkaloids. This is of particular importance in the case of oblonginine, since this compound is currently under consideration for the treatment of aphasia arising from apoplexy; the present study defines a diastereoisomerically pure compound for pharmacological studies.

  4. How Accurate Is Pierce's Theory of Traveling Wave Tube?

    Simon, D. H.; Chernin, D.; Wong, P.; Zhang, P.; Lau, Y. Y.; Dong, C. F.; Hoff, B.; Gilgenbach, R. M.

    2015-11-01

    This paper provides a rigorous test of the accuracy of Pierce's classical theory of traveling wave tubes (TWTs). The exact dispersion relation for a dielectric TWT is derived, from which the spatial amplification rate, ki, is calculated. This ki is compared with that obtained from Pierce's widely used 3-wave theory and his more general 4-wave theory (which includes the reverse propagating circuit mode). We have used various procedures to extract Pierce's gain parameter C and space charge parameter Q from the exact dispersion relation. We find that, in general, the 3-wave theory is a poor representation of the exact dispersion relation if C > 0.05. However, the 4-wave theory gives excellent agreement even for C as high as 0.12 and over more than 20 percent bandwidth, if the quantity (k2 × C3) is evaluated accurately as a function of frequency, and if Q is expanded to first order in the wavenumber k, where Q is the difference between the exact dispersion relation and its 4-wave representation in which Q is set to zero. Similar tests will be performed on the disk-on-rod slow wave TWT, for which the hot tube dispersion relation including all space harmonics has been obtained. Supported by AFOSR FA9550-14-1-0309, FA9550-15-1-0097, AFRL FA9451-14-1-0374, and L-3 Communications.
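
    For reference, the gain parameter C named above has a standard textbook definition in Pierce's small-signal theory, and a classical asymptotic gain formula follows from the 3-wave cubic in the loss-free, space-charge-free limit. The relations below are those textbook forms, quoted as background rather than the paper's exact dispersion relation; Zc is the interaction (Pierce) impedance, I0 and V0 are the beam current and voltage, and N is the circuit length in electronic wavelengths.

```latex
% Classical Pierce small-signal relations (textbook background, not the paper's result):
C^{3} = \frac{I_{0} Z_{c}}{4 V_{0}},
\qquad
G\,[\mathrm{dB}] \approx -9.54 + 47.3\, C N
\quad \text{(small } C,\ b = d = QC = 0\text{)}.
```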

  5. Accurate measurement of RF exposure from emerging wireless communication systems

    Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno

    2013-04-01

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper, this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), nor for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
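
    The systematic nature of the error is easy to reproduce numerically. The sketch below is a minimal illustration, not the authors' measurement protocol, and all signal parameters are invented: a probe that averages only over the active bursts of a duty-cycled, noise-like signal mis-reports the true RMS level, but a known duty cycle yields a fixed correction factor.

```python
import numpy as np

# Minimal sketch (invented parameters, not the paper's protocol): for a pulsed,
# noise-like signal with a known duty cycle, a burst-averaging reading mis-states
# the true RMS level by a fixed, correctable factor.
rng = np.random.default_rng(0)

def pulsed_signal(n=100_000, duty_cycle=0.1, peak=1.0):
    """OFDM-like noise burst that is active for a duty_cycle fraction of the frame."""
    s = np.zeros(n)
    active = int(n * duty_cycle)
    s[:active] = peak * rng.standard_normal(active)  # noise-like payload
    return s

duty_cycle = 0.1
s = pulsed_signal(duty_cycle=duty_cycle)
true_rms = np.sqrt(np.mean(s**2))           # what an ideal RMS detector reports
burst_rms = np.sqrt(np.mean(s[s != 0]**2))  # reading averaged over bursts only

# The error is systematic when the emission statistics are known, so a single
# correction factor recovers the total RMS level:
corrected = burst_rms * np.sqrt(duty_cycle)
print(f"true {true_rms:.3f}  burst-only {burst_rms:.3f}  corrected {corrected:.3f}")
```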

  6. Data fusion for accurate microscopic rough surface metrology.

    Chen, Yuhang

    2016-06-01

    Data fusion for rough surface measurement and evaluation was analyzed on simulated datasets, one with higher density (HD) but lower accuracy and the other with lower density (LD) but higher accuracy. Experimental verifications were then performed on laser scanning microscopy (LSM) and atomic force microscopy (AFM) characterizations of surface areal roughness artifacts. The results demonstrated that the fusion based on Gaussian process models is effective and robust under different measurement biases and noise strengths. All the amplitude, height distribution, and spatial characteristics of the original sample structure can be precisely recovered, with better metrological performance than any individual measurements. As for the influencing factors, the HD noise has a relatively weaker effect as compared with the LD noise. Furthermore, to enable an accurate fusion, the ratio of LD sampling interval to surface autocorrelation length should be smaller than a critical threshold. In general, data fusion is capable of enhancing the nanometrology of rough surfaces by combining efficient LSM measurement and down-sampled fast AFM scan. The accuracy, resolution, spatial coverage and efficiency can all be significantly improved. It is thus expected to have potential applications in development of hybrid microscopy and in surface metrology. PMID:27058888
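
    A minimal sketch of the fusion idea, assuming a Gaussian process regressor whose per-point noise variance encodes the two instruments (scikit-learn's `alpha` parameter accepts per-sample noise variances); the surface, noise levels and sampling densities below are invented stand-ins for the LSM-like and AFM-like datasets.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# HD/LD fusion sketch (illustrative, not the paper's implementation): the dense
# profile is noisy, the sparse profile is accurate, and `alpha` tells the GP so.
rng = np.random.default_rng(1)
truth = lambda x: np.sin(2 * np.pi * x) + 0.3 * np.sin(9 * np.pi * x)

x_hd = np.linspace(0, 1, 200)                     # high density, low accuracy
y_hd = truth(x_hd) + 0.15 * rng.standard_normal(x_hd.size)
x_ld = np.linspace(0, 1, 15)                      # low density, high accuracy
y_ld = truth(x_ld) + 0.01 * rng.standard_normal(x_ld.size)

X = np.concatenate([x_hd, x_ld])[:, None]
y = np.concatenate([y_hd, y_ld])
alpha = np.concatenate([np.full(x_hd.size, 0.15**2),   # HD noise variance
                        np.full(x_ld.size, 0.01**2)])  # LD noise variance

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=alpha,
                              normalize_y=True).fit(X, y)
fused, sd = gp.predict(np.linspace(0, 1, 500)[:, None], return_std=True)
print(f"max posterior std of fused profile: {sd.max():.3f}")
```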

  7. Downhole temperature tool accurately measures well bore profile

    This paper reports that an inexpensive temperature tool provides accurate temperature measurements during drilling operations for better design of cement jobs, workovers, well stimulation, and well bore hydraulics. Valid temperature data during specific wellbore operations can improve initial job design, fluid testing, and slurry placement, ultimately enhancing well bore performance. This improvement applies to cement slurries, breaker activation for stimulation and profile control, and fluid rheological properties for all downhole operations. The temperature tool has been run standalone mounted inside drill pipe, on slick wire line and braided cable, and as a free-fall tool. It has also been run piggyback on both directional surveys (slick line and free-fall) and standard logging runs. This temperature measuring system has been used extensively in field well bores to depths of 20,000 ft. The temperature tool is completely reusable in the field, much like the standard directional survey tools used on many drilling rigs. The system includes a small, rugged, programmable temperature sensor, a standard body housing, various adapters for specific applications, and a personal computer (PC) interface

  8. Accurate measurement of liquid transport through nanoscale conduits

    Alibakhshi, Mohammad Amin; Xie, Quan; Li, Yinxiao; Duan, Chuanhua

    2016-04-01

    Nanoscale liquid transport governs the behaviour of a wide range of nanofluidic systems, yet remains poorly characterized and understood due to the enormous hydraulic resistance associated with the nanoconfinement and the resulting minuscule flow rates in such systems. To overcome this problem, here we present a new measurement technique based on capillary flow and a novel hybrid nanochannel design, and use it to measure water transport through single 2-D hydrophilic silica nanochannels with heights down to 7 nm. Our results show that silica nanochannels exhibit increased mass flow resistance compared to the classical hydrodynamics prediction. This difference increases with decreasing channel height and reaches 45% in the case of 7 nm nanochannels. This resistance increase is attributed to the formation of a 7-angstrom-thick stagnant hydration layer on the hydrophilic surfaces. By avoiding the use of pressure and flow sensors or theoretical estimations, the hybrid nanochannel scheme enables facile and precise flow measurement through single nanochannels, nanotubes, or nanoporous media and opens the prospect for accurate characterization of both hydrophilic and hydrophobic nanofluidic systems.
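
    The classical prediction referenced here is plane-Poiseuille (slit) flow. The sketch below shows how a stagnant layer of assumed thickness on each wall raises the hydraulic resistance through a reduced effective height; the layer thickness and channel dimensions are illustrative assumptions, not tuned to reproduce the paper's 45% figure.

```python
# Plane-Poiseuille (slit) hydraulics: Q = w * h**3 * dP / (12 * mu * L), so the
# hydraulic resistance is R = 12 * mu * L / (w * h**3). A stagnant hydration
# layer of thickness t on each wall reduces the effective height to h - 2*t.
mu = 1.0e-3           # Pa*s, water
L, w = 100e-6, 5e-6   # channel length and width (illustrative values)

def slit_resistance(h):
    return 12 * mu * L / (w * h**3)

h = 7e-9              # 7 nm channel height
t = 0.35e-9           # assumed stagnant-layer thickness per wall (illustrative)
increase = slit_resistance(h - 2 * t) / slit_resistance(h) - 1
print(f"resistance increase from the stagnant layers: {increase:.0%}")
```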

  9. Fast and Accurate Brain Image Retrieval Using Gabor Wavelet Algorithm

    J.Esther

    2014-01-01

    CBIR in medical image databases is used to assist physicians in diagnosing diseases and to aid diagnosis by identifying similar past cases. In order to retrieve similar images quickly, accurately and effectively from a large data set, the pre-processing step is extraction of the brain: it removes the unwanted non-brain areas such as scalp, skull, neck, eyes and ears from the MRI head scan images. After removing the unwanted non-brain regions, it is much more effective to retrieve similar images. In this paper a brain extraction technique using fuzzy morphological operators is proposed. For the experimental results, 1200 MRI images were taken from a scan centre and some brain images were collected from the web, and these have been compared with the popular brain extraction algorithms Graph-Cut (GCUT) and Expectation Maximization (EMA). The experimental results show that the proposed fuzzy morphological operator algorithm (FMOA) produces the most promising results. Using this FMOA result, the brain images are retrieved from the large collection of databases using the Gabor wavelet transform.
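
    A minimal sketch of Gabor-wavelet feature extraction for retrieval, illustrating the general technique rather than the paper's implementation; the filter-bank parameters are assumptions, and the FMOA brain-extraction step is presumed to have been applied already.

```python
import numpy as np
from skimage.filters import gabor

# Gabor-wavelet texture features for CBIR (generic sketch; filter-bank
# parameters are assumptions, not the paper's settings).
def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    """Mean/std of Gabor response magnitudes over a frequency/orientation bank."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orientations)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.std()])
    return np.asarray(feats)

def retrieve(query, database):
    """Rank database images by Euclidean distance in Gabor feature space."""
    q = gabor_features(query)
    return np.argsort([np.linalg.norm(q - gabor_features(img)) for img in database])

# usage: ranking = retrieve(query_slice, [img1, img2, ...])  # 2-D numpy arrays
```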

  10. Fast, accurate, robust and Open Source Brain Extraction Tool (OSBET)

    Namias, R.; Donnelly Kehoe, P.; D'Amato, J. P.; Nagel, J.

    2015-12-01

    The removal of non-brain regions in neuroimaging is a critical task for favorable preprocessing. Skull-stripping depends on different factors, including the noise level in the image, the anatomy of the subject being scanned and the acquisition sequence. For these and other reasons, an ideal brain extraction method should be fast, accurate, user friendly, open-source and knowledge based (to allow interaction with the algorithm in case the expected outcome is not obtained), producing stable results and making it possible to automate the process for large datasets. There are already a large number of validated tools that perform this task, but none of them meets all the desired characteristics. In this paper we introduce an open source brain extraction tool (OSBET), composed of four steps using simple, well-known operations (optimal thresholding, binary morphology, labeling and geometrical analysis) that aims to assemble all the desired features. We present an experiment comparing OSBET with six other state-of-the-art techniques on a publicly available dataset consisting of 40 T1-weighted 3D scans and their corresponding manually segmented images. OSBET achieved both a short runtime and excellent accuracy, obtaining the best Dice coefficient. Further validation should be performed, for instance in unhealthy populations, to generalize its usage for clinical purposes.
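
    A minimal sketch of the four named operations in sequence, using scikit-image; the structuring-element sizes are assumptions and this is not the authors' implementation or parameter set.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.measure import label
from skimage.morphology import ball, binary_closing, binary_opening

# OSBET-style pipeline sketch (assumed parameters, not the authors' code).
def skull_strip(volume):
    mask = volume > threshold_otsu(volume)      # 1. optimal thresholding
    mask = binary_opening(mask, ball(2))        # 2. binary morphology
    labels = label(mask)                        # 3. connected-component labeling
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                                # ignore background
    brain = labels == sizes.argmax()            # 4. geometrical analysis: keep largest
    brain = binary_closing(brain, ball(3))      # smooth the boundary
    return ndimage.binary_fill_holes(brain)

# usage: brain_mask = skull_strip(t1_volume)    # t1_volume: 3-D numpy array
```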

  11. Accurate methodology for channel bow impact on CPR

    An overview is given of existing CPR design criteria and the methods used in BWR reload analysis to evaluate the impact of channel bow on CPR margins. Potential weaknesses in today's methodologies are discussed. Westinghouse, in collaboration with KKL and Axpo - operator and owner of the Leibstadt NPP - has developed an enhanced CPR methodology based on a new criterion to protect against dryout during normal operation and on a more rigorous treatment of channel bow. The new steady-state criterion is expressed in terms of an upper limit of 0.01 for the dryout failure probability per year. This is considered a meaningful and appropriate criterion that can be directly related to the probabilistic criteria set up for the analyses of Anticipated Operational Occurrences (AOOs) and accidents. In the Monte Carlo approach, a statistical modeling of channel bow and an accurate evaluation of CPR response functions allow the associated CPR penalties to be included directly in the plant SLMCPR and OLMCPR in a best-estimate manner. In this way, the treatment of channel bow is equivalent to that of all other uncertainties affecting CPR. The enhanced CPR methodology has been implemented in the Westinghouse Monte Carlo code, McSLAP. The methodology improves the quality of dryout safety assessments by supplying more valuable information and better control of conservatisms in establishing operational limits for CPR. The methodology is demonstrated with application examples from its introduction at KKL. (orig.)
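
    Conceptually, the Monte Carlo treatment amounts to sampling the channel-bow uncertainty, propagating it through a CPR response function together with the remaining uncertainties, and reading off the dryout probability. The sketch below illustrates only that scheme; every distribution, the response function and the limits are hypothetical placeholders, not Westinghouse or McSLAP models.

```python
import numpy as np

# Conceptual Monte Carlo sketch (all numbers hypothetical): sample channel bow,
# map it to a CPR penalty via a response function, add the other uncertainties,
# and estimate the probability of falling below the dryout limit.
rng = np.random.default_rng(42)
n = 1_000_000

bow_mm = rng.normal(0.0, 0.8, n)        # assumed channel-bow distribution
other_unc = rng.normal(0.0, 0.03, n)    # lumped remaining CPR uncertainties

def cpr_penalty(bow):
    """Hypothetical response function: penalty grows quadratically with |bow|."""
    return -0.02 * bow**2

nominal_cpr = 1.25                      # illustrative operating margin
cpr = nominal_cpr + cpr_penalty(bow_mm) + other_unc
p_dryout = np.mean(cpr < 1.0)           # fraction of sampled states in dryout
print(f"estimated dryout probability: {p_dryout:.2e}")
```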

  12. AN ACCURATE FLUX DENSITY SCALE FROM 1 TO 50 GHz

    Perley, R. A.; Butler, B. J., E-mail: RPerley@nrao.edu, E-mail: BButler@nrao.edu [National Radio Astronomy Observatory, P.O. Box O, Socorro, NM 87801 (United States)

    2013-02-15

    We develop an absolute flux density scale for centimeter-wavelength astronomy by combining accurate flux density ratios determined by the Very Large Array between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by the Wilkinson Microwave Anisotropy Probe. The radio sources 3C123, 3C196, 3C286, and 3C295 are found to be varying at a level of less than ~5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1%-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC 7027, NGC 6542, and MWC 349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, 3C138, and 3C147 for each of the 17 observation dates, spanning 1983-2012. The planets Venus, Uranus, and Neptune are included in our observations, and we derive their brightness temperatures over the same frequency range.
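
    The polynomial expressions referred to above take the form log10(S/Jy) = a0 + a1 log10(f/GHz) + a2 log10(f/GHz)^2 + ...; the short sketch below evaluates a spectrum of that form. The coefficients are placeholders for illustration, not the published values.

```python
import numpy as np

# Evaluate a flux-density polynomial of the form used for such scales:
# log10(S/Jy) = a0 + a1*log10(f/GHz) + a2*log10(f/GHz)**2 + ...
def flux_density_jy(f_ghz, coeffs):
    lf = np.log10(f_ghz)
    return 10.0 ** sum(a * lf**i for i, a in enumerate(coeffs))

coeffs_3c286 = [1.25, -0.46, -0.17, 0.03]   # illustrative placeholders only
for f in (1.4, 5.0, 22.0, 43.0):
    print(f"{f:5.1f} GHz: {flux_density_jy(f, coeffs_3c286):6.2f} Jy")
```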

  13. Accurate calculations of bound rovibrational states for argon trimer

    Brandon, Drew; Poirier, Bill [Department of Chemistry and Biochemistry, and Department of Physics, Texas Tech University, Box 41061, Lubbock, Texas 79409-1061 (United States)

    2014-07-21

    This work presents a comprehensive quantum dynamics calculation of the bound rovibrational eigenstates of argon trimer (Ar3), using the ScalIT suite of parallel codes. The Ar3 rovibrational energy levels are computed to a very high level of accuracy (10−3 cm−1 or better), and up to the highest rotational and vibrational excitations for which bound states exist. For many of these rovibrational states, wavefunctions are also computed. Rare gas clusters such as Ar3 are interesting because the interatomic interactions manifest through long-range van der Waals forces, rather than through covalent chemical bonding. As a consequence, they exhibit strong Coriolis coupling between the rotational and vibrational degrees of freedom, as well as highly delocalized states, all of which renders accurate quantum dynamical calculation difficult. Moreover, with its (comparatively) deep potential well and heavy masses, Ar3 is an especially challenging rare gas trimer case. There are a great many rovibrational eigenstates to compute, and a very high density of states. Consequently, very few previous rovibrational state calculations for Ar3 may be found in the current literature, and only for the lowest-lying rotational excitations.

  14. Accurate calculations of bound rovibrational states for argon trimer

    This work presents a comprehensive quantum dynamics calculation of the bound rovibrational eigenstates of argon trimer (Ar3), using the ScalIT suite of parallel codes. The Ar3 rovibrational energy levels are computed to a very high level of accuracy (10−3 cm−1 or better), and up to the highest rotational and vibrational excitations for which bound states exist. For many of these rovibrational states, wavefunctions are also computed. Rare gas clusters such as Ar3 are interesting because the interatomic interactions manifest through long-range van der Waals forces, rather than through covalent chemical bonding. As a consequence, they exhibit strong Coriolis coupling between the rotational and vibrational degrees of freedom, as well as highly delocalized states, all of which renders accurate quantum dynamical calculation difficult. Moreover, with its (comparatively) deep potential well and heavy masses, Ar3 is an especially challenging rare gas trimer case. There are a great many rovibrational eigenstates to compute, and a very high density of states. Consequently, very few previous rovibrational state calculations for Ar3 may be found in the current literature—and only for the lowest-lying rotational excitations

  15. Accurate measurement of oxygen consumption in children undergoing cardiac catheterization.

    Li, Jia

    2013-01-01

    Oxygen consumption (VO2) is a key input for hemodynamic assessment using the direct Fick principle in children undergoing cardiac catheterization, so its accurate measurement is vital: any error in the measurement of VO2 translates directly into an equivalent percentage under- or overestimation of blood flows and vascular resistances. It remains common practice to estimate VO2 values from published predictive equations. Among these, the LaFarge equation is the most commonly used and gives the closest estimation with the least bias and narrowest limits of agreement. However, considerable errors are introduced by the LaFarge equation, particularly in children younger than 3 years of age. Respiratory mass spectrometry remains the "state-of-the-art" method, allowing highly sensitive, rapid and simultaneous measurement of multiple gas fractions. The AMIS 2000 quadrupole respiratory mass spectrometer system has been adapted to measure VO2 in children under mechanical ventilation with pediatric ventilators during cardiac catheterization. The small sampling rate, fast response time and long tubes make the equipment a unique and powerful tool for bedside continuous measurement of VO2 in cardiac catheterization for both clinical and research purposes. PMID:22488802
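
    The direct Fick principle referenced above computes blood flow from VO2 and the arteriovenous O2 content difference, which is why any VO2 error propagates proportionally into the flow. A minimal worked example follows; all patient values are invented and dissolved O2 is neglected for brevity.

```python
# Direct Fick principle: Q = VO2 / (Ca - Cv), with O2 contents in mL O2 per
# liter of blood; O2 content is approximated as 1.34 * Hb * SO2 (dissolved O2
# neglected). All values below are illustrative, not from the article.
def o2_content(hb_g_per_l, sat_fraction):
    return 1.34 * hb_g_per_l * sat_fraction   # mL O2 / L blood

vo2 = 150.0                                   # measured VO2, mL/min
ca = o2_content(120.0, 0.98)                  # arterial content
cv = o2_content(120.0, 0.73)                  # mixed-venous content
q = vo2 / (ca - cv)                           # blood flow, L/min
print(f"blood flow: {q:.1f} L/min")           # a 10% VO2 error shifts Q by 10%
```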

  16. The Global Geodetic Infrastructure for Accurate Monitoring of Earth Systems

    Weston, Neil; Blackwell, Juliana; Wang, Yan; Willis, Zdenka

    2014-05-01

    The National Geodetic Survey (NGS) and the Integrated Ocean Observing System (IOOS), two Program Offices within the National Ocean Service, NOAA, routinely collect, analyze and disseminate observations and products from several of the 17 critical systems identified by the U.S. Group on Earth Observations. Gravity, sea level monitoring, coastal zone and ecosystem management, geo-hazards and deformation monitoring and ocean surface vector winds are the primary Earth systems that have active research and operational programs in NGS and IOOS. These Earth systems collect terrestrial data but most rely heavily on satellite-based sensors for analyzing impacts and monitoring global change. One fundamental component necessary for monitoring via satellites is having a stable, global geodetic infrastructure where an accurate reference frame is essential for consistent data collection and geo-referencing. This contribution will focus primarily on system monitoring, coastal zone management and global reference frames and how the scientific contributions from NGS and IOOS continue to advance our understanding of the Earth and the Global Geodetic Observing System.

  17. Symphony: A Framework for Accurate and Holistic WSN Simulation

    Laurynas Riliskis

    2015-02-01

    Research on wireless sensor networks has progressed rapidly over the last decade, and these technologies have been widely adopted for both industrial and domestic uses. Several operating systems have been developed, along with a multitude of network protocols for all layers of the communication stack. Industrial Wireless Sensor Network (WSN) systems must satisfy strict criteria and are typically more complex and larger in scale than domestic systems. Together with the non-deterministic behavior of network hardware in real settings, this greatly complicates the debugging and testing of WSN functionality. To facilitate the testing, validation, and debugging of large-scale WSN systems, we have developed a simulation framework that accurately reproduces the processes that occur inside real equipment, including both hardware- and software-induced delays. The core of the framework consists of a virtualized operating system and an emulated hardware platform that is integrated with the general purpose network simulator ns-3. Our framework enables the user to adjust the real code base as would be done in real deployments and also to test the boundary effects of different hardware components on the performance of distributed applications and protocols. Additionally we have developed a clock emulator with several different skew models and a component that handles sensory data feeds. The new framework should substantially shorten WSN application development cycles.

  18. Symphony: a framework for accurate and holistic WSN simulation.

    Riliskis, Laurynas; Osipov, Evgeny

    2015-01-01

    Research on wireless sensor networks has progressed rapidly over the last decade, and these technologies have been widely adopted for both industrial and domestic uses. Several operating systems have been developed, along with a multitude of network protocols for all layers of the communication stack. Industrial Wireless Sensor Network (WSN) systems must satisfy strict criteria and are typically more complex and larger in scale than domestic systems. Together with the non-deterministic behavior of network hardware in real settings, this greatly complicates the debugging and testing of WSN functionality. To facilitate the testing, validation, and debugging of large-scale WSN systems, we have developed a simulation framework that accurately reproduces the processes that occur inside real equipment, including both hardware- and software-induced delays. The core of the framework consists of a virtualized operating system and an emulated hardware platform that is integrated with the general purpose network simulator ns-3. Our framework enables the user to adjust the real code base as would be done in real deployments and also to test the boundary effects of different hardware components on the performance of distributed applications and protocols. Additionally we have developed a clock emulator with several different skew models and a component that handles sensory data feeds. The new framework should substantially shorten WSN application development cycles. PMID:25723144

  19. An accurately fast algorithm of calculating reflection/transmission coefficients

    Castagna, J. P.

    2008-01-01

    For the boundary between transversely isotropic media with a vertical axis of symmetry (VTI media), the interface between a liquid and a VTI medium, and the free surface of an elastic half-space of a VTI medium, a fast and accurate algorithm is presented for calculating reflection/transmission (R/T) coefficients. In particular, the case of post-critical angle incidence is considered. Although we only performed the numerical calculation for models of VTI media, the calculated results can be extended to models of transversely isotropic media with a horizontal axis of rotation symmetry (HTI media). Compared to previous work, this algorithm can be used not only for the calculation of R/T coefficients at the boundary between ellipsoidally anisotropic media, but also at the boundary between generally anisotropic media, and it is faster and more accurate. According to the anisotropic parameters of some rocks given in the published literature, we calculated R/T coefficients using this algorithm and analyzed the effect of rock anisotropy on R/T coefficients. We used Snell's law and the energy balance principle to verify the calculated results.

  20. Accurate transition rates for intercombination lines of singly ionized nitrogen

    The transition energies and rates for the 2s²2p² ³P₁,₂-2s2p³ ⁵S₂° and 2s²2p3s-2s²2p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately the term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p³ ¹,³P₁° and 2s²2p3s ¹,³P₁° levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and the two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of the present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fractions, and lifetimes have been compared with previous calculations and experiments.

  1. An Accurate Flux Density Scale from 1 to 50 GHz

    Perley, Rick A

    2012-01-01

    We develop an absolute flux density scale for cm-wavelength astronomy by combining accurate flux density ratios determined by the VLA between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by WMAP. The radio sources 3C123, 3C196, 3C286 and 3C295 are found to be varying at a level of less than ~5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC7027, NGC6542, and MWC349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, ...

  2. Accurate Detection of Rifampicin-Resistant Mycobacterium Tuberculosis Strains.

    Song, Keum-Soo; Nimse, Satish Balasaheb; Kim, Hee Jin; Yang, Jeongseong; Kim, Taisun

    2016-01-01

    In 2013 alone, the death rate among the 9.0 million people infected with Mycobacterium tuberculosis (TB) worldwide was around 14%, which is unacceptably high. Empiric treatment of patients infected with TB or a multidrug-resistant Mycobacterium tuberculosis (MDR-TB) strain can also result in the spread of MDR-TB. Diagnostic tools that are rapid and reliable and have simple experimental protocols can significantly help in decreasing the prevalence of MDR-TB strains. We report the evaluation of the 9G technology based 9G DNAChips, which allow accurate detection and discrimination of TB and MDR-TB-RIF. One hundred and thirteen known cultured samples were used to evaluate the ability of the 9G DNAChip in the detection and discrimination of TB and MDR-TB-RIF strains. Hybridization of immobilized probes with the PCR products of TB and MDR-TB-RIF strains allows their detection and discrimination. The accuracy of the 9G DNAChip was determined by comparing its results with sequencing analysis and drug susceptibility testing. Sequencing analysis showed 100% agreement with the results of the 9G DNAChip. The 9G DNAChip showed very high sensitivity (95.4%) and specificity (100%). PMID:26999135
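
    For reference, the reported rates follow the usual confusion-matrix definitions; the counts below are chosen only so the arithmetic reproduces the quoted 95.4% and 100% figures over 113 samples, and they are not the study's actual tabulation.

```python
# Sensitivity/specificity from confusion-matrix counts (generic definitions;
# the counts are illustrative, picked to reproduce the quoted rates).
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

tp, fn, tn, fp = 62, 3, 48, 0   # 113 samples in total (illustrative split)
print(f"sensitivity {sensitivity(tp, fn):.1%}, specificity {specificity(tn, fp):.1%}")
```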

  3. The gene tree delusion.

    Springer, Mark S; Gatesy, John

    2016-01-01

    Higher-level relationships among placental mammals are mostly resolved, but several polytomies remain contentious. Song et al. (2012) claimed to have resolved three of these using shortcut coalescence methods (MP-EST, STAR) and further concluded that these methods, which assume no within-locus recombination, are required to unravel deep-level phylogenetic problems that have stymied concatenation. Here, we reanalyze Song et al.'s (2012) data and leverage these re-analyses to explore key issues in systematics including the recombination ratchet, gene tree stoichiometry, the proportion of gene tree incongruence that results from deep coalescence versus other factors, and simulations that compare the performance of coalescence and concatenation methods in species tree estimation. Song et al. (2012) reported an average locus length of 3.1 kb for the 447 protein-coding genes in their phylogenomic dataset, but the true mean length of these loci (start codon to stop codon) is 139.6 kb. Empirical estimates of recombination breakpoints in primates, coupled with consideration of the recombination ratchet, suggest that individual coalescence genes (c-genes) approach ∼12 bp or less for Song et al.'s (2012) dataset, three to four orders of magnitude shorter than the c-genes reported by these authors. This result has general implications for the application of coalescence methods in species tree estimation. We contend that it is illogical to apply coalescence methods to complete protein-coding sequences. Such analyses amalgamate c-genes with different evolutionary histories (i.e., exons separated by >100,000 bp), distort true gene tree stoichiometry that is required for accurate species tree inference, and contradict the central rationale for applying coalescence methods to difficult phylogenetic problems. In addition, Song et al.'s (2012) dataset of 447 genes includes 21 loci with switched taxonomic names, eight duplicated loci, 26 loci with non-homologous sequences that are

  4. Blood gene expression signatures predict exposure levels

    P.R. Bushel; Heinloth, A. N.; Li, J.; Huang, L.; Chou, J. W.; Boorman, G A; Malarkey, D.E.; Houle, C. D.; S. M. Ward; Wilson, R. E.; Fannin, R. D.; Russo, M W; Watkins, P B; Tennant, R. W.; Paules, R S

    2007-01-01

    To respond to potential adverse exposures properly, health care providers need accurate indicators of exposure levels. The indicators are particularly important in the case of acetaminophen (APAP) intoxication, the leading cause of liver failure in the U.S. We hypothesized that gene expression patterns derived from blood cells would provide useful indicators of acute exposure levels. To test this hypothesis, we used a blood gene expression data set from rats exposed to APAP to train classifie...

  5. Exploring the Optimal Strategy to Predict Essential Genes in Microbes

    Yao Lu

    2011-12-01

    Accurately predicting essential genes is important in many aspects of biology, medicine and bioengineering. In previous research, we have developed a machine learning based integrative algorithm to predict essential genes in bacterial species. This algorithm lends itself to two approaches for predicting essential genes: learning the traits from known essential genes in the target organism, or transferring essential gene annotations from a closely related model organism. However, for an understudied microbe, each approach has its potential limitations. The first is constricted by the often small number of known essential genes. The second is limited by the availability of model organisms and by evolutionary distance. In this study, we aim to determine the optimal strategy for predicting essential genes by examining four microbes with well-characterized essential genes. Our results suggest that, unless the known essential genes are few, learning from the known essential genes in the target organism usually outperforms transferring essential gene annotations from a related model organism. In fact, the required number of known essential genes is surprisingly small to make accurate predictions. In prokaryotes, when the number of known essential genes is greater than 2% of total genes, this approach already comes close to its optimal performance. In eukaryotes, achieving the same best performance requires over 4% of total genes, reflecting the increased complexity of eukaryotic organisms. Combining the two approaches resulted in an increased performance when the known essential genes are few. Our investigation thus provides key information on accurately predicting essential genes and will greatly facilitate annotations of microbial genomes.

  6. [Language gene].

    Takahashi, Hiroshi

    2006-11-01

    The human capacity for acquiring speech and language must derive, at least in part, from the genome. Recent advances in the field of molecular genetics have finally uncovered a 'language gene'. Disruption of FOXP2, the first identified 'language gene', causes a severe speech and language disorder. To elucidate the anatomical basis of language processing in the brain, we examined the expression pattern of FOXP2/Foxp2 genes in monkey and rat brains through development. We found preferential expression of FOXP2/Foxp2 in the striosomal compartment of the developing striatum. Thus, we suggest that the striatum, particularly the striosomal system, may participate in neural information processing for language and speech. Our suggestion is consistent with the declarative/procedural model of language proposed by Ullman (1997, 2001), in which the procedural memory-dependent mental grammar is rooted in the basal ganglia and the frontal cortex, and the declarative memory-dependent mental lexicon is rooted in the temporal lobe. PMID:17432197

  7. Gene Silencing

    Kertbundit, Sunee; Juříček, Miloslav; Hall, T.C.

    Dordrecht : Springer, 2010 - (Jain, S.; Brar, D.), pp. 631-652 ISBN 978-90-481-2966-9 Institutional research plan: CEZ:AV0Z50380511 Keywords : Gene Silencing * RISC complex Subject RIV: EB - Genetics ; Molecular Biology

  8. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    1999-06-01

    A team of radio astronomers has used the National Science Foundation's Very Long Baseline Array (VLBA) to make the most accurate measurement ever made of the distance to a faraway galaxy. Their direct measurement calls into question the precision of distance determinations made by other techniques, including those announced last week by a team using the Hubble Space Telescope. The radio astronomers measured a distance of 23.5 million light-years to a galaxy called NGC 4258 in Ursa Major. "Ours is a direct measurement, using geometry, and is independent of all other methods of determining cosmic distances," said Jim Herrnstein, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. The team says their measurement is accurate to within less than a million light-years, or four percent. The galaxy is also known as Messier 106 and is visible with amateur telescopes. Herrnstein, along with James Moran and Lincoln Greenhill of the Harvard- Smithsonian Center for Astrophysics; Phillip Diamond, of the Merlin radio telescope facility at Jodrell Bank and the University of Manchester in England; Makato Inoue and Naomasa Nakai of Japan's Nobeyama Radio Observatory; Mikato Miyoshi of Japan's National Astronomical Observatory; Christian Henkel of Germany's Max Planck Institute for Radio Astronomy; and Adam Riess of the University of California at Berkeley, announced their findings at the American Astronomical Society's meeting in Chicago. "This is an incredible achievement to measure the distance to another galaxy with this precision," said Miller Goss, NRAO's Director of VLA/VLBA Operations. "This is the first time such a great distance has been measured this accurately. It took painstaking work on the part of the observing team, and it took a radio telescope the size of the Earth -- the VLBA -- to make it possible," Goss said. "Astronomers have sought to determine the Hubble Constant, the rate of expansion of the universe, for decades. This will in turn lead to an

  9. SNPdetector: A Software Tool for Sensitive and Accurate SNP Detection.

    2005-10-01

    Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in the CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebrafish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. SNPdetector runs on the Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).

  10. TOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPING

    Khadoudja Ghanem

    2013-03-01

    An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increase. For this reason, it is important to have efficient computational methods and algorithms that can be applied to large datasets, such that it is still possible to complete the machine learning tasks in reasonable time. In this context, we present in this paper a simple and more accurate process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop an efficient Hidden Markov Model (HMM) training. The idea of the proposed process consists of two steps. In the first step, training instances with similar inputs are clustered and a weight factor representing the frequency of these instances is assigned to each representative cluster. The Dynamic Time Warping technique is used as a dissimilarity function to cluster similar examples. In the second step, all formulas in the classical HMM training algorithm (EM) associated with the number of training instances are modified to include the weight factor in the appropriate terms. This process significantly accelerates HMM training while maintaining the same initial, transition and emission probability matrices as those obtained with the classical HMM training algorithm. Accordingly, the classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2200 times are possible when the size is about 100,000 instances. The proposed approach is not limited to training HMMs, but can be employed for a large variety of ML methods.
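
    A minimal sketch of the two-step idea, as an illustration rather than the authors' code: training sequences are clustered by DTW distance with a greedy leader scheme, each representative carrying a frequency weight; a weighted EM/Baum-Welch step would then scale each representative's expected counts by its weight instead of looping over every instance.

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance of 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def cluster_with_weights(sequences, radius):
    """Greedy leader clustering: each representative carries a frequency weight."""
    reps, weights = [], []
    for s in sequences:
        for k, r in enumerate(reps):
            if dtw(s, r) <= radius:
                weights[k] += 1     # instance absorbed into an existing cluster
                break
        else:
            reps.append(s)          # instance becomes a new representative
            weights.append(1)
    return reps, weights

# usage: reps, w = cluster_with_weights(train_seqs, radius=2.0)
# then run EM on reps only, multiplying each sequence's expected counts by w[k].
```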

  11. Towards More Accurate Clustering Method by Using Dynamic Time Warping

    Khadoudja Ghanem

    2013-04-01

    An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increase. For this reason, it is important to have efficient computational methods and algorithms that can be applied to large datasets, such that it is still possible to complete the machine learning tasks in reasonable time. In this context, we present in this paper a simple and more accurate process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop an efficient Hidden Markov Model (HMM) training. The idea of the proposed process consists of two steps. In the first step, training instances with similar inputs are clustered and a weight factor representing the frequency of these instances is assigned to each representative cluster. The Dynamic Time Warping technique is used as a dissimilarity function to cluster similar examples. In the second step, all formulas in the classical HMM training algorithm (EM) associated with the number of training instances are modified to include the weight factor in the appropriate terms. This process significantly accelerates HMM training while maintaining the same initial, transition and emission probability matrices as those obtained with the classical HMM training algorithm. Accordingly, the classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2200 times are possible when the size is about 100,000 instances. The proposed approach is not limited to training HMMs, but can be employed for a large variety of ML methods.

  12. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
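
    Methodologically, each in vitro test above is judged by regressing bioavailability on bioaccessibility and inspecting the slope and coefficient of determination; a minimal sketch follows, with invented values standing in for the measured ones.

```python
from scipy import stats

# Judge an in vitro test against in vivo data (synthetic illustrative values,
# not the study's measurements): regress bioavailability on bioaccessibility.
bioaccessible = [28, 35, 44, 52, 61]   # %, in vitro (invented)
bioavailable = [33, 40, 47, 55, 63]    # %, in vivo (invented)

fit = stats.linregress(bioaccessible, bioavailable)
print(f"slope {fit.slope:.2f}, R^2 {fit.rvalue**2:.2f}")
```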

  13. CT-Analyst: fast and accurate CBR emergency assessment

    Boris, Jay; Fulton, Jack E., Jr.; Obenschain, Keith; Patnaik, Gopal; Young, Theodore, Jr.

    2004-08-01

    An urban-oriented emergency assessment system for airborne Chemical, Biological, and Radiological (CBR) threats, called CT-Analyst and based on new principles, gives greater accuracy and much greater speed than possible with current alternatives. This paper explains how this has been done. The increased accuracy derives from detailed, three-dimensional CFD computations including, solar heating, buoyancy, complete building geometry specification, trees, wind fluctuations, and particle and droplet distributions (as appropriate). This paper shows how a very finite number of such computations for a given area can be extended to all wind directions and speeds, and all likely sources and source locations using a new data structure called Dispersion Nomographs. Finally, we demonstrate a portable, entirely graphical software tool called CT-Analyst that embodies this entirely new, high-resolution technology and runs effectively on small personal computers. Real-time users don't have to wait for results because accurate answers are available with near zero-latency (that is 10 - 20 scenarios per second). Entire sequences of cases (e.g. a continuously changing source location or wind direction) can be computed and displayed as continuous-action movies. Since the underlying database has been precomputed, the door is wide open for important new real-time, zero-latency functions such as sensor data fusion, backtracking to an unknown source location, and even evacuation route planning. Extensions of the technology to sensor location optimization, buildings, tunnels, and integration with other advanced technologies, e.g. micrometeorology or detailed wind field measurements, will be discussed briefly here.

  14. Copeptin does not accurately predict disease severity in imported malaria

    van Wolfswinkel Marlies E

    2012-01-01

    Background: Copeptin has recently been identified as a stable surrogate marker for the unstable hormone arginine vasopressin (AVP). Copeptin has been shown to correlate with disease severity in leptospirosis and bacterial sepsis. Hyponatraemia is common in severe imported malaria, and dysregulation of AVP release has been hypothesized as an underlying pathophysiological mechanism. The aim of the present study was to evaluate the performance of copeptin as a predictor of disease severity in imported malaria. Methods: Copeptin was measured in stored serum samples of 204 patients with imported malaria who were admitted to our Institute for Tropical Diseases in Rotterdam in the period 1999-2010. The occurrence of WHO-defined severe malaria was the primary end-point. The diagnostic performance of copeptin was compared to that of the previously evaluated biomarkers C-reactive protein, procalcitonin, lactate and sodium. Results: Of the 204 patients (141 Plasmodium falciparum, 63 non-falciparum infections), 25 had severe malaria. The area under the ROC curve of copeptin for severe disease (0.66 [95% confidence interval 0.59-0.72]) was comparable to that of lactate, sodium and procalcitonin. C-reactive protein (0.84 [95% CI 0.79-0.89]) had a significantly better performance as a biomarker for severe malaria than the other biomarkers. Conclusions: C-reactive protein, but not copeptin, was found to be an accurate predictor of disease severity in imported malaria. The applicability of copeptin as a marker for severe malaria in clinical practice is limited to the exclusion of severe malaria.
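
    For orientation, the comparison above rests on the area under the ROC curve; the sketch below computes AUCs for two synthetic markers whose effect sizes are chosen to roughly mimic the reported 0.66 and 0.84. The scores are simulated, not patient data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic AUC comparison (simulated scores, not patient data): a weak marker
# and a stronger marker for 25 severe cases out of 204.
rng = np.random.default_rng(3)
severe = np.r_[np.ones(25), np.zeros(179)]
copeptin_like = rng.normal(loc=severe * 0.6, scale=1.0)   # weak separation
crp_like = rng.normal(loc=severe * 1.5, scale=1.0)        # stronger separation
print(f"AUC weak marker {roc_auc_score(severe, copeptin_like):.2f}, "
      f"strong marker {roc_auc_score(severe, crp_like):.2f}")
```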

  15. Accurate calculation of ³¹P NMR chemical shifts in polyoxometalates.

    Pascual-Borràs, Magda; López, Xavier; Poblet, Josep M

    2015-04-14

    We search for the best density functional theory strategy for the determination of ³¹P nuclear magnetic resonance (NMR) chemical shifts, δ(³¹P), in polyoxometalates. Among the variables governing the quality of the quantum modelling, we tackle herein the influence of the functional and the basis set. The spin-orbit and solvent effects were routinely included. To do so we analysed the family of structures α-[P2W18-xMxO62](n-) with M = Mo(VI), V(V) or Nb(V); [P2W17O62(M'R)](n-) with M' = Sn(IV), Ge(IV) and Ru(II); and [PW12-xMxO40](n-) with M = Pd(IV), Nb(V) and Ti(IV). The main results suggest that, to date, the best procedure for the accurate calculation of δ(³¹P) in polyoxometalates is the combination TZP/PBE//TZ2P/OPBE (for the NMR//optimization steps). The hybrid functionals (PBE0, B3LYP) tested herein for the NMR step, besides being more CPU-consuming, do not outperform pure GGA functionals. Although previous studies on ¹⁸³W NMR suggested that very large basis sets like QZ4P were needed for geometry optimization, the present results indicate that TZ2P suffices if the functional is optimal. Moreover, scaling corrections were applied to the results, providing low mean absolute errors below 1 ppm for δ(³¹P), which is a step forward in confirming or predicting chemical shifts in polyoxometalates. Finally, via a simplified molecular model, we establish how small variations in δ(³¹P) arise from energy changes in the occupied and virtual orbitals of the PO4 group. PMID:25738630

  16. Accurate mobile malware detection and classification in the cloud.

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

    As the dominant smartphone operating system, Android has consequently attracted the attention of malware authors and researchers alike. The number of types of Android malware is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine performing abnormal app detection through dynamic analysis, and a signature detection engine performing known malware detection and classification with a combination of static and dynamic analysis. We evaluate our system using 5560 malware samples and 6000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false negative rate (1.16%) and an acceptable false positive rate (1.30%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average true positive rate of 98.94%. Considering the intensive computing resources required by the static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the cloud. The app store markets and ordinary users can access our detection system for malware detection through a cloud service. PMID:26543718

  17. Biased Gene Fractionation and Dominant Gene Expression among the Subgenomes of Brassica rapa

    Feng Cheng; Jian Wu; Lu Fang; Silong Sun; Bo Liu; Ke Lin; Guusje Bonnema; Xiaowu Wang

    2012-01-01

    Polyploidization, both ancient and recent, is frequent among plants. A "two-step theory" was proposed to explain the meso-triplication of the Brassica "A" genome, Brassica rapa. By accurately partitioning this genome, we observed that genes in the less fractionated subgenome (LF) were dominantly expressed over the genes in the more fractionated subgenomes (MFs: MF1 and MF2), while the genes in MF1 were slightly dominantly expressed over the genes in MF2. The results indicated that the dominant...

  18. Accurate reconstruction of insertion-deletion histories by statistical phylogenetics.

    Oscar Westesson

    The Multiple Sequence Alignment (MSA) is a computational abstraction that represents a partial summary either of indel history or of structural similarity. Taking the former view (indel history), it is possible to use formal automata theory to generalize the phylogenetic likelihood framework for finite substitution models (Dayhoff's probability matrices and Felsenstein's pruning algorithm) to arbitrary-length sequences. In this paper, we report results of a simulation-based benchmark of several methods for reconstruction of indel history. The methods tested include a relatively new algorithm for statistical marginalization of MSAs that sums over a stochastically-sampled ensemble of the most probable evolutionary histories. For mammalian evolutionary parameters on several different trees, the single most likely history sampled by our algorithm appears less biased than histories reconstructed by other MSA methods. The algorithm can also be used for alignment-free inference, where the MSA is explicitly summed out of the analysis. As an illustration of our method, we discuss reconstruction of the evolutionary histories of human protein-coding genes.

  19. Accurate modelling of flow induced stresses in rigid colloidal aggregates

    Vanni, Marco

    2015-07-01

    A method has been developed to estimate the motion and the internal stresses induced by a fluid flow on a rigid aggregate. The approach couples Stokesian dynamics and structural mechanics in order to take into account accurately the effect of the complex geometry of the aggregates on hydrodynamic forces and the internal redistribution of stresses. The intrinsic error of the method, due to the low-order truncation of the multipole expansion of the Stokes solution, has been assessed by comparison with the analytical solution for the case of a doublet in a shear flow. In addition, it has been shown that the error becomes smaller as the number of primary particles in the aggregate increases and hence it is expected to be negligible for realistic reproductions of large aggregates. The evaluation of internal forces is performed by an adaptation of the matrix methods of structural mechanics to the geometric features of the aggregates and to the particular stress-strain relationship that occurs at intermonomer contacts. A preliminary investigation on the stress distribution in rigid aggregates and their mode of breakup has been performed by studying the response to an elongational flow of both realistic reproductions of colloidal aggregates (made of several hundreds monomers) and highly simplified structures. A very different behaviour has been evidenced between low-density aggregates with isostatic or weakly hyperstatic structures and compact aggregates with highly hyperstatic configuration. In low-density clusters breakup is caused directly by the failure of the most stressed intermonomer contact, which is typically located in the inner region of the aggregate and hence originates the birth of fragments of similar size. On the contrary, breakup of compact and highly cross-linked clusters is seldom caused by the failure of a single bond. When this happens, it proceeds through the removal of a tiny fragment from the external part of the structure. More commonly, however

  20. An accurate and simple quantum model for liquid water.

    Paesani, Francesco; Zhang, Wei; Case, David A; Cheatham, Thomas E; Voth, Gregory A

    2006-11-14

    The path-integral molecular dynamics and centroid molecular dynamics methods have been applied to investigate the behavior of liquid water at ambient conditions starting from a recently developed simple point charge/flexible (SPC/Fw) model. Several quantum structural, thermodynamic, and dynamical properties have been computed and compared to the corresponding classical values, as well as to the available experimental data. The path-integral molecular dynamics simulations show that the inclusion of quantum effects results in a less structured liquid with a reduced amount of hydrogen bonding in comparison to its classical analog. The nuclear quantization also leads to a smaller dielectric constant and a larger diffusion coefficient relative to the corresponding classical values. Collective and single molecule time correlation functions show a faster decay than their classical counterparts. Good agreement with the experimental measurements in the low-frequency region is obtained for the quantum infrared spectrum, which also shows a higher intensity and a redshift relative to its classical analog. A modification of the original parametrization of the SPC/Fw model is suggested and tested in order to construct an accurate quantum model, called q-SPC/Fw, for liquid water. The quantum results for several thermodynamic and dynamical properties computed with the new model are shown to be in a significantly better agreement with the experimental data. Finally, a force-matching approach was applied to the q-SPC/Fw model to derive an effective quantum force field for liquid water in which the effects due to the nuclear quantization are explicitly distinguished from those due to the underlying molecular interactions. Thermodynamic and dynamical properties computed using standard classical simulations with this effective quantum potential are found in excellent agreement with those obtained from significantly more computationally demanding full centroid molecular dynamics

  1. How utilities can achieve more accurate decommissioning cost estimates

    The number of commercial nuclear power plants that are undergoing decommissioning, coupled with the economic pressure of deregulation, has increased the focus on adequate funding for decommissioning. The introduction of spent-fuel storage and disposal of low-level radioactive waste into the cost analysis places even greater concern on the accuracy of the fund calculation basis. The size and adequacy of the decommissioning fund have also played a major part in negotiations for the transfer of plant ownership. For all of these reasons, it is important that the operating plant owner reduce the margin of error in the preparation of decommissioning cost estimates. To date, all of these estimates have been prepared via the building block method. That is, numerous individual calculations defining the planning, engineering, removal, and disposal of plant systems and structures are performed. These activity costs are supplemented by the period-dependent costs reflecting the administration, control, licensing, and permitting of the program. This method will continue to be used in the foreseeable future until adequate performance data are available. The accuracy of the activity cost calculation is directly related to the accuracy of the inventory of plant system components, piping and equipment, and plant structural composition. Typically, it is left up to the cost-estimating contractor to develop this plant inventory. The data are generated by searching and analyzing property asset records, plant databases, piping and instrumentation drawings, piping system isometric drawings, and component assembly drawings. However, experience has shown that these sources may not be up to date, discrepancies may exist, there may be missing data, and the level of detail may not be sufficient. Again, typically, the time constraints associated with the development of the cost estimate preclude perfect resolution of the inventory questions. Another problem area in achieving accurate cost

  2. Fast and accurate line scanner based on white light interferometry

    Lambelet, Patrick; Moosburger, Rudolf

    2013-04-01

    White-light interferometry is a highly accurate technology for 3D measurements. The principle is widely utilized in surface metrology instruments but rarely adopted for in-line inspection systems. The main challenges for rolling out inspection systems based on white-light interferometry to the production floor are its sensitivity to environmental vibrations and relatively long measurement times: a large quantity of data needs to be acquired and processed in order to obtain a single topographic measurement. Heliotis developed a smart-pixel CMOS camera (lock-in camera) which is specially suited for white-light interferometry. The demodulation of the interference signal is treated at the level of the pixel, which typically reduces the acquisition data by one order of magnitude. Along with the high bandwidth of the dedicated lock-in camera, vertical scan speeds of more than 40 mm/s are reachable. The high scan speed allows for the realization of inspection systems that are rugged against external vibrations as present on the production floor. For many industrial applications, such as the inspection of wafer bumps, surfaces of mechanical parts, and solar panels, large areas need to be measured. In this case either the instrument or the sample is displaced laterally and several measurements are stitched together. The cycle time of such a system is mostly limited by the stepping time for multiple lateral displacements. A line scanner based on white-light interferometry would eliminate most of the stepping time while maintaining robustness and accuracy. A. Olszak proposed a simple geometry to realize such a lateral scanning interferometer. We demonstrate that such inclined interferometers can benefit significantly from the fast in-pixel demodulation capabilities of the lock-in camera. One drawback of an inclined observation perspective is that its application is limited to objects with scattering surfaces. We therefore propose an alternate geometry where the incident light is
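
    The in-pixel demodulation that makes the lock-in camera effective can be illustrated offline. The following numpy sketch (the signal model, wavelength, and filter length are illustrative assumptions, not Heliotis specifications) recovers the coherence envelope of a white-light interferogram by quadrature mixing and low-pass filtering, which is the operation the camera performs per pixel:

```python
import numpy as np

# Toy white-light interference signal along the vertical scan:
# DC background plus an envelope-modulated carrier.
z = np.linspace(-20e-6, 20e-6, 4001)          # scan positions (m)
lam = 600e-9                                  # mean wavelength (m)
envelope = np.exp(-(z / 5e-6) ** 2)           # coherence envelope
signal = 1.0 + 0.5 * envelope * np.cos(4 * np.pi * z / lam)

# Lock-in demodulation: mix with quadrature references at the
# carrier frequency, then low-pass by a moving average.
ref_i = np.cos(4 * np.pi * z / lam)
ref_q = np.sin(4 * np.pi * z / lam)
kernel = np.ones(101) / 101                   # crude low-pass filter
i_comp = np.convolve(signal * ref_i, kernel, mode="same")
q_comp = np.convolve(signal * ref_q, kernel, mode="same")
amplitude = 2 * np.hypot(i_comp, q_comp)      # recovered envelope

# The surface height estimate is the scan position of peak contrast.
print(f"estimated peak at z = {z[np.argmax(amplitude)]:.2e} m")
```

    The surface height at that pixel is then read off as the scan position of maximum envelope amplitude, so only the demodulated envelope, not the raw fringe data, needs to leave the sensor.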

  3. Fast and accurate predictions of covalent bonds in chemical space.

    Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole

    2016-05-01

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2+. Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
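
    The first-order estimate discussed above has a simple schematic form (notation assumed here): with λ parameterizing the alchemical interpolation between the reference (λ = 0) and target (λ = 1) molecules,

```latex
E(\lambda) \;\approx\; E(0) \;+\; \lambda\,
\left.\frac{\partial E}{\partial \lambda}\right|_{\lambda=0},
\qquad 0 \le \lambda \le 1 .
```

    The reported near-linearity of the bonding potential for vertical interpolations among third- and fourth-row elements is precisely what lets this truncation reach chemical accuracy, and why adding the second-order term can make the prediction worse.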

  4. Fast and accurate predictions of covalent bonds in chemical space

    Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole

    2016-05-01

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2+. Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi

  5. New simple method for fast and accurate measurement of volumes

    A new simple method is presented, which allows us to measure in just a few minutes but with reasonable accuracy (less than 1%) the volume confined inside a generic enclosure, regardless of the complexity of its shape. The technique proposed also allows us to measure the volume of any portion of a complex manifold, including, for instance, pipes and pipe fittings, valves, gauge heads, and so on, without disassembling the manifold at all. To this purpose an airtight variable volume is used, whose volume adjustment can be precisely measured; it has an overall capacity larger than that of the unknown volume. Such a variable volume is initially filled with a suitable test gas (for instance, air) at a known pressure, as carefully measured by means of a high precision capacitive gauge. By opening a valve, the test gas is allowed to expand into the previously evacuated unknown volume. A feedback control loop reacts to the resulting finite pressure drop, thus contracting the variable volume until the pressure exactly retrieves its initial value. The overall reduction of the variable volume achieved at the end of this process gives a direct measurement of the unknown volume, and definitively gets rid of the problem of dead spaces. The method proposed actually does not require the test gas to be rigorously held at a constant temperature, thus resulting in a huge simplification as compared to complex arrangements commonly used in metrology (gas expansion method), which can grant extremely accurate measurements but require rather expensive equipment and time-consuming procedures, being therefore impractical in most applications. A simple theoretical analysis of the thermodynamic cycle and the results of experimental tests are described, which demonstrate that, in spite of its simplicity, the method provides a measurement accuracy within 0.5%. The system requires just a few minutes to complete a single measurement, and is ready immediately at the end of the process. The
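
    The heart of the measurement is Boyle's law at (approximately) constant temperature and fixed gas amount (schematic notation assumed here): if the variable volume starts at V_var and pressure P0, and the feedback loop contracts it by ΔV until the pressure returns to P0 after expansion into the unknown volume V_x, then

```latex
P_0\,V_{\mathrm{var}} \;=\; P_0\,\bigl(V_{\mathrm{var}} - \Delta V + V_x\bigr)
\quad\Longrightarrow\quad V_x \;=\; \Delta V ,
```

    so the unknown volume is read directly from the calibrated contraction of the variable volume, with no dead-space correction required.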

  6. Improved management of radiotherapy departments through accurate cost data

    Escalating health care expenses urge Governments towards cost containment. More accurate data on the precise costs of health care interventions are needed. We performed an aggregate cost calculation of radiation therapy departments and treatments and discussed the different cost components. The costs of a radiotherapy department were estimated, based on accreditation norms for radiotherapy departments set forth in the Belgian legislation. The major cost components of radiotherapy are the cost of buildings and facilities, equipment, medical and non-medical staff, materials and overhead. They respectively represent around 3, 30, 50, 4 and 13% of the total costs, irrespective of the department size. The average cost per patient lowers with increasing department size and optimal utilization of resources. Radiotherapy treatment costs vary in a stepwise fashion: minor variations of patient load do not affect the cost picture significantly due to a small impact of variable costs. With larger increases in patient load however, additional equipment and/or staff will become necessary, resulting in additional semi-fixed costs and an important increase in costs. A sensitivity analysis of these two major cost inputs shows that a decrease in total costs of 12-13% can be obtained by assuming personnel availability at 20% less than full time; that, due to evolving seniority levels, the annual increase in wage costs is estimated at more than 1%; and that changing the clinical lifetime of buildings and equipment, with the interest rate unchanged, yields a 5% reduction of total costs and of cost per patient. More sophisticated equipment will not have a very large impact on the cost (±4000 BEF/patient), provided that the additional equipment is adapted to the size of the department. That the recommendations we used, based on the Belgian legislation, are not excessive is shown by replacing them with the USA Blue Book recommendations. Depending on the department size, costs in

  7. ChIP-seq Accurately Predicts Tissue-Specific Activity of Enhancers

    Visel, Axel; Blow, Matthew J.; Li, Zirong; Zhang, Tao; Akiyama, Jennifer A.; Holt, Amy; Plajzer-Frick, Ingrid; Shoukry, Malak; Wright, Crystal; Chen, Feng; Afzal, Veena; Ren, Bing; Rubin, Edward M.; Pennacchio, Len A.

    2009-02-01

    A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover since they are scattered amongst the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here, we performed chromatin immunoprecipitation with the enhancer-associated protein p300, followed by massively-parallel sequencing, to map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain, and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases revealed reproducible enhancer activity in those tissues predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities and suggest that such datasets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale.

  8. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

    Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from 10's to 1,000,000's of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the same vicinity of the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far-field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well-mixed). In this regime, the methane emissions are given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer. We present detailed methane flux
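
    The far-field flux computation itself is a one-liner; the sketch below (function name, default tracer, and unit conventions are illustrative assumptions) shows the background-corrected concentration ratio scaled by the tracer release rate, with a molar-to-mass conversion:

```python
def methane_flux(ch4_ppm, tracer_ppm, ch4_bg_ppm, tracer_bg_ppm,
                 tracer_release_kg_h, mw_ch4=16.04, mw_tracer=26.04):
    """Far-field tracer dilution estimate of a methane emission rate.

    Under the well-mixed plume assumption, the methane flux is the
    tracer release rate scaled by the background-corrected mixing
    ratio between both gases (converted from a molar to a mass basis).
    The default tracer molar mass is that of acetylene (26.04 g/mol).
    """
    ratio = (ch4_ppm - ch4_bg_ppm) / (tracer_ppm - tracer_bg_ppm)
    return tracer_release_kg_h * ratio * (mw_ch4 / mw_tracer)

# Hypothetical downwind transect averages (ppm) and release rate (kg/h):
print(methane_flux(2.35, 0.12, 1.85, 0.00, tracer_release_kg_h=0.5))
```

    Any atmospheric dilution acts identically on both plumes, which is why the ratio, rather than either absolute concentration, carries the flux information.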

  9. History and progress on accurate measurements of the Planck constant.

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10⁻³⁴ J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, NA. As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10⁸ from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the

  10. History and progress on accurate measurements of the Planck constant

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10⁻³⁴ J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, NA. As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10⁸ from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the improved

  11. History and progress on accurate measurements of the Planck constant

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10⁻³⁴ J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, NA. As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10⁸ from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the

  12. An Accurate Method for Inferring Relatedness in Large Datasets of Unphased Genotypes via an Embedded Likelihood-Ratio Test

    Rodriguez, Jesse M.

    2013-01-01

    Studies that map disease genes rely on accurate annotations that indicate whether individuals in the studied cohorts are related to each other or not. For example, in genome-wide association studies, the cohort members are assumed to be unrelated to one another. Investigators can correct for individuals in a cohort with previously-unknown shared familial descent by detecting genomic segments that are shared between them, which are considered to be identical by descent (IBD). Alternatively, elevated frequencies of IBD segments near a particular locus among affected individuals can be indicative of a disease-associated gene. As genotyping studies grow to use increasingly large sample sizes and meta-analyses begin to include many data sets, accurate and efficient detection of hidden relatedness becomes a challenge. To enable disease-mapping studies of increasingly large cohorts, a fast and accurate method to detect IBD segments is required. We present PARENTE, a novel method for detecting related pairs of individuals and shared haplotypic segments within these pairs. PARENTE is a computationally-efficient method based on an embedded likelihood ratio test. As demonstrated by the results of our simulations, our method exhibits better accuracy than the current state of the art, and can be used for the analysis of large genotyped cohorts. PARENTE's higher accuracy becomes even more significant in more challenging scenarios, such as detecting shorter IBD segments or when an extremely low false-positive rate is required. PARENTE is publicly and freely available at http://parente.stanford.edu/. © 2013 Springer-Verlag.
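
    The decision at the core of such a detector can be written generically (this is the standard likelihood-ratio form, not PARENTE's exact embedded statistic): for a genomic window w with unphased genotypes g1 and g2 of the two individuals,

```latex
\mathrm{LLR}(w) \;=\;
\log\frac{P\bigl(g_1, g_2 \mid \mathrm{IBD}\bigr)}
         {P\bigl(g_1, g_2 \mid \mathrm{unrelated}\bigr)} ,
```

    with windows whose LLR exceeds a threshold reported as shared segments; the threshold trades detection power against the false-positive rate mentioned above.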

  13. Simplifying gene trees for easier comprehension

    Mundry Marvin

    2006-04-01

    Full Text Available Abstract Background In the genomic age, gene trees may contain large amounts of data making them hard to read and understand. Therefore, an automated simplification is important. Results We present a simplification tool for gene trees called TreeSimplifier. Based on species tree information and HUGO gene names, it summarizes "monophyla". These monophyla correspond to subtrees of the gene tree where the evolution of a gene follows species phylogeny, and they are simplified to single leaves in the gene tree. Such a simplification may fail, for example, due to genes in the gene tree that are misplaced. In this way, misplaced genes can be identified. Optionally, our tool glosses over a limited degree of "paraphyly" in a further simplification step. In both simplification steps, species can be summarized into groups and treated as equivalent. In the present study we used our tool to derive a simplified tree of 397 leaves from a tree of 1138 leaves. Comparing the simplified tree to a "cartoon tree" created manually, we note that both agree to a high degree. Conclusion Our automatic simplification tool for gene trees is fast, accurate, and effective. It yields results of similar quality as manual simplification. It should be valuable in phylogenetic studies of large protein families. The software is available at http://www.uni-muenster.de/Bioinformatics/services/treesim/.

  14. PANTHER version 7: improved phylogenetic trees, orthologs and collaboration with the Gene Ontology Consortium

    Mi, Huaiyu; Dong, Qing; Muruganujan, Anushya; Gaudet, Pascale; Lewis, Suzanna; Thomas, Paul D

    2009-01-01

    Protein Analysis THrough Evolutionary Relationships (PANTHER) is a comprehensive software system for inferring the functions of genes based on their evolutionary relationships. Phylogenetic trees of gene families form the basis for PANTHER and these trees are annotated with ontology terms describing the evolution of gene function from ancestral to modern day genes. One of the main applications of PANTHER is in accurate prediction of the functions of uncharacterized genes, based on their evolu...

  15. CAGE: Combinatorial Analysis of Gene-Cluster Evolution

    Song, Giltae; Zhang, Louxin; Vinar, Tomas; Miller, Webb

    2010-01-01

    Much important evolutionary activity occurs in gene clusters, where a copy of a gene may be free to acquire new functions. Current computational methods to extract evolutionary information from sequence data for such clusters are suboptimal, in part because accurate sequence data are often lacking in these genomic regions, making existing methods difficult to apply. We describe a new method for reconstructing the recent evolutionary history of gene clusters, and evaluate its performance on bo...

  16. Bacterial reference genes for gene expression studies by RT-qPCR: survey and analysis.

    Rocha, Danilo J P; Santos, Carolina S; Pacheco, Luis G C

    2015-09-01

    The appropriate choice of reference genes is essential for accurate normalization of gene expression data obtained by the method of reverse transcription quantitative real-time PCR (RT-qPCR). In 2009, a guideline called the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) highlighted the importance of the selection and validation of more than one suitable reference gene for obtaining reliable RT-qPCR results. Herein, we searched the recent literature in order to identify the bacterial reference genes that have been most commonly validated in gene expression studies by RT-qPCR (in the first 5 years following publication of the MIQE guidelines). Through a combination of different search parameters with the text mining tool MedlineRanker, we identified 145 unique bacterial genes that were recently tested as candidate reference genes. Of these, 45 genes were experimentally validated and, in most of the cases, their expression stabilities were verified using the software tools geNorm and NormFinder. It is noteworthy that only 10 of these reference genes had been validated in two or more of the studies evaluated. An enrichment analysis using Gene Ontology classifications demonstrated that genes belonging to the functional categories of DNA Replication (GO: 0006260) and Transcription (GO: 0006351) rendered a proportionally higher number of validated reference genes. Three genes in the former functional class were also among the top five most stable genes identified through an analysis of gene expression data obtained from the Pathosystems Resource Integration Center. These results may provide a guideline for the initial selection of candidate reference genes for RT-qPCR studies in several different bacterial species. PMID:26149127
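
    Since geNorm recurs throughout these studies, its stability measure is worth making concrete. The sketch below (a simplified re-implementation, not the published geNorm code; the expression matrix is hypothetical) computes the measure M for each candidate gene: the average, over all other candidates, of the standard deviation of the log-ratio of their expression across samples:

```python
import numpy as np

def genorm_m(expr):
    """geNorm-style stability measure M for candidate reference genes.

    expr: (n_samples, n_genes) array of relative expression values.
    M_j is the mean, over all other genes k, of the standard deviation
    across samples of log2(expr_j / expr_k); lower M means more stable.
    """
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m_values = np.empty(n_genes)
    for j in range(n_genes):
        ratios = log_expr[:, [j]] - np.delete(log_expr, j, axis=1)
        m_values[j] = ratios.std(axis=0, ddof=1).mean()
    return m_values

# Hypothetical expression matrix: 6 samples x 3 candidate genes.
expr = np.array([[1.0, 2.1, 0.9], [1.1, 2.0, 1.4], [0.9, 1.9, 0.7],
                 [1.0, 2.2, 1.8], [1.2, 2.3, 0.8], [0.8, 1.8, 1.1]])
print(genorm_m(expr))   # the gene with the lowest M is ranked best
```

    geNorm proper iterates this computation, discarding the least stable gene and recomputing, until the most stable pair of genes remains.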

  17. Reference genes for gene expression studies in wheat flag leaves grown under different farming conditions

    Cordeiro Raposo Fernando

    2011-09-01

    Full Text Available Abstract Background Internal control genes with highly uniform expression throughout the experimental conditions are required for accurate gene expression analysis, as no universal reference gene exists. In this study, the expression stability of 24 candidate genes from Triticum aestivum cv. Cubus flag leaves grown under organic and conventional farming systems was evaluated in two locations in order to select suitable genes that can be used for normalization of real-time quantitative reverse-transcription PCR (RT-qPCR) reactions. The genes were selected among the most commonly used reference genes as well as genes encoding proteins involved in several metabolic pathways. Findings Individual genes displayed different expression rates across all samples assayed. Applying geNorm, a set of three potential reference genes was found suitable for normalization of RT-qPCR reactions in winter wheat flag leaves cv. Cubus: TaFNRII (ferredoxin-NADP(H) oxidoreductase; AJ457980.1), ACT2 (actin 2; TC234027), and rrn26 (a putative homologue of the RNA 26S gene; AL827977.1). In addition to these three genes, which were also top-ranked by NormFinder, two extra genes, CYP18-2 (cyclophilin A; AY456122.1) and TaWIN1 (14-3-3-like protein; AB042193), were most consistently stably expressed. Furthermore, we showed that TaFNRII, ACT2, and CYP18-2 are suitable for gene expression normalization in two other winter wheat varieties (Tommi and Centenaire) grown under three treatments (organic, conventional, and no nitrogen) and in a different environment than the one tested with cv. Cubus. Conclusions This study provides a new set of reference genes which should improve the accuracy of gene expression analyses that use wheat flag leaves, such as those related to the improvement of nitrogen use efficiency for cereal production.

  18. Selection of reference genes for expression analysis in the entomophthoralean fungus Pandora neoaphidis

    Eilenberg, Jørgen; Jensen, Annette Bruun; Chen, Chun

    2016-01-01

    The selection of suitable reference genes is crucial for accurate quantification of gene expression and can add to our understanding of host–pathogen interactions. To identify suitable reference genes in Pandora neoaphidis, an obligate aphid pathogenic fungus, the expression of three traditional

  19. An atlas of gene expression and gene co-regulation in the human retina.

    Pinelli, Michele; Carissimo, Annamaria; Cutillo, Luisa; Lai, Ching-Hung; Mutarelli, Margherita; Moretti, Maria Nicoletta; Singh, Marwah Veer; Karali, Marianthi; Carrella, Diego; Pizzo, Mariateresa; Russo, Francesco; Ferrari, Stefano; Ponzin, Diego; Angelini, Claudia; Banfi, Sandro; di Bernardo, Diego

    2016-07-01

    The human retina is a specialized tissue involved in light stimulus transduction. Despite its unique biology, an accurate reference transcriptome is still missing. Here, we performed gene expression analysis (RNA-seq) of 50 retinal samples from non-visually impaired post-mortem donors. We identified novel transcripts with high confidence (Observed Transcriptome (ObsT)) and quantified the expression level of known transcripts (Reference Transcriptome (RefT)). The ObsT included 77 623 transcripts (23 960 genes) covering 137 Mb (35 Mb new transcribed genome). Most of the transcripts (92%) were multi-exonic: 81% with known isoforms, 16% with new isoforms and 3% belonging to new genes. The RefT included 13 792 genes across 94 521 known transcripts. Mitochondrial genes were among the most highly expressed, accounting for about 10% of the reads. Of all the protein-coding genes in Gencode, 65% are expressed in the retina. We exploited inter-individual variability in gene expression to infer a gene co-expression network and to identify genes specifically expressed in photoreceptor cells. We experimentally validated the photoreceptors localization of three genes in human retina that had not been previously reported. RNA-seq data and the gene co-expression network are available online (http://retina.tigem.it). PMID:27235414

  20. Mathematical Models of Gene Regulation

    Mackey, Michael C.

    2004-03-01

    This talk will focus on examples of mathematical models for the regulation of repressible operons (e.g. the tryptophan operon), inducible operons (e.g. the lactose operon), and the lysis/lysogeny switch in phage λ. These "simple" gene regulatory elements can display characteristics experimentally of rapid response to perturbations and bistability, and biologically accurate mathematical models capture these aspects of the dynamics. The models, if realistic, are always nonlinear and contain significant time delays due to transcriptional and translational delays that pose substantial problems for the analysis of the possible ranges of dynamics.
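
    A minimal model of the kind described might look as follows (a generic sketch with assumed notation, not the specific models of the talk): for a repressible operon with mRNA level m and effector protein level p,

```latex
\frac{dm}{dt} = \beta_m\, f\bigl(p(t-\tau_m)\bigr) - \gamma_m m, \qquad
\frac{dp}{dt} = \beta_p\, m(t-\tau_p) - \gamma_p p, \qquad
f(p) = \frac{1}{1 + (p/K)^n},
```

    where τ_m and τ_p are the transcriptional and translational delays; the Hill-type nonlinearity f combined with the delays is what produces the bistability and the rapid responses to perturbations referred to above.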

  1. Biased Gene Fractionation and Dominant Gene Expression among the Subgenomes of Brassica rapa

    Cheng, F.; Wu, J.; Fang, L.; Sun, S.; Liu, B.; Lin, K.; Bonnema, A.B.; Wang, Xiaowu

    2012-01-01

    Polyploidization, both ancient and recent, is frequent among plants. A "two-step theory" was proposed to explain the meso-triplication of the Brassica "A" genome: Brassica rapa. By accurately partitioning this genome, we observed that genes in the less fractioned subgenome (LF) were dominantly expressed over the genes in the more fractioned subgenomes (MF1 and MF2).

  2. Consistency of VDJ Rearrangement and Substitution Parameters Enables Accurate B Cell Receptor Sequence Annotation.

    Duncan K Ralph

    2016-01-01

    Full Text Available VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM "factorization" strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM.
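
    What "parameter-rich per-allele-per-position mutation probabilities" means in practice can be shown in a few lines. The toy sketch below (not partis code; reads are assumed to be pre-aligned to their germline allele, and the allele name is illustrative) replaces a single shared mutation rate with one empirical frequency per germline position per allele:

```python
from collections import defaultdict

def per_position_mutation_freqs(reads_by_allele, germline_by_allele):
    """Estimate per-allele-per-position mutation probabilities by
    counting mismatches, replacing a single shared mutation rate with
    one empirical frequency per germline position (toy sketch).
    """
    freqs = {}
    for allele, reads in reads_by_allele.items():
        germline = germline_by_allele[allele]
        counts = defaultdict(int)
        for read in reads:
            for pos, (r, g) in enumerate(zip(read, germline)):
                counts[pos] += (r != g)
        freqs[allele] = [counts[p] / len(reads) for p in range(len(germline))]
    return freqs

germline = {"IGHV1-2*02": "CAGGTG"}                  # hypothetical stub
reads = {"IGHV1-2*02": ["CAGGTG", "CAGATG", "CTGATG"]}
print(per_position_mutation_freqs(reads, germline))
```

    In the full annotation model, such per-position frequencies feed the HMM's emission probabilities, which is where the gain over a uniform-rate assumption comes from.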

  3. Genometa--a fast and accurate classifier for short metagenomic shotgun reads.

    Colin F Davenport

    Full Text Available Metagenomic studies use high-throughput sequence data to investigate microbial communities in situ. However, considerable challenges remain in the analysis of these data, particularly with regard to speed and reliable analysis of microbial species as opposed to higher level taxa such as phyla. We here present Genometa, a computationally undemanding graphical user interface program that enables identification of bacterial species and gene content from datasets generated by inexpensive high-throughput short read sequencing technologies. Our approach was first verified on two simulated metagenomic short read datasets, detecting 100% and 94% of the bacterial species included with few false positives or false negatives. Subsequent comparative benchmarking analysis against three popular metagenomic algorithms on an Illumina human gut dataset revealed Genometa to attribute the most reads to bacteria at species level (i.e., including all strains of that species) and demonstrate similar or better accuracy than the other programs. Lastly, speed was demonstrated to be many times that of BLAST due to the use of modern short read aligners. Our method is highly accurate if bacteria in the sample are represented by genomes in the reference sequence but cannot find species absent from the reference. This method is one of the most user-friendly and resource efficient approaches and is thus feasible for rapidly analysing millions of short reads on a personal computer. The Genometa program, a step by step tutorial and Java source code are freely available from http://genomics1.mh-hannover.de/genometa/ and on http://code.google.com/p/genometa/. This program has been tested on Ubuntu Linux and Windows XP/7.

  4. A general pairwise interaction model provides an accurate description of in vivo transcription factor binding sites.

    Marc Santolini

    Full Text Available The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIPseq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting
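
    The model class being described is the standard maximum-entropy pairwise form. Schematically, for a binding site s1…sL with si ∈ {A, C, G, T},

```latex
P(s_1,\dots,s_L) \;=\; \frac{1}{Z}\,
\exp\!\Bigl(\,\sum_{i=1}^{L} h_i(s_i) \;+\; \sum_{i<j} J_{ij}(s_i,s_j)\Bigr),
```

    where Z normalizes the distribution; setting every pairwise coupling J_{ij} to zero recovers the PWM model, so the fields h_i and couplings J_{ij} quantify exactly the inter-position correlations that the PWM ignores.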

  5. Selection of reference genes in canine uterine tissues.

    Du, M; Wang, X; Yue, Y W; Zhou, P Y; Yao, W; Li, X; Ding, X B; Liu, X F; Guo, H; Ma, W Z

    2016-01-01

    Real-time quantitative polymerase chain reaction (RT-qPCR) is usually employed in gene expression studies in veterinary research, including in studies on canine pyometra. Canine pyometra is a common clinical disease in bitches. When using RT-qPCR, internal standards, such as reference genes, are necessary to investigate relative gene expression by quantitative measurements of mRNA levels. The aim of this study was to evaluate the stability of reference genes and select reference genes suitable for canine pyometra studies. We collected 24 bitch uterine tissue samples, including five healthy and 19 pyometra-infected samples. These were used to screen seven candidate genes (18SrRNA, ACTB, B2M, GAPDH, HPRT, RPL13A, and YWHAZ) for the best reference genes. The method of KH Sadek and the GeNorm, NormFinder, BestKeeper, and RefFinder software were used to evaluate the stability of gene expression in both pyometra and healthy uterine samples. The results showed that the expression stability of the candidate genes in pyometra and healthy tissues differed. We showed that YWHAZ was the best reference gene, which could be used as an accurate internal control gene in canine pyometra studies. To further validate this recommendation, the expression profile of a target gene, the insulin-like growth factor 1 receptor gene (IGF1R), was investigated. We found that the expression of IGF1R was significantly altered when different reference genes were used. All reference genes identified in the present study will enable more accurate normalization of gene expression data in both pyometra-infected and healthy uterine tissues. PMID:27323194

  6. Genes and Psoriasis

    Genes hold the key to understanding ... is responsible for causing psoriatic disease. How do genes work? Genes control everything from height to eye ...

  7. Genes and Hearing Loss

    ... mutation may only have dystopia canthorum. How Do Genes Work? Genes are a road map for the ...

  8. A multiple regression analysis for accurate background subtraction in 99Tcm-DTPA renography

    A technique for accurate background subtraction in 99Tcm-DTPA renography is described. The technique is based on a multiple regression analysis of the renal curves and separate heart and soft tissue curves which together represent background activity. It is compared, in over 100 renograms, with a previously described linear regression technique. Results show that the method provides accurate background subtraction, even in very poorly functioning kidneys, thus enabling relative renal filtration and excretion to be accurately estimated. (author)
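
    A minimal version of such a background fit is easy to state. In the sketch below (curve shapes, coefficients, and the fitting interval are illustrative assumptions, not the paper's protocol), the renal ROI curve is modeled as the kidney signal plus a weighted sum of the heart and soft-tissue curves, the weights are estimated by least squares over an early interval assumed to contain background only, and the fitted background is subtracted:

```python
import numpy as np

def subtract_background(renal, heart, tissue, fit_slice):
    """Multiple-regression background subtraction (sketch).

    Over an interval where the renal curve is assumed to be dominated
    by background (fit_slice), solve
        renal ~ a * heart + b * tissue
    by least squares, then subtract the fitted background everywhere.
    """
    A = np.column_stack([heart[fit_slice], tissue[fit_slice]])
    coef, *_ = np.linalg.lstsq(A, renal[fit_slice], rcond=None)
    background = coef[0] * heart + coef[1] * tissue
    return renal - background, coef

t = np.linspace(0, 20, 200)                            # minutes
heart = np.exp(-0.2 * t)                               # vascular curve
tissue = 0.5 * np.exp(-0.1 * t)                        # soft-tissue curve
kidney = np.clip(t - 2, 0, None) * np.exp(-0.15 * t)   # renal uptake
renal = kidney + 0.6 * heart + 0.8 * tissue
corrected, coef = subtract_background(renal, heart, tissue, slice(0, 15))
print(coef)   # should recover roughly (0.6, 0.8)
```

    Recovering the injected weights confirms the fit; on real renograms the quality of the subtraction rests on how well the chosen interval isolates background activity.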

  9. On accurate computations of bound state properties in three- and four-electron atomic systems

    Frolov, Alexei M

    2016-01-01

    Results of accurate computations of bound states in three- and four-electron atomic systems are discussed. Bound state properties of the four-electron lithium ion Li⁻ in its ground 2²S state are determined from the results of accurate, variational computations. We also consider a closely related problem of accurate numerical evaluation of the half-life of the beryllium-7 isotope. This problem is of paramount importance for modern radiochemistry.

  10. Bistable switching asymptotics for the self regulating gene

    A simple stochastic model of a self-regulating gene that displays bistable switching is analyzed. While on, a gene transcribes mRNA at a constant rate. Transcription factors can bind to the DNA and affect the gene's transcription rate. Before an mRNA is degraded, it synthesizes protein, which in turn regulates gene activity by influencing the activity of transcription factors. Protein is slowly removed from the system through degradation. Depending on how the protein regulates gene activity, the protein concentration can exhibit noise-induced bistable switching. An asymptotic approximation of the mean switching rate is derived that includes the pre-exponential factor, which improves upon a previously reported logarithmically accurate approximation. With the improved accuracy, a uniformly accurate approximation of the stationary probability density, describing the gene, mRNA copy number, and protein concentration is also obtained. (paper)
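
    The improvement reported here is from a logarithmically accurate rate to one with the correct prefactor. Schematically (notation assumed), the mean switching rate between the two metastable states takes the large-deviation form

```latex
r \;\simeq\; \Lambda\, e^{-\Delta\Phi/\varepsilon},
```

    where ε is the noise strength, ΔΦ the quasi-potential barrier separating the states, and Λ the pre-exponential factor; a logarithmically accurate result determines only ΔΦ, while the asymptotics described above also supply Λ.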

  11. Predicting gene ontology from a global meta-analysis of 1-color microarray experiments

    Wren Jonathan D; Giles Cory B; Dozmorov Mikhail G

    2011-01-01

    Abstract Background Global meta-analysis (GMA) of microarray data to identify genes with highly similar co-expression profiles is emerging as an accurate method to predict gene function and phenotype, even in the absence of published data on the gene(s) being analyzed. With a third of human genes still uncharacterized, this approach is a promising way to direct experiments and rapidly understand the biological roles of genes. To predict function for genes of interest, GMA relies on a guilt-by-association approach.

  12. ASPic-GeneID: A Lightweight Pipeline for Gene Prediction and Alternative Isoforms Detection

    Alioto, Tyler; Picardi, Ernesto; Guigó, Roderic

    2013-01-01

    New genomes are being sequenced at an increasingly rapid rate, far outpacing the rate at which manual gene annotation can be performed. Automated genome annotation is thus necessitated by this growth in genome projects; however, full-fledged annotation systems are usually home-grown and customized to a particular genome. There is thus a renewed need for accurate ab initio gene prediction methods. However, it is apparent that fully ab initio methods fall short of the required level of sensitivity and specificity for a quality annotation. Evidence in the form of expressed sequences gives the single biggest improvement in accuracy when used to inform gene predictions. Here, we present a lightweight pipeline for first-pass gene prediction on newly sequenced genomes. The two main components are ASPic, a program that derives highly accurate, albeit not necessarily complete, EST-based transcript annotations from EST alignments, and GeneID, a standard gene prediction program, which we have modified to take intron annotations as evidence. The introns output by ASPic's CDS predictions are given to GeneID to constrain the exon-chaining process and produce predictions consistent with the underlying EST alignments. The pipeline was successfully tested on the entire C. elegans genome and the 44 ENCODE human pilot regions. PMID:24308000

  13. Biased gene fractionation and dominant gene expression among the subgenomes of Brassica rapa.

    Feng Cheng

    Full Text Available Polyploidization, both ancient and recent, is frequent among plants. A "two-step theory" was proposed to explain the meso-triplication of the Brassica "A" genome: Brassica rapa. By accurately partitioning this genome, we observed that genes in the less fractioned subgenome (LF) were dominantly expressed over the genes in the more fractioned subgenomes (MFs: MF1 and MF2), while the genes in MF1 were slightly dominantly expressed over the genes in MF2. The results indicated that the dominantly expressed genes tended to be resistant against gene fractionation. By re-sequencing two B. rapa accessions, a vegetable turnip (VT117) and a Rapid Cycling line (L144), we found that genes in LF had fewer non-synonymous or frameshift mutations than genes in MFs; however, mutation rates were not significantly different between MF1 and MF2. The differences in gene expression patterns and on-going gene death among the three subgenomes suggest that "two-step" genome triplication and differential subgenome methylation played important roles in the genome evolution of B. rapa.

  14. LABEL: fast and accurate lineage assignment with assessment of H5N1 and H9N2 influenza A hemagglutinins.

    Samuel S Shepard

    Full Text Available The evolutionary classification of influenza genes into lineages is a first step in understanding their molecular epidemiology and can inform the subsequent implementation of control measures. We introduce a novel approach called Lineage Assignment By Extended Learning (LABEL) to rapidly determine cladistic information for any number of genes without the need for time-consuming sequence alignment, phylogenetic tree construction, or manual annotation. Instead, LABEL relies on hidden Markov model profiles and support vector machine training to hierarchically classify gene sequences by their similarity to pre-defined lineages. We assessed LABEL by analyzing the annotated hemagglutinin genes of highly pathogenic (H5N1) and low pathogenicity (H9N2) avian influenza A viruses. Using the WHO/FAO/OIE H5N1 evolution working group nomenclature, the LABEL pipeline quickly and accurately identified the H5 lineages of uncharacterized sequences. Moreover, we developed an updated clade nomenclature for the H9 hemagglutinin gene and show a similarly fast and reliable phylogenetic assessment with LABEL. While this study was focused on hemagglutinin sequences, LABEL could be applied to the analysis of any gene and shows great potential to guide molecular epidemiology activities, accelerate database annotation, and provide a data sorting tool for other large-scale bioinformatic studies.

  15. Identification of transcriptional signals in Encephalitozoon cuniculi widespread among Microsporidia phylum: support for accurate structural genome annotation

    Wincker Patrick

    2009-12-01

    With 5'UTRs being strongly reduced, these signals can be used to ensure the accurate prediction of translation initiation codons for microsporidian genes and to improve microsporidian genome annotation.

  16. An iterative searching and ranking algorithm for prioritising pharmacogenomics genes.

    Xu, Rong; Wang, Quanqiu

    2013-01-01

    Pharmacogenomics (PGx) studies aim to identify genetic variants that may affect drug efficacy and toxicity. Machine-understandable drug-gene relationship knowledge is important for many computational PGx studies and for personalised medicine. A comprehensive and accurate PGx-specific gene lexicon is important for automatic drug-gene relationship extraction from the scientific literature, a rich knowledge source for PGx studies. In this study, we present a bootstrapping learning technique to rank 33,310 human genes with respect to their relevance to drug response. The algorithm uses only one seed PGx gene to iteratively extract and rank co-occurring genes using 20 million MEDLINE abstracts. Our ranking method is able to accurately rank PGx-specific genes highly among all human genes. Compared to randomly ranked genes (precision: 0.032, recall: 0.013, F1: 0.018), the algorithm has achieved significantly better performance (precision: 0.861, recall: 0.548, F1: 0.662) in ranking the top 2.5% of genes. PMID:23428471
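
    The bootstrapping loop can be sketched concretely (a toy re-implementation over co-mention sets, not the authors' MEDLINE pipeline; the gene symbols below are illustrative): starting from the single seed, each round scores genes by their co-occurrence with the currently ranked set and then grows that set:

```python
from collections import Counter

def rank_genes(abstract_gene_sets, seed, n_iter=3, expand=5):
    """Iterative co-occurrence ranking from a single seed gene (sketch).

    abstract_gene_sets: list of sets, the genes mentioned per abstract.
    Each round scores every unranked gene by how strongly it co-occurs
    with the current ranked set, accumulates the scores, and promotes
    the top-scoring genes into the ranked set for the next round.
    """
    ranked = {seed}
    total = Counter()
    for _ in range(n_iter):
        round_scores = Counter()
        for genes in abstract_gene_sets:
            overlap = len(genes & ranked)
            if overlap:
                for g in genes - ranked:
                    round_scores[g] += overlap
        total.update(round_scores)
        ranked |= {g for g, _ in round_scores.most_common(expand)}
    return total.most_common()

abstracts = [{"CYP2D6", "CYP2C19"}, {"CYP2D6", "VKORC1", "CYP2C9"},
             {"VKORC1", "CYP2C9"}, {"TP53", "BRCA1"}]
print(rank_genes(abstracts, seed="CYP2D6"))
```

    In the toy corpus the pharmacogenomics genes surface, while TP53 and BRCA1, which never co-occur with the seed's neighborhood, stay unranked.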

  17. Validation of endogenous control reference genes for normalizing gene expression studies in endometrial carcinoma.

    Ayakannu, Thangesweran; Taylor, Anthony H; Willets, Jonathon M; Brown, Laurence; Lambert, David G; McDonald, John; Davies, Quentin; Moss, Esther L; Konje, Justin C

    2015-09-01

    Real-time quantitative RT-PCR (qRT-PCR) is a powerful technique used for the relative quantification of target genes, using reference (housekeeping) genes for normalization to ensure the generation of accurate and robust data. A systematic examination of the suitability of endogenous reference genes for gene expression studies in endometrial cancer tissues is absent. The aims of this study were therefore to evaluate the suitability of the thirty-two possible reference genes from a TaqMan(®) array panel as internal control genes. The mathematical software package geNorm (qBasePLUS) identified Pumilio homolog 1 (Drosophila) (PUM1), ubiquitin C (UBC), phosphoglycerate kinase (PGK1), mitochondrial ribosomal protein L19 (MRPL19) and peptidylpropyl isomerase A (cyclophilin A) (PPIA) as the best reference gene combination, whilst NormFinder identified MRPL19 as the best single reference gene, with importin 8 (IPO8) and PPIA being the best combination of two reference genes. BestKeeper ranked MRPL19 as the most stably expressed gene. In addition, the study was validated by examining the relative expression of a test gene, which encodes the cannabinoid receptor 1 (CB1). A significant difference in CB1 mRNA expression between malignant and normal endometrium using MRPL19, PPIA, and IPO8 in combination was observed. The use of MRPL19, IPO8 and PPIA was identified as the best reference gene combination for the normalization of gene expression levels in endometrial carcinoma. This study demonstrates that the arbitrary selection of endogenous control reference genes for normalization in qRT-PCR studies of endometrial carcinoma, without validation, risks the production of inaccurate data and should therefore be discouraged. PMID:26124453

  18. Validation of a method for accurate and highly reproducible quantification of brain dopamine transporter SPECT studies

    Jensen, Peter S; Ziebell, Morten; Skouboe, Glenna; Khalid, Usman; de Nijs, Robin; Thomsen, Gerda; Knudsen, Gitte M; Svarer, Claus

    2011-01-01

    In nuclear medicine brain imaging, it is important to delineate regions of interest (ROIs) so that the outcome is both accurate and reproducible. The purpose of this study was to validate a new time-saving algorithm (DATquan) for accurate and reproducible quantification of the striatal dopamine transporter.

  19. Prediction of Tumor Outcome Based on Gene Expression Data

    Liu Juan; Hitoshi Iba

    2004-01-01

    Gene expression microarray data can be used to classify tumor types. We proposed a new procedure to classify human tumor samples based on microarray gene expressions by using a hybrid supervised learning method called MOEA+WV (Multi-Objective Evolutionary Algorithm + Weighted Voting). MOEA is used to search for relatively few subsets of informative genes from the high-dimensional gene space, and WV is used as a classification tool. This new method has been applied to predict the subtypes of lymphoma and the outcomes of medulloblastoma. The results are relatively accurate and meaningful compared to those from other methods.
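
    The WV half of the hybrid is the classical signal-to-noise weighted voting scheme; the sketch below (a generic Golub-style implementation on synthetic data, with the MOEA gene-selection step omitted) gives each gene a weight and a mid-point decision boundary, and classifies by the sign of the summed votes:

```python
import numpy as np

def weighted_voting_fit(X, y):
    """Golub-style weighted voting (the WV half of MOEA+WV, sketch).

    X: (n_samples, n_genes) expression matrix; y: binary labels (0/1).
    Each gene gets a signal-to-noise weight and a decision boundary
    halfway between the two class means.
    """
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    sd0, sd1 = X[y == 0].std(0, ddof=1), X[y == 1].std(0, ddof=1)
    weights = (mu1 - mu0) / (sd0 + sd1)
    boundaries = (mu0 + mu1) / 2
    return weights, boundaries

def weighted_voting_predict(x, weights, boundaries):
    votes = weights * (x - boundaries)
    return int(votes.sum() > 0)          # 1 if the class-1 votes win

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(1, 1, (20, 50))])
y = np.array([0] * 20 + [1] * 20)
w, b = weighted_voting_fit(X, y)
print(weighted_voting_predict(X[0], w, b), weighted_voting_predict(X[-1], w, b))
```

    In the full method, MOEA's role is to pick the small informative gene subset that this voter is then trained on.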

  20. Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture.

    Gao, Zhiquan; Yu, Yao; Zhou, Yu; Du, Sidan

    2015-01-01

    Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally constant, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain. PMID:26402681

  1. Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture

    Zhiquan Gao

    2015-09-01

    Full Text Available Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally constant, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain.

  2. Accurate location estimation of moving object with energy constraint & adaptive update algorithms to save data

    Semwal, Vijay Bhaskar; Bhaskar, Vinay S; Sati, Meenakshi

    2011-01-01

    In research paper "Accurate estimation of the target location of object with energy constraint & Adaptive Update Algorithms to Save Data" one of the central issues in sensor networks is track the location, of moving object which have overhead of saving data, an accurate estimation of the target location of object with energy constraint .We do not have any mechanism which control and maintain data .The wireless communication bandwidth is also very limited. Some field which is using this technique are flood and typhoon detection, forest fire detection, temperature and humidity and ones we have these information use these information back to a central air conditioning and ventilation system. In this research paper, we propose protocol based on the prediction and adaptive based algorithm which is using less sensor node reduced by an accurate estimation of the target location. we are using minimum three sensor node to get the accurate position .We can extend it upto four or five to find more accurate location ...

  3. Bypassing hazard of housekeeping genes: their evaluation in rat granule neurons treated with cerebrospinal fluid of multiple sclerosis subjects

    Deepali Mathur; Gerardo Lopez-Rodas; Bonaventura Casanova; Francisco Coret-Ferrer

    2015-01-01

    Gene expression studies employing real-time PCR have become an intrinsic part of biomedical research. Appropriate normalization of target gene transcript(s) based on stably expressed housekeeping genes is crucial in individual experimental conditions to obtain accurate results. In multiple sclerosis (MS), several gene expression studies have been undertaken; however, the suitability of housekeeping genes to express stably in this disease has not yet been explored. Recent research suggests that their...

  4. Validation of reference genes for gene expression studies in the emerald ash borer (Agrilus planipennis)

    Swapna Priya Rajarapu; Praveen Mamidala; Omprakash Mittapalli

    2012-01-01

    The emerald ash borer (EAB, Agrilus planipennis Fairmaire), an exotic invasive insect pest, has killed millions of ash trees (Fraxinus spp.) across North America and threatens billions more. We validated six A. planipennis reference genes (actin, ACT; beta tubulin, β-TUB; glyceraldehyde-3-phosphate dehydrogenase, GAPDH; ribosomal protein, RPL7; translation elongation factor 1α, TEF-1α; and ubiquitin, UBQ) using geNorm, NormFinder and BestKeeper for accurate determination of target messenger RNA levels in gene expression studies. The stability of the six reference genes was evaluated in different larval tissues, developmental stages and two treatments of A. planipennis using quantitative real-time polymerase chain reaction. Although no consistent ranking was observed among the reference genes across the samples, the overall analysis revealed TEF-1α as the most stable reference gene. GAPDH and ACT showed the least stability across all samples studied. We conclude that TEF-1α is the most appropriate reference gene for gene expression studies in A. planipennis. The results obtained can be applied to transcript profiling in other invasive insect pests. Further, these validated reference genes could also serve as the basis for selecting candidate reference genes in any given insect system post-validation.
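    Of the three tools named, geNorm is the easiest to sketch: each candidate's stability value M is the mean standard deviation of its pairwise log-ratios with the other candidates. A minimal Python sketch with simulated expression data (lower M means more stable):

        import numpy as np

        def genorm_m_values(expr, genes):
            """geNorm-style stability: for each gene, average the standard
            deviation of log2 expression ratios against every other
            candidate across samples. `expr` is (samples x genes)."""
            log_expr = np.log2(expr)
            n = log_expr.shape[1]
            m = [np.mean([np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                          for k in range(n) if k != j]) for j in range(n)]
            return dict(zip(genes, m))

        genes = ["TEF-1a", "RPL7", "GAPDH", "ACT"]
        rng = np.random.default_rng(0)
        # Simulated relative quantities: the last two genes vary more.
        expr = np.exp(rng.normal(size=(12, 4)) * np.array([0.1, 0.15, 0.6, 0.7]))
        print(sorted(genorm_m_values(expr, genes).items(), key=lambda kv: kv[1]))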

  5. The Design of Hyper-sphere SVM Based on Incremental Learning

    张曦煌; 须文波

    2006-01-01

    Incremental learning is a method and principle of recognizing and classifying unknown samples starting from known samples, while retaining the ability to keep learning. After analyzing the theoretical basis of the hyper-sphere SVM (HS-SVM), this paper proposes the transductive THS-SVM algorithm, based on Joachims' transductive SVM classification algorithm, and independently proposes the simple self-learning SHS-SVM method. Both THS-SVM and SHS-SVM can continuously learn the information carried by unlabeled samples during training. Experiments show that applying THS-SVM and SHS-SVM to content-based image retrieval is effective.
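    The simple self-learning idea can be illustrated with a generic self-training loop around an SVM. This is a sketch only: scikit-learn's SVC stands in for the hyper-sphere SVM, and the data are synthetic.

        import numpy as np
        from sklearn.svm import SVC

        def self_training_svm(X_lab, y_lab, X_unlab, conf=0.9, rounds=5):
            """Repeatedly train on the labelled pool, then absorb unlabelled
            samples whose predicted class probability exceeds `conf`."""
            for _ in range(rounds):
                clf = SVC(probability=True).fit(X_lab, y_lab)
                if len(X_unlab) == 0:
                    break
                proba = clf.predict_proba(X_unlab)
                confident = proba.max(axis=1) >= conf
                if not confident.any():
                    break
                X_lab = np.vstack([X_lab, X_unlab[confident]])
                y_lab = np.concatenate(
                    [y_lab, clf.classes_[proba.argmax(axis=1)][confident]])
                X_unlab = X_unlab[~confident]
            return clf

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(4, 1, (60, 2))])
        y = np.array([0] * 60 + [1] * 60)
        lab = np.r_[0:10, 60:70]                 # only a few labelled samples
        unlab = np.setdiff1d(np.arange(120), lab)
        print(self_training_svm(X[lab], y[lab], X[unlab]).score(X, y))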

  6. Multiple Derivative Kernel for SVM Based Speaker Verification

    许敏强; 戴蓓蒨; 刘青松; 许东星

    2011-01-01

    A multiple derivative kernel (MDK) based method is proposed, combining the Gaussian mixture model (GMM) and the support vector machine (SVM), and applied to text-independent speaker verification. To combine GMM and SVM, MDK computes multiple derivatives from the speaker feature distribution, which is modeled by a GMM; the derivatives are then taken as the input of the SVM. The framework of the MDK-based SVM method (MDK-SVM) for speaker verification is as follows. First, features are extracted from utterances and compensated using a factor analysis method in the feature domain. Second, these features are used to train the GMM distribution. Third, the multiple derivative kernel is computed from the GMM distribution and used as the input of the SVMs for speaker modeling. Finally, the performance of MDK-SVM is evaluated on the NIST SRE 01 2min-1min dataset. The proposed MDK-SVM system reduces the equal error rate (EER) and the minimum detection cost function (MinDCF) compared with a factor analysis Gaussian mixture model (FAGMM) system, a Fisher kernel SVM system and a Kullback-Leibler divergence based SVM system.
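    A first-order version of the idea (derivatives of the GMM log-likelihood with respect to the component means, fed to an SVM) can be sketched as follows. The data are toy utterances, and the paper's multiple derivative orders and factor-analysis compensation are both omitted.

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.svm import SVC

        def gmm_mean_derivatives(gmm, X):
            """Fisher-score-style feature: mean derivative of the GMM
            log-likelihood w.r.t. each component mean (diagonal covariances),
            concatenated into one fixed-length vector per utterance."""
            resp = gmm.predict_proba(X)             # (frames, components)
            feats = [(resp[:, [k]] * (X - gmm.means_[k]) / gmm.covariances_[k])
                     .mean(axis=0) for k in range(gmm.n_components)]
            return np.concatenate(feats)

        rng = np.random.default_rng(2)
        # Ten toy 'utterances' (frame sequences) per speaker, two speakers.
        utts = [rng.normal(m, 1, (200, 3)) for m in (0.0, 0.5) for _ in range(10)]
        labels = [0] * 10 + [1] * 10
        ubm = GaussianMixture(4, covariance_type="diag", random_state=0)
        ubm.fit(np.vstack(utts))
        X = np.array([gmm_mean_derivatives(ubm, u) for u in utts])
        print(SVC(kernel="linear").fit(X, labels).score(X, labels))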

  7. SVM-Based Algorithm for Waymark Recognition of Mobile Robots

    易东; 黄玉清

    2009-01-01

    A waymark recognition algorithm for mobile robots based on the support vector machine (SVM) is proposed; it uses an SVM classifier to recognize a large set of traffic signs. The method shows learning ability and fault tolerance under varying circumstances such as different light intensities, image distortion, shadowing and long shooting distances. Experimental results show that the waymark classification is efficient and robust in complicated outdoor environments.

  8. A Study on SVM Based on the Weighted Elitist Teaching-Learning-Based Optimization and Application in the Fault Diagnosis of Chemical Process

    Cao Junxiang

    2015-01-01

    Full Text Available Teaching-Learning-Based Optimization (TLBO) is a new swarm intelligence optimization algorithm that simulates the class learning process. To address such problems of the traditional TLBO as low optimization efficiency and poor stability, this paper proposes an improved TLBO algorithm, mainly by introducing an elitist strategy into TLBO and adopting different inertia-weight decreasing strategies for elite and ordinary individuals in the teacher stage and the student stage. The validity of the improved TLBO is verified by optimizing several typical test functions, and the SVM optimized by the weighted elitist TLBO is applied to the diagnosis and classification of common failure data from the TE chemical process. Compared with SVMs combined with other traditional optimization methods, the SVM optimized by the weighted elitist TLBO achieves improved accuracy of fault diagnosis and classification.
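    For reference, the plain TLBO loop being improved looks roughly like this Python sketch. The paper's elitism and inertia-weight schedules are omitted, and function and parameter names are illustrative.

        import numpy as np

        def tlbo(f, bounds, pop=30, iters=100, seed=0):
            """Teacher phase + learner phase of basic TLBO, minimizing f."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            X = rng.uniform(lo, hi, (pop, len(lo)))
            fit = np.apply_along_axis(f, 1, X)
            for _ in range(iters):
                # Teacher phase: pull the class toward the best learner.
                teacher = X[fit.argmin()]
                Tf = rng.integers(1, 3)            # teaching factor: 1 or 2
                cand = np.clip(X + rng.random(X.shape)
                               * (teacher - Tf * X.mean(axis=0)), lo, hi)
                cf = np.apply_along_axis(f, 1, cand)
                better = cf < fit
                X[better], fit[better] = cand[better], cf[better]
                # Learner phase: learn from a random classmate.
                j = rng.permutation(pop)
                step = np.where((fit < fit[j])[:, None], X - X[j], X[j] - X)
                cand = np.clip(X + rng.random(X.shape) * step, lo, hi)
                cf = np.apply_along_axis(f, 1, cand)
                better = cf < fit
                X[better], fit[better] = cand[better], cf[better]
            return X[fit.argmin()], fit.min()

        best_x, best_f = tlbo(lambda x: np.sum(x ** 2), [(-5, 5)] * 4)
        print(best_x, best_f)                      # approaches the origin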

  9. A SVM-Based Multi-Resolution Procedure for the Estimation of the DOAS of Interfering Signals in a Communication System

    Donelli, Massimo; Azaro, Renzo; Lizzi, Leonardo; Viani, Federico; Massa, Andrea

    2011-01-01

    In this work, the use of a planar antenna system for the estimation of the directions of arrival (DOAs) of multiple signals impinging on the receiver has been considered. Towards this end, an efficient multi-resolution method based on an SVM classifier is proposed for determining a probabilistic map of the DOAs of the unknown interfering signals. Numerical results dealing with multiple-interferer scenarios in noisy environments are provided in order to assess the feasibility as well as the ca...

  10. SVM-based Trusted Dynamic Measurement of Software Behavior

    郝瑞; 彭新光; 修磊; 刘晨华

    2013-01-01

    The trust chain of the Trusted Computing Group (TCG) cannot guarantee that software remains trustworthy at run time. To address this problem, we extend the chain and propose an SVM-based trusted dynamic measurement agency for software behavior. After the TCG chain hands control to the operating system, the integrity of the measurement agency continues to be measured, and the agency then evaluates the platform's application software. Trusted proof of dynamic software behavior is realized by monitoring the behavior traces of running software. Experimental analysis shows that the approach detects anomalies in running software efficiently and in a short time, ensuring that platform software runs trustworthily.
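    One common way to realize such behavior-trace measurement is to encode system-call traces as n-gram counts and fit a one-class SVM on known-good runs. The sketch below uses that generic recipe with made-up traces; the paper's actual feature set and SVM formulation may differ.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.svm import OneClassSVM

        normal_traces = [
            "open read read write close",
            "open read write write close",
            "open read write close",
        ] * 10
        suspect = ["open exec socket send close"]

        # Call-bigram counts as features; outliers (-1) are flagged untrusted.
        vec = CountVectorizer(ngram_range=(2, 2), token_pattern=r"\S+")
        X = vec.fit_transform(normal_traces).toarray()
        ocsvm = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(X)
        print(ocsvm.predict(vec.transform(suspect).toarray()))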

  11. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation.

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-04-01

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space
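    The birth-death example can be reproduced in a few lines: truncate the state space at a maximum copy number, impose a reflecting boundary, and read the truncation error off the boundary probabilities. A minimal Python sketch follows; it shows the generic truncated-dCME computation, not the paper's MEG machinery.

        import numpy as np

        def birth_death_steady_state(k_birth, k_death, n_max):
            """Steady state of the birth-death dCME truncated at n_max with a
            reflecting upper boundary; dp/dt = A p, columns index source states."""
            n = n_max + 1
            A = np.zeros((n, n))
            for i in range(n):
                if i < n_max:              # birth i -> i+1 (blocked at n_max)
                    A[i + 1, i] += k_birth
                    A[i, i] -= k_birth
                if i > 0:                  # death i -> i-1, rate i * k_death
                    A[i - 1, i] += k_death * i
                    A[i, i] -= k_death * i
            # Solve A p = 0 with normalization by replacing one balance equation.
            A[-1, :] = 1.0
            b = np.zeros(n)
            b[-1] = 1.0
            return np.linalg.solve(A, b)

        p = birth_death_steady_state(k_birth=5.0, k_death=1.0, n_max=30)
        print(p[-3:])      # small boundary mass => small truncation error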

  12. Maximizing biomarker discovery by minimizing gene signatures

    Chang Chang

    2011-12-01

    Full Text Available Abstract Background The use of gene signatures can potentially be of considerable value in the field of clinical diagnosis. However, gene signatures defined with different methods can vary considerably even when applied to the same disease and the same endpoint. Previous studies have shown that the correct selection of subsets of genes from microarray data is key for the accurate classification of disease phenotypes, and a number of methods have been proposed for this purpose. However, these methods refine the subsets by considering each feature in isolation, and they do not confirm the association between the genes identified in each gene signature and the phenotype of the disease. We propose an innovative new method termed Minimize Feature's Size (MFS), based on multiple-level similarity analyses and the association between genes and disease, for breast cancer endpoints, comparing classifier models generated in the second phase of the MicroArray Quality Control project (MAQC-II) and aiming to develop effective meta-analysis strategies that transform the MAQC-II signatures into a robust and reliable set of biomarkers for clinical applications. Results We analyzed the similarity of the multiple gene signatures within an endpoint and between the two breast cancer endpoints at the probe and gene levels. The results indicate that disease-related genes can be preferentially selected as components of a gene signature, and that the gene signatures for the two endpoints could be interchangeable. The minimized signatures were built at the probe level by using MFS for each endpoint. By applying this approach, we generated much smaller gene signatures with predictive power similar to that of the gene signatures from MAQC-II. Conclusions Our results indicate that gene signatures of both large and small sizes could perform equally well in clinical applications. Besides, consistency and biological significance can be detected among different gene signatures, reflecting the

  13. Identification and Validation of Reference Genes and Their Impact on Normalized Gene Expression Studies across Cultivated and Wild Cicer Species.

    Dumbala Srinivas Reddy

    Full Text Available Quantitative Real-Time PCR (qPCR) is a preferred and reliable method for accurate quantification of gene expression to understand precise gene functions. A total of 25 candidate reference genes, including traditional and new generation reference genes, were selected and evaluated in a diverse set of chickpea samples. The samples used in this study included nine chickpea genotypes (Cicer spp.) comprising cultivated and wild species, six abiotic stress treatments (drought, salinity, high vapor pressure deficit, abscisic acid, cold and heat shock), and five diverse tissues (leaf, root, flower, seedlings and seed). The geNorm, NormFinder and RefFinder algorithms used to identify stably expressed genes in four sample sets revealed stable expression of UCP and G6PD genes across genotypes, while TIP41 and CAC were highly stable under abiotic stress conditions. While PP2A and ABCT genes were ranked as best for different tissues, ABCT, UCP and CAC were most stable across all samples. This study demonstrated the usefulness of new generation reference genes for more accurate qPCR based gene expression quantification in cultivated as well as wild chickpea species. Validation of the best reference genes was carried out by studying their impact on normalization of the aquaporin genes PIP1;4 and TIP3;1 in three contrasting chickpea genotypes under high vapor pressure deficit (VPD) treatment. The chickpea TIP3;1 gene was significantly upregulated under high VPD conditions, with higher relative expression in the drought-susceptible genotype, confirming the suitability of the selected reference genes for expression analysis. This is the first comprehensive study on the stability of the new generation reference genes for qPCR studies in chickpea across species, different tissues and abiotic stresses.

  14. Identification and Validation of Reference Genes and Their Impact on Normalized Gene Expression Studies across Cultivated and Wild Cicer Species.

    Reddy, Dumbala Srinivas; Bhatnagar-Mathur, Pooja; Reddy, Palakolanu Sudhakar; Sri Cindhuri, Katamreddy; Sivaji Ganesh, Adusumalli; Sharma, Kiran Kumar

    2016-01-01

    Quantitative Real-Time PCR (qPCR) is a preferred and reliable method for accurate quantification of gene expression to understand precise gene functions. A total of 25 candidate reference genes including traditional and new generation reference genes were selected and evaluated in a diverse set of chickpea samples. The samples used in this study included nine chickpea genotypes (Cicer spp.) comprising of cultivated and wild species, six abiotic stress treatments (drought, salinity, high vapor pressure deficit, abscisic acid, cold and heat shock), and five diverse tissues (leaf, root, flower, seedlings and seed). The geNorm, NormFinder and RefFinder algorithms used to identify stably expressed genes in four sample sets revealed stable expression of UCP and G6PD genes across genotypes, while TIP41 and CAC were highly stable under abiotic stress conditions. While PP2A and ABCT genes were ranked as best for different tissues, ABCT, UCP and CAC were most stable across all samples. This study demonstrated the usefulness of new generation reference genes for more accurate qPCR based gene expression quantification in cultivated as well as wild chickpea species. Validation of the best reference genes was carried out by studying their impact on normalization of aquaporin genes PIP1;4 and TIP3;1, in three contrasting chickpea genotypes under high vapor pressure deficit (VPD) treatment. The chickpea TIP3;1 gene was significantly upregulated under high VPD conditions, with higher relative expression in the drought-susceptible genotype, confirming the suitability of the selected reference genes for expression analysis. This is the first comprehensive study on the stability of the new generation reference genes for qPCR studies in chickpea across species, different tissues and abiotic stresses. PMID:26863232

  15. Selection of reliable reference genes for gene expression studies in peach using real-time PCR

    Zhou Jun

    2009-07-01

    GAPDH and ACT also performed poorly and were less stable in our analysis. To achieve accurate comparison of levels of gene expression, two or more reference genes must be used for data normalization. The combinations of TEF2/UBQ10/RP II and TEF2/RP II were suggested for use in all samples and subsets, respectively.

  16. Gene gymnastics

    Vijayachandran, Lakshmi S; Thimiri Govinda Raj, Deepak B; Edelweiss, Evelina; Gupta, Kapil; Maier, Josef; Gordeliy, Valentin; Fitzgerald, Daniel J; Berger, Imre

    2013-01-01

    Most essential activities in eukaryotic cells are catalyzed by large multiprotein assemblies containing up to ten or more interlocking subunits. The vast majority of these protein complexes are not easily accessible for high resolution studies aimed at unlocking their mechanisms, due to their low cellular abundance and high heterogeneity. Recombinant overproduction can resolve this bottleneck and baculovirus expression vector systems (BEVS) have emerged as particularly powerful tools for the provision of eukaryotic multiprotein complexes in high quality and quantity. Recently, synthetic biology approaches have begun to make their mark in improving existing BEVS reagents by de novo design of streamlined transfer plasmids and by engineering the baculovirus genome. Here we present OmniBac, comprising new custom designed reagents that further facilitate the integration of heterologous genes into the baculovirus genome for multiprotein expression. Based on comparative genome analysis and data mining, we herein present a blueprint to custom design and engineer the entire baculovirus genome for optimized production properties using a bottom-up synthetic biology approach. PMID:23328086

  17. Three-stage Stiffly Accurate Runge-Kutta Methods for Stiff Stochastic Differential Equations

    WANG PENG

    2011-01-01

    In this paper we discuss diagonally implicit and semi-implicit methods based on the three-stage stiffly accurate Runge-Kutta methods for solving Stratonovich stochastic differential equations (SDEs). Two methods, a three-stage stiffly accurate semi-implicit (SASI3) method and a three-stage stiffly accurate diagonally implicit (SADI3) method, are constructed in this paper. In particular, the truncated random variable is used in the implicit method. The stability properties and numerical results show the effectiveness of these methods in the pathwise approximation of stiff SDEs.
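    For orientation, implicit stochastic Runge-Kutta stages are commonly made well-defined by truncating the Gaussian increment, a Milstein-style device; the paper's exact truncation may differ. One standard choice, with step size h and a constant l tied to the method's order, is

        \[
        \zeta_h = \begin{cases} \xi, & |\xi| \le A_h,\\[2pt]
        A_h\,\operatorname{sgn}(\xi), & |\xi| > A_h, \end{cases}
        \qquad \xi \sim \mathcal{N}(0,1), \quad A_h = \sqrt{2\,l\,|\ln h|},
        \]

    so that \(\sqrt{h}\,\zeta_h\) replaces the Brownian increment inside the implicit stage equations without destroying the convergence order.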

  18. Identification and Validation of Reference Genes for Quantitative Real-Time PCR in Drosophila suzukii (Diptera: Drosophilidae)

    Zhai, Yifan; Lin, Qingcai; Zhou, Xianhong; Zhang, Xiaoyan; Liu, Tingli; Yu, Yi

    2014-01-01

    To accurately evaluate gene expression levels and obtain more accurate quantitative real-time RT-PCR (qRT-PCR) data, normalization relative to reliable reference gene(s) is required. Drosophila suzukii is an invasive fruit pest native to East Asia that has recently invaded Europe and North America; the stability of its reference genes has not been previously investigated. In this study, ten candidate reference genes (RPL18, RPS3, AK, EF-1β, TBP, NADH, HSP22, GAPDH, Actin, α-Tubulin) were evalu...

  19. Exploring the relationship between fractal features and bacterial essential genes

    Yong-Ming, Yu; Li-Cai, Yang; Qian, Zhou; Lu-Lu, Zhao; Zhi-Ping, Liu

    2016-06-01

    Essential genes are indispensable for the survival of an organism in optimal conditions. Rapid and accurate identification of new essential genes is of great theoretical and practical significance. Exploring features with predictive power is fundamental for this. Here, we calculate six fractal features from primary gene and protein sequences and then explore their relationship with gene essentiality by statistical analysis and machine-learning methods. The models are applied to all the currently available identified genes in 27 bacteria from the database of essential genes (DEG). It is found that the fractal features of essential genes generally differ from those of non-essential genes. The fractal features are used to train two machine-learning classifiers: Naïve Bayes and Random Forest. The area under the curve (AUC) of both classifiers shows that each fractal feature individually discriminates satisfactorily between essential and non-essential genes. Moreover, although significant correlations exist among the fractal features, gene essentiality can also be reliably predicted by various combinations of them. Thus, the fractal features analyzed in our study can be used not only to construct a good essentiality classifier on their own, but also as significant contributors to computational tools for identifying essential genes. Project supported by the Shandong Provincial Natural Science Foundation, China (Grant No. ZR2014FM022).
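    The classifier-evaluation step translates directly into scikit-learn. The sketch below uses simulated fractal-feature vectors; a real run would substitute the six per-gene fractal features computed from DEG sequences.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        # Rows = genes, columns = six fractal features; label 1 = essential.
        X = np.vstack([rng.normal(0.0, 1, (200, 6)), rng.normal(0.6, 1, (200, 6))])
        y = np.array([0] * 200 + [1] * 200)

        for clf in (GaussianNB(),
                    RandomForestClassifier(n_estimators=200, random_state=0)):
            auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
            print(type(clf).__name__, round(auc.mean(), 3))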

  20. Selection of reference genes for expression analysis in the entomophthoralean fungus Pandora neoaphidis

    Chen, Chun; Xie, Tingna; Ye, Sudan; Jensen, Annette Bruun; Eilenberg, Jørgen

    2016-01-01

    The selection of suitable reference genes is crucial for accurate quantification of gene expression and can add to our understanding of host–pathogen interactions. To identify suitable reference genes in Pandora neoaphidis, an obligate aphid pathogenic fungus, the expression of three traditional...

  1. Identification of genes associated with multiple cancers via integrative analysis

    Huang Jian

    2009-11-01

    Full Text Available Abstract Background Advancement in gene profiling techniques makes it possible to measure expressions of thousands of genes and identify genes associated with development and progression of cancer. The identified cancer-associated genes can be used for diagnosis, prognosis prediction, and treatment selection. Most existing cancer microarray studies have focused on the identification of genes associated with a specific type of cancer. Recent biomedical studies suggest that different cancers may share common susceptibility genes. A comprehensive description of the associations between genes and cancers requires identification of not only multiple genes associated with a specific type of cancer but also genes associated with multiple cancers. Results In this article, we propose the Mc.TGD (Multi-cancer Threshold Gradient Descent), an integrative analysis approach capable of analyzing multiple microarray studies on different cancers. The Mc.TGD is the first regularized approach to conduct "two-dimensional" selection of genes with joint effects on cancer development. Simulation studies show that the Mc.TGD can more accurately identify genes associated with multiple cancers than meta-analysis based on "one-dimensional" methods. As a byproduct, identification accuracy of genes associated with only one type of cancer may also be improved. We use the Mc.TGD to analyze seven microarray studies investigating development of seven different types of cancers. We identify one gene associated with six types of cancers and four genes associated with five types of cancers. In addition, we also identify 11, 9, 18, and 17 genes associated with four, three, two, and one types of cancers, respectively. We evaluate prediction performance using a leave-one-out cross-validation approach and find that only 4 (out of 570) subjects cannot be properly predicted. Conclusion The Mc.TGD can identify a short list of genes associated with one or multiple types of cancers. The identified genes

  2. Inferring gene regression networks with model trees

    Aguilar-Ruiz Jesus S

    2010-10-01

    Full Text Available Abstract Background Novel strategies are required in order to handle the huge amount of data produced by microarray technologies. To infer gene regulatory networks, the first step is to find direct regulatory relationships between genes by building so-called gene co-expression networks. They are typically generated using correlation statistics as pairwise similarity measures. Correlation-based methods are very useful for determining whether two genes have a strong global similarity, but they do not detect local similarities. Results We propose model trees as a method to identify gene interaction networks. While correlation-based methods analyze each pair of genes, in our approach we generate a single regression tree for each gene from the remaining genes. Finally, a graph of all the relationships among output and input genes is built, taking into account whether each pair of genes is statistically significant. For this reason we apply a statistical procedure to control the false discovery rate. The performance of our approach, named REGNET, is experimentally tested on two well-known data sets: a Saccharomyces cerevisiae data set and an E. coli data set. First, the biological coherence of the results is tested. Second, the E. coli transcriptional network (in the Regulon database) is used as a control to compare the results to those of a correlation-based method. This experiment shows that REGNET performs more accurately at detecting true gene associations than the Pearson and Spearman zeroth- and first-order correlation-based methods. Conclusions REGNET generates gene association networks from gene expression data, and differs from correlation-based methods in that the relationship between one gene and others is calculated simultaneously. Model trees are very useful techniques to estimate the numerical values for the target genes by linear regression functions. They are very often more precise than linear regression models because they can add just different linear
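    The one-tree-per-gene idea can be sketched with plain regression trees. REGNET itself uses model trees with linear leaves plus an FDR-controlled significance step, both simplified away here; the data are synthetic.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        def tree_network(expr, genes, top_k=3):
            """Fit one regression tree per target gene on all other genes and
            keep the most important predictors as candidate regulators."""
            edges = []
            for j, target in enumerate(genes):
                X = np.delete(expr, j, axis=1)
                tree = DecisionTreeRegressor(max_depth=3, random_state=0)
                tree.fit(X, expr[:, j])
                others = [g for g in genes if g != target]
                for idx in np.argsort(tree.feature_importances_)[::-1][:top_k]:
                    if tree.feature_importances_[idx] > 0:
                        edges.append((others[idx], target))
            return edges

        rng = np.random.default_rng(4)
        expr = rng.normal(size=(80, 5))
        expr[:, 4] = 2 * expr[:, 0] - expr[:, 1]   # planted regulation
        print(tree_network(expr, list("ABCDE")))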

  3. A third-order-accurate upwind scheme for Navier-Stokes solutions at high Reynolds numbers

    Agarwal, R. K.

    1981-01-01

    A third-order-accurate upwind scheme is presented for solution of the steady two-dimensional Navier-Stokes equations in stream-function/vorticity form. The scheme is found to be accurate and stable at high Reynolds numbers. A series of test computations is performed on flows with large recirculating regions. In particular, highly accurate solutions are obtained for flow in a driven square cavity up to Reynolds numbers of 10,000. These computations are used to critically evaluate the accuracy of other existing first- and second-order-accurate upwind schemes. In addition, computations are carried out for flow in a channel with symmetric sudden expansion, flow in a channel with a symmetrically placed blunt base, and the flowfield of an impinging jet. Good agreement is obtained with the computations of other investigators as well as with the available experimental data.
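    For orientation, a representative third-order-accurate upwind-biased approximation of the convective derivative (written here for a positive advection speed; the paper applies such stencils within the stream-function/vorticity equations) is

        \[
        \left.\frac{\partial u}{\partial x}\right|_i \;\approx\;
        \frac{2u_{i+1} + 3u_i - 6u_{i-1} + u_{i-2}}{6\,\Delta x},
        \]

    whose leading truncation error is \((\Delta x^3/12)\,u_{xxxx}\); the upwind bias supplies the dissipation that keeps the scheme stable at high Reynolds numbers.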

  4. Polymorph stability prediction: On the importance of accurate structures: A case study of pyrazinamide

    Wahlberg, N.; Ciochon, P.; Petříček, Václav; Madsen, A. O.

    2014-01-01

    Vol. 14, No. 1 (2014), pp. 381-388. ISSN 1528-7483 Institutional support: RVO:68378271 Keywords: accurate structures * disorder * twinning Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 4.891, year: 2014

  5. An accurate bound on tensor-to-scalar ratio and the scale of inflation

    Choudhury, Sayantan

    2014-01-01

    In this paper we provide an accurate bound on the tensor-to-scalar ratio (r) for a class of models where inflation always occurs below the Planck scale and the field displacement during inflation remains sub-Planckian.

  6. Efficient and Accurate Computational Framework for Injector Design and Analysis Project

    National Aeronautics and Space Administration — CFD codes used to simulate upper stage expander cycle engines are not adequately mature to support design efforts. Rapid and accurate simulations require more...

  7. Are Normally Sighted, Visually Impaired, and Blind Pedestrians Accurate and Reliable at Making Street Crossing Decisions?

    Hassan, Shirin E.

    2012-01-01

    Visually impaired subjects were just as accurate and reliable as normally sighted subjects in making street crossing decisions. When using only auditory information, normally sighted, visually impaired and blind subjects, while reliable, were inaccurate in making street crossing decisions.

  8. Mechanism for accurate, protein-assisted DNA annealing by Deinococcus radiodurans DdrB.

    Sugiman-Marangos, Seiji N; Weiss, Yoni M; Junop, Murray S

    2016-04-19

    Accurate pairing of DNA strands is essential for repair of DNA double-strand breaks (DSBs). How cells achieve accurate annealing when large regions of single-strand DNA are unpaired has remained unclear despite many efforts focused on understanding proteins, which mediate this process. Here we report the crystal structure of a single-strand annealing protein [DdrB (DNA damage response B)] in complex with a partially annealed DNA intermediate to 2.2 Å. This structure and supporting biochemical data reveal a mechanism for accurate annealing involving DdrB-mediated proofreading of strand complementarity. DdrB promotes high-fidelity annealing by constraining specific bases from unauthorized association and only releases annealed duplex when bound strands are fully complementary. To our knowledge, this mechanism provides the first understanding for how cells achieve accurate, protein-assisted strand annealing under biological conditions that would otherwise favor misannealing. PMID:27044084

  9. Principles of gene therapy

    Mammen Biju; Ramakrishnan T; Sudhakar Uma; Vijayalakshmi

    2007-01-01

    Genes are specific sequences of bases that encode instructions to make proteins. When genes are altered so that encoded proteins are unable to carry out their normal functions, genetic disorders can result. Gene therapy is designed to introduce genetic material into cells to compensate for abnormal genes or to make a beneficial protein. This article reviews the fundamentals in gene therapy and its various modes of administration with an insight into the role of gene therapy in Periodontics an...

  10. Evolutionary History Reconstruction for Mammalian Complex Gene Clusters

    Zhang, Yu; Song, Giltae; Vinař, Tomáš; Green, Eric D; Siepel, Adam; Miller, Webb

    2009-01-01

    Clusters of genes that evolved from single progenitors via repeated segmental duplications present significant challenges to the generation of a truly complete human genome sequence. Such clusters can confound both accurate sequence assembly and downstream computational analysis, yet they represent a hotbed of functional innovation, making them of extreme interest. We have developed an algorithm for reconstructing the evolutionary history of gene clusters using only human genomic sequence dat...

  11. Multicolor labeling in developmental gene regulatory network analysis.

    Sethi, Aditya J; Angerer, Robert C; Angerer, Lynne M

    2014-01-01

    The sea urchin embryo is an important model system for developmental gene regulatory network (GRN) analysis. This chapter describes the use of multicolor fluorescent in situ hybridization (FISH) as well as a combination of FISH and immunohistochemistry in sea urchin embryonic GRN studies. The methods presented here can be applied to a variety of experimental settings where accurate spatial resolution of multiple gene products is required for constructing a developmental GRN. PMID:24567220

  12. Accurate Modeling of the Polysilicon-Insulator-Well (PIW) Capacitor in CMOS Technologies

    JAMASB, Shahriar; MOOSAVİ, Roya

    2015-01-01

    A practical method enabling rapid development of an accurate device model for the PIW MOS capacitor is introduced. The simultaneous improvement in accuracy and development time can be achieved without having to perform extensive measurements on specialized test structures by taking advantage of the MOS transistor model parameters routinely extracted in support of analog circuit design activities. This method affords accurate modeling of the voltage coefficient of capacitance over th...

  13. High-order accurate monotone difference schemes for solving gasdynamic problems by Godunov's method with antidiffusion

    Moiseev, N. Ya.

    2011-04-01

    An approach to the construction of high-order accurate monotone difference schemes for solving gasdynamic problems by Godunov's method with antidiffusion is proposed. Godunov's theorem on monotone schemes is used to construct a new antidiffusion flux limiter in high-order accurate difference schemes as applied to linear advection equations with constant coefficients. The efficiency of the approach is demonstrated by solving linear advection equations with constant coefficients and one-dimensional gasdynamic equations.
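    The paper derives its new limiter from Godunov's theorem; for orientation, the classical minmod limiter that such antidiffusion constructions build on is

        \[
        \operatorname{minmod}(a,b) =
        \begin{cases} \operatorname{sgn}(a)\,\min(|a|,\,|b|), & ab > 0,\\[2pt]
        0, & ab \le 0, \end{cases}
        \]

    applied to neighbouring solution differences to cap the antidiffusive correction added to the first-order Godunov flux while preserving monotonicity.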

  14. Biochip System for Rapid and Accurate Identification of Mycobacterial Species from Isolates and Sputum▿

    Zhu, Lingxiang; Jiang, Guanglu; Wang, Shengfen; Wang, Can; Li, Qiang; Yu, Hao; Zhou, Yang; Zhao, Bing; Huang, Hairong; Xing, Wanli; Mitchelson, Keith; Cheng, Jing; Zhao, Yanlin; Guo, Yong

    2010-01-01

    The accurate detection of mycobacterial species from isolates and clinical samples is important for pathogenic diagnosis and treatment and for disease control. There is an urgent need for the development of a rapid, simple, and accurate detection method. We established a biochip assay system, including a biochip, sample preparation apparatus, hybridization instrument, chip washing machine, and laser confocal scanner equipped with interpretation software for automatic diagnosis. The biochip si...

  15. Equation of state for hard sphere fluids offering accurate virial coefficients

    Tian, Jianxiang; Gui, Yuanxing; Mulero, A

    2016-01-01

    The asymptotic expansion method is extended by using currently available accurate values for the first ten virial coefficients for hard sphere fluids. It is then used to yield an equation of state for hard sphere fluids, which accurately represents the currently accepted values for the first sixteen virial coefficients and compressibility factor data in both the stable and the metastable regions of the phase diagram.
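    For reference, the series being matched is the hard-sphere virial expansion of the compressibility factor; in terms of the packing fraction \(\eta = \pi\rho\sigma^3/6\), with the well-established low-order reduced coefficients,

        \[
        Z = \frac{pV}{N k_B T} = 1 + \sum_{n \ge 2} B_n \rho^{\,n-1}
          = 1 + 4\eta + 10\eta^2 + 18.365\,\eta^3 + \cdots,
        \]

    and the asymptotic-expansion equation of state is constrained to reproduce these coefficients through sixteenth order.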

  16. Automated Selected Reaction Monitoring Software for Accurate Label-Free Protein Quantification

    Teleman, Johan; Karlsson, Christofer; Waldemarson, Sofia; Hansson, Karin; James, Peter; Malmström, Johan; Levander, Fredrik

    2012-01-01

    Selected reaction monitoring (SRM) is a mass spectrometry method with documented ability to quantify proteins accurately and reproducibly using labeled reference peptides. However, the use of labeled reference peptides becomes impractical if large numbers of peptides are targeted and when high flexibility is desired when selecting peptides. We have developed a label-free quantitative SRM workflow that relies on a new automated algorithm, Anubis, for accurate peak detection. Anubis efficiently...

  17. Rapid and Accurate Evaluation of the Quality of Commercial Organic Fertilizers Using Near Infrared Spectroscopy

    Chang Wang; Chichao Huang; Jian Qian; Jian Xiao; Huan Li; Yongli Wen; Xinhua He; Wei Ran; Qirong Shen; Guanghui Yu

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing commercial organic fertilizers quality. A total of 104 commercial organic fertilizers were collected fro...
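    The NIR-PLS calibration step maps onto a few lines of scikit-learn. The sketch uses simulated spectra and a hypothetical quality index; a real workflow would load the 104 measured spectra and laboratory reference values.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(5)
        spectra = rng.normal(size=(104, 700))   # 104 samples x 700 wavelengths
        true_w = 0.02 * rng.normal(size=700)    # hypothetical spectral loading
        y = spectra @ true_w + rng.normal(0, 0.1, 104)   # quality index

        pls = PLSRegression(n_components=8)
        y_cv = cross_val_predict(pls, spectra, y, cv=10).ravel()
        print("RMSECV =", round(float(np.sqrt(np.mean((y - y_cv) ** 2))), 3))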

  18. A Decision Tree of Bigrams is an Accurate Predictor of Word Sense

    Pedersen, Ted

    2001-01-01

    This paper presents a corpus-based approach to word sense disambiguation where a decision tree assigns a sense to an ambiguous word based on the bigrams that occur nearby. This approach is evaluated using the sense-tagged corpora from the 1998 SENSEVAL word sense disambiguation exercise. It is more accurate than the average results reported for 30 of 36 words, and is more accurate than the best results for 19 of 36 words.
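    The approach is easy to mock up: bigram features from the context feed a decision tree that assigns the sense. The sense-tagged data below are toy examples, and the paper's feature selection and stopping criteria are more careful.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.tree import DecisionTreeClassifier

        contexts = [
            "grass grew on the river bank",
            "fishing from the muddy bank of the river",
            "deposit money at the bank",
            "the bank approved the loan",
        ] * 15
        senses = ["shore", "shore", "finance", "finance"] * 15

        # Bigrams occurring near the ambiguous word are the only features.
        wsd = make_pipeline(CountVectorizer(ngram_range=(2, 2)),
                            DecisionTreeClassifier(random_state=0))
        wsd.fit(contexts, senses)
        print(wsd.predict(["money in the bank", "the bank of the river"]))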

  19. The value of more accurate customer profitability reports: Does cost complexity matter?

    Cardinaels, Eddy; Roodhooft, Filip; Warlop, Luk; Van Herck, Gustaaf

    2002-01-01

    This paper reports experimental evidence on the potential benefits of reporting different levels of customer-related information in a pricing decision context. The paper mainly focuses on the influence of the complexity of the cost environment (measured by the degree of heterogeneity across customers) on the value of more accurate customer profitability systems. Contrary to the findings of Gupta & King (1997), our results indicate that the value of more accurate cost reports increases, as the ...

  20. Energy functions for protein design I: Efficient and accurate continuum electrostatics and solvation

    Pokala, Navin; Handel, Tracy M.

    2004-01-01

    Electrostatics and solvation energies are important for defining protein stability, structural specificity, and molecular recognition. Because these energies are difficult to compute quickly and accurately, they are often ignored or modeled very crudely in computational protein design. To address this problem, we have developed a simple, fast, and accurate approximation for calculating Born radii in the context of protein design calculations. When these approximate Born radii are used with th...
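    The computed Born radii presumably feed the standard generalized Born (Still et al.) estimate of the polarization free energy, sketched here for orientation; the paper's exact functional form may differ:

        \[
        \Delta G_{\mathrm{pol}} \approx
        -\frac{1}{2}\Bigl(1-\frac{1}{\epsilon}\Bigr)
        \sum_{i,j} \frac{q_i q_j}{f_{\mathrm{GB}}(r_{ij})},
        \qquad
        f_{\mathrm{GB}} = \sqrt{\,r_{ij}^2 + R_i R_j
        \exp\!\Bigl(-\frac{r_{ij}^2}{4 R_i R_j}\Bigr)},
        \]

    where the \(R_i\) are the Born radii; a fast, accurate radius approximation makes each design-energy evaluation correspondingly cheap.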