WorldWideScience

Sample records for accurate svm-based gene

  1. A New SVM-Based Modeling Method of Cabin Path Loss Prediction

    OpenAIRE

    Xiaonan Zhao; Chunping Hou; Qing Wang

    2013-01-01

    A new modeling method of cabin path loss prediction based on support vector machine (SVM) is proposed in this paper. The method is trained with the path loss values of measured points inside the cabin and can be used to predict the path loss values of the unmeasured points. The experimental results demonstrate that our modeling method is more accurate than the curve fitting method. This SVM-based path loss prediction method makes the prediction much easier and more accurate, which covers perf...
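The train-on-measured, predict-at-unmeasured regression setup this abstract describes can be sketched with a generic SVM regressor. This is a minimal illustration on synthetic log-distance data, not the authors' cabin measurements or tuned model; all values below are made up.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical stand-in for measured cabin points: path loss (dB) following
# a noisy log-distance law over distance d (m). Purely illustrative data.
rng = np.random.default_rng(0)
d = rng.uniform(1.0, 30.0, size=(80, 1))
loss = 40.0 + 20.0 * np.log10(d[:, 0]) + rng.normal(0.0, 1.0, size=80)

# Train an RBF-kernel SVR on the "measured" points...
model = SVR(kernel="rbf", C=100.0, epsilon=0.5)
model.fit(d, loss)

# ...then predict the path loss at unmeasured points.
d_new = np.array([[5.0], [15.0], [25.0]])
pred = model.predict(d_new)
```

The kernel and hyperparameters here are placeholders; the paper's point is only that a fitted SVR generalizes from measured to unmeasured positions better than curve fitting.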

  2. SVM-based Transfer of Visual Knowledge Across Robotic Platforms

    OpenAIRE

    Luo, Jie; Pronobis, Andrzej; Caputo, Barbara

    2007-01-01

    This paper presents an SVM-based algorithm for the transfer of knowledge across robot platforms aiming to perform the same task. Our method efficiently exploits the transferred knowledge while incrementally updating the internal representation as new information becomes available. The algorithm is adaptive and tends to privilege new data when building the SV solution. This prevents old knowledge from nesting in the model and eventually becoming a possible source of misleading information. We tested...

  3. Efficient SVM-based Recognition of Chinese Personal Names

    Institute of Scientific and Technical Information of China (English)

    Yu Ying(宇缨); Wang Xiaolong; Liu Bingquan; Wang Hui

    2004-01-01

    This paper provides a flexible and efficient method for identifying Chinese personal names based on SVM (Support Vector Machines). In this approach, personal-name formation rules are first employed to select a candidate set, and SVM-based identification strategies are then used to recognize real personal names within the candidate set. The basic semantemes of context words and the frequency information of words inside candidates are selected as features, which dramatically reduces the scale of the feature space and makes computation more efficient. In open testing, the method achieved an F-measure of 90.59% on a 2-million-word news corpus and 86.67% on a 16.17-million-word news corpus.

  4. SVM based layout retargeting for fast and regularized inverse lithography

    Institute of Scientific and Technical Information of China (English)

    Kai-sheng LUO; Zheng SHI; Xiao-lang YAN; Zhen GENG

    2014-01-01

    Inverse lithography technology (ILT), also known as pixel-based optical proximity correction (PB-OPC), has shown promising capability in pushing the current 193 nm lithography to its limit. By treating the mask optimization process as an inverse problem in lithography, ILT provides a more complete exploration of the solution space and better pattern fidelity than the traditional edge-based OPC. However, the existing methods of ILT are extremely time-consuming due to the slow convergence of the optimization process. To address this issue, in this paper we propose a support vector machine (SVM) based layout retargeting method for ILT, which is designed to generate a good initial input mask for the optimization process and speed up its convergence. Supervised by optimized masks of training layouts generated by conventional ILT, SVM models are learned and used to predict the initial pixel values in the 'undefined areas' of the new layout. By this process, an initial input mask close to the final optimized mask of the new layout is generated, which reduces the iterations needed in the following optimization process. Manufacturability is another critical issue in ILT; however, the mask generated by our layout retargeting method is quite irregular due to the prediction inaccuracy of the SVM models. To compensate for this drawback, a spatial filter is employed to regularize the retargeted mask for complexity reduction. We implemented our layout retargeting method with a regularized level-set based ILT (LSB-ILT) algorithm under partially coherent illumination conditions. Experimental results show that with an initial input mask generated by our layout retargeting method, the number of iterations needed in the optimization process and the runtime of the whole process in ILT are reduced by 70.8% and 69.0%, respectively.

  5. Combinatorial Approaches to Accurate Identification of Orthologous Genes

    OpenAIRE

    Shi, Guanqun

    2011-01-01

    The accurate identification of orthologous genes across different species is a critical and challenging problem in comparative genomics and has a wide spectrum of biological applications including gene function inference, evolutionary studies and systems biology. During the past several years, many methods have been proposed for ortholog assignment based on sequence similarity, phylogenetic approaches, synteny information, and genome rearrangement. Although these methods share many commonly a...

  6. Accurate prediction of secondary metabolite gene clusters in filamentous fungi

    DEFF Research Database (Denmark)

    Andersen, Mikael Rørdam; Nielsen, Jakob Blæsbjerg; Klitgaard, Andreas;

    2013-01-01

    Biosynthetic pathways of secondary metabolites from fungi are currently subject to an intense effort to elucidate the genetic basis for these compounds due to their large potential within pharmaceutics and synthetic biochemistry. The preferred method is methodical gene deletions to identify ... supporting enzymes for key synthases one cluster at a time. In this study, we design and apply a DNA expression array for Aspergillus nidulans in combination with legacy data to form a comprehensive gene expression compendium. We apply a guilt-by-association-based analysis to predict the extent ... of the biosynthetic clusters for the 58 synthases active in our set of experimental conditions. A comparison with legacy data shows the method to be accurate in 13 of 16 known clusters and nearly accurate for the remaining 3 clusters. Furthermore, we apply a data clustering approach, which identifies cross...

  7. svm PRAT: SVM-based Protein Residue Annotation Toolkit

    OpenAIRE

    Rangwala, Huzefa; Kauffman, Christopher; Karypis, George

    2009-01-01

    Background Over the last decade several prediction methods have been developed for determining the structural and functional properties of individual protein residues using sequence and sequence-derived information. Most of these methods are based on support vector machines as they provide accurate and generalizable prediction models. Results We present a general purpose protein residue annotation toolkit (svm PRAT) to allow biologists to formulate residue-wise prediction problems. svm PRAT f...

  8. Advances in SVM-Based System Using GMM Super Vectors for Text-Independent Speaker Verification

    Institute of Scientific and Technical Information of China (English)

    ZHAO Jian; DONG Yuan; ZHAO Xianyu; YANG Hao; LU Liang; WANG Haila

    2008-01-01

    For text-independent speaker verification, the Gaussian mixture model (GMM) using a universal background model strategy and the GMM using support vector machines are the two most commonly used methodologies. Recently, a new SVM-based speaker verification method using GMM super vectors has been proposed. This paper describes the construction of a new speaker verification system and investigates the use of nuisance attribute projection and test normalization to further enhance performance. Experiments were conducted on the core test of the 2006 NIST speaker recognition evaluation corpus. The experimental results indicate that an SVM-based speaker verification system using GMM super vectors can achieve appealing performance. With the use of nuisance attribute projection and test normalization, the system performance can be significantly improved, with improvements in the equal error rate from 7.78% to 4.92% and detection cost function from 0.0376 to 0.0251.

  9. Rapid and accurate synthesis of TALE genes from synthetic oligonucleotides.

    Science.gov (United States)

    Wang, Fenghua; Zhang, Hefei; Gao, Jingxia; Chen, Fengjiao; Chen, Sijie; Zhang, Cuizhen; Peng, Gang

    2016-01-01

    Custom synthesis of transcription activator-like effector (TALE) genes has relied upon plasmid libraries of pre-fabricated TALE-repeat monomers or oligomers. Here we describe a novel synthesis method that directly incorporates annealed synthetic oligonucleotides into the TALE-repeat units. Our approach utilizes iterative sets of oligonucleotides and a translational frame check strategy to ensure the high efficiency and accuracy of TALE-gene synthesis. TALE arrays of more than 20 repeats can be constructed, and the majority of the synthesized constructs have perfect sequences. In addition, this novel oligonucleotide-based method can readily accommodate design changes to the TALE repeats. We demonstrated an increased gene targeting efficiency against a genomic site containing a potentially methylated cytosine by incorporating non-conventional repeat variable di-residue (RVD) sequences.

  10. Diagnosis of Elevator Faults with LS-SVM Based on Optimization by K-CV

    Directory of Open Access Journals (Sweden)

    Zhou Wan

    2015-01-01

    Several common elevator malfunctions were diagnosed with a least squares support vector machine (LS-SVM). After acquiring vibration signals of various elevator functions, their energy characteristics and time-domain indicators were extracted by optimal wavelet packet analysis in order to construct a malfunction feature vector, used as the LS-SVM input for identifying the causes of the malfunctions. Meanwhile, the LS-SVM parameters were optimized by K-fold cross validation (K-CV). After diagnosing a deviated elevator guide rail, a deviated guide shoe, abnormal running of the tractor, an erroneous rope groove of the traction sheave, a deviated guide wheel, and wire rope tension, the results suggest that the LS-SVM optimized by K-CV is an effective method for diagnosing elevator malfunctions.
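The K-CV hyperparameter optimization step can be sketched with a standard SVM and a K-fold grid search. scikit-learn has no LS-SVM, so an ordinary soft-margin SVC stands in here, and the data are synthetic placeholders for the wavelet-packet fault features, not the paper's signals.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for elevator-fault feature vectors (e.g. wavelet-packet
# energies): two fault classes well separated in a 4-D feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(3.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# K-fold cross validation (here K = 5) over the SVM hyperparameters,
# mirroring the K-CV optimization of (C, gamma) described above.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]},
    cv=5,
)
grid.fit(X, y)
best_params = grid.best_params_
best_cv_acc = grid.best_score_
```

A real LS-SVM replaces the hinge loss with a squared-error loss solved as a linear system; the cross-validated parameter search is the same in either case.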

  11. Accurate and unambiguous tag-to-gene mapping in serial analysis of gene expression

    Directory of Open Access Journals (Sweden)

    Melo Francisco

    2006-11-01

    Background: In this study, we present a robust and reliable computational method for tag-to-gene assignment in serial analysis of gene expression (SAGE). The method relies on current genome information and annotation, incorporation of several new features, and key improvements over alternative methods, all of which are important to determine gene expression levels more accurately. The method provides a complete annotation of potential virtual SAGE tags within a genome, along with an estimation of their confidence for experimental observation that ranks tags presenting multiple matches in the genome. Results: We applied this method to the Saccharomyces cerevisiae genome, producing the most thorough and accurate annotation of potential virtual SAGE tags that is available today for this organism. The usefulness of this method is exemplified by the significant reduction of ambiguous cases in existing experimental SAGE data. In addition, we report new insights from the analysis of existing SAGE data. First, we found that experimental SAGE tags mapping onto introns, intron-exon boundaries, and non-coding RNA elements are observed in all available SAGE data. Second, a significant fraction of experimental SAGE tags was found to map onto genomic regions currently annotated as intergenic. Third, a significant number of existing experimental SAGE tags for yeast have been derived from truncated cDNAs, which are synthesized through oligo-d(T) priming to internal poly-(A) regions during reverse transcription. Conclusion: We conclude that an accurate and unambiguous tag mapping process is essential to increase the quality and the amount of information that can be extracted from SAGE experiments. This is supported by the results obtained here and also by the large impact that the erroneous interpretation of these data could have on downstream applications.

  12. Accurate Gene Expression-Based Biodosimetry Using a Minimal Set of Human Gene Transcripts

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, James D., E-mail: jtucker@biology.biosci.wayne.edu [Department of Biological Sciences, Wayne State University, Detroit, Michigan (United States); Joiner, Michael C. [Department of Radiation Oncology, Wayne State University, Detroit, Michigan (United States); Thomas, Robert A.; Grever, William E.; Bakhmutsky, Marina V. [Department of Biological Sciences, Wayne State University, Detroit, Michigan (United States); Chinkhota, Chantelle N.; Smolinski, Joseph M. [Department of Electrical and Computer Engineering, Wayne State University, Detroit, Michigan (United States); Divine, George W. [Department of Public Health Sciences, Henry Ford Hospital, Detroit, Michigan (United States); Auner, Gregory W. [Department of Electrical and Computer Engineering, Wayne State University, Detroit, Michigan (United States)

    2014-03-15

    Purpose: Rapid and reliable methods for conducting biological dosimetry are a necessity in the event of a large-scale nuclear event. Conventional biodosimetry methods lack the speed, portability, ease of use, and low cost required for triaging numerous victims. Here we address this need by showing that polymerase chain reaction (PCR) on a small number of gene transcripts can provide accurate and rapid dosimetry. The low cost and relative ease of PCR compared with existing dosimetry methods suggest that this approach may be useful in mass-casualty triage situations. Methods and Materials: Human peripheral blood from 60 adult donors was acutely exposed to cobalt-60 gamma rays at doses of 0 (control) to 10 Gy. mRNA expression levels of 121 selected genes were obtained 0.5, 1, and 2 days after exposure by reverse-transcriptase real-time PCR. Optimal dosimetry at each time point was obtained by stepwise regression of dose received against individual gene transcript expression levels. Results: Only 3 to 4 different gene transcripts, ASTN2, CDKN1A, GDF15, and ATM, are needed to explain ≥0.87 of the variance (R²). Receiver-operator characteristics, a measure of sensitivity and specificity, of 0.98 for these statistical models were achieved at each time point. Conclusions: The actual and predicted radiation doses agree very closely up to 6 Gy. Dosimetry at 8 and 10 Gy shows some effect of saturation, thereby slightly diminishing the ability to quantify higher exposures. Analyses of these gene transcripts may be advantageous for use in a field-portable device designed to assess exposures in mass casualty situations or in clinical radiation emergencies.
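The stepwise-regression selection described above (greedily choosing the few transcripts that best explain dose) can be sketched as forward selection on synthetic data. The data, the 0.01 stopping threshold, and the "10 transcripts, 2 dose-responsive" setup are all illustrative assumptions, not the paper's 121-gene dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in: expression of 10 "gene transcripts" across 120 samples,
# where only transcripts 0 and 1 actually respond to dose.
rng = np.random.default_rng(2)
dose = rng.uniform(0.0, 10.0, 120)
X = rng.normal(0.0, 1.0, (120, 10))
X[:, 0] += 0.8 * dose          # strongly dose-responsive transcript
X[:, 1] += 0.5 * dose          # weaker dose-responsive transcript

# Forward stepwise regression: greedily add the transcript that most improves
# R^2 when regressing dose on expression, stopping when the gain is negligible.
selected, remaining, best_r2 = [], list(range(10)), 0.0
while remaining:
    def r2_with(j):
        cols = X[:, selected + [j]]
        return LinearRegression().fit(cols, dose).score(cols, dose)
    r2, j = max((r2_with(j), j) for j in remaining)
    if r2 - best_r2 < 0.01:    # negligible improvement: stop adding
        break
    selected.append(j)
    remaining.remove(j)
    best_r2 = r2
```

With this construction the procedure recovers the informative transcripts first, mirroring how only 3-4 of 121 transcripts were needed in the study.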

  13. LDA-SVM-Based EGFR Mutation Model for NSCLC Brain Metastases

    Science.gov (United States)

    Hu, Nan; Wang, Ge; Wu, Yu-Hao; Chen, Shi-Feng; Liu, Guo-Dong; Chen, Chuan; Wang, Dong; He, Zhong-Shi; Yang, Xue-Qin; He, Yong; Xiao, Hua-Liang; Huang, Ding-De; Xiong, Kun-Lin; Wu, Yan; Huang, Ming; Yang, Zhen-Zhou

    2015-01-01

    Abstract Epidermal growth factor receptor (EGFR) activating mutations are a predictor of tyrosine kinase inhibitor effectiveness in the treatment of non–small-cell lung cancer (NSCLC). The objective of this study is to build a model for predicting the EGFR mutation status of brain metastasis in patients with NSCLC. Observation and model set-up. This study was conducted between January 2003 and December 2011 in 6 medical centers in Southwest China. The study included 31 NSCLC patients with brain metastases. Eligibility requirements were histological proof of NSCLC, as well as sufficient quantity of paraffin-embedded lung and brain metastases specimens for EGFR mutation detection. The linear discriminant analysis (LDA) method was used for analyzing the dimensional reduction of clinical features, and a support vector machine (SVM) algorithm was employed to generate an EGFR mutation model for NSCLC brain metastases. Training-testing-validation (3 : 1 : 1) processes were applied to find the best fit in 12 patients (validation test set) with NSCLC and brain metastases treated with a tyrosine kinase inhibitor and whole-brain radiotherapy. Primary and secondary outcome measures: EGFR mutation analysis in patients with NSCLC and brain metastases and the development of a LDA-SVM-based EGFR mutation model for NSCLC brain metastases patients. EGFR mutation discordance between the primary lung tumor and brain metastases was found in 5 patients. Using LDA, 13 clinical features were transformed into 9 characteristics, and 3 were selected as primary vectors. The EGFR mutation model constructed with SVM algorithms had an accuracy, sensitivity, and specificity for determining the mutation status of brain metastases of 0.879, 0.886, and 0.875, respectively. Furthermore, the replicability of our model was confirmed by testing 100 random combinations of input values. The LDA-SVM-based model developed in this study could predict the EGFR status of brain metastases in this

  14. LDA-SVM-based EGFR mutation model for NSCLC brain metastases: an observational study.

    Science.gov (United States)

    Hu, Nan; Wang, Ge; Wu, Yu-Hao; Chen, Shi-Feng; Liu, Guo-Dong; Chen, Chuan; Wang, Dong; He, Zhong-Shi; Yang, Xue-Qin; He, Yong; Xiao, Hua-Liang; Huang, Ding-De; Xiong, Kun-Lin; Wu, Yan; Huang, Ming; Yang, Zhen-Zhou

    2015-02-01

    Epidermal growth factor receptor (EGFR) activating mutations are a predictor of tyrosine kinase inhibitor effectiveness in the treatment of non-small-cell lung cancer (NSCLC). The objective of this study is to build a model for predicting the EGFR mutation status of brain metastasis in patients with NSCLC. Observation and model set-up. This study was conducted between January 2003 and December 2011 in 6 medical centers in Southwest China. The study included 31 NSCLC patients with brain metastases. Eligibility requirements were histological proof of NSCLC, as well as sufficient quantity of paraffin-embedded lung and brain metastases specimens for EGFR mutation detection. The linear discriminant analysis (LDA) method was used for analyzing the dimensional reduction of clinical features, and a support vector machine (SVM) algorithm was employed to generate an EGFR mutation model for NSCLC brain metastases. Training-testing-validation (3 : 1 : 1) processes were applied to find the best fit in 12 patients (validation test set) with NSCLC and brain metastases treated with a tyrosine kinase inhibitor and whole-brain radiotherapy. Primary and secondary outcome measures: EGFR mutation analysis in patients with NSCLC and brain metastases and the development of a LDA-SVM-based EGFR mutation model for NSCLC brain metastases patients. EGFR mutation discordance between the primary lung tumor and brain metastases was found in 5 patients. Using LDA, 13 clinical features were transformed into 9 characteristics, and 3 were selected as primary vectors. The EGFR mutation model constructed with SVM algorithms had an accuracy, sensitivity, and specificity for determining the mutation status of brain metastases of 0.879, 0.886, and 0.875, respectively. Furthermore, the replicability of our model was confirmed by testing 100 random combinations of input values. The LDA-SVM-based model developed in this study could predict the EGFR status of brain metastases in this small cohort of
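The LDA-then-SVM workflow can be sketched as a two-stage pipeline on synthetic data. Note one simplification: with a binary target, scikit-learn's LDA yields at most one discriminant component, unlike the paper's multi-step 13 → 9 → 3 reduction; everything below (sample counts, class means, 13 mock features) is an illustrative assumption.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for 13 clinical features from two groups of patients.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (40, 13)), rng.normal(1.5, 1.0, (40, 13))])
y = np.array([0] * 40 + [1] * 40)  # 0 = EGFR wild-type, 1 = mutant (mock labels)

# Stage 1: LDA reduces the clinical features to discriminant component(s);
# Stage 2: an RBF-kernel SVM classifies mutation status in the reduced space.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC(kernel="rbf"))
clf.fit(X, y)
train_acc = clf.score(X, y)
```

A real validation would score on held-out patients (the paper used a 3:1:1 training-testing-validation split) rather than on the training set as done here for brevity.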

  15. Evaluation of new reference genes in papaya for accurate transcript normalization under different experimental conditions.

    Science.gov (United States)

    Zhu, Xiaoyang; Li, Xueping; Chen, Weixin; Chen, Jianye; Lu, Wangjin; Chen, Lei; Fu, Danwen

    2012-01-01

    Real-time reverse transcription PCR (RT-qPCR) is a preferred method for rapid and accurate quantification of gene expression. Appropriate application of RT-qPCR requires accurate normalization through the use of reference genes. Because no single reference gene is universally suitable for all experiments, validation of reference genes under different experimental conditions is crucial for RT-qPCR analysis. To date, only a few studies on reference genes have been carried out in other plants, and none in papaya. In the present work, we selected 21 candidate reference genes and evaluated their expression stability in 246 papaya fruit samples using three algorithms: geNorm, NormFinder and RefFinder. The samples consisted of 13 sets collected under different experimental conditions, including various tissues, different storage temperatures, different cultivars, developmental stages, postharvest ripening, modified atmosphere packaging, 1-methylcyclopropene (1-MCP) treatment, hot water treatment, biotic stress and hormone treatment. Our results demonstrated that expression stability varied greatly between reference genes and that suitable reference gene(s), or combinations of reference genes, should be validated for each set of experimental conditions. In general, the internal reference genes EIF (eukaryotic initiation factor 4A), TBP1 (TATA binding protein 1) and TBP2 (TATA binding protein 2) performed well under most experimental conditions, whereas the most widely used reference genes, ACTIN (Actin 2), 18S rRNA (18S ribosomal RNA) and GAPDH (glyceraldehyde-3-phosphate dehydrogenase), were not suitable in many experimental conditions. In addition, the two commonly used programs geNorm and NormFinder proved sufficient for the validation. This work provides the first systematic analysis for the selection of superior reference genes for accurate transcript normalization in papaya under different experimental conditions.
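The stability ranking produced by geNorm rests on a simple idea: a gene's M-value is the mean standard deviation of its pairwise log-expression ratios against every other candidate, and lower M means more stable. A minimal sketch of that measure on mock data (five fake genes, one deliberately unstable):

```python
import numpy as np

# Mock log2 expression of 5 candidate reference genes across 30 samples.
# Gene 4 is given extra sample-to-sample variation, i.e. it is "unstable".
rng = np.random.default_rng(6)
log_expr = rng.normal(0.0, 0.1, (30, 5))
log_expr[:, 4] += rng.normal(0.0, 1.0, 30)

def genorm_m(log_expr):
    """geNorm-style M-value per gene: mean stdev of pairwise log ratios."""
    n = log_expr.shape[1]
    m = np.empty(n)
    for j in range(n):
        ratios = log_expr[:, [j]] - np.delete(log_expr, j, axis=1)
        m[j] = ratios.std(axis=0, ddof=1).mean()
    return m

m_values = genorm_m(log_expr)
least_stable = int(m_values.argmax())   # should flag the noisy gene
```

This is only the core statistic; the full geNorm procedure iteratively removes the worst gene and recomputes M until the most stable pair remains.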

  16. Evaluation of new reference genes in papaya for accurate transcript normalization under different experimental conditions.

    Directory of Open Access Journals (Sweden)

    Xiaoyang Zhu

    Real-time reverse transcription PCR (RT-qPCR) is a preferred method for rapid and accurate quantification of gene expression. Appropriate application of RT-qPCR requires accurate normalization through the use of reference genes. Because no single reference gene is universally suitable for all experiments, validation of reference genes under different experimental conditions is crucial for RT-qPCR analysis. To date, only a few studies on reference genes have been carried out in other plants, and none in papaya. In the present work, we selected 21 candidate reference genes and evaluated their expression stability in 246 papaya fruit samples using three algorithms: geNorm, NormFinder and RefFinder. The samples consisted of 13 sets collected under different experimental conditions, including various tissues, different storage temperatures, different cultivars, developmental stages, postharvest ripening, modified atmosphere packaging, 1-methylcyclopropene (1-MCP) treatment, hot water treatment, biotic stress and hormone treatment. Our results demonstrated that expression stability varied greatly between reference genes and that suitable reference gene(s), or combinations of reference genes, should be validated for each set of experimental conditions. In general, the internal reference genes EIF (eukaryotic initiation factor 4A), TBP1 (TATA binding protein 1) and TBP2 (TATA binding protein 2) performed well under most experimental conditions, whereas the most widely used reference genes, ACTIN (Actin 2), 18S rRNA (18S ribosomal RNA) and GAPDH (glyceraldehyde-3-phosphate dehydrogenase), were not suitable in many experimental conditions. In addition, the two commonly used programs geNorm and NormFinder proved sufficient for the validation. This work provides the first systematic analysis for the selection of superior reference genes for accurate transcript normalization in papaya under different experimental

  17. Fast and Accurate Large-Scale Detection of β-Lactamase Genes Conferring Antibiotic Resistance

    OpenAIRE

    Lee, Jae Jin; Lee, Jung Hun; Kwon, Dae Beom; Jeon, Jeong Ho; Park, Kwang Seung; Lee, Chang-Ro; Lee, Sang Hee

    2015-01-01

    Fast detection of β-lactamase (bla) genes allows improved surveillance studies and infection control measures, which can minimize the spread of antibiotic resistance. Although several molecular diagnostic methods have been developed to detect limited bla gene types, these methods have significant limitations, such as their failure to detect almost all clinically available bla genes. We developed a fast and accurate molecular method to overcome these limitations using 62 primer pairs, which we...

  18. PSO-SVM-Based Online Locomotion Mode Identification for Rehabilitation Robotic Exoskeletons

    Directory of Open Access Journals (Sweden)

    Yi Long

    2016-09-01

    Locomotion mode identification is essential for the control of robotic rehabilitation exoskeletons. This paper proposes an online support vector machine (SVM) optimized by particle swarm optimization (PSO) to identify different locomotion modes and realize a smooth and automatic locomotion transition. A PSO algorithm is used to obtain the optimal parameters of the SVM for a better overall performance. Signals measured by the foot pressure sensors integrated in the insoles of wearable shoes and by the MEMS-based attitude and heading reference systems (AHRS) attached to the shoes and shanks of the leg segments are fused together as the input information of the SVM. Based on the chosen window, whose size is 200 ms (with a sampling frequency of 40 Hz), a three-layer wavelet packet analysis (WPA) is used for feature extraction, after which kernel principal component analysis (kPCA) is utilized to reduce the dimension of the feature set and thereby the computation cost of the SVM. Since the signals come from two different types of sensors, normalization is conducted to scale the input into the interval [0, 1]. Five-fold cross validation is adopted to train the classifier, which prevents over-fitting. Based on the SVM model obtained offline in MATLAB, an online SVM algorithm is constructed for locomotion mode identification. Experiments are performed for different locomotion modes, and the experimental results show the effectiveness of the proposed algorithm, with an accuracy of 96.00% ± 2.45%. To improve accuracy, a majority vote algorithm (MVA) is used for post-processing, with which the identification accuracy is better than 98.35% ± 1.65%. The proposed algorithm can be extended and employed in the field of robotic rehabilitation and assistance.

  19. PSO-SVM-Based Online Locomotion Mode Identification for Rehabilitation Robotic Exoskeletons.

    Science.gov (United States)

    Long, Yi; Du, Zhi-Jiang; Wang, Wei-Dong; Zhao, Guang-Yu; Xu, Guo-Qiang; He, Long; Mao, Xi-Wang; Dong, Wei

    2016-01-01

    Locomotion mode identification is essential for the control of robotic rehabilitation exoskeletons. This paper proposes an online support vector machine (SVM) optimized by particle swarm optimization (PSO) to identify different locomotion modes and realize a smooth and automatic locomotion transition. A PSO algorithm is used to obtain the optimal parameters of the SVM for a better overall performance. Signals measured by the foot pressure sensors integrated in the insoles of wearable shoes and by the MEMS-based attitude and heading reference systems (AHRS) attached to the shoes and shanks of the leg segments are fused together as the input information of the SVM. Based on the chosen window, whose size is 200 ms (with a sampling frequency of 40 Hz), a three-layer wavelet packet analysis (WPA) is used for feature extraction, after which kernel principal component analysis (kPCA) is utilized to reduce the dimension of the feature set and thereby the computation cost of the SVM. Since the signals come from two different types of sensors, normalization is conducted to scale the input into the interval [0, 1]. Five-fold cross validation is adopted to train the classifier, which prevents over-fitting. Based on the SVM model obtained offline in MATLAB, an online SVM algorithm is constructed for locomotion mode identification. Experiments are performed for different locomotion modes, and the experimental results show the effectiveness of the proposed algorithm, with an accuracy of 96.00% ± 2.45%. To improve accuracy, a majority vote algorithm (MVA) is used for post-processing, with which the identification accuracy is better than 98.35% ± 1.65%. The proposed algorithm can be extended and employed in the field of robotic rehabilitation and assistance. PMID:27598160
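The classification chain described here — scale inputs to [0, 1], reduce dimension with kernel PCA, classify with an SVM, validate with five-fold CV — maps cleanly onto a scikit-learn pipeline. The data below are synthetic stand-ins for the fused pressure/AHRS features (wavelet-packet extraction is assumed already done), and the PSO tuning step is omitted.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Synthetic stand-in for 12-D feature vectors of three locomotion modes.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 1.0, (60, 12)) for m in (0.0, 2.5, 5.0)])
y = np.repeat([0, 1, 2], 60)

# [0, 1] scaling -> kernel PCA dimensionality reduction -> RBF SVM,
# scored with five-fold cross validation as in the paper.
pipe = make_pipeline(
    MinMaxScaler(),
    KernelPCA(n_components=5, kernel="rbf"),
    SVC(kernel="rbf"),
)
scores = cross_val_score(pipe, X, y, cv=5)
mean_acc = scores.mean()
```

Building the steps into one pipeline guarantees that the scaler and kPCA are fitted only on each CV training fold, which is what makes the five-fold estimate honest.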

  1. Reference genes for accurate transcript normalization in citrus genotypes under different experimental conditions.

    Directory of Open Access Journals (Sweden)

    Valéria Mafra

    Real-time reverse transcription PCR (RT-qPCR) has emerged as an accurate and widely used technique for expression profiling of selected genes. However, obtaining reliable measurements depends on the selection of appropriate reference genes for gene expression normalization. The aim of this work was to assess the expression stability of 15 candidate genes to determine which set of reference genes is best suited for transcript normalization in citrus in different tissues and organs and leaves challenged with five pathogens (Alternaria alternata, Phytophthora parasitica, Xylella fastidiosa and Candidatus Liberibacter asiaticus). We tested traditional genes used for transcript normalization in citrus and orthologs of Arabidopsis thaliana genes described as superior reference genes based on transcriptome data. geNorm and NormFinder algorithms were used to find the best reference genes to normalize all samples and conditions tested. Additionally, each biotic stress was individually analyzed by geNorm. In general, FBOX (encoding a member of the F-box family) and GAPC2 (GAPDH) was the most stable candidate gene set assessed under the different conditions and subsets tested, while CYP (cyclophilin), TUB (tubulin) and CtP (cathepsin) were the least stably expressed genes found. Validation of the best suitable reference genes for normalizing the expression level of the WRKY70 transcription factor in leaves infected with Candidatus Liberibacter asiaticus showed that arbitrary use of reference genes without previous testing could lead to misinterpretation of data. Our results revealed FBOX, SAND (a SAND family protein), GAPC2 and UPL7 (ubiquitin protein ligase 7) to be superior reference genes, and we recommend their use in studies of gene expression in citrus species and relatives. This work constitutes the first systematic analysis for the selection of superior reference genes for transcript normalization in different citrus organs and under biotic stress.

  2. An accurate DNA marker assay for stem rust resistance gene Sr2 in wheat

    Science.gov (United States)

    The stem rust resistance gene Sr2 has provided broad-spectrum protection against stem rust (Puccinia graminis) since its widespread deployment in wheat from the 1940s. Because Sr2 confers partial resistance that is difficult to select under field conditions, a DNA marker is desirable that accurate...

  3. Modeling the milling tool wear by using an evolutionary SVM-based model from milling runs experimental data

    Science.gov (United States)

    Nieto, Paulino José García; García-Gonzalo, Esperanza; Vilán, José Antonio Vilán; Robleda, Abraham Segade

    2015-12-01

    The main aim of this research work is to build a new practical hybrid regression model to predict the milling tool wear in a regular cut, as well as the entry cut and exit cut, of a milling tool. The model is based on Particle Swarm Optimization (PSO) in combination with support vector machines (SVMs). This optimization mechanism involves kernel parameter setting in the SVM training procedure, which significantly influences the regression accuracy. Bearing this in mind, a PSO-SVM-based model, grounded in statistical learning theory, was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. The experimental dataset represents runs on a milling machine under various operating conditions, in which data sampled by three different types of sensors (acoustic emission sensor, vibration sensor and current sensor) were acquired at several positions. A second aim is to determine the factors with the greatest bearing on the milling tool flank wear, with a view to proposing improvements to the milling machine. Firstly, this hybrid PSO-SVM-based regression model captures the main insight of statistical learning theory in order to obtain a good prediction of the dependence between the flank wear (output variable) and the input variables (time, depth of cut, feed, etc.). Indeed, regression with optimal hyperparameters was performed and a coefficient of determination of 0.95 was obtained. The agreement of this model with the experimental data confirms its good performance. Secondly, the main advantages of this PSO-SVM-based model are its capacity to produce a simple, easy-to-interpret model, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, the main conclusions of this study are presented.
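    The PSO step in records like this one tunes SVM kernel hyperparameters by minimizing a cross-validation error. The loop can be sketched with a minimal global-best PSO in Python; here a made-up quadratic `cv_error` stands in for the real SVM cross-validation loss, and the function name, bounds and coefficients are illustrative assumptions, not values from the paper.

```python
import random

def pso_minimize(objective, bounds, n_particles=20, n_iter=60, seed=0):
    """Minimal global-best particle swarm optimizer.

    objective: maps a parameter list to a scalar loss.
    bounds: list of (low, high) tuples, one per parameter.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia and acceleration weights
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp the new position to the search bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical stand-in for SVM cross-validation error as a function of
# (log C, log gamma); a real pipeline would train an SVM here instead.
def cv_error(params):
    log_c, log_gamma = params
    return (log_c - 2.0) ** 2 + (log_gamma + 3.0) ** 2

best, err = pso_minimize(cv_error, bounds=[(-5, 5), (-8, 2)])
```

    In a real application, `cv_error` would train an SVM with the candidate (C, gamma) pair and return the measured validation error, so PSO never needs gradients of the loss surface.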

  4. An SVM-Based Classifier for Estimating the State of Various Rotating Components in Agro-Industrial Machinery with a Vibration Signal Acquired from a Single Point on the Machine Chassis

    Directory of Open Access Journals (Sweden)

    Ruben Ruiz-Gonzalez

    2014-11-01

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, requiring a maximum of only seven input features, with no significant differences between nonlinear and linear kernels.
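    Leave-One-Out cross-validation, as used above, trains on all samples but one and tests on the held-out sample, repeating for every sample. A small sketch with a nearest-centroid classifier standing in for the SVM, on made-up two-feature "machine state" data (the features and labels are invented for illustration):

```python
def centroids(X, y):
    """Mean feature vector per class label."""
    cents = {}
    for label in set(y):
        pts = [x for x, l in zip(X, y) if l == label]
        cents[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return cents

def predict(cents, x):
    """Assign x to the class with the nearest centroid."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda label: dist2(cents[label], x))

def loo_accuracy(X, y):
    """Leave-One-Out CV: hold out each sample in turn, train on the rest."""
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        correct += predict(centroids(train_X, train_y), X[i]) == y[i]
    return correct / len(X)

# Two well-separated synthetic "machine states"
X = [[0.1, 0.2], [0.2, 0.1], [0.0, 0.0], [2.0, 2.1], [2.1, 1.9], [1.9, 2.0]]
y = ["ok", "ok", "ok", "worn", "worn", "worn"]
acc = loo_accuracy(X, y)
```

    Swapping the nearest-centroid stand-in for an SVM (and the toy features for the twelve extracted vibration features) gives the evaluation scheme the article describes.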

  5. Hybrid PSO–SVM-based method for forecasting of the remaining useful life for aircraft engines and evaluation of its reliability

    International Nuclear Information System (INIS)

    The present paper describes a hybrid PSO–SVM-based model for the prediction of the remaining useful life of aircraft engines. The proposed hybrid model combines support vector machines (SVMs), which have been successfully adopted for regression problems, with the particle swarm optimization (PSO) technique. This optimization technique involves kernel parameter setting in the SVM training procedure, which significantly influences the regression accuracy; however, its use in reliability applications has not yet been widely explored. Bearing this in mind, remaining useful life values have been successfully predicted here by using the hybrid PSO–SVM-based model from the remaining measured parameters (input variables) for aircraft engines. A coefficient of determination equal to 0.9034 was obtained when this hybrid PSO–RBF–SVM-based model was applied to experimental data. The agreement of this model with the experimental data confirmed its good performance. One of the main advantages of this predictive model is that it does not require information about the previous operation states of the engine. Finally, the main conclusions of this study are presented. - Highlights: • A hybrid PSO–SVM-based model is built as a predictive model of the RUL values for aircraft engines. • The remaining physical–chemical variables in this process are studied in depth. • The obtained regression accuracy of our method is about 95%. • The results show that the PSO–SVM-based model can assist in the diagnosis of the RUL values with accuracy

  6. Validation of Reference Genes for Accurate Normalization of Gene Expression in Lilium davidii var. unicolor for Real Time Quantitative PCR.

    Directory of Open Access Journals (Sweden)

    XueYan Li

    Lilium is an important commercial flower bulb crop. qRT-PCR is an extremely important technique for tracking gene expression levels, and the requirement of suitable reference genes for normalization has become increasingly significant. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For the economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes, including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, was evaluated in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate, whereas ACT together with AP4, or ACT along with GAPDH, is suitable for normalization in leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively, showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes.
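    The geNorm algorithm used in several records here ranks candidate reference genes by a stability measure M: for each gene, the average standard deviation of its pairwise log2 expression ratios with every other candidate across samples, with lower M meaning more stable expression. A compact sketch with invented expression values (the gene names echo the abstract; the numbers are made up):

```python
import math
import statistics

def genorm_m(expr):
    """geNorm-style stability measure.

    expr: dict gene -> list of relative expression values (same sample order).
    Returns dict gene -> M value; lower M means more stable expression.
    """
    genes = list(expr)
    m = {}
    for g in genes:
        sds = []
        for h in genes:
            if h == g:
                continue
            # Pairwise log2 ratio across samples; its SD measures co-variation.
            ratios = [math.log2(a / b) for a, b in zip(expr[g], expr[h])]
            sds.append(statistics.stdev(ratios))
        m[g] = sum(sds) / len(sds)
    return m

# Made-up expression values for three candidate reference genes in four samples
expr = {
    "ACT":   [1.0, 1.1, 0.9, 1.0],
    "GAPDH": [2.0, 2.2, 1.8, 2.0],   # tracks ACT closely -> low M for the pair
    "TUB":   [1.0, 3.0, 0.4, 2.5],   # erratic -> high M
}
m = genorm_m(expr)
```

    The full geNorm procedure then iteratively removes the gene with the highest M and recomputes, until the most stable pair remains.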

  7. Study of SVM-Based Incremental Learning for User Adaptation

    Institute of Scientific and Technical Information of China (English)

    彭彬彬; 金翔宇; 徐晓刚; 孙正兴

    2003-01-01

    User adaptation is a critical and important problem. Because of user-specific characteristics, such as handwriting, voice, and drawing styles, it is hard for a system to adapt to all users. SVM-based incremental learning can find the most basic features shared by different users while casting away individual users' idiosyncrasies, so this method can adapt to different users without overfitting. In this paper, a repetitive learning strategy and two other incremental learning algorithms are presented for comparison. Based on theoretical analysis and experimental results, we draw the conclusion that SVM-based incremental learning can solve the user conflict problem.

  8. Gene expression signatures of radiation response are specific, durable and accurate in mice and humans.

    Directory of Open Access Journals (Sweden)

    Sarah K Meadows

    BACKGROUND: Previous work has demonstrated the potential of peripheral blood (PB) gene expression profiling for the detection of disease or environmental exposures. METHODS AND FINDINGS: We sought to determine the impact of several variables on the PB gene expression profile of an environmental exposure, ionizing radiation, and to determine the specificity of the PB signature of radiation versus other genotoxic stresses. Neither genotype differences nor the time of PB sampling lessened the accuracy of PB signatures in predicting radiation exposure, but sex differences did influence the accuracy of the prediction of radiation exposure at the lowest level (50 cGy). A PB signature of sepsis was also generated, and both the PB signature of radiation and the PB signature of sepsis were found to be 100% specific at distinguishing irradiated from septic animals. We also identified human PB signatures of radiation exposure and chemotherapy treatment which distinguished irradiated patients and chemotherapy-treated individuals within a heterogeneous population with accuracies of 90% and 81%, respectively. CONCLUSIONS: We conclude that PB gene expression profiles can be identified in mice and humans that are accurate in predicting medical conditions, are specific to each condition and remain highly accurate over time.

  9. In-Vivo Imaging of Cell Migration Using Contrast Enhanced MRI and SVM Based Post-Processing.

    Directory of Open Access Journals (Sweden)

    Christian Weis

    The migration of cells within a living organism can be observed with magnetic resonance imaging (MRI) in combination with iron oxide nanoparticles as an intracellular contrast agent. This method, however, suffers from low sensitivity and specificity. Here, we developed a quantitative non-invasive in-vivo cell localization method using contrast-enhanced multiparametric MRI and support vector machine (SVM)-based post-processing. Imaging phantoms consisting of agarose with compartments containing different concentrations of cancer cells labeled with iron oxide nanoparticles were used to train and evaluate the SVM for cell localization. From the magnitude and phase data acquired with a series of T2*-weighted gradient-echo scans at different echo times, we extracted features that are characteristic for the presence of superparamagnetic nanoparticles, in particular hyper- and hypointensities, relaxation rates, short-range phase perturbations, and perturbation dynamics. High detection quality was achieved by SVM analysis of the multiparametric feature space. The in-vivo applicability was validated in animal studies. The SVM detected the presence of iron oxide nanoparticles in the imaging phantoms with high specificity and sensitivity, with a detection limit of 30 labeled cells per mm3, corresponding to 19 μM of iron oxide. As proof-of-concept, we applied the method to follow the migration of labeled cancer cells injected in rats. The combination of iron oxide labeled cells, multiparametric MRI and SVM-based post-processing provides high spatial resolution, specificity, and sensitivity, and is therefore suitable for non-invasive in-vivo cell detection and cell migration studies over prolonged time periods.

  10. svmPRAT: SVM-based Protein Residue Annotation Toolkit

    OpenAIRE

    Kauffman Christopher; Rangwala Huzefa; Karypis George

    2009-01-01

    Background: Over the last decade several prediction methods have been developed for determining the structural and functional properties of individual protein residues using sequence and sequence-derived information. Most of these methods are based on support vector machines as they provide accurate and generalizable prediction models. Results: We present a general purpose protein residue annotation toolkit (svmPRAT) to allow biologists to formulate residue-wise prediction problems. sv...

  11. A PSO-SVM-based 24 Hours Power Load Forecasting Model

    Directory of Open Access Journals (Sweden)

    Yu Xiaoxu

    2015-01-01

    In order to overcome the drawbacks of the back propagation (BP) neural network, which tends to over-fit and to get stuck in local extremes, a new power load forecasting model combining the wavelet transform with PSO-SVM (Particle Swarm Optimization-Support Vector Machine) is proposed. By employing the wavelet transform, the authors decompose the time sequences of power load into high-frequency and low-frequency parts: the low-frequency part is forecast with this model and the high-frequency part with a weighted average method. With PSO, a heuristic bionic optimization algorithm, the authors find preferable parameters for the SVM, and the model proposed in this paper is shown to forecast the 24 h power load more accurately than the BP model.
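    The wavelet decomposition described above splits the load series into a smooth low-frequency part (forecast by the PSO-SVM model) and a rough high-frequency part (forecast by weighted averaging). One level of the Haar wavelet, the simplest such transform, can be sketched in a few lines; the hourly load values below are invented for illustration:

```python
def haar_step(signal):
    """One level of a Haar wavelet transform: split an even-length signal
    into a low-frequency approximation and a high-frequency detail part."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Reconstruct the original signal from the two parts."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

# Hypothetical hourly load curve: smooth trend plus small fluctuations
load = [100, 102, 110, 108, 120, 124, 118, 116]
approx, detail = haar_step(load)
```

    Each part can then be forecast separately and the two forecasts recombined with `haar_inverse`, since the transform is exactly invertible.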

  12. A Statistical Parameter Analysis and SVM Based Fault Diagnosis Strategy for Dynamically Tuned Gyroscopes

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Gyro fault diagnosis plays a critical role in inertial navigation systems for higher reliability and precision. A new fault diagnosis strategy based on statistical parameter analysis (SPA) and a support vector machine (SVM) classification model was proposed for dynamically tuned gyroscopes (DTG). SPA, a kind of time-domain analysis approach, was introduced to compute a set of statistical parameters of the vibration signal as the state features of the DTG, with which an SVM model, a novel learning machine based on statistical learning theory (SLT), was constructed and trained to identify the working state of the DTG. The experimental results verify that the proposed diagnostic strategy can simply and effectively extract the state features of the DTG; it outperforms the radial-basis function (RBF) neural network based diagnostic method and can more reliably and accurately diagnose the working state of the DTG.

  13. Optimal training dataset composition for SVM-based, age-independent, automated epileptic seizure detection.

    Science.gov (United States)

    Bogaarts, J G; Gommer, E D; Hilkman, D M W; van Kranen-Mastenbroek, V H J M; Reulen, J P H

    2016-08-01

    Age-independent seizure detection is possible by training one classifier on EEG data from both neonatal and adult patients. Furthermore, our results indicate that for accurate age-independent seizure detection, it is important that EEG data from each age category are used for classifier training. This is particularly important for neonatal seizure detection. Our results underline the under-appreciated importance of training dataset composition with respect to accurate age-independent seizure detection. PMID:27032931

  14. l2 Multiple Kernel Fuzzy SVM-Based Data Fusion for Improving Peptide Identification.

    Science.gov (United States)

    Jian, Ling; Xia, Zhonghang; Niu, Xinnan; Liang, Xijun; Samir, Parimal; Link, Andrew J

    2016-01-01

    SEQUEST is a database-searching engine that calculates the correlation score between an observed spectrum and a theoretical spectrum deduced from protein sequences stored in a flat text file rather than a relational or object-oriented repository. Nevertheless, the SEQUEST score functions fail to discriminate between true and false PSMs accurately. Some approaches, such as PeptideProphet and Percolator, have been proposed to address the task of distinguishing true and false PSMs. However, most of these methods employ time-consuming learning algorithms to validate peptide assignments [1]. In this paper, we propose a fast algorithm for validating peptide identification by incorporating heterogeneous information from SEQUEST scores and peptide digestion knowledge. To automate the peptide identification process and incorporate additional information, we employ l2 multiple kernel learning (MKL) to implement the current peptide identification task. Results on experimental datasets indicate that, compared with state-of-the-art methods, i.e., PeptideProphet and Percolator, our data fusion strategy has comparable performance but reduces the running time significantly. PMID:26394437

  15. Online Adaptive Error Compensation SVM-Based Sliding Mode Control of an Unmanned Aerial Vehicle

    Directory of Open Access Journals (Sweden)

    Kaijia Xue

    2016-01-01

    An Unmanned Aerial Vehicle (UAV) is a nonlinear dynamic system subject to uncertainties and noise; an appropriate control system is therefore needed to ensure its stabilization and navigation. This paper discusses the control problem of a quad-rotor UAV system influenced by unknown parameters and noise, and proposes a sliding mode control based on an online adaptive error compensation support vector machine (SVM) for stabilizing the system. The sliding mode controller is established by analyzing the quad-rotor dynamics model, in which the unknown parameters are computed by an offline SVM. Because modeling errors and noise both exist during flight, a one-time offline SVM model cannot predict the uncertainties and noise accurately. The control law is therefore adjusted in real time by feeding new training samples to the online adaptive SVM during the control process, so that the stability and robustness of flight are ensured. Simulation experiments demonstrate that a UAV using the online adaptive SVM can track a changing path faster according to its dynamic model. Consequently, the proposed method is shown to achieve a better control effect in the UAV system.

  16. A Genetic Algorithm Optimized Decision Tree-SVM based Stock Market Trend Prediction System

    Directory of Open Access Journals (Sweden)

    Binoy B. Nair

    2010-12-01

    Prediction of stock market trends has been an area of great interest both to researchers attempting to uncover the information hidden in stock market data and to those who wish to profit by trading stocks. The extremely nonlinear nature of stock market data makes it very difficult to design a system that can predict the future direction of the stock market with sufficient accuracy. This work presents a data mining based stock market trend prediction system, which produces highly accurate stock market forecasts. The proposed system is a genetic algorithm optimized decision tree-support vector machine (SVM) hybrid, which can predict one-day-ahead trends in stock markets. The uniqueness of the proposed system lies in the use of the hybrid system, which can adapt itself to changing market conditions, and in the fact that while most attempts at stock market trend prediction have approached it as a regression problem, the present study converts the trend prediction task into a classification problem, thus improving the prediction accuracy significantly. Performance of the proposed hybrid system is validated on historical time series data from the Bombay stock exchange sensitive index (BSE-Sensex). The system performance is then compared to that of an artificial neural network (ANN) based system and a naïve Bayes based system. It is found that the trend prediction accuracy is highest for the hybrid system: the genetic algorithm optimized decision tree-SVM hybrid outperforms both the artificial neural network and the naïve Bayes based trend prediction systems.
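    A genetic algorithm like the one optimizing the hybrid system above evolves a population of candidate solutions through selection, crossover and mutation. A tiny sketch over bit strings follows, with a hypothetical fitness function rewarding a particular feature subset; all names, parameters and values are illustrative assumptions, not taken from the paper:

```python
import random

def ga_select(fitness, n_bits, pop_size=30, n_gen=40, p_mut=0.05, seed=1):
    """Tiny genetic algorithm over bit strings (e.g. feature subsets).

    fitness: function bitstring (list of 0/1) -> score to maximize.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = a[:cut] + b[cut:]
            # Flip each bit independently with probability p_mut.
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            children.append(child)
        pop = parents + children                     # elitist replacement
    return max(pop, key=fitness)

# Hypothetical fitness: reward including "useful" features 0, 2, 4 and
# penalize every selected feature (a stand-in for validation accuracy
# traded off against model complexity).
def fitness(bits):
    useful = {0, 2, 4}
    return sum(1 for i in useful if bits[i]) - 0.3 * sum(bits)

best = ga_select(fitness, n_bits=8)
```

    In the paper's setting, the bit string would instead encode decision tree and SVM hyperparameter choices, and the fitness would be the validation accuracy of the resulting hybrid.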

  17. Importance of Housekeeping gene selection for accurate RT-qPCR in a wound healing model

    OpenAIRE

    Turabelidze, Anna; Guo, Shujuan; DiPietro, Luisa A.

    2010-01-01

    Studies in the field of wound healing have utilized a variety of different housekeeping genes for RT-qPCR analysis. However, nearly all of these studies assume that the selected normalization gene is stably expressed throughout the course of the repair process. The purpose of our current investigation was to identify the most stable housekeeping genes for studying gene expression in mouse wound healing using RT-qPCR. To identify which housekeeping genes are optimal for studying gene expressio...

  18. A simple and accurate two-step long DNA sequences synthesis strategy to improve heterologous gene expression in pichia.

    Directory of Open Access Journals (Sweden)

    Jiang-Ke Yang

    In vitro chemical gene synthesis is a powerful tool to improve the expression of a gene in a heterologous system. In this study, a two-step gene synthesis strategy that combines assembly PCR and overlap extension PCR (AOE) was developed. In this strategy, chemically synthesized oligonucleotides are assembled into several 200-500 bp fragments with 20-25 bp overlaps at each end by assembly PCR, and then an overlap extension PCR is conducted to assemble all these fragments into a full-length DNA sequence. Using this method, we de novo designed and codon-optimized the Rhizopus oryzae lipase gene ROL (810 bp) and the Aspergillus niger phytase gene phyA (1404 bp). Compared with the original ROL and phyA genes, the codon-optimized genes were expressed at a significantly higher level in yeasts after methanol induction. We believe this AOE method to be of special interest as it is simple, accurate and has no limitation with respect to the size of the gene to be synthesized. Combined with de novo design, this method allows the rapid synthesis of a gene optimized for expression in the system of choice and the production of sufficient biological material for molecular characterization and biotechnological application.
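    The first step of the AOE strategy produces 200-500 bp fragments that share 20-25 bp overlaps, so the second-step overlap extension PCR can stitch them into the full-length gene. The fragment layout can be sketched computationally; the sequence below is a synthetic placeholder of the same 810 nt length as the ROL gene, not the actual gene sequence:

```python
def split_with_overlaps(seq, frag_len=300, overlap=25):
    """Split a long sequence into fragments of about frag_len that share
    `overlap` terminal nucleotides with their neighbours."""
    frags, start = [], 0
    while start < len(seq):
        frags.append(seq[start : start + frag_len])
        if start + frag_len >= len(seq):
            break
        start += frag_len - overlap   # step back by the overlap each time
    return frags

# Synthetic 810 nt placeholder sequence (same length as the ROL gene)
seq = "ACGT" * 202 + "AC"
frags = split_with_overlaps(seq)
# Adjacent fragments share exactly 25 nt, so overlap-extension PCR can
# anneal them end-to-end and extend into the full-length sequence.
```

    Dropping each fragment's leading overlap and concatenating recovers the original sequence, which is the in silico analogue of the second PCR step.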

  19. Normalization with genes encoding ribosomal proteins but not GAPDH provides an accurate quantification of gene expressions in neuronal differentiation of PC12 cells

    Directory of Open Access Journals (Sweden)

    Lim Qing-En

    2010-01-01

    Background: Gene regulation at the transcript level can provide a good indication of the complex signaling mechanisms underlying physiological and pathological processes. Transcriptomic methods such as microarray and quantitative real-time PCR require stable reference genes for accurate normalization of gene expression. Some but not all studies have shown that the housekeeping genes (HKGs) β-actin (ACTB) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH), which are routinely used for normalization, may vary significantly depending on the cell/tissue type and experimental conditions. It is currently unclear whether these genes are stably expressed in cells undergoing the drastic morphological changes of neuronal differentiation. A recent meta-analysis of microarray datasets showed that some but not all of the ribosomal protein genes are stably expressed. To test the hypothesis that some ribosomal protein genes can serve as reference genes for neuronal differentiation, a genome-wide analysis was performed and putative reference genes were identified based on the stability of their expression. The stabilities of these potential reference genes were then analyzed by reverse transcription quantitative real-time PCR in six differentiation conditions. Results: Twenty stably expressed genes, including thirteen ribosomal protein genes, were selected from microarray analysis of the gene expression profiles of GDNF- and NGF-induced differentiation of PC12 cells. The expression levels of these candidate genes, as well as ACTB and GAPDH, were further analyzed by reverse transcription quantitative real-time PCR in PC12 cells differentiated with a variety of stimuli including NGF, GDNF, Forskolin, KCl and the ROCK inhibitor Y27632. The performances of these candidate genes as stable reference genes were evaluated with two independent statistical approaches, geNorm and NormFinder. Conclusions: The ribosomal protein genes RPL19 and RPL29 were identified as suitable reference genes

  20. Remotely sensed imagery classification by SVM-based Infinite Ensemble Learning method

    Institute of Scientific and Technical Information of China (English)

    杨娜; 秦志远; 张俊

    2013-01-01

    Support-vector-machine-based Infinite Ensemble Learning (SVM-based IEL) is one of the newer ensemble learning methods in the field of machine learning. In this paper, SVM-based IEL was applied to the classification of remotely sensed imagery alongside classic ensemble learning methods such as Bagging and AdaBoost, as well as SVM itself; SVM was taken as the base classifier in Bagging and AdaBoost. The experiments showed that the classic ensemble learning methods perform differently from SVM: Bagging was capable of enhancing the classification accuracy, whereas AdaBoost decreased it. Furthermore, the experiments suggested that, compared to SVM and the classic finite ensemble learning methods, SVM-based IEL has the merit of significantly increasing both classification accuracy and classification efficiency.

  1. An endometrial gene expression signature accurately predicts recurrent implantation failure after IVF

    Science.gov (United States)

    Koot, Yvonne E. M.; van Hooff, Sander R.; Boomsma, Carolien M.; van Leenen, Dik; Groot Koerkamp, Marian J. A.; Goddijn, Mariëtte; Eijkemans, Marinus J. C.; Fauser, Bart C. J. M.; Holstege, Frank C. P.; Macklon, Nick S.

    2016-01-01

    The primary limiting factor for effective IVF treatment is successful embryo implantation. Recurrent implantation failure (RIF) is a condition whereby couples fail to achieve pregnancy despite consecutive embryo transfers. Here we describe the collection of gene expression profiles from mid-luteal phase endometrial biopsies (n = 115) from women experiencing RIF and healthy controls. Using a signature discovery set (n = 81) we identify a signature containing 303 genes predictive of RIF. Independent validation in 34 samples shows that the gene signature predicts RIF with 100% positive predictive value (PPV). The strength of the RIF-associated expression signature also stratifies RIF patients into distinct groups with different subsequent implantation success rates. Exploration of the expression changes suggests that RIF is primarily associated with reduced cellular proliferation. The gene signature will be of value in counselling and guiding further treatment of women who fail to conceive upon IVF and suggests new avenues for developing interventions. PMID:26797113

  2. Research on GA-SVM-based Rear-end Collision Prediction Approach

    Institute of Scientific and Technical Information of China (English)

    胡满江; 葛如海; 苏清祖

    2012-01-01

    In this paper, a GA-SVM-based approach is proposed to predict rear-end collisions. An SVM-based prediction model is built and its parameters are optimized with a GA to predict the probability of a rear-end collision, with five factors selected as input indexes: vehicle distance, following vehicle speed, the speed difference between the front and following vehicles, braking time and braking deceleration; the probability of collision serves as the output vector. A comparison between simulated and predicted values demonstrates the accuracy of the method.

  3. Comprehensive and accurate mutation scanning of the CFTR gene by two-dimensional DNA electrophoresis

    NARCIS (Netherlands)

    Wu, Y; Hofstra, RMW; Scheffer, H; Uitterlinden, AG; Mullaart, E; Buys, CHCM; Vijg, J

    1996-01-01

    The large number of possible disease causing mutations in the 27 exons of the cystic fibrosis transmembrane conductance regulator (CFTR) gene has severely limited direct diagnosis of cystic fibrosis (CF) patients and carriers by mutation detection. Here we show that in principle testing for mutation

  4. Robust TLR4-induced gene expression patterns are not an accurate indicator of human immunity

    Directory of Open Access Journals (Sweden)

    Davidson Donald J

    2010-01-01

    Background: Activation of Toll-like receptors (TLRs) is widely accepted as an essential event for defence against infection. Many TLRs utilize a common signalling pathway that relies on activation of the kinase IRAK4 and the transcription factor NFκB for the rapid expression of immunity genes. Methods: 21 K DNA microarray technology was used to evaluate LPS-induced (TLR4) gene responses in blood monocytes from a child with an IRAK4 deficiency. In vitro responsiveness to LPS was confirmed by real-time PCR and ELISA and compared to the clinical predisposition of the child and IRAK4-deficient mice to Gram-negative infection. Results: We demonstrated that the vast majority of LPS-responsive genes in IRAK4-deficient monocytes were greatly suppressed, an observation that is consistent with the described role for IRAK4 as an essential component of TLR4 signalling. The severely impaired response to LPS, however, is inconsistent with the remarkably low incidence of Gram-negative infections observed in this child and other children with IRAK4 deficiency. This unpredicted clinical phenotype was validated by demonstrating that IRAK4-deficient mice had a resistance to infection with Gram-negative S. typhimurium similar to that of wildtype mice. A number of immunity genes, such as chemokines, were expressed at normal levels in human IRAK4-deficient monocytes, indicating that particular IRAK4-independent elements within the repertoire of TLR4-induced responses are expressed. Conclusions: Sufficient defence against Gram-negative infection does not require IRAK4 or a robust, 'classic' inflammatory and immune response.

  5. Accurate data processing improves the reliability of Affymetrix gene expression profiles from FFPE samples.

    Directory of Open Access Journals (Sweden)

    Maurizio Callari

    Formalin-fixed paraffin-embedded (FFPE) tumor specimens are the conventionally archived material in clinical practice, representing an invaluable tissue source for biomarker development, validation and routine implementation. For many prospective clinical trials, this material has been collected, allowing for a prospective-retrospective study design, which represents a successful strategy for defining the clinical utility of candidate markers. Gene expression data can be obtained even from FFPE specimens with the broadly used Affymetrix HG-U133 Plus 2.0 microarray platform. Nevertheless, important discrepancies remain between expression data obtained from FFPE and fresh-frozen samples, prompting the need for appropriate data processing that could help obtain more consistent results in downstream analyses. In a publicly available dataset of matched frozen and FFPE expression data, the performances of different normalization methods and specifically designed Chip Description Files (CDFs) were compared. The use of an alternative CDF together with fRMA normalization significantly improved frozen-FFPE sample correlations, frozen-FFPE probeset correlations and the agreement of differential analyses between different tumor subtypes. The relevance of our optimized data processing was assessed and validated using two independent datasets. In this study we demonstrated that appropriate data processing can significantly improve the reliability of gene expression data derived from FFPE tissues using the standard Affymetrix platform. Tools for the implementation of our data processing algorithm are made publicly available at http://www.biocut.unito.it/cdf-ffpe/.

  6. Validation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in strawberry fruits using different cultivars and osmotic stresses.

    Science.gov (United States)

    Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor

    2015-01-10

    The increasing demand for strawberry (Fragaria×ananassa Duch) fruits is associated mainly with their sensorial characteristics and their content of antioxidant compounds. Nevertheless, strawberry production has been hampered by the plant's sensitivity to abiotic stresses. Understanding the molecular mechanisms underlying the stress response is therefore of great importance for genetic engineering approaches aiming to improve strawberry tolerance. However, the study of gene expression in strawberry requires suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and under two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper and the comparative delta-Ct method. The results indicate that expression stability depends on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was the most suitable for normalizing expression data across strawberry cultivars and under drought stress, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic and salt stresses. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were the most unstable genes under all conditions. The expression of the phenylalanine ammonia lyase (PAL) and 9-cis epoxycarotenoid dioxygenase (NCED1) genes was used to further confirm the validated candidate reference genes, showing that use of an inappropriate reference gene may produce erroneous results. This study is the first survey of reference gene stability across strawberry cultivars and osmotic stresses and provides guidelines for obtaining more accurate RT-qPCR results in future breeding efforts.

  7. Toward more accurate ancestral protein genotype-phenotype reconstructions with the use of species tree-aware gene trees.

    Science.gov (United States)

    Groussin, Mathieu; Hobbs, Joanne K; Szöllősi, Gergely J; Gribaldo, Simonetta; Arcus, Vickery L; Gouy, Manolo

    2015-01-01

    The resurrection of ancestral proteins provides direct insight into how natural selection has shaped proteins found in nature. By tracing substitutions along a gene phylogeny, ancestral proteins can be reconstructed in silico and subsequently synthesized in vitro. This elegant strategy reveals the complex mechanisms responsible for the evolution of protein functions and structures. However, to date, all protein resurrection studies have used simplistic approaches for ancestral sequence reconstruction (ASR), including the assumption that a single sequence alignment alone is sufficient to accurately reconstruct the history of the gene family. The impact of such shortcuts on conclusions about ancestral functions has not been investigated. Here, we show with simulations that utilizing information on species history using a model that accounts for the duplication, horizontal transfer, and loss (DTL) of genes statistically increases ASR accuracy. This underscores the importance of the tree topology in the inference of putative ancestors. We validate our in silico predictions using in vitro resurrection of the LeuB enzyme for the ancestor of the Firmicutes, a major and ancient bacterial phylum. With this particular protein, our experimental results demonstrate that information on the species phylogeny results in a biochemically more realistic and kinetically more stable ancestral protein. Additional resurrection experiments with different proteins are necessary to statistically quantify the impact of using species tree-aware gene trees on ancestral protein phenotypes. Nonetheless, our results suggest the need for incorporating both sequence and DTL information in future studies of protein resurrections to accurately define the genotype-phenotype space in which proteins diversify.

  8. ZCURVE 3.0: identify prokaryotic genes with higher accuracy as well as automatically and accurately select essential genes.

    Science.gov (United States)

    Hua, Zhi-Gang; Lin, Yan; Yuan, Ya-Zhou; Yang, De-Chang; Wei, Wen; Guo, Feng-Biao

    2015-07-01

    In 2003, we developed an ab initio program, ZCURVE 1.0, to find genes in bacterial and archaeal genomes. In this work, we present the updated version (i.e. ZCURVE 3.0). On 422 prokaryotic genomes, the average accuracy was 93.7% with the updated version, compared with 88.7% with the original version. These results also demonstrate that ZCURVE 3.0 is comparable with Glimmer 3.02 and may provide complementary predictions to it. In fact, the joint application of the two programs produced better results, correctly finding more annotated genes while yielding fewer false-positive predictions. As an exclusive feature, ZCURVE 3.0 includes a post-processing program that can identify essential genes with high accuracy (generally >90%). We hope ZCURVE 3.0 will receive wide use through its web-based running mode. The updated ZCURVE can be freely accessed from http://cefg.uestc.edu.cn/zcurve/ or http://tubic.tju.edu.cn/zcurveb/ without any restrictions.

  9. Staphylococcus aureus interaction with phospholipid vesicles--a new method to accurately determine accessory gene regulator (agr) activity.

    Directory of Open Access Journals (Sweden)

    Maisem Laabei

    Full Text Available The staphylococcal accessory gene regulator (agr) operon is a well-characterised global regulatory element that is important in the control of virulence gene expression in Staphylococcus aureus, a major human pathogen. Hence, accurate and sensitive measurement of Agr activity is central to understanding the virulence potential of Staphylococcus aureus, especially in the context of Agr dysfunction, which has been linked with persistent bacteraemia and reduced susceptibility to glycopeptide antibiotics. Agr function is typically measured using a synergistic haemolysis CAMP assay, which is believed to report on the expression level of one of the translated products of the agr locus, delta toxin. In this study we develop a vesicle lysis test (VLT) that is specific to small amphipathic peptides, most notably delta and phenol-soluble modulin (PSM) toxins. To determine the accuracy of this VLT method in assaying Agr activity, we compared it to the CAMP assay using 89 clinical Staphylococcus aureus isolates. Of the 89 isolates, 16 were designated as having dysfunctional Agr systems by the CAMP assay, whereas only three were designated as such by the VLT. Molecular analysis demonstrated that of these 16 isolates, the 13 designated as having a functional Agr system by the VLT transcribed rnaIII and secreted delta toxin, demonstrating that they have a functional Agr system despite the results of the CAMP assay. The agr locus of all 16 isolates was sequenced, and only the three designated as having a dysfunctional Agr system contained mutations, explaining their Agr dysfunction. Given the potentially important link between Agr dysfunction and clinical outcome, we have developed an assay that determines this more accurately than the conventional CAMP assay.

  10. Microcalcification detection in full-field digital mammograms with PFCM clustering and weighted SVM-based method

    Science.gov (United States)

    Liu, Xiaoming; Mei, Ming; Liu, Jun; Hu, Wei

    2015-12-01

    Clustered microcalcifications (MCs) in mammograms are an important early sign of breast cancer in women, and their accurate detection is important in computer-aided detection (CADe). In this paper, we integrated the possibilistic fuzzy c-means (PFCM) clustering algorithm and a weighted support vector machine (WSVM) for the detection of MC clusters in full-field digital mammograms (FFDM). For each image, suspicious MC regions are extracted with region growing and active contour segmentation. Geometry and texture features are then extracted for each suspicious MC, a mutual-information-based supervised criterion is used to select important features, and PFCM is applied to cluster the samples into two clusters. Weights of the samples are calculated from the possibility and typicality values produced by PFCM and from the ground-truth labels, and a weighted nonlinear SVM is trained. During testing, when an unknown image is presented, suspicious regions are located with the segmentation step, the selected features are extracted, and the suspicious MC regions are classified as containing MCs or not by the trained weighted nonlinear SVM. Finally, the MC regions are analyzed with spatial information to locate MC clusters. The proposed method is evaluated using a database of 410 clinical mammograms and compared with a standard unweighted support vector machine (SVM) classifier. Detection performance is evaluated using receiver operating characteristic (ROC) curves and free-response receiver operating characteristic (FROC) curves. The proposed method obtained an area under the ROC curve of 0.8676 for MC detection, while the standard SVM obtained an area of 0.8268. For MC cluster detection, the proposed method obtained a high sensitivity of 92% with a false-positive rate of 2.3 clusters/image, again better than the standard SVM, which produced 4.7 false-positive clusters/image at the same sensitivity.
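
    The core idea of a weighted SVM is that each training sample's hinge loss is scaled by a reliability weight (here derived from PFCM possibility/typicality values). An illustrative sketch, not the paper's implementation: a linear soft-margin SVM trained by subgradient descent on a weighted hinge loss, with function names and hyperparameters of our own choosing:

```python
import random

def weighted_svm(X, y, w_samples, lam=0.01, epochs=200, lr=0.1):
    """Train a linear SVM minimizing lam*||w||^2 plus the per-sample
    weighted hinge loss w_samples[i] * max(0, 1 - y_i*(w.x_i + b)).
    X: feature vectors; y: labels in {-1,+1}; w_samples: weights in [0,1]."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    rng = random.Random(0)
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:  # sample violates the margin: weighted push
                w = [wj - lr * (2 * lam * wj - w_samples[i] * y[i] * xj)
                     for wj, xj in zip(w, X[i])]
                b += lr * w_samples[i] * y[i]
            else:           # only the regularizer acts
                w = [wj - lr * 2 * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    """Hard classification by the sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

    Lowering a sample's weight toward 0 makes its margin violations nearly free, which is how unreliable (atypical) samples are prevented from dragging the boundary.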

  11. VICMpred: An SVM-based Method for the Prediction of Functional Proteins of Gram-negative Bacteria Using Amino Acid Patterns and Composition

    Institute of Scientific and Technical Information of China (English)

    Sudipto Saha; G.P.S. Raghava

    2006-01-01

    In this study, an attempt has been made to predict the major functions of gram-negative bacterial proteins from their amino acid sequences. The dataset used for training and testing consists of 670 non-redundant gram-negative bacterial proteins (255 of cellular process, 60 of information molecules, 285 of metabolism, and 70 of virulence factors). First we developed an SVM-based method using amino acid and dipeptide composition and achieved overall accuracies of 52.39% and 47.01%, respectively. We introduced a new concept for the classification of proteins based on tetrapeptides, in which we identified the unique tetrapeptides significantly enriched in a class of proteins. These tetrapeptides were used as the input feature for predicting the function of a protein and achieved an overall accuracy of 68.66%. We also developed a hybrid method in which the tetrapeptide information was used together with amino acid composition and achieved an overall accuracy of 70.75%. Five-fold cross-validation was used to evaluate the performance of these methods. The web server VICMpred has been developed for predicting the function of gram-negative bacterial proteins (http://www.imtech.res.in/raghava/vicmpred/).
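
    The amino acid and dipeptide composition features described above are straightforward to compute: a 20-dimensional and a 400-dimensional percentage vector per sequence. A sketch with function names of our own choosing (not the VICMpred source):

```python
# The 20 standard amino acids in a fixed order, so every sequence
# maps to a feature vector with consistent dimensions.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """20-dimensional amino acid composition (percent of sequence)."""
    seq = seq.upper()
    n = len(seq)
    return [100.0 * seq.count(a) / n for a in AMINO_ACIDS]

def dipeptide_composition(seq):
    """400-dimensional dipeptide composition (percent of all
    overlapping length-2 windows in the sequence)."""
    seq = seq.upper()
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    n = len(pairs)
    return [100.0 * pairs.count(a + b) / n
            for a in AMINO_ACIDS for b in AMINO_ACIDS]
```

    Either vector can be fed directly to an SVM; the abstract's accuracy figures correspond to training on each representation separately.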

  12. A multi-class SVM based on FCOWA-ER

    Institute of Scientific and Technical Information of China (English)

    刘卫兵; 杨艺; 韩德强

    2015-01-01

    When a support vector machine (SVM) is applied to a multi-class classification problem, multiple binary SVMs must be combined to obtain the final decision. Conventional multi-class extensions use only the hard outputs of the binary SVMs, which loses information to some extent. To use the available information more fully, a multi-class SVM algorithm based on an evidential-reasoning multi-attribute decision approach is proposed: the multi-class problem is modelled as a multi-attribute decision-making problem, and a fuzzy-cautious ordered weighted averaging approach with evidential reasoning (FCOWA-ER) is used to implement the multi-class decision. Experimental results show that the proposed method achieves higher classification accuracy than conventional methods.
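
    The conventional hard-output combination criticized above can be sketched as one-vs-one majority voting, where each binary SVM contributes only its winning label and its margin is discarded (the helper below is hypothetical, for illustration):

```python
from itertools import combinations
from collections import Counter

def ovo_vote(classes, decide):
    """One-vs-one multi-class decision from binary hard outputs.
    `decide(i, j)` returns the winner (i or j) of the binary SVM
    trained on classes i vs j; the class with most votes wins.
    Note that any margin/confidence information is lost here,
    which is the information loss FCOWA-ER aims to avoid."""
    votes = Counter(decide(i, j) for i, j in combinations(classes, 2))
    return votes.most_common(1)[0][0]
```

    An evidential-reasoning combination would instead fuse the soft decision values of all pairwise classifiers before committing to a label.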

  13. Selection of accurate reference genes in mouse trophoblast stem cells for reverse transcription-quantitative polymerase chain reaction.

    Science.gov (United States)

    Motomura, Kaori; Inoue, Kimiko; Ogura, Atsuo

    2016-06-17

    Mouse trophoblast stem cells (TSCs) form colonies of different sizes and morphologies, which might reflect their degrees of differentiation. Each colony type can therefore have a characteristic gene expression profile; however, the expression levels of internal reference genes may also change, causing fluctuations in the estimated expression levels of target genes. In this study, we validated seven housekeeping genes using a geometric averaging method and identified Gapdh as the most stable gene across different colony types. Indeed, when Gapdh was used as the reference, expression levels of Elf5, a TSC marker gene, stringently classified TSC colonies into two groups: a high-expression group consisting of type 1 and 2 colonies, and a lower-expression group consisting of type 3 and 4 colonies. This clustering was consistent with our putative classification of undifferentiated/differentiated colonies based on their time-dependent colony transitions. By contrast, use of an unstable reference gene (Rn18s) allowed no such clear classification. Cdx2, another TSC marker, did not show any significant colony type-specific expression pattern irrespective of the reference gene. Selection of stable reference genes for quantitative gene expression analysis might be critical, especially when cell lines consisting of heterogeneous cell populations are used. PMID:26853688
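
    The geometric averaging method referred to above follows the geNorm idea: a candidate gene's stability measure M is the mean, over all other candidates, of the standard deviation of their pairwise log-ratios across samples, and lower M means more stable. A minimal sketch (our own implementation, not the study's code):

```python
import math
from statistics import pstdev

def genorm_m(expr):
    """geNorm-style stability. expr: dict mapping gene name to a list
    of expression values across the same samples. Returns a dict
    gene -> M, where M_j is the mean over other genes k of the
    standard deviation of log2(expr_j / expr_k) across samples."""
    genes = list(expr)
    m = {}
    for j in genes:
        sds = []
        for k in genes:
            if k == j:
                continue
            ratios = [math.log2(a / b) for a, b in zip(expr[j], expr[k])]
            sds.append(pstdev(ratios))
        m[j] = sum(sds) / len(sds)
    return m
```

    Two genes whose expression rises and falls in proportion have a constant log-ratio (standard deviation 0), so a gene tracking the overall trend gets a low M while an erratic gene gets a high one.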

  14. Accurate breast cancer diagnosis through real-time PCR her-2 gene quantification using immunohistochemically-identified biopsies

    OpenAIRE

    MENDOZA, GRETEL; PORTILLO, AMELIA; Olmos-Soto, Jorge

    2012-01-01

    her-2 gene amplification and its overexpression in breast cancer cells is directly associated with aggressive clinical behavior. The her-2 gene and its Her-2 protein have been utilized for disease diagnosis and as a predictive marker for treatment response to the antibody herceptin. Fluorescent in situ hybridization (FISH) and immunohistochemistry (IHC) are the most common FDA-approved methodologies involving gene and protein quantification, respectively. False positive or negative her-2/Her-...

  15. A scalable and accurate targeted gene assembly tool (SAT-Assembler) for next-generation sequencing data.

    Directory of Open Access Journals (Sweden)

    Yuan Zhang

    2014-08-01

    Full Text Available Gene assembly, which recovers gene segments from short reads, is an important step in functional analysis of next-generation sequencing data. Lacking quality reference genomes, de novo assembly is commonly used for RNA-Seq data of non-model organisms and metagenomic data. However, heterogeneous sequence coverage caused by heterogeneous expression or species abundance, similarity between isoforms or homologous genes, and large data size all pose challenges to de novo assembly. As a result, existing assembly tools tend to output fragmented contigs or chimeric contigs, or have high memory footprint. In this work, we introduce a targeted gene assembly program SAT-Assembler, which aims to recover gene families of particular interest to biologists. It addresses the above challenges by conducting family-specific homology search, homology-guided overlap graph construction, and careful graph traversal. It can be applied to both RNA-Seq and metagenomic data. Our experimental results on an Arabidopsis RNA-Seq data set and two metagenomic data sets show that SAT-Assembler has smaller memory usage, comparable or better gene coverage, and lower chimera rate for assembling a set of genes from one or multiple pathways compared with other assembly tools. Moreover, the family-specific design and rapid homology search allow SAT-Assembler to be naturally compatible with parallel computing platforms. The source code of SAT-Assembler is available at https://sourceforge.net/projects/sat-assembler/. The data sets and experimental settings can be found in supplementary material.

  16. Identification of a 251-gene expression signature that can accurately detect M. tuberculosis in patients with and without HIV co-infection.

    Directory of Open Access Journals (Sweden)

    Noor Dawany

    Full Text Available BACKGROUND: Co-infection with tuberculosis (TB) is the leading cause of death in HIV-infected individuals. However, diagnosing TB, especially in the presence of an HIV co-infection, can be challenging owing to the high inaccuracy associated with conventional diagnostic methods. Here we report a gene signature that can identify a tuberculosis infection both in patients co-infected with HIV and in the absence of HIV. METHODS: We analyzed global gene expression data from peripheral blood mononuclear cell (PBMC) samples of patients that were either mono-infected with HIV or co-infected with HIV/TB and used support vector machines to identify a gene signature that can distinguish between the two classes. We then validated our results using publicly available gene expression data from patients mono-infected with TB. RESULTS: Our analysis identified a 251-gene signature that accurately distinguishes patients co-infected with HIV/TB from those infected with HIV only, with an overall accuracy of 81.4% (sensitivity = 76.2%, specificity = 86.4%). Furthermore, we show that our 251-gene signature can also accurately distinguish patients with active TB in the absence of an HIV infection from both patients with a latent TB infection and healthy controls (88.9-94.7% accuracy; 69.2-90% sensitivity and 90.3-100% specificity). We also demonstrate that the expression levels of the 251-gene signature diminish as a correlate of the length of TB treatment. CONCLUSIONS: A 251-gene signature is described to (a) detect TB in the presence or absence of an HIV co-infection, and (b) assess response to treatment following anti-TB therapy.
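
    The accuracy, sensitivity and specificity figures quoted above come from a standard confusion-matrix calculation over the classifier's predictions; a sketch with a hypothetical helper name:

```python
def diagnostic_metrics(y_true, y_pred):
    """Binary diagnostic metrics (1 = disease-positive).
    sensitivity = TP / (TP + FN): fraction of true cases caught;
    specificity = TN / (TN + FP): fraction of non-cases cleared."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```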

  17. Integrative structural annotation of de novo RNA-Seq provides an accurate reference gene set of the enormous genome of the onion (Allium cepa L.)

    OpenAIRE

    Kim, Seungill; Kim, Myung-Shin; Kim, Yong-Min; Yeom, Seon-In; Cheong, Kyeongchae; Kim, Ki-Tae; Jeon, Jongbum; Kim, Sunggil; Kim, Do-Sun; Sohn, Seong-Han; Lee, Yong-Hwan; Choi, Doil

    2014-01-01

    The onion (Allium cepa L.) is one of the most widely cultivated and consumed vegetable crops in the world. Although a considerable amount of onion transcriptome data has been deposited into public databases, the sequences of the protein-coding genes are not accurate enough to be used, owing to non-coding sequences intermixed with the coding sequences. We generated a high-quality, annotated onion transcriptome from de novo sequence assembly and intensive structural annotation using the integra...

  18. Identification and validation of reference genes for accurate normalization of real-time quantitative PCR data in kiwifruit.

    Science.gov (United States)

    Ferradás, Yolanda; Rey, Laura; Martínez, Óscar; Rey, Manuel; González, Ma Victoria

    2016-05-01

    Identification and validation of reference genes are required for the normalization of qPCR data. We studied the expression stability obtained with eight primer pairs amplifying four common genes used as references for normalization. Samples representing different tissues, organs and developmental stages in kiwifruit (Actinidia chinensis var. deliciosa (A. Chev.) A. Chev.) were used. A total of 117 kiwifruit samples were divided into five sample sets (mature leaves, axillary buds, stigmatic arms, fruit flesh and seeds). All samples were also analysed as a single set. The expression stability of the candidate primer pairs was tested using three algorithms (geNorm, NormFinder and BestKeeper). The minimum number of reference genes necessary for normalization was also determined. A unique primer pair was selected for amplifying the 18S rRNA gene. The primer pair selected for amplifying the ACTIN gene differed depending on the sample set. 18S 2 and ACT 2 were the candidate primer pairs selected for normalization in three sample sets (mature leaves, fruit flesh and stigmatic arms), and 18S 2 and ACT 3 were the pairs selected in axillary buds. No primer pair could be selected as the reference for the seed sample set, and the analysis of all samples as a single set did not yield any stably expressed primer pair. Considering data previously reported in the literature, we validated the selected primer pairs using the FLOWERING LOCUS T gene, for use in the normalization of gene expression in kiwifruit.

  19. Gene Expression Ratios Lead to Accurate and Translatable Predictors of DR5 Agonism across Multiple Tumor Lineages.

    Directory of Open Access Journals (Sweden)

    Anupama Reddy

    Full Text Available Death Receptor 5 (DR5) agonists demonstrate anti-tumor activity in preclinical models but have yet to demonstrate robust clinical responses. A key limitation may be the lack of patient selection strategies to identify those most likely to respond to treatment. To overcome this limitation, we screened a DR5 agonist Nanobody across >600 cell lines representing 21 tumor lineages and assessed molecular features associated with response. High expression of DR5 and Casp8 was significantly associated with sensitivity, but their expression thresholds were difficult to translate due to low dynamic ranges. To address the translational challenge of establishing thresholds of gene expression, we developed a classifier based on ratios of genes that predicted response across lineages. The ratio classifier outperformed the DR5+Casp8 classifier, as well as standard approaches for feature selection and classification that use genes instead of ratios. This classifier was independently validated using 11 primary patient-derived pancreatic xenograft models, showing perfect predictions as well as a striking linearity between prediction probability and anti-tumor response. A network analysis of the genes in the ratio classifier captured important biological relationships mediating drug response, specifically identifying key positive and negative regulators of DR5-mediated apoptosis, including DR5, CASP8, BID, cFLIP, XIAP and PEA15. Importantly, the ratio classifier shows translatability across gene expression platforms (from Affymetrix microarrays to RNA-seq) and across model systems (in vitro to in vivo). Our approach of using gene expression ratios presents a robust and novel method for constructing translatable biomarkers of compound response, which can also probe the underlying biology of treatment response.
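
    The feature construction behind a ratio classifier can be sketched simply: every pair of signature genes contributes one expression ratio, and because a ratio compares two measurements from the same sample it cancels sample- and platform-specific intensity scales. The helper below is our own illustration, not the paper's code:

```python
from itertools import combinations

def ratio_features(expr, genes):
    """Build pairwise expression-ratio features for one sample.
    expr: dict gene -> expression level; genes: signature genes.
    Returns a dict mapping 'A/B' to expr[A] / expr[B]."""
    return {f"{a}/{b}": expr[a] / expr[b]
            for a, b in combinations(genes, 2)}
```

    A classifier trained on these features needs no absolute intensity threshold, which is the translatability argument made in the abstract.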

  20. Single-Copy Green Fluorescent Protein Gene Fusions Allow Accurate Measurement of Salmonella Gene Expression In Vitro and during Infection of Mammalian Cells

    OpenAIRE

    Hautefort, Isabelle; Proença, Maria José; Hinton, Jay C. D.

    2003-01-01

    We developed a reliable and flexible green fluorescent protein (GFP)-based system for measuring gene expression in individual bacterial cells. Until now, most systems have relied upon plasmid-borne gfp gene fusions, risking problems associated with plasmid instability. We show that a recently developed GFP variant, GFP+, is suitable for assessing bacterial gene expression. Various gfp+ transcriptional fusions were constructed and integrated as single copies into the chromosome of Salmonella e...

  1. Comprehensive quality control utilizing the prehybridization third-dye image leads to accurate gene expression measurements by cDNA microarrays

    Directory of Open Access Journals (Sweden)

    Jiang Nan

    2006-08-01

    Full Text Available Abstract Background Gene expression profiling using microarrays has become an important genetic tool. Spotted arrays prepared in academic labs have the advantage of low cost and high design and content flexibility, but are often limited by their susceptibility to quality control (QC) issues. Previously, we reported a novel 3-color microarray technology that enabled array fabrication QC. In this report we further investigated its advantage in spot-level data QC. Results We found that an inadequate amount of bound probe available for hybridization led to significant, gene-specific compression in ratio measurements, increased data variability, and printing-pin-dependent heterogeneities. The impact of such problems can be captured through the definition of quality scores, and efficiently controlled through quality-dependent filtering and normalization. We compared gene expression measurements derived using our data processing pipeline with the known input ratios of spiked-in control clones, and with measurements by quantitative real-time RT-PCR. In each case, highly linear relationships (R2 > 0.94) were observed, with modest compression in the microarray measurements (correction factor …). Conclusion Our microarray analytical and technical advancements enabled a better dissection of the sources of data variability and hence a more efficient QC. With that, highly accurate gene expression measurements can be achieved using the cDNA microarray technology.

  2. Accurate Profiling of Gene Expression and Alternative Polyadenylation with Whole Transcriptome Termini Site Sequencing (WTTS-Seq)

    Science.gov (United States)

    Zhou, Xiang; Li, Rui; Michal, Jennifer J.; Wu, Xiao-Lin; Liu, Zhongzhen; Zhao, Hui; Xia, Yin; Du, Weiwei; Wildung, Mark R.; Pouchnik, Derek J.; Harland, Richard M.; Jiang, Zhihua

    2016-01-01

    Construction of next-generation sequencing (NGS) libraries involves RNA manipulation, which often creates noisy, biased, and artifactual data that contribute to errors in transcriptome analysis. In this study, a total of 19 whole transcriptome termini site sequencing (WTTS-seq) and seven RNA sequencing (RNA-seq) libraries were prepared from Xenopus tropicalis adult and embryo samples to determine the most effective library preparation method to maximize transcriptomics investigation. We strongly suggest that appropriate primers/adaptors are designed to inhibit amplification detours and that PCR overamplification is minimized to maximize transcriptome coverage. Furthermore, genome annotation must be improved so that missing data can be recovered. In addition, a complete understanding of sequencing platforms is critical to limit the formation of false-positive results. Technically, the WTTS-seq method enriches both poly(A)+ RNA and complementary DNA, adds 5′- and 3′-adaptors in one step, pursues strand sequencing and mapping, and profiles both gene expression and alternative polyadenylation (APA). Although RNA-seq is cost prohibitive, tends to produce false-positive results, and fails to detect APA diversity and dynamics, its combination with WTTS-seq is necessary to validate transcriptome-wide APA. PMID:27098915

  3. Accurate Profiling of Gene Expression and Alternative Polyadenylation with Whole Transcriptome Termini Site Sequencing (WTTS-Seq).

    Science.gov (United States)

    Zhou, Xiang; Li, Rui; Michal, Jennifer J; Wu, Xiao-Lin; Liu, Zhongzhen; Zhao, Hui; Xia, Yin; Du, Weiwei; Wildung, Mark R; Pouchnik, Derek J; Harland, Richard M; Jiang, Zhihua

    2016-06-01

    Construction of next-generation sequencing (NGS) libraries involves RNA manipulation, which often creates noisy, biased, and artifactual data that contribute to errors in transcriptome analysis. In this study, a total of 19 whole transcriptome termini site sequencing (WTTS-seq) and seven RNA sequencing (RNA-seq) libraries were prepared from Xenopus tropicalis adult and embryo samples to determine the most effective library preparation method to maximize transcriptomics investigation. We strongly suggest that appropriate primers/adaptors are designed to inhibit amplification detours and that PCR overamplification is minimized to maximize transcriptome coverage. Furthermore, genome annotation must be improved so that missing data can be recovered. In addition, a complete understanding of sequencing platforms is critical to limit the formation of false-positive results. Technically, the WTTS-seq method enriches both poly(A)+ RNA and complementary DNA, adds 5'- and 3'-adaptors in one step, pursues strand sequencing and mapping, and profiles both gene expression and alternative polyadenylation (APA). Although RNA-seq is cost prohibitive, tends to produce false-positive results, and fails to detect APA diversity and dynamics, its combination with WTTS-seq is necessary to validate transcriptome-wide APA. PMID:27098915

  4. Rapid and accurate identification of Mycobacterium tuberculosis complex and common non-tuberculous mycobacteria by multiplex real-time PCR targeting different housekeeping genes.

    Science.gov (United States)

    Nasr Esfahani, Bahram; Rezaei Yazdi, Hadi; Moghim, Sharareh; Ghasemian Safaei, Hajieh; Zarkesh Esfahani, Hamid

    2012-11-01

    Rapid and accurate identification of mycobacterial isolates from primary culture is important for timely and appropriate antibiotic therapy. Conventional methods for identifying Mycobacterium species based on biochemical tests need several weeks and may remain inconclusive. In this study, a novel multiplex real-time PCR was developed for rapid identification of the genus Mycobacterium, the Mycobacterium tuberculosis complex (MTC) and the most common non-tuberculous mycobacterial species, including M. abscessus, M. fortuitum, M. avium complex, M. kansasii and M. gordonae, in three reaction tubes under the same PCR conditions. Genetic targets for primer design included the 16S rDNA gene, the dnaJ gene, the gyrB gene and the internal transcribed spacer (ITS). The multiplex real-time PCR was set up with reference Mycobacterium strains and was subsequently tested on 66 clinical isolates. Multiplex real-time PCR results were analyzed via melting curves, and the melting temperatures (Tm) of the Mycobacterium genus, MTC, and each of the non-tuberculous Mycobacterium species were determined. Multiplex real-time PCR results were compared with amplification and sequencing of the 16S-23S rDNA ITS for identification of Mycobacterium species. Sensitivity and specificity of the designed primers were each 100% for MTC, M. abscessus, M. fortuitum, M. avium complex, M. kansasii and M. gordonae. Sensitivity and specificity of the designed primers for the genus Mycobacterium were 96% and 100%, respectively. According to these results, we conclude that this multiplex real-time PCR with melting curve analysis and these novel primers can be used for rapid and accurate identification of the genus Mycobacterium, MTC, and the most common non-tuberculous Mycobacterium species.
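
    Melting-curve discrimination works because amplicons of different length and GC content dissociate at different temperatures, giving each target a characteristic Tm. As a rough illustration only (assay design relies on nearest-neighbour thermodynamic models, not this), the classic Wallace rule estimates Tm of a short oligonucleotide from its base counts:

```python
def wallace_tm(oligo):
    """Wallace-rule melting temperature estimate for short
    oligonucleotides: 2 degrees C per A/T base, 4 degrees C per G/C
    base. Only a first approximation, but it shows why GC-rich
    products separate upward on a melt curve."""
    s = oligo.upper()
    return 2 * (s.count("A") + s.count("T")) + 4 * (s.count("G") + s.count("C"))
```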

  5. An SVM-based method for predicting cigarette sales volume

    Institute of Scientific and Technical Information of China (English)

    武牧; 林慧苹; 李素科; 吴明治; 王治国; 吴高峰

    2016-01-01

    To address the poor accuracy of existing linear regression methods for predicting municipal-level cigarette sales volume, an SVM-based hybrid prediction method (SHPM) was designed and implemented based on the support vector machine (SVM), taking the cigarette sales volume of China Tobacco Hunan Industrial Company Limited as the research object. SHPM was compared with linear regression, ARIMA (autoregressive integrated moving average) and plain SVM in municipal-level cigarette sales prediction experiments. The results show that applying SVM to cigarette sales prediction is feasible: the prediction error of SHPM was 9.58% lower than that of SVM, 11.83% lower than that of linear regression, and 45.79% lower than that of ARIMA. SHPM is therefore an effective method for predicting municipal-level cigarette sales volume.

  6. The detection of the methylated Wif-1 gene is more accurate than a fecal occult blood test for colorectal cancer screening.

    Directory of Open Access Journals (Sweden)

    Aurelien Amiot

    Full Text Available The clinical benefit of guaiac fecal occult blood tests (FOBT) is now well established for colorectal cancer screening. Growing evidence has demonstrated that epigenetic modifications and fecal microbiota changes, also known as dysbiosis, are associated with CRC pathogenesis and might be used as surrogate markers of CRC. We performed a cross-sectional study that included all consecutive subjects that were referred (from 2003 to 2007) for screening colonoscopies. Prior to colonoscopy, effluents (fresh stools, sera-S and urine-U) were harvested and FOBTs performed. Methylation levels were measured in stools, S and U for 3 genes (Wif1, ALX-4, and Vimentin) selected from a panel of 63 genes; Kras mutations and seven dominant and subdominant bacterial populations in stools were quantified. Calibration was assessed with the Hosmer-Lemeshow chi-square, and discrimination was determined by calculating the C-statistic (Area Under Curve) and Net Reclassification Improvement index. There were 247 individuals (mean age 60.8±12.4 years, 52% of males) in the study group, and 90 (36%) of these individuals were patients with advanced polyps or invasive adenocarcinomas. A multivariate model adjusted for age and FOBT led to a C-statistic of 0.83 [0.77-0.88]. After supplementary sequential (one-by-one) adjustment, Wif-1 methylation (S or U) and fecal microbiota dysbiosis led to increases of the C-statistic to 0.90 [0.84-0.94] (p = 0.02) and 0.81 [0.74-0.86] (p = 0.49), respectively. When adjusted jointly for FOBT and Wif-1 methylation or fecal microbiota dysbiosis, the increase of the C-statistic was even more significant (0.91 and 0.85, p<0.001 and p = 0.10, respectively). The detection of methylated Wif-1 in either S or U has a higher performance accuracy compared to guaiac FOBT for advanced colorectal neoplasia screening. Conversely, fecal microbiota dysbiosis detection was not more accurate. Blood and urine testing could be used in those individuals reluctant to

  7. The detection of the methylated Wif-1 gene is more accurate than a fecal occult blood test for colorectal cancer screening

    KAUST Repository

    Amiot, Aurelien

    2014-07-15

    Background: The clinical benefit of guaiac fecal occult blood tests (FOBT) is now well established for colorectal cancer screening. Growing evidence has demonstrated that epigenetic modifications and fecal microbiota changes, also known as dysbiosis, are associated with CRC pathogenesis and might be used as surrogate markers of CRC. Patients and Methods: We performed a cross-sectional study that included all consecutive subjects that were referred (from 2003 to 2007) for screening colonoscopies. Prior to colonoscopy, effluents (fresh stools, sera-S and urine-U) were harvested and FOBTs performed. Methylation levels were measured in stools, S and U for 3 genes (Wif1, ALX-4, and Vimentin) selected from a panel of 63 genes; Kras mutations and seven dominant and subdominant bacterial populations in stools were quantified. Calibration was assessed with the Hosmer-Lemeshow chi-square, and discrimination was determined by calculating the C-statistic (Area Under Curve) and Net Reclassification Improvement index. Results: There were 247 individuals (mean age 60.8±12.4 years, 52% of males) in the study group, and 90 (36%) of these individuals were patients with advanced polyps or invasive adenocarcinomas. A multivariate model adjusted for age and FOBT led to a C-statistic of 0.83 [0.77-0.88]. After supplementary sequential (one-by-one) adjustment, Wif-1 methylation (S or U) and fecal microbiota dysbiosis led to increases of the C-statistic to 0.90 [0.84-0.94] (p = 0.02) and 0.81 [0.74-0.86] (p = 0.49), respectively. When adjusted jointly for FOBT and Wif-1 methylation or fecal microbiota dysbiosis, the increase of the C-statistic was even more significant (0.91 and 0.85, p<0.001 and p = 0.10, respectively). Conclusion: The detection of methylated Wif-1 in either S or U has a higher performance accuracy compared to guaiac FOBT for advanced colorectal neoplasia screening. Conversely, fecal microbiota dysbiosis detection was not more accurate. Blood and urine testing could be
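
    The evaluation design described above, comparing a logistic model's C-statistic before and after adding a candidate marker, can be sketched as follows; all data and coefficients are synthetic stand-ins, not study values.

```python
# Sketch of the study's evaluation: a logistic model's C-statistic
# (ROC AUC) with and without an added marker. Data are synthetic; only
# the procedure mirrors the design, not the reported numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 247                                   # study-sized synthetic cohort
age = rng.normal(60.8, 12.4, n)
fobt = rng.integers(0, 2, n).astype(float)
marker = rng.normal(0, 1, n)              # hypothetical methylation score
logit = -3 + 0.02 * age + 1.2 * fobt + 1.5 * marker
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

base_X = np.column_stack([age, fobt])
full_X = np.column_stack([age, fobt, marker])
base = LogisticRegression(max_iter=1000).fit(base_X, y)
full = LogisticRegression(max_iter=1000).fit(full_X, y)
auc_base = roc_auc_score(y, base.predict_proba(base_X)[:, 1])
auc_full = roc_auc_score(y, full.predict_proba(full_X)[:, 1])
print(f"C-statistic: base {auc_base:.2f} -> with marker {auc_full:.2f}")
```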

  8. Active learning algorithm for SVM based on QBC

    Institute of Scientific and Technical Information of China (English)

    徐海龙; 别晓峰; 冯卉; 吴天爱

    2015-01-01

    To address the difficulty of obtaining large numbers of labeled samples and the class-imbalanced datasets encountered in training a support vector machine (SVM), an active learning algorithm for SVM based on query by committee (QBC-ASVM) is proposed, which efficiently combines improved QBC active learning with a weighted SVM. QBC active learning is used to select the samples that are most valuable to the current SVM classifier, and the weighted SVM is used to reduce the impact of imbalanced datasets on SVM active learning. The experimental results show that, compared with passive SVM, the proposed approach considerably reduces the number of labeled samples and the labeling cost while maintaining the same classification accuracy, and that it improves generalization performance and speeds up SVM training.
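
    The committee idea can be sketched as a minimal QBC loop for SVM with scikit-learn; the weighted SVM against class imbalance is omitted, and the committee size, query budget, and data are illustrative assumptions, not the paper's settings.

```python
# Minimal query-by-committee (QBC) active learning for SVM. The paper's
# QBC-ASVM additionally weights the SVM against class imbalance; that
# part is omitted, and all sizes/parameters here are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=10, random_state=1)

# Start with 5 labeled samples per class; the rest form the unlabeled pool
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(300) if i not in labeled]

for _ in range(20):                       # 20 query rounds
    votes = []
    for _ in range(5):                    # committee of 5 bootstrap SVMs
        while True:                       # resample until both classes present
            idx = rng.choice(labeled, size=len(labeled), replace=True)
            if len(set(y[idx])) == 2:
                break
        votes.append(SVC(kernel="rbf").fit(X[idx], y[idx]).predict(X[pool]))
    votes = np.array(votes)
    # Query the pool sample the committee disagrees on most
    maj = (votes.mean(axis=0) > 0.5).astype(int)
    disagreement = (votes != maj).mean(axis=0)
    labeled.append(pool.pop(int(np.argmax(disagreement))))  # oracle labels it

final = SVC(kernel="rbf").fit(X[labeled], y[labeled])
print("labeled:", len(labeled), "accuracy:", round(final.score(X, y), 3))
```

    The point of the loop is that each query goes to the most contested sample, so far fewer labels are needed than with random sampling.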

  9. Validating Internal Control Genes for the Accurate Normalization of qPCR Expression Analysis of the Novel Model Plant Setaria viridis.

    Directory of Open Access Journals (Sweden)

    Julia Lambret-Frotté

    Full Text Available Employing reference genes to normalize the data generated with quantitative PCR (qPCR) can increase the accuracy and reliability of this method. Previous results have shown that no single housekeeping gene can be universally applied to all experiments. Thus, the identification of a suitable reference gene represents a critical step of any qPCR analysis. Setaria viridis has recently been proposed as a model system for the study of Panicoid grasses, a crop family of major agronomic importance. Therefore, this paper aims to identify suitable S. viridis reference genes that can enhance the analysis of gene expression in this novel model plant. The first aim of this study was the identification of a suitable RNA extraction method that could retrieve a high quality and yield of RNA. After this, two distinct algorithms were used to assess the gene expression of fifteen different candidate genes in eighteen different samples, which were divided into two major datasets, the developmental and the leaf gradient. The best-ranked pair of reference genes from the developmental dataset included genes that encoded a phosphoglucomutase and a folylpolyglutamate synthase; genes that encoded a cullin and the same phosphoglucomutase as above were the most stable genes in the leaf gradient dataset. Additionally, the expression pattern of two target genes, a SvAP3/PI MADS-box transcription factor and the carbon-fixation enzyme PEPC, were assessed to illustrate the reliability of the chosen reference genes. This study has shown that novel reference genes may perform better than traditional housekeeping genes, a phenomenon which has been previously reported. These results illustrate the importance of carefully validating reference gene candidates for each experimental set before employing them as universal standards. Additionally, the robustness of the expression of the target genes may increase the utility of S. viridis as a model for Panicoid grasses.
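
    The stability-ranking step can be illustrated with a geNorm-style measure; the abstract does not name the two algorithms used, so this is a generic sketch on synthetic expression values: for each candidate gene, M is the mean standard deviation of its pairwise log-ratios against the other candidates, and a lower M indicates a more stable reference.

```python
# geNorm-style stability score M, sketched on synthetic data: gene 2 is
# deliberately given a larger expression variance, so it should rank as
# the least stable reference candidate.
import numpy as np

rng = np.random.default_rng(6)
# 18 samples x 4 candidate genes; per-gene log-variability is invented
expr = 2.0 ** rng.normal(0, [0.2, 0.25, 0.8, 0.3], (18, 4))

def stability_M(expr):
    logr = np.log2(expr)
    g = expr.shape[1]
    M = []
    for j in range(g):
        # sd of the log-ratio of gene j against every other candidate
        sds = [np.std(logr[:, j] - logr[:, k]) for k in range(g) if k != j]
        M.append(np.mean(sds))
    return np.array(M)

M = stability_M(expr)
print("stability M per gene:", np.round(M, 2), "-> least stable: gene", M.argmax())
```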

  10. The Rice Enhancer of Zeste [E(z)] Genes SDG711 and SDG718 are respectively involved in Long Day and Short Day Signaling to Mediate the Accurate Photoperiod Control of Flowering time

    OpenAIRE

    Xiaoyun Liu; Chao Zhou; Yu Zhao; Shaoli Zhou; Wentao Wang; Dao-Xiu Zhou

    2014-01-01

    Recent advances in rice flowering studies have shown that the accurate control of flowering by photoperiod is regulated by key mechanisms that involve the regulation of flowering genes including Hd1, Ehd1, Hd3a, and RFT1. The chromatin mechanism involved in the regulation of rice flowering genes is presently not well known. Here we show that the rice E(z) genes SDG711 and SDG718, which encode the Polycomb Repressive Complex2 (PRC2) key subunit that is required for trimethylation of histone H3...

  11. The rice enhancer of zeste [E(z)] genes SDG711 and SDG718 are respectively involved in long day and short day signaling to mediate the accurate photoperiod control of flowering time

    OpenAIRE

    Liu, Xiaoyun; Zhou, Chao; Zhao, Yu; Zhou, Shaoli; Wang, Wentao; Zhou, Dao-Xiu

    2014-01-01

    Recent advances in rice flowering studies have shown that the accurate control of flowering by photoperiod is regulated by key mechanisms that involve the regulation of flowering genes including Heading date1 (Hd1), Early hd1 (Ehd1), Hd3a, and RFT1. The chromatin mechanism involved in the regulation of rice flowering genes is presently not well known. Here we show that the rice enhancer of zeste [E(z)] genes SDG711 and SDG718, which encode the polycomb repressive complex2 (PRC2) key subunit t...

  12. Rapid and accurate determination of tissue optical properties using least-squares support vector machines.

    Science.gov (United States)

    Barman, Ishan; Dingari, Narahara Chari; Rajaram, Narasimhan; Tunnell, James W; Dasari, Ramachandra R; Feld, Michael S

    2011-01-01

    Diffuse reflectance spectroscopy (DRS) has been extensively applied for the characterization of biological tissue, especially for dysplasia and cancer detection, by determination of the tissue optical properties. A major challenge in performing routine clinical diagnosis lies in the extraction of the relevant parameters, especially at high absorption levels typically observed in cancerous tissue. Here, we present a new least-squares support vector machine (LS-SVM) based regression algorithm for rapid and accurate determination of the absorption and scattering properties. Using physical tissue models, we demonstrate that the proposed method can be implemented more than two orders of magnitude faster than the state-of-the-art approaches while providing better prediction accuracy. Our results show that the proposed regression method has great potential for clinical applications including in tissue scanners for cancer margin assessment, where rapid quantification of optical properties is critical to the performance. PMID:21412464
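
    The speed advantage reported above comes from the LS-SVM formulation, which replaces the SVM quadratic program with a single linear system. A minimal RBF-kernel sketch on synthetic data follows; C, gamma, and the data are illustrative choices, not the paper's tissue models.

```python
# LS-SVM regression: training solves one (n+1)x(n+1) linear system,
# which is the source of the speed-up over QP-based SVM training.
import numpy as np

def lssvm_fit(X, y, C=10.0, gamma=1.0):
    # RBF kernel matrix
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    n = len(y)
    # LS-SVM dual system: [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, coefficients alpha

def lssvm_predict(Xtr, b, alpha, Xte, gamma=1.0):
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ alpha + b

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, (80, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 80)
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print("train RMSE:", round(rmse, 4))
```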

  13. MotorPlex provides accurate variant detection across large muscle genes both in single myopathic patients and in pools of DNA samples

    OpenAIRE

    Savarese, Marco; Di Fruscio, Giuseppina; Mutarelli, Margherita; Torella, Annalaura; Magri, Francesca; Santorelli, Filippo Maria; Comi, Giacomo Pietro; Bruno, Claudio; Nigro, Vincenzo

    2014-01-01

    Mutations in ~100 genes cause muscle diseases with complex and often unexplained genotype/phenotype correlations. Next-generation sequencing studies identify a greater-than-expected number of genetic variations in the human genome. This suggests that existing clinical monogenic testing systematically misses very relevant information. We have created a core panel of genes that cause all known forms of nonsyndromic muscle disorders (MotorPlex). It comprises 93 loci, among which are the largest an...

  14. Immediate-Early Gene Transcriptional Activation in Hippocampus CA1 and CA3 Does Not Accurately Reflect Rapid, Pattern Completion-Based Retrieval of Context Memory

    Science.gov (United States)

    Pevzner, Aleksandr; Guzowski, John F.

    2015-01-01

    No studies to date have examined whether immediate-early gene (IEG) activation is driven by context memory recall. To address this question, we utilized the context preexposure facilitation effect (CPFE) paradigm. In CPFE, animals acquire contextual fear conditioning through hippocampus-dependent rapid retrieval of a previously formed contextual…

  15. Genes optimized by evolution for accurate and fast translation encode in Archaea and Bacteria a broad and characteristic spectrum of protein functions

    Directory of Open Access Journals (Sweden)

    Merkl Rainer

    2010-11-01

    Full Text Available Abstract Background In many microbial genomes, a strong preference for a small number of codons can be observed in genes whose products are needed by the cell in large quantities. This codon usage bias (CUB) improves translational accuracy and speed and is one of several factors optimizing cell growth. Whereas CUB and the overrepresentation of individual proteins have been studied in detail, it is still unclear which high-level metabolic categories are subject to translational optimization in different habitats. Results In a systematic study of 388 microbial species, we have identified for each genome a specific subset of genes characterized by a marked CUB, which we named the effectome. As expected, gene products related to protein synthesis are abundant in both archaeal and bacterial effectomes. In addition, enzymes contributing to energy production and gene products involved in protein folding and stabilization are overrepresented. The comparison of genomes from eleven habitats shows that the environment has only a minor effect on the composition of the effectomes. As a paradigmatic example, we detailed the effectome content of 37 bacterial genomes that are most likely exposed to strongest selective pressure towards translational optimization. These effectomes accommodate a broad range of protein functions like enzymes related to glycolysis/gluconeogenesis and the TCA cycle, ATP synthases, aminoacyl-tRNA synthetases, chaperones, proteases that degrade misfolded proteins, protectants against oxidative damage, as well as cold shock and outer membrane proteins. Conclusions We made clear that effectomes consist of specific subsets of the proteome being involved in several cellular functions. As expected, some functions are related to cell growth and affect speed and quality of protein synthesis. Additionally, the effectomes contain enzymes of central metabolic pathways and cellular functions sustaining microbial life under stress situations. These

  16. Speaking Fluently And Accurately

    Institute of Scientific and Technical Information of China (English)

    Joseph DeVeto

    2004-01-01

    Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students speak not only more accurately but also more fluently. That technique is dictations.

  17. Accurate Finite Difference Algorithms

    Science.gov (United States)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  18. Image Retrieval by Convex Hulls of Interest Points and SVM-Based Weighted Feedback

    Institute of Scientific and Technical Information of China (English)

    苏小红; 丁进; 马培军

    2009-01-01

    To overcome the shortcomings of image retrieval methods based on annular color histograms, a new image feature extraction method based on convex hulls of interest points is presented. Interest points are first detected with a wavelet transform; their convex hulls are then computed recursively, and the points on each hull are assigned to buckets by a specific algorithm, with a color histogram computed for each bucket. The similarity of two images is defined through the similarity between corresponding bucket histograms. This feature extraction method effectively suppresses the influence of isolated interest points, and, combined with the spatial dispersion of interest points and Gabor wavelet texture features, it yields an image retrieval system with higher retrieval precision. Furthermore, a novel relevance feedback method is proposed that improves query-point-movement feedback by setting weights according to support vector machine classification results. Experiments on a real image database show that this feedback method raises retrieval precision by about 20% and recall by about 10%.

  19. Accurate molecular classification of cancer using simple rules

    OpenAIRE

    Gotoh Osamu; Wang Xiaosheng

    2009-01-01

    Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensionality gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often ...

  20. SVM Based Identification of Psychological Personality Using Handwritten Text

    Directory of Open Access Journals (Sweden)

    Syeda Asra

    2016-04-01

    Full Text Available Identification of personality is a complex process. To ease this process, a model is developed using cursive handwriting. Area-based, width-based and height-based thresholds are set for character, word and line selection; the rest is treated as noise. A feature vector is then constructed from slope features, shape features, and edge detection performed with a Sobel filter and a direction histogram. The analysis is based on the direction of the handwriting: writing that rises to the right shows optimism and cheerfulness, writing that sags to the right shows physical or mental weariness, and lines that stay straight reveal over-control compensating for an inner fear of losing control. The analysis was performed on both single and multiple lines, and these simple techniques provided good results: 95% accuracy for single lines and 91% for multiple lines. Classification is done using an SVM classifier.

  1. SVM-based failure detection of GHT localizations

    Science.gov (United States)

    Blaffert, T.; Lorenz, C.; Nickisch, H.; Peters, J.; Weese, J.

    2016-03-01

    This paper addresses the localization of anatomical structures in medical images by a Generalized Hough Transform (GHT). As localization is often a pre-requisite for subsequent model-based segmentation, it is important to assess whether or not the GHT was able to locate the desired object. The GHT by its construction does not make this distinction. We present an approach to detect incorrect GHT localizations by deriving collective features of contributing GHT model points and by training a Support Vector Machine (SVM) classifier. On a training set of 204 cases, we demonstrate that for the detection of incorrect localizations classification errors of down to 3% are achievable. This is three times less than the observed intrinsic GHT localization error.

  2. A NEW SVM BASED EMOTIONAL CLASSIFICATION OF IMAGE

    Institute of Scientific and Technical Information of China (English)

    Wang Weining; Yu Yinglin; Zhang Jianchao

    2005-01-01

    This paper presents how a high-level emotional representation of art paintings can be inferred from perceptual-level features suited to the particular classes (dynamic vs. static classification). The key points are feature selection and classification. Based on the strong relationship between the notable lines of an image and human sensations, a novel feature vector, WLDLV (Weighted Line Direction-Length Vector), is proposed, which includes both the orientation and length information of lines in an image. Classification is performed by SVM (Support Vector Machine), and images can be classified as dynamic or static. Experimental results demonstrate the effectiveness and superiority of the algorithm.

  3. Automatic Parameters Selection for SVM Based on PSO

    Institute of Scientific and Technical Information of China (English)

    ZHANG Mingfeng; ZHU Yinghua; ZHENG Xu; LIU Yu

    2007-01-01

    Motivated by the fact that automatic parameter selection is an important issue in making the Support Vector Machine (SVM) practically useful, and that the commonly used Leave-One-Out (LOO) method is computationally complex and time-consuming, an effective strategy for automatic parameter selection for SVM is proposed in this paper using Particle Swarm Optimization (PSO). Simulation results on a practical data model demonstrate the effectiveness and high efficiency of the proposed approach.
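
    A compact version of such a PSO-based parameter search can be sketched as below; the swarm size, inertia and acceleration constants, the log-scaled search range, and the 3-fold cross-validation fitness are illustrative choices on a standard dataset, not the paper's settings.

```python
# Particle-swarm search over SVM hyper-parameters (C, gamma), with
# cross-validation accuracy as the fitness. All constants illustrative.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(3)

def fitness(p):
    # Particles move in log10-space of (C, gamma)
    C, gamma = 10.0 ** p
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n, dims, iters = 8, 2, 10
pos = rng.uniform(-3, 3, (n, dims))          # log10(C), log10(gamma)
vel = np.zeros((n, dims))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    # Standard PSO update: inertia + cognitive + social terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("best CV accuracy:", round(pbest_f.max(), 3),
      "at C=%.3g, gamma=%.3g" % tuple(10.0 ** gbest))
```

    Each cross-validation fit replaces a full LOO evaluation here, which is where the efficiency gain over LOO-based selection comes from.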

  4. Oil spill detection from SAR image using SVM based classification

    Directory of Open Access Journals (Sweden)

    A. A. Matkan

    2013-09-01

    Full Text Available In this paper, the potential of fully polarimetric L-band SAR data for detecting sea oil spills is investigated using polarimetric decompositions and texture analysis with an SVM classifier. First, power and magnitude measurements of the HH and VV polarization modes and the Pauli, Freeman and Krogager decompositions are computed and fed to the SVM classifier. Texture features (mean, variance, contrast and dissimilarity) are then extracted and used for identification with SVM. Experiments are conducted on fully polarimetric SAR data acquired by the PALSAR sensor of the ALOS satellite on August 25, 2006. In the first step, an accuracy assessment indicated overall accuracies of 78.92% for the power measurement of the VV polarization and 96.46% for the Krogager decomposition. With texture analysis, the results improved to 96.44% for the mean of the power and magnitude measurements of the HH and VV polarizations and 96.65% for the Krogager decomposition. The results show that the Krogager polarimetric decomposition performs satisfactorily for detecting oil spills on the sea surface and that texture analysis gives good results.

  5. Incremental Training for SVM-Based Classification with Keyword Adjusting

    Institute of Scientific and Technical Information of China (English)

    SUN Jin-wen; YANG Jian-wu; LU Bin; XIAO Jian-guo

    2004-01-01

    This paper analyzes the theory of incremental learning for SVM (support vector machine) and points out a shortcoming of existing research on SVM incremental learning: only support vector optimization is considered. Based on the significance of keywords in training, a new incremental training method with keyword adjusting is proposed, which eliminates the difference between incremental and batch learning through the keyword adjustment. The experimental results show that the improved method outperforms the method without keyword adjusting and achieves the same precision as the batch method.

  6. SVM-based prediction of caspase substrate cleavage sites

    OpenAIRE

    Wee, Lawrence JK; Tan, Tin Wee; Ranganathan, Shoba

    2006-01-01

    Background Caspases belong to a class of cysteine proteases which function as critical effectors in apoptosis and inflammation by cleaving substrates immediately after unique sites. Prediction of such cleavage sites will complement structural and functional studies on substrates cleavage as well as discovery of new substrates. Recently, different computational methods have been developed to predict the cleavage sites of caspase substrates with varying degrees of success. As the support vector...

  7. Gene

    Data.gov (United States)

    U.S. Department of Health & Human Services — Gene integrates information from a wide range of species. A record may include nomenclature, Reference Sequences (RefSeqs), maps, pathways, variations, phenotypes,...

  8. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail

  9. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    Analysis and optimization methods for the design of advanced printed reflectarrays have been investigated, and the study is focused on developing an accurate and efficient simulation tool. For the analysis, a good compromise between accuracy and efficiency can be obtained using the spectral domain...

  10. Profitable capitation requires accurate costing.

    Science.gov (United States)

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing methods (ABC). Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799

  11. Circadian transitions in radiation dose-dependent augmentation of mRNA levels for DNA damage-induced genes elicited by accurate real-time RT-PCR quantification

    International Nuclear Information System (INIS)

    Molecular mechanisms of the intracellular response to DNA damage caused by exposure to ionizing radiation have been studied. For cells isolated from the living bodies of humans and experimental animals, alteration of responsiveness by physiological oscillations such as the circadian rhythm must be considered. To examine the circadian variation in the response of the p53-responsive genes p21, mdm2, bax, and puma, we established a method to quantify their mRNA levels with high reproducibility and accuracy based on real-time reverse transcription polymerase chain reaction (RT-PCR) and compared the levels of responsiveness in mouse hemocytes after diurnal irradiation with those after nocturnal irradiation. Augmentation of p21 and mdm2 mRNA levels with growth arrest and of puma mRNA before apoptosis was confirmed by a time-course experiment in RAW264.7 cells, and dose-dependent increases in the peak levels of all the RNAs were shown. Similarly, the relative RNA levels of p21, mdm2, bax, and puma per glyceraldehyde-3-phosphate dehydrogenase (GAPDH) also increased dose-dependently in peripheral blood and bone marrow cells isolated from whole-body-irradiated mice. Induction levels of all messages were reduced by half after nighttime irradiation as compared with daytime irradiation in blood cells. In marrow cells, nighttime irradiation enhanced the p21 and mdm2 mRNA levels relative to daytime irradiation. No significant difference in bax or puma mRNA levels was observed between nighttime and daytime irradiation in marrow cells. This suggests that early-stage cellular responsiveness of DNA damage-induced genes is modulated between diurnal and nocturnal irradiation. (author)

  12. Accurate molecular classification of cancer using simple rules

    Directory of Open Access Journals (Sweden)

    Gotoh Osamu

    2009-10-01

    Full Text Available Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensionality gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.
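
    The central point, that one well-chosen gene with a simple threshold rule can classify, can be illustrated with a brute-force search for the best single (gene, threshold) rule. Plain training accuracy stands in for the rough-set depended degree used in the paper, and the data are synthetic stand-ins for expression profiles.

```python
# Brute-force search for the single best threshold rule over synthetic
# "expression" data, with one informative gene planted at index 17.
import numpy as np

rng = np.random.default_rng(4)
n, g = 40, 200                              # 40 samples, 200 "genes"
X = rng.normal(0, 1, (n, g))
y = rng.integers(0, 2, n)
X[:, 17] += 2.5 * y                         # plant one informative gene

best = (0.0, None, None)
for j in range(g):
    for t in X[:, j]:                       # candidate thresholds
        # try the rule "gene j > t" and its complement
        acc = max(np.mean((X[:, j] > t) == y),
                  np.mean((X[:, j] <= t) == y))
        if acc > best[0]:
            best = (acc, j, t)

acc, gene, thr = best
print(f"best rule: gene {gene} > {thr:.2f}, accuracy {acc:.2f}")
```

    A LOOCV estimate, as in the paper, would additionally guard against the overfitting this in-sample search invites.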

  13. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power......-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
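
    The quantity being estimated can be written compactly: directivity is the peak radiation intensity relative to the average over the sphere, with the total radiated power approximated from the sampled power density. The quadrature weights $w_i$ below are generic placeholders for those the spherical-wave expansion prescribes.

```latex
D = \frac{4\pi\, U_{\max}}{P_{\mathrm{rad}}},
\qquad
P_{\mathrm{rad}} = \oint_{4\pi} U(\theta,\varphi)\,\mathrm{d}\Omega
\;\approx\; \sum_{i=1}^{N} w_i\, U(\theta_i,\varphi_i)
```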

  14. Application of multilocus sequence analysis (MLSA) for accurate identification of Legionella spp. Isolated from municipal fountains in Chengdu, China, based on 16S rRNA, mip, and rpoB genes.

    Science.gov (United States)

    Guan, Wang; Xu, Ying; Chen, Da-Li; Xu, Jia-Nan; Tian, Yu; Chen, Jian-Ping

    2012-02-01

    Legionellosis (Legionnaires' disease; LD) is a form of severe pneumonia caused by species of Legionella bacteria. Because inhalation of Legionella-contaminated aerosol is considered the major infection route, routine assessments of potential infection sources such as hot water systems, air-conditioner cooling water, and municipal fountains are of great importance. In this study, we utilized in vitro culture and multilocus sequence analysis (MLSA) targeting 16S rRNA, mip, rpoB, and mip-rpoB concatenation to isolate and identify Legionella spp. from 5 municipal fountains in Chengdu City, Sichuan Province, China. Our results demonstrated that 16S rRNA was useful for initial identification, as it could recognize isolates robustly at the genus level, while the genes mip, rpoB, and mip-rpoB concatenation could confidently discriminate Legionella species. Notably, the three subspecies of L. pneumophila could be distinguished by the analysis based on rpoB. The serotyping result of strain CD-1 was consistent with genetic analysis based on the concatenation of mip and rpoB. Despite regular maintenance and sanitizing methods, 4 of the 5 municipal fountains investigated in this study were positive for Legionella contamination. Thus, regularly scheduled monitoring of municipal fountains is urgently needed as well as vigilant disinfection. Although the application of MLSA for inspection of potential sites of infection in public areas is not standard procedure, further investigations may prove its usefulness. PMID:22367947

  15. Parameters for accurate genome alignment

    Directory of Open Access Journals (Sweden)

    Hamada Michiaki

    2010-02-01

    Full Text Available Abstract Background Genome sequence alignments form the basis of much research. Genome alignment depends on various mundane but critical choices, such as how to mask repeats and which score parameters to use. Surprisingly, there has been no large-scale assessment of these choices using real genomic data. Moreover, rigorous procedures to control the rate of spurious alignment have not been employed. Results We have assessed 495 combinations of score parameters for alignment of animal, plant, and fungal genomes. As our gold-standard of accuracy, we used genome alignments implied by multiple alignments of proteins and of structural RNAs. We found the HOXD scoring schemes underlying alignments in the UCSC genome database to be far from optimal, and suggest better parameters. Higher values of the X-drop parameter are not always better. E-values accurately indicate the rate of spurious alignment, but only if tandem repeats are masked in a non-standard way. Finally, we show that γ-centroid (probabilistic) alignment can find highly reliable subsets of aligned bases. Conclusions These results enable more accurate genome alignment, with reliability measures for local alignments and for individual aligned bases. This study was made possible by our new software, LAST, which can align vertebrate genomes in a few hours http://last.cbrc.jp/.
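The sensitivity of alignment results to score parameters, which this record quantifies for whole genomes, can be illustrated with a deliberately tiny gapless local aligner; the two sequences and both (match, mismatch) schemes below are arbitrary examples, not LAST's actual substitution matrices or gap costs.

```python
def best_gapless_score(a, b, match, mismatch):
    """Best-scoring gapless local alignment (Smith-Waterman restricted to single diagonals)."""
    best = 0
    for offset in range(-(len(b) - 1), len(a)):
        running = 0
        for i in range(max(0, offset), min(len(a), offset + len(b))):
            j = i - offset
            # Local alignment: a running score never drops below zero.
            running = max(0, running + (match if a[i] == b[j] else mismatch))
            best = max(best, running)
    return best

seq1, seq2 = "ACGTACGT", "ACGAACGT"           # hypothetical fragments with one substitution
print(best_gapless_score(seq1, seq2, 1, -1))  # 6: a lenient mismatch penalty bridges the substitution
print(best_gapless_score(seq1, seq2, 1, -3))  # 4: a strict penalty favors the exact block only
```

The same pair of sequences yields different best alignments under different parameters, which is why the large-scale parameter assessment the record reports matters.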

  16. Accurate ab initio spin densities

    CERN Document Server

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  17. The Accurate Particle Tracer Code

    CERN Document Server

    Wang, Yulei; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...

  18. Accurate thickness measurement of graphene

    Science.gov (United States)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  20. A More Accurate Fourier Transform

    CERN Document Server

    Courtney, Elya

    2015-01-01

    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
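The FFT-versus-explicit-integral comparison can be reproduced in miniature. The sketch below (an assumed test signal, NumPy in place of FFTW3) estimates a cosine's frequency from the coarse FFT bin grid and from a rectangle-rule Fourier integral evaluated on a much finer frequency grid.

```python
import numpy as np

fs, n = 100.0, 1000                 # sample rate (Hz) and number of samples
t = np.arange(n) / fs
f_true = 3.37                       # deliberately between FFT bins
x = np.cos(2 * np.pi * f_true * t)

# FFT estimate: limited to bins spaced fs/n = 0.1 Hz apart.
spec = np.abs(np.fft.rfft(x))
f_fft = np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]

# Explicit-integral estimate: rectangle-rule Fourier integral on a 0.001 Hz grid.
f_grid = np.arange(3.0, 4.0, 0.001)
amps = np.abs(np.exp(-2j * np.pi * np.outer(f_grid, t)) @ x) / n
f_ei = f_grid[np.argmax(amps)]

print(f_fft, f_ei)  # the EI estimate lands far closer to 3.37 than the nearest FFT bin
```

This reproduces, in toy form, the record's claim that peak-frequency errors are several times larger for the FFT than for explicit-integral evaluation.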

  1. Clinical application and comparison of rapid and accurate identification of Mycobacterium by gene chip microarray and smear acid-fast staining

    Institute of Scientific and Technical Information of China (English)

    侯沪; 李爱敏; 唐曙明

    2015-01-01

    Objective: To compare the clinical performance of gene chip microarray for rapid identification of Mycobacterium with that of classic smear acid-fast staining microscopy, and to assess the advantages and disadvantages of the two methods. Methods: From 2011 to 2014, gene chip microarray and smear acid-fast staining were used to test specimens with clinically suspected mycobacterial infection from all general hospitals in Shenzhen City. The chi-square test was used to compare the positive rates of the two methods. Results: A total of 2,481 clinical specimens were collected. With the smear acid-fast staining technique, 193 positive specimens were found, a positive rate of 7.8%; meanwhile, 317 positive samples were detected by gene chip microarray, a positive rate of 12.8%. The positive rate of gene chip microarray was significantly higher than that of smear acid-fast staining (P < 0.05). The 317 positive samples identified by gene chip microarray included 263 cases of Mycobacterium tuberculosis, 27 cases of Mycobacterium abscessus, 18 cases of Mycobacterium intracellulare, 3 cases of Mycobacterium gastri, 3 cases of Mycobacterium avium, 1 case of Mycobacterium gordonae, 1 case of Mycobacterium marinum, and 1 case of Mycobacterium kansasii. Conclusion: Gene chip microarray technology is fast and accurate, and its positive rate is higher than that of smear acid-fast staining. Classification and identification of Mycobacterium species is very helpful for individualized clinical treatment of mycobacterial infections.

  2. 38 CFR 4.46 - Accurate measurement.

    Science.gov (United States)

    2010-07-01

    38 CFR, Pensions, Bonuses, and Veterans' Relief, Schedule for Rating Disabilities, Disability Ratings, The Musculoskeletal System, § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  3. Laboratory Building for Accurate Determination of Plutonium

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also key to chemical measurement transfer and the basis of the nuclear material balance. An

  4. Invariant Image Watermarking Using Accurate Zernike Moments

    Directory of Open Access Journals (Sweden)

    Ismail A. Ismail

    2010-01-01

    Full Text Available Problem statement: Digital image watermarking is the most popular method for image authentication, copyright protection and content description. Zernike moments are the most widely used moments in image processing and pattern recognition. The magnitudes of Zernike moments are rotation invariant, so they can be used just as a watermark signal or be further modified to carry embedded data. Zernike moments computed in Cartesian coordinates are not accurate due to geometrical and numerical errors. Approach: In this study, we employed a robust image-watermarking algorithm using accurate Zernike moments. These moments are computed in polar coordinates, where both approximation and geometric errors are removed. Accurate Zernike moments are used in image watermarking and proved to be robust against different kinds of geometric attacks. The performance of the proposed algorithm is evaluated using standard images. Results: Experimental results show that accurate Zernike moments achieve a higher degree of robustness than approximated ones against rotation, scaling, flipping, shearing and affine transformation. Conclusion: By computing accurate Zernike moments, the embedded watermark bits can be extracted at a low error rate.

  5. Identification of Gene-Expression Signatures and Protein Markers for Breast Cancer Grading and Staging.

    Directory of Open Access Journals (Sweden)

    Fang Yao

    Full Text Available The grade of a cancer is a measure of the cancer's malignancy level, and the stage of a cancer refers to the size and the extent that the cancer has spread. Here we present a computational method for prediction of gene signatures and blood/urine protein markers for breast cancer grades and stages based on RNA-seq data, which are retrieved from the TCGA breast cancer dataset and cover 111 pairs of disease and matching adjacent noncancerous tissues with pathologist-assigned stages and grades. By applying a differential expression and an SVM-based classification approach, we found that 324 and 227 genes in cancer have their expression levels consistently up-regulated vs. their matching controls in a grade- and stage-dependent manner, respectively. By using these genes, we predicted a 9-gene panel as a gene signature for distinguishing poorly differentiated from moderately and well differentiated breast cancers, and a 19-gene panel as a gene signature for discriminating between the moderately and well differentiated breast cancers. Similarly, a 30-gene panel and a 21-gene panel are predicted as gene signatures for distinguishing advanced stage (stages III-IV) from early stage (stages I-II) cancer samples and for distinguishing stage II from stage I samples, respectively. We expect these gene panels can be used as gene-expression signatures for cancer grade and stage classification. In addition, of the 324 grade-dependent genes, 188 and 66 encode proteins that are predicted to be blood-secretory and urine-excretory, respectively; and of the 227 stage-dependent genes, 123 and 51 encode proteins predicted to be blood-secretory and urine-excretory, respectively. We anticipate that some combinations of these blood and urine proteins could serve as markers for monitoring breast cancer at specific grades and stages through blood and urine tests.
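A minimal sketch of the SVM-based classification step this record relies on, using scikit-learn rather than the authors' pipeline and synthetic expression values in place of TCGA RNA-seq data; the panel size echoes the 9-gene signature but everything else is invented.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, n_genes = 40, 9                 # e.g. a 9-gene panel
low_grade = rng.normal(0.0, 1.0, (n_per_class, n_genes))
high_grade = rng.normal(1.5, 1.0, (n_per_class, n_genes))  # panel up-regulated with grade

X = np.vstack([low_grade, high_grade])
y = np.array([0] * n_per_class + [1] * n_per_class)

clf = SVC(kernel="linear").fit(X, y)         # linear SVM on the gene panel
print(clf.score(X, y))                       # training accuracy on the toy data
```

In practice the panel would be chosen by differential expression first, and accuracy would be reported on held-out samples rather than the training set.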

  6. Accurate tracking control in LOM application

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The fabrication of an accurate prototype directly from a CAD model in a short time depends on accurate tracking control and reference trajectory planning in Laminated Object Manufacture (LOM) applications. An improvement in contour accuracy is acquired by the introduction of a tracking controller and a trajectory generation policy. A model of the X-Y positioning system of the LOM machine is developed as the design basis of the tracking controller. The ZPETC (Zero Phase Error Tracking Controller) is used to eliminate single-axis following error, thus reducing the contour error. The simulation is developed on a Matlab model based on a retrofitted LOM machine and satisfactory results are acquired.

  7. BASIC: A Simple and Accurate Modular DNA Assembly Method.

    Science.gov (United States)

    Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S

    2017-01-01

    Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2].

  8. Accurate atomic data for industrial plasma applications

    Energy Technology Data Exchange (ETDEWEB)

    Griesmann, U.; Bridges, J.M.; Roberts, J.R.; Wiese, W.L.; Fuhr, J.R. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)

    1997-12-31

    Reliable branching fraction, transition probability and transition wavelength data for radiative dipole transitions of atoms and ions in plasma are important in many industrial applications. Optical plasma diagnostics and modeling of the radiation transport in electrical discharge plasmas (e.g. in electrical lighting) depend on accurate basic atomic data. NIST has an ongoing experimental research program to provide accurate atomic data for radiative transitions. The new NIST UV-vis-IR high resolution Fourier transform spectrometer has become an excellent tool for accurate and efficient measurements of numerous transition wavelengths and branching fractions in a wide wavelength range. Recently, the authors have also begun to employ photon counting techniques for very accurate measurements of branching fractions of weaker spectral lines with the intent to improve the overall accuracy for experimental branching fractions to better than 5%. They have now completed their studies of transition probabilities of Ne I and Ne II. The results agree well with recent calculations and for the first time provide reliable transition probabilities for many weak intercombination lines.

  9. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    Science.gov (United States)

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  10. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan;

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood, both for routes traveled as well as for sub-routes thereof. In ... are collected within the building complex. Results indicate that InTraTime is superior with respect to metrics such as deployment cost, maintenance cost and estimation accuracy, yielding an average deviation from actual travel times of 11.7%. This accuracy was achieved despite using a minimal-effort setup...

  11. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Full Text Available Modern cochlear implant (CI users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
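The beat phenomenon underlying this tuning strategy follows from a trigonometric identity: the sum of two near-identical tones equals a carrier modulated at the difference frequency. The quick numerical check below uses arbitrary reference and string frequencies to confirm the identity.

```python
import numpy as np

fs = 8000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
f_ref, f_string = 110.0, 110.7     # reference A2 vs. a slightly sharp string (arbitrary values)
mix = np.sin(2 * np.pi * f_ref * t) + np.sin(2 * np.pi * f_string * t)

# sin A + sin B = 2 * sin((A + B)/2) * cos((A - B)/2):
# a (f_ref + f_string)/2 carrier whose envelope beats at |f_ref - f_string| = 0.7 Hz.
carrier_with_beats = (2 * np.sin(np.pi * (f_ref + f_string) * t)
                        * np.cos(np.pi * (f_ref - f_string) * t))
print(np.allclose(mix, carrier_with_beats))  # True
```

Nulling the 0.7 Hz amplitude modulation is a temporal task, which is why CI users can tune accurately without fine pitch discrimination.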

  12. Accurate Finite Difference Methods for Option Pricing

    OpenAIRE

    Persson, Jonas

    2006-01-01

    Stock options are priced numerically using space- and time-adaptive finite difference methods. European options on one and several underlying assets are considered. These are priced with adaptive numerical algorithms including a second order method and a more accurate method. For American options we use the adaptive technique to price options on one stock with and without stochastic volatility. In all these methods emphasis is put on the control of errors to fulfill predefined tolerance level...

  13. Accurate variational forms for multiskyrmion configurations

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, A.D.; Weiss, C.; Wirzba, A.; Lande, A.

    1989-04-17

    Simple variational forms are suggested for the fields of a single skyrmion on a hypersphere, S₃(L), and of a face-centered cubic array of skyrmions in flat space, R₃. The resulting energies are accurate at the level of 0.2%. These approximate field configurations provide a useful alternative to brute-force solutions of the corresponding Euler equations.

  14. Efficient Accurate Context-Sensitive Anomaly Detection

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, which is based on static analysis of binary executables. The CPDA model incorporates the optimized call stack walk and code instrumentation techniques to gain complete context information. Thereby the proposed method can detect more attacks, while retaining good performance.

  15. Accurate phase-shift velocimetry in rock

    Science.gov (United States)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  16. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
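The core computation, PCA of an estimated correlation matrix via eigendecomposition, can be sketched with toy coordinates in which two "atoms" share a common motion; the data and dimensions are invented, and the paper's maximum likelihood estimator is replaced here by a plain sample correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, n_atoms = 50, 20
shared = rng.normal(size=n_models)           # common motion shared by two atoms
coords = rng.normal(scale=0.1, size=(n_models, n_atoms))
coords[:, 0] += shared
coords[:, 1] += shared

corr = np.corrcoef(coords, rowvar=False)     # atom-atom correlation matrix (20 x 20)
eigvals, eigvecs = np.linalg.eigh(corr)      # PCA = eigendecomposition of the correlation matrix
top_mode = eigvecs[:, -1]                    # dominant mode of structural correlation
print(sorted(np.argsort(np.abs(top_mode))[-2:].tolist()))  # [0, 1]: the co-moving atoms
```

The dominant eigenvector concentrates its weight on the two correlated atoms, which is the kind of positional correlation the "PCA plots" of the record would color-code onto a structure.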

  17. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    Science.gov (United States)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.l%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  18. A gene expression biomarker accurately predicts estrogen receptor α modulation in a human gene expression compendium

    Science.gov (United States)

    The EPA’s vision for the Endocrine Disruptor Screening Program (EDSP) in the 21st Century (EDSP21) includes utilization of high-throughput screening (HTS) assays coupled with computational modeling to prioritize chemicals with the goal of eventually replacing current Tier 1...

  19. Accurate measurement of unsteady state fluid temperature

    Science.gov (United States)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water since its temperature is known. At the beginning the thermometers are at the ambient temperature, and then they are immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new design of thermometer was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than measurements using industrial thermometers in conjunction with a simple temperature correction based on a first- or second-order inertia model. By comparing the results, it was demonstrated that the new thermometer allows obtaining the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurements of fast-changing fluid temperature are possible due to the low-inertia thermometer and the fast space marching method applied for solving the inverse heat conduction problem.
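    The first-order correction referred to above can be sketched as follows. This is a minimal illustration assuming a simple exponential-lag thermometer model, T_fluid = T_indicated + τ·dT_indicated/dt; the function name and the finite-difference scheme are ours, not from the paper:

```python
def correct_first_order(t, temp_indicated, tau):
    """Recover fluid temperature from a first-order inertia thermometer:
    T_fluid = T_indicated + tau * dT_indicated/dt, with the derivative
    estimated by central differences (one-sided at the ends)."""
    n = len(t)
    corrected = []
    for i in range(n):
        j0, j1 = max(i - 1, 0), min(i + 1, n - 1)
        dTdt = (temp_indicated[j1] - temp_indicated[j0]) / (t[j1] - t[j0])
        corrected.append(temp_indicated[i] + tau * dTdt)
    return corrected
```

    For a thermometer with time constant τ suddenly immersed in fluid at a fixed temperature, this recovers the fluid temperature long before the indicated reading converges.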

  20. New law requires 'medically accurate' lesson plans.

    Science.gov (United States)

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  1. Niche Genetic Algorithm with Accurate Optimization Performance

    Institute of Scientific and Technical Information of China (English)

    LIU Jian-hua; YAN De-kun

    2005-01-01

    Based on a crowding mechanism, a novel niche genetic algorithm was proposed which can record the evolutionary direction dynamically during evolution. After evolution, the solutions' precision can be greatly improved by means of local searching along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although this method takes more time than the standard GA, it is well worth applying in cases that demand high solution precision.
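    The crowding mechanism at the core of such niche GAs can be sketched as below. This toy version omits the paper's recorded-direction local search and works on a one-dimensional real-valued problem; all names and parameters are illustrative:

```python
import random

def niche_ga(fitness, bounds, pop_size=40, generations=200, sigma=0.1, seed=0):
    """Toy niche GA with crowding replacement: a mutated offspring replaces
    the most similar existing individual, and only if it is fitter.  This
    keeps distinct niches (peaks) alive instead of collapsing the whole
    population onto one optimum."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        parent = max(rng.sample(pop, 3), key=fitness)           # tournament selection
        child = min(hi, max(lo, parent + rng.gauss(0, sigma)))  # Gaussian mutation
        nearest = min(range(pop_size), key=lambda i: abs(pop[i] - child))
        if fitness(child) > fitness(pop[nearest]):              # crowding replacement
            pop[nearest] = child
    return pop
```

    On a bimodal fitness landscape the final population typically retains members near both peaks, which is the point of crowding.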

  2. Investigations on Accurate Analysis of Microstrip Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min; Sørensen, S. B.; Kim, Oleksiy S.;

    2011-01-01

    An investigation on accurate analysis of microstrip reflectarrays is presented. Sources of error in reflectarray analysis are examined and solutions to these issues are proposed. The focus is on two sources of error, namely the determination of the equivalent currents to calculate the radiation pattern, and the inaccurate mutual coupling between array elements due to the lack of periodicity. To serve as reference, two offset reflectarray antennas have been designed, manufactured and measured at the DTU-ESA Spherical Near-Field Antenna Test Facility. Comparisons of simulated and measured data are...

  3. Accurate diagnosis is essential for amebiasis

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is the most widely distributed parasite in the world. In particular, Entamoeba histolytica infection in the developing countries is a significant health problem in amebiasis-endemic areas with a significant impact on infant mortality[1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improving the quality of parasitological methods and widespread use of accurate techniques have improved our knowledge about the disease.

  4. Universality: Accurate Checks in Dyson's Hierarchical Model

    Science.gov (United States)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^-8 and Δ = 0.4259469 ± 10^-7, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.

  5. Fast and Accurate Construction of Confidence Intervals for Heritability.

    Science.gov (United States)

    Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

    2016-06-01

    Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
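    The bootstrap idea behind such confidence intervals can be illustrated generically. Note that ALBI uses a parametric bootstrap tailored to the distribution of LMM heritability estimators near the parameter boundary; the sketch below is only the plain nonparametric percentile analogue, with illustrative names:

```python
import random

def bootstrap_ci(data, estimator, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval: resample the data with
    replacement, recompute the estimator on each resample, and take
    empirical quantiles of the resulting estimates."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(estimator([rng.choice(data) for _ in range(n)])
                   for _ in range(n_boot))
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

    Unlike asymptotic SE-based intervals, the interval follows the empirical distribution of the estimator, which is the property that matters when that distribution is skewed or piles up at a boundary.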

  6. Accurate phylogenetic classification of DNA fragments based onsequence composition

    Energy Technology Data Exchange (ETDEWEB)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
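    The composition features underlying such classifiers are k-mer frequency profiles. The toy nearest-profile rule below stands in for PhyloPythia's actual SVM; all function names and the similarity measure are illustrative:

```python
from collections import Counter

def kmer_profile(seq, k=4):
    """Normalized k-mer frequency vector of a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def classify(fragment, references, k=4):
    """Assign a fragment to the reference with the most similar k-mer
    composition (dot-product similarity of sparse profiles)."""
    frag = kmer_profile(fragment, k)
    def sim(ref_profile):
        return sum(v * ref_profile.get(kmer, 0.0) for kmer, v in frag.items())
    return max(references, key=lambda name: sim(references[name]))
```

    Composition signals of this kind are what let even short (~1 kb) fragments without marker genes be placed taxonomically.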

  7. Accurate radiative transfer calculations for layered media.

    Science.gov (United States)

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  8. Accurate pose estimation for forensic identification

    Science.gov (United States)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  9. How Accurately can we Calculate Thermal Systems?

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  10. Accurate basis set truncation for wavefunction embedding

    Science.gov (United States)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  11. Accurate pattern registration for integrated circuit tomography

    Energy Technology Data Exchange (ETDEWEB)

    Levine, Zachary H.; Grantham, Steven; Neogi, Suneeta; Frigo, Sean P.; McNulty, Ian; Retsch, Cornelia C.; Wang, Yuxin; Lucatorto, Thomas B.

    2001-07-15

    As part of an effort to develop high resolution microtomography for engineered structures, a two-level copper integrated circuit interconnect was imaged using 1.83 keV x rays at 14 angles employing a full-field Fresnel zone plate microscope. A major requirement for high resolution microtomography is the accurate registration of the reference axes in each of the many views needed for a reconstruction. A reconstruction with 100 nm resolution would require registration accuracy of 30 nm or better. This work demonstrates that even images that have strong interference fringes can be used to obtain accurate fiducials through the use of Radon transforms. We show that we are able to locate the coordinates of the rectilinear circuit patterns to 28 nm. The procedure is validated by agreement between an x-ray parallax measurement of 1.41 ± 0.17 μm and a measurement of 1.58 ± 0.08 μm from a scanning electron microscope image of a cross section.
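    The use of Radon transforms to register rectilinear patterns can be illustrated with the two simplest projections, at 0° and 90° (row and column sums), plus a toy integer-shift alignment by cross-correlation. The paper achieves sub-pixel (28 nm) accuracy, which this sketch does not attempt; all names are illustrative:

```python
def axis_projections(image):
    """0- and 90-degree Radon projections of a 2-D intensity grid (row and
    column sums); for rectilinear patterns, edges appear as steps/peaks
    in these 1-D profiles and can serve as fiducials."""
    rows = [sum(r) for r in image]
    cols = [sum(c) for c in zip(*image)]
    return rows, cols

def register_offset(profile_a, profile_b, max_shift):
    """Best integer shift aligning two projection profiles by maximum
    cross-correlation (non-negative shifts only, for brevity)."""
    def score(s):
        return sum(a * b for a, b in zip(profile_a[s:], profile_b))
    return max(range(max_shift + 1), key=score)
```

    Because the projections integrate along pattern lines, interference fringes and noise that vary along a line tend to average out, which is why projection-based fiducials are robust.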

  12. Accurate determination of characteristic relative permeability curves

    Science.gov (United States)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
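    The standard steady-state interpretation that the simulated corefloods are compared against is just Darcy's law applied to each phase. A minimal illustration, with variable names ours and SI units assumed:

```python
def relative_permeability(q, mu, length, area, k_abs, dp):
    """Effective relative permeability of one phase from a steady-state
    coreflood via Darcy's law: kr = q * mu * L / (k_abs * A * dP), where
    q is the phase flow rate, mu its viscosity, L and A the core length
    and cross-section, k_abs the absolute permeability, dP the pressure
    drop across the core."""
    return q * mu * length / (k_abs * area * dp)
```

    The abstract's point is that the kr obtained this way can depend on flow rate through sub-core heterogeneity and outlet effects, even though the formula itself contains no rate dependence.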

  13. Microarray results: how accurate are they?

    Directory of Open Access Journals (Sweden)

    Mane Shrikant

    2002-08-01

    Abstract Background DNA microarray technology is a powerful technique that was recently developed in order to analyze thousands of genes in a short time. Presently, microarrays, or chips, of the cDNA type and oligonucleotide type are available from several sources. The number of publications in this area is increasing exponentially. Results In this study, microarray data obtained from two different commercially available systems were critically evaluated. Our analysis revealed several inconsistencies in the data obtained from the two different microarrays. Problems encountered included inconsistent sequence fidelity of the spotted microarrays, variability of differential expression, low specificity of cDNA microarray probes, discrepancy in fold-change calculation and lack of probe specificity for different isoforms of a gene. Conclusions In view of these pitfalls, data from microarray analysis need to be interpreted cautiously.
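    One source of the fold-change discrepancies noted above is that platforms report fold change under different conventions. A small illustration with hypothetical expression values:

```python
import math

def fold_change(expr_a, expr_b):
    """Three common fold-change conventions whose mixing causes
    cross-platform discrepancies: the plain ratio, the log2 ratio, and a
    signed fold change that reports down-regulation as a negative factor
    rather than a fraction."""
    ratio = expr_b / expr_a
    signed = ratio if ratio >= 1 else -1.0 / ratio
    return ratio, math.log2(ratio), signed
```

    A 4-fold decrease is thus reported as 0.25, -2.0, or -4.0 depending on convention, so comparing raw fold-change numbers between platforms without knowing the convention is unsafe.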

  14. Toward Accurate and Quantitative Comparative Metagenomics

    Science.gov (United States)

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  16. Accurate guitar tuning by cochlear implant musicians.

    Science.gov (United States)

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones; one subject was even more accurate at tuning with his CI than with his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  17. How accurate are SuperCOSMOS positions?

    CERN Document Server

    Schaefer, Adam; Johnston, Helen

    2014-01-01

    Optical positions from the SuperCOSMOS Sky Survey have been compared in detail with accurate radio positions that define the second realisation of the International Celestial Reference Frame (ICRF2). The comparison was limited to the IIIaJ plates from the UK/AAO and Oschin (Palomar) Schmidt telescopes. A total of 1373 ICRF2 sources was used, with the sample restricted to stellar objects brighter than $B_J=20$ and Galactic latitudes $|b|>10^\circ$. Position differences showed an rms scatter of $0.16''$ in right ascension and declination. While overall systematic offsets were $<0.1''$ in each hemisphere, both the systematics and scatter were greater in the north.

  18. Accurate renormalization group analyses in neutrino sector

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)

    2014-08-15

    We investigate accurate renormalization group analyses in neutrino sector between ν-oscillation and seesaw energy scales. We consider decoupling effects of top quark and Higgs boson on the renormalization group equations of light neutrino mass matrix. Since the decoupling effects are given in the standard model scale and independent of high energy physics, our method can basically apply to any models beyond the standard model. We find that the decoupling effects of Higgs boson are negligible, while those of top quark are not. Particularly, the decoupling effects of top quark affect neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory existing at high energy scale.

  19. Accurate Telescope Mount Positioning with MEMS Accelerometers

    CERN Document Server

    Mészáros, László; Pál, András; Csépány, Gergely

    2014-01-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate and stateless positioning of telescope mounts. This provides a completely independent method from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the sub-arcminute range which is well smaller than the field-of-view of conventional imaging telescope systems. Here we present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how can a complete system design be implemented in order to be a part of a telescope control system.
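    The core static-tilt computation behind accelerometer-based mount positioning is the standard decomposition of the gravity vector seen by a 3-axis sensor. A sketch; axis and sign conventions vary between sensors, and the result is only valid while the mount is not accelerating:

```python
import math

def tilt_from_accel(ax, ay, az):
    """Pitch and roll (radians) of a static mount from gravity as measured
    by a 3-axis MEMS accelerometer (unit-normalized readings assumed):
    pitch = atan2(-ax, sqrt(ay^2 + az^2)), roll = atan2(ay, az)."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll
```

    Reaching the sub-arcminute regime the paper targets then becomes a question of sensor noise, calibration, and averaging rather than of the geometry itself.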

  20. Accurate Weather Forecasting for Radio Astronomy

    Science.gov (United States)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Services (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc, and, most importantly, temperature, pressure, humidity as a function of height. I use the Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 Nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
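    The radiative-transfer step from a forecasted opacity to the atmosphere's contribution to Tsys can be sketched as a single-slab approximation. The effective atmospheric temperature here is a rough assumption, not the layer-by-layer calculation described above:

```python
import math

def sky_brightness(tau_zenith, airmass, t_atm=270.0):
    """Atmospheric contribution to system temperature from a zenith
    opacity forecast: T_sky = T_atm * (1 - exp(-tau_zenith * airmass)),
    treating the atmosphere as one slab at an assumed effective
    temperature t_atm (in K)."""
    tau = tau_zenith * airmass
    return t_atm * (1.0 - math.exp(-tau))
```

    For small opacities this reduces to roughly T_atm * tau, which is why a 0.01 Neper forecast accuracy translates to only a few kelvin of Tsys uncertainty.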

  1. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when the information is delayed. With accurate information travelers crowd onto the route reported to be in the best condition, and delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful to improve efficiency in terms of capacity, oscillation, and the deviation from system equilibrium.
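    The boundedly rational threshold rule can be written down directly. Names are illustrative; the paper embeds this decision rule in a full two-route traffic simulation:

```python
import random

def choose_route(cost_a, cost_b, br_threshold, rng=random):
    """Boundedly rational route choice: when the reported cost difference
    is within the threshold BR, the two routes are treated as equivalent
    and one is picked at random; otherwise the cheaper route is chosen."""
    if abs(cost_a - cost_b) <= br_threshold:
        return rng.choice(["A", "B"])
    return "A" if cost_a < cost_b else "B"
```

    The random tie-breaking inside the BR band is what prevents all travelers from piling onto the (possibly stale) "best" route and is the source of the damped oscillations reported above.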

  2. Genes and Gene Therapy

    Science.gov (United States)

    ... correctly, a child can have a genetic disorder. Gene therapy is an experimental technique that uses genes to ... or prevent disease. The most common form of gene therapy involves inserting a normal gene to replace an ...

  3. Accurate free energy calculation along optimized paths.

    Science.gov (United States)

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
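    Once a smooth path is available, the free energy difference is evaluated by thermodynamic integration, dF = ∫ ⟨dU/dλ⟩ dλ. A minimal numerical sketch using the trapezoid rule over per-window ensemble averages (names are illustrative):

```python
def thermodynamic_integration(lambdas, dudl_means):
    """Free energy difference by thermodynamic integration along a
    coupling path: integrate the ensemble averages <dU/dlambda>, sampled
    at discrete lambda windows, with the trapezoid rule."""
    df = 0.0
    for i in range(len(lambdas) - 1):
        df += 0.5 * (dudl_means[i] + dudl_means[i + 1]) * (lambdas[i + 1] - lambdas[i])
    return df
```

    The quality of the result hinges on how smoothly ⟨dU/dλ⟩ varies along the path, which is exactly why the paper works to construct smooth, short transition paths.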

  4. Accurate fission data for nuclear safety

    CERN Document Server

    Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S

    2013-01-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power in the order of a few hundred thousands. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired while for reactor applications neutron spectra that resembles those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...

  5. Fast and Provably Accurate Bilateral Filtering.

    Science.gov (United States)

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
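
    For reference, a direct implementation makes the O(S)-per-pixel cost concrete. This is the brute-force baseline the paper accelerates, not the proposed O(1) algorithm; Gaussian kernels are used for both the spatial and range weights.

```python
import math

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=10.0):
    """Direct bilateral filter: O(S) work per pixel, S = (2*radius+1)^2.

    img: 2D list of floats (grayscale). Each output pixel is a weighted
    average whose weights fall off with spatial distance (sigma_s) and
    with intensity difference (sigma_r), which is what preserves edges."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        wr = math.exp(-((img[yy][xx] - img[y][x]) ** 2)
                                      / (2 * sigma_r ** 2))
                        num += ws * wr * img[yy][xx]
                        den += ws * wr
            out[y][x] = num / den
    return out
```

    A constant image passes through unchanged, and with a small `sigma_r` a step edge is left almost intact, unlike plain Gaussian smoothing.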

  6. SVM-Based CAC System for B-Mode Kidney Ultrasound Images.

    Science.gov (United States)

    Subramanya, M B; Kumar, Vinod; Mukherjee, Shaktidev; Saini, Manju

    2015-08-01

    The present study proposes a computer-aided classification (CAC) system for three kidney classes, viz. normal, medical renal disease (MRD) and cyst, using B-mode ultrasound images. Thirty-five B-mode kidney ultrasound images consisting of 11 normal images, 8 MRD images and 16 cyst images have been used. Regions of interest (ROIs) have been marked by the radiologist from the parenchyma region of the kidney in the normal and MRD cases and from regions inside lesions in the cyst cases. To evaluate the contribution of texture features extracted from de-speckled images to the classification task, the original images have been pre-processed by eight de-speckling methods. Six categories of texture features are extracted. A one-against-one multi-class support vector machine (SVM) classifier has been used for the present work. Based on overall classification accuracy (OCA), features from ROIs of original images are concatenated with the features from ROIs of pre-processed images. On the basis of OCA, a few feature sets are considered for feature selection. Differential evolution feature selection (DEFS) has been used to select optimal features for the classification task. The DEFS process is repeated 30 times to obtain 30 subsets. Run-length matrix features from ROIs of images pre-processed by the Lee sigma method, concatenated with those of the enhanced Lee method, resulted in an average accuracy (in %) and standard deviation of 86.3 ± 1.6. The results obtained in the study indicate that the performance of the proposed CAC system is promising, and it can be used by radiologists in routine clinical practice for the classification of renal diseases. PMID:25537457
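
    Run-length texture features of the kind used above can be illustrated with a minimal sketch; short-run emphasis is one classic run-length-matrix statistic. The exact feature set, quantization, and ROI handling of the study are not reproduced.

```python
def run_lengths(row):
    """(gray level, run length) pairs for one image row, scanning left
    to right and merging consecutive equal pixels into runs."""
    runs, i = [], 0
    while i < len(row):
        j = i
        while j < len(row) and row[j] == row[i]:
            j += 1
        runs.append((row[i], j - i))
        i = j
    return runs

def short_run_emphasis(img):
    """Short-run emphasis over horizontal runs: the mean of 1/length^2.
    Fine textures (many short runs) score near 1; coarse textures lower."""
    runs = [r for row in img for r in run_lengths(row)]
    return sum(1.0 / (length * length) for _, length in runs) / len(runs)
```
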

  7. Settlement Prediction of Road Soft Foundation Using a Support Vector Machine (SVM) Based on Measured Data

    Directory of Open Access Journals (Sweden)

    Yu Huiling

    2016-01-01

    Full Text Available The support vector machine (SVM) is a relatively new artificial intelligence technique which is increasingly being applied to geotechnical problems and is yielding encouraging results. SVM is a machine learning method based on statistical learning theory. A case study of a road foundation engineering project shows that the forecast results are in good agreement with the measured data. The SVM model is also compared with a BP artificial neural network model and the traditional hyperbola method. The prediction results indicate that the SVM model has better prediction ability than the BP neural network model and the hyperbola method. Therefore, settlement prediction based on the SVM model can reflect the actual settlement process more correctly. The results indicate that the method is effective and feasible, and that the nonlinear mapping relation between foundation settlement and its influencing factors can be expressed well. It provides a new method for predicting foundation settlement.

  8. SVM-based Multiview Face Recognition by Generalization of Discriminant Analysis

    CERN Document Server

    Kisku, Dakshina Ranjan; Sing, Jamuna Kanta; Gupta, Phalguni

    2010-01-01

    Identity verification of authentic persons by their multiview faces is a real-valued problem in machine vision. Multiview faces are difficult to recognize due to their non-linear representation in the feature space. This paper illustrates the usability of the generalization of LDA, in the form of the canonical covariate, for face recognition with multiview faces. In the proposed work, a Gabor filter bank is used to extract facial features characterized by spatial frequency, spatial locality and orientation. The Gabor face representation captures a substantial amount of the variation among face instances that often occurs due to illumination, pose and facial expression changes. Convolution of the Gabor filter bank with face images of rotated profile views produces Gabor faces with high-dimensional feature vectors. The canonical covariate is then applied to the Gabor faces to reduce the high-dimensional feature spaces into low-dimensional subspaces. Finally, support vector machines are trained with canonical sub-spaces that contain reduced set o...

  9. SVM-based feature extraction and classification of aflatoxin contaminated corn using fluorescence hyperspectral data

    Science.gov (United States)

    A Support Vector Machine (SVM) was used within a Genetic Algorithm (GA) process to select and classify a subset of hyperspectral image bands. The method was applied to fluorescence hyperspectral data for the detection of aflatoxin contamination in Aspergillus flavus infected single corn kernels. In the...

  10. Using LS-SVM Based Motion Recognition for Smartphone Indoor Wireless Positioning

    Directory of Open Access Journals (Sweden)

    Ruizhi Chen

    2012-05-01

    Full Text Available The paper presents an indoor navigation solution obtained by combining physical motion recognition with wireless positioning. Twenty-seven simple features are extracted from the built-in accelerometers and magnetometers in a smartphone. Eight common motion states used during indoor navigation are detected by a Least Square-Support Vector Machines (LS-SVM) classification algorithm: static, standing with hand swinging, normal walking while holding the phone in hand, normal walking with hand swinging, fast walking, U-turning, going up stairs, and going down stairs. The results indicate that the motion states are recognized with an accuracy of up to 95.53% for the test cases employed in this study. A motion recognition assisted wireless positioning approach is applied to determine the position of a mobile user. Field tests show a 1.22 m mean error in “Static Tests” and 3.53 m in “Stop-Go Tests”.
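
    Feature extraction of this kind can be sketched as simple per-window statistics over the accelerometer axes. This is an illustrative subset only; the paper's full 27-feature set and the magnetometer channels are not reproduced here.

```python
import math

def window_features(ax, ay, az):
    """Time-domain features over one window of 3-axis accelerometer samples:
    per-axis mean and standard deviation, plus mean signal magnitude.
    Feature names and selection are assumptions for illustration."""
    def mean(v):
        return sum(v) / len(v)
    def std(v):
        m = mean(v)
        return math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
    mag = [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]
    return {
        "mean_x": mean(ax), "mean_y": mean(ay), "mean_z": mean(az),
        "std_x": std(ax), "std_y": std(ay), "std_z": std(az),
        "mean_magnitude": mean(mag),
    }
```

    Windows of such feature vectors, labeled by motion state, are what a classifier like LS-SVM is trained on.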

  11. An SVM-based solution for fault detection in wind turbines.

    Science.gov (United States)

    Santos, Pedro; Villa, Luisa F; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-03-09

    Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines is insufficient for diagnosing mechanical faults in their transmission chains; a successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, together with electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that the linear kernel SVM outperforms the other kernels and the ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of the linear SVM is also experimentally analyzed, to conclude that this data acquisition technique generates linearly separable datasets.
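
    Angular resampling maps a time-sampled vibration signal onto uniform shaft-angle increments, so that speed variations do not smear order components. A minimal linear-interpolation sketch, assuming the measured shaft angle is monotonically increasing over the record:

```python
def angular_resample(signal, angles, n_points):
    """Resample a vibration signal to n_points uniform shaft-angle increments.

    signal: time-domain samples; angles: shaft angle at each sample
    (must be monotonically increasing). Linear interpolation in angle."""
    target = [angles[0] + i * (angles[-1] - angles[0]) / (n_points - 1)
              for i in range(n_points)]
    out, j = [], 0
    for a in target:
        # advance to the interval [angles[j], angles[j+1]] containing a
        while j + 1 < len(angles) - 1 and angles[j + 1] < a:
            j += 1
        a0, a1 = angles[j], angles[j + 1]
        t = (a - a0) / (a1 - a0)
        out.append(signal[j] + t * (signal[j + 1] - signal[j]))
    return out
```
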

  12. SVM-based classification of LV wall motion in cardiac MRI with the assessment of STE

    Science.gov (United States)

    Mantilla, Juan; Garreau, Mireille; Bellanger, Jean-Jacques; Paredes, José Luis

    2015-01-01

    In this paper, we propose an automated method to classify normal/abnormal wall motion in Left Ventricle (LV) function in cardiac cine-Magnetic Resonance Imaging (MRI), taking as reference strain information obtained from 2D Speckle Tracking Echocardiography (STE). Without the need for pre-processing, and by exploiting all the images acquired during a cardiac cycle, spatio-temporal profiles are extracted from a subset of radial lines running from the ventricle centroid to points outside the epicardial border. Classical Support Vector Machines (SVM) are used to classify features extracted from the gray levels of the spatio-temporal profiles as well as their representations in the Wavelet domain, under the assumption that the data may be sparse in that domain. Based on information obtained from radial strain curves in 2D-STE studies, we label all the spatio-temporal profiles that belong to a particular segment as normal if the peak systolic radial strain curve of this segment presents normal kinesis, or abnormal if it presents hypokinesis or akinesis. For this study, short-axis cine-MR images are collected from 9 patients with cardiac dyssynchrony, for whom we have the radial strain tracings at the mid-papillary muscle obtained by 2D STE, and from a control group of 9 healthy subjects. The best classification performance is obtained with the gray level information of the spatio-temporal profiles using an RBF kernel, with 91.88% accuracy, 92.75% sensitivity and 91.52% specificity.

  13. Diagnosis of Elevator Faults with LS-SVM Based on Optimization by K-CV

    OpenAIRE

    Zhou Wan; Shilin Yi; Kun Li; Ran Tao; Min Gou; Xinshi Li; Shu Guo

    2015-01-01

    Several common elevator malfunctions were diagnosed with a least square support vector machine (LS-SVM). After acquiring vibration signals of various elevator functions, their energy characteristics and time-domain indicators were extracted via optimal wavelet packet analysis in order to construct a malfunction feature vector, used as input to the LS-SVM for identifying the causes of the malfunctions. Meanwhile, the parameters of the LS-SVM were optimized by K-fold cross validation (K-CV)...

  14. SVM Based Recognition of Facial Expressions Used In Indian Sign Language

    Directory of Open Access Journals (Sweden)

    Daleesha M Viswanathan

    2015-02-01

    Full Text Available In sign language systems, facial expressions are an intrinsic component that usually accompanies hand gestures. A facial expression can modify or change the meaning of a hand gesture into a statement or a question, or improve the meaning and understanding of hand gestures. The scientific literature available on facial expression recognition in Indian Sign Language (ISL) is scanty. Contrary to American Sign Language (ASL), head movements are less conspicuous in ISL, and answers to questions such as yes or no are signed by hand. The purpose of this paper is to present our work in recognizing facial expression changes in isolated ISL sentences. Facial gestures change skin texture by forming wrinkles and furrows, and the Gabor wavelet method is well known for capturing subtle textural changes on surfaces. Therefore, a unique approach was developed to model facial expression changes with Gabor wavelet parameters chosen from partitioned face areas. These parameters were incorporated with a Euclidean distance measure. A multi-class SVM classifier was used in this recognition system to identify facial expressions in isolated facial expression sequences in ISL. An accuracy of 92.12% was achieved by our proposed system.

  15. Linear SVM-Based Android Malware Detection for Reliable IoT Services

    OpenAIRE

    Hyo-Sik Ham; Hwan-Hee Kim; Myung-Sup Kim; Mi-Jung Choi

    2014-01-01

    Currently, many Internet of Things (IoT) services are monitored and controlled through smartphone applications. By combining IoT with smartphones, many convenient IoT services have been provided to users. However, there are adverse underlying effects in such services, including invasion of privacy and information leakage. In most cases, mobile devices have become cluttered with important personal user information as various services and contents are provided through them. Accordingly, attackers a...

  16. Using LS-SVM based motion recognition for smartphone indoor wireless positioning.

    Science.gov (United States)

    Pei, Ling; Liu, Jingbin; Guinness, Robert; Chen, Yuwei; Kuusniemi, Heidi; Chen, Ruizhi

    2012-01-01

    The paper presents an indoor navigation solution obtained by combining physical motion recognition with wireless positioning. Twenty-seven simple features are extracted from the built-in accelerometers and magnetometers in a smartphone. Eight common motion states used during indoor navigation are detected by a Least Square-Support Vector Machines (LS-SVM) classification algorithm: static, standing with hand swinging, normal walking while holding the phone in hand, normal walking with hand swinging, fast walking, U-turning, going up stairs, and going down stairs. The results indicate that the motion states are recognized with an accuracy of up to 95.53% for the test cases employed in this study. A motion recognition assisted wireless positioning approach is applied to determine the position of a mobile user. Field tests show a 1.22 m mean error in "Static Tests" and 3.53 m in "Stop-Go Tests".

  17. Performance Analysis of a DTC and SVM Based Field-Orientation Control Induction Motor Drive

    OpenAIRE

    Md. Rashedul Islam; Pintu Kumar Sadhu; Md. Maruful Islam; Md. Kamal Hossain

    2015-01-01

    This study presents a performance analysis of the two most popular control strategies for Induction Motor (IM) drives: direct torque control (DTC) and space vector modulation (SVM). The performance analysis is done by applying the field-orientation control (FOC) technique because of its good dynamic response. The theoretical principles and simulation results are discussed to study the dynamic performance of the drive system for the individual control strategies, using actual parameters of inductio...

  18. Performance Analysis of a DTC and SVM Based Field-Orientation Control Induction Motor Drive

    Directory of Open Access Journals (Sweden)

    Md. Rashedul Islam

    2015-02-01

    Full Text Available This study presents a performance analysis of the two most popular control strategies for Induction Motor (IM) drives: direct torque control (DTC) and space vector modulation (SVM). The performance analysis is done by applying the field-orientation control (FOC) technique because of its good dynamic response. The theoretical principles and simulation results are discussed to study the dynamic performance of the drive system for the individual control strategies, using actual induction motor parameters. A closed-loop PI controller scheme has been used. The main purpose of this study is to minimize the ripple in the torque response curve, to achieve quick speed response, and to investigate the conditions for optimum performance of the induction motor drive. Based on the simulation results, this study also presents a detailed comparison between direct torque control and space vector modulation based field-orientation control for the induction motor drive.
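
    The space vector modulation step can be illustrated with the textbook dwell-time computation for a two-level inverter: locate the sector of the reference voltage vector, then split the switching period between the two adjacent active vectors and the zero vectors. This is a generic sketch, not the authors' simulation code.

```python
import math

def svm_dwell_times(m, theta, Ts=1.0):
    """Space vector modulation dwell times for a two-level inverter.

    m: modulation index (0..1), theta: reference vector angle in radians,
    Ts: switching period. Uses the textbook formulas
    T1 = Ts*m*sin(pi/3 - a)/sin(pi/3), T2 = Ts*m*sin(a)/sin(pi/3),
    where a is the angle measured inside the current 60-degree sector."""
    theta = theta % (2.0 * math.pi)
    sector = int(theta // (math.pi / 3)) + 1      # sectors numbered 1..6
    a = theta - (sector - 1) * (math.pi / 3)      # angle within the sector
    k = Ts * m / math.sin(math.pi / 3)
    t1 = k * math.sin(math.pi / 3 - a)            # first adjacent active vector
    t2 = k * math.sin(a)                          # second adjacent active vector
    t0 = Ts - t1 - t2                             # remaining zero-vector time
    return sector, t1, t2, t0
```
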

  19. SVM-based spectrum mobility prediction scheme in mobile cognitive radio networks.

    Science.gov (United States)

    Wang, Yao; Zhang, Zhongzhao; Ma, Lin; Chen, Jiamei

    2014-01-01

    Spectrum mobility, an essential issue, has not been fully investigated in mobile cognitive radio networks (CRNs). In this paper, a novel support vector machine based spectrum mobility prediction (SVM-SMP) scheme is presented that considers time-varying and space-varying characteristics simultaneously in mobile CRNs. The mobility of cognitive users (CUs) and the working activities of primary users (PUs) are analyzed in theory, and a joint feature vector extraction (JFVE) method is proposed based on the theoretical analysis. Spectrum mobility prediction is then executed through SVM classification with a fast convergence speed. Numerical results validate that SVM-SMP achieves better short-time prediction accuracy and miss prediction rates than two algorithms that depend only on location and speed information. Additionally, a rational parameter design can remedy the prediction performance degradation caused by high-speed SUs with strongly random movements. PMID:25143975

  20. Linear SVM-Based Android Malware Detection for Reliable IoT Services

    Directory of Open Access Journals (Sweden)

    Hyo-Sik Ham

    2014-01-01

    Full Text Available Currently, many Internet of Things (IoT) services are monitored and controlled through smartphone applications. By combining IoT with smartphones, many convenient IoT services have been provided to users. However, there are adverse underlying effects in such services, including invasion of privacy and information leakage. In most cases, mobile devices have become cluttered with important personal user information as various services and contents are provided through them. Accordingly, attackers are expanding the scope of their attacks beyond the existing PC and Internet environment into mobile devices. In this paper, we apply a linear support vector machine (SVM) to detect Android malware and compare the malware detection performance of the SVM with that of other machine learning classifiers. Through experimental validation, we show that the SVM outperforms the other machine learning classifiers.
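
    A linear SVM of the kind applied above can be sketched with full-batch subgradient descent on the regularized hinge loss. The toy feature vectors in the usage note are hypothetical stand-ins (e.g., counts of requested permissions), not the paper's Android features, and the solver is a generic one rather than the authors' implementation.

```python
def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=1000):
    """Train (w, b) by full-batch subgradient descent on the objective
    L = lam/2 * |w|^2 + mean(max(0, 1 - y_i * (w.x_i + b))),
    with labels y_i in {-1, +1}."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [lam * wj for wj in w], 0.0
        for xi, yi in zip(X, y):
            if yi * (_dot(w, xi) + b) < 1:   # margin violated: hinge active
                for j in range(d):
                    gw[j] -= yi * xi[j] / n
                gb -= yi / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    return 1 if _dot(w, x) + b >= 0 else -1
```

    On a tiny separable set such as `X = [[0,1],[1,0],[4,5],[5,4]]`, `y = [-1,-1,1,1]`, the trained model recovers the labels.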

  1. SVM-based Multiview Face Recognition by Generalization of Discriminant Analysis

    OpenAIRE

    Kisku, Dakshina Ranjan; Mehrotra, Hunny; Sing, Jamuna Kanta; Gupta, Phalguni

    2010-01-01

    Identity verification of authentic persons by their multiview faces is a real-valued problem in machine vision. Multiview faces are difficult to recognize due to their non-linear representation in the feature space. This paper illustrates the usability of the generalization of LDA, in the form of the canonical covariate, for face recognition with multiview faces. In the proposed work, a Gabor filter bank is used to extract facial features characterized by spatial frequency, spatial locality and orienta...

  2. Restoring the Generalizability of SVM Based Decoding in High Dimensional Neuroimage Data

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    for Support Vector Machines. However, good generalization may be recovered in part by a simple renormalization procedure. We show that with proper renormalization, cross-validation based parameter optimization leads to the acceptance of more non-linearity in neuroimage classifiers than would have been...

  3. SVM-Based Spectrum Mobility Prediction Scheme in Mobile Cognitive Radio Networks

    OpenAIRE

    Yao Wang; Zhongzhao Zhang; Lin Ma; Jiamei Chen

    2014-01-01

    Spectrum mobility as an essential issue has not been fully investigated in mobile cognitive radio networks (CRNs). In this paper, a novel support vector machine based spectrum mobility prediction (SVM-SMP) scheme is presented considering time-varying and space-varying characteristics simultaneously in mobile CRNs. The mobility of cognitive users (CUs) and the working activities of primary users (PUs) are analyzed in theory. And a joint feature vector extraction (JFVE) method is proposed based...

  4. A SVM-based method for sentiment analysis in Persian language

    Science.gov (United States)

    Hajmohammadi, Mohammad Sadegh; Ibrahim, Roliana

    2013-03-01

    Persian is the official language of Iran, Tajikistan and Afghanistan. Local online users often express their opinions and experiences on the web in written Persian. Although the information in those reviews is valuable to potential consumers and sellers, the huge number of web reviews makes it difficult to give an unbiased evaluation of a product. In this paper, the standard machine learning techniques SVM and naive Bayes are applied to the domain of online Persian movie reviews to automatically classify user reviews as positive or negative, and the performance of the two classifiers in this language is compared. The effects of feature presentations on classification performance are discussed. We find that accuracy is influenced by the interaction between the classification models and the feature options. The SVM classifier achieves accuracy as good as or better than naive Bayes on Persian movie reviews. Unigrams prove to be better features than bigrams and trigrams for capturing Persian sentiment orientation.
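
    The unigram/bigram/trigram features compared in the study can be sketched as n-gram presence vectors over a fixed vocabulary. Tokenization and vocabulary construction are simplified assumptions here.

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def feature_vector(tokens, vocabulary, n=1):
    """Binary presence features over a fixed n-gram vocabulary,
    suitable as input to a classifier such as SVM or naive Bayes."""
    present = set(ngrams(tokens, n))
    return [1 if g in present else 0 for g in vocabulary]
```

    With `n=1` this yields the unigram representation that the abstract reports as the strongest feature set.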

  5. Component Content Soft-Sensor of SVM Based on Ions Color Characteristics

    Directory of Open Access Journals (Sweden)

    Zhang Kunpeng

    2012-10-01

    Full Text Available In consideration of the different characteristic colors of ions in the P507-HCl Pr/Nd extraction separation system, the ion color image features H, S and I, which are closely related to the element component contents, are extracted using image processing methods. A Principal Component Analysis algorithm is employed to determine the statistical means of H, S and I that have the strongest correlation with element component content, and the auxiliary variables are obtained. Using a support vector machine, a component content soft-sensor model for the Pr/Nd extraction process is established. Finally, simulations and tests verify the rationality and feasibility of the proposed method. The research results provide a theoretical foundation for online measurement of component content in the Pr/Nd countercurrent extraction separation process.

  6. SVM Based Recognition of Facial Expressions Used In Indian Sign Language

    OpenAIRE

    Daleesha M Viswanathan; Sumam Mary Idicula

    2015-01-01

    In sign language systems, facial expressions are an intrinsic component that usually accompanies hand gestures. The facial expressions would modify or change the meaning of hand gesture into a statement, a question or improve the meaning and understanding of hand gestures. The scientific literature available in Indian Sign Language (ISL) on facial expression recognition is scanty. Contrary to American Sign Language (ASL), head movements are less conspicuous in ISL and the answers to questions...

  7. Fast SVM-based Feature Elimination Utilizing Data Radius, Hard-Margin, Soft-Margin

    OpenAIRE

    Aksu, Yaman

    2012-01-01

    Margin maximization in the hard-margin sense, proposed as a feature elimination criterion by the MFE-LO method, is combined here with data radius utilization, with the further aim of lowering generalization error: several published bounds and bound-related formulations pertaining to misclassification risk involve the radius, e.g., the product of the squared radius and the squared norm of the weight vector. Additionally, we propose novel feature elimination criteria that, while instead being i...

  8. SVM-Based Synthetic Fingerprint Discrimination Algorithm and Quantitative Optimization Strategy

    Science.gov (United States)

    Chen, Suhang; Chang, Sheng; Huang, Qijun; He, Jin; Wang, Hao; Huang, Qiangui

    2014-01-01

    Synthetic fingerprints are a potential threat to automatic fingerprint identification systems (AFISs). In this paper, we propose an algorithm to discriminate synthetic fingerprints from real ones. First, four typical characteristic factors—the ridge distance features, global gray features, frequency feature and Harris Corner feature—are extracted. Then, a support vector machine (SVM) is used to distinguish synthetic fingerprints from real fingerprints. The experiments demonstrate that this method can achieve a recognition accuracy rate of over 98% for two discrete synthetic fingerprint databases as well as a mixed database. Furthermore, a performance factor that can evaluate the SVM's accuracy and efficiency is presented, and a quantitative optimization strategy is established for the first time. After the optimization of our synthetic fingerprint discrimination task, the polynomial kernel with a training sample proportion of 5% is the optimized value when the minimum accuracy requirement is 95%. The radial basis function (RBF) kernel with a training sample proportion of 15% is a more suitable choice when the minimum accuracy requirement is 98%. PMID:25347063

  9. EHPred: an SVM-based method for epoxide hydrolases recognition and classification

    Institute of Scientific and Technical Information of China (English)

    JIA Jia; YANG Liang; ZHANG Zi-zhang

    2006-01-01

    A two-layer method based on support vector machines (SVMs) has been developed to distinguish epoxide hydrolases (EHs) from other enzymes and to classify their subfamilies using primary protein sequences. SVM classifiers were built using three different feature vectors extracted from the primary sequence of EHs: the amino acid composition (AAC), the dipeptide composition (DPC), and the pseudo-amino acid composition (PAAC). Validated by 5-fold cross-validation, the first-layer SVM classifier can differentiate EHs and non-EHs with an accuracy of 94.2% and a Matthews correlation coefficient (MCC) of 0.84. Using 2-fold cross-validation, the PAAC-based second-layer SVM can further classify EH subfamilies with an overall accuracy of 90.7% and an MCC of 0.87, as compared to AAC (80.0%) and DPC (84.9%). A program called EHPred has also been developed to help users recognize EHs and classify their subfamilies from primary protein sequences with greater accuracy.
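
    The AAC and DPC feature vectors are straightforward to compute from a primary sequence; a minimal sketch follows. The PAAC variant, which adds sequence-order correlation terms, is omitted.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(seq):
    """Amino acid composition: 20 relative residue frequencies."""
    return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

def dpc(seq):
    """Dipeptide composition: 400 relative frequencies of adjacent
    residue pairs, in fixed AA x AA order."""
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    return [pairs.count(a + b) / len(pairs)
            for a in AMINO_ACIDS for b in AMINO_ACIDS]
```

    Either vector (or their concatenation) can serve directly as SVM input.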

  10. Deriving statistical significance maps for SVM based image classification and group comparisons.

    Science.gov (United States)

    Gaonkar, Bilwaj; Davatzikos, Christos

    2012-01-01

    Population based pattern analysis and classification for quantifying structural and functional differences between diverse groups has been shown to be a powerful tool for the study of a number of diseases, and is quite commonly used especially in neuroimaging. The alternative to these pattern analysis methods, namely mass univariate methods such as voxel based analysis and all related methods, cannot detect multivariate patterns associated with group differences, and are not particularly suitable for developing individual-based diagnostic and prognostic biomarkers. A commonly used pattern analysis tool is the support vector machine (SVM). Unlike univariate statistical frameworks for morphometry, analytical tools for statistical inference are unavailable for the SVM. In this paper, we show that null distributions ordinarily obtained by permutation tests using SVMs can be analytically approximated from the data. The analytical computation takes a small fraction of the time it takes to do an actual permutation test, thereby rendering it possible to quickly create statistical significance maps derived from SVMs. Such maps are critical for understanding imaging patterns of group differences and interpreting which anatomical regions are important in determining the classifier's decision.

  11. [Application of optimized parameters SVM based on photoacoustic spectroscopy method in fault diagnosis of power transformer].

    Science.gov (United States)

    Zhang, Yu-xin; Cheng, Zhi-feng; Xu, Zheng-ping; Bai, Jing

    2015-01-01

    In order to solve problems of the traditional power transformer fault diagnosis approach based on dissolved gas analysis (DGA), such as complex operation, carrier gas consumption and long test periods, this paper proposes a new method which detects the content of five characteristic gases in transformer oil (CH4, C2H2, C2H4, C2H6 and H2) by photoacoustic spectroscopy, from which the three ratios C2H2/C2H4, CH4/H2 and C2H4/C2H6 are calculated. Support vector machine models were constructed using cross-validation under five support vector machine formulations and four kernel functions, with heuristic algorithms used to optimize the penalty factor c and kernel parameter g, in order to establish the SVM model with the highest fault diagnosis accuracy and the fastest computing speed. Two heuristic algorithms, particle swarm optimization and a genetic algorithm, were comparatively studied for optimization accuracy and speed. The simulation results show that the SVM model composed of C-SVC, the RBF kernel function and the genetic algorithm obtains 97.5% accuracy on the test sample set and 98.3333% accuracy on the training sample set, and that the genetic algorithm was about two times faster than particle swarm optimization. The method described in this paper has many advantages, such as simple operation, non-contact measurement, no carrier gas consumption, short test periods, and high stability and sensitivity; the results show that it can replace traditional transformer fault diagnosis by gas chromatography and meets actual project needs in transformer fault diagnosis.
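
    The three-ratio computation from the measured gas concentrations is simple; a sketch follows. Returning None on a zero denominator is a simplifying choice of this sketch, not part of the standard ratio scheme.

```python
def three_ratios(c2h2, c2h4, ch4, h2, c2h6):
    """Three-ratio features from dissolved-gas concentrations (e.g., in ppm).

    Returns (C2H2/C2H4, CH4/H2, C2H4/C2H6); a zero denominator yields None."""
    def ratio(a, b):
        return a / b if b > 0 else None
    return ratio(c2h2, c2h4), ratio(ch4, h2), ratio(c2h4, c2h6)
```

    The resulting triple is exactly the kind of low-dimensional feature vector the SVM classifier above is trained on.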

  12. Accurate paleointensities - the multi-method approach

    Science.gov (United States)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  13. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. The research was primarily intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  14. Accurate hydrocarbon estimates attained with radioactive isotope

    International Nuclear Information System (INIS)

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample
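The resistivity-to-water-fraction calculation mentioned in the abstract is conventionally done with Archie's equation. A minimal sketch of that standard relation follows; the constants `a`, `m`, `n` and the input values are illustrative defaults, not numbers from the article:

```python
def water_saturation(rw, rt, porosity, a=1.0, m=2.0, n=2.0):
    """Archie's equation: S_w = ((a * R_w) / (phi**m * R_t)) ** (1/n).

    rw: connate-water resistivity (ohm-m), rt: measured formation
    resistivity (ohm-m), porosity: fractional porosity phi.
    """
    return ((a * rw) / (porosity ** m * rt)) ** (1.0 / n)

# Illustrative inputs: 0.05 ohm-m water, 10 ohm-m formation, 20% porosity
sw = water_saturation(0.05, 10.0, 0.20)
```

The point of the article is that `rw` itself is uncertain when connate water is contaminated by mud filtrate, which is why labeling the filtrate with tritium matters.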

  15. How flatbed scanners upset accurate film dosimetry.

    Science.gov (United States)

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range of 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% change for pixels at the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE, and therefore determination of the LSE per color channel and per dose delivered to the film.

  16. Many accurate small-discriminatory feature subsets exist in microarray transcript data: biomarker discovery

    Directory of Open Access Journals (Sweden)

    Grate Leslie R

    2005-04-01

Full Text Available Abstract Background Molecular profiling generates abundance measurements for thousands of gene transcripts in biological samples such as normal and tumor tissues (data points. Given such two-class high-dimensional data, many methods have been proposed for classifying data points into one of the two classes. However, finding very small sets of features able to correctly classify the data is problematic, as the underlying mathematical problem is hard. Existing methods can find "small" feature sets, but give no hint how close this is to the true minimum size. Without fundamental mathematical advances, finding true minimum-size sets will remain elusive, and, more importantly for the microarray community, there will be no practical methods for finding them. Results We use the brute-force approach of exhaustive search through all genes, gene pairs (and, for some data sets, gene triples). Each unique gene combination is analyzed with a few-parameter linear-hyperplane classification method, looking for those combinations that form training-error-free classifiers. All 10 published data sets studied are found to contain predictive small feature sets. Four contain thousands of gene pairs and six have single genes that perfectly discriminate. Conclusion This technique discovered small sets of genes (3 or fewer in published data that form accurate classifiers, yet were not reported in the prior publications. This could be a common characteristic of microarray data, making the search worth the computational cost. Such small gene sets could indicate biomarkers and portend simple medical diagnostic tests. We recommend checking for small gene sets routinely. We find 4 gene pairs and many gene triples in the large hepatocellular carcinoma (HCC, Liver cancer data set of Chen et al. The key component of these is the "placental gene of unknown function", PLAC8. Our HMM modeling indicates PLAC8 might have a domain like part of lP59's crystal structure (a Non
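The single-gene case of the exhaustive search above reduces to a simple check: one feature perfectly discriminates when the two classes' value ranges do not overlap. A toy sketch of that check (feature values and labels are invented, and this is only the single-gene step, not the paper's full pair/triple search):

```python
def perfect_single_genes(X, y):
    """Return indices of features whose values perfectly separate two classes
    with a single threshold, i.e. the max of one class lies below the min
    of the other. X is a list of samples (each a list of feature values),
    y is a list of 0/1 class labels."""
    hits = []
    for j in range(len(X[0])):
        a = [row[j] for row, label in zip(X, y) if label == 0]
        b = [row[j] for row, label in zip(X, y) if label == 1]
        if max(a) < min(b) or max(b) < min(a):
            hits.append(j)
    return hits

# Toy data: 4 samples x 2 genes; both genes happen to separate the classes
X = [[1.2, 5.0], [0.9, 6.1], [3.1, 1.0], [2.8, 0.7]]
y = [0, 0, 1, 1]
hits = perfect_single_genes(X, y)
```

Gene pairs generalize this to a 2-D separating line, which is what makes the exhaustive pair search quadratic in the number of genes.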

  17. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    Directory of Open Access Journals (Sweden)

    Farmerie William G

    2006-08-01

    Full Text Available Abstract Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20 System (454 Life Sciences Corporation, to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae and Platanus occidentalis (Platanaceae. Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy

  18. Evidence Based Selection of Housekeeping Genes

    OpenAIRE

de Jonge, Hendrik J.M.; Fehrmann, Rudolf S. N.; de Bont, Eveline S. J. M.; Hofstra, Robert M. W.; Gerbens, Frans; Kamps, Willem A.; Vries, Elisabeth G. E.; van der Zee, Ate G.J.; te Meerman, Gerard J.; ter Elst, Arja

    2007-01-01

    For accurate and reliable gene expression analysis, normalization of gene expression data against housekeeping genes (reference or internal control genes) is required. It is known that commonly used housekeeping genes (e.g. ACTB, GAPDH, HPRT1, and B2M) vary considerably under different experimental conditions and therefore their use for normalization is limited. We performed a meta-analysis of 13,629 human gene array samples in order to identify the most stable expressed genes. Here we show n...

  19. Accurate Jones Matrix of the Practical Faraday Rotator

    Institute of Scientific and Technical Information of China (English)

    王林斗; 祝昇翔; 李玉峰; 邢文烈; 魏景芝

    2003-01-01

The Jones matrix of a practical Faraday rotator is often used in engineering calculations of non-reciprocal optical fields. Until now, however, only an approximate Jones matrix of practical Faraday rotators has been available. Based on the theory of polarized light, this paper presents the accurate Jones matrix of practical Faraday rotators, and an experiment has been carried out to verify its validity. This matrix accurately describes the optical characteristics of practical Faraday rotators, including rotation, loss and depolarization of the polarized light. It can be used to obtain accurate results when a practical Faraday rotator transforms polarized light, which paves the way for the accurate analysis and calculation of practical Faraday rotators in relevant engineering applications.
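For reference, the Jones matrix of an ideal Faraday rotator is a plain rotation matrix. The sketch below adds only a scalar amplitude-loss factor; the paper's accurate matrix additionally models depolarization, which is not reproduced here:

```python
import math

def faraday_rotator(theta, loss=1.0):
    """Jones matrix of an ideal Faraday rotator with rotation angle theta
    (radians), scaled by an amplitude transmission factor `loss`
    (1.0 = lossless). Depolarization is NOT modeled in this sketch."""
    c, s = math.cos(theta), math.sin(theta)
    return [[loss * c, -loss * s],
            [loss * s,  loss * c]]

def apply(jones, field):
    """Apply a 2x2 Jones matrix to a Jones vector [Ex, Ey]."""
    return [jones[0][0] * field[0] + jones[0][1] * field[1],
            jones[1][0] * field[0] + jones[1][1] * field[1]]

# Horizontally polarized light rotated by 45 degrees
out = apply(faraday_rotator(math.pi / 4), [1.0, 0.0])
```

Note that, unlike a wave plate, the rotation is non-reciprocal: a backward pass applies the same rotation again rather than undoing it, which is why the full matrix matters in isolator and circulator calculations.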

  20. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depends critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  1. 78 FR 34604 - Submitting Complete and Accurate Information

    Science.gov (United States)

    2013-06-10

    ... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license.'' DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...

  2. Speed-of-sound compensated photoacoustic tomography for accurate imaging

    CERN Document Server

    Jose, Jithin; Steenbergen, Wiendelt; Slump, Cornelis H; van Leeuwen, Ton G; Manohar, Srirang

    2012-01-01

    In most photoacoustic (PA) measurements, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. We present experimental and image reconstruction methods with which 2-D SOS distributions can be accurately acquired and reconstructed, and with which the SOS map can be used subsequently to reconstruct highly accurate PA tomograms. We begin with a 2-D iterative reconstruction approach in an ultrasound transmission tomography (UTT) setting, which uses ray refracted paths instead of straight ray paths to recover accurate SOS images of the subject. Subsequently, we use the SOS distribution in a new 2-D iterative approach, where refraction of rays originating from PA sources are accounted for in accurately retrieving the distribution of these sources. Both the SOS reconstruction and SOS-compensated PA reconstruction methods utilize the Eikonal equation to m...
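The ray-refracted travel times central to both reconstruction steps come from solving the Eikonal equation on the SOS map. A crude stand-in for a proper Eikonal/fast-marching solver is Dijkstra's algorithm on a grid, sketched below; the grid, spacing, and speed values are illustrative, not from the paper:

```python
import heapq

def travel_times(speed, src, h=1.0):
    """Approximate first-arrival times from `src` on a 2-D grid of local
    sound speeds, via Dijkstra over 4-neighbours. Edge cost is the grid
    spacing divided by the mean speed of the two endpoints."""
    ny, nx = len(speed), len(speed[0])
    t = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > t.get((i, j), float("inf")):
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                nd = d + h / (0.5 * (speed[i][j] + speed[ni][nj]))
                if nd < t.get((ni, nj), float("inf")):
                    t[(ni, nj)] = nd
                    heapq.heappush(pq, (nd, (ni, nj)))
    return t

# Homogeneous 3x3 medium at 1500 m/s with 1 mm spacing
tt = travel_times([[1500.0] * 3 for _ in range(3)], (0, 0), h=1e-3)
```

Dijkstra on a 4-connected grid overestimates diagonal travel times (grid metrication), which is why dedicated fast-marching schemes are preferred in practice; the sketch only conveys the quantity being computed.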

  3. Accurate encoding and decoding by single cells: amplitude versus frequency modulation.

    Directory of Open Access Journals (Sweden)

    Gabriele Micali

    2015-06-01

Full Text Available Cells sense external concentrations and, via biochemical signaling, respond by regulating the expression of target proteins. Both in signaling networks and gene regulation there are two main mechanisms by which the concentration can be encoded internally: amplitude modulation (AM), where the absolute concentration of an internal signaling molecule encodes the stimulus, and frequency modulation (FM), where the period between successive bursts represents the stimulus. Although both mechanisms have been observed in biological systems, the question of when it is beneficial for cells to use either AM or FM is largely unanswered. Here, we first consider a simple model for a single receptor (or ion channel), which can either signal continuously whenever a ligand is bound, or produce a burst of signaling molecules upon receptor binding. We find that bursty signaling is more accurate than continuous signaling only for sufficiently fast dynamics. This suggests that modulation based on bursts may be more common in signaling networks than in gene regulation. We then extend our model to multiple receptors, where continuous and bursty signaling are equivalent to AM and FM, respectively, finding that AM is always more accurate. This implies that the reason some cells use FM is related to factors other than accuracy, such as the ability to coordinate expression of multiple genes or to implement threshold-crossing mechanisms.

  4. A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes

    Energy Technology Data Exchange (ETDEWEB)

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.

    2004-12-01

We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
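The intergenic-distance feature at the core of the method can be illustrated with a toy rule: adjacent same-strand genes separated by a short gap are called as co-operonic. The gene coordinates and the 50 bp threshold below are invented placeholders, not the paper's fitted distance model:

```python
def same_operon(gap_bp, same_strand, threshold=50):
    """Toy operon call: adjacent same-strand genes with a short intergenic
    gap (<= threshold bp) are predicted to lie in the same operon."""
    return same_strand and gap_bp <= threshold

genes = [  # (name, strand, start, end) -- illustrative coordinates
    ("geneA", "+", 100, 400),
    ("geneB", "+", 420, 900),
    ("geneC", "-", 1200, 1500),
]
pairs = []
for g1, g2 in zip(genes, genes[1:]):
    gap = g2[2] - g1[3]  # intergenic distance in bp
    pairs.append((g1[0], g2[0], same_operon(gap, g1[1] == g2[1])))
```

The actual method replaces the hard threshold with genome-specific distance likelihoods combined with comparative genomic measures, which is what lets it tailor itself to each genome without training data.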

  5. A fluorescence-based quantitative real-time PCR assay for accurate Pocillopora damicornis species identification

    Science.gov (United States)

    Thomas, Luke; Stat, Michael; Evans, Richard D.; Kennington, W. Jason

    2016-09-01

    Pocillopora damicornis is one of the most extensively studied coral species globally, but high levels of phenotypic plasticity within the genus make species identification based on morphology alone unreliable. As a result, there is a compelling need to develop cheap and time-effective molecular techniques capable of accurately distinguishing P. damicornis from other congeneric species. Here, we develop a fluorescence-based quantitative real-time PCR (qPCR) assay to genotype a single nucleotide polymorphism that accurately distinguishes P. damicornis from other morphologically similar Pocillopora species. We trial the assay across colonies representing multiple Pocillopora species and then apply the assay to screen samples of Pocillopora spp. collected at regional scales along the coastline of Western Australia. This assay offers a cheap and time-effective alternative to Sanger sequencing and has broad applications including studies on gene flow, dispersal, recruitment and physiological thresholds of P. damicornis.

  6. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till;

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures....

  7. A Fast and Accurate Universal Kepler Solver without Stumpff Series

    CERN Document Server

    Wisdom, Jack

    2015-01-01

    We derive and present a fast and accurate solution of the initial value problem for Keplerian motion in universal variables that does not use the Stumpff series. We find that it performs better than methods based on the Stumpff series.

  8. Highly Accurate Sensor for High-Purity Oxygen Determination Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this STTR effort, Los Gatos Research (LGR) and the University of Wisconsin (UW) propose to develop a highly-accurate sensor for high-purity oxygen determination....

  9. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

In this paper, a second order linear differential equation is considered, and an accurate method for estimating its characteristic exponent is presented. Finally, we give some examples to verify the feasibility of our result.

  10. Accurate wall thickness measurement using autointerference of circumferential Lamb wave

    International Nuclear Information System (INIS)

In this paper, a method of accurately measuring pipe wall thickness using a noncontact air-coupled ultrasonic transducer (NAUT) is presented. In this method, accurate measurement of the angular wave number (AWN) is a key technique, because the AWN changes minutely with the wall thickness. The autointerference of the circumferential (C-) Lamb wave was used for accurate measurement of the AWN. The principle of the method is first explained. A modified method for measuring the wall thickness near a butt weld line is also proposed, and its accuracy was evaluated to be within a 6 μm error. It is also shown in the paper that wall thickness measurement could be carried out accurately irrespective of differences among sensors, by calibrating the frequency response of the sensors. (author)

  11. Accurate backgrounds to Higgs production at the LHC

    CERN Document Server

    Kauer, N

    2007-01-01

Corrections of 10-30% for backgrounds to the H --> WW --> l^+ l^- + missing-p_T search in vector boson and gluon fusion at the LHC are reviewed to make the case for precise and accurate theoretical background predictions.

  12. Selective pressures for accurate altruism targeting: evidence from digital evolution for difficult-to-test aspects of inclusive fitness theory.

    Science.gov (United States)

    Clune, Jeff; Goldsby, Heather J; Ofria, Charles; Pennock, Robert T

    2011-03-01

    Inclusive fitness theory predicts that natural selection will favour altruist genes that are more accurate in targeting altruism only to copies of themselves. In this paper, we provide evidence from digital evolution in support of this prediction by competing multiple altruist-targeting mechanisms that vary in their accuracy in determining whether a potential target for altruism carries a copy of the altruist gene. We compete altruism-targeting mechanisms based on (i) kinship (kin targeting), (ii) genetic similarity at a level greater than that expected of kin (similarity targeting), and (iii) perfect knowledge of the presence of an altruist gene (green beard targeting). Natural selection always favoured the most accurate targeting mechanism available. Our investigations also revealed that evolution did not increase the altruism level when all green beard altruists used the same phenotypic marker. The green beard altruism levels stably increased only when mutations that changed the altruism level also changed the marker (e.g. beard colour), such that beard colour reliably indicated the altruism level. For kin- and similarity-targeting mechanisms, we found that evolution was able to stably adjust altruism levels. Our results confirm that natural selection favours altruist genes that are increasingly accurate in targeting altruism to only their copies. Our work also emphasizes that the concept of targeting accuracy must include both the presence of an altruist gene and the level of altruism it produces.

  13. A Novel PCR-Based Approach for Accurate Identification of Vibrio parahaemolyticus.

    Science.gov (United States)

    Li, Ruichao; Chiou, Jiachi; Chan, Edward Wai-Chi; Chen, Sheng

    2016-01-01

A PCR-based assay was developed for more accurate identification of Vibrio parahaemolyticus by targeting the blaCARB-17-like element, an intrinsic β-lactamase gene that may also be regarded as a novel species-specific genetic marker of this organism. Homology analysis showed that blaCARB-17-like genes were more conserved than the tlh, toxR and atpA genes, the genetic markers commonly used as detection targets in identification of V. parahaemolyticus. Our data showed that this blaCARB-17-specific PCR-based detection approach consistently achieved 100% specificity, whereas PCR targeting the tlh and atpA genes occasionally produced false positive results. Furthermore, a positive result of this test is consistently associated with an intrinsic ampicillin resistance phenotype of the test organism, presumably conferred by the products of blaCARB-17-like genes. We envision that combined analysis of the unique genetic and phenotypic characteristics conferred by blaCARB-17 shall further enhance the detection specificity of this novel yet easy-to-use detection approach to a level superior to the conventional methods used in V. parahaemolyticus detection and identification. PMID:26858713

  14. RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome

    Directory of Open Access Journals (Sweden)

    Dewey Colin N

    2011-08-01

Full Text Available Abstract Background RNA-Seq is revolutionizing the way transcript abundances are measured. A key challenge in transcript quantification from RNA-Seq data is the handling of reads that map to multiple genes or isoforms. This issue is particularly important for quantification with de novo transcriptome assemblies in the absence of sequenced genomes, as it is difficult to determine which transcripts are isoforms of the same gene. A second significant issue is the design of RNA-Seq experiments, in terms of the number of reads, read length, and whether reads come from one or both ends of cDNA fragments. Results We present RSEM, a user-friendly software package for quantifying gene and isoform abundances from single-end or paired-end RNA-Seq data. RSEM outputs abundance estimates, 95% credibility intervals, and visualization files and can also simulate RNA-Seq data. In contrast to other existing tools, the software does not require a reference genome. Thus, in combination with a de novo transcriptome assembler, RSEM enables accurate transcript quantification for species without sequenced genomes. On simulated and real data sets, RSEM has superior or comparable performance to quantification methods that rely on a reference genome. Taking advantage of RSEM's ability to effectively use ambiguously-mapping reads, we show that accurate gene-level abundance estimates are best obtained with large numbers of short single-end reads. On the other hand, estimates of the relative frequencies of isoforms within single genes may be improved through the use of paired-end reads, depending on the number of possible splice forms for each gene. Conclusions RSEM is an accurate and user-friendly software tool for quantifying transcript abundances from RNA-Seq data. As it does not rely on the existence of a reference genome, it is particularly useful for quantification with de novo transcriptome assemblies. In addition, RSEM has enabled valuable guidance for cost
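The multi-mapping problem the abstract describes is typically handled with an expectation-maximization (EM) algorithm: ambiguous reads are fractionally assigned according to current abundance estimates, which are then re-estimated. A toy sketch of that idea follows (a uniform read model with an invented read-compatibility list, not RSEM's actual generative model):

```python
def em_abundances(compat, n_iter=100):
    """Toy EM for transcript abundances. `compat` lists, per read, the
    transcript indices the read maps to. E-step: split each read across
    its compatible transcripts in proportion to current abundances.
    M-step: re-estimate abundances from the fractional counts."""
    n_tx = max(t for row in compat for t in row) + 1
    theta = [1.0 / n_tx] * n_tx  # start from uniform abundances
    for _ in range(n_iter):
        counts = [0.0] * n_tx
        for row in compat:  # E-step
            z = sum(theta[t] for t in row)
            for t in row:
                counts[t] += theta[t] / z
        total = sum(counts)
        theta = [c / total for c in counts]  # M-step
    return theta

# Three reads: two unique to transcript 0, one ambiguous between 0 and 1;
# EM pulls the ambiguous read toward the better-supported transcript
theta = em_abundances([[0], [0], [0, 1]])
```

RSEM's real model additionally accounts for fragment lengths, read positions, quality scores and paired ends, but the fixed-point structure is the same.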

  15. UP-TORR: online tool for accurate and Up-to-Date annotation of RNAi Reagents.

    Science.gov (United States)

    Hu, Yanhui; Roesel, Charles; Flockhart, Ian; Perkins, Lizabeth; Perrimon, Norbert; Mohr, Stephanie E

    2013-09-01

    RNA interference (RNAi) is a widely adopted tool for loss-of-function studies but RNAi results only have biological relevance if the reagents are appropriately mapped to genes. Several groups have designed and generated RNAi reagent libraries for studies in cells or in vivo for Drosophila and other species. At first glance, matching RNAi reagents to genes appears to be a simple problem, as each reagent is typically designed to target a single gene. In practice, however, the reagent-gene relationship is complex. Although the sequences of oligonucleotides used to generate most types of RNAi reagents are static, the reference genome and gene annotations are regularly updated. Thus, at the time a researcher chooses an RNAi reagent or analyzes RNAi data, the most current interpretation of the RNAi reagent-gene relationship, as well as related information regarding specificity (e.g., predicted off-target effects), can be different from the original interpretation. Here, we describe a set of strategies and an accompanying online tool, UP-TORR (for Updated Targets of RNAi Reagents; www.flyrnai.org/up-torr), useful for accurate and up-to-date annotation of cell-based and in vivo RNAi reagents. Importantly, UP-TORR automatically synchronizes with gene annotations daily, retrieving the most current information available, and for Drosophila, also synchronizes with the major reagent collections. Thus, UP-TORR allows users to choose the most appropriate RNAi reagents at the onset of a study, as well as to perform the most appropriate analyses of results of RNAi-based studies.

  16. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    Science.gov (United States)

    Terra, Osama; Hussein, Hatem

    2016-06-01

Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate length measurements of different fiber lengths are performed using the time-of-flight technique. A setup is proposed to accurately measure lengths from 1 to 40 km at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time-interval counter to a Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to correct the fiber refractive index to allow accurate fiber length measurement.
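The basic time-of-flight relation is L = c·Δt/n_g, where n_g is the group refractive index of the fiber. A minimal sketch follows; the one-way geometry and the group index value 1.4682 (a typical figure for standard single-mode fiber near 1550 nm) are assumptions, not the paper's calibrated values:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s (exact by definition)

def fiber_length(delta_t, group_index=1.4682):
    """One-way time-of-flight fiber length: L = c * dt / n_g.
    delta_t is the measured propagation delay in seconds."""
    return C * delta_t / group_index

# A 10 km fiber delays a pulse by about 49 microseconds
dt = 10_000.0 * 1.4682 / C
length = fiber_length(dt)
```

The correction method mentioned at the end of the abstract amounts to refining `group_index`, since any error in n_g maps linearly into the inferred length.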

  17. Simple and accurate analytical calculation of shortest path lengths

    CERN Document Server

    Melnik, Sergey

    2016-01-01

We present an analytical approach to calculating the distribution of shortest-path lengths (also called intervertex distances, or geodesic paths) between nodes in unweighted undirected networks. We obtain very accurate results for synthetic random networks with specified degree distribution (the so-called configuration model networks). Our method allows us to accurately predict the distribution of shortest path lengths on real-world networks using their degree distribution, or joint degree-degree distribution. Compared to some other methods, our approach is simpler and yields more accurate results. In order to obtain the analytical results, we use the analogy between an infection reaching a node in $n$ discrete time steps (i.e., as in the susceptible-infected epidemic model) and that node being at a distance $n$ from the source of the infection.
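The quantity the analytical method predicts can be computed exactly on small graphs by brute force, one BFS per node. A sketch of that baseline (the 4-cycle example graph is invented for illustration):

```python
from collections import deque

def path_length_distribution(adj):
    """Empirical distribution of shortest-path lengths over all ordered
    node pairs of an unweighted undirected graph, given as an adjacency
    dict, computed by breadth-first search from every node."""
    counts = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for node, d in dist.items():
            if node != src:
                counts[d] = counts.get(d, 0) + 1
    total = sum(counts.values())
    return {d: c / total for d, c in counts.items()}

# 4-cycle: each node sees two neighbours at distance 1 and one at distance 2
dist = path_length_distribution({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]})
```

The BFS baseline costs O(N·(N+E)), which is exactly what the analytical approximation from the degree distribution avoids on large networks.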

  18. Highly Accurate Measurement of the Electron Orbital Magnetic Moment

    CERN Document Server

    Awobode, A M

    2015-01-01

We propose to accurately determine the orbital magnetic moment of the electron by measuring, in a Magneto-Optical or Ion trap, the ratio of the Lande g-factors in two atomic states. From the measurement of (gJ1/gJ2), the quantity A, which depends on the corrections to the electron g-factors, can be extracted if the states are LS coupled. Given that highly accurate values of the correction to the spin g-factor are currently available, accurate values of the correction to the orbital g-factor may also be determined. At present, (-1.8 +/- 0.4) x 10^-4 has been determined as a correction to the electron orbital g-factor, using earlier measurements of the ratio gJ1/gJ2 made on the Indium 2P1/2 and 2P3/2 states.
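The LS-coupled g-factors entering the ratio gJ1/gJ2 follow from the standard Lande formula. A sketch, assuming pure LS coupling and taking the spin g-factor as exactly 2 (i.e., before the small corrections the proposal aims to extract):

```python
def lande_g(J, L, S):
    """Lande g-factor of an LS-coupled state, with g_s = 2 exactly:
    g_J = 1 + (J(J+1) + S(S+1) - L(L+1)) / (2 J(J+1))."""
    return 1.0 + (J * (J + 1) + S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))

# Indium valence p electron: the 2P1/2 and 2P3/2 states (L=1, S=1/2)
g_half = lande_g(0.5, 1, 0.5)        # ideal value 2/3
g_three_half = lande_g(1.5, 1, 0.5)  # ideal value 4/3
ratio = g_half / g_three_half        # ideal value 1/2
```

The measured ratio deviates from the ideal 1/2 by terms involving the spin and orbital g-factor corrections, which is what makes the ratio measurement sensitive to the orbital correction.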

  19. Accurate level set method for simulations of liquid atomization

    Institute of Scientific and Technical Information of China (English)

    Changxiao Shao; Kun Luo; Jianshan Yang; Song Chen; Jianren Fan

    2015-01-01

Computational fluid dynamics is an efficient numerical approach for spray atomization study, but it is challenging to accurately capture the gas–liquid interface. In this work, an accurate conservative level set method is introduced to accurately track the gas–liquid interfaces in liquid atomization. To validate the capability of this method, binary drop collision and drop impact on a liquid film are investigated. The results are in good agreement with experimental observations. In addition, primary atomization (swirling sheet atomization) is studied using this method. For the swirling sheet atomization, it is found that Rayleigh–Taylor instability in the azimuthal direction causes the primary breakup of the liquid sheet, and complex vortex structures are clustered around the rim of the liquid sheet. The effects of central gas velocity and liquid–gas density ratio on atomization are also investigated. This work lays a solid foundation for further studying the mechanism of spray atomization.

  20. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    Science.gov (United States)

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. 
Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to be measured.
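The linearization idea described above can be illustrated with a small sketch. Each single-axis accelerometer at position r_i with sensing direction n_i measures n_i . (a + alpha x r_i + w x (w x r_i)); treating the angular velocity w as known from the previous time step (the finite-difference approximation) makes the six equations linear in the six unknowns (a, alpha). The sensor layout, the solver, and the update scheme below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of linearized rigid-body accelerometer equations (illustrative,
# not the published algorithm).  Unknowns: linear acceleration a (3) and
# angular acceleration alpha (3); angular velocity w is taken from the
# previous time step, which moves the nonlinear centripetal term to the
# right-hand side and leaves a linear 6x6 system.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def solve6(A, b):
    """Gaussian elimination with partial pivoting for a 6x6 system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c]*x[c] for c in range(r+1, n))) / M[r][r]
    return x

def solve_kinematics(sensors, measurements, w):
    """sensors: list of (r_i, n_i) tuples; returns (a, alpha) for known w."""
    A, b = [], []
    for (r, n), m in zip(sensors, measurements):
        # n . (alpha x r) == alpha . (r x n), so the row is [n, r x n]
        A.append(list(n) + list(cross(r, n)))
        # move the (known) centripetal contribution to the right-hand side
        b.append(m - dot(n, cross(w, cross(w, r))))
    x = solve6(A, b)
    return x[:3], x[3:]
```

In a full reconstruction, w would then be advanced with the recovered alpha (e.g. w = w + alpha*dt) before processing the next sample.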

  1. Method of accurate grinding for single enveloping TI worm

    Institute of Scientific and Technical Information of China (English)

    SUN Yuehai; ZHENG Huijiang; BI Qingzhen; WANG Shuren

    2005-01-01

    A TI worm drive consists of an involute helical gear and its enveloping hourglass worm. Accurate grinding of the TI worm is the key manufacturing technology for popularizing and applying TI worm gearing. According to the theory of gear meshing, the equations of the tooth surface of the worm drive are obtained, and the equation of the axial section profile of the grinding wheel that can accurately grind the TI worm is derived. Simultaneously, the position and motion relations between the TI worm and the grinding wheel are expounded. A method for precisely grinding the single enveloping TI worm is thus obtained.

  2. Equivalent method for accurate solution to linear interval equations

    Institute of Scientific and Technical Information of China (English)

    王冲; 邱志平

    2013-01-01

    Based on linear interval equations, an accurate interval finite element method for solving structural static problems with uncertain parameters in terms of optimization is discussed. On the premise of ensuring the consistency of solution sets, the original interval equations are equivalently transformed into deterministic inequations. On this basis, calculating the structural displacement response with interval parameters is reduced to a number of deterministic linear optimization problems. The results are proved to be accurate solutions of the interval governing equations. Finally, a numerical example is given to demonstrate the feasibility and efficiency of the proposed method.

  3. A powerful test of independent assortment that determines genome-wide significance quickly and accurately.

    Science.gov (United States)

    Stewart, W C L; Hager, V R

    2016-08-01

    In the analysis of DNA sequences on related individuals, most methods strive to incorporate as much information as possible, with little or no attention paid to the issue of statistical significance. For example, a modern workstation can easily handle the computations needed to perform a large-scale genome-wide inheritance-by-descent (IBD) scan, but accurate assessment of the significance of that scan is often hindered by inaccurate approximations and computationally intensive simulation. To address these issues, we developed gLOD, a test of co-segregation that, for large samples, models chromosome-specific IBD statistics as a collection of stationary Gaussian processes. With this simple model, the parametric bootstrap yields an accurate and rapid assessment of significance: the genome-wide corrected P-value. Furthermore, we show that (i) under the null hypothesis, the limiting distribution of the gLOD is the standard Gumbel distribution; (ii) our parametric bootstrap simulator is approximately 40 000 times faster than gene-dropping methods, and it is more powerful than methods that approximate the adjusted P-value; and (iii) the gLOD has the same statistical power as the widely used maximum Kong and Cox LOD. Thus, our approach gives researchers the ability to determine quickly and accurately the significance of most large-scale IBD scans, which may contain multiple traits, thousands of families and tens of thousands of DNA sequences.
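The parametric-bootstrap idea can be sketched generically: simulate each chromosome's statistic as a stationary Gaussian process, take the genome-wide maximum per replicate, and report the corrected P-value as the fraction of replicate maxima exceeding the observed one. The AR(1) correlation model and all parameter values below are assumptions for illustration, not the gLOD implementation.

```python
# Illustrative parametric bootstrap for a genome-wide corrected P-value
# (a sketch, not gLOD; the AR(1) model and parameters are assumptions).
import random

def simulate_chromosome(n_loci, rho, rng):
    """Stationary Gaussian AR(1) process with corr(Z_i, Z_j) = rho**|i-j|."""
    z = rng.gauss(0.0, 1.0)
    out = [z]
    s = (1.0 - rho * rho) ** 0.5
    for _ in range(n_loci - 1):
        z = rho * z + s * rng.gauss(0.0, 1.0)
        out.append(z)
    return out

def genomewide_pvalue(observed_max, chrom_sizes, rho=0.9,
                      n_boot=2000, seed=1):
    """Fraction of bootstrap replicates whose genome-wide max exceeds
    the observed statistic (the corrected P-value)."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_boot):
        m = max(max(simulate_chromosome(n, rho, rng)) for n in chrom_sizes)
        if m >= observed_max:
            exceed += 1
    return exceed / n_boot
```

Across replicates the simulated maxima cluster according to an approximately Gumbel-shaped law, consistent with the limiting distribution quoted in the abstract.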

  4. Accurate Measurement of the Relative Abundance of Different DNA Species in Complex DNA Mixtures

    Science.gov (United States)

    Jeong, Sangkyun; Yu, Hyunjoo; Pfeifer, Karl

    2012-01-01

    A molecular tool that can compare the abundances of different DNA sequences is necessary for comparing intergenic or interspecific gene expression. We devised and verified such a tool using a quantitative competitive polymerase chain reaction approach. For this approach, we adapted a competitor array, an artificially made plasmid DNA in which all the competitor templates for the target DNAs are arranged with a defined ratio, and melting analysis for allele quantitation for accurate quantitation of the fractional ratios of competitively amplified DNAs. Assays on two sets of DNA mixtures with explicitly known compositional structures of the test sequences were performed. The resultant average relative errors of 0.059 and 0.021 emphasize the highly accurate nature of this method. Furthermore, the method's capability of obtaining biological data is demonstrated by the fact that it can illustrate the tissue-specific quantitative expression signatures of the three housekeeping genes G6pdx, Ubc, and Rps27 by using the forms of the relative abundances of their transcripts, and the differential preferences of Igf2 enhancers for each of the multiple Igf2 promoters for the transcription. PMID:22334570

  5. Gene finding in novel genomes

    Directory of Open Access Journals (Sweden)

    Korf Ian

    2004-05-01

    Background: Computational gene prediction continues to be an important problem, especially for genomes with little experimental data. Results: I introduce the SNAP gene finder, which has been designed to be easily adaptable to a variety of genomes. In novel genomes without an appropriate gene finder, I demonstrate that employing a foreign gene finder can produce highly inaccurate results, and that the most compatible parameters may not come from the nearest phylogenetic neighbor. I find that foreign gene finders are more usefully employed to bootstrap parameter estimation and that the resulting parameters can be highly accurate. Conclusion: Since gene prediction is sensitive to species-specific parameters, every genome needs a dedicated gene finder.

  6. Accurate momentum transfer cross section for the attractive Yukawa potential

    OpenAIRE

    Khrapak, Sergey

    2014-01-01

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to within 2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  7. Accurate momentum transfer cross section for the attractive Yukawa potential

    OpenAIRE

    Khrapak, S. A.

    2014-01-01

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to within $\pm 2\%$ in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  8. $H_{2}^{+}$ ion in a strong magnetic field: an accurate calculation

    CERN Document Server

    López, J C; Turbiner, A V

    1997-01-01

    Using a unique trial function we perform an accurate calculation of the ground state $1\\sigma_g$ of the hydrogenic molecular ion $H^+_2$ in a constant uniform magnetic field ranging $0-10^{13}$ G. We show that this trial function also makes it possible to study the negative parity ground state $1\\sigma_u$.

  9. Is a Writing Sample Necessary for "Accurate Placement"?

    Science.gov (United States)

    Sullivan, Patrick; Nielsen, David

    2009-01-01

    The scholarship about assessment for placement is extensive and notoriously ambiguous. Foremost among the questions that continue to be unresolved in this scholarship is this one: Is a writing sample necessary for "accurate placement"? Using a robust data sample of student assessment essays and ACCUPLACER test scores, we put this question to the…

  10. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Science.gov (United States)

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  11. A Simple and Accurate Method for Measuring Enzyme Activity.

    Science.gov (United States)

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  12. Accurate Period Approximation for Any Simple Pendulum Amplitude

    Institute of Scientific and Technical Information of China (English)

    XUE De-Sheng; ZHOU Zhao; GAO Mei-Zhen

    2012-01-01

    Accurate approximate analytical formulae of the pendulum period, composed of a few elementary functions, are constructed for any amplitude. Based on an approximation of the elliptic integral, two new logarithmic formulae for large amplitudes close to 180° are obtained. Considering the trigonometric-function modulation that results from the dependence of the relative error on the amplitude, we realize accurate approximate period expressions for any amplitude between 0 and 180°. A relative error of less than 0.02% is achieved for any amplitude. This kind of modulation is also effective for other large-amplitude logarithmic approximation expressions.
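The quantities involved can be sketched generically: the exact pendulum period is T/T0 = (2/π) K(sin(θ0/2)), with K the complete elliptic integral of the first kind, and near 180° the standard asymptote K(k) ≈ ln(4/k') yields a logarithmic formula of the kind the abstract mentions. The sketch below uses the arithmetic-geometric mean to evaluate K; the paper's own modulated expressions are not reproduced here.

```python
# Exact pendulum period via the complete elliptic integral K (computed with
# the arithmetic-geometric mean), and the generic large-amplitude logarithmic
# asymptote T/T0 ~ (2/pi) * ln(4/cos(theta0/2)).  This is a generic sketch,
# not the paper's modulated formulae.
import math

def ellipk(k):
    """Complete elliptic integral K(k) via the AGM: K = pi / (2*AGM(1, k'))."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

def period_ratio_exact(theta0):
    """T / T0 for amplitude theta0 in radians (T0 = small-angle period)."""
    return (2.0 / math.pi) * ellipk(math.sin(theta0 / 2.0))

def period_ratio_log(theta0):
    """Logarithmic asymptote, accurate for theta0 close to pi."""
    return (2.0 / math.pi) * math.log(4.0 / math.cos(theta0 / 2.0))
```

For example, at θ0 = 90° the exact ratio is about 1.1803, and at θ0 = 179° the logarithmic asymptote already agrees with the exact value to well under 0.1%.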

  13. Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks

    NARCIS (Netherlands)

    Bahrepour, Majid; Meratnia, Nirvana; Havinga, Paul J.M.

    2010-01-01

    Prompt and accurate residential fire detection is important for on-time fire extinguishing and, consequently, reducing damage and loss of life. To detect fire, sensors are needed to measure the environmental parameters and algorithms are required to decide about the occurrence of fire. Recently, wireless s

  14. Accurate momentum transfer cross section for the attractive Yukawa potential

    Energy Technology Data Exchange (ETDEWEB)

    Khrapak, S. A., E-mail: Sergey.Khrapak@dlr.de [Forschungsgruppe Komplexe Plasmen, Deutsches Zentrum für Luft- und Raumfahrt, Oberpfaffenhofen (Germany)

    2014-04-15

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to within ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  15. Novel multi-beam radiometers for accurate ocean surveillance

    DEFF Research Database (Denmark)

    Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.;

    2014-01-01

    Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions...

  16. Accurate analysis of planar metamaterials using the RLC theory

    DEFF Research Database (Denmark)

    Malureanu, Radu; Lavrinenko, Andrei

    2008-01-01

    In this work we will present an accurate description of metallic pads response using RLC theory. In order to calculate such response we take into account several factors including the mutual inductances, precise formula for determining the capacitance and also the pads’ resistance considering the...

  17. Accurate segmentation of dense nanoparticles by partially discrete electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Roelandts, T., E-mail: tom.roelandts@ua.ac.be [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Batenburg, K.J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, 1098 XG Amsterdam (Netherlands); Biermans, E. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Kuebel, C. [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Sijbers, J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)

    2012-03-15

    Accurate segmentation of nanoparticles within various matrix materials is a difficult problem in electron tomography. Due to artifacts related to image series acquisition and reconstruction, global thresholding of reconstructions computed by established algorithms, such as weighted backprojection or SIRT, may result in unreliable and subjective segmentations. In this paper, we introduce the Partially Discrete Algebraic Reconstruction Technique (PDART) for computing accurate segmentations of dense nanoparticles of constant composition. The particles are segmented directly by the reconstruction algorithm, while the surrounding regions are reconstructed using continuously varying gray levels. As no properties are assumed for the other compositions of the sample, the technique can be applied to any sample where dense nanoparticles must be segmented, regardless of the surrounding compositions. For both experimental and simulated data, it is shown that PDART yields significantly more accurate segmentations than those obtained by optimal global thresholding of the SIRT reconstruction. -- Highlights: ► We present a novel reconstruction method for partially discrete electron tomography. ► It accurately segments dense nanoparticles directly during reconstruction. ► The gray level to use for the nanoparticles is determined objectively. ► The method expands the set of samples for which discrete tomography can be applied.

  18. Efficient and accurate sound propagation using adaptive rectangular decomposition.

    Science.gov (United States)

    Raghuvanshi, Nikunj; Narain, Rahul; Lin, Ming C

    2009-01-01

    Accurate sound rendering can add significant realism to complement visual display in interactive applications, as well as facilitate acoustic predictions for many engineering applications, like accurate acoustic analysis for architectural design. Numerical simulation can provide this realism most naturally by modeling the underlying physics of wave propagation. However, wave simulation has traditionally posed a tough computational challenge. In this paper, we present a technique which relies on an adaptive rectangular decomposition of 3D scenes to enable efficient and accurate simulation of sound propagation in complex virtual environments. It exploits the known analytical solution of the Wave Equation in rectangular domains, and utilizes an efficient implementation of the Discrete Cosine Transform on Graphics Processors (GPU) to achieve at least a 100-fold performance gain compared to a standard Finite-Difference Time-Domain (FDTD) implementation with comparable accuracy, while also being 10-fold more memory efficient. Consequently, we are able to perform accurate numerical acoustic simulation on large, complex scenes in the kilohertz range. To the best of our knowledge, it was not previously possible to perform such simulations on a desktop computer. Our work thus enables acoustic analysis on large scenes and auditory display for complex virtual environments on commodity hardware. PMID:19590105
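The key ingredient of the approach, exact per-mode time integration inside a rectangular partition, can be sketched in miniature. In a rectangular domain the wave equation decouples into cosine modes, each an independent harmonic oscillator, and the standard three-term recurrence below reproduces the analytic solution of each mode exactly for any step size. This is a minimal single-mode illustration, not the paper's DCT-on-GPU pipeline.

```python
# Exact time-stepping of one cosine mode of the wave equation in a
# rectangular domain (a sketch of the idea behind adaptive rectangular
# decomposition, not the paper's implementation).  Each mode with angular
# frequency w satisfies M'' = -w^2 M, and the recurrence
#     M[n+1] = 2*cos(w*dt)*M[n] - M[n-1]
# reproduces M(t) = A*cos(w*t) exactly for any dt (zero initial velocity).
import math

def step_mode(amplitude, w, dt, n_steps):
    m_prev = amplitude                        # M(0) = A
    m_curr = amplitude * math.cos(w * dt)     # M(dt), exact starting value
    for _ in range(n_steps - 1):
        m_prev, m_curr = m_curr, 2.0 * math.cos(w * dt) * m_curr - m_prev
    return m_curr                             # equals A*cos(w*n_steps*dt)
```

Because the recurrence is exact per mode, accuracy does not degrade with step size the way it does for finite-difference time stepping; in the full method one DCT per partition converts between the spatial field and these modal coefficients.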

  19. Practical schemes for accurate forces in quantum Monte Carlo

    NARCIS (Netherlands)

    Moroni, S.; Saccani, S.; Filippi, C.

    2014-01-01

    While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of

  20. On the importance of having accurate data for astrophysical modelling

    Science.gov (United States)

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for modelling molecular lines beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star-forming conditions, have allowed solving the problem of their respective abundances in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  1. Overview of BioCreative II gene normalization

    NARCIS (Netherlands)

    A.A. Morgan (Alexander); Z. Lu (Zhongbing); S.X. Wang; A.M. Cohen (Aaron); J. Fluck (Juliane); P. Ruch (Patrick); A. Divoli (Anna); K. Fundel (Katrin); R. Leaman (Robert); J. Hakenberg (Jörg); C.W. Sun; H.H. Liu (Hong); R. Torres (Rafael); M. Krauthammer (Michael); W.W. Lau (William); C.N. Hsu; M.J. Schuemie (Martijn); L. Hirschman (Lynette)

    2008-01-01

    Background: The goal of the gene normalization task is to link genes or gene products mentioned in the literature to biological databases. This is a key step in an accurate search of the biological literature. It is a challenging task, even for the human expert; genes are often described

  2. Immunoglobulin genes

    Energy Technology Data Exchange (ETDEWEB)

    Honjo, T. (Kyoto Univ. (Japan)); Alt, F.W. (Columbia Univ., Dobbs Ferry, NY (USA). Hudson Labs.); Rabbitts, T.H. (Medical Research Council, Cambridge (UK))

    1989-01-01

    This book reports on the structure, function, and expression of the genes encoding antibodies in normal and neoplastic cells. Topics covered are: B cells; Organization and rearrangement of immunoglobulin genes; Immunoglobulin genes in disease; Immunoglobulin gene expression; and Immunoglobulin-related genes.

  3. Accurate analysis of arbitrarily-shaped helical groove waveguide

    Institute of Scientific and Technical Information of China (English)

    Liu Hong-Tao; Wei Yan-Yu; Gong Yu-Bin; Yue Ling-Na; Wang Wen-Xiang

    2006-01-01

    This paper presents a theory for accurately analysing the dispersion relation and the interaction impedance of electromagnetic waves propagating through a helical groove waveguide with arbitrary groove shape, in which the complex groove profile is synthesized by a series of rectangular steps. By introducing the influence of high-order evanescent modes on the connection of any two neighbouring steps through an equivalent susceptance under a modified admittance-matching condition, the assumption of neglecting the discontinuity capacitance made in previously published analyses is avoided, and the accurate dispersion equation is obtained by means of a combination of the field-matching method and the admittance-matching technique. The validity of this theory is proved by comparison between measurements and numerical calculations for two kinds of helical groove waveguides with different groove shapes.

  4. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  5. Accurate and Simple Calibration of DLP Projector Systems

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP Projector systems based on phase shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here...... does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert features coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination...... of parameters including lens distortion. Our implementation acquires printed planar calibration scenes in less than 1s. This makes our method both fast and convenient. We evaluate our method in terms of reprojection errors and structured light image reconstruction quality....

  6. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    Science.gov (United States)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
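The comparison step can be sketched in a toy form: the channeled spectrum of an unbalanced interferometer with optical path difference d is proportional to 1 + cos(2πd/λ), so scanning a wavelength offset against the measured pattern reveals a calibration error. All numbers, the constant-offset error model, and the grid search below are illustrative assumptions, not the authors' procedure.

```python
# Toy illustration of refining a wavelength calibration against the channeled
# spectrum of an unbalanced Michelson interferometer (illustrative, not the
# authors' procedure).  The transmission is modeled as 1 + cos(2*pi*OPD/lam);
# a constant wavelength offset in the factory calibration is recovered by
# matching the predicted pattern to the measured one.
import math

OPD = 1.0e5  # optical path difference in nm (0.1 mm), assumed known

def pattern(wavelength_nm):
    return 1.0 + math.cos(2.0 * math.pi * OPD / wavelength_nm)

def recover_offset(factory_wavelengths, measured, search=(-1.0, 1.0),
                   step=0.01):
    """Grid-search the offset (nm) minimizing the model-data mismatch."""
    best, best_err = 0.0, float("inf")
    delta = search[0]
    while delta <= search[1] + 1e-12:
        err = sum((pattern(w + delta) - m) ** 2
                  for w, m in zip(factory_wavelengths, measured))
        if err < best_err:
            best, best_err = delta, err
        delta += step
    return best
```

In this toy setup, a simulated factory calibration that is off by 0.5 nm is recovered to the grid resolution.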

  7. A fast and accurate method for echocardiography strain rate imaging

    Science.gov (United States)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Recently, strain and strain rate imaging have proved their superiority with respect to classical motion estimation methods in myocardial evaluation as a novel technique for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique which is more rapid and accurate than previous correlation-based methods. The new method presumes a spatiotemporal constancy of intensity and magnitude of the image. Moreover, the method makes use of the spline moment in a multiresolution approach. The cardiac central point is obtained using a combination of center of mass and endocardial tracking. It is proved that the proposed method helps overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique for different motions and orientations. Evaluation is performed on simulated, phantom (a contractile rubber balloon) and real sequences and proves that this technique is more accurate and faster than previous methods.

  8. Simple and High-Accurate Schemes for Hyperbolic Conservation Laws

    Directory of Open Access Journals (Sweden)

    Renzhong Feng

    2014-01-01

    The paper constructs a class of simple high-accurate schemes (SHA schemes) with third-order approximation accuracy in both space and time for solving linear hyperbolic equations, using linear data reconstruction and the Lax-Wendroff scheme. The schemes can be made even fourth-order accurate with a special choice of parameter. In order to avoid spurious oscillations in the vicinity of strong gradients, we make the SHA schemes total variation diminishing (TVD schemes for short) by setting a flux limiter in their numerical fluxes, and then extend these schemes to solve the nonlinear Burgers' equation and Euler equations. The numerical examples show that these schemes give high order of accuracy and high-resolution results. The advantages of these schemes are their simplicity and high order of accuracy.

  9. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Directory of Open Access Journals (Sweden)

    Zhiwei Zhao

    2015-02-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.

  10. Accurate multireference study of Si3 electronic manifold

    CERN Document Server

    Goncalves, Cayo Emilio Monteiro; Braga, Joao Pedro

    2016-01-01

    Since it has been shown that the silicon trimer has a highly multi-reference character, accurate multi-reference configuration interaction calculations are performed to elucidate its electronic manifold. Emphasis is given to the long-range part of the potential, aiming to understand the dynamical aspects of atom-diatom collisions and to describe conical intersections and important saddle points along the reactive path. An analysis of the main features of the potential energy surface is performed for benchmarking, and highly accurate values for structures, vibrational constants and energy gaps are reported, as well as the previously unpublished spin-orbit coupling magnitude. The results predict that inter-system crossings will play an important role in dynamical simulations, especially in triplet-state quenching, making the problem of constructing a precise potential energy surface more complicated and multi-layer dependent. The ground state is predicted to be the singlet one, but since the singlet-triplet gap is rather small (2.448 kJ/mol) bo...

  11. Efficient and Accurate Robustness Estimation for Large Complex Networks

    CERN Document Server

    Wandelt, Sebastian

    2016-01-01

    Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks on a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate metrics require considerable computational effort, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
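The baseline notion of metric-induced robustness can be sketched as follows: attack the network by repeatedly removing the node ranked highest by some metric (here, degree centrality), record the fraction of nodes in the largest connected component after each removal, and average that fraction over the attack (a Schneider-style R value). This is a sketch of the baseline the abstract compares against, not the paper's sub-quadratic algorithm.

```python
# Baseline metric-induced robustness estimation (a sketch, not the paper's
# algorithm): remove nodes in order of decreasing degree and average the
# relative size of the largest connected component over the attack.
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        q, size = deque([start]), 0
        seen.add(start)
        while q:
            u = q.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, size)
    return best

def robustness(adj):
    """Schneider-style R under a static highest-degree attack."""
    n = len(adj)
    order = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    removed, total = set(), 0.0
    for u in order:
        removed.add(u)
        total += largest_component(adj, removed) / n
    return total / n
```

Each removal triggers a full component search, so this baseline is expensive on large graphs, which is exactly the cost the abstract's algorithm aims to avoid.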

  12. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    Science.gov (United States)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that together with a pre-calibrated camera enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  13. Library preparation for highly accurate population sequencing of RNA viruses

    Science.gov (United States)

    Acevedo, Ashley; Andino, Raul

    2015-01-01

    Circular resequencing (CirSeq) is a novel technique for efficient and highly accurate next-generation sequencing (NGS) of RNA virus populations. The foundation of this approach is the circularization of fragmented viral RNAs, which are then redundantly encoded into tandem repeats by ‘rolling-circle’ reverse transcription. When sequenced, the redundant copies within each read are aligned to derive a consensus sequence of their initial RNA template. This process yields sequencing data with error rates far below the variant frequencies observed for RNA viruses, facilitating ultra-rare variant detection and accurate measurement of low-frequency variants. Although library preparation takes ~5 d, the high-quality data generated by CirSeq simplifies downstream data analysis, making this approach substantially more tractable for experimentalists. PMID:24967624

  14. Accurate parameter estimation for unbalanced three-phase system.

    Science.gov (United States)

    Chen, Yuan; So, Hing Cheung

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS.
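    A minimal sketch of the preprocessing described above, assuming a balanced system: the αβ (Clarke) transform turns the three waveforms into an analytic signal whose phase slope gives the frequency. The full NLS/Newton-Raphson estimator for the unbalanced case is not reproduced here; the linear phase fit below is a simplified stand-in.

```python
import numpy as np

def clarke_alpha_beta(va, vb, vc):
    # Amplitude-invariant Clarke (alpha-beta) transform
    valpha = (2.0 * va - vb - vc) / 3.0
    vbeta = (vb - vc) / np.sqrt(3.0)
    return valpha, vbeta

def estimate_frequency(va, vb, vc, fs):
    """Least-squares frequency estimate from the analytic signal
    s = v_alpha + j*v_beta (balanced-system simplification)."""
    valpha, vbeta = clarke_alpha_beta(va, vb, vc)
    s = valpha + 1j * vbeta
    phase = np.unwrap(np.angle(s))
    n = np.arange(len(phase))
    # Linear fit: phase ~ (2*pi*f/fs) * n + phi
    slope, _ = np.polyfit(n, phase, 1)
    return slope * fs / (2.0 * np.pi)
```

    For a balanced set of cosines the transform yields exactly s = A·exp(j(2πf·n/fs + φ)), so the fitted slope recovers the frequency.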

  15. Accurate Load Modeling Based on Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Zhenshu Wang

    2016-01-01

    Establishing an accurate load model is a critical problem in power system modeling, and is of great significance for power system digital simulation and dynamic security analysis. The synthesis load model (SLM) considers the impact of the power distribution network and compensation capacitors, while the randomness of power load is more precisely described by the traction power system load model (TPSLM). On the basis of these two load models, a load modeling method that combines synthesis load with traction power load is proposed in this paper. The method uses the analytic hierarchy process (AHP) to combine the two load models: weight coefficients for the two models are calculated after formulating criteria and judgment matrices, and a synthesis model is then established from these weight coefficients. The effectiveness of the proposed method was examined through simulation. The results show that accurate load modeling based on AHP can effectively improve the accuracy of the load model and prove the validity of this method.
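    The AHP weighting step can be sketched as follows: the weights are the normalized principal eigenvector of the pairwise judgment matrix, with Saaty's consistency ratio as a sanity check. This is a generic AHP sketch, not the paper's specific criteria or matrices.

```python
import numpy as np

# Saaty's random consistency index for matrix sizes 1..9
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(judgment):
    """Weights = normalized principal eigenvector of the pairwise
    judgment matrix; also returns the consistency ratio CR = CI/RI."""
    a = np.asarray(judgment, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1) if n > 1 else 0.0
    cr = ci / RI[n] if RI.get(n, 0.0) > 0 else 0.0
    return w, cr
```

    A CR below the conventional 0.1 threshold indicates the pairwise judgments are acceptably consistent.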

  16. Accurate adjoint design sensitivities for nano metal optics.

    Science.gov (United States)

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature, we obtain highly accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483
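    The adjoint trick referred to above can be illustrated on a toy parameterized linear system (not Maxwell's equations): a single extra adjoint solve yields the design sensitivity dJ/dp, which can be checked against finite differences. The system, objective, and names below are illustrative assumptions.

```python
import numpy as np

def forward(p):
    # Toy parameterized linear system A(p) u = b
    A = np.array([[2.0 + p, 1.0],
                  [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    u = np.linalg.solve(A, b)
    return A, u

c = np.array([1.0, -1.0])  # objective J(p) = c^T u(p)

def adjoint_sensitivity(p):
    """dJ/dp via one adjoint solve: A^T lambda = c,
    then dJ/dp = -lambda^T (dA/dp) u."""
    A, u = forward(p)
    lam = np.linalg.solve(A.T, c)
    dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])  # only A[0,0] depends on p
    return -lam @ dA_dp @ u
```

    The key point is cost: however many design parameters there are, one forward solve plus one adjoint solve suffice, whereas finite differencing needs a solve per parameter.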

  17. Accurate quantum state estimation via "Keeping the experimentalist honest"

    CERN Document Server

    Blume-Kohout, R; Blume-Kohout, Robin; Hayden, Patrick

    2006-01-01

    In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.

  18. Continuous glucose monitors prove highly accurate in critically ill children

    OpenAIRE

    Bridges, Brian C.; Preissig, Catherine M; Maher, Kevin O.; Rigby, Mark R

    2010-01-01

    Introduction Hyperglycemia is associated with increased morbidity and mortality in critically ill patients and strict glycemic control has become standard care for adults. Recent studies have questioned the optimal targets for such management and reported increased rates of iatrogenic hypoglycemia in both critically ill children and adults. The ability to provide accurate, real-time continuous glucose monitoring would improve the efficacy and safety of this practice in critically ill patients...

  19. A novel automated image analysis method for accurate adipocyte quantification

    OpenAIRE

    Osman, Osman S.; Selway, Joanne L; Kępczyńska, Małgorzata A; Stocker, Claire J.; O’Dowd, Jacqueline F; Cawthorne, Michael A.; Arch, Jonathan RS; Jassim, Sabah; Langlands, Kenneth

    2013-01-01

    Increased adipocyte size and number are associated with many of the adverse effects observed in metabolic disease states. While methods to quantify such changes in the adipocyte are of scientific and clinical interest, manual methods to determine adipocyte size are both laborious and intractable to large scale investigations. Moreover, existing computational methods are not fully automated. We, therefore, developed a novel automatic method to provide accurate measurements of the cross-section...

  20. Ultra accurate collaborative information filtering via directed user similarity

    OpenAIRE

    Guo, Qiang; Song, Wen-Jun; Liu, Jian-Guo

    2014-01-01

    A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users are larger than those in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influen...

  1. Evaluation of accurate eye corner detection methods for gaze estimation

    OpenAIRE

    Bengoechea, Jose Javier; Cerrolaza, Juan J.; Villanueva, Arantxa; Cabeza, Rafael

    2014-01-01

    Accurate detection of iris center and eye corners appears to be a promising approach for low cost gaze estimation. In this paper we propose novel eye inner corner detection methods. Appearance and feature based segmentation approaches are suggested. All these methods are exhaustively tested on a realistic dataset containing images of subjects gazing at different points on a screen. We have demonstrated that a method based on a neural network presents the best performance even in light changin...

  2. An accurate and robust gyroscope-based pedometer.

    Science.gov (United States)

    Lim, Yoong P; Brown, Ian T; Khoo, Joshua C T

    2008-01-01

    Pedometers are known to have step estimation issues, mainly attributed to their innate acceleration-based sensing. A pedometer based on a micro-machined gyroscope (which has better immunity to acceleration) is proposed. Through syntactic data recognition of a priori knowledge of the human shank's dynamics and temporally precise detection of heel strikes permitted by wavelet decomposition, an accurate and robust pedometer is obtained. PMID:19163737

  3. A highly accurate method to solve Fisher’s equation

    Indian Academy of Sciences (India)

    Mehdi Bastani; Davod Khojasteh Salkuyeh

    2012-03-01

    In this study, we present a new and very accurate numerical method to approximate Fisher-type equations. First, the spatial derivative in the proposed equation is approximated by a sixth-order compact finite difference (CFD6) scheme. Second, we solve the obtained system of differential equations using a third-order total variation diminishing Runge–Kutta (TVD-RK3) scheme. Numerical examples are given to illustrate the efficiency of the proposed method.

  4. A robust and accurate formulation of molecular and colloidal electrostatics

    Science.gov (United States)

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries, with the following benefits: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) quadratic or spline surface elements can be used to represent the surface more accurately, with the variation of the functions within each element represented to a consistent level of precision by appropriate interpolation functions, (iii) electric fields can be calculated accurately and directly from the potential, even at boundaries, without having to solve hypersingular integral equations, which imparts high precision in calculating the Maxwell stress tensor and consequently intermolecular or colloidal forces, (iv) geometric configurations in which different parts of the boundary are very close together can be handled reliably without numerical instabilities, so that potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) the simplicity of a formulation that does not require complex algorithms to handle singularities results in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.

  5. Accurate Method for Determining Adhesion of Cantilever Beams

    Energy Technology Data Exchange (ETDEWEB)

    Michalske, T.A.; de Boer, M.P.

    1999-01-08

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.

  6. Accurate calculation of thermal noise in multilayer coating

    OpenAIRE

    Gurkovsky, Alexey; Vyatchanin, Sergey

    2010-01-01

    We derive accurate formulas for thermal fluctuations in multilayer interferometric coatings, taking into account light propagation inside the coating. In particular, we calculate the reflected wave phase as a function of small displacements of the boundaries between the layers using a transmission line model for the interferometric coating, and derive a formula for the spectral density of the reflected phase in accordance with the Fluctuation-Dissipation Theorem. We apply the developed approach for calculation of t...

  7. Fast and Accurate Bilateral Filtering using Gauss-Polynomial Decomposition

    OpenAIRE

    Chaudhury, Kunal N.

    2015-01-01

    The bilateral filter is a versatile non-linear filter that has found diverse applications in image processing, computer vision, computer graphics, and computational photography. A widely-used form of the filter is the Gaussian bilateral filter in which both the spatial and range kernels are Gaussian. A direct implementation of this filter requires $O(\\sigma^2)$ operations per pixel, where $\\sigma$ is the standard deviation of the spatial Gaussian. In this paper, we propose an accurate approxi...
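    For reference, a direct (brute-force) implementation of the Gaussian bilateral filter, i.e., the O(σ²)-per-pixel baseline that the paper's approximation accelerates, looks like this; parameter names and defaults are ours.

```python
import numpy as np

def bilateral_filter(img, sigma_s, sigma_r, radius=None):
    """Direct Gaussian bilateral filter on a 2D image: each output
    pixel is a weighted mean whose weights combine a spatial Gaussian
    (sigma_s) and a range Gaussian on intensity differences (sigma_r)."""
    if radius is None:
        radius = int(3 * sigma_s)
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

    The range kernel is what makes the filter edge-preserving: across a strong edge the intensity difference drives the weight to (nearly) zero, so pixels on the far side barely contribute.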

  8. Accurate, inexpensive testing of laser pointer power for safe operation

    International Nuclear Information System (INIS)

    An accurate, inexpensive test-bed for the measurement of optical power emitted from handheld lasers is described. The setup consists of a power meter, optical bandpass filters, an adjustable iris and self-centering lens mounts. We demonstrate this test-bed by evaluating the output power of 23 laser pointers with respect to the limits imposed by the US Code of Federal Regulations. We find a compliance rate of only 26%. A discussion of potential laser pointer hazards is included. (paper)

  9. Building with Drones: Accurate 3D Facade Reconstruction using MAVs

    OpenAIRE

    Daftry, Shreyansh; Hoppe, Christof; Bischof, Horst

    2015-01-01

    Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion methods has been one of the most fruitful outcomes of computer vision. These advances, combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous for a large number of Architecture, Engineering and Construction applications, among audiences mostly unskilled in computer vision. However, to obtain high-resolution and accurate reconstruc...

  10. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry

    OpenAIRE

    Fuchs, Franz G.; Hjelmervik, Jon M.

    2014-01-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire...

  11. Accurate Multisteps Traffic Flow Prediction Based on SVM

    OpenAIRE

    Zhang Mingheng; Zhen Yaobao; Hui Ganglong; Chen Gang

    2013-01-01

    Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement of intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as support vector machines (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, the mul...

  12. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  13. An accurate metric for the spacetime around neutron stars

    CERN Document Server

    Pappas, George

    2016-01-01

    The problem of having an accurate description of the spacetime around neutron stars is of great astrophysical interest. For astrophysical applications, one needs a metric that captures all the properties of the spacetime around a neutron star. Furthermore, an accurate, appropriately parameterised metric, i.e., a metric given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work we present such an approximate stationary and axisymmetric metric for the exterior of neutron stars, which is constructed using the Ernst formalism and is parameterised by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical propert...

  14. Accurate genome relative abundance estimation based on shotgun metagenomic reads.

    Directory of Open Access Journals (Sweden)

    Li C Xia

    Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or variants thereof, and often yield biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) by explicitly modeling read assignment ambiguities, genome size biases and read distributions along the genomes. A maximum likelihood method is employed to compute the Genome Relative Abundance of microbial communities using mixture model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the datasets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance of Bacteroides species in human gut samples, by providing a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition-based), even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
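    The core mixture-model idea can be sketched with a toy EM loop: ambiguously matching reads are fractionally assigned to genomes (E-step), the mixing proportions are re-estimated (M-step), and genome-size bias is corrected at the end. This is a heavy simplification of GRAMMy, assuming every read matches at least one genome; all names are ours.

```python
import numpy as np

def em_abundance(like, genome_lengths, n_iter=200):
    """EM estimate of genome relative abundances.
    like[r, g] = P(read r | genome g), zero for non-matching genomes
    (each read is assumed to match at least one genome)."""
    n_reads, n_genomes = like.shape
    theta = np.full(n_genomes, 1.0 / n_genomes)  # read-origin proportions
    for _ in range(n_iter):
        w = like * theta                    # E-step: unnormalized posteriors
        w /= w.sum(axis=1, keepdims=True)   # fractional read assignments
        theta = w.sum(axis=0) / n_reads     # M-step: new mixing proportions
    # Convert read-origin proportions to genome relative abundance
    a = theta / np.asarray(genome_lengths, dtype=float)
    return a / a.sum()
```

    The length division reflects that a longer genome generates proportionally more reads at equal cell abundance, which is the genome-size bias the abstract mentions.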

  15. Is bioelectrical impedance accurate for use in large epidemiological studies?

    Directory of Open Access Journals (Sweden)

    Merchant Anwar T

    2008-09-01

    Percentage of body fat is strongly associated with the risk of several chronic diseases, but its accurate measurement is difficult. Bioelectrical impedance analysis (BIA) is a relatively simple, quick and non-invasive technique to measure body composition. It measures body fat accurately in controlled clinical conditions, but its performance in the field is inconsistent. In large epidemiologic studies, simpler surrogate techniques such as body mass index (BMI), waist circumference, and waist-hip ratio are frequently used instead of BIA to measure body fatness. We reviewed the rationale, theory, and technique of recently developed systems such as foot-to-foot (or hand-to-foot) BIA measurement, and the elements that could influence its results in large epidemiologic studies. BIA results are influenced by factors such as the environment, ethnicity, phase of menstrual cycle, and underlying medical conditions. We conclude that BIA measurements validated for specific ethnic groups, populations and conditions can accurately measure body fat in those populations, but not in others, and suggest that for large epidemiological studies with diverse populations BIA may not be the appropriate choice for body composition measurement unless specific calibration equations are developed for the different groups participating in the study.

  16. Can blind persons accurately assess body size from the voice?

    Science.gov (United States)

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264

  17. A simplified and accurate detection of the genetically modified wheat MON71800 with one calibrator plasmid.

    Science.gov (United States)

    Kim, Jae-Hwan; Park, Saet-Byul; Roh, Hyo-Jeong; Park, Sunghoon; Shin, Min-Ki; Moon, Gui Im; Hong, Jin-Hwan; Kim, Hae-Yeong

    2015-06-01

    With the increasing number of genetically modified (GM) events, unauthorized GMO releases into the food market have increased dramatically, and many countries have developed detection tools for them. This study describes qualitative and quantitative detection methods for the unauthorized GM wheat MON71800 using a reference plasmid (pGEM-M71800). The wheat acetyl-CoA carboxylase (acc) gene was used as the endogenous gene. The plasmid pGEM-M71800, which contains both the acc gene and the event-specific target MON71800, was constructed as a positive control for the qualitative and quantitative analyses. The limit of detection in the qualitative PCR assay was approximately 10 copies. In the quantitative PCR assay, the standard deviation and relative standard deviation repeatability values ranged from 0.06 to 0.25 and from 0.23% to 1.12%, respectively. This study supplies a powerful and very simple but accurate detection strategy for the unauthorized GM wheat MON71800 that utilizes a single calibrator plasmid.

  18. SVM-Based Multi-Agent Negotiation Partner Selection

    Institute of Scientific and Technical Information of China (English)

    谷琦松; 刘胜全

    2012-01-01

    According to the interactive features of the multi-agent negotiation problem, an SVM (Support Vector Machine) classification method is introduced to learn from an Agent's negotiation history. Samples are extracted from the Agent's negotiation history to train the SVM and, by combining a simulated negotiation process with the Agent's own decision-making information, the likely outcome of negotiating with a particular partner and the corresponding negotiation revenue are predicted. Thus, following the Agent's self-interest principle, the most appropriate negotiation partner is selected. Finally, the effectiveness and superiority of the proposed method are verified through simulation experiments.

  19. On the Use of Time–Frequency Reassignment and SVM-Based Classifier for Audio Surveillance Applications

    Directory of Open Access Journals (Sweden)

    Souli S. Sameh

    2014-11-01

    In this paper, we propose a robust environmental sound spectrogram classification approach for surveillance and security applications, based on the reassignment method and log-Gabor filters. The reassignment method is applied to the spectrogram to improve the readability of the time-frequency representation and to ensure better localization of the signal components. Our approach includes three methods. In the first two, the reassigned spectrograms are passed through appropriate log-Gabor filter banks, and the outputs are averaged and undergo an optimal feature selection procedure based on a mutual information criterion. The third method uses the same steps but applied only to three patches extracted from each reassigned spectrogram. The proposed approach is tested on a large database consisting of 1000 sounds belonging to ten classes. The recognition is based on multiclass Support Vector Machines.

  20. Full-polarization radar remote sensing and data mining for tropical crops mapping: a successful SVM-based classification model

    Science.gov (United States)

    Denize, J.; Corgne, S.; Todoroff, P.; LE Mezo, L.

    2015-12-01

    In Reunion, a tropical island of 2,512 km² located 700 km east of Madagascar in the Indian Ocean and constrained by a rugged relief, agricultural sectors compete for highly fragmented agricultural land occupied by heterogeneous farming systems, from corporate to small-scale farming. Policymakers, planners and institutions are in dire need of reliable and updated land use references. Conventional land use mapping methods are inefficient in the tropics, with frequent cloud cover and loosely synchronous vegetative cycles of the crops due to constant temperature. This study aims to provide an appropriate method for the identification and mapping of tropical crops by remote sensing. For this purpose, we assess the potential of polarimetric SAR imagery associated with machine learning algorithms. The method was developed and tested on a study area of 25x25 km using six full-polarization RADARSAT-2 images acquired in 2014. A set of radar indicators (backscatter coefficients, band ratios, indices, polarimetric decompositions (Freeman-Durden, Van Zyl, Yamaguchi, Cloude-Pottier, Krogager), texture, etc.) was calculated from the coherency matrix. A random forest procedure allowed the selection of the most important variables in each image, reducing the dimension of the dataset and the processing time. Support Vector Machines (SVM) allowed the classification of these indicators based on a learning database created from field observations in 2013. The method shows an overall accuracy of 88% with a Kappa index of 0.82 for the identification of four major crops.
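    A hedged sketch of the select-then-classify pipeline described above: here a simple Fisher-score ranking stands in for the random-forest importances, and a Pegasos-trained linear SVM (with a simple bias variant) stands in for the full SVM; the data and all names are synthetic.

```python
import numpy as np

def fisher_scores(X, y):
    """Rank features by between-class vs within-class variance,
    a simple stand-in for random-forest importances. y in {0, 1}."""
    scores = []
    for j in range(X.shape[1]):
        xj = X[:, j]
        m0, m1 = xj[y == 0].mean(), xj[y == 1].mean()
        v0, v1 = xj[y == 0].var(), xj[y == 1].var()
        scores.append((m0 - m1) ** 2 / (v0 + v1 + 1e-12))
    return np.argsort(scores)[::-1]  # best feature first

def pegasos_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Linear SVM via the Pegasos stochastic subgradient method.
    y in {-1, +1}; returns weight vector w and bias b."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:      # margin violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b
```

    On synthetic two-class "crop" data with a few informative features buried among noise features, ranking first and classifying on the top features mirrors the dimension-reduction rationale of the study.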

  1. Classification of Convective and Stratiform Cells in Meteorological Radar Images Using SVM Based on a Textural Analysis

    Institute of Scientific and Technical Information of China (English)

    Abdenasser Djafri; Boualem Haddad

    2014-01-01

    This contribution deals with the discrimination between stratiform and convective cells in meteorological radar images. The study is based on a textural analysis of these images and their classification using a support vector machine (SVM). First, we apply different textural parameters such as energy, entropy, inertia, and local homogeneity. Through this analysis, we identify the different textural features of both the stratiform and convective cells. Then, we use an SVM to find the best discriminating parameter between the two types of clouds. The main goal of this work is to better apply the Palmer and Marshall Z-R relations specific to each type of precipitation.

  2. SVM-Based Classification of Segmented Airborne LiDAR Point Clouds in Urban Areas

    OpenAIRE

    Xiaogang Ning; Xiangguo Lin; Jixian Zhang

    2013-01-01

    Object-based point cloud analysis (OBPA) is useful for information extraction from airborne LiDAR point clouds. An object-based classification method is proposed for classifying the airborne LiDAR point clouds in urban areas herein. In the process of classification, the surface growing algorithm is employed to make clustering of the point clouds without outliers, thirteen features of the geometry, radiometry, topology and echo characteristics are calculated, a support vector machine (SVM) is ...

  3. SVM-based prediction of propeptide cleavage sites in spider toxins identifies toxin innovation in an Australian tarantula.

    Directory of Open Access Journals (Sweden)

    Emily S W Wong

    Full Text Available Spider neurotoxins are commonly used as pharmacological tools and are a popular source of novel compounds with therapeutic and agrochemical potential. Since venom peptides are inherently toxic, the host spider must employ strategies to avoid adverse effects prior to venom use. It is partly for this reason that most spider toxins encode a protective proregion that upon enzymatic cleavage is excised from the mature peptide. In order to identify the mature toxin sequence directly from toxin transcripts, without resorting to protein sequencing, the propeptide cleavage site in the toxin precursor must be predicted bioinformatically. We evaluated different machine learning strategies (support vector machines, hidden Markov model and decision tree) and developed an algorithm (SpiderP) for prediction of propeptide cleavage sites in spider toxins. Our strategy uses a support vector machine (SVM) framework that combines both local and global sequence information. Our method is superior or comparable to current tools for prediction of propeptide sequences in spider toxins. Evaluation of the SVM method on an independent test set of known toxin sequences yielded 96% sensitivity and 100% specificity. Furthermore, we sequenced five novel peptides (not used to train the final predictor) from the venom of the Australian tarantula Selenotypus plumipes to test the accuracy of the predictor and found 80% sensitivity and 99.6% 8-mer specificity. Finally, we used the predictor together with homology information to predict and characterize seven groups of novel toxins from the deeply sequenced venom gland transcriptome of S. plumipes, which revealed structural complexity and innovations in the evolution of the toxins. The precursor prediction tool (SpiderP) is freely available on ArachnoServer (http://www.arachnoserver.org/spiderP.html), a web portal to a comprehensive relational database of spider toxins. All training data, test data, and scripts used are available from the SpiderP website.
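
A minimal sketch of the windowed-sequence classification idea behind such predictors: residues around each candidate cleavage site are one-hot encoded and fed to an SVM. The sequences and the simplified "R before the cut site" motif below are synthetic illustrations; SpiderP's actual features combine local and global sequence information.

```python
import numpy as np
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(0)

def encode(window):
    """One-hot encode a fixed-length residue window."""
    v = np.zeros(len(window) * len(AA))
    for i, aa in enumerate(window):
        v[i * len(AA) + AA.index(aa)] = 1.0
    return v

def random_window(k=8):
    return "".join(rng.choice(list(AA), size=k))

# Synthetic training set: positives carry an 'R' just before the cut site
# (a simplified processing motif); negatives are random windows without it.
pos = [w[:3] + "R" + w[4:] for w in (random_window() for _ in range(200))]
neg = [w for w in (random_window() for _ in range(200)) if w[3] != "R"]
X = np.array([encode(w) for w in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("motif window classified as:", clf.predict([encode("AAARAAAA")])[0])
```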

  4. DTC-SVM Based on PI Torque and PI Flux Controllers to Achieve High Performance of Induction Motor

    Directory of Open Access Journals (Sweden)

    Hassan Farhan Rashag

    2014-01-01

    Full Text Available The fundamental idea of direct torque control of induction machines is investigated in order to emphasize the effect produced by a given voltage vector on stator flux and torque variations. The proposed control system is based on Space Vector Modulation (SVM) of electrical machines, an improved model reference adaptive system, real-time estimation of the stator resistance, and estimation of the stator flux. The purpose of this control is to minimize electromagnetic torque and flux ripple and to minimize distortion of the stator current. In the proposed method, PI torque and PI flux controllers are designed to track the reference torque and flux with fast response and no steady-state error. In addition, the PI torque and PI flux controllers are used to optimize the voltages in the d-q reference frame that are applied to the SVM. Simulation results show that the proposed DTC-SVM achieves excellent performance in steady and transient states compared with classical DTC-SVM.
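
A minimal sketch of the PI regulation idea described above: a discrete PI controller driving a first-order stand-in for the torque dynamics toward its reference with no steady-state error. The gains and plant constants are illustrative assumptions, not values from the paper.

```python
def pi_controller(kp, ki, dt):
    """Return a discrete PI control law u = kp*e + ki*integral(e)."""
    integral = 0.0
    def step(error):
        nonlocal integral
        integral += error * dt
        return kp * error + ki * integral
    return step

dt, tau, gain = 1e-3, 0.05, 1.0      # sample time, plant time constant and gain
ctrl = pi_controller(kp=2.0, ki=40.0, dt=dt)
torque, ref = 0.0, 10.0              # initial torque, reference torque (N*m)
for _ in range(5000):                # 5 s of simulated time
    u = ctrl(ref - torque)
    torque += dt * (gain * u - torque) / tau   # first-order plant response
print(f"final torque: {torque:.3f} N*m")
```

The integral term is what removes the steady-state error: any persistent offset keeps accumulating until the control output cancels it.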

  5. Detection of subsurface metallic utilities by means of a SAP technique: Comparing MUSIC- and SVM-based approaches

    Science.gov (United States)

    Meschino, Simone; Pajewski, Lara; Pastorino, Matteo; Randazzo, Andrea; Schettini, Giuseppe

    2013-10-01

    The identification of buried cables, pipes, conduits, and other cylindrical utilities is a very important task in civil engineering. In recent years, several methods have been proposed in the literature for tackling this problem. The most commonly employed approaches are based on the use of Ground Penetrating Radar, i.e., they extract the needed information about the unknown scenario from the electromagnetic field collected by a set of antennas. In the present paper, a statistical method based on the use of smart antenna techniques is used for the localization of a single buried object. In particular, two efficient algorithms for the estimation of the directions of arrival of the electromagnetic waves scattered by the targets, namely MUltiple SIgnal Classification and Support Vector Regression, are considered and their performances are compared.
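
The MUSIC direction-of-arrival step mentioned above can be sketched for a uniform linear array and a single narrowband source; the array geometry, noise level, and source angle below are illustrative assumptions.

```python
import numpy as np

M, d, snap = 8, 0.5, 200          # sensors, spacing (wavelengths), snapshots
true_deg = 23.0                   # assumed source direction
rng = np.random.default_rng(1)

def steering(theta_deg):
    """Array response of an M-element uniform linear array."""
    theta = np.deg2rad(theta_deg)
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

# Simulated snapshots: one complex source signal plus white noise.
s = rng.normal(size=snap) + 1j * rng.normal(size=snap)
noise = 0.1 * (rng.normal(size=(M, snap)) + 1j * rng.normal(size=(M, snap)))
X = np.outer(steering(true_deg), s) + noise

R = X @ X.conj().T / snap          # sample covariance matrix
w, V = np.linalg.eigh(R)           # eigenvalues in ascending order
En = V[:, :-1]                     # noise subspace (one source assumed)

# MUSIC pseudospectrum: large where the steering vector is orthogonal
# to the noise subspace, i.e. at the true direction of arrival.
grid = np.arange(-90, 90.1, 0.1)
spec = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
print(f"estimated DOA: {grid[int(np.argmax(spec))]:.1f} deg")
```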

  6. A COMPARISON STUDY OF DIFFERENT KERNEL FUNCTIONS FOR SVM-BASED CLASSIFICATION OF MULTI-TEMPORAL POLARIMETRY SAR DATA

    Directory of Open Access Journals (Sweden)

    B. Yekkehkhany

    2014-10-01

    Full Text Available In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions, including linear, polynomial and Radial Basis Function (RBF) kernels, are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% compared with a linear kernel function, and by up to 1% compared with a 3rd-degree polynomial kernel function.
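
The kernel comparison described above can be reproduced in miniature: the same SVM trained with linear, 3rd-degree polynomial, and RBF kernels, scored by cross-validated overall accuracy. The data are a synthetic stand-in for the temporal alpha features, and the class/sample counts are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for multi-temporal polarimetric feature vectors.
X, y = make_classification(n_samples=600, n_features=9, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

scores = {}
for kernel in ("linear", "poly", "rbf"):
    svm = SVC(kernel=kernel, degree=3, gamma="scale")
    scores[kernel] = cross_val_score(svm, X, y, cv=5).mean()
    print(f"{kernel:>6}: OA = {scores[kernel]:.3f}")
```

Which kernel wins depends on the data; on the real features the paper reports the ranking RBF > polynomial > linear.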

  7. Binary classification SVM-based algorithms with interval-valued training data using triangular and Epanechnikov kernels.

    Science.gov (United States)

    Utkin, Lev V; Chekh, Anatoly I; Zhuk, Yulia A

    2016-08-01

    Classification algorithms based on different forms of support vector machines (SVMs) for dealing with interval-valued training data are proposed in the paper. L2-norm and L∞-norm SVMs are used for constructing the algorithms. The main idea allowing us to represent the complex optimization problems as a set of simple linear or quadratic programming problems is to approximate the Gaussian kernel by the well-known triangular and Epanechnikov kernels. The minimax strategy is used to choose an optimal probability distribution from the set and to construct optimal separating functions. Numerical experiments illustrate the algorithms. PMID:27179616
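
The kernel substitution at the heart of this approach can be illustrated numerically: the sketch below compares a unit-width Gaussian kernel with triangular and Epanechnikov kernels as functions of the input distance. The bandwidths h are illustrative choices, not values from the paper.

```python
import numpy as np

def gaussian(u, sigma=1.0):
    return np.exp(-u**2 / (2 * sigma**2))

def triangular(u, h=2.5):
    # Piecewise-linear kernel: leads to linear programming subproblems.
    return np.maximum(0.0, 1.0 - np.abs(u) / h)

def epanechnikov(u, h=2.2):
    # Piecewise-quadratic kernel: leads to quadratic programming subproblems.
    return np.maximum(0.0, 1.0 - (u / h)**2)

u = np.linspace(-3, 3, 121)
errs = {}
for name, k in [("triangular", triangular), ("epanechnikov", epanechnikov)]:
    errs[name] = float(np.max(np.abs(gaussian(u) - k(u))))
    print(f"max |gaussian - {name}| on [-3, 3]: {errs[name]:.3f}")
```

Both substitutes are compactly supported polynomials in the distance, which is what turns the original optimization into a set of simple linear or quadratic programs.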

  8. SVM-based CAD system for early detection of the Alzheimer's disease using kernel PCA and LDA.

    Science.gov (United States)

    López, M M; Ramírez, J; Górriz, J M; Alvarez, I; Salas-Gonzalez, D; Segovia, F; Chaves, R

    2009-10-30

    Single-photon emission computed tomography (SPECT) imaging has been widely used to guide clinicians in the early Alzheimer's disease (AD) diagnosis challenge. However, AD detection still relies on subjective steps carried out by clinicians, which introduce subjectivity into the final diagnosis. In this work, kernel principal component analysis (PCA) and linear discriminant analysis (LDA) are applied to functional images as dimension reduction and feature extraction techniques, and the extracted features are subsequently used to train a supervised support vector machine (SVM) classifier. The complete methodology provides a kernel-based computer-aided diagnosis (CAD) system capable of distinguishing AD from normal subjects with an accuracy of 92.31% on a SPECT database of 91 patients. The proposed methodology outperforms the baseline voxels-as-features (VAF) approach, which yields 80.22% on the same database.
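
The pipeline described above (kernel PCA feature extraction feeding an SVM) can be sketched as follows. LDA is omitted for brevity, the voxel data are replaced by a synthetic two-class set, and the component count and kernels are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for SPECT voxel intensities (subjects x voxels).
X, y = make_classification(n_samples=91, n_features=500, n_informative=20,
                           random_state=0)

# Kernel PCA compresses the high-dimensional voxel space into a few
# nonlinear components before the supervised SVM stage.
cad = make_pipeline(StandardScaler(),
                    KernelPCA(n_components=20, kernel="rbf"),
                    SVC(kernel="linear"))
acc = cross_val_score(cad, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.3f}")
```

Wrapping the stages in a single pipeline ensures the dimension reduction is refit inside each cross-validation fold, avoiding information leakage from the test subjects.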

  9. SVM-Based Prediction of Propeptide Cleavage Sites in Spider Toxins Identifies Toxin Innovation in an Australian Tarantula

    OpenAIRE

    Wong, Emily S. W.; Hardy, Margaret C.; David Wood; Timothy Bailey; Glenn F. King

    2013-01-01

    Spider neurotoxins are commonly used as pharmacological tools and are a popular source of novel compounds with therapeutic and agrochemical potential. Since venom peptides are inherently toxic, the host spider must employ strategies to avoid adverse effects prior to venom use. It is partly for this reason that most spider toxins encode a protective proregion that upon enzymatic cleavage is excised from the mature peptide. In order to identify the mature toxin sequence directly from toxin tran...

  10. SVM-based prediction of propeptide cleavage sites in spider toxins identifies toxin innovation in an Australian tarantula.

    Science.gov (United States)

    Wong, Emily S W; Hardy, Margaret C; Wood, David; Bailey, Timothy; King, Glenn F

    2013-01-01

    Spider neurotoxins are commonly used as pharmacological tools and are a popular source of novel compounds with therapeutic and agrochemical potential. Since venom peptides are inherently toxic, the host spider must employ strategies to avoid adverse effects prior to venom use. It is partly for this reason that most spider toxins encode a protective proregion that upon enzymatic cleavage is excised from the mature peptide. In order to identify the mature toxin sequence directly from toxin transcripts, without resorting to protein sequencing, the propeptide cleavage site in the toxin precursor must be predicted bioinformatically. We evaluated different machine learning strategies (support vector machines, hidden Markov model and decision tree) and developed an algorithm (SpiderP) for prediction of propeptide cleavage sites in spider toxins. Our strategy uses a support vector machine (SVM) framework that combines both local and global sequence information. Our method is superior or comparable to current tools for prediction of propeptide sequences in spider toxins. Evaluation of the SVM method on an independent test set of known toxin sequences yielded 96% sensitivity and 100% specificity. Furthermore, we sequenced five novel peptides (not used to train the final predictor) from the venom of the Australian tarantula Selenotypus plumipes to test the accuracy of the predictor and found 80% sensitivity and 99.6% 8-mer specificity. Finally, we used the predictor together with homology information to predict and characterize seven groups of novel toxins from the deeply sequenced venom gland transcriptome of S. plumipes, which revealed structural complexity and innovations in the evolution of the toxins. The precursor prediction tool (SpiderP) is freely available on ArachnoServer (http://www.arachnoserver.org/spiderP.html), a web portal to a comprehensive relational database of spider toxins. All training data, test data, and scripts used are available from the SpiderP website. PMID:23894279

  11. D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data.

    Science.gov (United States)

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well. PMID:26930054

  12. Accurate LAI retrieval method based on PROBA/CHRIS data

    Directory of Open Access Journals (Sweden)

    W. Fan

    2009-11-01

    Full Text Available Leaf area index (LAI) is one of the key structural variables in terrestrial vegetation ecosystems. Remote sensing offers a chance to derive LAI accurately at regional scales. Variations of the background, atmospheric conditions and the anisotropy of canopy reflectance are three factors that can strongly restrain the accuracy of retrieved LAI. Based on the hybrid canopy reflectance model, a new hyperspectral directional second derivative (DSD) method is proposed in this paper. This method can estimate LAI accurately by analyzing the canopy anisotropy, and the effect of the background can also be effectively removed, so the inversion precision and the dynamic range can be improved remarkably, as numerical simulations confirm. As the derivative method is very sensitive to random noise, we put forward an innovative filtering approach by which the data can be de-noised in the spectral and spatial dimensions simultaneously. The filtering method removes the random noise effectively and can therefore be applied to remotely sensed hyperspectral images. The study region is situated in Zhangye, Gansu Province, China; hyperspectral and multi-angular images of the study region were acquired from the Compact High-Resolution Imaging Spectrometer/Project for On-Board Autonomy (CHRIS/PROBA) on 4 and 14 June 2008. After pre-processing, the DSD method was applied, and the retrieved LAI was validated against ground truth from 11 sites. The results show that, combined with the innovative filtering method, the new LAI inversion method is accurate and effective.
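
The second-derivative idea can be illustrated with a Savitzky-Golay filter, which differentiates and smooths in a single step, suppressing an additive slowly varying background while damping random noise. The reflectance curve below is a synthetic stand-in, not CHRIS/PROBA data, and the window and polynomial order are illustrative choices.

```python
import numpy as np
from scipy.signal import savgol_filter

wl = np.linspace(400, 1000, 301)                 # wavelength grid (nm), 2 nm step
canopy = np.exp(-((wl - 720) / 40.0) ** 2)       # synthetic feature near the red edge
background = 0.3 + 0.0004 * (wl - 400)           # slowly varying soil background
noisy = canopy + background + np.random.default_rng(0).normal(0, 0.005, wl.size)

# The second derivative removes the additive linear background exactly,
# and the fitting window suppresses the random noise.
d2 = savgol_filter(noisy, window_length=21, polyorder=3, deriv=2, delta=2.0)
print(f"strongest curvature at {wl[int(np.argmin(d2))]:.0f} nm")
```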

  13. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    Science.gov (United States)

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both ⁴He and ¹²C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  14. Accurate characterization of OPVs: Device masking and different solar simulators

    DEFF Research Database (Denmark)

    Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.;

    2013-01-01

    One of the prime objectives of organic solar cell research has been to improve the power conversion efficiency. Unfortunately, the accurate determination of this property is not straightforward and has led to the recommendation that record devices be tested and certified at a few accredited...... laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area...

  15. Accurate studies on dissociation energies of diatomic molecules

    Institute of Scientific and Technical Information of China (English)

    SUN; WeiGuo; FAN; QunChao

    2007-01-01

    The molecular dissociation energies of some electronic states of hydride and N₂ molecules were studied using a parameter-free analytical formula suggested in this study and the algebraic method (AM) proposed recently. The results show that the accurate AM dissociation energies D_e(AM) agree excellently with the experimental dissociation energies D_e(expt), and that the dissociation energy of an electronic state such as the 2³Δ_g state of ⁷Li₂, whose experimental value is not available, can be predicted using the new formula.

  16. Accurate strand-specific quantification of viral RNA.

    Directory of Open Access Journals (Sweden)

    Nicole E Plaskon

    Full Text Available The presence of full-length complements of viral genomic RNA is a hallmark of RNA virus replication within an infected cell. As such, methods for detecting and measuring specific strands of viral RNA in infected cells and tissues are important in the study of RNA viruses. Strand-specific quantitative real-time PCR (ssqPCR) assays are increasingly being used for this purpose, but the accuracy of these assays depends on the assumption that the amount of cDNA measured during the quantitative PCR (qPCR) step accurately reflects the amount of a specific viral RNA strand present in the RT reaction. To specifically test this assumption, we developed multiple ssqPCR assays for the positive-strand RNA virus o'nyong-nyong (ONNV) that were based upon the most prevalent ssqPCR assay design types in the literature. We then compared various parameters of the ONNV-specific assays. We found that an assay employing standard unmodified virus-specific primers failed to discern the difference between cDNAs generated from virus-specific primers and those generated through false priming. Further, we were unable to accurately measure levels of ONNV (−)-strand RNA with this assay when higher levels of cDNA generated from the (+) strand were present. Taken together, these results suggest that assays of this type do not accurately quantify levels of the anti-genomic strand present during RNA virus infectious cycles. However, an assay permitting the use of a tag-specific primer was able to distinguish cDNAs transcribed from ONNV (−)-strand RNA from other cDNAs present, thus allowing accurate quantification of the anti-genomic strand. We also report the sensitivities of two different detection strategies and chemistries, SYBR® Green and DNA hydrolysis probes, used with our tagged ONNV-specific ssqPCR assays. Finally, we describe the development, design and validation of ssqPCR assays for chikungunya virus (CHIKV), the recent cause of large outbreaks of disease in the Indian Ocean

  17. An Accurate Technique for Calculation of Radiation From Printed Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min; Sorensen, Stig B.; Jorgensen, Erik;

    2011-01-01

    The accuracy of various techniques for calculating the radiation from printed reflectarrays is examined, and an improved technique based on the equivalent currents approach is proposed. The equivalent currents are found from a continuous plane wave spectrum calculated by use of the spectral dyadic...... Green's function. This ensures a correct relation between the equivalent electric and magnetic currents and thus allows an accurate calculation of the radiation over the entire far-field sphere. A comparison to DTU-ESA Facility measurements of a reference offset reflectarray designed and manufactured...

  18. Accurate characterisation of post moulding shrinkage of polymer parts

    DEFF Research Database (Denmark)

    Neves, L. C.; De Chiffre, L.; González-Madruga, D.;

    2015-01-01

    The work deals with experimental determination of the shrinkage of polymer parts after injection moulding. A fixture for length measurements on 8 parts at the same time was designed and manufactured in Invar, mounted with 8 electronic gauges, and provided with 3 temperature sensors. The fixture...... were compensated with respect to the effect from temperature variations during the measurements. Prediction of the length after stabilisation was carried out by fitting data at different stages of shrinkage. Uncertainty estimations were carried out and a procedure for the accurate characterisation...... of post moulding shrinkage of polymer parts was developed. Expanded uncertainties (k=2) of 3 μm were obtained....

  19. Imaging tests for accurate diagnosis of acute biliary pancreatitis

    DEFF Research Database (Denmark)

    Surlin, Valeriu; Săftoiu, Adrian; Dumitrescu, Daniela

    2014-01-01

    Gallstones represent the most frequent aetiology of acute pancreatitis in many statistics all over the world, estimated between 40%-60%. Accurate diagnosis of acute biliary pancreatitis (ABP) is of outmost importance because clearance of lithiasis [gallbladder and common bile duct (CBD)] rules out...... for the intraoperative diagnosis of choledocholithiasis. Routine exploration of the CBD in cases of patients scheduled for cholecystectomy after an attack of ABP was not proven useful. A significant rate of the so-called idiopathic pancreatitis is actually caused by microlithiasis and/or biliary sludge. In conclusion...

  20. Accurate measurement of ultrasonic velocity by eliminating the diffraction effect

    Institute of Scientific and Technical Information of China (English)

    WEI Tingcun

    2003-01-01

    The accurate measurement of ultrasonic velocity by the pulse interference method with elimination of the diffraction effect has been investigated experimentally in the VHF range. Two silicate glasses were taken as the specimens; their frequency dependences of longitudinal velocity were measured in the frequency range 50-350 MHz, and the phase advances of ultrasonic signals caused by the diffraction effect were calculated using A. O. Williams' theoretical expression. For the frequency dependences of longitudinal velocity, the measurement results were in good agreement with the simulation ones in which the phase advances were included. It has been shown that the velocity error due to the diffraction effect can be corrected very well by this method.

  1. Accurate Programming: Thinking about programs in terms of properties

    Directory of Open Access Journals (Sweden)

    Walid Taha

    2011-09-01

    Full Text Available Accurate programming is a practical approach to producing high quality programs. It combines ideas from test-automation, test-driven development, agile programming, and other state of the art software development methods. In addition to building on approaches that have proven effective in practice, it emphasizes concepts that help programmers sharpen their understanding of both the problems they are solving and the solutions they come up with. This is achieved by encouraging programmers to think about programs in terms of properties.

  2. An accurate RLGC circuit model for dual tapered TSV structure

    International Nuclear Information System (INIS)

    A fast RLGC circuit model with analytical expression is proposed for the dual tapered through-silicon via (TSV) structure in three-dimensional integrated circuits under different slope angles at the wide frequency region. By describing the electrical characteristics of the dual tapered TSV structure, the RLGC parameters are extracted based on the numerical integration method. The RLGC model includes metal resistance, metal inductance, substrate resistance, outer inductance with skin effect and eddy effect taken into account. The proposed analytical model is verified to be nearly as accurate as the Q3D extractor but more efficient. (semiconductor integrated circuits)

  3. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  4. Fast, accurate and easy-to-pipeline methods for amplicon sequence processing

    Science.gov (United States)

    Antonielli, Livio; Sessitsch, Angela

    2016-04-01

    Next generation sequencing (NGS) technologies have been established for years as an essential resource in microbiology. While on the one hand metagenomic studies can benefit from the continuously increasing throughput of the Illumina (Solexa) technology, on the other hand the spread of third generation sequencing technologies (PacBio, Oxford Nanopore) is taking whole genome sequencing beyond the assembly of fragmented draft genomes, making it now possible to finish bacterial genomes even without short read correction. Besides (meta)genomic analysis, next-gen amplicon sequencing is still fundamental for microbial studies. Amplicon sequencing of the 16S rRNA gene and the ITS (Internal Transcribed Spacer) remains a well-established, widespread method for a multitude of purposes concerning the identification and comparison of archaeal/bacterial (16S rRNA gene) and fungal (ITS) communities occurring in diverse environments. Numerous pipelines have been developed to process NGS-derived amplicon sequences, among which Mothur, QIIME and USEARCH are the best-known and most-cited ones. The entire process, from initial raw sequence data through read error correction, paired-end read assembly, primer stripping, quality filtering, clustering, OTU taxonomic classification and BIOM table rarefaction, as well as alternative "normalization" methods, will be addressed. An effective and accurate strategy will be presented using state-of-the-art bioinformatic tools, and the example of a straightforward one-script pipeline for 16S rRNA gene or ITS MiSeq amplicon sequencing will be provided. Finally, instructions on how to automatically retrieve nucleotide sequences from NCBI and therefore apply the pipeline to targets other than the 16S rRNA gene (Greengenes, SILVA) and ITS (UNITE) will be discussed.

  5. Flexible and accurate detection of genomic copy-number changes from aCGH.

    Directory of Open Access Journals (Sweden)

    Oscar M Rueda

    2007-06-01

    Full Text Available Genomic DNA copy-number alterations (CNAs) are associated with complex diseases, including cancer: CNAs are indeed related to tumoral grade, metastasis, and patient survival. CNAs discovered from array-based comparative genomic hybridization (aCGH) data have been instrumental in identifying disease-related genes and potential therapeutic targets. To be immediately useful in both clinical and basic research scenarios, aCGH data analysis requires accurate methods that do not impose unrealistic biological assumptions and that provide direct answers to the key question, "What is the probability that this gene/region has CNAs?" Current approaches fail, however, to meet these requirements. Here, we introduce reversible jump aCGH (RJaCGH), a new method for identifying CNAs from aCGH; we use a nonhomogeneous hidden Markov model fitted via reversible jump Markov chain Monte Carlo; and we incorporate model uncertainty through Bayesian model averaging. RJaCGH provides an estimate of the probability that a gene/region has CNAs while incorporating interprobe distance and the capability to analyze data on a chromosome or genome-wide basis. RJaCGH outperforms alternative methods, and the performance difference is even larger with noisy data and highly variable interprobe distance, both commonly found features in aCGH data. Furthermore, our probabilistic method allows us to identify minimal common regions of CNAs among samples and can be extended to incorporate expression data. In summary, we provide a rigorous statistical framework for locating genes and chromosomal regions with CNAs with potential applications to cancer and other complex human diseases.

  6. Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising

    CERN Document Server

    Donoho, David; Montanari, Andrea

    2011-01-01

    Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula precisely delineating the allowable degree of undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar shrinkers (soft thresholding, positive soft thresholding and capping). This paper gives several examples, including scalar shrinkers not derivable from convex optimization -- the firm shrinkage nonlinearity and the minimax nonlinearity -- and also nonscalar denoisers -- block thresholding (both block soft and block James-Stein), monotone regression, and total variation minimization. Let the variables \epsilon = k/N and \delta = n/N denote the generalized sparsity and undersampling fractions for sampling the k-gene...

  7. In vitro transcription accurately predicts lac repressor phenotype in vivo in Escherichia coli

    Directory of Open Access Journals (Sweden)

    Matthew Almond Sochor

    2014-07-01

    Full Text Available A multitude of studies have looked at the in vivo and in vitro behavior of the lac repressor binding to DNA and effector molecules in order to study transcriptional repression; however, these studies are not always reconcilable. Here we use in vitro transcription to directly mimic the in vivo system in order to build a self-consistent set of experiments to directly compare in vivo and in vitro genetic repression. A thermodynamic model of the lac repressor binding to operator DNA and effector is used to link DNA occupancy to either normalized in vitro mRNA product or normalized in vivo fluorescence of a regulated gene, YFP. Accurate measurements of repressor, DNA, and effector concentrations were made both in vivo and in vitro, allowing for direct modeling of the entire thermodynamic equilibrium. In vivo repression profiles are accurately predicted from the given in vitro parameters when molecular crowding is considered. Interestingly, our measured repressor–operator DNA affinity differs significantly from previous in vitro measurements. The literature values are unable to replicate the in vivo binding data. We therefore conclude that the repressor–DNA affinity is much weaker than previously thought. This finding suggests that in vitro techniques specifically designed to mimic the in vivo process may be necessary to replicate the native system.
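A minimal single-operator sketch of such a thermodynamic model follows; all binding constants and the site count are invented for illustration (the paper's full model additionally accounts for nonspecific DNA binding and molecular crowding):

```python
def fold_repression(r_total, iptg, kd_op=1e-11, kd_iptg=1e-6, n_sites=2):
    """Fold-repression of a single operator in a minimal thermodynamic model.
    Effector (IPTG) binding to n_sites sites removes repressor from the
    DNA-binding pool; all constants here are illustrative, not the paper's.
    Concentrations and dissociation constants are in molar units."""
    active = r_total / (1.0 + iptg / kd_iptg) ** n_sites  # effector-free fraction
    occupancy = (active / kd_op) / (1.0 + active / kd_op)  # operator bound
    return 1.0 / (1.0 - occupancy)  # expression tracks the free operator
```

With no effector the operator is nearly saturated and repression is strong; adding effector depletes the active pool and repression collapses toward 1, which is the qualitative shape of the repression profiles the paper fits.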

  8. Rapid identification of sequences for orphan enzymes to power accurate protein annotation.

    Science.gov (United States)

    Ramkissoon, Kevin R; Miller, Jennifer K; Ojha, Sunil; Watson, Douglas S; Bomar, Martha G; Galande, Amit K; Shearer, Alexander G

    2013-01-01

    The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately, this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the "back catalog" of enzymology--"orphan enzymes," those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme "back catalog" is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry-based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology's "back catalog" another powerful tool to drive accurate genome annotation.

  9. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

    Full Text Available Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement of intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multistep prediction can forecast the traffic state trend over a certain period in the future, which, from the perspective of dynamic decision making, is far more important than knowledge of the current traffic condition alone. Thus, in this paper, an accurate multistep traffic flow prediction model based on SVM is proposed. Its input vectors were built from actual traffic volumes, and four different types of input vectors were compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model has a good ability for traffic flow prediction and that the SVM-HPT model outperformed the other three models.
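The recursive multistep strategy described here is independent of the regressor used. A minimal sketch, with a least-squares AR(1) fit standing in for the paper's SVM (the function names are mine):

```python
def fit_ar1(series):
    """Least-squares fit of y[t+1] = a*y[t] + b (a stand-in for an SVM regressor)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def predict_multistep(series, steps):
    """Recursive multistep prediction: feed each prediction back as the
    input for the next step, extending the forecast horizon."""
    a, b = fit_ar1(series)
    out, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out
```

Feeding predictions back as inputs is what lets a one-step model forecast a trend over a period, at the cost of compounding prediction error with each step.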

  10. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    International Nuclear Information System (INIS)

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules

  11. Plant diversity accurately predicts insect diversity in two tropical landscapes.

    Science.gov (United States)

    Zhang, Kai; Lin, Siliang; Ji, Yinqiu; Yang, Chenxue; Wang, Xiaoyang; Yang, Chunyan; Wang, Hesheng; Jiang, Haisheng; Harrison, Rhett D; Yu, Douglas W

    2016-09-01

    Plant diversity surely determines arthropod diversity, but only moderate correlations between arthropod and plant species richness had been observed until Basset et al. (Science, 338, 1481, 2012) finally undertook an unprecedentedly comprehensive sampling of a tropical forest and demonstrated that plant species richness could indeed accurately predict arthropod species richness. We now require a high-throughput pipeline to operationalize this result so that we can (i) test competing explanations for tropical arthropod megadiversity, (ii) improve estimates of global eukaryotic species diversity, and (iii) use plant and arthropod communities as efficient proxies for each other, thus improving the efficiency of conservation planning and of detecting forest degradation and recovery. We therefore applied metabarcoding to Malaise-trap samples across two tropical landscapes in China. We demonstrate that plant species richness can accurately predict arthropod (mostly insect) species richness and that plant and insect community compositions are highly correlated, even in landscapes that are large, heterogeneous and anthropogenically modified. Finally, we review how metabarcoding makes feasible highly replicated tests of the major competing explanations for tropical megadiversity. PMID:27474399

  12. The economic value of accurate wind power forecasting to utilities

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S.J. [Rutherford Appleton Lab., Oxfordshire (United Kingdom); Giebel, G.; Joensen, A. [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

    With increasing penetrations of wind power, the need for accurate forecasting is becoming ever more important. Wind power is by its very nature intermittent. For utility schedulers this presents its own problems, particularly when the penetration of wind power capacity in a grid reaches a significant level (>20%). However, using accurate forecasts of wind power at wind farm sites, schedulers are able to plan the operation of conventional power capacity to accommodate the fluctuating demands of consumers and wind farm output. The results of a study to assess the value of forecasting at several potential wind farm sites in the UK and in the US state of Iowa using the Reading University/Rutherford Appleton Laboratory National Grid Model (NGM) are presented. The results are assessed for different types of wind power forecasting, namely: persistence, optimised numerical weather prediction or perfect forecasting. In particular, it will be shown how the NGM has been used to assess the value of numerical weather prediction forecasts from the Danish Meteorological Institute model, HIRLAM, and the US Nested Grid Model, which have been `site tailored` by the use of the linearized flow model WAsP and by various Model Output Statistics (MOS) and autoregressive techniques. (au)
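Persistence, the simplest of the forecast types compared above, assumes the wind power at time t+h equals the last observation at time t. A short sketch, with a hypothetical `mae` helper for scoring it against observations:

```python
def persistence_forecast(observed, horizon):
    """Persistence baseline: the forecast for time t is the observation
    made `horizon` steps earlier (clamped at the start of the record)."""
    return [observed[max(0, t - horizon)] for t in range(len(observed))]

def mae(forecast, actual):
    """Mean absolute error of a forecast series against observations."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)
```

Studies such as this one measure the value of a numerical weather prediction forecast by how much it beats this baseline (and how far it falls short of a perfect forecast).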

  13. Cerebral fat embolism: Use of MR spectroscopy for accurate diagnosis

    Directory of Open Access Journals (Sweden)

    Laxmi Kokatnur

    2015-01-01

    Full Text Available Cerebral fat embolism (CFE) is an uncommon but serious complication following orthopedic procedures. It usually presents with altered mental status and can be part of fat embolism syndrome (FES) if associated with cutaneous and respiratory manifestations. Because of the presence of other common factors affecting mental status, particularly in the postoperative period, the diagnosis of CFE can be challenging. Magnetic resonance imaging (MRI) of the brain typically shows multiple lesions distributed predominantly in the subcortical region, which appear as hyperintense lesions on T2- and diffusion-weighted images. Although the location offers a clue, the MRI findings are not specific for CFE. Watershed infarcts, hypoxic encephalopathy, disseminated infections, demyelinating disorders, and diffuse axonal injury can show similar changes on brain MRI. The presence of fat in these hyperintense lesions, identified by MR spectroscopy as raised lipid peaks, helps in the accurate diagnosis of CFE. Normal brain tissue, or conditions producing similar MRI changes, will not show any lipid peak on MR spectroscopy. We present a case of CFE initially misdiagnosed as brainstem stroke based on clinical presentation and cranial computed tomography (CT); MR spectroscopy later established the accurate diagnosis.

  14. Accurate 3D quantification of the bronchial parameters in MDCT

    Science.gov (United States)

    Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.

    2005-08-01

    The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding such a disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of the bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchi contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically modeled phantom of a bronchus-vessel pair which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm in diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error of less than 5.1%.

  15. Simple and accurate optical height sensor for wafer inspection systems

    Science.gov (United States)

    Shimura, Kei; Nakai, Naoya; Taniguchi, Koichi; Itoh, Masahide

    2016-02-01

    An accurate method for measuring the wafer surface height is required for wafer inspection systems to adjust the focus of inspection optics quickly and precisely. A method for projecting a laser spot onto the wafer surface obliquely and for detecting its image displacement using a one-dimensional position-sensitive detector is known, and a variety of methods have been proposed for improving the accuracy by compensating the measurement error due to the surface patterns. We have developed a simple and accurate method in which an image of a reticle with eight slits is projected on the wafer surface and its reflected image is detected using an image sensor. The surface height is calculated by averaging the coordinates of the images of the slits in both the two directions in the captured image. Pattern-related measurement error was reduced by applying the coordinates averaging to the multiple-slit-projection method. Accuracy of better than 0.35 μm was achieved for a patterned wafer at the reference height and ±0.1 mm from the reference height in a simple configuration.
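A hedged sketch of the coordinate-averaging idea: the mean position of the slit images is compared against a reference, and the shift is converted to height using the textbook oblique-reflection factor 2·sin(θ). The angle, pixel scale, and function name are placeholders, not the paper's values:

```python
import math

def surface_height(slit_px, ref_px, pixel_um=1.0, incidence_deg=70.0):
    """Height from the mean lateral shift of the projected slit images.
    For reflection at incidence angle theta (from the normal), a height
    change dh shifts the reflected image laterally by 2*dh*sin(theta).
    Averaging over many slit coordinates suppresses pattern-related error."""
    mean_shift = (sum(slit_px) / len(slit_px) - sum(ref_px) / len(ref_px)) * pixel_um
    return mean_shift / (2.0 * math.sin(math.radians(incidence_deg)))
```

Averaging the coordinates of all eight slit images in both directions, as the abstract describes, is what reduces the measurement error caused by wafer surface patterns.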

  16. Accurate measurement of streamwise vortices using dual-plane PIV

    Energy Technology Data Exchange (ETDEWEB)

    Waldman, Rye M.; Breuer, Kenneth S. [Brown University, School of Engineering, Providence, RI (United States)

    2012-11-15

    Low Reynolds number aerodynamic experiments with flapping animals (such as bats and small birds) are of particular interest due to their application to micro air vehicles which operate in a similar parameter space. Previous PIV wake measurements described the structures left by bats and birds and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions based on said measurements. The highly three-dimensional and unsteady nature of the flows associated with flapping flight are major challenges for accurate measurements. The challenge of animal flight measurements is finding small flow features in a large field of view at high speed with limited laser energy and camera resolution. Cross-stream measurement is further complicated by the predominantly out-of-plane flow that requires thick laser sheets and short inter-frame times, which increase noise and measurement uncertainty. Choosing appropriate experimental parameters requires compromise between the spatial and temporal resolution and the dynamic range of the measurement. To explore these challenges, we present a case study of the wake of a fixed wing. The fixed model simplifies the experiment and allows direct measurements of the aerodynamic forces via load cell. We present a detailed analysis of the wake measurements, discuss the criteria for making accurate measurements, and present a solution for making quantitative aerodynamic load measurements behind free-flyers. (orig.)

  17. Towards an accurate determination of the age of the Universe

    CERN Document Server

    Jiménez, R

    1998-01-01

    In the past 40 years a considerable effort has been focused in determining the age of the Universe at zero redshift using several stellar clocks. In this review I will describe the best theoretical methods to determine the age of the oldest Galactic Globular Clusters (GC). I will also argue that a more accurate age determination may come from passively evolving high-redshift ellipticals. In particular, I will review two new methods to determine the age of GC. These two methods are more accurate than the classical isochrone fitting technique. The first method is based on the morphology of the horizontal branch and is independent of the distance modulus of the globular cluster. The second method uses a careful binning of the stellar luminosity function which determines simultaneously the distance and age of the GC. It is found that the oldest GCs have an age of $13.5 \\pm 2$ Gyr. The absolute minimum age for the oldest GCs is 10.5 Gyr and the maximum is 16.0 Gyr (with 99% confidence). Therefore, an Einstein-De S...

  18. KFM: a homemade yet accurate and dependable fallout meter

    International Nuclear Information System (INIS)

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of +-25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient ''dry-bucket'' in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The step-by-step illustrated instructions for making and using a KFM are presented. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM

  19. Accurate location estimation of moving object In Wireless Sensor network

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    Full Text Available One of the central issues in wireless sensor networks is tracking the location of a moving object, which carries the overhead of saving data and requires an accurate estimate of the target location under energy constraints. There is no mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring; once such information is gathered, it can be fed back to central air conditioning and ventilation systems. In this research paper, we propose a protocol based on a prediction-and-adaptation algorithm that uses fewer sensor nodes by accurately estimating the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.

  20. More-Accurate Model of Flows in Rocket Injectors

    Science.gov (United States)

    Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford

    2011-01-01

    An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.

  1. Accurate Runout Measurement for HDD Spinning Motors and Disks

    Science.gov (United States)

    Jiang, Quan; Bi, Chao; Lin, Song

    As hard disk drive (HDD) areal density increases, its track width becomes smaller and smaller, and so does the non-repeatable runout. The HDD industry needs more accurate, better-resolution runout measurements of spinning spindle motors and media platters in both the axial and radial directions. This paper introduces a new system for precisely measuring the runout of HDD spinning disks and motors by synchronously acquiring the rotor position signal and the displacements in the axial or radial directions. In order to minimize the synchronization error between the rotor position and the displacement signal, a high-resolution counter is adopted instead of the conventional phase-lock loop method. With a Laser Doppler Vibrometer and proper signal processing, the proposed runout system can precisely measure the runout of HDD spinning disks and motors with 1 nm resolution and 0.2% accuracy at a proper sampling rate. It can provide an effective and accurate means to measure the runout of high areal density HDDs, in particular the next generation of HDDs, such as pattern media HDDs and HAMR HDDs.

  2. An Analytic Method for Measuring Accurate Fundamental Frequency Components

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Soon Ryul; Park Jong Keun [Seoul National University, Seoul(Korea); Kang, Sang Hee [Myongji University, Seoul (Korea)

    2002-04-01

    This paper proposes an analytic method for measuring the accurate fundamental frequency component of a fault current signal distorted with a DC-offset, a characteristic frequency component, and harmonics. The proposed algorithm is composed of four stages: sine filter, linear filter, Prony's method, and measurement. The sine filter and the linear filter eliminate harmonics and the fundamental frequency component, respectively. Then Prony's method is used to estimate the parameters of the DC-offset and the characteristic frequency component. Finally, the fundamental frequency component is measured by compensating the sine-filtered signal with the estimated parameters. The performance evaluation of the proposed method is presented for a-phase to ground faults on a 345 kV 200 km overhead transmission line. The EMTP is used to generate fault current signals under different fault locations and fault inception angles. It is shown that the analytic method accurately measures the fundamental frequency component regardless of the characteristic frequency component as well as the DC-offset. (author). 19 refs., 4 figs., 4 tabs.
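For context, the classical full-cycle DFT estimate of the fundamental amplitude can be sketched as below. It rejects integer harmonics exactly over one cycle, but not a decaying DC offset or an off-harmonic characteristic frequency, which is why the paper adds Prony-based compensation:

```python
import math

def fundamental_amplitude(samples, samples_per_cycle):
    """Amplitude of the fundamental from the most recent full cycle of
    samples, via the full-cycle DFT (a stand-in for the paper's four-stage
    method). Harmonics are rejected exactly; a decaying DC offset is not."""
    n = samples_per_cycle
    window = samples[-n:]
    re = (2.0 / n) * sum(x * math.cos(2 * math.pi * k / n) for k, x in enumerate(window))
    im = (2.0 / n) * sum(x * math.sin(2 * math.pi * k / n) for k, x in enumerate(window))
    return math.hypot(re, im)
```

The orthogonality of the DFT basis over exactly one cycle is what makes the harmonic rejection exact; any non-periodic component (the DC offset and the characteristic frequency) leaks into the estimate and must be removed separately.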

  3. Ultra-accurate collaborative information filtering via directed user similarity

    Science.gov (United States)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users are larger than the ones in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm specifically to address the challenge of accuracy and diversity of the CF algorithm. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can thus yield accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor in improving personalized recommendation performance.
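The direction-dependence of user similarity can be illustrated with a toy asymmetric overlap measure. This is not HDCF's actual definition (which also uses second-order correlations); it only shows why s(i→j) and s(j→i) differ when user degrees differ:

```python
def directed_similarity(ratings, i, j):
    """Overlap similarity directed from user i to user j, normalized by the
    target user's degree: the value pointing toward a large-degree user is
    damped, which depresses the influence of mainstream preferences."""
    items_i, items_j = set(ratings[i]), set(ratings[j])
    if not items_j:
        return 0.0
    return len(items_i & items_j) / len(items_j)
```

With a small-degree user `a` and a large-degree user `b` sharing some items, s(b→a) exceeds s(a→b): recommendations flowing from heavy raters are discounted rather than amplified.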

  4. Atomic spectroscopy and highly accurate measurement: determination of fundamental constants

    International Nuclear Information System (INIS)

    This document reviews the theoretical and experimental achievements of the author concerning highly accurate atomic spectroscopy applied to the determination of fundamental constants. A pure optical frequency measurement of the 2S-12D 2-photon transitions in atomic hydrogen and deuterium has been performed. The experimental setup is described as well as the data analysis. Optimized values for the Rydberg constant and Lamb shifts have been deduced (R = 109737.31568516(84) cm^-1). An experiment devoted to the determination of the fine structure constant with an aimed relative uncertainty of 10^-9 began in 1999. This experiment is based on the fact that Bloch oscillations in a frequency-chirped optical lattice are a powerful tool to transfer coherently many photon momenta to the atoms. We have used this method to measure accurately the ratio h/m(Rb). The measured value of the fine structure constant is α^-1 = 137.03599884(91) with a relative uncertainty of 6.7×10^-9. The future and perspectives of this experiment are presented. This document, presented before an academic board, will allow its author to manage research work and particularly to supervise thesis students. (A.C.)

  5. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    Energy Technology Data Exchange (ETDEWEB)

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of {+-}25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient ''dry-bucket'' in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: ''The KFM, A Homemade Yet Accurate and Dependable Fallout Meter'', was published by Oak Ridge National Laboratory report in1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these

  6. The importance of accurate meteorological input fields and accurate planetary boundary layer parameterizations, tested against ETEX-1

    International Nuclear Information System (INIS)

    Atmospheric transport of air pollutants is, in principle, a well understood process. If information about the state of the atmosphere is given in full detail (infinitely accurate information about wind speed, etc.) and infinitely fast computers are available, then the advection equation could in principle be solved exactly. This is, however, not the case: discretization of the equations and input data introduces uncertainties and errors into the results. Therefore many different issues have to be carefully studied in order to diminish these uncertainties and to develop an accurate transport model, e.g. the numerical treatment of the transport equation, the accuracy of the mean meteorological input fields, and parameterizations of sub-grid-scale phenomena (such as parameterizations of the 2nd- and higher-order turbulence terms in order to reach closure in the perturbation equation). A tracer model for studying transport and dispersion of air pollution caused by a single but strong source is under development. The model simulations from the first ETEX release illustrate the differences caused by using various analyzed fields directly in the tracer model or by using a meteorological driver. Different parameterizations of the mixing height and the vertical exchange are also compared. (author)
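The discretization errors mentioned in this record are easy to see in the simplest scheme. A first-order upwind step for 1D advection on a periodic grid (illustrative only; real tracer models use higher-order schemes in three dimensions):

```python
def advect_upwind(c, u, dx, dt, steps):
    """First-order upwind discretization of dc/dt + u*dc/dx = 0 (u > 0)
    on a periodic grid. Stable when the CFL number u*dt/dx <= 1; its
    numerical diffusion is one of the discretization errors noted above."""
    cfl = u * dt / dx
    assert 0.0 <= cfl <= 1.0, "CFL condition violated"
    n = len(c)
    for _ in range(steps):
        c = [c[i] - cfl * (c[i] - c[i - 1]) for i in range(n)]
    return c
```

At a CFL number of exactly 1 the scheme shifts the field by one cell per step with no error; at smaller CFL numbers the tracer mass is conserved but the peak spreads, which is the numerical diffusion that drives the search for better schemes.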

  7. Accurate object tracking system by integrating texture and depth cues

    Science.gov (United States)

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

    A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, in order to reduce the risk of drifting, which increases for textureless depth templates, an update mechanism is proposed to select more precise tracking results and avoid incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system provides the best success rate and more accurate tracking results than other well-known algorithms.

  8. A new accurate pill recognition system using imprint information

    Science.gov (United States)

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine benefit human beings, but they have also brought about an explosive growth in the pharmaceuticals currently on the market. In daily life, pharmaceuticals can confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It is based mainly on the imprint feature of the pills, which is extracted by the proposed MSWT (modified stroke width transform) and described by WSC (weighted shape context). Experiments show that our proposed pill recognition method reaches an accuracy of up to 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2000 categories.

  9. Analytical method to accurately predict LMFBR core flow distribution

    International Nuclear Information System (INIS)

    An accurate and detailed representation of the flow distribution in LMFBR cores is very important as the starting point and basis of the thermal and structural core design. Previous experience indicated that the steady-state and transient core design is only as good as the core orificing; thus, a new orificing philosophy satisfying a priori all design constraints was developed. However, optimized orificing is a necessary, but not sufficient, condition for achieving the optimum core flow distribution, which is affected by the hydraulic characteristics of the remainder of the primary system. Consequently, an analytical model of the overall primary system was developed, resulting in the CATFISH computer code, which, even though specifically written for LMFBRs, can be used for any reactor employing ducted assemblies.

  10. Accurate Calculation of Fringe Fields in the LHC Main Dipoles

    CERN Document Server

    Kurz, S; Siegel, N

    2000-01-01

    The ROXIE program developed at CERN for the design and optimization of the superconducting LHC magnets has recently been extended, in a collaboration with the University of Stuttgart, Germany, with a field computation method based on the coupling between the boundary element (BEM) and the finite element (FEM) techniques. This avoids meshing the coils and the air regions, as well as imposing artificial far-field boundary conditions. The method is therefore especially suited for the accurate calculation of fields in superconducting magnets in which the field is dominated by the coil. We will present the fringe field calculations in both 2D and 3D geometries to evaluate the effect of connections and the cryostat on the field quality and the flux density to which auxiliary bus-bars are exposed.

  11. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    Science.gov (United States)

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results.

  12. An Integrative Approach to Accurate Vehicle Logo Detection

    Directory of Open Access Journals (Sweden)

    Hao Pan

    2013-01-01

    Vehicle logo detection is required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small size of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed, following the visual attention mechanism of human vision. Two pre-logo detection steps, that is, vehicle region detection and small RoI segmentation, rapidly focalize a small logo target. An enhanced Adaboost algorithm, together with two types of features (Haar and HOG), is proposed to detect vehicles. An RoI that covers logos is segmented based on prior knowledge about the logos' position relative to license plates, which can be accurately localized in frontal vehicle images. A two-stage cascade classifier processes the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.

  13. Accurate Performance Analysis of Opportunistic Decode-and-Forward Relaying

    CERN Document Server

    Tourki, Kamel; Alouni, Mohamed-Slim

    2011-01-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path is considered unusable, and we take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive statistics based on the exact probability density function (PDF) of each hop. Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end bit error rate (BER) of binary phase-shift keying (BPSK) modulation. Furthermore, we carry out an asymptotic performance analysis, from which the diversity order is deduced. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures.
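Once the end-to-end SNR density is in hand, the BER step described above takes the standard averaging form (generic notation assumed here, not the paper's exact expressions):

```latex
\bar{P}_b \;=\; \int_0^{\infty} Q\!\left(\sqrt{2\gamma}\,\right) f_{\gamma}(\gamma)\,\mathrm{d}\gamma ,
\qquad
Q(x) \;=\; \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\,\mathrm{d}t ,
```

where f_γ is the end-to-end SNR PDF and Q(√(2γ)) is the conditional BPSK error probability at instantaneous SNR γ.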

  14. Accurate macroscale modelling of spatial dynamics in multiple dimensions

    CERN Document Server

    Roberts, A. J.; Bunder, J. E.

    2011-01-01

    Developments in dynamical systems theory provide new support for the macroscale modelling of PDEs and other microscale systems such as Lattice Boltzmann, Monte Carlo or Molecular Dynamics simulators. By systematically resolving subgrid microscale dynamics, the dynamical systems approach constructs accurate closures of macroscale discretisations of the microscale system. Here we specifically explore reaction-diffusion problems in two spatial dimensions as a prototype of generic systems in multiple dimensions. Our approach unifies into one the modelling of systems by a type of finite elements, and the `equation free' macroscale modelling of microscale simulators efficiently executing only on small patches of the spatial domain. Centre manifold theory ensures that a closed model exists on the macroscale grid, is emergent, and is systematically approximated. Dividing space either into overlapping finite elements or into spatially separated small patches, the specially crafted inter-element/patch coupling als...

  15. Accurate bond dissociation energies (D0) for FHF- isotopologues

    Science.gov (United States)

    Stein, Christopher; Oswald, Rainer; Sebald, Peter; Botschwina, Peter; Stoll, Hermann; Peterson, Kirk A.

    2013-09-01

    Accurate bond dissociation energies (D0) are determined for three isotopologues of the bifluoride ion (FHF-). While the zero-point vibrational contributions are taken from our previous work (P. Sebald, A. Bargholz, R. Oswald, C. Stein, P. Botschwina, J. Phys. Chem. A, DOI: 10.1021/jp3123677), the equilibrium dissociation energy (De) of the reaction FHF- → HF + F- was obtained by a composite method including frozen-core (fc) CCSD(T) calculations with basis sets up to cardinal number n = 7, followed by extrapolation to the complete basis set limit. Smaller terms beyond fc-CCSD(T) cancel each other almost completely. The D0 values of FHF-, FDF-, and FTF- are predicted to be 15,176, 15,191, and 15,198 cm-1, respectively, with an uncertainty of ca. 15 cm-1.
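For orientation, the relation between the two dissociation energies used above is the standard one; since atomic F- carries no zero-point energy, only the complex and HF contribute (textbook notation, not a formula quoted from the paper):

```latex
D_0 \;=\; D_e \;-\; \mathrm{ZPE}(\mathrm{FHF}^-) \;+\; \mathrm{ZPE}(\mathrm{HF}) .
```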

  16. Calculation of Accurate Hexagonal Discontinuity Factors for PARCS

    Energy Technology Data Exchange (ETDEWEB)

    Pounders, J.; Bandini, B. R.; Xu, Y.; Downar, T. J.

    2007-11-01

    In this study we derive a methodology for calculating discontinuity factors consistent with the Triangle-based Polynomial Expansion Nodal (TPEN) method implemented in PARCS for hexagonal reactor geometries. The accuracy of coarse-mesh nodal methods is greatly enhanced by permitting flux discontinuities at node boundaries, but the practice of calculating discontinuity factors from infinite-medium (zero-current) single bundle calculations may not be sufficiently accurate for more challenging problems in which there is a large amount of internodal neutron streaming. The authors therefore derive a TPEN-based method for calculating discontinuity factors that are exact with respect to generalized equivalence theory. The method is validated by reproducing the reference solution for a small hexagonal core.
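As background, generalized equivalence theory defines the discontinuity factor on a node surface s for energy group g as the ratio of the reference heterogeneous surface flux to the homogenized nodal surface flux (standard notation, assumed here rather than taken from this report):

```latex
f_s^{g} \;=\; \frac{\phi_s^{\mathrm{het},\,g}}{\phi_s^{\mathrm{hom},\,g}} ,
\qquad
f_{s,-}^{g}\,\phi_{s,-}^{\mathrm{hom},\,g} \;=\; f_{s,+}^{g}\,\phi_{s,+}^{\mathrm{hom},\,g} ,
```

so the homogenized flux is allowed to be discontinuous across the interface while the reference interface condition is preserved.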

  17. Accurate monitoring of large aligned objects with videometric techniques

    CERN Document Server

    Klumb, F; Grussenmeyer, P

    1999-01-01

    This paper describes a new videometric technique designed to monitor the deformations and misalignments of large vital components in the centre of a future particle detector. It relies on a geometrical principle called "reciprocal collimation" of two CCD cameras: combining the video devices in pairs gives rise to a network of well-located reference lines that surround the object to be surveyed. Each observed point, which in practice is a bright point-like light source, is accurately located with respect to this network of neighbouring axes. Adjustment calculations provide the three-dimensional position of the object fitted with various light sources. An experimental test-bench, equipped with four cameras, has corroborated the precision predicted by previous simulations of the system. (11 refs).

  18. Accurate Enthalpies of Formation of Astromolecules: Energy, Stability and Abundance

    CERN Document Server

    Etim, Emmanuel E

    2016-01-01

    Accurate enthalpies of formation are reported for known and potential astromolecules using high-level ab initio quantum chemical calculations. A total of 130 molecules comprising 31 isomeric groups and 24 cyanide/isocyanide pairs, with atom counts ranging from 3 to 12, have been considered. The results show an interesting and surprisingly little-explored relationship between energy, stability and abundance (ESA) among these molecules. Among the isomeric species, isomers with lower enthalpies of formation are more easily observed in the interstellar medium than their counterparts with higher enthalpies of formation. Available data in the literature confirm the high abundance of the most stable isomer over other isomers in the different groups considered. Potential for interstellar hydrogen bonding accounts for the few exceptions observed. Thus, in general, it suffices to say that the interstellar abundances of related species are directly proportional to their stabilities. The immediate consequences of ...

  19. Methods for Accurate Free Flight Measurement of Drag Coefficients

    CERN Document Server

    Courtney, Elya; Courtney, Michael

    2015-01-01

    This paper describes experimental methods for free flight measurement of drag coefficients to an accuracy of approximately 1%. There are two main methods of determining free flight drag coefficients, or equivalent ballistic coefficients: 1) measuring near and far velocities over a known distance and 2) measuring a near velocity and time of flight over a known distance. Atmospheric conditions must also be known and nearly constant over the flight path. A number of tradeoffs are important when designing experiments to accurately determine drag coefficients. The flight distance must be large enough so that the projectile's loss of velocity is significant compared with its initial velocity and much larger than the uncertainty in the near and/or far velocity measurements. On the other hand, since drag coefficients and ballistic coefficients both depend on velocity, the change in velocity over the flight path should be small enough that the average drag coefficient over the path (which is what is really determined)...
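Method 1 above (near and far velocities over a known distance) can be sketched with a deliberately simplified model that treats the drag coefficient as constant over the path. The mass, air density, and reference-area values below are illustrative assumptions, not data from the paper:

```python
import math

def cd_from_velocities(v_near, v_far, distance, mass, rho, area):
    """Recover a constant drag coefficient from near/far velocity
    measurements. With dv/dx = -(rho*Cd*A)/(2m) * v, the solution is
    v_far = v_near * exp(-rho*Cd*A*d/(2m)), which inverts to Cd below."""
    return 2.0 * mass / (rho * area * distance) * math.log(v_near / v_far)

# Illustrative round trip: propagate a known Cd, then recover it.
m, rho, d = 0.0097, 1.225, 200.0          # kg, kg/m^3, m (assumed values)
area = math.pi * (0.00782 / 2.0) ** 2     # m^2, 7.82 mm projectile (assumed)
cd_true = 0.30
v0 = 800.0                                # near velocity, m/s
v1 = v0 * math.exp(-rho * cd_true * area * d / (2.0 * m))
print(cd_from_velocities(v0, v1, d, m, rho, area))
```

The same inversion also makes the paper's tradeoff visible: when v0 and v1 are close, small velocity errors dominate the logarithm, which is why the flight distance must produce a significant velocity loss.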

  20. Fast and accurate determination of modularity and its effect size

    CERN Document Server

    Treviño, Santiago; Del Genio, Charo I; Bassler, Kevin E

    2014-01-01

    We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős–Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links.
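The quantity being maximized, Newman's modularity, can be computed directly from its definition. A brief sketch in plain NumPy (this is the standard formula, not the authors' spectral algorithm) for an undirected, unweighted graph:

```python
import numpy as np

def modularity(adj, labels):
    """Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * [c_i == c_j]
    for an undirected graph given as a symmetric adjacency matrix."""
    adj = np.asarray(adj, dtype=float)
    k = adj.sum(axis=1)               # node degrees
    two_m = k.sum()                   # 2 * number of edges
    same = np.equal.outer(labels, labels)
    return ((adj - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles joined by one bridge edge (2-3), partitioned into the triangles.
A = np.zeros((6, 6), int)
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
labels = np.array([0, 0, 0, 1, 1, 1])
print(modularity(A, labels))   # 5/14 ≈ 0.357
```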

  1. Accurate numerical solution of compressible, linear stability equations

    Science.gov (United States)

    Malik, M. R.; Chuang, S.; Hussaini, M. Y.

    1982-01-01

    The present investigation is concerned with a fourth order accurate finite difference method and its application to the study of the temporal and spatial stability of the three-dimensional compressible boundary layer flow on a swept wing. This method belongs to the class of compact two-point difference schemes discussed by White (1974) and Keller (1974). The method was apparently first used for solving the two-dimensional boundary layer equations. Attention is given to the governing equations, the solution technique, and the search for eigenvalues. A general purpose subroutine is employed for solving a block tridiagonal system of equations. The computer time can be reduced significantly by exploiting the special structure of two matrices.
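The "general purpose subroutine" for block tridiagonal systems is not listed in the abstract; a minimal block Thomas elimination along those lines (my sketch, not the authors' routine) could look like:

```python
import numpy as np

def solve_block_tridiagonal(lower, diag, upper, rhs):
    """Block Thomas algorithm. lower[i] couples block-row i to i-1 and
    upper[i] couples it to i+1 (lower[0] and upper[-1] are ignored);
    blocks are (b, b) arrays and rhs is a list of length-b vectors."""
    n = len(diag)
    d, r = [None] * n, [None] * n
    d[0], r[0] = diag[0].copy(), rhs[0].copy()
    for i in range(1, n):                       # forward elimination
        w = lower[i] @ np.linalg.inv(d[i - 1])
        d[i] = diag[i] - w @ upper[i - 1]
        r[i] = rhs[i] - w @ r[i - 1]
    x = [None] * n
    x[n - 1] = np.linalg.solve(d[n - 1], r[n - 1])
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = np.linalg.solve(d[i], r[i] - upper[i] @ x[i + 1])
    return np.concatenate(x)

# Check against a dense solve on a random, diagonally dominant system.
rng = np.random.default_rng(0)
n, b = 4, 3
diag  = [rng.normal(size=(b, b)) + 5 * np.eye(b) for _ in range(n)]
lower = [0.1 * rng.normal(size=(b, b)) for _ in range(n)]
upper = [0.1 * rng.normal(size=(b, b)) for _ in range(n)]
rhs   = [rng.normal(size=b) for _ in range(n)]
dense = np.zeros((n * b, n * b))
for i in range(n):
    dense[i*b:(i+1)*b, i*b:(i+1)*b] = diag[i]
    if i > 0:
        dense[i*b:(i+1)*b, (i-1)*b:i*b] = lower[i]
    if i < n - 1:
        dense[i*b:(i+1)*b, (i+1)*b:(i+2)*b] = upper[i]
x = solve_block_tridiagonal(lower, diag, upper, rhs)
```

The forward sweep only touches one block row at a time, which is the structure the abstract exploits to cut computer time relative to a general dense solve.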

  2. Accurate volume measurement system for plutonium nitrate solution

    International Nuclear Information System (INIS)

    An accurate volume measurement system for a large amount of plutonium nitrate solution stored in a reprocessing or a conversion plant has been developed at the Plutonium Conversion Development Facility (PCDF) in the Power Reactor and Nuclear Fuel Development Corp. (PNC) Tokai Works. A pair of differential digital quartz pressure transducers is utilized in the volume measurement system. To obtain high accuracy, it is important that the non-linearity of the transducer is minimized within the measurement range, the zero point is stabilized, and the damping property of the pneumatic line is designed to minimize pressure oscillation. The accuracy of the pressure measurement can always be kept within 2 Pa with re-calibration once a year. In the PCDF, the overall uncertainty of the volume measurement has been evaluated to be within 0.2%. This system has been successfully applied to the Japanese government's and the IAEA's routine inspections since 1984. (author)
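The pressure-to-volume step can be sketched for the simplest geometry, a uniform vertical tank; the density and tank-area numbers below are illustrative assumptions, not PCDF values:

```python
def liquid_volume(delta_p, rho, tank_area, g=9.80665):
    """Level and volume from the differential pressure between a submerged
    dip tube and the cover-gas space: delta_p = rho*g*h, so V = A * h."""
    height = delta_p / (rho * g)
    return tank_area * height

rho, area = 1400.0, 0.5        # kg/m^3, m^2 (illustrative values only)
print(liquid_volume(13729.31, rho, area))   # ~0.5 m^3 of solution
# A 2 Pa pressure error then maps to a volume error of only ~7e-5 m^3,
# consistent with the sub-percent overall uncertainty quoted above:
print(liquid_volume(2.0, rho, area))
```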

  3. Accurate finite difference methods for time-harmonic wave propagation

    Science.gov (United States)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transitions in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Padé approximation, or on generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
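The fourth-order behaviour of such Padé-type stencils is easy to check numerically. The sketch below applies a standard second-order stencil and the classical fourth-order compact stencil for the 1D model problem u'' + k²u = 0 to the exact solution u = sin(kx) and compares how fast the truncation residual shrinks with h (a generic illustration, not the paper's exact schemes):

```python
import numpy as np

def residuals(k, h, n=200):
    """Maximum residual of two stencils applied to the exact solution
    u = sin(kx): the pointwise scheme lap(u) + k^2*u (O(h^2)) and the
    compact scheme lap(u) + k^2*(u_{j-1} + 10*u_j + u_{j+1})/12 (O(h^4))."""
    x = np.arange(n) * h
    u = np.sin(k * x)
    um, uc, up = u[:-2], u[1:-1], u[2:]
    lap = (um - 2 * uc + up) / h**2
    standard = lap + k**2 * uc
    compact = lap + k**2 * (um + 10 * uc + up) / 12.0
    return np.abs(standard).max(), np.abs(compact).max()

s1, c1 = residuals(k=1.0, h=0.02)
s2, c2 = residuals(k=1.0, h=0.01)
print(s1 / s2, c1 / c2)   # ≈ 4 (second order) vs ≈ 16 (fourth order)
```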

  4. Accurate mass measurements on neutron-deficient krypton isotopes

    CERN Document Server

    Rodríguez, D; Äystö, J; Beck, D

    2006-01-01

    The masses of 72–78,80,82,86Kr were measured directly with the ISOLTRAP Penning trap mass spectrometer at ISOLDE/CERN. For all these nuclides, the measurements yielded mass uncertainties below 10 keV. The ISOLTRAP mass values for 72–75Kr are more precise than the previous results obtained by means of other techniques, and thus completely determine the new values in the Atomic-Mass Evaluation. Besides the interest of these masses for nuclear astrophysics, nuclear structure studies, and Standard Model tests, these results constitute a valuable and accurate input to improve mass models. In this paper, we present the mass measurements and discuss the mass evaluation for these Kr isotopes.
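For context, Penning trap mass spectrometry extracts the mass from the ion's cyclotron frequency, in practice via a frequency ratio to a well-known reference ion; in the simplest form (equal charge states, with electron-mass and binding-energy corrections neglected, standard notation assumed):

```latex
\nu_c \;=\; \frac{qB}{2\pi m},
\qquad
\frac{m}{m_{\mathrm{ref}}} \;=\; \frac{\nu_{c,\mathrm{ref}}}{\nu_c} ,
```

so the magnetic field B cancels out of the ratio, which is what makes keV-level uncertainties achievable.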

  5. CLOMP: Accurately Characterizing OpenMP Application Overheads

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Gyllenhaal, J; de Supinski, B

    2008-02-11

    Despite its ease of use, OpenMP has failed to gain widespread use on large scale systems, largely due to its failure to deliver sufficient performance. Our experience indicates that the cost of initiating OpenMP regions is simply too high for the desired OpenMP usage scenario of many applications. In this paper, we introduce CLOMP, a new benchmark to characterize this aspect of OpenMP implementations accurately. CLOMP complements the existing EPCC benchmark suite to provide simple, easy to understand measurements of OpenMP overheads in the context of application usage scenarios. Our results for several OpenMP implementations demonstrate that CLOMP identifies the amount of work required to compensate for the overheads observed with EPCC. Further, we show that CLOMP also captures limitations for OpenMP parallelization on NUMA systems.

  6. Accurate Modeling of Buck Converters with Magnetic-Core Inductors

    DEFF Research Database (Denmark)

    Astorino, Antonio; Antonini, Giulio; Swaminathan, Madhavan

    2015-01-01

    In this paper, a modeling approach for buck converters with magnetic-core inductors is presented. Due to the high nonlinearity of magnetic materials, frequency-domain analysis of such circuits is not suitable for an accurate description of their behaviour. Hence, in this work, a time-domain model of buck converters with magnetic-core inductors in a Simulink environment is proposed. As an example, the presented approach is used to simulate an eight-phase buck converter. The simulation results show that an unexpected system behaviour in terms of current ripple amplitude needs the inductor core...

  7. Redundancy-Free, Accurate Analytical Center Machine for Classification

    Institute of Scientific and Technical Information of China (English)

    ZHENGFanzi; QIUZhengding; LengYonggang; YueJianhai

    2005-01-01

    Analytical center machine (ACM) has remarkable generalization performance based on the analytical center of version space and outperforms SVM. From an analysis of the geometry of machine learning and the principle of ACM, it is shown that some training patterns are redundant to the definition of the version space. Redundant patterns push the ACM classifier away from the analytical center of the prime version space, so that generalization performance degrades; at the same time, redundant patterns slow down the classifier and reduce storage efficiency. Thus, an incremental algorithm is proposed to remove redundant patterns and is embedded into the frame of ACM, yielding a redundancy-free accurate analytical center machine (RFA-ACM) for classification. Experiments with the Heart, Thyroid, and Banana datasets demonstrate the validity of RFA-ACM.

  8. Accurate Parallel Algorithm for Adini Nonconforming Finite Element

    Institute of Scientific and Technical Information of China (English)

    罗平; 周爱辉

    2003-01-01

    Multi-parameter asymptotic expansions are interesting since they justify the use of multi-parameter extrapolation, which can be implemented in parallel, and they are well studied in many papers for conforming finite element methods. For nonconforming finite element methods, however, work on multi-parameter asymptotic expansions and extrapolation has seldom appeared in the literature. This paper considers the solution of the biharmonic equation using Adini nonconforming finite elements and reports new results for multi-parameter asymptotic expansions and extrapolation. The Adini nonconforming finite element solution of the biharmonic equation is shown to have a multi-parameter asymptotic error expansion and extrapolation. This expansion and a multi-parameter extrapolation technique were used to develop an accurate approximation parallel algorithm for the biharmonic equation. Finally, numerical results have verified the extrapolation theory.
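The single-parameter analogue of the extrapolation justified above is easy to sketch; the multi-parameter expansions in the paper generalize this one-parameter Richardson step to several independent mesh parameters (the derivative example below is purely illustrative):

```python
import math

def central(f, x, h):
    """Second-order central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def extrapolated(f, x, h):
    """Richardson extrapolation: D(h) = f'(x) + c*h^2 + O(h^4), so the
    combination (4*D(h/2) - D(h)) / 3 cancels the leading h^2 term."""
    return (4 * central(f, x, h / 2) - central(f, x, h)) / 3

x, h = 1.0, 0.1
exact = math.cos(x)
err_plain = abs(central(math.sin, x, h) - exact)
err_extra = abs(extrapolated(math.sin, x, h) - exact)
print(err_plain, err_extra)   # the extrapolated error is far smaller
```

The two evaluations at h and h/2 are independent, which is exactly the property that makes extrapolation attractive for parallel implementation.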

  9. Spectropolarimetrically accurate magnetohydrostatic sunspot model for forward modelling in helioseismology

    CERN Document Server

    Przybylski, D; Cally, P S

    2015-01-01

    We present a technique to construct a spectropolarimetrically accurate magneto-hydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion and absorption in the solar interior and photosphere with the sunspot embedded into it. With the magnetically sensitive 6173 Å photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as the full Stokes vector for the simulation at various positions on the solar disk, and analyse the influence of the non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions on the solar disk were simulated and characterised. An increase in acoustic power in the simulated observ...

  10. Efficient and Accurate Indoor Localization Using Landmark Graphs

    Science.gov (United States)

    Gu, F.; Kealy, A.; Khoshelham, K.; Shang, J.

    2016-06-01

    Indoor localization is important for a variety of applications such as location-based services, mobile social networks, and emergency response. Fusing spatial information is an effective way to achieve accurate indoor localization with little or no need for extra hardware. However, existing indoor localization methods that make use of spatial information are either too computationally expensive or too sensitive to the completeness of landmark detection. In this paper, we solve this problem by using the proposed landmark graph. The landmark graph is a directed graph where nodes are landmarks (e.g., doors, staircases, and turns) and edges are accessible paths with heading information. We compared the proposed method with two common Dead Reckoning (DR)-based methods (namely, Compass + Accelerometer + Landmarks and Gyroscope + Accelerometer + Landmarks) in a series of experiments. Experimental results show that the proposed method can achieve 73% accuracy with a positioning error of less than 2.5 meters, which outperforms the other two DR-based methods.
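The benefit of re-anchoring dead reckoning at landmark nodes can be illustrated with a one-dimensional toy simulation; the step noise, bias, and landmark spacing are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
steps = 500
# Per-step error in the estimated displacement: a small systematic bias
# (e.g. a miscalibrated stride length) plus random noise. Illustrative.
noise = rng.normal(0.01, 0.05, steps)

# Plain dead reckoning: position error is the accumulated step error.
err_dr = np.abs(np.cumsum(noise))

# Landmark-aided dead reckoning: whenever a landmark node of the graph is
# passed (here: every 25 steps), the position is re-anchored to the known
# landmark location, discarding the accumulated error.
err_lm = np.empty(steps)
acc = 0.0
for i in range(steps):
    acc += noise[i]
    if (i + 1) % 25 == 0:
        acc = 0.0
    err_lm[i] = abs(acc)

print(err_dr.max(), err_lm.max())  # landmark resets bound the drift
```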

  11. Natural orbital expansions of highly accurate three-body wavefunctions

    International Nuclear Information System (INIS)

    Natural orbital expansions are considered for highly accurate three-body wavefunctions written in the relative coordinates r32, r31 and r21. The present method is applied to the ground S(L = 0) state wavefunctions of the Ps- and ∞H- ions. Our best variational energies computed herein for these systems are E(Ps-) = -0.262 005 070 232 980 107 7666 au and E(∞H-) = -0.5277510165443771965865 au, respectively. The variational wavefunctions determined for these systems contain between 2000 and 4200 exponential basis functions. In general, the natural orbital expansions of these functions are compact and rapidly convergent, and are represented as linear combinations of relatively simple functions. The natural orbitals can be very useful in various applications, including photodetachment and scattering problems.

  12. Accurate performance analysis of opportunistic decode-and-forward relaying

    KAUST Repository

    Tourki, Kamel

    2011-07-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and the destination may use a selection combining technique. We first derive the exact statistics of each hop, in terms of probability density function (PDF). Then, the PDFs are used to determine accurate closed form expressions for end-to-end outage probability for a transmission rate R. Furthermore, we evaluate the asymptotical performance analysis and the diversity order is deduced. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.
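With the end-to-end SNR CDF F_γ in hand, the outage step described above reduces to evaluating it at a rate-dependent threshold; in the usual half-duplex two-slot convention (standard form, assumed here rather than quoted from the paper):

```latex
P_{\mathrm{out}}(R)
\;=\;
\Pr\!\left[\tfrac{1}{2}\log_2(1+\gamma) < R\right]
\;=\;
F_{\gamma}\!\left(2^{2R}-1\right),
```

where the factor 1/2 accounts for the two channel uses (source slot and relay slot) per transmitted message.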

  13. Accurate, fully-automated NMR spectral profiling for metabolomics.

    Directory of Open Access Journals (Sweden)

    Siamak Ravanbakhsh

    Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures and realistic computer-generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentrations of NMR-detectable metabolites accurately (~ 90% correct identification and ~ 10% quantification error) in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic, publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of

  14. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard for uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but also to assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O'Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.
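The Kennedy and O'Hagan framework referenced above relates each measurement to the simulator through a model-discrepancy term; in its standard form (notation assumed, not taken from this article):

```latex
y_i \;=\; \eta(x_i, \theta) \;+\; \delta(x_i) \;+\; \varepsilon_i ,
```

where η is the plant model evaluated at operating conditions x_i and calibration parameters θ, δ(·) is a Gaussian-process term for the systematic discrepancy between model and reality, and ε_i is measurement error. Calibration infers θ and δ jointly from the plant data.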

  15. Can a surgeon drill accurately at a specified angle?

    Science.gov (United States)

    Brioschi, Valentina; Cook, Jodie; Arthurs, Gareth I

    2016-01-01

    Objectives To investigate whether a surgeon can drill accurately at a specified angle and whether surgeon experience, task repetition, drill bit size and perceived difficulty influence drilling angle accuracy. Methods The sample population consisted of final-year students (n=25), non-specialist veterinarians (n=22) and board-certified orthopaedic surgeons (n=8). Each participant drilled a hole twice in a horizontal oak plank at 30°, 45°, 60°, 80°, 85° and 90° angles with either a 2.5 or a 3.5 mm drill bit. Participants then rated the perceived difficulty of drilling each angle. The true angle of each hole was measured using a digital goniometer. Results Greater drilling accuracy was achieved at angles closer to 90°. An error of ≤±4° was achieved by 84.5 per cent of participants drilling a 90° angle compared with approximately 20 per cent of participants drilling a 30–45° angle. There was no effect of surgeon experience, task repetition or drill bit size on the mean error for intended versus achieved angle. Increased perception of difficulty was associated with the more acute angles and decreased accuracy, but not with experience level. Clinical significance This study shows that a surgeon's ability to drill accurately (within ±4° error) is limited, particularly at angles ≤60°. In situations where drill angle is critical, use of computer-assisted navigation or custom-made drill guides may be preferable. PMID:27547423

  16. Population variability complicates the accurate detection of climate change responses.

    Science.gov (United States)

    McCain, Christy; Szewczyk, Tim; Bracy Knight, Kevin

    2016-06-01

    The rush to assess species' responses to anthropogenic climate change (CC) has underestimated the importance of interannual population variability (PV). Researchers assume sampling rigor alone will lead to an accurate detection of response regardless of the underlying population fluctuations of the species under consideration. Using population simulations across a realistic, empirically based gradient in PV, we show that moderate to high PV can lead to opposite and biased conclusions about CC responses. Between pre- and post-CC sampling bouts of modeled populations as in resurvey studies, there is: (i) A 50% probability of erroneously detecting the opposite trend in population abundance change and nearly zero probability of detecting no change. (ii) Across multiple years of sampling, it is nearly impossible to accurately detect any directional shift in population sizes with even moderate PV. (iii) There is up to 50% probability of detecting a population extirpation when the species is present, but in very low natural abundances. (iv) Under scenarios of moderate to high PV across a species' range or at the range edges, there is a bias toward erroneous detection of range shifts or contractions. Essentially, the frequency and magnitude of population peaks and troughs greatly impact the accuracy of our CC response measurements. Species with moderate to high PV (many small vertebrates, invertebrates, and annual plants) may be inaccurate 'canaries in the coal mine' for CC without pertinent demographic analyses and additional repeat sampling. Variation in PV may explain some idiosyncrasies in CC responses detected so far and urgently needs more careful consideration in design and analysis of CC responses. PMID:26725404
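Point (i) above is easy to reproduce with a toy resurvey simulation; the coefficient of variation and trial count below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 10_000
cv = 0.5   # moderate-to-high interannual population variability (illustrative)

# Stationary populations: no real trend, only lognormal year-to-year noise.
# The -sigma^2/2 shift keeps the expected abundance constant across years.
sigma = np.sqrt(np.log(1.0 + cv**2))
pre = rng.lognormal(-sigma**2 / 2, sigma, n_trials)
post = rng.lognormal(-sigma**2 / 2, sigma, n_trials)

# A two-bout resurvey "detects" whatever sign the noise happens to give:
p_decline = np.mean(post < pre)
print(p_decline)   # ≈ 0.5: a coin flip, despite zero real change
```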

  17. An accurate and portable solid state neutron rem meter

    Energy Technology Data Exchange (ETDEWEB)

    Oakes, T.M. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Bellinger, S.L. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Miller, W.H. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Missouri University Research Reactor, Columbia, MO (United States); Myers, E.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Fronk, R.G.; Cooper, B.W [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Sobering, T.J. [Electronics Design Laboratory, Kansas State University, KS (United States); Scott, P.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Ugorowski, P.; McGregor, D.S; Shultis, J.K. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Caruso, A.N., E-mail: carusoan@umkc.edu [Department of Physics, University of Missouri, Kansas City, MO (United States)

    2013-08-11

    Accurately resolving the ambient neutron dose equivalent spanning the thermal to 15 MeV energy range with a single configuration and lightweight instrument is desirable. This paper presents the design of a portable, high intrinsic efficiency, and accurate neutron rem meter whose energy-dependent response is electronically adjusted to a chosen neutron dose equivalent standard. The instrument may be classified as a moderating type neutron spectrometer, based on an adaptation of the classical Bonner sphere and position sensitive long counter, which simultaneously counts thermalized neutrons by high thermal efficiency solid state neutron detectors. The use of multiple detectors and moderator arranged along an axis of symmetry (e.g., long axis of a cylinder) with known neutron-slowing properties allows for the construction of a linear combination of responses that approximates the ambient neutron dose equivalent. Variations on the detector configuration are investigated via Monte Carlo N-Particle simulations to minimize the total instrument mass while maintaining acceptable response accuracy—a dose error less than 15% for bare {sup 252}Cf, bare AmBe, and epi-thermal and mixed monoenergetic sources is found at less than 4.5 kg moderator mass in all studied cases. A comparison of the energy dependent dose equivalent response and resultant energy dependent dose equivalent error of the present dosimeter to commercially-available portable rem meters and the prior art is presented. Finally, the present design is assessed by comparison of the simulated output resulting from applications of several known neutron sources and dose rates.
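
    The linear-combination idea can be sketched numerically: given energy-dependent responses of several detectors placed along the moderator axis, solve for weights whose weighted sum approximates a dose-equivalent conversion curve. Everything below (energy grid, response shapes, conversion curve) is invented for illustration; real responses come from the MCNP simulations described above:

```python
import numpy as np

# Toy energy grid (log-spaced, thermal to ~15 MeV) -- illustrative only
E = np.logspace(-8, 1.2, 50)  # MeV

# Hypothetical per-detector responses: detectors deeper in the moderator
# respond to progressively higher-energy neutrons (modeled here as
# log-normal bumps; real responses are simulated, not assumed).
depths = [0.5, 1.5, 3.0, 5.0]  # notional positions along the axis
R = np.array([np.exp(-0.5 * ((np.log10(E) + 7 - 2 * d) / 1.5) ** 2)
              for d in depths])

# Hypothetical dose-equivalent conversion curve h(E), rising with energy
h = 1.0 + 50.0 * (np.log10(E) + 8) / 9.2

# Least-squares weights so that sum_i w_i * R_i(E) ~ h(E)
w, *_ = np.linalg.lstsq(R.T, h, rcond=None)
fit = R.T @ w
print("weights:", np.round(w, 2))
```

In the instrument the analogous weights are applied electronically to the individual detector count rates, which is what lets the response be re-adjusted to a different dose-equivalent standard without hardware changes.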

  18. HMM-FRAME: accurate protein domain classification for metagenomic sequences containing frameshift errors

    Directory of Open Access Journals (Sweden)

    Sun Yanni

    2011-05-01

    Full Text Available Abstract Background Protein domain classification is an important step in metagenomic annotation. The state-of-the-art method for protein domain classification is profile HMM-based alignment. However, the relatively high rates of insertions and deletions in homopolymer regions of pyrosequencing reads create frameshifts, causing conventional profile HMM alignment tools to generate alignments with marginal scores. This makes error-containing gene fragments unclassifiable with conventional tools. Thus, there is a need for an accurate domain classification tool that can detect and correct sequencing errors. Results We introduce HMM-FRAME, a protein domain classification tool based on an augmented Viterbi algorithm that can incorporate error models from different sequencing platforms. HMM-FRAME corrects sequencing errors and classifies putative gene fragments into domain families. It achieved high error detection sensitivity and specificity in a data set with annotated errors. We applied HMM-FRAME in Targeted Metagenomics and a published metagenomic data set. The results showed that our tool can correct frameshifts in error-containing sequences, generate much longer alignments with significantly smaller E-values, and classify more sequences into their native families. Conclusions HMM-FRAME provides a complementary protein domain classification tool to conventional profile HMM-based methods for data sets containing frameshifts. Its current implementation is best used for small-scale metagenomic data sets. The source code of HMM-FRAME can be downloaded at http://www.cse.msu.edu/~zhangy72/hmmframe/ and at https://sourceforge.net/projects/hmm-frame/.
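
    At its core HMM-FRAME extends Viterbi decoding, so a plain log-space Viterbi recursion is a useful reference point. The toy sketch below is not the tool itself: HMM-FRAME augments exactly this recursion with extra transitions that consume 2 or 4 nucleotides per codon, penalized by a platform-specific error model, which is how frameshifts are detected and corrected during alignment:

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Standard log-space Viterbi decoding over a discrete-observation HMM.
    Returns the most probable state path for the observation sequence."""
    n_states = len(start)
    T = len(obs)
    dp = np.full((T, n_states), -np.inf)   # best log-prob ending in state j
    ptr = np.zeros((T, n_states), dtype=int)
    dp[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = dp[t - 1] + np.log(trans[:, j])
            ptr[t, j] = np.argmax(scores)
            dp[t, j] = scores[ptr[t, j]] + np.log(emit[j, obs[t]])
    path = [int(np.argmax(dp[-1]))]
    for t in range(T - 1, 0, -1):          # backtrack
        path.append(int(ptr[t, path[-1]]))
    return path[::-1]

# Toy 2-state example (a 'match' state vs. an 'error-prone' state), 3 symbols
start = np.array([0.9, 0.1])
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
emit = np.array([[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]])
print(viterbi([0, 0, 2, 2, 2, 0], start, trans, emit))
```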

  19. Accurate Measurement of the Effects of All Amino-Acid Mutations on Influenza Hemagglutinin

    Science.gov (United States)

    Doud, Michael B.; Bloom, Jesse D.

    2016-01-01

    Influenza genes evolve mostly via point mutations, and so knowing the effect of every amino-acid mutation provides information about evolutionary paths available to the virus. We and others have combined high-throughput mutagenesis with deep sequencing to estimate the effects of large numbers of mutations to influenza genes. However, these measurements have suffered from substantial experimental noise due to a variety of technical problems, the most prominent of which is bottlenecking during the generation of mutant viruses from plasmids. Here we describe advances that ameliorate these problems, enabling us to measure with greatly improved accuracy and reproducibility the effects of all amino-acid mutations to an H1 influenza hemagglutinin on viral replication in cell culture. The largest improvements come from using a helper virus to reduce bottlenecks when generating viruses from plasmids. Our measurements confirm at much higher resolution the results of previous studies suggesting that antigenic sites on the globular head of hemagglutinin are highly tolerant of mutations. We also show that other regions of hemagglutinin—including the stalk epitopes targeted by broadly neutralizing antibodies—have a much lower inherent capacity to tolerate point mutations. The ability to accurately measure the effects of all influenza mutations should enhance efforts to understand and predict viral evolution. PMID:27271655
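
    A common way to summarize such deep-sequencing selection experiments is the log enrichment of each mutation's frequency after selection relative to before. The sketch below shows that generic computation with toy counts; it is not the authors' analysis pipeline:

```python
import numpy as np

def mutation_effects(pre_counts, post_counts, pseudocount=1.0):
    """Estimate each mutation's effect as the log2 enrichment of its
    frequency after selection relative to before (a standard deep
    mutational scanning summary; pseudocounts guard against zeros)."""
    pre = np.asarray(pre_counts, float) + pseudocount
    post = np.asarray(post_counts, float) + pseudocount
    return np.log2((post / post.sum()) / (pre / pre.sum()))

# Toy example: mutation 0 is tolerated, mutation 2 is strongly deleterious
effects = mutation_effects([100, 100, 100], [150, 100, 10])
print(np.round(effects, 2))
```

The bottlenecking problem described above corrupts exactly these counts: if few virus genomes found the post-selection pool, the sampling noise on `post_counts` overwhelms the signal, which is why the helper-virus improvement matters.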

  20. Selection of low-variance expressed Malus x domestica (apple) genes for use as quantitative PCR reference genes (housekeepers)

    Science.gov (United States)

    To accurately measure gene expression using PCR-based approaches, there is the need for reference genes that have low variance in expression (housekeeping genes) to normalise the data for RNA quantity and quality. For non-model species such as Malus x domestica (apples), previously, the selection of...

  1. Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry

    Science.gov (United States)

    Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.

    2013-09-01

    Monitoring sulphur chemistry is thought to be of great importance for exoplanets. Doing this requires detailed knowledge of the spectroscopic properties of sulphur containing molecules such as hydrogen sulphide (H2S) [1], sulphur dioxide (SO2), and sulphur trioxide (SO3). Each of these molecules can be found in terrestrial environments, produced in volcano emissions on Earth, and analysis of their spectroscopic data can prove useful to the characterisation of exoplanets, as well as the study of planets in our own solar system, with both having a possible presence on Venus. A complete, high temperature list of line positions and intensities for H2 32S is presented. The DVR3D program suite is used to calculate the bound ro-vibration energy levels, wavefunctions, and dipole transition intensities using Radau coordinates. The calculations are based on a newly determined, spectroscopically refined potential energy surface (PES) and a new, high accuracy, ab initio dipole moment surface (DMS). Tests show that the PES enables us to calculate the line positions accurately and the DMS gives satisfactory results for line intensities. Comparisons with experiment as well as with previous theoretical spectra will be presented. The results of this study will form an important addition to the databases which are considered as sources of information for space applications; especially, in analysing the spectra of extrasolar planets, and remote sensing studies for Venus and Earth, as well as laboratory investigations and pollution studies. An ab initio line list for SO3 was previously computed using the variational nuclear motion program TROVE [2], and was suitable for modelling room temperature SO3 spectra. The calculations considered transitions in the region of 0-4000 cm-1 with rotational states up to J = 85, and includes 174,674,257 transitions. A list of 10,878 experimental transitions had relative intensities placed on an absolute scale, and were provided in a form suitable

  2. A Complete and Accurate Ab Initio Repeat Finding Algorithm.

    Science.gov (United States)

    Lian, Shuaibin; Chen, Xinwu; Wang, Peng; Zhang, Xiaoli; Dai, Xianhua

    2016-03-01

    It has become clear that repetitive sequences have played multiple roles in eukaryotic genome evolution, including increasing genetic diversity through mutation, changing gene expression and facilitating the generation of novel genes. However, identification of repetitive elements can be difficult in the ab initio manner. Several classical ab initio repeat-finding tools have already been presented and compared, but the completeness and accuracy with which they detect repeats are rather poor. To this end, we propose a new ab initio repeat-finding tool, named HashRepeatFinder, which is based on hash indexing and word counting. Furthermore, we assessed the performance of HashRepeatFinder against two well-known tools, RepeatScout and Repeatfinder, on human genome data hg19. The results indicated the following three conclusions: (1) the completeness of HashRepeatFinder is the best among the three compared tools in almost all chromosomes, especially in chr9 (8 times that of RepeatScout, 10 times that of Repeatfinder); (2) in terms of detecting large repeats, HashRepeatFinder also performed best in all chromosomes, especially in chr3 (24 times that of RepeatScout and 250 times that of Repeatfinder) and chr19 (12 times that of RepeatScout and 60 times that of Repeatfinder); (3) in terms of accuracy, HashRepeatFinder can merge abundant repeats with high accuracy. PMID:26272474
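
    The hash-index-and-word-counting idea can be illustrated in a few lines: index every k-mer of a sequence, then report those occurring multiple times together with their positions. This is a sketch of the general approach only, not HashRepeatFinder's implementation:

```python
from collections import defaultdict

def find_repeats(genome, k=8, min_copies=2):
    """Hash-index/word-counting repeat detection: index every k-mer,
    then keep those seen at least `min_copies` times."""
    index = defaultdict(list)
    for i in range(len(genome) - k + 1):
        index[genome[i:i + k]].append(i)
    return {kmer: pos for kmer, pos in index.items() if len(pos) >= min_copies}

seq = "ACGTACGTAGGGACGTACGTA"
repeats = find_repeats(seq, k=8)
print(repeats)
```

A production tool must additionally merge overlapping k-mer hits into full-length repeat elements and tolerate mismatches, which is where the completeness and accuracy differences reported above arise.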

  3. Accurate in-line CD metrology for nanometer semiconductor manufacturing

    Science.gov (United States)

    Perng, Baw-Ching; Shieh, Jyu-Horng; Jang, S.-M.; Liang, M.-S.; Huang, Renee; Chen, Li-Chien; Hwang, Ruey-Lian; Hsu, Joe; Fong, David

    2006-03-01

    The need for absolute accuracy is increasing as semiconductor-manufacturing technologies advance to sub-65nm nodes, since device sizes are reducing to sub-50nm but offsets ranging from 5nm to 20nm are often encountered. While TEM is well-recognized as the most accurate CD metrology, direct comparison between the TEM data and in-line CD data might be misleading sometimes due to different statistical sampling and interferences from sidewall roughness. In this work we explore the capability of CD-AFM as an accurate in-line CD reference metrology. Being a member of scanning profiling metrology, CD-AFM has the advantages of avoiding e-beam damage and minimizing sample-damage-induced CD changes, in addition to the capability of more statistical sampling than typical cross section metrologies. While AFM has already gained its reputation for the accuracy of depth measurement, little data has been reported on the accuracy of CD-AFM for CD measurement. Our main focus here is to prove the accuracy of CD-AFM and show its measuring capability for semiconductor related materials and patterns. In addition to the typical precision check, we spent an intensive effort on examining the bias performance of this CD metrology, which is defined as the difference between CD-AFM data and the best-known CD value of the prepared samples. We first examine line edge roughness (LER) behavior for line patterns of various materials, including polysilicon, photoresist, and a porous low k material. Based on the LER characteristics of each pattern, a method is proposed to reduce its influence on CD measurement. Application of our method to a VLSI nanoCD standard is then performed, and agreement of less than 1nm bias is achieved between the CD-AFM data and the standard's value. With very careful sample preparations and TEM tool calibration, we also obtained excellent correlation between CD-AFM and TEM for poly-CDs ranging from 70nm to 400nm. CD measurements of poly ADI and low k trenches are also

  4. Accurate, low-cost 3D-models of gullies

    Science.gov (United States)

    Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine

    2015-04-01

    Soil erosion is a widespread problem in arid and semi-arid areas. The most severe form is gully erosion. Gullies often cut into agricultural farmland and can make a certain area completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D-models of gullies in the Souss Valley in South Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we recorded varying series of Full HD videos at 25 fps. Afterwards, we used the Structure from Motion (SfM) method to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, while the overlap of neighboring images should be at least 80%. In addition, it is very important to avoid selecting photos that are blurry or out of focus. Nearby pixels of a blurry image tend to have similar color values, which is why we used a MATLAB script to compare the derivatives of the images: the higher the sum of the derivatives, the sharper an image of similar objects. MATLAB subdivides the video into image intervals and selects the image with the highest sum from each interval. For example, a 20 min video at 25 fps equals 30,000 single images; the program inspects the first 20 images, saves the sharpest, moves on to the next 20 images, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and the matches between all images and produced a point cloud. MeshLab was then used to build a surface out of it using the Poisson surface reconstruction approach. Afterwards we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates if we compare the data with old recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
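
    The frame-selection step, picking the sharpest image from each fixed-size interval by the sum of image derivatives, can be sketched as follows (in Python rather than MATLAB, with small synthetic arrays standing in for decoded video frames):

```python
import numpy as np

def select_sharpest(frames, interval=20):
    """Pick the sharpest frame index from each interval by summing
    absolute image derivatives: blurry frames have flatter gradients,
    so the frame with the largest derivative sum wins."""
    selected = []
    for start in range(0, len(frames), interval):
        chunk = frames[start:start + interval]
        scores = [np.abs(np.diff(f, axis=0)).sum() +
                  np.abs(np.diff(f, axis=1)).sum() for f in chunk]
        selected.append(start + int(np.argmax(scores)))
    return selected

# Toy frames: one 'sharp' high-contrast checkerboard among near-flat frames
rng = np.random.default_rng(1)
flat = [np.full((8, 8), 0.5) + rng.normal(0, 0.01, (8, 8)) for _ in range(19)]
sharp = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
frames = flat[:7] + [sharp] + flat[7:]
print(select_sharpest(frames, interval=20))
```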

  5. A spectroscopic transfer standard for accurate atmospheric CO measurements

    Science.gov (United States)

    Nwaboh, Javis A.; Li, Gang; Serdyukov, Anton; Werhahn, Olav; Ebert, Volker

    2016-04-01

    Atmospheric carbon monoxide (CO) is a precursor of essential climate variables and indirectly enhances global warming. Accurate and reliable measurements of atmospheric CO concentration are becoming indispensable. WMO-GAW reports state a compatibility goal of ±2 ppb for atmospheric CO concentration measurements. Therefore, the EMRP-HIGHGAS (European metrology research program - high-impact greenhouse gases) project aims at developing spectroscopic transfer standards for CO concentration measurements to meet this goal. A spectroscopic transfer standard would provide results that are directly traceable to the SI, can be very useful for calibration of devices operating in the field, and could complement classical gas standards in the field, where calibration gas mixtures in bottles often are not accurate, available or stable enough [1][2]. Here, we present our new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor capable of performing absolute ("calibration free") CO concentration measurements and of being operated as a spectroscopic transfer standard. To achieve the compatibility goal stated by WMO for CO concentration measurements and ensure the traceability of the final concentration results, traceable spectral line data, especially line intensities with appropriate uncertainties, are needed. Therefore, we utilize our new high-resolution Fourier-transform infrared (FTIR) spectroscopy CO line data for the 2-0 band, with significantly reduced uncertainties, for the dTDLAS data evaluation. Further, we demonstrate the capability of our sensor for atmospheric CO measurements, discuss uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) principles and show that CO concentrations derived using the sensor, based on the TILSAM (traceable infrared laser spectroscopic amount fraction measurement) method, are in excellent agreement with gravimetric values. Acknowledgement Parts of this work have been
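
    For context, a calibration-free dTDLAS evaluation reduces to Beer-Lambert bookkeeping: the amount fraction follows from the integrated line absorbance, the traceable line strength, the path length, and the gas number density p/(k_B·T). The numbers below are round illustrative values chosen to land near an atmospheric CO level, not HIGHGAS data:

```python
# Beer-Lambert / TILSAM-style amount-fraction evaluation (sketch):
#   A = S * n_CO * L   =>   x_CO = n_CO / n_total = A * k_B * T / (S * L * p)
K_B = 1.380649e-23          # Boltzmann constant [J/K]

A_line = 1.0e-8             # integrated line absorbance [cm^-1] (illustrative)
S = 2.0e-23                 # line strength [cm^-1 / (molecule cm^-2)] (illustrative)
L = 200.0                   # absorption path length [cm]
p = 101325.0                # total pressure [Pa]
T = 296.0                   # gas temperature [K]

n_total = p / (K_B * T) * 1e-6       # total number density [molecules cm^-3]
x_co = A_line / (S * L * n_total)    # CO amount fraction (dimensionless)
print(f"CO amount fraction: {x_co:.3e} ({x_co * 1e9:.0f} ppb)")
```

In the traceable scheme, the uncertainty of `x_co` is propagated per GUM from the uncertainties of the line area, line strength, path length, pressure and temperature, which is why the reduced FTIR line-intensity uncertainties matter so much.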

  6. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
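
    The modified model described above, a standard target-cell model extended with an eclipse class of infected cells transitioning into virus production, can be sketched as a small ODE system. Parameter values below are illustrative, not fitted to the SIV data, and simple Euler integration stands in for a proper solver:

```python
def simulate(beta=2e-7, k=1.0, delta=0.5, p=500.0, c=10.0,
             T0=1e6, V0=1.0, days=20.0, dt=0.001):
    """Euler integration of a target-cell-limited model with an eclipse
    class E (infected cells not yet producing virus):
        dT/dt = -beta*T*V          (target cells infected)
        dE/dt =  beta*T*V - k*E    (eclipse -> productive at rate k)
        dI/dt =  k*E - delta*I     (productive cells die at rate delta)
        dV/dt =  p*I - c*V         (virions produced and cleared)
    All parameter values are illustrative."""
    T, E, I, V = T0, 0.0, 0.0, V0
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dE = beta * T * V - k * E
        dI = k * E - delta * I
        dV = p * I - c * V
        T += dT * dt; E += dE * dt; I += dI * dt; V += dV * dt
    return T, E, I, V

T, E, I, V = simulate()
print(f"target cells left: {T:.3g}, virus: {V:.3g}")
```

With these parameters the basic reproductive number is well above one, so the simulated infection takes off and depletes target cells, the regime in which the budding-vs-bursting distinction discussed above is evaluated.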

  7. An accurate δf method for neoclassical transport calculation

    International Nuclear Information System (INIS)

    A δf method, solving the drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of the marker density g in the weight calculation. A general and accurate weighting scheme is developed without using an assumed g in the weight equation for advancing particle weights, unlike the previous schemes. This scheme employs an additional weight function to directly solve g from its kinetic equation using the idea of the δf method. Therefore the severe constraint that the real marker distribution must be consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is presented. By performing compensation for momentum, energy and particle losses arising from numerical errors, the conservation of all three quantities is greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined for the finite banana case as well as the zero banana limit. A solution with zero particle and zero energy flux (in case of no temperature gradient) over the whole poloidal section is obtained. With the improvement in both the like-particle collision scheme and the weighting scheme, the δf simulation shows a significantly upgraded performance for neoclassical transport study. (author)

  8. Accurate mass measurements of very short-lived nuclei

    CERN Document Server

    Herfurth, F; Ames, F; Audi, G; Beck, D; Blaum, K; Bollen, G; Engels, O; Kluge, H J; Lunney, M D; Moores, R B; Oinonen, M; Sauvan, E; Bolle, C A; Scheidenberger, C; Schwarz, S; Sikler, G; Weber, C

    2002-01-01

    Mass measurements of /sup 34/Ar, /sup 73-78/Kr, and /sup 74,76/Rb were performed with the Penning-trap mass spectrometer ISOLTRAP. Very accurate Q/sub EC/-values are needed for the investigations of the F /sub t/-value of 0/sup +/ to 0/sup +/ nuclear beta -decays used to test the standard model predictions for weak interactions. The necessary accuracy on the Q/sub EC/-value requires the mass of mother and daughter nuclei to be measured with delta m/m

  9. Progress in Fast, Accurate Multi-scale Climate Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Collins, William D [Lawrence Berkeley National Laboratory (LBNL); Johansen, Hans [Lawrence Berkeley National Laboratory (LBNL); Evans, Katherine J [ORNL; Woodward, Carol S. [Lawrence Livermore National Laboratory (LLNL); Caldwell, Peter [Lawrence Livermore National Laboratory (LLNL)

    2015-01-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  10. Accurate measurement of liquid transport through nanoscale conduits

    Science.gov (United States)

    Alibakhshi, Mohammad Amin; Xie, Quan; Li, Yinxiao; Duan, Chuanhua

    2016-04-01

    Nanoscale liquid transport governs the behaviour of a wide range of nanofluidic systems, yet remains poorly characterized and understood due to the enormous hydraulic resistance associated with the nanoconfinement and the resulting minuscule flow rates in such systems. To overcome this problem, here we present a new measurement technique based on capillary flow and a novel hybrid nanochannel design and use it to measure water transport through single 2-D hydrophilic silica nanochannels with heights down to 7 nm. Our results show that silica nanochannels exhibit increased mass flow resistance compared to the classical hydrodynamics prediction. This difference increases with decreasing channel height and reaches 45% in the case of 7 nm nanochannels. This resistance increase is attributed to the formation of a 7-angstrom-thick stagnant hydration layer on the hydrophilic surfaces. By avoiding use of any pressure and flow sensors or any theoretical estimations the hybrid nanochannel scheme enables facile and precise flow measurement through single nanochannels, nanotubes, or nanoporous media and opens the prospect for accurate characterization of both hydrophilic and hydrophobic nanofluidic systems.

  11. Accurate estimation of third-order moments from turbulence measurements

    Directory of Open Access Journals (Sweden)

    J. J. Podesta

    2009-02-01

    Full Text Available Politano and Pouquet's law, a generalization of Kolmogorov's four-fifths law to incompressible MHD, makes it possible to measure the energy cascade rate in incompressible MHD turbulence by means of third-order moments. In hydrodynamics, accurate measurement of third-order moments requires large amounts of data because the probability distributions of velocity-differences are nearly symmetric and the third-order moments are relatively small. Measurements of the energy cascade rate in solar wind turbulence have recently been performed for the first time, but without careful consideration of the accuracy or statistical uncertainty of the required third-order moments. This paper investigates the statistical convergence of third-order moments as a function of the sample size N. It is shown that the accuracy of the third moment ⟨(δv∥)³⟩ depends on the number of correlation lengths spanned by the data set, and a method of estimating the statistical uncertainty of the third moment is developed. The technique is illustrated using both wind tunnel data and solar wind data.
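
    A practical version of the error-estimation idea is to treat blocks of data roughly one correlation length long as independent samples and derive a standard error of the third moment from the block means. The sketch below uses synthetic, nearly symmetric increments; the block size standing in for the correlation scale is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def third_moment_with_error(dv, block=1000):
    """Estimate <dv^3> and its statistical uncertainty by splitting the
    series into blocks of ~one correlation length and treating block
    means as independent samples."""
    m3 = np.mean(dv ** 3)
    blocks = dv[: len(dv) // block * block].reshape(-1, block)
    block_means = (blocks ** 3).mean(axis=1)
    stderr = block_means.std(ddof=1) / np.sqrt(len(block_means))
    return m3, stderr

# Nearly symmetric increments with a small skew: the third moment is tiny
# relative to the fluctuations, so convergence with N is slow
dv = rng.normal(0, 1, 200_000) + 0.05 * rng.normal(0, 1, 200_000) ** 2
m3, err = third_moment_with_error(dv)
print(f"third moment = {m3:.4f} +/- {err:.4f}")
```

This makes the paper's point concrete: because the signed third moment is a small residual of large cancelling contributions, its standard error shrinks only as the number of independent correlation lengths grows.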

  12. Faster and More Accurate Sequence Alignment with SNAP

    CERN Document Server

    Zaharia, Matei; Curtis, Kristal; Fox, Armando; Patterson, David; Shenker, Scott; Stoica, Ion; Karp, Richard M; Sittler, Taylor

    2011-01-01

    We present the Scalable Nucleotide Alignment Program (SNAP), a new short and long read aligner that is both more accurate (i.e., aligns more reads with fewer errors) and 10-100x faster than state-of-the-art tools such as BWA. Unlike recent aligners based on the Burrows-Wheeler transform, SNAP uses a simple hash index of short seed sequences from the genome, similar to BLAST's. However, SNAP greatly reduces the number and cost of local alignment checks performed through several measures: it uses longer seeds to reduce the false positive locations considered, leverages larger memory capacities to speed index lookup, and excludes most candidate locations without fully computing their edit distance to the read. The result is an algorithm that scales well for reads from one hundred to thousands of bases long and provides a rich error model that can match classes of mutations (e.g., longer indels) that today's fast aligners ignore. We calculate that SNAP can align a dataset with 30x coverage of a human genome in le...
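
    SNAP's central data structure, a hash index of short seeds with candidate locations gathered by seed lookup, can be sketched as follows. This is a toy illustration of seed indexing and offset voting only; SNAP's actual engineering (long seeds, memory layout, early edit-distance cutoffs) differs:

```python
from collections import defaultdict

def build_seed_index(genome, seed_len=16):
    """Hash index mapping every fixed-length seed to its genome positions,
    the lookup structure SNAP shares in spirit with BLAST."""
    index = defaultdict(list)
    for i in range(len(genome) - seed_len + 1):
        index[genome[i:i + seed_len]].append(i)
    return index

def candidate_locations(read, index, seed_len=16):
    """Look up non-overlapping seeds from the read and vote on implied
    genome offsets; top-voted offsets become the few candidates whose
    edit distance would actually be computed."""
    votes = defaultdict(int)
    for off in range(0, len(read) - seed_len + 1, seed_len):
        for pos in index[read[off:off + seed_len]]:
            votes[pos - off] += 1
    return sorted(votes, key=votes.get, reverse=True)

genome = "TTGACCAGTAACGGTTACCATGGCAACTGGTCAGGATTTACCGGTGACCAA"
index = build_seed_index(genome)
read = genome[10:42]  # a 32 bp 'read' sampled from position 10
print(candidate_locations(read, index)[0])
```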

  13. In pursuit of accurate timekeeping: Liverpool and Victorian electrical horology.

    Science.gov (United States)

    Ishibashi, Yuto

    2014-10-01

    This paper explores how nineteenth-century Liverpool became such an advanced city with regard to public timekeeping, and the wider impact of this on the standardisation of time. From the mid-1840s, local scientists and municipal bodies in the port city were engaged in improving the ways in which accurate time was communicated to ships and the general public. As a result, Liverpool was the first British city to witness the formation of a synchronised clock system, based on an invention by Robert Jones. His method gained a considerable reputation in the scientific and engineering communities, which led to its subsequent replication at a number of astronomical observatories such as Greenwich and Edinburgh. As a further key example of developments in time-signalling techniques, this paper also focuses on the time ball established in Liverpool by the Electric Telegraph Company in collaboration with George Biddell Airy, the Astronomer Royal. This is a particularly significant development because, as the present paper illustrates, one of the most important technologies in measuring the accuracy of the Greenwich time signal took shape in the experimental operation of the time ball. The inventions and knowledge which emerged from the context of Liverpool were vital to the transformation of public timekeeping in Victorian Britain.

  14. A highly accurate ab initio potential energy surface for methane

    Science.gov (United States)

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2016-09-01

    A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels with the four fundamentals of 12CH4 reproduced with a root-mean-square error of 0.70 cm-1. The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.

  15. How complete and accurate is meningococcal disease notification?

    Science.gov (United States)

    Breen, E; Ghebrehewet, S; Regan, M; Thomson, A P J

    2004-12-01

    Effective public health control of meningococcal disease (meningococcal meningitis and septicaemia) is dependent on complete, accurate and speedy notification. Using capture-recapture techniques this study assesses the completeness, accuracy and timeliness of meningococcal notification in a health authority. The completeness of meningococcal disease notification was 94.8% (95% confidence interval 93.2% to 96.2%); 91.2% of cases in 2001 were notified within 24 hours of diagnosis, but 28.0% of notifications in 2001 were false positives. Clinical staff need to be aware of the public health implications of a notification of meningococcal disease, and of failure of, or delay in notification. Incomplete or delayed notification not only leads to inaccurate data collection but also means that important public health measures may not be taken. A clinical diagnosis of meningococcal disease should be carefully considered between the clinician and the consultant in communicable disease control (CCDC). Otherwise, prophylaxis may be given unnecessarily, disease incidence inflated, and the benefits of control measures underestimated. Consultants in communicable disease control (CCDCs), in conjunction with clinical staff, should de-notify meningococcal disease if the diagnosis changes.
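
    The capture-recapture technique used to assess completeness can be illustrated with the two-source Chapman estimator; the counts below are hypothetical, not the study's data:

```python
def chapman_estimate(n1, n2, m):
    """Two-source capture-recapture (Chapman estimator): n1 cases in
    source A (e.g. statutory notifications), n2 in source B (e.g.
    laboratory reports), m found in both. Returns the estimated true
    number of cases; completeness of source A is then n1 / N_hat."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

n1, n2, m = 180, 150, 140  # hypothetical counts
n_hat = chapman_estimate(n1, n2, m)
print(f"estimated cases: {n_hat:.0f}, completeness of A: {n1 / n_hat:.1%}")
```

The estimator assumes the two sources capture cases independently; violations of that assumption bias the completeness figure, which is one reason such studies report confidence intervals as above.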

  16. An accurate {delta}f method for neoclassical transport calculation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, W.X.; Nakajima, N.; Murakami, S.; Okamoto, M. [National Inst. for Fusion Science, Toki, Gifu (Japan)

    1999-03-01

    A {delta}f method, solving the drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of the marker density g in the weight calculation. A general and accurate weighting scheme is developed without using an assumed g in the weight equation for advancing particle weights, unlike the previous schemes. This scheme employs an additional weight function to directly solve g from its kinetic equation using the idea of the {delta}f method. Therefore the severe constraint that the real marker distribution must be consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is presented. By performing compensation for momentum, energy and particle losses arising from numerical errors, the conservation of all three quantities is greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined for the finite banana case as well as the zero banana limit. A solution with zero particle and zero energy flux (in case of no temperature gradient) over the whole poloidal section is obtained. With the improvement in both the like-particle collision scheme and the weighting scheme, the {delta}f simulation shows a significantly upgraded performance for neoclassical transport study. (author)

  17. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Directory of Open Access Journals (Sweden)

    Gagandeep Singh

    2013-05-01

    This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting-based approach is adopted in which every pixel votes for potential eye center candidates. The votes are distributed over a subset of pixels lying in the direction opposite to the gradient direction, and the vote weights are assigned according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using the Canny edge detector. Distributed voting is applied to the edge image to generate the eye center candidates. Morphological closing and a local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is then used to choose the correct candidates for the eye center locations. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state-of-the-art. The approach is robust against illumination variation, small pose variations, the presence of eyeglasses and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297. DOI: http://dx.doi.org/10.14429/dsj.63.2763
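
    The gradient-opposed voting step described in the abstract can be sketched as follows. This is a simplified illustration on a synthetic image, not the authors' implementation; the step count and edge threshold below are arbitrary choices.

```python
import numpy as np

def vote_center(img, n_steps=10):
    """Each strong-gradient pixel casts votes along the direction opposite
    to its gradient; the accumulator maximum estimates the eye centre."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, cols
    mag = np.hypot(gx, gy)
    acc = np.zeros_like(mag)
    h, w = img.shape
    ys, xs = np.nonzero(mag > 0.1 * mag.max())
    for y, x in zip(ys, xs):
        dx, dy = -gx[y, x] / mag[y, x], -gy[y, x] / mag[y, x]
        for t in range(1, n_steps + 1):       # walk against the gradient
            px, py = int(round(x + t * dx)), int(round(y + t * dy))
            if 0 <= px < w and 0 <= py < h:
                acc[py, px] += mag[y, x]      # weight vote by edge strength
    return np.unravel_index(np.argmax(acc), acc.shape)

# Synthetic "pupil": a dark disc centred at (12, 12) on a bright background.
# Gradients point outward (dark to bright), so votes converge on the centre.
yy, xx = np.mgrid[0:25, 0:25]
img = np.where((yy - 12) ** 2 + (xx - 12) ** 2 <= 36, 0.0, 1.0)
print(vote_center(img))   # close to (12, 12)
```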

  18. Reusable, robust, and accurate laser-generated photonic nanosensor.

    Science.gov (United States)

    Yetisen, Ali K; Montelongo, Yunuen; da Cruz Vasconcellos, Fernando; Martinez-Hurtado, J L; Neupane, Sankalpa; Butt, Haider; Qasim, Malik M; Blyth, Jeffrey; Burling, Keith; Carmody, J Bryan; Evans, Mark; Wilkinson, Timothy D; Kubota, Lauro T; Monteiro, Michael J; Lowe, Christopher R

    2014-06-11

    Developing noninvasive and accurate diagnostics that are easily manufactured, robust, and reusable will enable monitoring of high-risk individuals in any clinical or point-of-care environment. We have developed a clinically relevant optical glucose nanosensor that can be reused at least 400 times without a compromise in accuracy. The use of a single 6 ns laser (λ = 532 nm, 200 mJ) pulse rapidly produced off-axis Bragg diffraction gratings consisting of ordered silver nanoparticles embedded within a phenylboronic acid-functionalized hydrogel. This sensor exhibited reversible large wavelength shifts and diffracted the spectrum of narrow-band light over the wavelength range λpeak ≈ 510-1100 nm. The experimental sensitivity of the sensor permits diagnosis of glucosuria in the urine samples of diabetic patients with an improved performance compared to commercial high-throughput urinalysis devices. The sensor response was achieved within 5 min, and the sensor reset to baseline in ∼10 s. It is anticipated that this sensing platform will have implications for the development of reusable, equipment-free colorimetric point-of-care diagnostic devices for diabetes screening. PMID:24844116

  19. Accurate measurement of streamwise vortices in low speed aerodynamic flows

    Science.gov (United States)

    Waldman, Rye M.; Kudo, Jun; Breuer, Kenneth S.

    2010-11-01

    Low Reynolds number experiments with flapping animals (such as bats and small birds) are of current interest in understanding biological flight mechanics, and due to their application to Micro Air Vehicles (MAVs) which operate in a similar parameter space. Previous PIV wake measurements have described the structures left by bats and birds, and provided insight to the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions due to significant experimental challenges associated with the highly three-dimensional and unsteady nature of the flows, and the low wake velocities associated with lifting bodies that only weigh a few grams. This requires the high-speed resolution of small flow features in a large field of view using limited laser energy and finite camera resolution. Cross-stream measurements are further complicated by the high out-of-plane flow which requires thick laser sheets and short interframe times. To quantify and address these challenges we present data from a model study on the wake behind a fixed wing at conditions comparable to those found in biological flight. We present a detailed analysis of the PIV wake measurements, discuss the criteria necessary for accurate measurements, and present a new dual-plane PIV configuration to resolve these issues.

  20. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    Directory of Open Access Journals (Sweden)

    Miao Liu

    2015-10-01

    In structured light measurement systems and 3D printing systems, the errors caused by the optical distortion of a digital projector always affect the precision and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that uses photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by curve fitting. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method avoids most of the disadvantages of traditional methods and achieves higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems.
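
    One common way to realize a polynomial distortion representation is to fit per-axis bivariate polynomials from ideal to observed pixel positions by least squares. The sketch below uses a hypothetical third-order basis and a synthetic warp; the paper's exact model may differ.

```python
import numpy as np

def poly_terms(x, y):
    # third-order bivariate basis (an assumed choice, not the paper's)
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2,
                            x**2 * y, x * y**2, x**3, y**3])

def fit_distortion(ideal, observed):
    """Least-squares polynomial distortion coefficients, one set per axis."""
    A = poly_terms(ideal[:, 0], ideal[:, 1])
    cx, *_ = np.linalg.lstsq(A, observed[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, observed[:, 1], rcond=None)
    return cx, cy

def apply_distortion(pts, cx, cy):
    A = poly_terms(pts[:, 0], pts[:, 1])
    return np.column_stack([A @ cx, A @ cy])

# Synthetic check: a mild radial warp lies inside the cubic basis,
# so the fit should reproduce it to machine precision.
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 11),
                            np.linspace(-1, 1, 11)), -1).reshape(-1, 2)
warped = grid * (1 + 0.05 * (grid ** 2).sum(axis=1, keepdims=True))
cx, cy = fit_distortion(grid, warped)
resid = np.abs(apply_distortion(grid, cx, cy) - warped).max()
print(resid < 1e-10)   # True
```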

  1. Accurate transition rates for intercombination lines of singly ionized nitrogen

    International Nuclear Information System (INIS)

    The transition energies and rates for the 2s22p2 3P1,2-2s2p3 5S2o and 2s22p3s-2s22p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p3 1,3P1o and 2s22p3s 1,3P1o levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fraction, and lifetimes have been compared with previous calculations and experiments.

  2. Accurate measurement of RF exposure from emerging wireless communication systems

    International Nuclear Information System (INIS)

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are submitted to signals with variable duty cycles (DC) and crest factors (CF) either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide accurate enough results for deterministic signals such as Worldwide Interoperability for Microwave Access (WIMAX) or Long Term Evolution (LTE), nor for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.

  3. An Accurate ANFIS-based MPPT for Solar PV System

    Directory of Open Access Journals (Sweden)

    Ahmed Bin-Halabi

    2014-06-01

    It has been found from the literature review that ANFIS-based maximum power point tracking (MPPT) techniques are very fast and accurate in tracking the MPP under any weather conditions, and have smaller power losses if trained well. Unfortunately, this holds in simulation, but in practice they do not work very well because they do not take into account the aging of solar cells or the effects of dust and shading. In other words, the solar irradiance measured by an irradiance sensor is not always the same irradiance that reaches the PV module. The main objective of this work is to design and practically implement an MPPT system for solar PV with high speed, high efficiency, and relatively easy implementation, in order to improve the efficiency of solar energy conversion. This MPPT system is based on the ANFIS technique. The contribution of this research is eliminating the need for an irradiance sensor while achieving the same adequate performance obtained by ANFIS with an irradiance sensor, both in simulation and in experimental implementation. The proposed technique has been validated by comparing the practical results of the implemented setup to simulations. Experimental results have shown good agreement with simulation results.

  4. Fast, accurate, robust and Open Source Brain Extraction Tool (OSBET)

    Science.gov (United States)

    Namias, R.; Donnelly Kehoe, P.; D'Amato, J. P.; Nagel, J.

    2015-12-01

    The removal of non-brain regions in neuroimaging is a critical preprocessing task. Skull-stripping depends on different factors, including the noise level in the image, the anatomy of the subject being scanned and the acquisition sequence. For these and other reasons, an ideal brain extraction method should be fast, accurate, user friendly, open-source and knowledge based (to allow interaction with the algorithm in case the expected outcome is not obtained), producing stable results and making it possible to automate the process for large datasets. There is already a large number of validated tools to perform this task, but none of them meets all the desired characteristics. In this paper we introduce an open-source brain extraction tool (OSBET), composed of four steps using simple, well-known operations (optimal thresholding, binary morphology, labeling and geometrical analysis) that aims to assemble all the desired features. We present an experiment comparing OSBET with six other state-of-the-art techniques on a publicly available dataset consisting of 40 T1-weighted 3D scans and their corresponding manually segmented images. OSBET achieved both a short runtime and excellent accuracy, obtaining the best Dice coefficient. Further validation should be performed, for instance in unhealthy populations, to generalize its usage for clinical purposes.

  5. Accurate ionization potential of semiconductors from efficient density functional calculations

    Science.gov (United States)

    Ye, Lin-Hui

    2016-07-01

    Despite its huge successes in total-energy-related applications, the Kohn-Sham scheme of density functional theory cannot get reliable single-particle excitation energies for solids. In particular, it has not been able to calculate the ionization potential (IP), one of the most important material parameters, for semiconductors. We illustrate that an approximate exact-exchange optimized effective potential (EXX-OEP), the Becke-Johnson exchange, can be used to largely solve this long-standing problem. For a group of 17 semiconductors, we have obtained IPs to an accuracy similar to that of the much more sophisticated GW approximation (GWA), at only the computational cost of the local-density or generalized gradient approximation. The EXX-OEP, therefore, is likely as useful for solids as for finite systems. For solid surfaces, the asymptotic behavior of the exchange-correlation potential vxc has effects similar to those in finite systems which, when neglected, typically cause the semiconductor IPs to be underestimated. This may partially explain why the standard GWA systematically underestimates the IPs and why the same GWA procedures have not been able to yield an accurate IP and band gap at the same time.

  6. Accurate Complex Systems Design: Integrating Serious Games with Petri Nets

    Directory of Open Access Journals (Sweden)

    Kirsten Sinclair

    2016-03-01

    Difficulty understanding the large number of interactions involved in complex systems makes their successful engineering a problem. Petri Nets are one graphical modelling technique used to describe and check proposed designs of complex systems thoroughly. While the automatic analysis capabilities of Petri Nets are useful, their visual form is less so, particularly for communicating the design they represent. In engineering projects, this can lead to a gap in communication between people with different areas of expertise, negatively impacting the achievement of accurate designs. In contrast, although capable of representing a variety of real and imaginary objects effectively, the behaviour of serious games can only be analysed manually, through interactive simulation. This paper examines combining the complementary strengths of Petri Nets and serious games. The novel contribution of this work is a serious game prototype of a complex system design that has been checked thoroughly. Underpinned by Petri Net analysis, the serious game can be used as a high-level interface to communicate and refine the design. Improvement of a complex system design is demonstrated by applying the integration to a proof-of-concept case study.
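
    A place/transition Petri Net of the kind used for such analysis can be simulated in a few lines. This is a generic minimal sketch, not the paper's tooling; the two-state workflow at the end is purely illustrative.

```python
# Minimal place/transition Petri Net: a transition is enabled when every
# input place holds enough tokens; firing moves tokens from inputs to outputs.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)     # tokens currently in each place
        self.transitions = {}            # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Illustrative two-state workflow: one token cycles between 'idle' and 'busy'.
net = PetriNet({"idle": 1, "busy": 0})
net.add_transition("start", {"idle": 1}, {"busy": 1})
net.add_transition("stop", {"busy": 1}, {"idle": 1})
net.fire("start")
print(net.marking)   # {'idle': 0, 'busy': 1}
```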

  7. Accurate stereochemistry for two related 22,26-epiminocholestene derivatives

    International Nuclear Information System (INIS)

    Regioselective opening of ring E of solasodine under various conditions afforded (25R)-22,26-epiminocholesta-5,22(N)-diene-3β,16β-diyl diacetate (previously known as 3,16-diacetyl pseudosolasodine B), C31H47NO4, or (22S,25R)-16β-hydroxy-22,26-epiminocholesta-5-en-3β-yl acetate (a derivative of the naturally occurring alkaloid oblonginine), C29H47NO3. In both cases, the reactions are carried out with retention of chirality at the C16, C20 and C25 stereogenic centers, which are found to be S, S and R, respectively. Although pseudosolasodine was synthesized 50 years ago, these accurate assignments clarify some controversial points about the actual stereochemistry of these alkaloids. This is of particular importance in the case of oblonginine, since this compound is currently under consideration for the treatment of aphasia arising from apoplexy; the present study defines a diastereoisomerically pure compound for pharmacological studies.

  8. Accurate parameters of 93 solar-type Kepler targets

    CERN Document Server

    Bruntt, H; Smalley, B; Chaplin, W J; Verner, G A; Bedding, T R; Catala, C; Gazzano, J -C; Molenda-Zakowicz, J; Thygesen, A O; Uytterhoeven, K; Hekker, S; Huber, D; Karoff, C; Mathur, S; Mosser, B; Appourchaux, T; Campante, T L; Elsworth, Y; Garcia, R A; Handberg, R; Metcalfe, T S; Quirion, P -O; Regulo, C; Roxburgh, I W; Stello, D; Christensen-Dalsgaard, J; Kawaler, S D; Kjeldsen, H; Morris, R L; Quintana, E V; Sanderfer, D T

    2012-01-01

    We present a detailed spectroscopic study of 93 solar-type stars that are targets of the NASA/Kepler mission and provide detailed chemical composition of each target. We find that the overall metallicity is well-represented by Fe lines. Relative abundances of light elements (CNO) and alpha-elements are generally higher for low-metallicity stars. Our spectroscopic analysis benefits from the accurately measured surface gravity from the asteroseismic analysis of the Kepler light curves. The log g parameter is known to better than 0.03 dex and is held fixed in the analysis. We compare our Teff determination with a recent colour calibration of V-K (TYCHO V magnitude minus 2MASS Ks magnitude) and find very good agreement and a scatter of only 80 K, showing that for other nearby Kepler targets this index can be used. The asteroseismic log g values agree very well with the classical determination using Fe1-Fe2 balance, although we find a small systematic offset of 0.08 dex (asteroseismic log g values are lower). The ...

  9. A New Path Generation Algorithm Based on Accurate NURBS Curves

    Directory of Open Access Journals (Sweden)

    Sawssen Jalel

    2016-04-01

    The process of finding an optimal, smooth and feasible global path for mobile robot navigation usually involves determining the shortest polyline path, which is subsequently smoothed to satisfy the requirements. Within this context, this paper deals with a novel roadmap algorithm for generating an optimal path in terms of Non-Uniform Rational B-Spline (NURBS) curves. The generated path is well constrained within the curvature limit by exploiting the influence of the weight parameter of NURBS and/or the control points' locations. The novelty of this paper lies in the fact that NURBS curves are not used only as a means of smoothing: they are also involved in meeting the system's constraints via a suitable parameterization of the weights and the locations of the control points. The accurate parameterization of the weights allows a greater benefit to be derived from the influence and geometrical effect of this factor, which has not been well investigated in previous works. The effectiveness of the proposed algorithm is demonstrated through extensive MATLAB computer simulations.
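
    For reference, a NURBS curve point is the rational combination C(u) = Σ wi Ni,p(u) Pi / Σ wi Ni,p(u). The sketch below evaluates this directly with the Cox-de Boor recursion to show how the weight parameter pulls the curve; it is an illustration of NURBS evaluation, not the paper's path-planning algorithm.

```python
import numpy as np

def bspline_basis(i, p, u, U):
    """Cox-de Boor recursion for N_{i,p}(u) on knot vector U
    (half-open support: u = U[-1] would need special-casing)."""
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = right = 0.0
    if U[i + p] != U[i]:
        left = ((u - U[i]) / (U[i + p] - U[i])
                * bspline_basis(i, p - 1, u, U))
    if U[i + p + 1] != U[i + 1]:
        right = ((U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1])
                 * bspline_basis(i + 1, p - 1, u, U))
    return left + right

def nurbs_point(u, ctrl, weights, U, p=2):
    """Weighted basis values, then the rational division."""
    N = np.array([bspline_basis(i, p, u, U) for i in range(len(ctrl))])
    wN = weights * N
    return (wN @ ctrl) / wN.sum()

# Quadratic example on a clamped knot vector; the heavier middle weight
# pulls the curve towards its control point (y = 4/3 instead of 1 at u = 0.5).
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])
weights = np.array([1.0, 2.0, 1.0])
U = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
print(nurbs_point(0.5, ctrl, weights, U))   # [1.0, 1.3333...]
```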

  10. Study of accurate volume measurement system for plutonium nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

    It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, in which the air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure contains many sources of error, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the tips of the dip-tubes, non-uniformity of the temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures affecting the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system has been developed using a quartz-oscillation-type transducer, which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
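
    The basic hydrostatics behind the two differential pressures can be sketched as follows. All numbers and the constant-cross-section tank are illustrative assumptions; the real instrument corrects many additional error terms listed in the abstract.

```python
# Dip-tube hydrostatics: one differential pressure across a known vertical
# tube gap gives density; a second one gives the liquid level above a tip.
G = 9.80665                            # standard gravity, m/s^2

def solution_density(dp_density, tube_gap):
    """Density from the differential pressure between two dip-tubes whose
    tips are a known vertical distance apart (Pa, m -> kg/m^3)."""
    return dp_density / (G * tube_gap)

def liquid_level(dp_level, density):
    """Solution height above the lower dip-tube tip (Pa -> m)."""
    return dp_level / (density * G)

rho = solution_density(3000.0, 0.25)   # 3000 Pa across a 0.25 m tube gap
lvl = liquid_level(9000.0, rho)        # 9000 Pa of hydrostatic head
vol = 0.5 * lvl                        # idealized 0.5 m^2 constant-area tank
print(round(rho, 1), round(lvl, 3), round(vol, 4))   # 1223.7 0.75 0.375
```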

  11. Accurate ab initio vibrational energies of methyl chloride

    Energy Technology Data Exchange (ETDEWEB)

    Owens, Alec, E-mail: owens@mpi-muelheim.mpg.de [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan [Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Thiel, Walter [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany)

    2015-06-28

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH{sub 3}{sup 35}Cl and CH{sub 3}{sup 37}Cl. The respective PESs, CBS-35{sup  HL}, and CBS-37{sup  HL}, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY {sub 3}Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35{sup  HL} and CBS-37{sup  HL} PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm{sup −1}, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH{sub 3}Cl without empirical refinement of the respective PESs.

  12. AUTOMATED, HIGHLY ACCURATE VERIFICATION OF RELAP5-3D

    Energy Technology Data Exchange (ETDEWEB)

    George L Mesina; David Aumiller; Francis Buschman

    2014-07-01

    Computer programs that analyze light water reactor safety solve complex systems of governing, closure and special process equations to model the underlying physics. In addition, these programs incorporate many other features and are quite large. RELAP5-3D[1] has over 300,000 lines of coding for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. Verification ensures that a program is built right by checking that it meets its design specifications. Recently, increased emphasis has been placed on developing automated verification processes that compare coding against its documented algorithms and equations, and compare its calculations against analytical solutions and the method of manufactured solutions[2]. For the first time, the ability exists to ensure that the data transfer operations associated with timestep advancement/repeating and writing/reading a solution to a file have no unintended consequences. To ensure that the code performs as intended over its extensive list of applications, an automated and highly accurate verification method has been modified and applied to RELAP5-3D. Furthermore, mathematical analysis of the adequacy of the checks used in the comparisons is provided.
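
    The method of manufactured solutions mentioned above can be illustrated on a toy solver: pick an exact solution, derive the forcing it implies, and check that the discretization error shrinks at the scheme's theoretical rate. The sketch below verifies a 1D finite-difference Poisson solver, standing in for the far larger RELAP5-3D machinery.

```python
import numpy as np

def solve_poisson(f, n):
    """Second-order finite-difference solve of -u'' = f on (0, 1),
    with u(0) = u(1) = 0, on n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return x, np.linalg.solve(A, f(x))

def mms_error(n):
    u_exact = lambda x: np.sin(np.pi * x)         # manufactured solution
    f = lambda x: np.pi ** 2 * np.sin(np.pi * x)  # forcing it implies
    x, u = solve_poisson(f, n)
    return np.abs(u - u_exact(x)).max()

# Halving h should divide the error by ~4 for a second-order scheme.
rate = mms_error(19) / mms_error(39)
print(round(rate, 1))   # ≈ 4.0
```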

  13. Accurate measurement of RF exposure from emerging wireless communication systems

    Science.gov (United States)

    Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno

    2013-04-01

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are submitted to signals with variable duty cycles (DC) and crest factors (CF) either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide accurate enough results for deterministic signals such as Worldwide Interoperability for Microwave Access (WIMAX) or Long Term Evolution (LTE), nor for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.

  14. Accurate estimators of correlation functions in Fourier space

    Science.gov (United States)

    Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.

    2016-08-01

    Efficient estimators of Fourier-space statistics for large numbers of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy of the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing the grid size and discarding high-frequency modes. This results in inefficient estimates of, e.g., the power spectrum when the desired systematic biases are well below the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher-order interpolation kernels than the standard Cloud-In-Cell algorithm results in a significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
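
    The interpolation-kernel choice discussed above starts from mass assignment to the FFT grid. Below is a minimal 1D Cloud-In-Cell (CIC) assignment sketch; interlacing would repeat the assignment on a half-cell-shifted grid and average the two Fourier transforms, which is omitted here.

```python
import numpy as np

def cic_assign(positions, n_grid, box=1.0):
    """1D Cloud-In-Cell assignment on a periodic box: each unit-mass
    particle is shared linearly between its two nearest cell centres."""
    dx = box / n_grid
    rho = np.zeros(n_grid)
    for x in positions:
        s = x / dx - 0.5              # position in cell-centre coordinates
        i = int(np.floor(s))
        frac = s - i
        rho[i % n_grid] += 1.0 - frac
        rho[(i + 1) % n_grid] += frac
    return rho

# Cell centres sit at 0.0625, 0.1875, ... for n_grid = 8, so a particle
# placed exactly on the second centre deposits all its mass in cell 1.
rho = cic_assign([0.1875], n_grid=8)
print(rho[1], rho.sum())   # 1.0 1.0
```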

  15. Accurate quantification of cells recovered by bronchoalveolar lavage.

    Science.gov (United States)

    Saltini, C; Hance, A J; Ferrans, V J; Basset, F; Bitterman, P B; Crystal, R G

    1984-10-01

    Quantification of the differential cell count and total number of cells recovered from the lower respiratory tract by bronchoalveolar lavage is a valuable technique for evaluating the alveolitis of patients with inflammatory disorders of the lower respiratory tract. The most commonly used technique for the evaluation of cells recovered by lavage has been to concentrate cells by centrifugation and then to determine total cell number using a hemocytometer and differential cell count from a Wright-Giemsa-stained cytocentrifuge preparation. However, we have noted that the percentage of small cells present in the original cell suspension recovered by lavage is greater than the percentage of lymphocytes identified on cytocentrifuge preparations. Therefore, we developed procedures for determining differential cell counts on lavage cells collected on Millipore filters and stained with hematoxylin-eosin (filter preparations) and compared the results of differential cell counts performed on filter preparations with those obtained using cytocentrifuge preparations. When cells recovered by lavage were collected on filter preparations, accurate differential cell counts were obtained, as confirmed by performing differential cell counts on cell mixtures of known composition, and by comparing differential cell counts obtained using filter preparations stained with hematoxylin-eosin with those obtained using filter preparations stained with a peroxidase cytochemical stain. The morphology of cells displayed on filter preparations was excellent, and interobserver variability in quantitating cell types recovered by lavage was less than 3%. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:6385789

  16. Accurate stereochemistry for two related 22,26-epiminocholestene derivatives

    Energy Technology Data Exchange (ETDEWEB)

    Vega-Baez, José Luis; Sandoval-Ramírez, Jesús; Meza-Reyes, Socorro; Montiel-Smith, Sara; Gómez-Calvario, Victor [Facultad de Ciencias Químicas, Benemérita Universidad Autónoma de Puebla, Ciudad Universitaria, San Manuel, 72000 Puebla, Pue. (Mexico); Bernès, Sylvain, E-mail: sylvain-bernes@hotmail.com [DEP Facultad de Ciencias Químicas, UANL, Guerrero y Progreso S/N, Col. Treviño, 64570 Monterrey, NL (Mexico); Facultad de Ciencias Químicas, Benemérita Universidad Autónoma de Puebla, Ciudad Universitaria, San Manuel, 72000 Puebla, Pue. (Mexico)

    2008-04-01

    Regioselective opening of ring E of solasodine under various conditions afforded (25R)-22,26-epimino@@cholesta-5,22(N)-di@@ene-3β,16β-diyl diacetate (previously known as 3,16-diacetyl pseudosolasodine B), C{sub 31}H{sub 47}NO{sub 4}, or (22S,25R)-16β-hydr@@oxy-22,26-epimino@@cholesta-5-en-3β-yl acetate (a derivative of the naturally occurring alkaloid oblonginine), C{sub 29}H{sub 47}NO{sub 3}. In both cases, the reactions are carried out with retention of chirality at the C16, C20 and C25 stereogenic centers, which are found to be S, S and R, respectively. Although pseudosolasodine was synthesized 50 years ago, these accurate assignments clarify some controversial points about the actual stereochemistry for these alkaloids. This is of particular importance in the case of oblonginine, since this compound is currently under consideration for the treatment of aphasia arising from apoplexy; the present study defines a diastereoisomerically pure compound for pharmacological studies.

  17. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    Science.gov (United States)

    1999-06-01

    A team of radio astronomers has used the National Science Foundation's Very Long Baseline Array (VLBA) to make the most accurate measurement ever made of the distance to a faraway galaxy. Their direct measurement calls into question the precision of distance determinations made by other techniques, including those announced last week by a team using the Hubble Space Telescope. The radio astronomers measured a distance of 23.5 million light-years to a galaxy called NGC 4258 in Ursa Major. "Ours is a direct measurement, using geometry, and is independent of all other methods of determining cosmic distances," said Jim Herrnstein, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. The team says their measurement is accurate to within less than a million light-years, or four percent. The galaxy is also known as Messier 106 and is visible with amateur telescopes. Herrnstein, along with James Moran and Lincoln Greenhill of the Harvard-Smithsonian Center for Astrophysics; Phillip Diamond, of the Merlin radio telescope facility at Jodrell Bank and the University of Manchester in England; Makoto Inoue and Naomasa Nakai of Japan's Nobeyama Radio Observatory; Makoto Miyoshi of Japan's National Astronomical Observatory; Christian Henkel of Germany's Max Planck Institute for Radio Astronomy; and Adam Riess of the University of California at Berkeley, announced their findings at the American Astronomical Society's meeting in Chicago. "This is an incredible achievement to measure the distance to another galaxy with this precision," said Miller Goss, NRAO's Director of VLA/VLBA Operations. "This is the first time such a great distance has been measured this accurately. It took painstaking work on the part of the observing team, and it took a radio telescope the size of the Earth -- the VLBA -- to make it possible," Goss said. "Astronomers have sought to determine the Hubble Constant, the rate of expansion of the universe, for decades. This will in turn lead to an

  18. Accurate measurement method for tube's endpoints based on machine vision

    Science.gov (United States)

    Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng

    2016-08-01

Tubes are used widely in aerospace vehicles, and their accurate assembly directly affects assembly reliability and product quality. It is important to measure a processed tube's endpoints and then correct any geometric errors accordingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm the feasibility, 11 tubes were processed to remove the reflected light, and the endpoint positions of the tubes were then measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed method based on machine vision can measure a tube's endpoints without any surface treatment or special tools and can realize online measurement.
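The stereo-matching step above recovers 3-D endpoint coordinates from two camera views. As a minimal illustration of the underlying geometry (not the paper's pipeline, which adds photometric linearization and its own optimization model), the sketch below triangulates points from a rectified stereo pair and measures the distance between two reconstructed endpoints; the focal length, baseline, and pixel coordinates are hypothetical values.

```python
# Illustrative sketch only: depth from disparity for a rectified stereo
# pair, then the Euclidean distance between two reconstructed endpoints.
# f (pixels), baseline (mm), and the pixel coordinates are hypothetical.
import math

def triangulate(f, baseline, xl, xr, y):
    """Pinhole model for a rectified pair: Z = f * B / (xl - xr),
    then X and Y follow by back-projection."""
    disparity = xl - xr
    Z = f * baseline / disparity
    X = xl * Z / f
    Y = y * Z / f
    return (X, Y, Z)

def endpoint_distance(p, q):
    return math.dist(p, q)

# Two hypothetical endpoint observations
p1 = triangulate(f=800.0, baseline=60.0, xl=120.0, xr=80.0, y=40.0)
p2 = triangulate(f=800.0, baseline=60.0, xl=-90.0, xr=-130.0, y=-20.0)
print(endpoint_distance(p1, p2))  # distance in mm between the endpoints
```

In the paper's setting the two endpoint positions come from the optimized stereo match rather than raw disparities, but the final "relative distance of the tube's endpoints" is this same Euclidean distance.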

  19. CLOMP: Accurately Characterizing OpenMP Application Overheads

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Gyllenhaal, J; de Supinski, B R

    2008-11-10

Despite its ease of use, OpenMP has failed to gain widespread use on large scale systems, largely due to its failure to deliver sufficient performance. Our experience indicates that the cost of initiating OpenMP regions is simply too high for the desired OpenMP usage scenario of many applications. In this paper, we introduce CLOMP, a new benchmark to characterize this aspect of OpenMP implementations accurately. CLOMP complements the existing EPCC benchmark suite to provide simple, easy-to-understand measurements of OpenMP overheads in the context of application usage scenarios. Our results for several OpenMP implementations demonstrate that CLOMP identifies the amount of work required to compensate for the overheads observed with EPCC. We also show that CLOMP captures limitations for OpenMP parallelization on SMT and NUMA systems. Finally, CLOMPI, our MPI extension of CLOMP, demonstrates which aspects of OpenMP interact poorly with MPI when MPI helper threads cannot run on the NIC.

  20. Accurate energy model for WSN node and its optimal design

    Institute of Scientific and Technical Information of China (English)

    Kan Baoqiang; Cai Li; Zhu Hongsong; Xu Yongjun

    2008-01-01

With the development of CMOS and MEMS technologies, large numbers of wireless distributed micro-sensors can now be implemented and rapidly deployed to form highly redundant, self-configuring, ad hoc sensor networks. To facilitate ease of deployment, these sensors operate on batteries for extended periods of time. A particular challenge in maintaining extended battery lifetime lies in achieving communications with low power. For better understanding of the design tradeoffs of wireless sensor networks (WSN), a more accurate energy model for a wireless sensor node is proposed, and an optimal design method for an energy-efficient wireless sensor node is described as well. Unlike previously published power models, which assume that the power cost of each component in a WSN node is constant, the new model takes into account the energy dissipation of circuits in a practical physical layer. It shows that several parameters, such as data rate, carrier frequency, bandwidth, and Tsw, have a significant effect on the WSN node's energy consumption per useful bit (EPUB). For a given quality specification, it is shown how energy consumption can be reduced by adjusting one or more of these parameters.
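To make the EPUB metric concrete, the sketch below computes energy per useful bit for a simple start-up-plus-transmit radio model. All constants and the model itself are illustrative assumptions, not the paper's circuit-level model, which accounts for many more physical-layer effects.

```python
# Hedged sketch of an energy-per-useful-bit (EPUB) calculation for a WSN
# radio: total packet energy (start-up + transmit + circuit) divided by
# payload bits. All numbers below are hypothetical.

def energy_per_useful_bit(payload_bits, overhead_bits, data_rate_bps,
                          p_tx_w, p_circuit_w, t_startup_s, p_startup_w):
    """EPUB in joules/bit: energy of one packet over payload bits only."""
    t_on = (payload_bits + overhead_bits) / data_rate_bps
    e_packet = p_startup_w * t_startup_s + (p_tx_w + p_circuit_w) * t_on
    return e_packet / payload_bits

# Example: 1 kbit payload, 128-bit overhead, 250 kbps link,
# hypothetical power figures for the transmitter and circuitry
epub = energy_per_useful_bit(payload_bits=1024, overhead_bits=128,
                             data_rate_bps=250e3, p_tx_w=0.050,
                             p_circuit_w=0.030, t_startup_s=0.5e-3,
                             p_startup_w=0.040)
print(f"{epub * 1e9:.1f} nJ/bit")
```

Raising the data rate shrinks the on-time term but, in a realistic model like the paper's, also changes circuit power, which is why the optimum is a trade-off rather than "as fast as possible".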

  1. The Global Geodetic Infrastructure for Accurate Monitoring of Earth Systems

    Science.gov (United States)

    Weston, Neil; Blackwell, Juliana; Wang, Yan; Willis, Zdenka

    2014-05-01

    The National Geodetic Survey (NGS) and the Integrated Ocean Observing System (IOOS), two Program Offices within the National Ocean Service, NOAA, routinely collect, analyze and disseminate observations and products from several of the 17 critical systems identified by the U.S. Group on Earth Observations. Gravity, sea level monitoring, coastal zone and ecosystem management, geo-hazards and deformation monitoring and ocean surface vector winds are the primary Earth systems that have active research and operational programs in NGS and IOOS. These Earth systems collect terrestrial data but most rely heavily on satellite-based sensors for analyzing impacts and monitoring global change. One fundamental component necessary for monitoring via satellites is having a stable, global geodetic infrastructure where an accurate reference frame is essential for consistent data collection and geo-referencing. This contribution will focus primarily on system monitoring, coastal zone management and global reference frames and how the scientific contributions from NGS and IOOS continue to advance our understanding of the Earth and the Global Geodetic Observing System.

  2. How accurate are our assumptions about our students' background knowledge?

    Science.gov (United States)

    Rovick, A A; Michael, J A; Modell, H I; Bruce, D S; Horwitz, B; Adamson, T; Richardson, D R; Silverthorn, D U; Whitescarver, S A

    1999-06-01

Teachers establish prerequisites that students must meet before they are permitted to enter their courses. It is expected that having these prerequisites will provide students with the knowledge and skills they will need to successfully learn the course content. Likewise, material that students are expected to have previously learned need not be included in a course. We wanted to determine how accurate instructors' understanding of their students' background knowledge actually was. To do this, we wrote a set of multiple-choice questions that could be used to test students' knowledge of concepts deemed essential for learning respiratory physiology. Instructors then selected 10 of these questions to be used as a prerequisite knowledge test. The instructors also predicted the performance they expected from the students on each of the questions they had selected. The resulting tests were administered in the first week of each of seven courses. The results of this study demonstrate that instructors are poor judges of what beginning students know. Instructors tended to both underestimate and overestimate students' knowledge by large margins on individual questions. Although on average they tended to underestimate students' factual knowledge, they overestimated the students' abilities to apply this knowledge. Hence, the validity of decisions that instructors make, predicated on their students having the prerequisite knowledge they expect, is open to question.

  3. Downhole temperature tool accurately measures well bore profile

    International Nuclear Information System (INIS)

This paper reports that an inexpensive temperature tool provides accurate temperature measurements during drilling operations for better design of cement jobs, workovers, well stimulation, and well bore hydraulics. Valid temperature data during specific well bore operations can improve initial job design, fluid testing, and slurry placement, ultimately enhancing well bore performance. This improvement applies to cement slurries, breaker activation for stimulation and profile control, and fluid rheological properties for all downhole operations. The temperature tool has been run standalone mounted inside drill pipe, on slick wire line and braided cable, and as a free-fall tool. It has also been run piggyback on both directional surveys (slick line and free-fall) and standard logging runs. This temperature measuring system has been used extensively in field well bores to depths of 20,000 ft. The temperature tool is completely reusable in the field, much like the standard directional survey tools used on many drilling rigs. The system includes a small, rugged, programmable temperature sensor, a standard body housing, various adapters for specific applications, and a personal computer (PC) interface.

  4. An accurately fast algorithm of calculating reflection/transmission coefficients

    Institute of Scientific and Technical Information of China (English)

Castagna, J. P.

    2008-01-01

    For the boundary between transversely isotropic media with a vertical axis of symmetry (VTI media), the interface between a liquid and a VTI medium, and the free-surface of an elastic half-space of a VTI medium, an accurately fast algorithm was presented for calculating reflection/transmission (R/T) coefficients. Specially, the case of post-critical angle incidence was considered. Although we only performed the numerical calculation for the models of the VTI media, the calculated results can be extended to the models of transversely isotropic media with a horizontal axis of rotation symmetry (HTI media). Compared to previous work, this algorithm can be used not only for the calculation of R/T coefficients of the boundary between ellipsoidally anisotropic media, but also for that between generally anisotropic media, and the speed and accuracy of this algorithm are faster and higher. According to the anisotropic parameters of some rocks given by the published literature, we performed the calculation of R/T coefficients by using this algorithm and analyzed the effect of the rock anisotropy on R/T coefficients. We used Snell’s law and the energy balance principle to perform verification for the calculated results.
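For intuition only, the simplest special case of a reflection/transmission calculation is normal incidence at a welded interface between two isotropic media, where the coefficients depend only on the acoustic impedances. The sketch below shows that case with hypothetical rock properties; the paper's algorithm handles general VTI/HTI anisotropy, oblique incidence, and post-critical angles, none of which this toy case covers.

```python
# Intuition-building special case only: normal-incidence P-wave
# reflection/transmission coefficients (pressure-amplitude convention)
# at an interface between two isotropic media. Rock values hypothetical.

def normal_incidence_rt(rho1, vp1, rho2, vp2):
    """R and T from acoustic impedances Z = rho * vp:
    R = (Z2 - Z1)/(Z2 + Z1), T = 2*Z2/(Z1 + Z2) = 1 + R."""
    z1, z2 = rho1 * vp1, rho2 * vp2
    r = (z2 - z1) / (z2 + z1)
    t = 2.0 * z2 / (z1 + z2)
    return r, t

# Shale over a faster, denser sand (hypothetical values, SI units)
r, t = normal_incidence_rt(rho1=2400.0, vp1=3000.0, rho2=2650.0, vp2=4000.0)

# Energy balance check, the same principle the paper uses for
# verification: R^2 + (Z1/Z2) * T^2 = 1 at normal incidence.
```

The energy-balance identity here is the degenerate version of the check the authors perform with Snell's law and the energy balance principle for the full anisotropic coefficients.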

  5. AN ACCURATE FLUX DENSITY SCALE FROM 1 TO 50 GHz

    Energy Technology Data Exchange (ETDEWEB)

    Perley, R. A.; Butler, B. J., E-mail: RPerley@nrao.edu, E-mail: BButler@nrao.edu [National Radio Astronomy Observatory, P.O. Box O, Socorro, NM 87801 (United States)

    2013-02-15

We develop an absolute flux density scale for centimeter-wavelength astronomy by combining accurate flux density ratios determined by the Very Large Array between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by the Wilkinson Microwave Anisotropy Probe. The radio sources 3C123, 3C196, 3C286, and 3C295 are found to be varying at a level of less than ~5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1%-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC 7027, NGC 6542, and MWC 349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, 3C138, and 3C147 for each of the 17 observation dates, spanning 1983-2012. The planets Venus, Uranus, and Neptune are included in our observations, and we derive their brightness temperatures over the same frequency range.
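Polynomial flux density scales of this kind are conventionally expressed as a polynomial in log-frequency. The sketch below evaluates such an expression; the coefficients used are hypothetical placeholders for illustration, not the published calibrator coefficients, which must be taken from the paper itself.

```python
# Sketch of evaluating a polynomial flux density scale of the form
# log10(S/Jy) = a0 + a1*x + a2*x^2 + ...  with x = log10(nu/GHz).
# The coefficients below are hypothetical, NOT published values.
import math

def flux_density_jy(freq_ghz, coeffs):
    """Flux density in Jy at freq_ghz from polynomial coefficients
    [a0, a1, a2, ...] applied to x = log10(freq_ghz)."""
    x = math.log10(freq_ghz)
    log_s = sum(a * x**k for k, a in enumerate(coeffs))
    return 10.0 ** log_s

# Hypothetical calibrator: 10 Jy at 1 GHz, pure power law with
# spectral index -0.5 (i.e. only a0 and a1 are nonzero)
print(flux_density_jy(1.0, [1.0, -0.5]))   # 10.0 Jy at 1 GHz
print(flux_density_jy(10.0, [1.0, -0.5]))  # weaker at 10 GHz
```

Higher-order terms (a2, a3, ...) capture the spectral curvature that makes real calibrators deviate from a pure power law across 1-50 GHz.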

  6. An Accurate Flux Density Scale from 1 to 50 GHz

    CERN Document Server

    Perley, Rick A

    2012-01-01

    We develop an absolute flux density scale for cm-wavelength astronomy by combining accurate flux density ratios determined by the VLA between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by WMAP. The radio sources 3C123, 3C196, 3C286 and 3C295 are found to be varying at a level of less than ~5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC7027, NGC6542, and MWC349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, ...

  7. Accurate measurement of liquid transport through nanoscale conduits

    Science.gov (United States)

    Alibakhshi, Mohammad Amin; Xie, Quan; Li, Yinxiao; Duan, Chuanhua

    2016-01-01

    Nanoscale liquid transport governs the behaviour of a wide range of nanofluidic systems, yet remains poorly characterized and understood due to the enormous hydraulic resistance associated with the nanoconfinement and the resulting minuscule flow rates in such systems. To overcome this problem, here we present a new measurement technique based on capillary flow and a novel hybrid nanochannel design and use it to measure water transport through single 2-D hydrophilic silica nanochannels with heights down to 7 nm. Our results show that silica nanochannels exhibit increased mass flow resistance compared to the classical hydrodynamics prediction. This difference increases with decreasing channel height and reaches 45% in the case of 7 nm nanochannels. This resistance increase is attributed to the formation of a 7-angstrom-thick stagnant hydration layer on the hydrophilic surfaces. By avoiding use of any pressure and flow sensors or any theoretical estimations the hybrid nanochannel scheme enables facile and precise flow measurement through single nanochannels, nanotubes, or nanoporous media and opens the prospect for accurate characterization of both hydrophilic and hydrophobic nanofluidic systems. PMID:27112404

  8. Accurate and efficient waveforms for compact binaries on eccentric orbits

    CERN Document Server

    Huerta, E A; McWilliams, Sean T; O'Shaughnessy, Richard; Yunes, Nicolas

    2014-01-01

Compact binaries that emit gravitational waves in the sensitivity band of ground-based detectors can have non-negligible eccentricities just prior to merger, depending on the formation scenario. We develop a purely analytic, frequency-domain model for gravitational waves emitted by compact binaries on orbits with small eccentricity, which reduces to the quasi-circular post-Newtonian approximant TaylorF2 at zero eccentricity and to the post-circular approximation of Yunes et al. (2009) at small eccentricity. Our model uses a spectral approximation to the (post-Newtonian) Kepler problem to model the orbital phase as a function of frequency, accounting for eccentricity effects up to ${\cal O}(e^8)$ at each post-Newtonian order. Our approach accurately reproduces an alternative time-domain eccentric waveform model for eccentricities $e \in [0, 0.4]$ and binaries with total mass less than 12 solar masses. As an application, we evaluate the signal amplitude that eccentric binaries produce in different networks of e...

  9. Accurate reading with sequential presentation of single letters

    Directory of Open Access Journals (Sweden)

    Nicholas Seow Chiang Price

    2012-10-01

Rapid, accurate reading is possible when isolated, single words from a sentence are sequentially presented at a fixed spatial location. We investigated whether reading of words and sentences is possible when single letters are rapidly presented at the fovea under user-controlled or automatically controlled rates. When tested with complete sentences, trained participants achieved reading rates of over 60 words/minute and accuracies of over 90% with the single-letter reading (SLR) method, and naive participants achieved average reading rates over 30 wpm with >90% accuracy. Accuracy declined as individual letters were presented for shorter periods of time, even when the overall reading rate was maintained by increasing the duration of spaces between words. Words that occur more frequently in the lexicon were identified with higher accuracy and more quickly, demonstrating that trained participants have lexical access. In combination, our data strongly suggest that comprehension is possible and that SLR is a practicable form of reading under conditions in which normal scanning of text is not possible, or for scenarios with limited spatial and temporal resolution such as patients with low vision or prostheses.

  10. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER-positive patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER-negative patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic
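The outcome probabilities quoted above come from Kaplan-Meier survival analysis. As a reminder of what that estimator does, here is a minimal pure-Python sketch on invented toy data; real analyses of cohorts like these would use a statistics package (e.g. lifelines in Python or R's survival package) rather than this hand-rolled version.

```python
# Minimal Kaplan-Meier estimator sketch on invented toy data (no ties).
# events[i] = 1 means the outcome occurred at times[i]; 0 means the
# subject was censored (left the study outcome-free) at that time.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each event time: the survival curve drops
    by the factor (n_at_risk - 1)/n_at_risk at every observed event."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    s = 1.0
    curve = []
    for i in order:
        if events[i]:
            s *= (at_risk - 1) / at_risk
            curve.append((times[i], s))
        at_risk -= 1
    return curve

# Toy cohort: follow-up times in years, alternating events and censorings
curve = kaplan_meier([1, 2, 3, 4, 5, 6], [1, 0, 1, 0, 1, 0])
print(curve)
```

Censored subjects still contribute to the at-risk count up to their censoring time, which is what distinguishes Kaplan-Meier from a naive fraction-surviving calculation.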

  11. Gene expression

    International Nuclear Information System (INIS)

    We prepared probes for isolating functional pieces of the metallothionein locus. The probes enabled a variety of experiments, eventually revealing two mechanisms for metallothionein gene expression, the order of the DNA coding units at the locus, and the location of the gene site in its chromosome. Once the switch regulating metallothionein synthesis was located, it could be joined by recombinant DNA methods to other, unrelated genes, then reintroduced into cells by gene-transfer techniques. The expression of these recombinant genes could then be induced by exposing the cells to Zn2+ or Cd2+. We would thus take advantage of the clearly defined switching properties of the metallothionein gene to manipulate the expression of other, perhaps normally constitutive, genes. Already, despite an incomplete understanding of how the regulatory switch of the metallothionein locus operates, such experiments have been performed successfully

  12. On the relation between gene flow theory and genetic gain

    Directory of Open Access Journals (Sweden)

    Woolliams John A

    2000-01-01

In conventional gene flow theory the rate of genetic gain is calculated as the summed products of genetic selection differential and asymptotic proportion of genes deriving from sex-age groups. Recent studies have shown that asymptotic proportions of genes predicted from conventional gene flow theory may deviate considerably from true proportions. However, the rate of genetic gain predicted from conventional gene flow theory was accurate. The current note shows that the connection between asymptotic proportions of genes and rate of genetic gain that is embodied in conventional gene flow theory is invalid, even though genetic gain may be predicted correctly from it.

  13. Accurate mobile malware detection and classification in the cloud.

    Science.gov (United States)

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

As the dominant player in the smartphone operating system market, Android has attracted the attention of malware authors and researchers alike. The number of types of Android malware is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine performing abnormal-app detection through dynamic analysis, and a signature detection engine performing known-malware detection and classification with a combination of static and dynamic analysis. We evaluate our system using 5560 malware samples and 6000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false negative rate (1.16%) and an acceptable false positive rate (1.30%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average positive rate of 98.94%. Considering the intensive computing resources required by static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the cloud. App store markets and ordinary users can access our detection system for malware detection through a cloud service. PMID:26543718
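The rates quoted above are standard confusion-matrix statistics. The sketch below shows the arithmetic with an illustrative split of the 5560 malware / 6000 benign samples chosen to land near the reported 1.16% and 1.30% figures; the actual per-sample counts are not given in the abstract, so these numbers are hypothetical.

```python
# Sketch of confusion-matrix rates for a malware detector. The counts
# below are a hypothetical split, not the paper's raw numbers.

def detection_rates(tp, fn, fp, tn):
    """False negative rate, false positive rate, and detection rate
    (recall) from true/false positive/negative counts."""
    fnr = fn / (tp + fn)        # malware the engine missed
    fpr = fp / (fp + tn)        # benign apps flagged as malware
    detection = tp / (tp + fn)  # fraction of malware caught
    return fnr, fpr, detection

# Hypothetical split of 5560 malware and 6000 benign samples,
# chosen to approximate the reported 1.16% FNR and 1.30% FPR
fnr, fpr, det = detection_rates(tp=5496, fn=64, fp=78, tn=5922)
print(f"FNR={fnr:.2%}  FPR={fpr:.2%}  detection={det:.2%}")
```

Keeping the false positive rate low matters most for app-store deployment, since benign apps vastly outnumber malware in practice.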

  14. Accurate deterministic solutions for the classic Boltzmann shock profile

    Science.gov (United States)

    Yue, Yubei

The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far from the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex, high-dimensional integral in the collision operator and the hyperbolic phase-space advection terms. For this reason, until a few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.

  15. How accurate are the weather forecasts for Bierun (southern Poland)?

    Science.gov (United States)

    Gawor, J.

    2012-04-01

Weather forecast accuracy has increased in recent times mainly thanks to significant development of numerical weather prediction models. Despite the improvements, the forecasts should be verified to control their quality. The evaluation of forecast accuracy can also be an interesting learning activity for students. It joins natural curiosity about everyday weather with scientific process skills: problem solving, database technologies, graph construction and graphical analysis. The examination of the weather forecasts has been undertaken by a group of 14-year-old students from Bierun (southern Poland). They participate in the GLOBE program to develop inquiry-based investigations of the local environment. An automatic weather station is used for the atmospheric research. The observed data were compared with corresponding forecasts produced by two numerical weather prediction models, i.e. COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System), developed by the Naval Research Laboratory, Monterey, USA, which runs operationally at the Interdisciplinary Centre for Mathematical and Computational Modelling in Warsaw, Poland, and COSMO (the Consortium for Small-scale Modelling model), used by the Polish Institute of Meteorology and Water Management. The analysed data included air temperature, precipitation, wind speed, wind chill and sea level pressure. The prediction periods from 0 to 24 hours (Day 1) and from 24 to 48 hours (Day 2) were considered. The verification statistics that are commonly used in meteorology have been applied: mean error, also known as bias, for continuous data and a 2x2 contingency table to get the hit rate and false alarm ratio for a few precipitation thresholds. The results of the aforementioned activity became an interesting basis for discussion. The most important topics are: 1) to what extent can we rely on the weather forecasts? 2) How accurate are the forecasts for two considered time ranges? 3) Which precipitation threshold is the most predictable? 4) Why
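The verification statistics named in the abstract are straightforward to compute. The sketch below shows mean error (bias) for a continuous variable and hit rate / false alarm ratio from a 2x2 precipitation contingency table; the sample forecast-observation pairs and table counts are invented for illustration.

```python
# Sketch of common forecast verification statistics. Sample data invented.

def mean_error(forecasts, observations):
    """Bias: average of (forecast - observed); positive = over-forecast."""
    return sum(f - o for f, o in zip(forecasts, observations)) / len(forecasts)

def hit_rate_far(hits, misses, false_alarms, correct_negatives):
    """From a 2x2 contingency table for a precipitation threshold:
    hit rate = hits / (hits + misses);
    false alarm ratio = false alarms / (hits + false alarms)."""
    hr = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return hr, far

# Invented daily temperature forecasts vs. station observations (deg C)
bias = mean_error([2.1, 0.4, -3.0, 5.5], [1.9, 1.0, -2.5, 5.1])

# Invented contingency table for a "precipitation >= 1 mm" threshold
hr, far = hit_rate_far(hits=42, misses=8, false_alarms=14,
                       correct_negatives=300)
print(bias, hr, far)
```

Note the distinction between the false alarm *ratio* used here (false alarms among predicted events) and the false alarm *rate* (false alarms among observed non-events); verification texts treat them as different scores.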

  16. Accurate source location from P waves scattered by surface topography

    Science.gov (United States)

    Wang, N.; Shen, Y.

    2015-12-01

Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (> 100 m). In this study, we explore the use of P-coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example. The grid search method is combined with the 3D strain Green's tensor database type method to improve the search efficiency as well as the quality of the hypocenter solution. The strain Green's tensor is calculated by the 3D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are then obtained based on the least-square misfit between the 'observed' and predicted P and P-coda waves. A 95% confidence interval of the solution is also provided as a posterior error estimation. We find that the scattered waves are mainly due to topography in comparison with random velocity heterogeneity characterized by the von Kármán-type power spectral density function. When only P wave data is used, the 'best' solution is offset from the real source location mostly in the vertical direction. The incorporation of P coda significantly improves solution accuracy and reduces its uncertainty. The solution remains robust with a range of random noises in data, un-modeled random velocity heterogeneities, and uncertainties in moment tensors that we tested.
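The grid search idea, pick the trial location whose predictions best fit the observations in a least-squares sense, can be sketched with a drastically simplified toy. Here a uniform half-space velocity and arrival times stand in for the paper's 3-D strain Green's tensor simulations and full P / P-coda waveforms; all geometry and values are illustrative.

```python
# Toy grid search source location: minimize least-squares misfit between
# observed and predicted arrival times. A uniform velocity replaces the
# paper's 3-D Green's tensor waveform modeling; everything is illustrative.
import itertools
import math

def predict_times(src, stations, v=5.0):
    """Straight-ray travel times (km / (km/s)) from src to each station."""
    return [math.dist(src, s) / v for s in stations]

def locate(observed, stations, grid):
    """Return the grid node with the smallest sum-of-squares misfit."""
    def misfit(src):
        return sum((p - o) ** 2
                   for p, o in zip(predict_times(src, stations), observed))
    return min(grid, key=misfit)

stations = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0),
            (0.0, 10.0, 0.0), (10.0, 10.0, 0.0)]
true_src = (4.0, 6.0, 2.0)
observed = predict_times(true_src, stations)

# 1-km search grid over an 10 x 10 x 4 km volume
grid = list(itertools.product(range(11), range(11), range(5)))
print(locate(observed, stations, grid))  # (4, 6, 2)
```

In the real method the "misfit" is between waveforms, not picked times, and the Green's tensor database makes evaluating each trial node cheap via source-receiver reciprocity.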

  17. Artificial neural network accurately predicts hepatitis B surface antigen seroclearance.

    Directory of Open Access Journals (Sweden)

    Ming-Hua Zheng

BACKGROUND & AIMS: Hepatitis B surface antigen (HBsAg) seroclearance and seroconversion are regarded as favorable outcomes of chronic hepatitis B (CHB). This study aimed to develop artificial neural networks (ANNs) that could accurately predict HBsAg seroclearance or seroconversion on the basis of available serum variables. METHODS: Data from 203 untreated, HBeAg-negative CHB patients with spontaneous HBsAg seroclearance (63 with HBsAg seroconversion), and 203 age- and sex-matched HBeAg-negative controls were analyzed. ANNs and logistic regression models (LRMs) were built and tested according to HBsAg seroclearance and seroconversion. Predictive accuracy was assessed with area under the receiver operating characteristic curve (AUROC). RESULTS: Serum quantitative HBsAg (qHBsAg) and HBV DNA levels, and qHBsAg and HBV DNA reduction, were related to HBsAg seroclearance (P<0.001) and were used for ANN/LRM-HBsAg seroclearance building, whereas qHBsAg reduction was not associated with ANN-HBsAg seroconversion (P = 0.197) and LRM-HBsAg seroconversion was solely based on qHBsAg (P = 0.01). For HBsAg seroclearance, AUROCs of ANN were 0.96, 0.93 and 0.95 for the training, testing and genotype B subgroups respectively. They were significantly higher than those of LRM, qHBsAg and HBV DNA (all P<0.05). Although the performance of ANN-HBsAg seroconversion (AUROC 0.757) was inferior to that for HBsAg seroclearance, it tended to be better than those of LRM, qHBsAg and HBV DNA. CONCLUSIONS: ANN identifies spontaneous HBsAg seroclearance in HBeAg-negative CHB patients with better accuracy, on the basis of easily available serum data. More useful predictors for HBsAg seroconversion still need to be explored in the future.
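The AUROC values used above to compare the ANN and logistic regression models have a simple rank-based definition: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. The sketch below computes it directly; the scores and labels are invented (real studies would use a library routine such as scikit-learn's roc_auc_score).

```python
# Rank-based (Mann-Whitney) AUROC sketch. Scores and labels invented.

def auroc(scores, labels):
    """Probability a random positive outranks a random negative;
    tied scores count one half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Six invented model scores with their true outcomes (1 = seroclearance)
print(auroc([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0, 0]))
```

An AUROC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why the paper reads the ANN's 0.93-0.96 as high discrimination and 0.757 as clearly weaker.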

  18. Accurate Classification of RNA Structures Using Topological Fingerprints

    Science.gov (United States)

    Li, Kejie; Gribskov, Michael

    2016-01-01

    While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. While the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity–an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach both covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571

  19. Copeptin does not accurately predict disease severity in imported malaria

    Directory of Open Access Journals (Sweden)

    van Wolfswinkel Marlies E

    2012-01-01

    Background Copeptin has recently been identified as a stable surrogate marker for the unstable hormone arginine vasopressin (AVP). Copeptin has been shown to correlate with disease severity in leptospirosis and bacterial sepsis. Hyponatraemia is common in severe imported malaria, and dysregulation of AVP release has been hypothesized as an underlying pathophysiological mechanism. The aim of the present study was to evaluate the performance of copeptin as a predictor of disease severity in imported malaria. Methods Copeptin was measured in stored serum samples of 204 patients with imported malaria who were admitted to our Institute for Tropical Diseases in Rotterdam in the period 1999-2010. The occurrence of WHO-defined severe malaria was the primary end-point. The diagnostic performance of copeptin was compared to that of the previously evaluated biomarkers C-reactive protein, procalcitonin, lactate and sodium. Results Of the 204 patients (141 Plasmodium falciparum, 63 non-falciparum infection), 25 had severe malaria. The area under the ROC curve of copeptin for severe disease (0.66 [95% confidence interval 0.59-0.72]) was comparable to that of lactate, sodium and procalcitonin. C-reactive protein (0.84 [95% CI 0.79-0.89]) had a significantly better performance as a biomarker for severe malaria than the other biomarkers. Conclusions C-reactive protein, but not copeptin, was found to be an accurate predictor of disease severity in imported malaria. The applicability of copeptin as a marker for severe malaria in clinical practice is limited to the exclusion of severe malaria.

  20. Accurate source location from waves scattered by surface topography

    Science.gov (United States)

    Wang, Nian; Shen, Yang; Flinders, Ashton; Zhang, Wei

    2016-06-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example to provide realistic topography. A grid search algorithm is combined with the 3-D strain Green's tensor database to improve search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated using a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are obtained based on the least squares misfit between the "observed" and predicted P and P coda waves. The 95% confidence interval of the solution is provided as an a posteriori error estimation. For shallow events tested in the study, scattering is mainly due to topography in comparison with stochastic lateral velocity heterogeneity. The incorporation of P coda significantly improves solution accuracy and reduces solution uncertainty. The solution remains robust with wide ranges of random noises in data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors. The method can be extended to locate pairs of sources in close proximity by differential waveforms using source-receiver reciprocity, further reducing errors caused by unmodeled velocity structures.
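
The grid search over candidate hypocenters minimizes a least-squares misfit between observed and predicted signals. A toy stand-in that replaces the strain-Green's-tensor waveform misfit with straight-ray travel times at an assumed uniform velocity (stations, grid, and velocity all invented):

```python
import math

def locate(candidates, stations, t_obs, v=3.0):
    """Grid search: return the candidate (x, y) source whose predicted
    travel times best match the observations in the least-squares sense.
    The forward model here is dist/v, a toy stand-in for the paper's
    3-D strain-Green's-tensor waveform predictions."""
    def misfit(src):
        t_pred = [math.dist(src, s) / v for s in stations]
        return sum((to - tp) ** 2 for to, tp in zip(t_obs, t_pred))
    return min(candidates, key=misfit)

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_src = (4.0, 6.0)
t_obs = [math.dist(true_src, s) / 3.0 for s in stations]  # noise-free data
grid = [(x, y) for x in range(11) for y in range(11)]     # 1-unit search grid
print(locate(grid, stations, t_obs))  # (4, 6)
```

In the paper the same search is done over 3-D volumes against full P and P-coda waveforms, with a 95% confidence region reported around the minimum.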

  1. Accurate mobile malware detection and classification in the cloud.

    Science.gov (United States)

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

    As the dominant Smartphone operating system, Android has consequently attracted the attention of malware authors and researchers alike. The number of types of Android malware is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine performing abnormal-app detection through dynamic analysis, and a signature detection engine performing known-malware detection and classification with a combination of static and dynamic analysis. We evaluate our system using 5560 malware samples and 6000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false negative rate (1.16%) and an acceptable false positive rate (1.30%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average positive rate of 98.94%. Considering the intensive computing resources required by the static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the Cloud. The app store markets and ordinary users can access our detection system for malware detection through a cloud service.
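
The two-engine decision logic can be sketched as follows. The signature and feature names are invented for illustration and are not CuckooDroid's actual output; only the ordering (signatures first, anomaly scoring as a zero-day fallback) reflects the abstract:

```python
KNOWN_SIGNATURES = {
    # hypothetical behavior-signature -> malware-family mapping
    frozenset({"send_sms", "read_contacts"}): "SmsThief",
    frozenset({"root_exploit"}): "RootKit",
}

def hybrid_detect(behaviors, benign_profile, anomaly_threshold=2):
    """Signature engine first (low false positives); if nothing matches,
    an anomaly score against a benign behavior profile flags possible
    zero-day malware. `behaviors` is a set of observed features."""
    for sig, family in KNOWN_SIGNATURES.items():
        if sig <= behaviors:                      # all signature features seen
            return ("malware", family)
    anomaly_score = len(behaviors - benign_profile)
    if anomaly_score >= anomaly_threshold:
        return ("suspicious", "possible zero-day")
    return ("benign", None)

benign = {"network_http", "read_prefs", "ui_render"}
print(hybrid_detect({"send_sms", "read_contacts", "network_http"}, benign))
# ('malware', 'SmsThief')
print(hybrid_detect({"network_http", "ui_render"}, benign))
# ('benign', None)
```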

  2. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
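
Each in vitro test is judged by regressing in vivo bioavailability on its bioaccessibility values, i.e. by an ordinary least-squares slope and coefficient of determination. A minimal sketch with invented percentages (not the study's measurements):

```python
def linreg(x, y):
    """Ordinary least squares fit y = a + b*x; returns (slope, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return b, 1 - ss_res / ss_tot

x = [20, 35, 50, 60, 75]   # hypothetical in vitro bioaccessibility, %
y = [18, 30, 48, 55, 70]   # hypothetical in vivo relative bioavailability, %
slope, r2 = linreg(x, y)
print(slope, r2)  # positive slope, r2 close to 1 for a well-performing test
```

A test like RBALP pH 2.5 or OSU IVG would be "performing very well" in exactly this sense: positive slope and high r².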

  3. TOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPING

    Directory of Open Access Journals (Sweden)

    Khadoudja Ghanem

    2013-03-01

    An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increases. For this reason, it is important to have efficient computational methods and algorithms that can be applied on large datasets, such that it is still possible to complete the machine learning tasks in reasonable time. In this context, we present in this paper a more accurate simple process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop an efficient Hidden Markov Model (HMM) training. The idea of the proposed process consists of two steps. In the first step, training instances with similar inputs are clustered and a weight factor which represents the frequency of these instances is assigned to each representative cluster. The Dynamic Time Warping technique is used as a dissimilarity function to cluster similar examples. In the second step, all formulas in the classical HMM training algorithm (EM) associated with the number of training instances are modified to include the weight factor in appropriate terms. This process significantly accelerates HMM training while maintaining the same initial, transition and emission probability matrices as those obtained with the classical HMM training algorithm. Accordingly, the classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2200 times are possible when the size is about 100,000 instances. The proposed approach is not limited to training HMMs, but can be employed for a large variety of ML methods.
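
The dissimilarity function at the heart of the clustering step, dynamic time warping, can be sketched with the standard dynamic program:

```python
def dtw(a, b):
    """Dynamic time warping distance between two numeric sequences: the
    minimum cumulative |a_i - b_j| cost over all monotonic alignments.
    Used here as the dissimilarity for grouping similar training instances."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0 -- warping absorbs the repeat
```

Because DTW tolerates local stretching, instances that differ only in timing fall into the same cluster and share one weight factor.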

  5. Accurate Sound Velocity Measurement in Ocean Near-Surface Layer

    Science.gov (United States)

    Lizarralde, D.; Xu, B. L.

    2015-12-01

    Accurate sound velocity measurement is essential in oceanography because sound is the only wave that can propagate in sea water. Because it is difficult to measure directly, sound velocity is often not measured but instead calculated from water temperature, salinity, and depth, which are much easier to obtain. This research develops a new method to directly measure the sound velocity in the ocean's near-surface layer using multi-channel seismic (MCS) hydrophones. This system consists of a device that emits a sound pulse and a long cable with hundreds of hydrophones to record it. The distance between the source and each receiver is the offset; the time it takes the pulse to arrive at each receiver is the travel time. Errors in measuring offset and travel time would affect the accuracy of the sound velocity if it were calculated from just one offset and one travel time. However, by analyzing the direct arrival signal from hundreds of receivers, the velocity can be determined from the slope of a straight line in the travel time-offset graph. Errors in distance and time measurement result only in an up or down shift of the line and do not affect the slope. This research uses MCS data of survey MGL1408, obtained from the Marine Geoscience Data System and processed with Seismic Unix. The sound velocity can be directly measured to an accuracy of better than 1 m/s. The included graph shows the directly measured velocity versus the calculated velocity along 100 km across the Mid-Atlantic continental margin. The directly measured velocity shows good coherence with the velocity computed from temperature and salinity. In addition, fine variations in the sound velocity can be observed, which are hardly visible in the calculated velocity. Using this methodology, both large-area acquisition and fine resolution can be achieved. This directly measured sound velocity will be a new and powerful tool in oceanography.
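
The slope-based estimate can be sketched as a least-squares fit of travel time against offset, with velocity recovered as the reciprocal of the slope. Note how a constant clock bias (simulated below) shifts the line without changing the slope; the offsets and velocity are invented:

```python
def sound_velocity(offsets, times):
    """Velocity from the slope of the travel-time vs. offset line
    (v = 1/slope). A constant timing or offset error moves the intercept
    but leaves the slope, and hence the velocity, unchanged."""
    n = len(offsets)
    mo, mt = sum(offsets) / n, sum(times) / n
    slope = (sum((o - mo) * (t - mt) for o, t in zip(offsets, times))
             / sum((o - mo) ** 2 for o in offsets))
    return 1.0 / slope

offsets = [100.0 + 12.5 * k for k in range(200)]    # hydrophone offsets, m
times = [off / 1500.0 + 0.05 for off in offsets]    # 0.05 s clock bias added
print(sound_velocity(offsets, times))  # ~1500.0 m/s despite the bias
```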

  6. A fast and accurate algorithm for diploid individual haplotype reconstruction.

    Science.gov (United States)

    Wu, Jingli; Liang, Binbin

    2013-08-01

    Haplotypes can provide significant information in many research fields, including molecular biology and medical therapy. However, haplotyping is much more difficult than genotyping by using only biological techniques. With the development of sequencing technologies, it becomes possible to obtain haplotypes by combining sequence fragments. The haplotype reconstruction problem of a diploid individual has received considerable attention in recent years. It assembles the two haplotypes for a chromosome given the collection of fragments coming from the two haplotypes. Fragment errors significantly increase the difficulty of the problem, which has been shown to be NP-hard. In this paper, a fast and accurate algorithm, named FAHR, is proposed for haplotyping a single diploid individual. Algorithm FAHR reconstructs the SNP sites of a pair of haplotypes one after another. The SNP fragments that cover some SNP site are partitioned into two groups according to the alleles of the corresponding SNP site, and the SNP values of the pair of haplotypes are ascertained by using the fragments in the group that contains more SNP fragments. The experimental comparisons were conducted among the FAHR, the Fast Hare and the DGS algorithms by using the haplotypes on chromosome 1 of 60 individuals in CEPH samples, which were released by the International HapMap Project. Experimental results under different parameter settings indicate that the reconstruction rate of the FAHR algorithm is higher than those of the Fast Hare and the DGS algorithms, and the running time of the FAHR algorithm is shorter than those of the Fast Hare and the DGS algorithms. Moreover, the FAHR algorithm has high efficiency even for the reconstruction of long haplotypes and is very practical for realistic applications.
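
The per-site rule described in the abstract can be sketched as a majority vote over the fragments covering each SNP site, with the larger allele group fixing one haplotype and its complement fixing the other. This simplification ignores FAHR's fragment-grouping details, and the fragments are invented:

```python
def reconstruct(fragments):
    """Toy per-site majority rule: at each SNP site, fragments covering it
    ('-' marks an uncovered site) are split by allele; the larger group's
    allele goes to haplotype h1 and its complement to h2."""
    n = len(fragments[0])
    h1, h2 = [], []
    for j in range(n):
        column = [f[j] for f in fragments if f[j] != "-"]
        a = "1" if column.count("1") >= column.count("0") else "0"
        h1.append(a)
        h2.append("0" if a == "1" else "1")
    return "".join(h1), "".join(h2)

# Invented fragments: three (partial) reads of one haplotype, one of the other.
frags = ["01100", "011-0", "0-100", "10011"]
print(reconstruct(frags))  # ('01100', '10011')
```

The vote biases toward the better-covered haplotype, which is why error-containing minority fragments are outvoted site by site.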

  7. Evidence based selection of housekeeping genes.

    Directory of Open Access Journals (Sweden)

    Hendrik J M de Jonge

    For accurate and reliable gene expression analysis, normalization of gene expression data against housekeeping genes (reference or internal control genes) is required. It is known that commonly used housekeeping genes (e.g. ACTB, GAPDH, HPRT1, and B2M) vary considerably under different experimental conditions and therefore their use for normalization is limited. We performed a meta-analysis of 13,629 human gene array samples in order to identify the most stably expressed genes. Here we show novel candidate housekeeping genes (e.g. RPS13, RPL27, RPS20 and OAZ1) with enhanced stability among a multitude of different cell types and varying experimental conditions. None of the commonly used housekeeping genes were present in the top 50 of the most stably expressed genes. In addition, using 2,543 diverse mouse gene array samples we were able to confirm the enhanced stability of the candidate novel housekeeping genes in another mammalian species. Therefore, the identified novel candidate housekeeping genes seem to be the most appropriate choice for normalizing gene expression data.
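
A simple proxy for such a stability screen is to rank genes by coefficient of variation across samples. The expression values below are invented, and a CV ranking is only a stand-in for the paper's full meta-analysis:

```python
from statistics import mean, stdev

def stability_ranking(expression):
    """Rank genes by coefficient of variation (stdev/mean) across samples;
    the most stably expressed genes, i.e. housekeeping candidates, come first."""
    cv = {gene: stdev(vals) / mean(vals) for gene, vals in expression.items()}
    return sorted(cv, key=cv.get)

expr = {
    "RPS13": [100, 102, 98, 101],   # hypothetical expression across samples
    "GAPDH": [100, 140, 60, 120],
    "ACTB":  [100, 80, 130, 90],
}
print(stability_ranking(expr))  # most stable gene listed first
```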

  8. Ultrasound mediated gene transfection

    Science.gov (United States)

    Williamson, Rene G.; Apfel, Robert E.; Brandsma, Janet L.

    2002-05-01

    Gene therapy is a promising modality for the treatment of a variety of human diseases both inherited and acquired, such as cystic fibrosis and cancer. The lack of an effective, safe method for the delivery of foreign genes into the cells, a process known as transfection, limits this effort. Ultrasound mediated gene transfection is an attractive method for gene delivery since it is a noninvasive technique, does not introduce any viral particles into the host and can offer very good temporal and spatial control. Previous investigators have shown that sonication increases transfection efficiency with and without ultrasound contrast agents. The mechanism is believed to be via a cavitation process where collapsing bubble nuclei permeabilize the cell membrane leading to increased DNA transfer. The research is focused on the use of pulsed wave high frequency focused ultrasound to transfect DNA into mammalian cells in vitro and in vivo. A better understanding of the mechanism behind the transfection process is also sought. A summary of some in vitro results to date will be presented, which includes the design of a sonication chamber that allows us to model the in vivo case more accurately.

  9. A genetic ensemble approach for gene-gene interaction identification

    Directory of Open Access Journals (Sweden)

    Ho Joshua WK

    2010-10-01

    Background It has now become clear that gene-gene interactions and gene-environment interactions are ubiquitous and fundamental mechanisms in the development of complex diseases. Though considerable effort has been put into developing statistical models and algorithmic strategies for identifying such interactions, the accurate identification of those genetic interactions has proven to be very challenging. Methods In this paper, we propose a new approach for identifying such gene-gene and gene-environment interactions underlying complex diseases. This hybrid algorithm combines a genetic algorithm (GA) and an ensemble of classifiers (called a genetic ensemble). Using this approach, the original problem of SNP interaction identification is converted into a data mining problem of combinatorial feature selection. By collecting various single nucleotide polymorphism (SNP) subsets as well as environmental factors generated in multiple GA runs, patterns of gene-gene and gene-environment interactions can be extracted using a simple combinatorial ranking method. Also considered in this study is the idea of combining identification results obtained from multiple algorithms; a novel formula based on pairwise double fault is designed to quantify the degree of complementarity. Conclusions Our simulation study demonstrates that the proposed genetic ensemble algorithm has identification power comparable to Multifactor Dimensionality Reduction (MDR) and slightly better than Polymorphism Interaction Analysis (PIA), which are the two most popular methods for gene-gene interaction identification. More importantly, the identification results generated by our genetic ensemble algorithm are highly complementary to those obtained by PIA and MDR. Experimental results from our simulation studies and real world data application also confirm the effectiveness of the proposed genetic ensemble algorithm, as well as the potential benefits of combining the identification results of multiple algorithms.
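
The pairwise double-fault idea can be sketched as the fraction of cases that two methods both get wrong. The abstract does not give the paper's exact formula, so this is the standard double-fault measure with invented predictions:

```python
def double_fault(pred_a, pred_b, truth):
    """Fraction of cases both methods misclassify. The lower the value,
    the more complementary the two methods' errors are."""
    both_wrong = sum(1 for a, b, t in zip(pred_a, pred_b, truth)
                     if a != t and b != t)
    return both_wrong / len(truth)

truth = [1, 1, 0, 0, 1, 0]
ga    = [1, 0, 0, 1, 1, 0]   # invented: wrong on cases 2 and 4
mdr   = [0, 1, 0, 0, 1, 1]   # invented: wrong on cases 1 and 6
print(double_fault(ga, mdr, truth))  # 0.0 -> errors never overlap
```

Two methods with a low double-fault rate are good candidates for combination, since one tends to be right where the other fails.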

  10. Accurate modelling of flow induced stresses in rigid colloidal aggregates

    Science.gov (United States)

    Vanni, Marco

    2015-07-01

    A method has been developed to estimate the motion and the internal stresses induced by a fluid flow on a rigid aggregate. The approach couples Stokesian dynamics and structural mechanics in order to accurately account for the effect of the complex geometry of the aggregates on hydrodynamic forces and the internal redistribution of stresses. The intrinsic error of the method, due to the low-order truncation of the multipole expansion of the Stokes solution, has been assessed by comparison with the analytical solution for the case of a doublet in a shear flow. In addition, it has been shown that the error becomes smaller as the number of primary particles in the aggregate increases, and hence it is expected to be negligible for realistic reproductions of large aggregates. The evaluation of internal forces is performed by an adaptation of the matrix methods of structural mechanics to the geometric features of the aggregates and to the particular stress-strain relationship that occurs at intermonomer contacts. A preliminary investigation of the stress distribution in rigid aggregates and their mode of breakup has been performed by studying the response to an elongational flow of both realistic reproductions of colloidal aggregates (made of several hundred monomers) and highly simplified structures. A very different behaviour has been observed between low-density aggregates with isostatic or weakly hyperstatic structures and compact aggregates with a highly hyperstatic configuration. In low-density clusters breakup is caused directly by the failure of the most stressed intermonomer contact, which is typically located in the inner region of the aggregate and hence gives rise to fragments of similar size. On the contrary, breakup of compact and highly cross-linked clusters is seldom caused by the failure of a single bond. When this happens, it proceeds through the removal of a tiny fragment from the external part of the structure. More commonly, however

  11. Fast and accurate predictions of covalent bonds in chemical space

    Science.gov (United States)

    Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole

    2016-05-01

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (~1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2+. Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSiP, HSiAs, HGeN, HGeP, HGeAs).
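
The first-order Taylor idea can be illustrated on a toy alchemical path in which Morse-potential parameters are linearly interpolated between a reference and a target bond (all parameter values invented, not the paper's ab initio data). At fixed geometry, the first-order estimate recovers most, but not all, of the true change:

```python
import math

def morse(r, De, a, re):
    """Morse bonding potential."""
    return De * (1.0 - math.exp(-a * (r - re))) ** 2 - De

def energy(lam, r):
    """Toy alchemical path: Morse parameters interpolated linearly in lambda
    between reference bond A (lam=0) and target bond B (lam=1); E(lambda)
    itself is then a nonlinear function of lambda."""
    De = (1 - lam) * 4.5 + lam * 3.8      # well depth
    a  = (1 - lam) * 1.9 + lam * 1.7      # width parameter
    re = (1 - lam) * 0.95 + lam * 1.10    # equilibrium bond length
    return morse(r, De, a, re)

def first_order(r, h=1e-4):
    """First-order Taylor estimate of E(1): E(0) + dE/dlambda at lambda=0,
    with the derivative taken by forward finite difference."""
    dEdlam = (energy(h, r) - energy(0.0, r)) / h
    return energy(0.0, r) + dEdlam

r = 1.0  # fixed ("vertical") geometry
print(energy(1.0, r), first_order(r))
```

The residual error of the first-order estimate is governed by the curvature of E(λ); the paper's observation is that for vertical interpolations among heavier elements this curvature is small.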

  12. Technological Basis and Scientific Returns for Absolutely Accurate Measurements

    Science.gov (United States)

    Dykema, J. A.; Anderson, J.

    2011-12-01

    The 2006 NRC Decadal Survey fostered a new appreciation for societal objectives as a driving motivation for Earth science. Many high-priority societal objectives are dependent on predictions of weather and climate. These predictions are based on numerical models, which derive from approximate representations of well-founded physics and chemistry on space and timescales appropriate to global and regional prediction. These laws of chemistry and physics in turn have a well-defined quantitative relationship with physical measurement units, provided these measurement units are linked to international measurement standards that are the foundation of contemporary measurement science and standards for engineering and commerce. Without this linkage, measurements have an ambiguous relationship to scientific principles that introduces avoidable uncertainty in analyses, predictions, and improved understanding of the Earth system. Since the improvement of climate and weather prediction is fundamentally dependent on the improvement of the representation of physical processes, measurement systems that reduce the ambiguity between physical truth and observations represent an essential component of a national strategy for understanding and living with the Earth system. This paper examines the technological basis and potential science returns of sensors that make measurements that are quantitatively tied on-orbit to international measurement standards, and thus testable to systematic errors. This measurement strategy provides several distinct benefits. First, because of the quantitative relationship between these international measurement standards and fundamental physical constants, measurements of this type accurately capture the true physical and chemical behavior of the climate system and are not subject to adjustment due to excluded measurement physics or instrumental artifacts. In addition, such measurements can be reproduced by scientists anywhere in the world, at any time

  14. Accurate and Timely Forecasting of CME-Driven Geomagnetic Storms

    Science.gov (United States)

    Chen, J.; Kunkel, V.; Skov, T. M.

    2015-12-01

    Wide-spread and severe geomagnetic storms are primarily caused by the ejecta of coronal mass ejections (CMEs) that impose long durations of strong southward interplanetary magnetic field (IMF) on the magnetosphere, the duration and magnitude of the southward IMF (Bs) being the main determinants of geoeffectiveness. Another important quantity to forecast is the arrival time of the expected geoeffective CME ejecta. In order to accurately forecast these quantities in a timely manner (say, 24--48 hours of advance warning time), it is necessary to calculate the evolving CME ejecta---its structure and magnetic field vector in three dimensions---using remote sensing solar data alone. We discuss a method based on the validated erupting flux rope (EFR) model of CME dynamics. It has been shown using STEREO data that the model can calculate the correct size, magnetic field, and the plasma parameters of a CME ejecta detected at 1 AU, using the observed CME position-time data alone as input (Kunkel and Chen 2010). One disparity is in the arrival time, which is attributed to the simplified geometry of the circular toroidal axis of the CME flux rope. Accordingly, the model has been extended to self-consistently include the transverse expansion of the flux rope (Kunkel 2012; Kunkel and Chen 2015). We show that the extended formulation provides a better prediction of arrival time even if the CME apex does not propagate directly toward the earth. We apply the new method to a number of CME events and compare predicted flux ropes at 1 AU to the observed ejecta structures inferred from in situ magnetic and plasma data. The EFR model also predicts the asymptotic ambient solar wind speed (Vsw) for each event, which has not been validated yet. The predicted Vsw values are tested using the ENLIL model. We discuss the minimum and sufficient required input data for an operational forecasting system for predicting the drivers of large geomagnetic storms. Kunkel, V., and Chen, J., ApJ Lett, 715, L80, 2010. Kunkel, V., Ph

  15. Fast and accurate line scanner based on white light interferometry

    Science.gov (United States)

    Lambelet, Patrick; Moosburger, Rudolf

    2013-04-01

    White-light interferometry is a highly accurate technology for 3D measurements. The principle is widely utilized in surface metrology instruments but rarely adopted for in-line inspection systems. The main challenges for rolling out inspection systems based on white-light interferometry to the production floor are its sensitivity to environmental vibrations and relatively long measurement times: a large quantity of data needs to be acquired and processed in order to obtain a single topographic measurement. Heliotis developed a smart-pixel CMOS camera (lock-in camera) which is specially suited for white-light interferometry. The demodulation of the interference signal is treated at the level of the pixel, which typically reduces the acquired data by one order of magnitude. Along with the high bandwidth of the dedicated lock-in camera, vertical scan speeds of more than 40 mm/s are reachable. The high scan speed allows for the realization of inspection systems that are rugged against external vibrations as present on the production floor. For many industrial applications, such as the inspection of wafer bumps, surfaces of mechanical parts, and solar panels, large areas need to be measured. In this case either the instrument or the sample is displaced laterally and several measurements are stitched together. The cycle time of such a system is mostly limited by the stepping time for the multiple lateral displacements. A line scanner based on white-light interferometry would eliminate most of the stepping time while maintaining robustness and accuracy. A. Olszak proposed a simple geometry to realize such a lateral scanning interferometer. We demonstrate that such inclined interferometers can benefit significantly from the fast in-pixel demodulation capabilities of the lock-in camera. One drawback of an inclined observation perspective is that its application is limited to objects with scattering surfaces.
We therefore propose an alternate geometry where the incident light is
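The in-pixel lock-in demodulation described above can be sketched in software: mix the interferogram with quadrature references at the fringe frequency, low-pass filter, and take the envelope peak as the surface height. This is an illustrative reconstruction with invented signal parameters, not Heliotis's actual pixel circuitry.

```python
import numpy as np

# Simulated white-light interferogram for one pixel along the vertical (z) scan:
# a fringe carrier under a Gaussian coherence envelope centred at the true
# surface height z0. All parameters are invented for the illustration.
z = np.linspace(0.0, 10.0, 2000)            # scan positions, micrometres
z0, fringe_period, coherence_len = 4.2, 0.3, 1.0
signal = 1.0 + np.exp(-((z - z0) / coherence_len) ** 2) \
             * np.cos(2 * np.pi * (z - z0) / fringe_period)

# Lock-in demodulation: mix with quadrature references at the fringe frequency,
# then low-pass filter (moving average spanning about two fringe periods).
ref_phase = 2 * np.pi * z / fringe_period
kernel = np.ones(120) / 120
i_comp = np.convolve(signal * np.cos(ref_phase), kernel, mode="same")
q_comp = np.convolve(signal * np.sin(ref_phase), kernel, mode="same")
envelope = np.hypot(i_comp, q_comp)

height = z[np.argmax(envelope)]             # envelope peak = surface height
```

Because the demodulation keeps only the slowly varying envelope, the raw fringe data can be discarded pixel by pixel, which is where the order-of-magnitude data reduction quoted above comes from.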

  16. An accurate and simple quantum model for liquid water.

    Science.gov (United States)

    Paesani, Francesco; Zhang, Wei; Case, David A; Cheatham, Thomas E; Voth, Gregory A

    2006-11-14

    The path-integral molecular dynamics and centroid molecular dynamics methods have been applied to investigate the behavior of liquid water at ambient conditions starting from a recently developed simple point charge/flexible (SPC/Fw) model. Several quantum structural, thermodynamic, and dynamical properties have been computed and compared to the corresponding classical values, as well as to the available experimental data. The path-integral molecular dynamics simulations show that the inclusion of quantum effects results in a less structured liquid with a reduced amount of hydrogen bonding in comparison to its classical analog. The nuclear quantization also leads to a smaller dielectric constant and a larger diffusion coefficient relative to the corresponding classical values. Collective and single molecule time correlation functions show a faster decay than their classical counterparts. Good agreement with the experimental measurements in the low-frequency region is obtained for the quantum infrared spectrum, which also shows a higher intensity and a redshift relative to its classical analog. A modification of the original parametrization of the SPC/Fw model is suggested and tested in order to construct an accurate quantum model, called q-SPC/Fw, for liquid water. The quantum results for several thermodynamic and dynamical properties computed with the new model are shown to be in a significantly better agreement with the experimental data. Finally, a force-matching approach was applied to the q-SPC/Fw model to derive an effective quantum force field for liquid water in which the effects due to the nuclear quantization are explicitly distinguished from those due to the underlying molecular interactions. Thermodynamic and dynamical properties computed using standard classical simulations with this effective quantum potential are found in excellent agreement with those obtained from significantly more computationally demanding full centroid molecular dynamics

  17. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    Science.gov (United States)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in a rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture and orientation. Fracture roughness and aperture were observed with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The laser wavelength is 488 nm, and the laser scanning is managed by a light-polarization method using two galvanometer scanner mirrors. The system improves resolution in the light-axis (namely z) direction because of the confocal optics. Sampling is performed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse- and fine-grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of roughness. The FFT results showed that low-frequency components were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired while applying five stages of uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. The measurements show that the reduction of aperture differs from part to part because of the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes of hydraulic conductivity related to aperture variation under different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increase of the normal stress and different values of
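The FFT-based roughness characterization can be illustrated on a synthetic profile sampled at the 2.5 μm spacing quoted above; the profile itself and its frequency content are invented for the example, not the authors' data.

```python
import numpy as np

# Synthetic fracture-roughness profile sampled every 2.5 micrometres:
# a strong long-wavelength undulation plus weaker fine-scale texture and noise.
rng = np.random.default_rng(0)
dx = 2.5e-6                                   # sampling interval, metres
x = np.arange(4096) * dx
profile = (5.0 * np.sin(2 * np.pi * x / 1e-3)      # dominant low-frequency waviness
           + 0.5 * np.sin(2 * np.pi * x / 5e-5)    # fine-scale roughness
           + 0.1 * rng.standard_normal(x.size))

# Power spectrum of the mean-removed profile; frequencies in cycles per metre.
spectrum = np.abs(np.fft.rfft(profile - profile.mean())) ** 2
freqs = np.fft.rfftfreq(profile.size, d=dx)

dominant = freqs[np.argmax(spectrum[1:]) + 1]      # skip the DC bin
```

Here the spectral peak lands near the long-wavelength component (about 1000 cycles/m), mirroring the paper's finding that low-frequency components dominate fracture roughness.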

  18. How utilities can achieve more accurate decommissioning cost estimates

    International Nuclear Information System (INIS)

    The number of commercial nuclear power plants that are undergoing decommissioning, coupled with the economic pressure of deregulation, has increased the focus on adequate funding for decommissioning. The introduction of spent-fuel storage and disposal of low-level radioactive waste into the cost analysis raises even greater concern about the accuracy of the fund calculation basis. The size and adequacy of the decommissioning fund have also played a major part in negotiations for the transfer of plant ownership. For all of these reasons, it is important that the operating plant owner reduce the margin of error in the preparation of decommissioning cost estimates. To date, all of these estimates have been prepared via the building-block method. That is, numerous individual calculations defining the planning, engineering, removal, and disposal of plant systems and structures are performed. These activity costs are supplemented by the period-dependent costs reflecting the administration, control, licensing, and permitting of the program. This method will continue to be used in the foreseeable future until adequate performance data are available. The accuracy of the activity cost calculation is directly related to the accuracy of the inventory of plant system components, piping and equipment, and plant structural composition. Typically, it is left up to the cost-estimating contractor to develop this plant inventory. The data are generated by searching and analyzing property asset records, plant databases, piping and instrumentation drawings, piping system isometric drawings, and component assembly drawings. However, experience has shown that these sources may not be up to date, discrepancies may exist, there may be missing data, and the level of detail may not be sufficient. Again, typically, the time constraints associated with the development of the cost estimate preclude perfect resolution of the inventory questions. Another problem area in achieving accurate cost

  19. Improved management of radiotherapy departments through accurate cost data

    International Nuclear Information System (INIS)

    Escalating health care expenses urge governments towards cost containment. More accurate data on the precise costs of health care interventions are needed. We performed an aggregate cost calculation of radiation therapy departments and treatments and discussed the different cost components. The costs of a radiotherapy department were estimated based on the accreditation norms for radiotherapy departments set forth in Belgian legislation. The major cost components of radiotherapy are the cost of buildings and facilities, equipment, medical and non-medical staff, materials and overhead. They represent around 3, 30, 50, 4 and 13% of the total costs respectively, irrespective of department size. The average cost per patient decreases with increasing department size and optimal utilization of resources. Radiotherapy treatment costs vary in a stepwise fashion: minor variations in patient load do not affect the cost picture significantly because of the small impact of variable costs. With larger increases in patient load, however, additional equipment and/or staff become necessary, resulting in additional semi-fixed costs and an important increase in costs. A sensitivity analysis of these two major cost inputs shows that a decrease in total costs of 12-13% can be obtained by assuming personnel availability 20% below full time; that, due to evolving seniority levels, the annual increase in wage costs is estimated to be more than 1%; and that by changing the clinical lifetime of buildings and equipment with an unchanged interest rate, a 5% reduction of total costs and cost per patient can be calculated. More sophisticated equipment will not have a very large impact on the cost (±4000 BEF/patient), provided that the additional equipment is adapted to the size of the department. That the recommendations we used, based on Belgian legislation, are not outrageous is shown by replacing them with the USA Blue Book recommendations. Depending on the department size, costs in

  20. Gene therapy

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    2005147 CNHK200-hA-a gene-viral therapeutic system and its antitumor effect on lung cancer. WANG Wei-guo(王伟国), et al. Viral & Gene Ther Center, Eastern Hepatobilli Surg Instit 2nd Milit Univ, Shanghai 200438. Chin J Oncol, 2005, 27(2): 69-72. Objective: To develop a novel vector system, which combines the advantages of the gene therapy,

  1. MRI Reporter Genes for Noninvasive Molecular Imaging

    Directory of Open Access Journals (Sweden)

    Caixia Yang

    2016-05-01

    Full Text Available Magnetic resonance imaging (MRI) is one of the most important imaging technologies used in clinical diagnosis. Reporter genes for MRI can be applied to accurately track cell delivery in cell therapy, evaluate the effect of gene delivery, and monitor tissue/cell-specific microenvironments. Commonly used reporter genes for MRI include genes encoding enzymes (e.g., tyrosinase and β-galactosidase), cell-surface receptors (e.g., the transferrin receptor), and endogenous reporters (e.g., the ferritin reporter gene). However, low sensitivity limits the application of MRI, so reporter gene-based multimodal imaging strategies are common, including combinations with optical imaging and radionuclide imaging. These can significantly improve diagnostic efficiency and accelerate the development of new therapies.

  2. Improving gene expression similarity measurement using pathway-based analytic dimension

    OpenAIRE

    2009-01-01

    Background Gene expression similarity measuring methods were developed and applied to search rapidly growing public microarray databases. However, current methods need to be improved to accurately measure similarity between gene expression profiles from different platforms or different experiments. Results We devised a new gene expression similarity measuring method based on pathway information. In short, the newly devised method measures similarity between gene expre...

  3. Validation of reference genes for gene expression analysis in chicory (Cichorium intybus) using quantitative real-time PCR

    OpenAIRE

    Van Bockstaele Erik; Maroufi Asad; De Loose Marc

    2010-01-01

    Abstract Background Quantitative real-time reverse transcriptase polymerase chain reaction (qRT-PCR) is a sensitive technique for quantifying gene expression levels. One or more appropriate reference genes must be selected to accurately compare mRNA transcripts across different samples and tissues. Thus far, only actin-2 has been used as a reference gene for qRT-PCR in chicory, and a full comparison of several candidate reference genes in chicory has not yet been reported. Results Seven candi...

  4. ChIP-seq Accurately Predicts Tissue-Specific Activity of Enhancers

    Energy Technology Data Exchange (ETDEWEB)

    Visel, Axel; Blow, Matthew J.; Li, Zirong; Zhang, Tao; Akiyama, Jennifer A.; Holt, Amy; Plajzer-Frick, Ingrid; Shoukry, Malak; Wright, Crystal; Chen, Feng; Afzal, Veena; Ren, Bing; Rubin, Edward M.; Pennacchio, Len A.

    2009-02-01

    A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover since they are scattered amongst the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here, we performed chromatin immunoprecipitation with the enhancer-associated protein p300, followed by massively-parallel sequencing, to map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain, and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases revealed reproducible enhancer activity in those tissues predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities and suggest that such datasets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale.

  5. Accurate determination of plasmid copy number of flow-sorted cells using droplet digital PCR.

    Science.gov (United States)

    Jahn, Michael; Vorpahl, Carsten; Türkowsky, Dominique; Lindmeyer, Martin; Bühler, Bruno; Harms, Hauke; Müller, Susann

    2014-06-17

    Many biotechnological processes rely on the expression of a plasmid-based target gene. A constant and sufficient number of plasmids per cell is desired for efficient protein production. To date, only a few methods for the determination of plasmid copy number (PCN) are available, and most of them average the PCN of total populations, disregarding heterogeneous distributions. Here, we utilize the highly precise quantification of DNA molecules by droplet digital PCR (ddPCR) and combine it with cell sorting using flow cytometry. A duplex PCR assay was set up requiring only 1000 sorted cells for precise determination of the PCN. The robustness of this method was proven by thorough optimization of cell sorting, cell disruption, and PCR conditions. When non-plasmid-harboring cells of Pseudomonas putida KT2440 were spiked with different dilutions of the expression plasmid pA-EGFP_B, a PCN from 1 to 64 could be accurately detected. As a proof of principle, induced cultures of P. putida KT2440 producing an EGFP-fused model protein by means of the plasmid pA-EGFP_B were investigated by flow cytometry and showed two distinct subpopulations, fluorescent and nonfluorescent cells. These two subpopulations were sorted for PCN determination with ddPCR. A remarkably diverging plasmid distribution was found within the population, with nonfluorescent cells showing a much lower PCN (≤1) than fluorescent cells (PCN of up to 5) under standard conditions.
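The PCN calculation from a duplex ddPCR assay follows standard Poisson statistics: the fraction of negative droplets gives the mean target copies per droplet, and the ratio of the plasmid target to a single-copy chromosomal reference yields the PCN. A minimal sketch (droplet counts invented for illustration; not the authors' code):

```python
import math

def copies_per_droplet(positive_droplets, total_droplets):
    """Poisson correction: mean target copies per droplet, from the
    fraction of droplets that stayed negative."""
    negative_fraction = (total_droplets - positive_droplets) / total_droplets
    return -math.log(negative_fraction)

def plasmid_copy_number(pos_plasmid, pos_chromosome, total_droplets):
    """PCN = plasmid target concentration over a single-copy
    chromosomal reference, both measured in the same duplex reaction."""
    lam_plasmid = copies_per_droplet(pos_plasmid, total_droplets)
    lam_chrom = copies_per_droplet(pos_chromosome, total_droplets)
    return lam_plasmid / lam_chrom

# Invented example: 15000 of 20000 droplets plasmid-positive,
# 4000 of 20000 chromosome-positive.
pcn = plasmid_copy_number(15000, 4000, 20000)
```

The Poisson correction matters because a droplet with several copies still reads as a single positive; counting positives directly would underestimate high-PCN samples.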

  6. Using Ontology Fingerprints to disambiguate gene name entities in the biomedical literature

    OpenAIRE

    Chen, Guocai; Zhao, Jieyi; Cohen, Trevor; Tao, Cui; Sun, Jingchun; Xu, Hua; Bernstam, Elmer V.; Lawson, Andrew; Zeng, Jia; Johnson, Amber M.; Holla, Vijaykumar; Bailey, Ann M.; Lara-Guerra, Humberto; Litzenburger, Beate; Meric-Bernstam, Funda

    2015-01-01

    Ambiguous gene names in the biomedical literature are a barrier to accurate information extraction. To overcome this hurdle, we generated Ontology Fingerprints for selected genes that are relevant for personalized cancer therapy. These Ontology Fingerprints were used to evaluate the association between genes and biomedical literature to disambiguate gene names. We obtained 93.6% precision for the test gene set and 80.4% for the area under a receiver-operating characteristics curve for gene an...

  7. Trichoderma genes

    Science.gov (United States)

    Foreman, Pamela; Goedegebuur, Frits; Van Solingen, Pieter; Ward, Michael

    2012-06-19

    Described herein are novel gene sequences isolated from Trichoderma reesei. Two genes encoding proteins comprising a cellulose binding domain, one encoding an arabinofuranosidase and one encoding an acetyl xylan esterase, are described. The sequences, CIP1 and CIP2, contain a cellulose binding domain. These proteins are especially useful in the textile and detergent industry and in the pulp and paper industry.

  8. Gene therapy.

    OpenAIRE

    Mota Biosca, Anna

    1992-01-01

    Applications of gene therapy have been evaluated in virtually every oral tissue, and many of these have proved successful at least in animal models. While gene therapy will not be used routinely in the next decade, practitioners of oral medicine should be aware of the potential of this novel type of treatment that doubtless will benefit many patients with oral diseases.

  9. Efficient exploration of the space of reconciled gene trees.

    Science.gov (United States)

    Szöllõsi, Gergely J; Rosikiewicz, Wojciech; Boussau, Bastien; Tannier, Eric; Daubin, Vincent

    2013-11-01

    Gene trees record the combination of gene-level events, such as duplication, transfer and loss (DTL), and species-level events, such as speciation and extinction. Gene tree-species tree reconciliation methods model these processes by drawing gene trees into the species tree using a series of gene and species-level events. The reconstruction of gene trees based on sequence alone almost always involves choosing between statistically equivalent or weakly distinguishable relationships that could be much better resolved based on a putative species tree. To exploit this potential for accurate reconstruction of gene trees, the space of reconciled gene trees must be explored according to a joint model of sequence evolution and gene tree-species tree reconciliation. Here we present amalgamated likelihood estimation (ALE), a probabilistic approach to exhaustively explore all reconciled gene trees that can be amalgamated as a combination of clades observed in a sample of gene trees. We implement the ALE approach in the context of a reconciliation model (Szöllősi et al. 2013), which allows for the DTL of genes. We use ALE to efficiently approximate the sum of the joint likelihood over amalgamations and to find the reconciled gene tree that maximizes the joint likelihood among all such trees. We demonstrate using simulations that gene trees reconstructed using the joint likelihood are substantially more accurate than those reconstructed using sequence alone. Using realistic gene tree topologies, branch lengths, and alignment sizes, we demonstrate that ALE produces more accurate gene trees even if the model of sequence evolution is greatly simplified. Finally, examining 1099 gene families from 36 cyanobacterial genomes, we find that joint likelihood-based inference results in a striking reduction in apparent phylogenetic discord, with reductions of 24%, 59%, and 46%, respectively, in the mean numbers of duplications, transfers, and losses per gene family. The open source

  10. An Accurate Method for Inferring Relatedness in Large Datasets of Unphased Genotypes via an Embedded Likelihood-Ratio Test

    KAUST Repository

    Rodriguez, Jesse M.

    2013-01-01

    Studies that map disease genes rely on accurate annotations that indicate whether individuals in the studied cohorts are related to each other or not. For example, in genome-wide association studies, the cohort members are assumed to be unrelated to one another. Investigators can correct for individuals in a cohort with previously-unknown shared familial descent by detecting genomic segments that are shared between them, which are considered to be identical by descent (IBD). Alternatively, elevated frequencies of IBD segments near a particular locus among affected individuals can be indicative of a disease-associated gene. As genotyping studies grow to use increasingly large sample sizes and meta-analyses begin to include many data sets, accurate and efficient detection of hidden relatedness becomes a challenge. To enable disease-mapping studies of increasingly large cohorts, a fast and accurate method to detect IBD segments is required. We present PARENTE, a novel method for detecting related pairs of individuals and shared haplotypic segments within these pairs. PARENTE is a computationally-efficient method based on an embedded likelihood ratio test. As demonstrated by the results of our simulations, our method exhibits better accuracy than the current state of the art, and can be used for the analysis of large genotyped cohorts. PARENTE's higher accuracy becomes even more significant in more challenging scenarios, such as detecting shorter IBD segments or when an extremely low false-positive rate is required. PARENTE is publicly and freely available at http://parente.stanford.edu/. © 2013 Springer-Verlag.

  11. The gene tree delusion.

    Science.gov (United States)

    Springer, Mark S; Gatesy, John

    2016-01-01

    Higher-level relationships among placental mammals are mostly resolved, but several polytomies remain contentious. Song et al. (2012) claimed to have resolved three of these using shortcut coalescence methods (MP-EST, STAR) and further concluded that these methods, which assume no within-locus recombination, are required to unravel deep-level phylogenetic problems that have stymied concatenation. Here, we reanalyze Song et al.'s (2012) data and leverage these re-analyses to explore key issues in systematics including the recombination ratchet, gene tree stoichiometry, the proportion of gene tree incongruence that results from deep coalescence versus other factors, and simulations that compare the performance of coalescence and concatenation methods in species tree estimation. Song et al. (2012) reported an average locus length of 3.1 kb for the 447 protein-coding genes in their phylogenomic dataset, but the true mean length of these loci (start codon to stop codon) is 139.6 kb. Empirical estimates of recombination breakpoints in primates, coupled with consideration of the recombination ratchet, suggest that individual coalescence genes (c-genes) approach ∼12 bp or less for Song et al.'s (2012) dataset, three to four orders of magnitude shorter than the c-genes reported by these authors. This result has general implications for the application of coalescence methods in species tree estimation. We contend that it is illogical to apply coalescence methods to complete protein-coding sequences. Such analyses amalgamate c-genes with different evolutionary histories (i.e., exons separated by >100,000 bp), distort true gene tree stoichiometry that is required for accurate species tree inference, and contradict the central rationale for applying coalescence methods to difficult phylogenetic problems. In addition, Song et al.'s (2012) dataset of 447 genes includes 21 loci with switched taxonomic names, eight duplicated loci, 26 loci with non-homologous sequences that are

  13. Exploring the Optimal Strategy to Predict Essential Genes in Microbes

    Directory of Open Access Journals (Sweden)

    Yao Lu

    2011-12-01

    Full Text Available Accurately predicting essential genes is important in many aspects of biology, medicine and bioengineering. In previous research, we have developed a machine learning based integrative algorithm to predict essential genes in bacterial species. This algorithm lends itself to two approaches for predicting essential genes: learning the traits from known essential genes in the target organism, or transferring essential gene annotations from a closely related model organism. However, for an understudied microbe, each approach has its potential limitations. The first is constricted by the often small number of known essential genes. The second is limited by the availability of model organisms and by evolutionary distance. In this study, we aim to determine the optimal strategy for predicting essential genes by examining four microbes with well-characterized essential genes. Our results suggest that, unless the known essential genes are few, learning from the known essential genes in the target organism usually outperforms transferring essential gene annotations from a related model organism. In fact, the required number of known essential genes is surprisingly small to make accurate predictions. In prokaryotes, when the number of known essential genes is greater than 2% of total genes, this approach already comes close to its optimal performance. In eukaryotes, achieving the same best performance requires over 4% of total genes, reflecting the increased complexity of eukaryotic organisms. Combining the two approaches resulted in an increased performance when the known essential genes are few. Our investigation thus provides key information on accurately predicting essential genes and will greatly facilitate annotations of microbial genomes.

  14. [Language gene].

    Science.gov (United States)

    Takahashi, Hiroshi

    2006-11-01

    The human capacity for acquiring speech and language must derive, at least in part, from the genome. Recent advances in molecular genetics have finally identified a 'language gene'. Disruption of FOXP2, the first identified 'language gene', causes a severe speech and language disorder. To elucidate the anatomical basis of language processing in the brain, we examined the expression pattern of the FOXP2/Foxp2 genes in monkey and rat brains through development. We found preferential expression of FOXP2/Foxp2 in the striosomal compartment of the developing striatum. Thus, we suggest that the striatum, particularly the striosomal system, may participate in neural information processing for language and speech. Our suggestion is consistent with the declarative/procedural model of language proposed by Ullman (1997, 2001), in which the procedural memory-dependent mental grammar is rooted in the basal ganglia and the frontal cortex, and the declarative memory-dependent mental lexicon is rooted in the temporal lobe. PMID:17432197

  15. Biased Gene Fractionation and Dominant Gene Expression among the Subgenomes of Brassica rapa

    OpenAIRE

    Feng Cheng; Jian Wu; Lu Fang; Silong Sun; Bo Liu; Ke Lin; Guusje Bonnema; Xiaowu Wang

    2012-01-01

    Polyploidization, both ancient and recent, is frequent among plants. A "two-step theory" was proposed to explain the meso-triplication of the Brassica "A" genome, Brassica rapa. By accurately partitioning this genome, we observed that genes in the less fractionated subgenome (LF) were dominantly expressed over the genes in the more fractionated subgenomes (MFs: MF1 and MF2), while the genes in MF1 were slightly dominantly expressed over the genes in MF2. The results indicated that the dominant...

  16. Genometa--a fast and accurate classifier for short metagenomic shotgun reads.

    Directory of Open Access Journals (Sweden)

    Colin F Davenport

    Full Text Available Metagenomic studies use high-throughput sequence data to investigate microbial communities in situ. However, considerable challenges remain in the analysis of these data, particularly with regard to speed and reliable analysis of microbial species as opposed to higher level taxa such as phyla. We here present Genometa, a computationally undemanding graphical user interface program that enables identification of bacterial species and gene content from datasets generated by inexpensive high-throughput short read sequencing technologies. Our approach was first verified on two simulated metagenomic short read datasets, detecting 100% and 94% of the bacterial species included with few false positives or false negatives. Subsequent comparative benchmarking analysis against three popular metagenomic algorithms on an Illumina human gut dataset revealed Genometa to attribute the most reads to bacteria at species level (i.e. including all strains of that species) and to demonstrate similar or better accuracy than the other programs. Lastly, speed was demonstrated to be many times that of BLAST due to the use of modern short read aligners. Our method is highly accurate if bacteria in the sample are represented by genomes in the reference sequence but cannot find species absent from the reference. This method is one of the most user-friendly and resource efficient approaches and is thus feasible for rapidly analysing millions of short reads on a personal computer. The Genometa program, a step by step tutorial and Java source code are freely available from http://genomics1.mh-hannover.de/genometa/ and on http://code.google.com/p/genometa/. This program has been tested on Ubuntu Linux and Windows XP/7.

  17. Consistency of VDJ Rearrangement and Substitution Parameters Enables Accurate B Cell Receptor Sequence Annotation.

    Directory of Open Access Journals (Sweden)

    Duncan K Ralph

    2016-01-01

    Full Text Available VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM "factorization" strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM.
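The kind of HMM decoding partis performs can be illustrated with a toy three-state annotation model using per-state categorical transition tables. The states, probabilities, and sequence below are invented for the example and are far simpler than partis's per-allele parameterization.

```python
import numpy as np

# Toy annotation HMM with three segment states: V gene, N insertion, J gene.
STATES = ["V", "N", "J"]
BASE_IDX = {"A": 0, "C": 1, "G": 2, "T": 3}

start = np.array([1.0, 0.0, 0.0])            # reads begin in V
trans = np.array([[0.9, 0.1, 0.0],           # per-state categorical transition
                  [0.0, 0.7, 0.3],           # tables (rows: from, cols: to)
                  [0.0, 0.0, 1.0]])
emit = np.array([[0.70, 0.10, 0.10, 0.10],   # V is A-rich (toy choice)
                 [0.25, 0.25, 0.25, 0.25],   # N is uniform (non-templated)
                 [0.10, 0.10, 0.10, 0.70]])  # J is T-rich (toy choice)

def viterbi(seq):
    """Most likely state path for seq, computed in log space."""
    obs = [BASE_IDX[b] for b in seq]
    with np.errstate(divide="ignore"):       # log(0) -> -inf is intended here
        ls, lt, le = np.log(start), np.log(trans), np.log(emit)
    n, k = len(obs), len(STATES)
    dp = np.full((n, k), -np.inf)
    ptr = np.zeros((n, k), dtype=int)
    dp[0] = ls + le[:, obs[0]]
    for t in range(1, n):
        scores = dp[t - 1][:, None] + lt     # scores[i, j]: come from i, go to j
        ptr[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + le[:, obs[t]]
    path = [int(dp[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(ptr[t, path[-1]]))
    return [STATES[s] for s in reversed(path)]

annotation = viterbi("AAAGCTTT")
```

The categorical tables play the role of the paper's parameter-rich per-allele transition distributions: nothing constrains them to a parametric family, so each entry can be fit directly from data.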

  18. A general pairwise interaction model provides an accurate description of in vivo transcription factor binding sites.

    Directory of Open Access Journals (Sweden)

    Marc Santolini

    Full Text Available The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIPseq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting
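The additive-energy picture can be sketched directly: the PIM binding energy is a sum of single-site field terms (the PWM part) plus pairwise coupling terms. The field (`h`) and coupling (`J`) values below are invented for illustration, not fitted to ChIP-seq data:

```python
import itertools

# Sketch of scoring a candidate TFBS under a pairwise interaction model (PIM).
# With all couplings J = 0 the score reduces exactly to a PWM score.
BASES = "ACGT"
L = 3  # toy motif length

# Single-site fields: one value per (position, base); illustrative numbers.
h = {(i, b): 0.0 for i in range(L) for b in BASES}
h[(0, "A")] = 1.0
h[(1, "C")] = 1.0
h[(2, "G")] = 1.0

# Pairwise couplings, mostly zero (sparse, as the paper observes in data).
J = {(0, 1, "A", "C"): 0.5}  # favour A at pos 0 together with C at pos 1

def energy(seq):
    """PIM 'binding energy': additive field terms plus additive pair terms."""
    e = sum(h[(i, b)] for i, b in enumerate(seq))
    e += sum(J.get((i, j, seq[i], seq[j]), 0.0)
             for i, j in itertools.combinations(range(L), 2))
    return e

print(energy("ACG"))  # 3.5: three field terms plus one pairwise term
print(energy("AAA"))  # 1.0: only the position-0 field fires, no couplings
```

The design point is that the pair terms are a strict superset of the PWM description, so the PIM can only improve the fit; maximum-entropy training (not shown) determines `h` and `J` from observed site statistics.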

  19. Identification of reference genes for gene expression studies during seed germination and seedling establishment Ricinus communis L.

    NARCIS (Netherlands)

    Ribeiro de Jesus, P.R.; Dekkers, S.J.W.; Fernandez, L.G.; Castro, De R.D.; Ligterink, W.; Hilhorst, H.W.M.

    2014-01-01

    Reverse transcription-quantitative polymerase chain reaction (RT-qPCR) is an important technology to analyse gene expression levels during plant development or in response to different treatments. An important requirement to measure gene expression levels accurately is a properly validated set of reference genes.

  20. On accurate computations of bound state properties in three- and four-electron atomic systems

    CERN Document Server

    Frolov, Alexei M

    2016-01-01

    Results of accurate computations of bound states in three- and four-electron atomic systems are discussed. Bound state properties of the four-electron lithium ion Li$^{-}$ in its ground $2^{2}S-$state are determined from the results of accurate, variational computations. We also consider a closely related problem of accurate numerical evaluation of the half-life of the beryllium-7 isotope. This problem is of paramount importance for modern radiochemistry.

  1. PANTHER version 7: improved phylogenetic trees, orthologs and collaboration with the Gene Ontology Consortium

    OpenAIRE

    Mi, Huaiyu; Dong, Qing; Muruganujan, Anushya; Gaudet, Pascale; Lewis, Suzanna; Thomas, Paul D

    2009-01-01

    Protein Analysis THrough Evolutionary Relationships (PANTHER) is a comprehensive software system for inferring the functions of genes based on their evolutionary relationships. Phylogenetic trees of gene families form the basis for PANTHER and these trees are annotated with ontology terms describing the evolution of gene function from ancestral to modern day genes. One of the main applications of PANTHER is in accurate prediction of the functions of uncharacterized genes, based on their evolu...

  2. SpeedyGenes: Exploiting an Improved Gene Synthesis Method for the Efficient Production of Synthetic Protein Libraries for Directed Evolution.

    Science.gov (United States)

    Currin, Andrew; Swainston, Neil; Day, Philip J; Kell, Douglas B

    2017-01-01

    Gene synthesis is a fundamental technology underpinning much research in the life sciences. In particular, synthetic biology and biotechnology utilize gene synthesis to assemble any desired DNA sequence, which can then be incorporated into novel parts and pathways. Here, we describe SpeedyGenes, a gene synthesis method that can assemble DNA sequences with greater fidelity (fewer errors) than existing methods, but that can also be used to encode extensive, statistically designed sequence variation at any position in the sequence to create diverse (but accurate) variant libraries. We summarize the integrated use of GeneGenie to design DNA and oligonucleotide sequences, followed by the procedure for assembling these accurately and efficiently using SpeedyGenes. PMID:27671932

  3. Using In-Service and Coaching to Increase Teachers' Accurate Use of Research-Based Strategies

    Science.gov (United States)

    Kretlow, Allison G.; Cooke, Nancy L.; Wood, Charles L.

    2012-01-01

    Increasing the accurate use of research-based practices in classrooms is a critical issue. Professional development is one of the most practical ways to provide practicing teachers with training related to research-based practices. This study examined the effects of in-service plus follow-up coaching on first grade teachers' accurate delivery of…

  4. Coupling of unstructured TLM and BEM for accurate 2D electromagnetic simulation

    OpenAIRE

    Simmons, Daniel; Cools, Kristof; Sewell, Phillip

    2015-01-01

    In this paper the hybridisation of the 2D time-domain boundary element method (BEM) with the unstructured transmission line method (UTLM) will be introduced, which enables accurate modeling of radiating boundary conditions and plane wave excitations of uniform and non-uniform targets modelled by geometrically accurate unstructured meshes. The method is demonstrated by comparing numerical results against analytical results.

  5. Smart and Accurate State-of-Charge Indication in Portable Applications

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Regtien, P.P.L.

    2005-01-01

    Accurate State-of-Charge (SoC) and remaining run-time indication for portable devices is important for the user-convenience and to prolong the lifetime of batteries. However, the known methods of SoC indication in portable applications are not accurate enough under all practical conditions. The meth

  6. A new method and instrument for accurately measuring interval between ultrashort pulses

    Institute of Scientific and Technical Information of China (English)

    Zhonggang Ji; Yuxin Leng; Yunpei Deng; Bin Tang; Haihe Lu; Ruxin Li; Zhizhan Xu

    2005-01-01

    Using the second-order autocorrelation concept, a novel method and instrument for accurately measuring the interval between two linearly polarized ultrashort pulses in real time were presented. The experiment demonstrated that the measuring method and instrument are simple and accurate (measurement error < 5 fs). Because there are no moving elements during measurement, the instrument is free of dynamic measurement error.

  7. Identification of transcriptional signals in Encephalitozoon cuniculi widespread among Microsporidia phylum: support for accurate structural genome annotation

    Directory of Open Access Journals (Sweden)

    Wincker Patrick

    2009-12-01

    , 5'UTRs being strongly reduced, these signals can be used to ensure the accurate prediction of translation initiation codons for microsporidian genes and to improve microsporidian genome annotation.

  8. Accurate location estimation of moving object with energy constraint & adaptive update algorithms to save data

    CERN Document Server

    Semwal, Vijay Bhaskar; Bhaskar, Vinay S; Sati, Meenakshi

    2011-01-01

    In the research paper "Accurate estimation of the target location of object with energy constraint & Adaptive Update Algorithms to Save Data", one of the central issues in sensor networks is tracking the location of a moving object, which carries the overhead of storing data and demands an accurate, energy-constrained estimate of the target location; there is no mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected readings are fed back to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on prediction and an adaptive update algorithm that reduces the number of sensor nodes required for an accurate estimate of the target location. A minimum of three sensor nodes is used to obtain an accurate position, and this can be extended to four or five nodes to find a more accurate location ...
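The three-sensor position estimate the abstract refers to is classic trilateration: subtracting the circle equations pairwise linearizes the problem into a 2x2 system. A minimal sketch with invented sensor coordinates and exactly computed distances:

```python
# Trilateration sketch: estimate a target's 2D position from distance
# readings at three sensor nodes. Sensor layout and distances are invented.
def trilaterate(p1, p2, p3, d1, d2, d3):
    """Solve (x - xi)^2 + (y - yi)^2 = di^2 for i = 1..3 via linearization."""
    # Subtracting circle equations pairwise gives A [x, y]^T = b.
    a11 = 2 * (p2[0] - p1[0]); a12 = 2 * (p2[1] - p1[1])
    a21 = 2 * (p3[0] - p1[0]); a22 = 2 * (p3[1] - p1[1])
    b1 = d1**2 - d2**2 + p2[0]**2 - p1[0]**2 + p2[1]**2 - p1[1]**2
    b2 = d1**2 - d3**2 + p3[0]**2 - p1[0]**2 + p3[1]**2 - p1[1]**2
    det = a11 * a22 - a12 * a21  # zero iff the sensors are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Target at (3, 4); distances computed exactly from the three sensors.
print(trilaterate((0, 0), (10, 0), (0, 10),
                  5.0, (49 + 16) ** 0.5, (9 + 36) ** 0.5))  # (3.0, 4.0)
```

With noisy distance readings the same linear system is solved in a least-squares sense; adding a fourth or fifth node, as the abstract suggests, over-determines the system and improves the estimate.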

  9. LABEL: fast and accurate lineage assignment with assessment of H5N1 and H9N2 influenza A hemagglutinins.

    Directory of Open Access Journals (Sweden)

    Samuel S Shepard

    Full Text Available The evolutionary classification of influenza genes into lineages is a first step in understanding their molecular epidemiology and can inform the subsequent implementation of control measures. We introduce a novel approach called Lineage Assignment By Extended Learning (LABEL) to rapidly determine cladistic information for any number of genes without the need for time-consuming sequence alignment, phylogenetic tree construction, or manual annotation. Instead, LABEL relies on hidden Markov model profiles and support vector machine training to hierarchically classify gene sequences by their similarity to pre-defined lineages. We assessed LABEL by analyzing the annotated hemagglutinin genes of highly pathogenic (H5N1) and low pathogenicity (H9N2) avian influenza A viruses. Using the WHO/FAO/OIE H5N1 evolution working group nomenclature, the LABEL pipeline quickly and accurately identified the H5 lineages of uncharacterized sequences. Moreover, we developed an updated clade nomenclature for the H9 hemagglutinin gene and show a similarly fast and reliable phylogenetic assessment with LABEL. While this study was focused on hemagglutinin sequences, LABEL could be applied to the analysis of any gene and shows great potential to guide molecular epidemiology activities, accelerate database annotation, and provide a data sorting tool for other large-scale bioinformatic studies.
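A minimal sketch of alignment-free hierarchical assignment of the kind LABEL performs; here simple k-mer frequency profiles and a nearest-profile rule stand in for LABEL's HMM profiles and SVM classifiers, and all sequences and clade names are invented:

```python
from collections import Counter

def kmers(seq, k=3):
    """k-mer count profile of a sequence (no alignment needed)."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def similarity(a, b):
    """Cosine similarity between two k-mer count profiles."""
    shared = sum(a[key] * b[key] for key in a)
    norm = (sum(v * v for v in a.values()) * sum(v * v for v in b.values())) ** 0.5
    return shared / norm if norm else 0.0

# Two-level hierarchy: clade -> subclade -> representative sequence (invented).
HIERARCHY = {
    "clade1": {"1a": "ATGGCGTACGT", "1b": "ATGGCGTTTTT"},
    "clade2": {"2a": "CCCCGGGGAAA", "2b": "CCCCGGGGTTT"},
}

def assign(query):
    """Hierarchical assignment: pick the clade first, then the subclade."""
    qk = kmers(query)
    clade = max(HIERARCHY, key=lambda c: max(
        similarity(qk, kmers(s)) for s in HIERARCHY[c].values()))
    sub = max(HIERARCHY[clade],
              key=lambda s: similarity(qk, kmers(HIERARCHY[clade][s])))
    return clade, sub

print(assign("ATGGCGTACGA"))  # ('clade1', '1a')
```

The hierarchical step matters for speed and accuracy: the query is compared only against subclades of the winning clade, mirroring how LABEL descends its clade tree one classifier at a time.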

  10. Reference genes for gene expression studies in wheat flag leaves grown under different farming conditions

    Directory of Open Access Journals (Sweden)

    Cordeiro Raposo Fernando

    2011-09-01

    Full Text Available Abstract Background Internal control genes with highly uniform expression throughout the experimental conditions are required for accurate gene expression analysis, as no universal reference gene exists. In this study, the expression stability of 24 candidate genes from Triticum aestivum cv. Cubus flag leaves grown under organic and conventional farming systems was evaluated in two locations in order to select suitable genes that can be used for normalization of real-time quantitative reverse-transcription PCR (RT-qPCR) reactions. The genes were selected among the most commonly used reference genes as well as genes encoding proteins involved in several metabolic pathways. Findings Individual genes displayed different expression rates across all samples assayed. Applying geNorm, a set of three potential reference genes were suitable for normalization of RT-qPCR reactions in winter wheat flag leaves cv. Cubus: TaFNRII (ferredoxin-NADP(H) oxidoreductase; AJ457980.1), ACT2 (actin 2; TC234027), and rrn26 (a putative homologue of the RNA 26S gene; AL827977.1). In addition to these three genes, which were also top-ranked by NormFinder, two extra genes, CYP18-2 (Cyclophilin A; AY456122.1) and TaWIN1 (14-3-3 like protein; AB042193), were most consistently stably expressed. Furthermore, we showed that TaFNRII, ACT2, and CYP18-2 are suitable for gene expression normalization in two other winter wheat varieties (Tommi and Centenaire) grown under three treatments (organic, conventional, and no nitrogen) and in a different environment than the one tested with cv. Cubus. Conclusions This study provides a new set of reference genes which should improve the accuracy of gene expression analyses when using wheat flag leaves as those related to the improvement of nitrogen use efficiency for cereal production.
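The geNorm-style stability ranking used in studies like this one can be sketched as follows: each candidate's M value is the mean standard deviation of its log2 expression ratios against every other candidate across samples, and lower M means more stable. The expression values below are invented, not the wheat measurements from the study:

```python
import math
import statistics

# Invented expression values for three candidate reference genes over
# four samples; "unstable" is deliberately erratic.
expr = {
    "TaFNRII":  [10.0, 10.2, 9.9, 10.1],
    "ACT2":     [20.0, 20.5, 19.7, 20.2],
    "unstable": [5.0, 15.0, 2.0, 30.0],
}

def m_value(gene):
    """geNorm-style stability: mean stdev of log2 ratios vs. all other genes."""
    sds = []
    for other in expr:
        if other == gene:
            continue
        ratios = [math.log2(a / b) for a, b in zip(expr[gene], expr[other])]
        sds.append(statistics.stdev(ratios))
    return sum(sds) / len(sds)

ranked = sorted(expr, key=m_value)  # most stable first
print(ranked[-1])  # unstable
```

In the full geNorm procedure the least stable gene is removed and M values are recomputed iteratively until the best pair remains; NormFinder ranks candidates with a different, variance-decomposition model, which is why studies report both.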

  11. Bacterial reference genes for gene expression studies by RT-qPCR: survey and analysis.

    Science.gov (United States)

    Rocha, Danilo J P; Santos, Carolina S; Pacheco, Luis G C

    2015-09-01

    The appropriate choice of reference genes is essential for accurate normalization of gene expression data obtained by the method of reverse transcription quantitative real-time PCR (RT-qPCR). In 2009, a guideline called the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) highlighted the importance of the selection and validation of more than one suitable reference gene for obtaining reliable RT-qPCR results. Herein, we searched the recent literature in order to identify the bacterial reference genes that have been most commonly validated in gene expression studies by RT-qPCR (in the first 5 years following publication of the MIQE guidelines). Through a combination of different search parameters with the text mining tool MedlineRanker, we identified 145 unique bacterial genes that were recently tested as candidate reference genes. Of these, 45 genes were experimentally validated and, in most of the cases, their expression stabilities were verified using the software tools geNorm and NormFinder. It is noteworthy that only 10 of these reference genes had been validated in two or more of the studies evaluated. An enrichment analysis using Gene Ontology classifications demonstrated that genes belonging to the functional categories of DNA Replication (GO: 0006260) and Transcription (GO: 0006351) rendered a proportionally higher number of validated reference genes. Three genes in the former functional class were also among the top five most stable genes identified through an analysis of gene expression data obtained from the Pathosystems Resource Integration Center. These results may provide a guideline for the initial selection of candidate reference genes for RT-qPCR studies in several different bacterial species. PMID:26149127

  12. Biased Gene Fractionation and Dominant Gene Expression among the Subgenomes of Brassica rapa

    NARCIS (Netherlands)

    Cheng, F.; Wu, J.; Fang, L.; Sun, S.; Liu, B.; Lin, K.; Bonnema, A.B.; Wang, Xiaowu

    2012-01-01

    Polyploidization, both ancient and recent, is frequent among plants. A "two-step theory" was proposed to explain the meso-triplication of the Brassica "A" genome: Brassica rapa. By accurately partitioning this genome, we observed that genes in the less fractionated subgenome (LF) were dominantly expressed.

  13. Mathematical Models of Gene Regulation

    Science.gov (United States)

    Mackey, Michael C.

    2004-03-01

    This talk will focus on examples of mathematical models for the regulation of repressible operons (e.g. the tryptophan operon), inducible operons (e.g. the lactose operon), and the lysis/lysogeny switch in phage λ. These "simple" gene regulatory elements can display characteristics experimentally of rapid response to perturbations and bistability, and biologically accurate mathematical models capture these aspects of the dynamics. The models, if realistic, are always nonlinear and contain significant time delays due to transcriptional and translational delays that pose substantial problems for the analysis of the possible ranges of dynamics.
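A minimal sketch of the kind of model described: a delay differential equation in which protein represses its own synthesis through a Hill function, with an explicit transcriptional/translational delay, integrated by forward Euler. All parameter values are illustrative only, not taken from the talk:

```python
def simulate(tau=10.0, beta=5.0, n=4, K=1.0, gamma=0.2, dt=0.01, t_end=100.0):
    """Forward-Euler integration of the delayed repression model
    dx/dt = beta / (1 + (x(t - tau)/K)^n) - gamma * x(t),
    a caricature of a repressible operon with delay tau."""
    steps = int(t_end / dt)
    delay = int(tau / dt)
    x = [0.0] * (steps + 1)  # protein concentration history, x(0) = 0
    for t in range(steps):
        x_delayed = x[t - delay] if t >= delay else 0.0
        production = beta / (1.0 + (x_delayed / K) ** n)  # Hill repression
        x[t + 1] = x[t] + dt * (production - gamma * x[t])
    return x

traj = simulate()
# The delayed negative feedback overshoots and rings; concentrations
# remain non-negative throughout.
print(min(traj) >= 0.0, max(traj) > 1.0)  # True True
```

Increasing the delay `tau` or the Hill coefficient `n` pushes this system from damped relaxation toward sustained oscillation, which is exactly why the delays mentioned in the abstract complicate the analysis of the possible dynamics.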

  14. An atlas of gene expression and gene co-regulation in the human retina.

    Science.gov (United States)

    Pinelli, Michele; Carissimo, Annamaria; Cutillo, Luisa; Lai, Ching-Hung; Mutarelli, Margherita; Moretti, Maria Nicoletta; Singh, Marwah Veer; Karali, Marianthi; Carrella, Diego; Pizzo, Mariateresa; Russo, Francesco; Ferrari, Stefano; Ponzin, Diego; Angelini, Claudia; Banfi, Sandro; di Bernardo, Diego

    2016-07-01

    The human retina is a specialized tissue involved in light stimulus transduction. Despite its unique biology, an accurate reference transcriptome is still missing. Here, we performed gene expression analysis (RNA-seq) of 50 retinal samples from non-visually impaired post-mortem donors. We identified novel transcripts with high confidence (Observed Transcriptome (ObsT)) and quantified the expression level of known transcripts (Reference Transcriptome (RefT)). The ObsT included 77 623 transcripts (23 960 genes) covering 137 Mb (35 Mb new transcribed genome). Most of the transcripts (92%) were multi-exonic: 81% with known isoforms, 16% with new isoforms and 3% belonging to new genes. The RefT included 13 792 genes across 94 521 known transcripts. Mitochondrial genes were among the most highly expressed, accounting for about 10% of the reads. Of all the protein-coding genes in Gencode, 65% are expressed in the retina. We exploited inter-individual variability in gene expression to infer a gene co-expression network and to identify genes specifically expressed in photoreceptor cells. We experimentally validated the photoreceptors localization of three genes in human retina that had not been previously reported. RNA-seq data and the gene co-expression network are available online (http://retina.tigem.it). PMID:27235414

  15. Meta-analytic approach to the accurate prediction of secreted virulence effectors in gram-negative bacteria

    Directory of Open Access Journals (Sweden)

    Sato Yoshiharu

    2011-11-01

    using known effectors of Salmonella and obtained the accurate list of putative effectors of the organism. The level of accuracy was sufficient to yield candidates for gene-directed experimental verification. Furthermore, new features of effectors were revealed: non-optimal codon usage and instability of the N-terminal region. From these findings, a new working hypothesis is proposed regarding mechanisms controlling the translocation of virulence effectors and determining the substrate specificity encoded in the secretion system.

  16. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    Directory of Open Access Journals (Sweden)

    Pricila da Silva Cunha

    2014-01-01

    Full Text Available Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs.
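The copy-number call behind a qPCR deletion assay like this is commonly made with the 2^-ΔΔCt method: a hemizygous deletion amplifies about one cycle later than a diploid control, giving a relative ratio near 0.5. A sketch with invented Ct values (real assays require calibrators, replicates, and efficiency checks):

```python
# 2^-ddCt relative quantification sketch for a copy-number call.
# All Ct values below are invented for illustration.
def relative_copy_number(ct_target_patient, ct_ref_patient,
                         ct_target_control, ct_ref_control):
    """Relative quantity of the target region in patient vs. diploid control,
    each normalized to a reference gene outside the deleted region."""
    ddct = ((ct_target_patient - ct_ref_patient)
            - (ct_target_control - ct_ref_control))
    return 2.0 ** (-ddct)

# A deleted region (one copy instead of two) amplifies ~1 cycle later,
# so the ratio comes out near 0.5.
ratio = relative_copy_number(26.0, 24.0, 25.0, 24.0)
print(ratio)  # 0.5
```

Calling the deletion only when both target genes (here PRKCZ and SKI in the study) fall near 0.5 is what gives the combined assay its specificity.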

  17. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation.

    Science.gov (United States)

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-04-01

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space
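The idea of bounding truncation error by the probability mass on the reflecting boundary can be illustrated on the paper's first example, the birth-death model, whose truncated steady state has a closed form via detailed balance. The rates below are illustrative, not taken from the paper:

```python
# State-space truncation sketch for a birth-death chain: constant birth
# rate, linear death rate, reflecting boundary at n_max. Rates illustrative.
def steady_state(birth, death, n_max):
    """Steady-state distribution of the truncated chain (reflecting at n_max).
    Detailed balance gives p(n+1)/p(n) = birth / (death * (n + 1))."""
    p = [1.0]
    for n in range(n_max):
        p.append(p[-1] * birth / (death * (n + 1)))
    total = sum(p)
    return [x / total for x in p]

birth, death = 5.0, 1.0          # untruncated steady state is Poisson(5)
small = steady_state(birth, death, 10)    # aggressively truncated
large = steady_state(birth, death, 100)   # effectively untruncated

# L1 difference on the truncated support, compared with the probability
# mass sitting on the truncated chain's reflecting boundary state.
error = sum(abs(a - b) for a, b in zip(small, large))
boundary_mass = small[-1]
print(error < 2 * boundary_mass)  # True
```

This is only the scalar analogue of the paper's result: for multi-species networks the same comparison is made per molecular equivalence group, and the a priori estimate replaces the costly `large` trial solution.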

  18. Three-stage Stiffly Accurate Runge-Kutta Methods for Stiff Stochastic Differential Equations

    Institute of Scientific and Technical Information of China (English)

    WANG PENG

    2011-01-01

    In this paper we discuss diagonally implicit and semi-implicit methods based on the three-stage stiffly accurate Runge-Kutta methods for solving Stratonovich stochastic differential equations (SDEs). Two methods, a three-stage stiffly accurate semi-implicit (SASI3) method and a three-stage stiffly accurate diagonally implicit (SADI3) method, are constructed in this paper. In particular, the truncated random variable is used in the implicit method. The stability properties and numerical results show the effectiveness of these methods in the pathwise approximation of stiff SDEs.

  19. An accurate potential energy curve for helium based on ab initio calculations

    Science.gov (United States)

    Janzen, A. R.; Aziz, R. A.

    1997-07-01

    Korona, Williams, Bukowski, Jeziorski, and Szalewicz [J. Chem. Phys. 106, 1 (1997)] constructed a completely ab initio potential for He2 by fitting their calculations using infinite order symmetry adapted perturbation theory at intermediate range, existing Green's function Monte Carlo calculations at short range and accurate dispersion coefficients at long range to a modified Tang-Toennies potential form. The potential with retardation added to the dipole-dipole dispersion is found to predict accurately a large set of microscopic and macroscopic experimental data. The potential with a significantly larger well depth than other recent potentials is judged to be the most accurate characterization of the helium interaction yet proposed.

  20. Selection of reference genes in canine uterine tissues.

    Science.gov (United States)

    Du, M; Wang, X; Yue, Y W; Zhou, P Y; Yao, W; Li, X; Ding, X B; Liu, X F; Guo, H; Ma, W Z

    2016-01-01

    Real-time quantitative polymerase chain reaction (RT-qPCR) is usually employed in gene expression studies in veterinary research, including in studies on canine pyometra. Canine pyometra is a common clinical disease in bitches. When using RT-qPCR, internal standards, such as reference genes, are necessary to investigate relative gene expression by quantitative measurements of mRNA levels. The aim of this study was to evaluate the stability of reference genes and select reference genes suitable for canine pyometra studies. We collected 24 bitch uterine tissue samples, including five healthy and 19 pyometra-infected samples. These were used to screen the best reference genes from among seven candidate genes (18SrRNA, ACTB, B2M, GAPDH, HPRT, RPL13A, and YWHAZ). The method of KH Sadek and the GeNorm, Normfinder, BestKeeper, and RefFinder software were used to evaluate the stability of gene expression in both pyometra and healthy uterine samples. The results showed that the expression stability of the candidate genes differed between pyometra and healthy tissues. We showed that YWHAZ was the best reference gene, which could be used as an accurate internal control gene in canine pyometra studies. To further validate this recommendation, the expression profile of a target gene, the insulin-like growth factor 1 receptor gene (IGF1R), was investigated. We found that the expression of IGF1R was significantly altered when different reference genes were used. All reference genes identified in the present study will enable more accurate normalization of gene expression data in both pyometra-infected and healthy uterine tissues. PMID:27323194

  1. Bistable switching asymptotics for the self regulating gene

    International Nuclear Information System (INIS)

    A simple stochastic model of a self regulating gene that displays bistable switching is analyzed. While on, a gene transcribes mRNA at a constant rate. Transcription factors can bind to the DNA and affect the gene's transcription rate. Before an mRNA is degraded, it synthesizes protein, which in turn regulates gene activity by influencing the activity of transcription factors. Protein is slowly removed from the system through degradation. Depending on how the protein regulates gene activity, the protein concentration can exhibit noise-induced bistable switching. An asymptotic approximation of the mean switching rate is derived that includes the pre-exponential factor, which improves upon a previously reported logarithmically accurate approximation. With the improved accuracy, a uniformly accurate approximation of the stationary probability density, describing the gene, mRNA copy number, and protein concentration is also obtained. (paper)

  2. Genes and Psoriasis

    Science.gov (United States)

    Genes hold the key to understanding ... is responsible for causing psoriatic disease. How do genes work? Genes control everything from height to eye ...

  3. Genes and Hearing Loss

    Science.gov (United States)

    ... mutation may only have dystopia canthorum. How Do Genes Work? Genes are a road map for the ...

  4. ASPic-GeneID: A Lightweight Pipeline for Gene Prediction and Alternative Isoforms Detection

    Science.gov (United States)

    Alioto, Tyler; Picardi, Ernesto; Guigó, Roderic

    2013-01-01

    New genomes are being sequenced at an increasingly rapid rate, far outpacing the rate at which manual gene annotation can be performed. Automated genome annotation is thus necessitated by this growth in genome projects; however, full-fledged annotation systems are usually home-grown and customized to a particular genome. There is thus a renewed need for accurate ab initio gene prediction methods. However, it is apparent that fully ab initio methods fall short of the required level of sensitivity and specificity for a quality annotation. Evidence in the form of expressed sequences gives the single biggest improvement in accuracy when used to inform gene predictions. Here, we present a lightweight pipeline for first-pass gene prediction on newly sequenced genomes. The two main components are ASPic, a program that derives highly accurate, albeit not necessarily complete, EST-based transcript annotations from EST alignments, and GeneID, a standard gene prediction program, which we have modified to take intron annotations as evidence. The intron annotations output by ASPic's CDS predictions are given to GeneID to constrain the exon-chaining process and produce predictions consistent with the underlying EST alignments. The pipeline was successfully tested on the entire C. elegans genome and the 44 ENCODE human pilot regions. PMID:24308000
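The constraint step can be sketched as a filter that keeps only candidate gene models whose predicted introns agree with the EST-derived introns; all coordinates and model names below are invented, and real pipelines use richer evidence than exact intron matching:

```python
# Sketch of evidence-constrained gene prediction: EST-derived introns act
# as hard constraints on ab initio candidate gene models.
est_introns = {(100, 200), (300, 400)}  # introns inferred from EST alignments

candidate_models = [
    {"name": "model_a", "introns": {(100, 200), (300, 400)}},
    {"name": "model_b", "introns": {(100, 250), (300, 400)}},  # conflicting intron
]

def consistent(model):
    """A model passes if every predicted intron that overlaps an EST intron
    matches it exactly (same donor and acceptor coordinates)."""
    for intron in model["introns"]:
        for est in est_introns:
            overlaps = intron[0] < est[1] and est[0] < intron[1]
            if overlaps and intron != est:
                return False
    return True

kept = [m["name"] for m in candidate_models if consistent(m)]
print(kept)  # ['model_a']
```

In the actual pipeline this filtering happens inside GeneID's exon-chaining dynamic program rather than as a post-hoc filter, so the highest-scoring consistent model is found directly instead of being selected from a candidate list.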

  5. Efficient and Accurate Computational Framework for Injector Design and Analysis Project

    Data.gov (United States)

    National Aeronautics and Space Administration — CFD codes used to simulate upper stage expander cycle engines are not adequately mature to support design efforts. Rapid and accurate simulations require more...

  6. Accurate and Precise Determination of Uranium by Means of Extraction Spectrophotometric

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Uranium is an important nuclear material. Accurate determination of uranium is significant in nuclear fuel production, accountancy, nuclear safeguards, and other procedures of the nuclear fuel cycle.

  7. Are Normally Sighted, Visually Impaired, and Blind Pedestrians Accurate and Reliable at Making Street Crossing Decisions?

    OpenAIRE

    Hassan, Shirin E.

    2012-01-01

    Visually impaired subjects were just as accurate and reliable as normally sighted subjects in making street crossing decisions. When using only auditory information, normally sighted, visually impaired and blind subjects, while reliable, were inaccurate in making street crossing decisions.

  8. A More Accurate, Stable, FDTD Algorithm for Electromagnetics in Anisotropic Dielectrics

    CERN Document Server

    Werner, Gregory R; Cary, John R

    2012-01-01

    A more accurate, stable, finite-difference time-domain (FDTD) algorithm is developed for simulating Maxwell's equations with isotropic or anisotropic dielectric materials. This algorithm is in many cases more accurate than previous algorithms (G. R. Werner et al., 2007; A. F. Oskooi et al., 2009), and it remedies a defect that causes instability with high dielectric contrast (usually for ε significantly greater than 10) with either isotropic or anisotropic dielectrics. Ultimately this algorithm has first-order error (in the grid cell size) when the dielectric boundaries are sharp, due to field discontinuities at the dielectric interface. Accurate treatment of the discontinuities, in the limit of infinite wavelength, leads to an asymmetric, unstable update (C. A. Bauer et al., 2011), but the symmetrized version of the latter is stable and more accurate than other FDTD methods. The convergence of field values supports the hypothesis that global first-order error can be achieved by second-order error...

  9. An accurate bound on tensor-to-scalar ratio and the scale of inflation

    CERN Document Server

    Choudhury, Sayantan

    2014-01-01

    In this paper we provide an accurate bound on the tensor-to-scalar ratio (r) for a class of models where inflation always occurs below the Planck scale, and the field displacement during inflation remains sub-Planckian.

  10. Mechanism for accurate, protein-assisted DNA annealing by Deinococcus radiodurans DdrB.

    Science.gov (United States)

    Sugiman-Marangos, Seiji N; Weiss, Yoni M; Junop, Murray S

    2016-04-19

    Accurate pairing of DNA strands is essential for repair of DNA double-strand breaks (DSBs). How cells achieve accurate annealing when large regions of single-strand DNA are unpaired has remained unclear despite many efforts focused on understanding proteins, which mediate this process. Here we report the crystal structure of a single-strand annealing protein [DdrB (DNA damage response B)] in complex with a partially annealed DNA intermediate to 2.2 Å. This structure and supporting biochemical data reveal a mechanism for accurate annealing involving DdrB-mediated proofreading of strand complementarity. DdrB promotes high-fidelity annealing by constraining specific bases from unauthorized association and only releases annealed duplex when bound strands are fully complementary. To our knowledge, this mechanism provides the first understanding for how cells achieve accurate, protein-assisted strand annealing under biological conditions that would otherwise favor misannealing.

  11. A self-interaction-free local hybrid functional: Accurate binding energies vis-à-vis accurate ionization potentials from Kohn-Sham eigenvalues

    CERN Document Server

    Schmidt, Tobias; Makmal, Adi; Kronik, Leeor; Kümmel, Stephan

    2014-01-01

    We present and test a new approximation for the exchange-correlation (xc) energy of Kohn-Sham density functional theory. It combines exact exchange with a compatible non-local correlation functional. The functional is by construction free of one-electron self-interaction, respects constraints derived from uniform coordinate scaling, and has the correct asymptotic behavior of the xc energy density. It contains one parameter that is not determined ab initio. We investigate whether it is possible to construct a functional that yields accurate binding energies and affords other advantages, specifically Kohn-Sham eigenvalues that reliably reflect ionization potentials. Tests for a set of atoms and small molecules show that within our local-hybrid form accurate binding energies can be achieved by proper optimization of the free parameter in our functional, along with an improvement in dissociation energy curves and in Kohn-Sham eigenvalues. However, the correspondence of the latter to experimental ionization potent...

  12. Equation of state for hard sphere fluids offering accurate virial coefficients

    CERN Document Server

    Tian, Jianxiang; Gui, Yuanxing; Mulero, A

    2016-01-01

    The asymptotic expansion method is extended by using currently available accurate values for the first ten virial coefficients for hard sphere fluids. It is then used to yield an equation of state for hard sphere fluids, which accurately represents the currently accepted values for the first sixteen virial coefficients and compressibility factor data in both the stable and the metastable regions of the phase diagram.
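    The truncated virial series at the heart of such equations of state is easy to sketch numerically. The sketch below uses commonly cited approximate reduced hard-sphere virial coefficients (not the authors' fitted expression) and checks the series against the well-known Carnahan-Starling equation of state:

```python
# Approximate reduced virial coefficients b_n for hard spheres, so that
# Z = 1 + sum_n b_n * eta**(n-1), with eta the packing fraction.
# b_2 and b_3 are exact (4 and 10); the rest are commonly cited approximations,
# not the authors' data.
B = {2: 4.0, 3: 10.0, 4: 18.3648, 5: 28.2245, 6: 39.8151}

def z_virial(eta, coeffs=B):
    """Compressibility factor from a truncated virial series."""
    return 1.0 + sum(b * eta**(n - 1) for n, b in coeffs.items())

def z_carnahan_starling(eta):
    """Carnahan-Starling hard-sphere equation of state, used as a sanity check."""
    return (1 + eta + eta**2 - eta**3) / (1 - eta)**3

eta = 0.2
print(z_virial(eta), z_carnahan_starling(eta))
```

    At moderate packing fractions the six-term series already tracks Carnahan-Starling to a few parts in a thousand; the point of equations of state like the one in this record is to remain accurate at much higher densities, where truncated series fail.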

  13. Energy functions for protein design I: Efficient and accurate continuum electrostatics and solvation

    OpenAIRE

    Pokala, Navin; Handel, Tracy M.

    2004-01-01

    Electrostatics and solvation energies are important for defining protein stability, structural specificity, and molecular recognition. Because these energies are difficult to compute quickly and accurately, they are often ignored or modeled very crudely in computational protein design. To address this problem, we have developed a simple, fast, and accurate approximation for calculating Born radii in the context of protein design calculations. When these approximate Born radii are used with th...

  14. Effective Temperatures of Selected Main-sequence Stars with Most Accurate Parameters

    CERN Document Server

    Soydugan, F; Soydugan, E; Bilir, S; Gökçe, E Yaz; Steer, I; Tüysüz, M; Şenyüz, T; Demircan, O

    2014-01-01

    In this study, the distributions of the double-lined detached binaries (DBs) on the planes of mass-luminosity, mass-radius and mass-effective temperature have been studied. We improved the classical mass-luminosity relation based on the database of DBs by Eker et al. (2014a). With accurate observational data available to us, a method for improving effective temperatures for eclipsing binaries with accurate masses and radii was suggested.

  15. Effective Temperatures of Selected Main-Sequence Stars with the Most Accurate Parameters

    Science.gov (United States)

    Soydugan, F.; Eker, Z.; Soydugan, E.; Bilir, S.; Gökçe, E. Y.; Steer, I.; Tüysüz, M.; Şenyüz, T.; Demircan, O.

    2015-07-01

    In this study we investigate the distributions of the properties of detached double-lined binaries (DBs) in the mass-luminosity, mass-radius, and mass-effective temperature diagrams. We have improved the classical mass-luminosity relation based on the database of DBs by Eker et al. (2014a). Based on the accurate observational data available to us we propose a method for improving the effective temperatures of eclipsing binaries with accurate mass and radius determinations.

  16. High-order accurate monotone difference schemes for solving gasdynamic problems by Godunov's method with antidiffusion

    Science.gov (United States)

    Moiseev, N. Ya.

    2011-04-01

    An approach to the construction of high-order accurate monotone difference schemes for solving gasdynamic problems by Godunov's method with antidiffusion is proposed. Godunov's theorem on monotone schemes is used to construct a new antidiffusion flux limiter in high-order accurate difference schemes as applied to linear advection equations with constant coefficients. The efficiency of the approach is demonstrated by solving linear advection equations with constant coefficients and one-dimensional gasdynamic equations.

  17. A Decision Tree of Bigrams is an Accurate Predictor of Word Sense

    OpenAIRE

    Pedersen, Ted

    2001-01-01

    This paper presents a corpus-based approach to word sense disambiguation where a decision tree assigns a sense to an ambiguous word based on the bigrams that occur nearby. This approach is evaluated using the sense-tagged corpora from the 1998 SENSEVAL word sense disambiguation exercise. It is more accurate than the average results reported for 30 of 36 words, and is more accurate than the best results for 19 of 36 words.

  18. Rapid and Accurate Evaluation of the Quality of Commercial Organic Fertilizers Using Near Infrared Spectroscopy

    OpenAIRE

    Chang Wang; Chichao Huang; Jian Qian; Jian Xiao; Huan Li; Yongli Wen; Xinhua He; Wei Ran; Qirong Shen; Guanghui Yu

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing commercial organic fertilizer quality. A total of 104 commercial organic fertilizers were collected fro...

  19. Biased gene fractionation and dominant gene expression among the subgenomes of Brassica rapa.

    Directory of Open Access Journals (Sweden)

    Feng Cheng

    Full Text Available Polyploidization, both ancient and recent, is frequent among plants. A "two-step theory" was proposed to explain the meso-triplication of the Brassica "A" genome, Brassica rapa. By accurately partitioning this genome, we observed that genes in the less fractioned subgenome (LF) were dominantly expressed over the genes in the more fractioned subgenomes (MFs: MF1 and MF2), while the genes in MF1 were slightly dominantly expressed over the genes in MF2. The results indicated that the dominantly expressed genes tended to be resistant against gene fractionation. By re-sequencing two B. rapa accessions, a vegetable turnip (VT117) and a Rapid Cycling line (L144), we found that genes in LF had fewer non-synonymous or frameshift mutations than genes in MFs; however, mutation rates were not significantly different between MF1 and MF2. The differences in gene expression patterns and on-going gene death among the three subgenomes suggest that "two-step" genome triplication and differential subgenome methylation played important roles in the genome evolution of B. rapa.

  20. Prediction of Tumor Outcome Based on Gene Expression Data

    Institute of Scientific and Technical Information of China (English)

    Liu Juan; Hitoshi Iba

    2004-01-01

    Gene expression microarray data can be used to classify tumor types. We proposed a new procedure to classify human tumor samples based on microarray gene expressions by using a hybrid supervised learning method called MOEA+WV (Multi-Objective Evolutionary Algorithm + Weighted Voting). MOEA is used to search for relatively few subsets of informative genes from the high-dimensional gene space, and WV is used as a classification tool. This new method has been applied to predict the subtypes of lymphoma and outcomes of medulloblastoma. The results are relatively accurate and meaningful compared to those from other methods.
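    The weighted-voting half of such a scheme can be illustrated with the classic signal-to-noise weighting of Golub et al.: each informative gene votes with a strength proportional to how far the sample's expression lies from the midpoint of the two class means. The sketch below is a generic illustration of weighted voting, not the authors' MOEA+WV implementation, and all data are synthetic:

```python
from statistics import mean, stdev

def train_wv(class_a, class_b):
    """Per-gene weights and decision boundaries for weighted voting.
    Each sample is a list of gene expression values; the weight is the
    signal-to-noise ratio (Golub et al.), the boundary the midpoint of class means."""
    model = []
    for g in range(len(class_a[0])):
        a = [s[g] for s in class_a]
        b = [s[g] for s in class_b]
        w = (mean(a) - mean(b)) / (stdev(a) + stdev(b))
        model.append((w, (mean(a) + mean(b)) / 2))
    return model

def predict_wv(model, sample):
    """Positive vote total -> class A, negative -> class B."""
    votes = sum(w * (x - mid) for (w, mid), x in zip(model, sample))
    return "A" if votes > 0 else "B"

# Synthetic data: gene 0 is high in class A, gene 1 is high in class B.
A = [[9.0, 1.2], [8.5, 0.8], [9.4, 1.0]]
B = [[1.1, 8.8], [0.9, 9.2], [1.3, 8.5]]
model = train_wv(A, B)
print(predict_wv(model, [8.8, 1.1]))  # a clearly class-A profile
```

    In a full pipeline, the evolutionary search would select which genes enter the model; the voting step itself stays this simple.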

  1. Bypassing hazard of housekeeping genes: their evaluation in rat granule neurons treated with cerebrospinal fluid of multiple sclerosis subjects

    OpenAIRE

    Deepali Mathur; Gerardo Lopez-Rodas; Bonaventura Casanova; Francisco Coret-Ferrer

    2015-01-01

    Gene expression studies employing real-time PCR have become an intrinsic part of biomedical research. Appropriate normalization of target gene transcript(s) based on stably expressed housekeeping genes is crucial in individual experimental conditions to obtain accurate results. In multiple sclerosis (MS), several gene expression studies have been undertaken; however, the suitability of housekeeping genes to express stably in this disease is not yet explored. Recent research suggests that their...

  2. An accurate method for quantifying and analyzing copy number variation in porcine KIT by an oligonucleotide ligation assay

    Directory of Open Access Journals (Sweden)

    Cho In-Cheol

    2007-11-01

    Full Text Available Abstract Background Aside from single nucleotide polymorphisms, copy number variations (CNVs) are the most important factors in susceptibility to genetic disorders because they affect expression levels of genes. In previous studies, pyrosequencing, mini-sequencing, real-time PCR, invader assays and other techniques have been used to detect CNVs. However, the higher the copy number in a genome, the more difficult it is to resolve the copies, so a more accurate method for measuring CNVs and assigning genotype is needed. Results PCR followed by a quantitative oligonucleotide ligation assay (qOLA) was developed for quantifying CNVs. The accuracy and precision of the assay were evaluated for porcine KIT, which was selected as a model locus. Overall, the root mean squares of bias and standard deviation of qOLA were 2.09 and 0.45, respectively. These values are less than half of those in the published pyrosequencing assay for analyzing CNV in porcine KIT. Using a combined method of qOLA and another pyrosequencing for quantitative analysis of KIT copies with spliced forms, we confirmed the segregation of KIT alleles in 145 F1 animals with pedigree information and verified the correct assignment of genotypes. In a diagnostic test on 100 randomly sampled commercial pigs, there was perfect agreement between the genotypes obtained by grouping observations on a scatter plot and by clustering using the nearest centroid sorting method implemented in PROC FASTCLUS of the SAS package. In a test on 159 Large White pigs, there were only two discrepancies between genotypes assigned by the two clustering methods (98.7% agreement), confirming that the quantitative ligation assay established here makes genotyping possible through the accurate measurement of high KIT copy numbers (>4 per diploid genome). Moreover, the assay is sensitive enough for use on DNA from hair follicles, indicating that DNA from various sources could be used. Conclusion We have established a high

  3. On the accurate estimation of gap fraction during daytime with digital cover photography

    Science.gov (United States)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated raw DCP image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived by the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies across a diverse range of solar zenith angles. The perforated-panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method resulted in accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
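    The final step of any cover-photography pipeline, turning a classified image into a gap fraction, is simply the fraction of pixels identified as sky. The sketch below uses plain global thresholding on a synthetic array; it is a deliberately minimal stand-in for the scattering correction and sky reconstruction described in the abstract:

```python
import numpy as np

def gap_fraction(image, threshold):
    """Fraction of pixels classified as sky (gap) in a canopy photograph.
    `image` is a 2-D array of pixel intensities; pixels brighter than
    `threshold` count as sky. Plain global thresholding, for illustration only."""
    sky = image > threshold
    return sky.mean()

# Synthetic 4x4 "photo": 6 bright sky pixels out of 16 -> gap fraction 0.375.
img = np.array([[250, 250, 30, 20],
                [250, 250, 25, 15],
                [250, 250, 10, 12],
                [ 18,  22, 14, 11]])
print(gap_fraction(img, 128))  # 0.375
```

    The methodological contribution of the record lies in making the classification itself objective, so that this last averaging step gives consistent results across exposures and sky conditions.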

  4. Validation of reference genes for gene expression studies in the emerald ash borer (Agrilus planipennis)

    Institute of Scientific and Technical Information of China (English)

    Swapna Priya Rajarapu; Praveen Mamidala; Omprakash Mittapalli

    2012-01-01

    The emerald ash borer (EAB, Agrilus planipennis Fairmaire), an exotic invasive insect pest, has killed millions of ash trees (Fraxinus spp.) across North America and threatens billions more. We validated six A. planipennis reference genes (actin, ACT; beta tubulin, β-TUB; glyceraldehyde-3-phosphate dehydrogenase, GAPDH; ribosomal protein, RPL7; translation elongation factor 1α, TEF-1α; and ubiquitin, UBQ) using geNorm, NormFinder and BestKeeper for accurate determination of target messenger RNA levels in gene expression studies. The stability of the six reference genes was evaluated in different larval tissues, developmental stages and two treatments of A. planipennis using quantitative real-time polymerase chain reaction. Although there was no consistent ranking observed among the reference genes across the samples, the overall analysis revealed TEF-1α as the most stable reference gene. GAPDH and ACT showed the least stability for all the samples studied. We conclude that TEF-1α is the most appropriate reference gene for gene expression studies in A. planipennis. The results obtained can be applied to transcript profiling in other invasive insect pests. Further, these validated reference genes could also serve as the basis for selection of candidate reference genes in any given insect system post-validation.

  5. Validation of reference genes for quantitative gene expression studies in Volvox carteri using real-time RT-PCR.

    Science.gov (United States)

    Kianianmomeni, Arash; Hallmann, Armin

    2013-12-01

    Quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) is a sensitive technique for analysis of gene expression under a wide diversity of biological conditions. However, the identification of suitable reference genes is a critical factor for analysis of gene expression data. To determine potential reference genes for normalization of qRT-PCR data in the green alga Volvox carteri, the transcript levels of ten candidate reference genes were measured by qRT-PCR in three experimental sample pools containing different developmental stages, cell types and stress treatments. The expression stability of the candidate reference genes was then calculated using the algorithms geNorm, NormFinder and BestKeeper. The genes for 18S ribosomal RNA (18S) and eukaryotic translation elongation factor 1α2 (eef1) turned out to have the most stable expression levels among the samples both from different developmental stages and different stress treatments. The genes for the ribosomal protein L23 (rpl23) and the TATA-box binding protein (tbpA) showed equivalent transcript levels in the comparison of different cell types, and therefore, can be used as reference genes for cell-type specific gene expression analysis. Our results indicate that more than one reference gene is required for accurate normalization of qRT-PCRs in V. carteri. The reference genes in our study show a much better performance than the housekeeping genes used as a reference in previous studies.
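    The geNorm stability measure M used in studies like this one is straightforward to sketch: for each candidate gene, M is the mean standard deviation of the pairwise log2 expression ratios against every other candidate, and a lower M means more stable expression. This is a from-scratch sketch of the published algorithm, not the authors' code, and the data are synthetic:

```python
from math import log2
from statistics import stdev, mean

def genorm_m(expression):
    """geNorm stability measure M for each candidate reference gene.
    `expression` maps gene name -> list of linear-scale expression values,
    one per sample. Lower M indicates more stable expression."""
    genes = list(expression)
    m = {}
    for g in genes:
        sds = []
        for h in genes:
            if h == g:
                continue
            # Standard deviation of the pairwise log2 ratio across samples.
            ratios = [log2(a / b) for a, b in zip(expression[g], expression[h])]
            sds.append(stdev(ratios))
        m[g] = mean(sds)
    return m

# Genes A and B co-vary perfectly (constant ratio); C fluctuates independently.
data = {"A": [1, 2, 4, 8], "B": [2, 4, 8, 16], "C": [1, 8, 2, 16]}
m = genorm_m(data)
print(sorted(m, key=m.get))  # most stable genes first
```

    The full geNorm procedure iteratively drops the gene with the highest M and recomputes, which is how rankings like "18S and eef1 most stable" are obtained.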

  6. Superior cross-species reference genes: a blueberry case study.

    Science.gov (United States)

    Die, Jose V; Rowland, Lisa J

    2013-01-01

    The advent of affordable Next Generation Sequencing technologies has had a major impact on studies of many crop species, where access to genomic technologies and genome-scale data sets has been extremely limited until now. The recent development of genomic resources in blueberry will enable the application of high throughput gene expression approaches that should relatively quickly increase our understanding of blueberry physiology. These studies, however, require a highly accurate and robust workflow and make necessary the identification of reference genes with high expression stability for correct target gene normalization. To create a set of superior reference genes for blueberry expression analyses, we mined a publicly available transcriptome data set from blueberry for orthologs to a set of Arabidopsis genes that showed the most stable expression in a developmental series. In total, the expression stability of 13 putative reference genes was evaluated by qPCR, and a set of new references with high stability values across a developmental series in fruits and floral buds of blueberry was identified. We also demonstrated the need to use at least two, preferably three, reference genes to avoid inconsistencies in results, even when superior reference genes are used. The new references identified here provide a valuable resource for accurate normalization of gene expression in Vaccinium spp. and may be useful for other members of the Ericaceae family as well.

  7. Systems biology-guided identification of synthetic lethal gene pairs and its potential use to discover antibiotic combinations

    DEFF Research Database (Denmark)

    Aziz, Ramy K.; Monk, Jonathan M.; Lewis, R. M.;

    2015-01-01

    and phenotype, but their ability to accurately simulate gene-gene interactions has not been investigated extensively. Here we assess how accurately a metabolic model for Escherichia coli computes one particular type of gene-gene interaction, synthetic lethality, and find that the accuracy rate is between 25% and 43%. The most common failure modes were incorrect computation of single gene essentiality and biological information that was missing from the model. Moreover, we performed virtual and biological screening against several synthetic lethal pairs to explore whether two-compound formulations could...
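    The notion of computed synthetic lethality can be illustrated without a full metabolic model. In the toy network below, growth requires a reaction catalyzed by either of two redundant isozymes, so neither gene alone is essential but the pair is synthetically lethal; this is a deliberately minimal stand-in for the flux-balance computations described in the abstract, with entirely hypothetical gene names:

```python
from itertools import combinations

# Toy gene-protein-reaction logic: growth requires reaction R, catalyzed by
# either of two isozymes encoded by geneA or geneB (redundancy).
# geneC encodes an unrelated, genuinely essential function.
def grows(knocked_out):
    has_R = "geneA" not in knocked_out or "geneB" not in knocked_out
    has_essential = "geneC" not in knocked_out
    return has_R and has_essential

genes = ["geneA", "geneB", "geneC"]

# Single-gene essentiality: knockouts that abolish growth on their own.
single_essential = [g for g in genes if not grows({g})]

# Synthetic lethal pairs: both singles grow, the double knockout does not.
synthetic_lethal = [p for p in combinations(genes, 2)
                    if grows({p[0]}) and grows({p[1]}) and not grows(set(p))]
print(single_essential, synthetic_lethal)
```

    Real screens replace the boolean `grows` with a flux-balance growth computation over a genome-scale model, which is exactly where the 25-43% accuracy figure in the record comes from.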

  8. Identification and Validation of Reference Genes and Their Impact on Normalized Gene Expression Studies across Cultivated and Wild Cicer Species.

    Science.gov (United States)

    Reddy, Dumbala Srinivas; Bhatnagar-Mathur, Pooja; Reddy, Palakolanu Sudhakar; Sri Cindhuri, Katamreddy; Sivaji Ganesh, Adusumalli; Sharma, Kiran Kumar

    2016-01-01

    Quantitative Real-Time PCR (qPCR) is a preferred and reliable method for accurate quantification of gene expression to understand precise gene functions. A total of 25 candidate reference genes including traditional and new generation reference genes were selected and evaluated in a diverse set of chickpea samples. The samples used in this study included nine chickpea genotypes (Cicer spp.) comprising of cultivated and wild species, six abiotic stress treatments (drought, salinity, high vapor pressure deficit, abscisic acid, cold and heat shock), and five diverse tissues (leaf, root, flower, seedlings and seed). The geNorm, NormFinder and RefFinder algorithms used to identify stably expressed genes in four sample sets revealed stable expression of UCP and G6PD genes across genotypes, while TIP41 and CAC were highly stable under abiotic stress conditions. While PP2A and ABCT genes were ranked as best for different tissues, ABCT, UCP and CAC were most stable across all samples. This study demonstrated the usefulness of new generation reference genes for more accurate qPCR based gene expression quantification in cultivated as well as wild chickpea species. Validation of the best reference genes was carried out by studying their impact on normalization of aquaporin genes PIP1;4 and TIP3;1, in three contrasting chickpea genotypes under high vapor pressure deficit (VPD) treatment. The chickpea TIP3;1 gene was significantly upregulated under high VPD conditions, with higher relative expression in the drought-susceptible genotype, confirming the suitability of the selected reference genes for expression analysis. This is the first comprehensive study on the stability of the new generation reference genes for qPCR studies in chickpea across species, different tissues and abiotic stresses. PMID:26863232
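    Once stable reference genes are chosen, relative expression is typically computed by the 2^-ΔΔCt method, normalizing the target's Ct against the mean Ct of several references. A minimal sketch, assuming perfect (two-fold per cycle) amplification efficiency:

```python
from statistics import mean

def fold_change(target_ct_treated, ref_cts_treated,
                target_ct_control, ref_cts_control):
    """Relative expression by the 2^-ddCt method, normalizing against the
    mean Ct of multiple reference genes (assumes 100% PCR efficiency).
    Averaging Cts corresponds to a geometric mean of reference quantities."""
    d_ct_treated = target_ct_treated - mean(ref_cts_treated)
    d_ct_control = target_ct_control - mean(ref_cts_control)
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Target amplifies 2 cycles earlier under treatment while references are
# unchanged -> four-fold up-regulation.
print(fold_change(22.0, [18.0, 20.0], 24.0, [18.0, 20.0]))  # 4.0
```

    An unstable reference gene shifts the mean reference Ct and silently distorts this fold change, which is why studies like this one validate reference stability first.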

  9. Maximizing biomarker discovery by minimizing gene signatures

    Directory of Open Access Journals (Sweden)

    Chang Chang

    2011-12-01

    Full Text Available Abstract Background The use of gene signatures can potentially be of considerable value in the field of clinical diagnosis. However, gene signatures defined with different methods can vary considerably even when applied to the same disease and the same endpoint. Previous studies have shown that the correct selection of subsets of genes from microarray data is key for the accurate classification of disease phenotypes, and a number of methods have been proposed for the purpose. However, these methods refine the subsets by only considering each single feature, and they do not confirm the association between the genes identified in each gene signature and the phenotype of the disease. We proposed an innovative new method termed Minimize Feature's Size (MFS), based on multiple-level similarity analyses and the association between the genes and disease, for breast cancer endpoints, comparing classifier models generated from the second phase of MicroArray Quality Control (MAQC-II) and trying to develop effective meta-analysis strategies to transform the MAQC-II signatures into a robust and reliable set of biomarkers for clinical applications. Results We analyzed the similarity of the multiple gene signatures in an endpoint and between the two endpoints of breast cancer at the probe and gene levels; the results indicate that disease-related genes can be preferably selected as the components of a gene signature, and that the gene signatures for the two endpoints could be interchangeable. The minimized signatures were built at the probe level by using MFS for each endpoint. By applying the approach, we generated a much smaller gene signature with similar predictive power compared with the gene signatures from MAQC-II. Conclusions Our results indicate that gene signatures of both large and small sizes could perform equally well in clinical applications. Besides, consistency and biological significance can be detected among different gene signatures, reflecting the

  10. An Accurate Arabic Root-Based Lemmatizer for Information Retrieval Purposes

    CERN Document Server

    El-Shishtawy, Tarek

    2012-01-01

    In spite of its robust syntax, semantic cohesion, and lower ambiguity, lemma-level analysis and generation has not yet been a focus of the Arabic NLP literature. In the current research, we propose the first non-statistical accurate Arabic lemmatizer algorithm that is suitable for information retrieval (IR) systems. The proposed lemmatizer makes use of different Arabic language knowledge resources to generate an accurate lemma form and its relevant features that support IR purposes. As a POS tagger, the experimental results show that the proposed algorithm achieves a maximum accuracy of 94.8%. For first-seen documents, an accuracy of 89.15% is achieved, compared to 76.7% for the up-to-date Stanford accurate Arabic model on the same dataset.

  11. An Accurate Arabic Root-Based Lemmatizer for Information Retrieval Purposes

    Directory of Open Access Journals (Sweden)

    Tarek El-Shishtawy

    2012-01-01

    Full Text Available In spite of its robust syntax, semantic cohesion, and lower ambiguity, lemma-level analysis and generation has not yet been a focus of the Arabic NLP literature. In the current research, we propose the first non-statistical accurate Arabic lemmatizer algorithm that is suitable for information retrieval (IR) systems. The proposed lemmatizer makes use of different Arabic language knowledge resources to generate an accurate lemma form and its relevant features that support IR purposes. As a POS tagger, the experimental results show that the proposed algorithm achieves a maximum accuracy of 94.8%. For first-seen documents, an accuracy of 89.15% is achieved, compared to 76.7% for the up-to-date Stanford accurate Arabic model on the same dataset.

  12. Simple and Accurate Analytical Solutions of the Electrostatically Actuated Curled Beam Problem

    KAUST Repository

    Younis, Mohammad I.

    2014-08-17

    We present analytical solutions of the electrostatically actuated initially deformed cantilever beam problem. We use a continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions for two commonly observed deformed beam configurations: the curled and tilted configurations. The derived analytical formulas are validated by comparing their results to experimental data in the literature and numerical results of a multi-mode reduced order model. The derived expressions do not involve any complicated integrals or complex terms and can be conveniently used by designers for quick, yet accurate, estimations. The formulas are found to yield accurate results for most commonly encountered microbeams of initial tip deflections of a few microns. For largely deformed beams, we found that these formulas yield less accurate results due to the limitations of the single-mode approximations they are based on. In such cases, multi-mode reduced order models need to be utilized.

  13. Asymptotic expansion based equation of state for hard-disk fluids offering accurate virial coefficients

    CERN Document Server

    Tian, Jianxiang; Mulero, A

    2016-01-01

    Despite the fact that more than 30 analytical expressions for the equation of state of hard-disk fluids have been proposed in the literature, none of them is capable of reproducing the currently accepted numeric or estimated values for the first eighteen virial coefficients. Using the asymptotic expansion method, extended to the first ten virial coefficients for hard-disk fluids, fifty-seven new expressions for the equation of state have been studied. Of these, a new equation of state is selected which reproduces accurately all of the first eighteen virial coefficients. Comparisons of the compressibility factor with computer simulations show that this new equation is as accurate as other similar expressions with the same number of parameters. Finally, the location of the poles of the 57 new equations shows that there are some particular configurations which could give both the accurate virial coefficients and the correct closest packing fraction in the future when higher virial coefficients than the t...

  14. Measuring Accurate Body Parameters of Dressed Humans with Large-Scale Motion Using a Kinect Sensor

    Directory of Open Access Journals (Sweden)

    Sidan Du

    2013-08-01

    Full Text Available Non-contact human body measurement plays an important role in surveillance, physical healthcare, on-line business and virtual fitting. Current methods for measuring the human body without physical contact usually cannot handle humans wearing clothes, which limits their applicability in public environments. In this paper, we propose an effective solution that can measure accurate parameters of the human body with large-scale motion from a Kinect sensor, assuming that the people are wearing clothes. Because motion can drive clothes attached to the human body loosely or tightly, we adopt a space-time analysis to mine the information across the posture variations. Using this information, we recover the human body, regardless of the effect of clothes, and measure the human body parameters accurately. Experimental results show that our system can perform more accurate parameter estimation on the human body than state-of-the-art methods.

  15. Second-order accurate finite volume method for well-driven flows

    CERN Document Server

    Dotlić, Milan; Pokorni, Boris; Pušić, Milenko; Dimkić, Milan

    2013-01-01

    We consider a finite volume method for a well-driven fluid flow in a porous medium. Due to the singularity of the well, modeling in the near-well region with standard numerical schemes results in a completely wrong total well flux and an inaccurate hydraulic head. Local grid refinement can help, but it comes at a computational cost. In this article we propose two methods to address well singularity. In the first method the flux through well faces is corrected using a logarithmic function, in a way related to the Peaceman correction. Coupling this correction with a second-order accurate two-point scheme gives a greatly improved total well flux, but the resulting scheme is still not even first-order accurate on coarse grids. In the second method fluxes in the near-well region are corrected by representing the hydraulic head as a sum of a logarithmic and a linear function. This scheme is second-order accurate.
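    The logarithmic correction the first method relates to can be made concrete with Peaceman's classic result: on a square grid of spacing Δx, the well-block head corresponds to the analytic radial solution at an equivalent radius r_eq ≈ 0.2 Δx, which yields the well index below. This sketch is the standard Peaceman formula, not the authors' corrected-flux scheme, and the parameter values are illustrative:

```python
import math

def peaceman_well_index(K, thickness, dx, r_well):
    """Well index WI relating flux to head difference, Q = WI * (h_block - h_well),
    for a vertical well in a square grid block with isotropic conductivity K.
    Uses Peaceman's equivalent radius r_eq ~ 0.2 * dx."""
    r_eq = 0.2 * dx
    return 2.0 * math.pi * K * thickness / math.log(r_eq / r_well)

# Illustrative values: K in m/s, thickness and dx in m, well radius in m.
wi = peaceman_well_index(K=1e-4, thickness=10.0, dx=5.0, r_well=0.1)
print(wi)
```

    Without such a correction, the block head is wrongly identified with the head at the well radius itself, which is exactly the near-well error the article's two methods address more accurately.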

  16. Accurate, robust and reliable calculations of Poisson-Boltzmann solvation energies

    CERN Document Server

    Wang, Bao

    2016-01-01

    Developing accurate solvers for the Poisson-Boltzmann (PB) model is the first step toward making the PB model suitable for implicit solvent simulation. Reducing the influence of grid size on solver performance helps to increase the speed of the solver and to provide accurate electrostatics analysis for solvated molecules. In this work, we explore an accurate coarse-grid PB solver based on the Green's function treatment of the singular charges, the matched interface and boundary (MIB) method for treating the geometric singularities, and a posterior electrostatic potential field extension for calculating the reaction field energy. We have made our previous PB software, MIBPB, robust, and it provides almost grid-size-independent reaction field energy calculations. A large number of numerical tests verify the grid-size independence of the MIBPB software. The advantage of the MIBPB software is that the acceleration of the PB solver comes directly from the numerical algorithm rather than the utilization of advanced computer architectures...

  17. Automated Image-Based Procedures for Accurate Artifacts 3D Modeling and Orthoimage Generation

    Directory of Open Access Journals (Sweden)

    Marc Pierrot-Deseilligny

    2011-12-01

    Full Text Available The accurate 3D documentation of architecture and heritage is becoming common and required in many application contexts. The potential of the image-based approach is now very well known, but there is a lack of reliable, precise and flexible solutions, ideally open-source, that could be used for metric and accurate documentation or digital conservation, and not only for simple visualization or web-based applications. The article presents a set of photogrammetric tools developed to derive accurate 3D point clouds and orthoimages for the digitization of archaeological and architectural objects. The aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on 3D surveying and modeling experience, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and for several representation needs (2D technical documentation, 3D reconstruction, web visualization, etc.).

  18. Accurate compressed look up table method for CGH in 3D holographic display.

    Science.gov (United States)

    Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian

    2015-12-28

    Computer-generated holograms (CGH) should be obtained with high accuracy and high speed for 3D holographic display, yet most research focuses only on speed. In this paper, a simple and effective computation method for CGH is proposed based on Fresnel diffraction theory and a look-up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method obtains more accurate reconstructed images with lower memory usage than the split look-up table and compressed look-up table methods, without sacrificing computational speed in hologram generation; it is therefore called the accurate compressed look-up table (AC-LUT) method. AC-LUT is believed to be an effective method for calculating the CGH of 3D objects for real-time 3D holographic display, where huge amounts of data are required, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future. PMID:26831987
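For orientation, a brute-force Fresnel point-source hologram can be sketched as below; a look-up-table method such as the one described would precompute and reuse the quadratic phase factors per depth plane instead of recomputing them for every object point. This is an illustrative sketch under the paraxial approximation, not the paper's algorithm:

```python
import numpy as np

def fresnel_cgh(points, amplitudes, nx, ny, pitch, wavelength):
    # Each object point (px, py, pz) contributes a paraxial spherical wave
    # with phase  pi * ((x-px)^2 + (y-py)^2) / (wavelength * pz)
    # on the hologram plane; the result is encoded as a phase-only CGH.
    xs = (np.arange(nx) - nx / 2) * pitch
    ys = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((ny, nx), dtype=complex)
    k = np.pi / wavelength
    for (px, py, pz), a in zip(points, amplitudes):
        field += a * np.exp(1j * k * ((X - px)**2 + (Y - py)**2) / pz)
    return np.angle(field)  # phase-only hologram in [-pi, pi]
```

An LUT variant trades memory for speed by tabulating the quadratic phase term once per depth plane; the accuracy/memory balance is exactly what AC-LUT targets.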

  19. A Study on SVM Based on the Weighted Elitist Teaching-Learning-Based Optimization and Application in the Fault Diagnosis of Chemical Process

    Directory of Open Access Journals (Sweden)

    Cao Junxiang

    2015-01-01

    Full Text Available Teaching-Learning-Based Optimization (TLBO) is a new swarm intelligence optimization algorithm that simulates the learning process of a class. To address problems of the traditional TLBO such as low optimization efficiency and poor stability, this paper proposes an improved TLBO algorithm, mainly by introducing elitism into TLBO and adopting different inertia-weight decreasing strategies for elite and ordinary individuals in the teacher stage and the student stage. The validity of the improved TLBO is verified by optimizing several typical test functions, and an SVM optimized by the weighted elitist TLBO is applied to the diagnosis and classification of common failure data from the TE chemical process. Compared with SVMs combined with other traditional optimization methods, the SVM optimized by the weighted elitist TLBO achieves measurably higher fault diagnosis and classification accuracy.
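For reference, the basic TLBO teacher and learner phases (without the elitism and inertia weighting this paper adds) can be sketched as follows; all names and parameters here are illustrative:

```python
import random

def tlbo(f, bounds, pop_size=20, iters=100, seed=0):
    # Basic Teaching-Learning-Based Optimization, minimizing f.
    # Teacher phase: pull learners toward the best solution (the "teacher").
    # Learner phase: each learner exchanges information with a random peer.
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        fits = [f(x) for x in pop]
        teacher = pop[fits.index(min(fits))]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            # Teacher phase: move toward teacher, away from the class mean.
            tf = rng.choice([1, 2])  # teaching factor
            cand = clip([pop[i][d] + rng.random() * (teacher[d] - tf * mean[d])
                         for d in range(dim)])
            if f(cand) < f(pop[i]):
                pop[i] = cand
            # Learner phase: move toward a better peer, away from a worse one.
            j = rng.randrange(pop_size)
            if j != i:
                sign = 1 if f(pop[j]) < f(pop[i]) else -1
                cand = clip([pop[i][d] + sign * rng.random() * (pop[j][d] - pop[i][d])
                             for d in range(dim)])
                if f(cand) < f(pop[i]):
                    pop[i] = cand
    fits = [f(x) for x in pop]
    best = pop[fits.index(min(fits))]
    return best, min(fits)
```

In the paper's SVM application, f would be a cross-validation error over the SVM hyperparameters; here a plain test function suffices to exercise the search.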

  20. Research on SVM Parameter Optimization Based on an Improved FOA

    Institute of Scientific and Technical Information of China (English)

    张前图; 曾真真; 毛凯; 冯明峰; 宋振宇

    2016-01-01

    To improve the classification performance of support vector machines (SVM), and to address the low convergence precision of the fruit fly optimization algorithm (FOA) and its tendency to fall into local optima, an improved FOA (LFOA) is proposed and applied to SVM parameter optimization. During the search, the fruit fly population is dynamically divided into a weaker subgroup and a stronger subgroup according to its evolutionary level. The weaker subgroup performs a global search with the basic FOA under the guidance of the best individual, while the stronger subgroup performs a refined local search by making Levy flights around the best individual. The two subgroups exchange information by updating the global best individual and recombining the population. Classification tests on several classic data sets from the UCI database show that SVM parameter optimization based on LFOA improves the classification performance of SVM and outperforms several other methods.
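The subgroup scheme described above can be sketched as follows. A plain test function stands in for the SVM cross-validation objective, the Levy step uses Mantegna's algorithm, and all step sizes and names are illustrative assumptions rather than the paper's exact parameters:

```python
import math
import random

def levy_step(rng, beta=1.5):
    # Mantegna's algorithm for drawing a Levy-stable step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def lfoa(f, bounds, pop_size=30, iters=200, seed=1):
    # Split-population FOA sketch: the stronger half does fine local Levy
    # flights around the best individual; the weaker half searches globally
    # around it. Improvements are accepted greedily.
    rng = random.Random(seed)

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(iters):
        pop.sort(key=f)  # stronger individuals first
        half = pop_size // 2
        new_pop = []
        for i, x in enumerate(pop):
            if i < half:  # advanced subgroup: Levy flight around the best
                cand = clip([best[d] + 0.01 * levy_step(rng) * (hi - lo)
                             for d, (lo, hi) in enumerate(bounds)])
            else:         # drawback subgroup: broad search guided by the best
                cand = clip([best[d] + rng.uniform(-0.5, 0.5) * (hi - lo)
                             for d, (lo, hi) in enumerate(bounds)])
            new_pop.append(cand if f(cand) < f(x) else x)
        pop = new_pop
        cand_best = min(pop, key=f)
        if f(cand_best) < f(best):
            best = cand_best
    return best, f(best)
```

For SVM tuning, the two search dimensions would typically be the penalty C and the kernel width gamma, with f returning a cross-validation error.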