WorldWideScience

Sample records for novo computational prediction

  1. de novo computational enzyme design.

    Science.gov (United States)

    Zanghellini, Alexandre

    2014-10-01

    Recent advances in systems and synthetic biology as well as metabolic engineering are poised to transform industrial biotechnology by allowing us to design cell factories for the sustainable production of valuable fuels and chemicals. To deliver on their promises, such cell factories, as much as their brick-and-mortar counterparts, will require appropriate catalysts, especially for classes of reactions that are not known to be catalyzed by enzymes in natural organisms. A recently developed methodology, de novo computational enzyme design can be used to create enzymes catalyzing novel reactions. Here we review the different classes of chemical reactions for which active protein catalysts have been designed as well as the results of detailed biochemical and structural characterization studies. We also discuss how combining de novo computational enzyme design with more traditional protein engineering techniques can alleviate the shortcomings of state-of-the-art computational design techniques and create novel enzymes with catalytic proficiencies on par with natural enzymes. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. MRUniNovo: an efficient tool for de novo peptide sequencing utilizing the hadoop distributed computing framework.

    Science.gov (United States)

    Li, Chuang; Chen, Tao; He, Qiang; Zhu, Yunping; Li, Kenli

    2017-03-15

    Tandem mass spectrometry-based de novo peptide sequencing is a complex and time-consuming process. The current algorithms for de novo peptide sequencing cannot rapidly and thoroughly process large mass spectrometry datasets. In this paper, we propose MRUniNovo, a novel tool for parallel de novo peptide sequencing. MRUniNovo parallelizes UniNovo based on the Hadoop compute platform. Our experimental results demonstrate that MRUniNovo significantly reduces the computation time of de novo peptide sequencing without sacrificing the correctness and accuracy of the results, and thus can process very large datasets that UniNovo cannot. MRUniNovo is an open source software tool implemented in java. The source code and the parameter settings are available at http://bioinfo.hupo.org.cn/MRUniNovo/index.php. s131020002@hnu.edu.cn ; taochen1019@163.com. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  3. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    Science.gov (United States)

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.

  4. De novo structural modeling and computational sequence analysis ...

    African Journals Online (AJOL)

    Different bioinformatics tools and machine learning techniques were used for protein structural classification. De novo protein modeling was performed using the I-TASSER server. The final model obtained was assessed by PROCHECK and DFIRE2, which confirmed that the final model is reliable. Until complete biochemical ...

  5. Predicting survival of de novo metastatic breast cancer in Asian women: systematic review and validation study.

    Science.gov (United States)

    Miao, Hui; Hartman, Mikael; Bhoo-Pathy, Nirmala; Lee, Soo-Chin; Taib, Nur Aishah; Tan, Ern-Yu; Chan, Patrick; Moons, Karel G M; Wong, Hoong-Seam; Goh, Jeremy; Rahim, Siti Mastura; Yip, Cheng-Har; Verkooijen, Helena M

    2014-01-01

    In Asia, up to 25% of breast cancer patients present with distant metastases at diagnosis. Given the heterogeneous survival probabilities of de novo metastatic breast cancer, individual outcome prediction is challenging. The aim of the study is to identify existing prognostic models for patients with de novo metastatic breast cancer and validate them in Asia. We performed a systematic review to identify prediction models for metastatic breast cancer. Models were validated in 642 women with de novo metastatic breast cancer registered between 2000 and 2010 in the Singapore Malaysia Hospital Based Breast Cancer Registry. Survival curves for low, intermediate and high-risk groups according to each prognostic score were compared by log-rank test and discrimination of the models was assessed by concordance statistic (C-statistic). We identified 16 prediction models, seven of which were for patients with brain metastases only. Performance status, estrogen receptor status, metastatic site(s) and disease-free interval were the most common predictors. We were able to validate nine prediction models. The capacity of the models to discriminate between poor and good survivors varied from poor to fair with C-statistics ranging from 0.50 (95% CI, 0.48-0.53) to 0.63 (95% CI, 0.60-0.66). The discriminatory performance of existing prediction models for de novo metastatic breast cancer in Asia is modest. Development of an Asian-specific prediction model is needed to improve prognostication and guide decision making.
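
    The discrimination figures quoted above are concordance statistics. As a generic illustration of how such a C-statistic is computed for right-censored survival data (this is a hedged sketch, not the registry's actual validation code; the toy numbers are hypothetical):

    ```python
    # Minimal sketch of Harrell's concordance index for right-censored data.
    def concordance_index(times, events, risk_scores):
        """times: follow-up times; events: 1 = death observed, 0 = censored;
        risk_scores: higher score = predicted worse prognosis (shorter survival)."""
        concordant, comparable = 0.0, 0
        n = len(times)
        for i in range(n):
            for j in range(n):
                # A pair is usable only if the shorter follow-up ended in an observed event.
                if times[i] < times[j] and events[i] == 1:
                    comparable += 1
                    if risk_scores[i] > risk_scores[j]:
                        concordant += 1.0        # higher risk died earlier: concordant
                    elif risk_scores[i] == risk_scores[j]:
                        concordant += 0.5        # ties count half
        return concordant / comparable

    # Toy call: perfectly ranked data gives 1.0; 0.5 would be chance level.
    print(concordance_index([5, 8, 12, 20], [1, 1, 0, 1], [0.9, 0.7, 0.4, 0.2]))
    ```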

  6. Building a Better Fragment Library for De Novo Protein Structure Prediction

    Science.gov (United States)

    de Oliveira, Saulo H. P.; Shi, Jiye; Deane, Charlotte M.

    2015-01-01

    Fragment-based approaches are the current standard for de novo protein structure prediction. These approaches rely on accurate and reliable fragment libraries to generate good structural models. In this work, we describe a novel method for structure fragment library generation and its application in fragment-based de novo protein structure prediction. The importance of correct testing procedures in assessing the quality of fragment libraries is demonstrated. In particular, homologs of the target must be excluded from the libraries to correctly simulate a de novo protein structure prediction scenario, something which, surprisingly, is not always done. We demonstrate that fragments presenting different predominant predicted secondary structures should be treated differently during the fragment library generation step and that exhaustive and random search strategies should both be used. This information was used to develop a novel method, Flib. On a validation set of 41 structurally diverse proteins, Flib libraries present both higher precision and coverage than two of the state-of-the-art methods, NNMake and HHFrag. Flib also achieves better precision and coverage on the set of 275 protein domains used in the two previous experiments of the Critical Assessment of Structure Prediction (CASP9 and CASP10). We compared Flib libraries against NNMake libraries in a structure prediction context. Of the 13 cases in which a correct answer was generated, Flib models were more accurate than NNMake models for 10. Flib is available for download at: http://www.stats.ox.ac.uk/research/proteins/resources. PMID:25901595
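
    For readers unfamiliar with the two metrics, the sketch below shows one common way to compute fragment-library precision (share of fragments close to the native structure) and coverage (share of target positions with at least one such fragment). It illustrates the general idea only, not Flib's implementation; the RMSD cutoff and the rmsd_to_native helper are hypothetical.

    ```python
    RMSD_CUTOFF = 1.5  # Angstrom threshold for calling a fragment "near-native" (illustrative)

    def precision_and_coverage(library, rmsd_to_native):
        """library: dict mapping target position -> list of candidate fragments.
        rmsd_to_native: hypothetical helper (position, fragment) -> RMSD against
        the native structure at that position."""
        n_good = n_total = n_covered = 0
        for position, fragments in library.items():
            good = [f for f in fragments if rmsd_to_native(position, f) <= RMSD_CUTOFF]
            n_good += len(good)
            n_total += len(fragments)
            n_covered += bool(good)
        precision = n_good / n_total          # fraction of library fragments that are near-native
        coverage = n_covered / len(library)   # fraction of positions with a near-native fragment
        return precision, coverage
    ```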

  7. Building a better fragment library for de novo protein structure prediction.

    Directory of Open Access Journals (Sweden)

    Saulo H P de Oliveira

    Full Text Available Fragment-based approaches are the current standard for de novo protein structure prediction. These approaches rely on accurate and reliable fragment libraries to generate good structural models. In this work, we describe a novel method for structure fragment library generation and its application in fragment-based de novo protein structure prediction. The importance of correct testing procedures in assessing the quality of fragment libraries is demonstrated. In particular, homologs of the target must be excluded from the libraries to correctly simulate a de novo protein structure prediction scenario, something which, surprisingly, is not always done. We demonstrate that fragments presenting different predominant predicted secondary structures should be treated differently during the fragment library generation step and that exhaustive and random search strategies should both be used. This information was used to develop a novel method, Flib. On a validation set of 41 structurally diverse proteins, Flib libraries present both higher precision and coverage than two of the state-of-the-art methods, NNMake and HHFrag. Flib also achieves better precision and coverage on the set of 275 protein domains used in the two previous experiments of the Critical Assessment of Structure Prediction (CASP9 and CASP10). We compared Flib libraries against NNMake libraries in a structure prediction context. Of the 13 cases in which a correct answer was generated, Flib models were more accurate than NNMake models for 10. Flib is available for download at: http://www.stats.ox.ac.uk/research/proteins/resources.

  8. Predicting survival of de novo metastatic breast cancer in Asian women: systematic review and validation study.

    Directory of Open Access Journals (Sweden)

    Hui Miao

    Full Text Available BACKGROUND: In Asia, up to 25% of breast cancer patients present with distant metastases at diagnosis. Given the heterogeneous survival probabilities of de novo metastatic breast cancer, individual outcome prediction is challenging. The aim of the study is to identify existing prognostic models for patients with de novo metastatic breast cancer and validate them in Asia. MATERIALS AND METHODS: We performed a systematic review to identify prediction models for metastatic breast cancer. Models were validated in 642 women with de novo metastatic breast cancer registered between 2000 and 2010 in the Singapore Malaysia Hospital Based Breast Cancer Registry. Survival curves for low, intermediate and high-risk groups according to each prognostic score were compared by log-rank test and discrimination of the models was assessed by concordance statistic (C-statistic). RESULTS: We identified 16 prediction models, seven of which were for patients with brain metastases only. Performance status, estrogen receptor status, metastatic site(s) and disease-free interval were the most common predictors. We were able to validate nine prediction models. The capacity of the models to discriminate between poor and good survivors varied from poor to fair with C-statistics ranging from 0.50 (95% CI, 0.48-0.53) to 0.63 (95% CI, 0.60-0.66). CONCLUSION: The discriminatory performance of existing prediction models for de novo metastatic breast cancer in Asia is modest. Development of an Asian-specific prediction model is needed to improve prognostication and guide decision making.

  9. Predicting biological system objectives de novo from internal state measurements

    Directory of Open Access Journals (Sweden)

    Maranas Costas D

    2008-01-01

    Full Text Available Abstract Background Optimization theory has been applied to complex biological systems to interrogate network properties and develop and refine metabolic engineering strategies. For example, methods are emerging to engineer cells to optimally produce byproducts of commercial value, such as bioethanol, as well as molecular compounds for disease therapy. Flux balance analysis (FBA) is an optimization framework that aids in this interrogation by generating predictions of optimal flux distributions in cellular networks. Critical features of FBA are the definition of a biologically relevant objective function (e.g., maximizing the rate of synthesis of biomass, a unit of measurement of cellular growth) and the subsequent application of linear programming (LP) to identify fluxes through a reaction network. Despite the success of FBA, a central remaining challenge is the definition of a network objective with biological meaning. Results We present a novel method called Biological Objective Solution Search (BOSS) for the inference of an objective function of a biological system from its underlying network stoichiometry as well as experimentally-measured state variables. Specifically, BOSS identifies a system objective by defining a putative stoichiometric "objective reaction," adding this reaction to the existing set of stoichiometric constraints arising from known interactions within a network, and maximizing the putative objective reaction via LP, all the while minimizing the difference between the resultant in silico flux distribution and available experimental (e.g., isotopomer) flux data. This new approach allows for discovery of objectives with previously unknown stoichiometry, thus extending the biological relevance from earlier methods. We verify our approach on the well-characterized central metabolic network of Saccharomyces cerevisiae. Conclusion We illustrate how BOSS offers insight into the functional organization of biochemical networks
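
    The FBA step described here is an ordinary linear program: maximize an objective flux subject to steady-state stoichiometric constraints and flux bounds. A minimal sketch on a toy three-reaction network (not BOSS itself, which additionally infers the objective reaction) using scipy:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: v1 imports metabolite A, v2 converts A to biomass, v3 secretes A as a byproduct.
    # Steady state requires S @ v = 0 for the single internal metabolite A.
    S = np.array([[1.0, -1.0, -1.0]])            # rows: metabolites, columns: reactions v1..v3
    bounds = [(0, 10.0), (0, None), (0, None)]   # flux bounds; uptake v1 capped at 10
    c = np.array([0.0, -1.0, 0.0])               # maximize v2 (biomass) => minimize -v2

    res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds, method="highs")
    print("optimal flux distribution:", res.x)   # expected: all uptake routed to biomass, [10, 10, 0]
    ```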

  10. Use of transient elastography to predict de novo recurrence after radiofrequency ablation for hepatocellular carcinoma.

    Science.gov (United States)

    Lee, Sang Hoon; Kim, Seung Up; Jang, Jeong Won; Bae, Si Hyun; Lee, Sanghun; Kim, Beom Kyung; Park, Jun Yong; Kim, Do Young; Ahn, Sang Hoon; Han, Kwang-Hyub

    2015-01-01

    Liver stiffness (LS) measurement using transient elastography can accurately assess the degree of liver fibrosis, which is associated with the risk of the development of hepatocellular carcinoma (HCC). We investigated whether LS values could predict HCC de novo recurrence after radiofrequency ablation (RFA). This retrospective, multicenter study analyzed 111 patients with HCC who underwent RFA and LS measurement using transient elastography between May 2005 and April 2011. All patients were followed until March 2013 to monitor for HCC recurrence. This study included 76 men and 35 women with a mean age of 62.4 years, and the mean LS value was 21.2 kPa. During the follow-up period (median 22.4 months), 47 (42.3%) patients experienced HCC de novo recurrence, and 18 (16.2%) died. Patients with recurrence had significantly more frequent liver cirrhosis, more frequent history of previous treatment for HCC, higher total bilirubin, larger spleen size, larger total tumor size, higher tumor number, higher LS values, and lower platelet counts than those without recurrence (all P<0.05). On multivariate analysis, together with previous anti-HCC treatment history, patients with LS values >13.0 kPa were at significantly greater risk for recurrence after RFA, with a hazard ratio (HR) of 3.115 (95% confidence interval [CI], 1.238-7.842, P<0.05). Moreover, LS values independently predicted mortality after RFA, with a HR of 9.834 (95% CI, 1.148-84.211, P<0.05), together with total bilirubin. LS measurement is a useful predictor of HCC de novo recurrence and overall survival after RFA.

  11. De novo prediction of human chromosome structures: Epigenetic marking patterns encode genome architecture.

    Science.gov (United States)

    Di Pierro, Michele; Cheng, Ryan R; Lieberman Aiden, Erez; Wolynes, Peter G; Onuchic, José N

    2017-11-14

    Inside the cell nucleus, genomes fold into organized structures that are characteristic of cell type. Here, we show that this chromatin architecture can be predicted de novo using epigenetic data derived from chromatin immunoprecipitation-sequencing (ChIP-Seq). We exploit the idea that chromosomes encode a 1D sequence of chromatin structural types. Interactions between these chromatin types determine the 3D structural ensemble of chromosomes through a process similar to phase separation. First, a neural network is used to infer the relation between the epigenetic marks present at a locus, as assayed by ChIP-Seq, and the genomic compartment in which those loci reside, as measured by DNA-DNA proximity ligation (Hi-C). Next, types inferred from this neural network are used as an input to an energy landscape model for chromatin organization [Minimal Chromatin Model (MiChroM)] to generate an ensemble of 3D chromosome conformations at a resolution of 50 kilobases (kb). After training the model, dubbed Maximum Entropy Genomic Annotation from Biomarkers Associated to Structural Ensembles (MEGABASE), on odd-numbered chromosomes, we predict the sequences of chromatin types and the subsequent 3D conformational ensembles for the even chromosomes. We validate these structural ensembles by using ChIP-Seq tracks alone to predict Hi-C maps, as well as distances measured using 3D fluorescence in situ hybridization (FISH) experiments. Both sets of experiments support the hypothesis of phase separation being the driving process behind compartmentalization. These findings strongly suggest that epigenetic marking patterns encode sufficient information to determine the global architecture of chromosomes and that de novo structure prediction for whole genomes may be increasingly possible. Copyright © 2017 the Author(s). Published by PNAS.
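
    The first step described above, mapping ChIP-Seq mark intensities at a locus to a chromatin type, is a standard supervised classification task. The sketch below illustrates that step only, with a generic scikit-learn network and purely synthetic data standing in for real ChIP-Seq tracks and Hi-C derived labels; it is not the MEGABASE implementation.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: 11 ChIP-Seq mark intensities per 50 kb locus, 3 chromatin types.
    n_loci, n_marks, n_types = 2000, 11, 3
    X = rng.normal(size=(n_loci, n_marks))        # would be real ChIP-Seq signals
    y = rng.integers(0, n_types, size=n_loci)     # would be Hi-C derived compartment labels
    chrom = rng.integers(1, 23, size=n_loci)      # chromosome of each locus

    train = chrom % 2 == 1                        # train on odd chromosomes ...
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X[train], y[train])
    print("accuracy on even chromosomes:", clf.score(X[~train], y[~train]))  # ~1/3 for random data
    ```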

  12. Computer loss experience and predictions

    Science.gov (United States)

    Parker, Donn B.

    1996-03-01

    The types of losses organizations must anticipate have become more difficult to predict because of the eclectic nature of computers and the data communications and the decrease in news media reporting of computer-related losses as they become commonplace. Total business crime is conjectured to be decreasing in frequency and increasing in loss per case as a result of increasing computer use. Computer crimes are probably increasing, however, as their share of the decreasing business crime rate grows. Ultimately all business crime will involve computers in some way, and we could see a decline of both together. The important information security measures in high-loss business crime generally concern controls over authorized people engaged in unauthorized activities. Such controls include authentication of users, analysis of detailed audit records, unannounced audits, segregation of development and production systems and duties, shielding the viewing of screens, and security awareness and motivation controls in high-value transaction areas. Computer crimes that involve highly publicized intriguing computer misuse methods, such as privacy violations, radio frequency emanations eavesdropping, and computer viruses, have been reported in waves that periodically have saturated the news media during the past 20 years. We must be able to anticipate such highly publicized crimes and reduce the impact and embarrassment they cause. On the basis of our most recent experience, I propose nine new types of computer crime to be aware of: computer larceny (theft and burglary of small computers), automated hacking (use of computer programs to intrude), electronic data interchange fraud (business transaction fraud), Trojan bomb extortion and sabotage (code security inserted into others' systems that can be triggered to cause damage), LANarchy (unknown equipment in use), desktop forgery (computerized forgery and counterfeiting of documents), information anarchy (indiscriminate use of

  13. DNApi: A De Novo Adapter Prediction Algorithm for Small RNA Sequencing Data.

    Science.gov (United States)

    Tsuji, Junko; Weng, Zhiping

    2016-01-01

    With the rapid accumulation of publicly available small RNA sequencing datasets, third-party meta-analysis across many datasets is becoming increasingly powerful. Although removing the 3' adapter is an essential step for small RNA sequencing analysis, the adapter sequence information is not always available in the metadata. The information can also be erroneous even when it is available. In this study, we developed DNApi, a lightweight Python software package that predicts the 3' adapter sequence de novo and provides the user with cleansed small RNA sequences ready for downstream analysis. Tested on 539 publicly available small RNA libraries accompanied by 3' adapter sequences in their metadata, DNApi shows near-perfect accuracy (98.5%) with fast runtime (~2.85 seconds per library) and efficient memory usage (~43 MB on average). In addition to 3' adapter prediction, it is also important to classify whether the input small RNA libraries have already been processed, i.e., whether the 3' adapters have been removed. Given another batch of datasets, DNApi correctly judged all 192 publicly available processed libraries to be "ready-to-map" small RNA sequences. DNApi is compatible with Python 2 and 3, and is available at https://github.com/jnktsj/DNApi. The 731 small RNA libraries used for DNApi evaluation were from human tissues and were carefully and manually collected. This study also provides readers with the curated datasets that can be integrated into their studies.
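
    The intuition behind de novo 3' adapter detection is that, in raw small RNA reads, the adapter sequence dominates the positions just downstream of the short insert. The toy sketch below illustrates that intuition with simple k-mer counting; it is deliberately simplified and is not the DNApi algorithm (the adapter and miRNA sequences are only examples).

    ```python
    from collections import Counter

    def guess_adapter_prefix(reads, insert_len=22, k=12):
        """Toy 3' adapter detection: the most frequent k-mer found right after a
        typical ~22 nt small RNA insert is taken as the adapter prefix."""
        counts = Counter(r[insert_len:insert_len + k] for r in reads if len(r) >= insert_len + k)
        prefix, n = counts.most_common(1)[0]
        return prefix, n / sum(counts.values())

    # Example: 100 identical let-7a reads carrying the start of a TruSeq-like adapter.
    reads = ["TGAGGTAGTAGGTTGTATAGTT" + "TGGAATTCTCGGGTGCCAAGG"[:14]] * 100
    print(guess_adapter_prefix(reads))   # -> ('TGGAATTCTCGG', 1.0)
    ```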

  14. Time-Predictable Computer Architecture

    Directory of Open Access Journals (Sweden)

    Schoeberl Martin

    2009-01-01

    Full Text Available Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET). Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated with implementation of some features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.

  15. Pushing the size limit of de novo structure ensemble prediction guided by sparse SDSL-EPR restraints to 200 residues: The monomeric and homodimeric forms of BAX

    Science.gov (United States)

    Fischer, Axel W.; Bordignon, Enrica; Bleicken, Stephanie; García-Sáez, Ana J.; Jeschke, Gunnar; Meiler, Jens

    2016-01-01

    Structure determination remains a challenge for many biologically important proteins. In particular, proteins that adopt multiple conformations often evade crystallization in all biologically relevant states. Although computational de novo protein folding approaches often sample biologically relevant conformations, the selection of the most accurate model for different functional states remains a formidable challenge, in particular, for proteins with more than about 150 residues. Electron paramagnetic resonance (EPR) spectroscopy can obtain limited structural information for proteins in well-defined biological states and thereby assist in selecting biologically relevant conformations. The present study demonstrates that de novo folding methods are able to accurately sample the folds of 192-residue long soluble monomeric Bcl-2-associated X protein (BAX). The tertiary structures of the monomeric and homodimeric forms of BAX were predicted using the primary structure as well as 25 and 11 EPR distance restraints, respectively. The predicted models were subsequently compared to respective NMR/X-ray structures of BAX. EPR restraints improve the protein-size normalized root-mean-square-deviation (RMSD100) of the most accurate models with respect to the NMR/crystal structure from 5.9 Å to 3.9 Å and from 5.7 Å to 3.3 Å, respectively. Additionally, the model discrimination is improved, which is demonstrated by an improvement of the enrichment from 5% to 15% and from 13% to 21%, respectively. PMID:27129417

  16. Crius: A Novel Fragment-Based Algorithm of De Novo Substrate Prediction for Enzymes.

    Science.gov (United States)

    Yao, Zhiqiang; Jiang, Shuiqin; Zhang, Lujia; Gao, Bei; He, Xiao; Zhang, John Z H; Wei, Dongzhi

    2018-05-03

    The study of enzyme substrate specificity is vital for developing potential applications of enzymes. However, the routine experimental procedures require a lot of resources in the discovery of novel substrates. This article reports an in silico structure-based algorithm called Crius, which predicts substrates for an enzyme. The results of this fragment-based algorithm show good agreement between the simulated and experimental substrate specificities, using a lipase from Candida antarctica (CALB), a nitrilase from the cyanobacterium Synechocystis sp. PCC6803 (Nit6803), and an aldo-keto reductase from Gluconobacter oxydans (Gox0644). This opens new prospects of developing computer algorithms that can effectively predict substrates for an enzyme. This article is protected by copyright. All rights reserved. © 2018 The Protein Society.

  17. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  18. Computational predictions of zinc oxide hollow structures

    Science.gov (United States)

    Tuoc, Vu Ngoc; Huan, Tran Doan; Thao, Nguyen Thi

    2018-03-01

    Nanoporous materials are emerging as potential candidates for a wide range of technological applications in the environment, electronics, and optoelectronics, to name just a few. Within this active research area, experimental works are predominant, while theoretical/computational prediction and study of these materials face some intrinsic challenges, one of which is how to predict porous structures. We propose a computationally and technically feasible approach for predicting zinc oxide structures with hollows at the nanoscale. The designed zinc oxide hollow structures are studied with computations using the density functional tight binding and conventional density functional theory methods, revealing a variety of promising mechanical and electronic properties, which can potentially find future realistic applications.

  19. Genome-wide prediction models that incorporate de novo GWAS are a powerful new tool for tropical rice improvement

    Science.gov (United States)

    Spindel, J E; Begum, H; Akdemir, D; Collard, B; Redoña, E; Jannink, J-L; McCouch, S

    2016-01-01

    To address the multiple challenges to food security posed by global climate change, population growth and rising incomes, plant breeders are developing new crop varieties that can enhance both agricultural productivity and environmental sustainability. Current breeding practices, however, are unable to keep pace with demand. Genomic selection (GS) is a new technique that helps accelerate the rate of genetic gain in breeding by using whole-genome data to predict the breeding value of offspring. Here, we describe a new GS model that combines RR-BLUP with markers fit as fixed effects selected from the results of a genome-wide-association study (GWAS) on the RR-BLUP training data. We term this model GS + de novo GWAS. In a breeding population of tropical rice, GS + de novo GWAS outperformed six other models for a variety of traits and in multiple environments. On the basis of these results, we propose an extended, two-part breeding design that can be used to efficiently integrate novel variation into elite breeding populations, thus expanding genetic diversity and enhancing the potential for sustainable productivity gains. PMID:26860200
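
    A rough sketch of the general idea, fixed effects for the strongest training-set GWAS hits plus ridge shrinkage (RR-BLUP-like) for the remaining markers, is shown below on synthetic data. It is only an approximation for illustration, not the authors' exact model or software.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p, n_fixed = 300, 1000, 5                         # lines, markers, GWAS hits kept as fixed effects

    X = rng.integers(0, 3, size=(n, p)).astype(float)    # marker genotypes coded 0/1/2
    beta = np.zeros(p); beta[:10] = rng.normal(0, 1, 10)
    y = X @ beta + rng.normal(0, 2.0, n)                 # synthetic phenotype

    # "de novo GWAS" on the training data: rank markers by squared correlation with the phenotype.
    r2 = np.array([np.corrcoef(X[:, j], y)[0, 1] ** 2 for j in range(p)])
    fixed = np.argsort(r2)[-n_fixed:]                    # strongest hits become fixed effects
    rest = np.setdiff1d(np.arange(p), fixed)

    # Ridge regression (RR-BLUP-like shrinkage) with the GWAS hits left unpenalized.
    Z = np.hstack([X[:, fixed], X[:, rest]])
    penalty = np.concatenate([np.zeros(n_fixed), np.full(rest.size, 50.0)])
    coef = np.linalg.solve(Z.T @ Z + np.diag(penalty), Z.T @ y)
    gebv = Z @ coef                                      # genomic estimated breeding values
    print("correlation of GEBV with phenotype:", np.corrcoef(gebv, y)[0, 1])
    ```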

  20. Use of transient elastography to predict de novo recurrence after radiofrequency ablation for hepatocellular carcinoma

    Directory of Open Access Journals (Sweden)

    Lee SH

    2015-02-01

    Full Text Available Sang Hoon Lee,1 Seung Up Kim,1–3 Jeong Won Jang,4 Si Hyun Bae,4 Sanghun Lee,1,3 Beom Kyung Kim,1–3 Jun Yong Park,1–3 Do Young Kim,1–3 Sang Hoon Ahn,1–3 Kwang-Hyub Han1–3 (1Department of Internal Medicine, 2Institute of Gastroenterology, Yonsei University College of Medicine, 3Liver Cirrhosis Clinical Research Center, 4Department of Internal Medicine, College of Medicine, Catholic University of Korea, Seoul, Korea). Background/purpose: Liver stiffness (LS) measurement using transient elastography can accurately assess the degree of liver fibrosis, which is associated with the risk of the development of hepatocellular carcinoma (HCC). We investigated whether LS values could predict HCC de novo recurrence after radiofrequency ablation (RFA). Methods: This retrospective, multicenter study analyzed 111 patients with HCC who underwent RFA and LS measurement using transient elastography between May 2005 and April 2011. All patients were followed until March 2013 to monitor for HCC recurrence. Results: This study included 76 men and 35 women with a mean age of 62.4 years, and the mean LS value was 21.2 kPa. During the follow-up period (median 22.4 months), 47 (42.3%) patients experienced HCC de novo recurrence, and 18 (16.2%) died. Patients with recurrence had significantly more frequent liver cirrhosis, more frequent history of previous treatment for HCC, higher total bilirubin, larger spleen size, larger total tumor size, higher tumor number, higher LS values, and lower platelet counts than those without recurrence (all P<0.05). On multivariate analysis, together with previous anti-HCC treatment history, patients with LS values >13.0 kPa were at significantly greater risk for recurrence after RFA, with a hazard ratio (HR) of 3.115 (95% confidence interval [CI], 1.238–7.842, P<0.05). Moreover, LS values independently predicted the mortality after RFA, with a HR of 9.834 (95% CI, 1.148–84.211, P<0.05), together with total bilirubin. Conclusions: Our

  1. Soft Computing Methods for Disulfide Connectivity Prediction.

    Science.gov (United States)

    Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.
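
    Many of the reviewed approaches first score every candidate cysteine pair and then choose a consistent set of bonds. One common way to formalize that final step, shown here as a generic illustration with made-up scores rather than the method of any specific reviewed paper, is maximum-weight matching:

    ```python
    import networkx as nx

    # Hypothetical bond scores for four cysteines, e.g. from an SVM or neural network.
    scores = {(0, 1): 0.20, (0, 2): 0.75, (0, 3): 0.15,
              (1, 2): 0.10, (1, 3): 0.80, (2, 3): 0.25}

    G = nx.Graph()
    for (i, j), s in scores.items():
        G.add_edge(i, j, weight=s)

    # Each cysteine is paired at most once; the matching maximizes the summed bond scores.
    pairs = nx.max_weight_matching(G, maxcardinality=True)
    print(sorted(tuple(sorted(p)) for p in pairs))   # -> [(0, 2), (1, 3)]
    ```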

  2. De Novo Prediction of Stem Cell Identity using Single-Cell Transcriptome Data

    NARCIS (Netherlands)

    Grun, D.; Muraro, M.J.; Boisset, J.C.; Wiebrands, K.; Lyubimova, A.; Dharmadhikari, G.; Born, M. van den; Es, J. van; Jansen, E.; Clevers, H.; Koning, E.J. de; Oudenaarden, A. van

    2016-01-01

    Adult mitotic tissues like the intestine, skin, and blood undergo constant turnover throughout the life of an organism. Knowing the identity of the stem cell is crucial to understanding tissue homeostasis and its aberrations upon disease. Here we present a computational method for the derivation of

  3. De novo protein structure prediction by dynamic fragment assembly and conformational space annealing.

    Science.gov (United States)

    Lee, Juyong; Lee, Jinhyuk; Sasaki, Takeshi N; Sasai, Masaki; Seok, Chaok; Lee, Jooyoung

    2011-08-01

    Ab initio protein structure prediction is a challenging problem that requires both an accurate energetic representation of a protein structure and an efficient conformational sampling method for successful protein modeling. In this article, we present an ab initio structure prediction method which combines a recently suggested novel way of fragment assembly, dynamic fragment assembly (DFA) and conformational space annealing (CSA) algorithm. In DFA, model structures are scored by continuous functions constructed based on short- and long-range structural restraint information from a fragment library. Here, DFA is represented by the full-atom model by CHARMM with the addition of the empirical potential of DFIRE. The relative contributions between various energy terms are optimized using linear programming. The conformational sampling was carried out with CSA algorithm, which can find low energy conformations more efficiently than simulated annealing used in the existing DFA study. The newly introduced DFA energy function and CSA sampling algorithm are implemented into CHARMM. Test results on 30 small single-domain proteins and 13 template-free modeling targets of the 8th Critical Assessment of protein Structure Prediction show that the current method provides comparable and complementary prediction results to existing top methods. Copyright © 2011 Wiley-Liss, Inc.

  4. A computational model predicting disruption of blood vessel development.

    Directory of Open Access Journals (Sweden)

    Nicole Kleinstreuer

    2013-04-01

    Full Text Available Vascular development is a complex process regulated by dynamic biological networks that vary in topology and state across different tissues and developmental stages. Signals regulating de novo blood vessel formation (vasculogenesis) and remodeling (angiogenesis) come from a variety of biological pathways linked to endothelial cell (EC) behavior, extracellular matrix (ECM) remodeling and the local generation of chemokines and growth factors. Simulating these interactions at a systems level requires sufficient biological detail about the relevant molecular pathways and associated cellular behaviors, and tractable computational models that offset mathematical and biological complexity. Here, we describe a novel multicellular agent-based model of vasculogenesis using the CompuCell3D (http://www.compucell3d.org/) modeling environment supplemented with semi-automatic knowledgebase creation. The model incorporates vascular endothelial growth factor signals, pro- and anti-angiogenic inflammatory chemokine signals, and the plasminogen activating system of enzymes and proteases linked to ECM interactions, to simulate nascent EC organization, growth and remodeling. The model was shown to recapitulate stereotypical capillary plexus formation and structural emergence of non-coded cellular behaviors, such as a heterologous bridging phenomenon linking endothelial tip cells together during formation of polygonal endothelial cords. Molecular targets in the computational model were mapped to signatures of vascular disruption derived from in vitro chemical profiling using the EPA's ToxCast high-throughput screening (HTS) dataset. Simulating the HTS data with the cell-agent based model of vascular development predicted adverse effects of a reference anti-angiogenic thalidomide analog, 5HPP-33, on in vitro angiogenesis with respect to both concentration-response and morphological consequences. These findings support the utility of cell agent-based models for simulating a

  5. The dual role of fragments in fragment-assembly methods for de novo protein structure prediction

    Science.gov (United States)

    Handl, Julia; Knowles, Joshua; Vernon, Robert; Baker, David; Lovell, Simon C.

    2013-01-01

    In fragment-assembly techniques for protein structure prediction, models of protein structure are assembled from fragments of known protein structures. This process is typically guided by a knowledge-based energy function and uses a heuristic optimization method. The fragments play two important roles in this process: they define the set of structural parameters available, and they also assume the role of the main variation operators that are used by the optimiser. Previous analysis has typically focused on the first of these roles. In particular, the relationship between local amino acid sequence and local protein structure has been studied by a range of authors. The correlation between the two has been shown to vary with the window length considered, and the results of these analyses have informed directly the choice of fragment length in state-of-the-art prediction techniques. Here, we focus on the second role of fragments and aim to determine the effect of fragment length from an optimization perspective. We use theoretical analyses to reveal how the size and structure of the search space changes as a function of insertion length. Furthermore, empirical analyses are used to explore additional ways in which the size of the fragment insertion influences the search both in a simulation model and for the fragment-assembly technique, Rosetta. PMID:22095594

  6. RNA secondary structure prediction using soft computing.

    Science.gov (United States)

    Ray, Shubhra Sankar; Pal, Sankar K

    2013-01-01

    Prediction of RNA structure is invaluable in creating new drugs and understanding genetic diseases. Several deterministic algorithms and soft computing-based techniques have been developed for more than a decade to determine the structure from a known RNA sequence. Soft computing gained importance with the need to get approximate solutions for RNA sequences by considering the issues related with kinetic effects, cotranscriptional folding, and estimation of certain energy parameters. A brief description of some of the soft computing-based techniques, developed for RNA secondary structure prediction, is presented along with their relevance. The basic concepts of RNA and its different structural elements like helix, bulge, hairpin loop, internal loop, and multiloop are described. These are followed by different methodologies, employing genetic algorithms, artificial neural networks, and fuzzy logic. The role of various metaheuristics, like simulated annealing, particle swarm optimization, ant colony optimization, and tabu search is also discussed. A relative comparison among different techniques, in predicting 12 known RNA secondary structures, is presented, as an example. Future challenging issues are then mentioned.

  7. Computational prediction of protein hot spot residues.

    Science.gov (United States)

    Morrow, John Kenneth; Zhang, Shuxing

    2012-01-01

    Most biological processes involve multiple proteins interacting with each other. It has been recently discovered that certain residues in these protein-protein interactions, which are called hot spots, contribute more significantly to binding affinity than others. Hot spot residues have unique and diverse energetic properties that make them challenging yet important targets in the modulation of protein-protein complexes. Design of therapeutic agents that interact with hot spot residues has proven to be a valid methodology in disrupting unwanted protein-protein interactions. Using biological methods to determine which residues are hot spots can be costly and time consuming. Recent advances in computational approaches to predict hot spots have incorporated a myriad of features, and have shown increasing predictive successes. Here we review the state of knowledge around protein-protein interactions, hot spots, and give an overview of multiple in silico prediction techniques of hot spot residues.

  8. Computational Prediction of Hot Spot Residues

    Science.gov (United States)

    Morrow, John Kenneth; Zhang, Shuxing

    2013-01-01

    Most biological processes involve multiple proteins interacting with each other. It has been recently discovered that certain residues in these protein-protein interactions, which are called hot spots, contribute more significantly to binding affinity than others. Hot spot residues have unique and diverse energetic properties that make them challenging yet important targets in the modulation of protein-protein complexes. Design of therapeutic agents that interact with hot spot residues has proven to be a valid methodology in disrupting unwanted protein-protein interactions. Using biological methods to determine which residues are hot spots can be costly and time consuming. Recent advances in computational approaches to predict hot spots have incorporated a myriad of features, and have shown increasing predictive successes. Here we review the state of knowledge around protein-protein interactions, hot spots, and give an overview of multiple in silico prediction techniques of hot spot residues. PMID:22316154

  9. Thermal sensation prediction by soft computing methodology.

    Science.gov (United States)

    Jović, Srđan; Arsić, Nebojša; Vilimonović, Jovana; Petković, Dalibor

    2016-12-01

    Thermal comfort in open urban areas depends strongly on environmental factors, so suitable thermal comfort needs to be accounted for during urban planning and design. Thermal comfort can be modeled from climatic parameters and other factors; these factors vary throughout the year and across the day, so an algorithm is needed that predicts thermal comfort from the input variables. The prediction results could be used to plan when urban areas are used. Since this is a highly nonlinear task, a soft computing methodology was applied in this investigation to predict thermal comfort. The main goal was to apply an extreme learning machine (ELM) for forecasting of physiological equivalent temperature (PET) values. Temperature, pressure, wind speed and irradiance were used as inputs. The prediction results are compared with some benchmark models. Based on the results, ELM can be used effectively in forecasting of PET. Copyright © 2016 Elsevier Ltd. All rights reserved.
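
    An extreme learning machine is a single-hidden-layer network whose input weights are random and fixed, so only the output weights are fitted, by ordinary least squares. Below is a minimal sketch with synthetic stand-ins for the four inputs named above; it is an illustration of the ELM idea, not the authors' trained model or data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for the four inputs (temperature, pressure, wind speed, irradiance).
    X = rng.uniform([-5.0, 980.0, 0.0, 0.0], [35.0, 1040.0, 10.0, 1000.0], size=(500, 4))
    y = 0.8 * X[:, 0] - 0.5 * X[:, 2] + 0.01 * X[:, 3] + rng.normal(0, 1.0, 500)  # PET-like target
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)            # standardize before the random projection

    def elm_fit(X, y, n_hidden=50):
        """Extreme learning machine: random, fixed input weights; output weights by least squares."""
        W, b = rng.normal(size=(X.shape[1], n_hidden)), rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                           # random hidden-layer features
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return W, b, beta

    def elm_predict(model, X):
        W, b, beta = model
        return np.tanh(X @ W + b) @ beta

    model = elm_fit(Xs[:400], y[:400])
    rmse = np.sqrt(np.mean((elm_predict(model, Xs[400:]) - y[400:]) ** 2))
    print("held-out RMSE:", rmse)
    ```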

  10. iScreen: world's first cloud-computing web server for virtual screening and de novo drug design based on TCM database@Taiwan.

    Science.gov (United States)

    Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian

    2011-06-01

    The rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have less experience with command line systems. For customized docking, multiple docking services, including standard, in-water, pH environment, and flexible docking modes, are implemented. Users can download the first 200 TCM compounds of the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors for a user's interest. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research to a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.

  12. Computational thermofracture mechanics and life prediction

    International Nuclear Information System (INIS)

    Hsu Tairan

    1992-01-01

    This paper will present computational techniques used for the prediction of the thermofracture behaviour of structures subject to either monotonic or cyclic combined thermal and mechanical loadings. Two specific areas will be dealt with in the paper. (1) The Time-invariant thermofracture of leaking pipelines with non-uniform temperature fields; in this case, the induced non-uniform temperature fields near leaking cracks have shown to be significant. The severity of these temperature fields on the thermofracture behaviour of the pipeline will be demonstrated by a numerical example. (2) Thermomechanical creep fracture of structures: Recent developments, including those of the author's own work, on cyclic creep-fracture using damage theory will be presented. Long 'hold' and 'dwell' times, which occur in the actual operations of nuclear power plant components have been shown to have a significant effect on the overall creep-fracture behaviour of the material. Constitutive laws, which include most of these effects, have been incorporated into the existing TEPSAC code for the prediction of crack growth in solids under cyclic creep loadings. The effectiveness of using the damage parameters as fracture criteria, and the presence of plastic deformation in the overall results will be assessed. (orig.)

  13. Computational predictive methods for fracture and fatigue

    Science.gov (United States)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-09-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures insure that damages developed during service remain below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specifications MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulk heads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

  14. Seismic activity prediction using computational intelligence techniques in northern Pakistan

    Science.gov (United States)

    Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat

    2017-10-01

    An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology involves interdisciplinary interaction between seismology and computational intelligence. Eight seismic parameters are computed based upon the past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, recurrent neural network, random forest, multilayer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions along with an accuracy of 75% and a positive predictive value of 78% in the context of northern Pakistan.
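
    The feature-selection step described above ranks seismic parameters by information gain with respect to the earthquake/no-earthquake label. A small sketch of that computation on synthetic data follows (the binning choices and numbers are illustrative, not the study's settings):

    ```python
    import numpy as np

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def information_gain(feature, labels, n_bins=4):
        """IG = H(labels) - H(labels | discretized feature); larger means more predictive."""
        edges = np.quantile(feature, np.linspace(0, 1, n_bins + 1)[1:-1])
        bins = np.digitize(feature, edges)
        h_cond = sum((bins == b).mean() * entropy(labels[bins == b]) for b in np.unique(bins))
        return entropy(labels) - h_cond

    # Synthetic check: one informative "seismic parameter" and one that is pure noise.
    rng = np.random.default_rng(0)
    label = rng.integers(0, 2, 1000)                     # 1 = target earthquake occurred
    informative = 2.0 * label + rng.normal(0, 1, 1000)
    noise = rng.normal(0, 1, 1000)
    print(information_gain(informative, label), information_gain(noise, label))
    ```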

  15. A comparative analysis of soft computing techniques for gene prediction.

    Science.gov (United States)

    Goel, Neelam; Singh, Shailendra; Aseri, Trilok Chand

    2013-07-01

    The rapid growth of genomic sequence data for both human and nonhuman species has made analyzing these sequences, especially predicting genes in them, very important and is currently the focus of many research efforts. Beside its scientific interest in the molecular biology and genomics community, gene prediction is of considerable importance in human health and medicine. A variety of gene prediction techniques have been developed for eukaryotes over the past few years. This article reviews and analyzes the application of certain soft computing techniques in gene prediction. First, the problem of gene prediction and its challenges are described. These are followed by different soft computing techniques along with their application to gene prediction. In addition, a comparative analysis of different soft computing techniques for gene prediction is given. Finally some limitations of the current research activities and future research directions are provided. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Predictive Models and Computational Toxicology (II IBAMTOX)

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  17. Computationally efficient prediction of area per lipid

    DEFF Research Database (Denmark)

    Chaban, Vitaly V.

    2014-01-01

    dynamics increases exponentially with respect to temperature. APL dependence on temperature is linear over an entire temperature range. I provide numerical evidence that thermal expansion coefficient of a lipid bilayer can be computed at elevated temperatures and extrapolated to the temperature of interest...
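
    The practical recipe implied here is simple: run short simulations at several elevated temperatures, fit the linear APL(T) trend, and extrapolate to the temperature of interest. A sketch with made-up numbers (not the paper's data):

    ```python
    import numpy as np

    # Hypothetical area-per-lipid values from short simulations at elevated temperatures.
    T = np.array([360.0, 380.0, 400.0, 420.0])       # K
    apl = np.array([0.655, 0.668, 0.681, 0.694])      # nm^2, illustrative numbers only

    slope, intercept = np.polyfit(T, apl, 1)          # linear fit: APL(T) = slope * T + intercept
    apl_310 = slope * 310.0 + intercept               # extrapolate down to the temperature of interest
    alpha = slope / apl_310                           # area thermal expansion coefficient at 310 K
    print(f"APL(310 K) ~ {apl_310:.3f} nm^2, expansion coefficient ~ {alpha:.2e} 1/K")
    ```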

  18. Predictive access control for distributed computation

    DEFF Research Database (Denmark)

    Yang, Fan; Hankin, Chris; Nielson, Flemming

    2013-01-01

    We show how to use aspect-oriented programming to separate security and trust issues from the logical design of mobile, distributed systems. The main challenge is how to enforce various types of security policies, in particular predictive access control policies, that is, policies based on the future behavior of a program. A novel feature of our approach is that we can define policies concerning secondary use of data.

  19. Computationally Efficient Prediction of Ionic Liquid Properties

    DEFF Research Database (Denmark)

    Chaban, V. V.; Prezhdo, O. V.

    2014-01-01

    Due to fundamental differences, room-temperature ionic liquids (RTIL) are significantly more viscous than conventional molecular liquids and require long simulation times. At the same time, RTILs remain in the liquid state over a much broader temperature range than the ordinary liquids. We exploit...... to ambient temperatures. We numerically prove the validity of the proposed concept for density and ionic diffusion of four different RTILs. This simple method enhances the computational efficiency of the existing simulation approaches as applied to RTILs by more than an order of magnitude....

  20. A large-scale evaluation of computational protein function prediction

    NARCIS (Netherlands)

    Radivojac, P.; Clark, W.T.; Oron, T.R.; Schnoes, A.M.; Wittkop, T.; Kourmpetis, Y.A.I.; Dijk, van A.D.J.; Friedberg, I.

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be

  1. Evolutionary Computation Techniques for Predicting Atmospheric Corrosion

    Directory of Open Access Journals (Sweden)

    Amine Marref

    2013-01-01

    Full Text Available Corrosion occurs in many engineering structures such as bridges, pipelines, and refineries and leads to the destruction of materials in a gradual manner and thus shortening their lifespan. It is therefore crucial to assess the structural integrity of engineering structures which are approaching or exceeding their designed lifespan in order to ensure their correct functioning, for example, carrying ability and safety. An understanding of corrosion and an ability to predict corrosion rate of a material in a particular environment plays a vital role in evaluating the residual life of the material. In this paper we investigate the use of genetic programming and genetic algorithms in the derivation of corrosion-rate expressions for steel and zinc. Genetic programming is used to automatically evolve corrosion-rate expressions while a genetic algorithm is used to evolve the parameters of an already engineered corrosion-rate expression. We show that both evolutionary techniques yield corrosion-rate expressions that have good accuracy.

  2. Computational prediction of chemical reactions: current status and outlook.

    Science.gov (United States)

    Engkvist, Ola; Norrby, Per-Ola; Selmi, Nidhal; Lam, Yu-Hong; Peng, Zhengwei; Sherer, Edward C; Amberg, Willi; Erhard, Thomas; Smyth, Lynette A

    2018-06-01

    Over the past few decades, various computational methods have become increasingly important for discovering and developing novel drugs. Computational prediction of chemical reactions is a key part of an efficient drug discovery process. In this review, we discuss important parts of this field, with a focus on utilizing reaction data to build predictive models, the existing programs for synthesis prediction, and usage of quantum mechanics and molecular mechanics (QM/MM) to explore chemical reactions. We also outline potential future developments with an emphasis on pre-competitive collaboration opportunities. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Computational methods in sequence and structure prediction

    Science.gov (United States)

    Lang, Caiyi

    This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our results show that there exists a putative cis-regulatory element "AC(C/G)TAC(C)" upstream of these enzyme genes. We propose that this cis-regulatory element is responsible for the genetic regulation of these three enzymes, and that it might also be the binding site for the MYB-class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1-deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning for the occurrence of several groups of known abscisic acid (ABA)-related cis-regulatory elements in the upstream regions of the 876 Arabidopsis genes, and (b) exhaustively scanning for all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each of the cis-regulatory element candidates. We conclude that one specific cis-regulatory element group, the "ABRE" elements, is statistically highly enriched within the 876-gene group compared to its occurrence within the genome. (c) We will introduce a new general-purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework. With this framework we have developed
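
    The motif-enrichment step described in (b) can be illustrated with a small sketch: count the promoters containing a candidate motif in the target set versus a background set, and score the over-representation with a hypergeometric test. The regex for the AC(C/G)TAC(C) element and the toy sequences below are assumptions for illustration; the actual scan in the dissertation covers all 6-10 bp motifs against genome-wide promoters.

```python
import re
from scipy.stats import hypergeom

# Degenerate motif from the text, AC(C/G)TAC(C), written as a regular expression.
MOTIF = re.compile("AC[CG]TACC?", re.IGNORECASE)

def has_motif(seq):
    return MOTIF.search(seq) is not None

def enrichment(target_seqs, background_seqs):
    """One-sided hypergeometric test: are motif-containing promoters
    over-represented in the target set relative to the background?"""
    k = sum(has_motif(s) for s in target_seqs)        # motif-containing promoters in the target set
    K = sum(has_motif(s) for s in background_seqs)    # motif-containing promoters in the background
    n, N = len(target_seqs), len(background_seqs)
    p_value = hypergeom.sf(k - 1, N, K, n)            # P(X >= k)
    return k, p_value

# Toy example with made-up promoter fragments.
targets = ["ttgACCTACCatg", "gggACGTACtta", "aaatttccgg"]
background = targets + ["acgtacgtac", "ttttccccgggg", "cagtACCTAgg"] * 10
print(enrichment(targets, background))
```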

  4. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  5. Predictive Control of Networked Multiagent Systems via Cloud Computing.

    Science.gov (United States)

    Liu, Guo-Ping

    2017-01-18

    This paper studies the design and analysis of networked multiagent predictive control systems via cloud computing. A cloud predictive control scheme for networked multiagent systems (NMASs) is proposed to achieve consensus and stability simultaneously and to compensate for network delays actively. The design of the cloud predictive controller for NMASs is detailed. The analysis of the cloud predictive control scheme gives the necessary and sufficient conditions of stability and consensus of closed-loop networked multiagent control systems. The proposed scheme is verified to characterize the dynamical behavior and control performance of NMASs through simulations. The outcome provides a foundation for the development of cooperative and coordinative control of NMASs and its applications.

  6. Analytical predictions of SGEMP response and comparisons with computer calculations

    International Nuclear Information System (INIS)

    de Plomb, E.P.

    1976-01-01

    An analytical formulation for the prediction of SGEMP surface current response is presented. Only two independent dimensionless parameters are required to predict the peak magnitude and rise time of SGEMP induced surface currents. The analysis applies to limited (high fluence) emission as well as unlimited (low fluence) emission. Cause-effect relationships for SGEMP response are treated quantitatively, and yield simple power law dependencies between several physical variables. Analytical predictions for a large matrix of SGEMP cases are compared with an array of about thirty-five computer solutions of similar SGEMP problems, which were collected from three independent research groups. The theoretical solutions generally agree with the computer solutions as well as the computer solutions agree with one another. Such comparisons typically show variations less than a "factor of two."

  7. Computational design and elaboration of a de novo heterotetrameric alpha-helical protein that selectively binds an emissive abiological (porphinato)zinc chromophore.

    Science.gov (United States)

    Fry, H Christopher; Lehmann, Andreas; Saven, Jeffery G; DeGrado, William F; Therien, Michael J

    2010-03-24

    The first example of a computationally de novo designed protein that binds an emissive abiological chromophore is presented, in which a sophisticated level of cofactor discrimination is pre-engineered. This heterotetrameric, C2-symmetric bundle, A(His):B(Thr), uniquely binds (5,15-di[(4-carboxymethyleneoxy)phenyl]porphinato)zinc [(DPP)Zn] via histidine coordination and complementary noncovalent interactions. The A2B2 heterotetrameric protein reflects ligand-directed elements of both positive and negative design, including hydrogen bonds to second-shell ligands. Experimental support for the appropriate formulation of [(DPP)Zn:A(His):B(Thr)]2 is provided by UV/visible and circular dichroism spectroscopies, size exclusion chromatography, and analytical ultracentrifugation. Time-resolved transient absorption and fluorescence spectroscopic data reveal classic excited-state singlet and triplet PZn photophysics for the A(His):B(Thr):(DPP)Zn protein (k_fluorescence = 4 x 10^8 s^-1; tau_triplet = 5 ms). The A2B2 apoprotein has immeasurably low binding affinities for related [porphinato]metal chromophores that include a (DPP)Fe(III) cofactor and the zinc metal ion hemin derivative [(PPIX)Zn], underscoring the exquisite active-site binding discrimination realized in this computationally designed protein. Importantly, elements of design in the A(His):B(Thr) protein ensure that interactions within the tetra-alpha-helical bundle are such that only the heterotetramer is stable in solution; corresponding homomeric bundles present unfavorable ligand-binding environments and thus preclude protein structural rearrangements that could lead to binding of (porphinato)iron cofactors.

  8. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first-principle methods without database information; (b) first-principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein structure prediction. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. De novo sequencing of circulating miRNAs identifies novel markers predicting clinical outcome of locally advanced breast cancer

    Directory of Open Access Journals (Sweden)

    Wu Xiwei

    2012-03-01

    Full Text Available Abstract Background MicroRNAs (miRNAs) have been recently detected in the circulation of cancer patients, where they are associated with clinical parameters. Discovery profiling of circulating small RNAs has not been reported in breast cancer (BC), and was carried out in this study to identify blood-based small RNA markers of BC clinical outcome. Methods The pre-treatment sera of 42 stage II-III locally advanced and inflammatory BC patients who received neoadjuvant chemotherapy (NCT) followed by surgical tumor resection were analyzed for marker identification by deep sequencing all circulating small RNAs. An independent validation cohort of 26 stage II-III BC patients was used to assess the power of identified miRNA markers. Results More than 800 miRNA species were detected in the circulation, and observed patterns showed association with histopathological profiles of BC. Groups of circulating miRNAs differentially associated with ER/PR/HER2 status and inflammatory BC were identified. The relative levels of selected miRNAs measured by PCR showed consistency with their abundance determined by deep sequencing. Two circulating miRNAs, miR-375 and miR-122, exhibited strong correlations with clinical outcomes, including NCT response and relapse with metastatic disease. In the validation cohort, higher levels of circulating miR-122 specifically predicted metastatic recurrence in stage II-III BC patients. Conclusions Our study indicates that certain miRNAs can serve as potential blood-based biomarkers for NCT response, and that miR-122 prevalence in the circulation predicts BC metastasis in early-stage patients. These results may allow optimized chemotherapy treatments and preventive anti-metastasis interventions in future clinical applications.

  10. Microarray-based cancer prediction using soft computing approach.

    Science.gov (United States)

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

    One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes, out of thousands or tens of thousands, with which to construct accurate prediction models. We screen highly discriminative genes and gene pairs to create simple prediction models based on single genes or gene pairs, using a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply these simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Moreover, our models are interpretable because they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular cancer prediction, and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
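
    A minimal sketch of the gene-screening idea is given below, under the assumption that a simple mean-difference score and a depth-1 decision tree stand in for the rough-set-based rules of the paper; the expression matrix is synthetic, not one of the four datasets above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in for a microarray matrix: 60 samples x 2000 genes with binary class labels.
X, y = make_classification(n_samples=60, n_features=2000, n_informative=10, random_state=0)

# Score each gene by the absolute difference of class means, scaled by its spread
# (a simple discriminative criterion, not the rough-set reducts used in the paper).
m1, m0 = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
spread = X.std(axis=0) + 1e-9
score = np.abs(m1 - m0) / spread
top_genes = np.argsort(score)[::-1][:2]

# A depth-1 decision tree on a single top-ranked gene plays the role of a simple,
# interpretable decision rule.
for g in top_genes:
    acc = cross_val_score(DecisionTreeClassifier(max_depth=1), X[:, [g]], y, cv=5).mean()
    print(f"gene {g}: cross-validated accuracy {acc:.2f}")
```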

  11. PSPP: a protein structure prediction pipeline for computing clusters.

    Directory of Open Access Journals (Sweden)

    Michael S Lee

    2009-07-01

    Full Text Available Protein structures are critical for understanding the mechanisms of biological systems and, subsequently, for drug and vaccine design. Unfortunately, protein sequence data exceed structural data by a factor of more than 200 to 1. This gap can be partially filled by using computational protein structure prediction. While structure prediction Web servers are a notable option, they often restrict the number of sequence queries and/or provide a limited set of prediction methodologies. Therefore, we present a standalone protein structure prediction software package suitable for high-throughput structural genomic applications that performs all three classes of prediction methodologies: comparative modeling, fold recognition, and ab initio. This software can be deployed on a user's own high-performance computing cluster. The pipeline consists of a Perl core that integrates more than 20 individual software packages and databases, most of which are freely available from other research laboratories. The query protein sequences are first divided into domains either by domain boundary recognition or Bayesian statistics. The structures of the individual domains are then predicted using template-based modeling or ab initio modeling. The predicted models are scored with a statistical potential and an all-atom force field. The top-scoring ab initio models are annotated by structural comparison against the Structural Classification of Proteins (SCOP) fold database. Furthermore, secondary structure, solvent accessibility, transmembrane helices, and structural disorder are predicted. The results are generated in text, tab-delimited, and hypertext markup language (HTML) formats. So far, the pipeline has been used to study viral and bacterial proteomes. The standalone pipeline that we introduce here, unlike protein structure prediction Web servers, allows users to devote their own computing assets to process a potentially unlimited number of queries as well as perform

  12. Computer Prediction of Air Quality in Livestock Buildings

    DEFF Research Database (Denmark)

    Svidt, Kjeld; Bjerg, Bjarne

    In modern livestock buildings the design of ventilation systems is important in order to obtain good air quality. The use of Computational Fluid Dynamics for predicting the air distribution makes it possible to include the effect of room geometry and heat sources in the design process. This paper presents numerical predictions of air flow in a livestock building compared with laboratory measurements. An example of the calculation of contaminant distribution is given, and the future possibilities of the method are discussed....

  13. A Computational Model Predicting Disruption of Blood Vessel Development

    Science.gov (United States)

    Vascular development is a complex process regulated by dynamic biological networks that vary in topology and state across different tissues and developmental stages. Signals regulating de novo blood vessel formation (vasculogenesis) and remodeling (angiogenesis) come from a varie...

  14. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean-bottom pressure gauges are actively being deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines, for disaster mitigation purposes. To obtain real benefit from these observations, real-time analysis techniques that make effective use of these data are necessary. A representative study by Tsushima et al. (2009) proposed a method for instant tsunami source prediction based on the observed tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green's functions of the linear long-wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation, improving the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation prediction, which requires faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although solving the non-linear shallow water equations for inundation prediction is computationally demanding, it has become feasible through recent developments in high-performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolution ranges from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
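
    To make the numerical core concrete, the sketch below is a deliberately simplified 1-D, linear, single-grid analogue of the staggered-grid leap-frog scheme mentioned above; the real system solves the non-linear shallow water equations on nested 2-D grids down to 5 m resolution. The depth, domain size and initial sea-surface shape are illustrative assumptions.

```python
import numpy as np

# 1-D linear shallow-water solver on a staggered grid: surface elevation eta at
# cell centers, velocity u at cell faces, updated alternately (leap-frog style).
g, depth = 9.81, 4000.0                    # gravity (m/s^2), uniform ocean depth (m)
dx, nx = 2000.0, 500                       # 2 km cells, 1000 km domain
dt = 0.5 * dx / np.sqrt(g * depth)         # CFL-limited time step

x = np.arange(nx) * dx
eta = 1.0 * np.exp(-((x - 500e3) / 50e3) ** 2)   # initial sea-surface hump (m)
u = np.zeros(nx + 1)                              # velocities at cell faces (m/s)

for step in range(2000):
    # Momentum update from the surface-elevation gradient.
    u[1:-1] -= g * dt * (eta[1:] - eta[:-1]) / dx
    u[0] = u[-1] = 0.0                            # reflective boundaries
    # Continuity update from the velocity divergence.
    eta -= depth * dt * (u[1:] - u[:-1]) / dx

print("max surface elevation after run: %.3f m" % eta.max())
```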

  15. The value of computed tomography-urography in predicting the ...

    African Journals Online (AJOL)

    Background The natural course of pelviureteric junction (PUJ) obstruction is variable. Of those who require surgical intervention, there is no definite reliable preoperative predictor of the likely postoperative outcome. We evaluated the value of preoperative computed tomography (CT)-urography in predicting the ...

  16. A COMPARISON BETWEEN THREE PREDICTIVE MODELS OF COMPUTATIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    DUMITRU CIOBANU

    2013-12-01

    Full Text Available Time series prediction is an open problem, and many researchers are trying to find new predictive methods and improvements to the existing ones. Lately, methods based on neural networks have been used extensively for time series prediction. Support vector machines have also solved some of the problems faced by neural networks and have come to be widely used for time series prediction. The main drawback of these two methods is that they are global models, and in the case of a chaotic time series it is unlikely that such a model can be found. In this paper, we present a comparison between three predictive models from the computational intelligence field: one based on neural networks, one based on support vector machines, and another based on chaos theory. We show that the model based on chaos theory is an alternative to the other two methods.

  17. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    Science.gov (United States)

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been developed for predicting software reliability. These reliability models are restricted to particular types of methodologies and a limited number of parameters, although a range of techniques and methodologies may be used for reliability prediction. There is a need to focus on parameter selection when estimating reliability, since the reliability of a system may increase or decrease depending on the parameters used; it is therefore necessary to identify the factors that most heavily affect system reliability. Nowadays, reusability is widely exploited in many areas of research. Reusability is the basis of Component-Based Systems (CBS), and cost, time and human effort can be saved using Component-Based Software Engineering (CBSE) concepts. CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities exist for applying soft computing techniques to medicine-related problems: clinical medicine makes significant use of fuzzy-logic and neural-network methodologies, basic medical science most frequently uses neural-network and genetic-algorithm approaches, and there is unavoidable interest among medical scientists in applying soft computing methodologies in the genetics, physiology, radiology, cardiology and neurology disciplines. CBSE encourages users to reuse past and existing software when making new products, providing quality while saving time, memory space and money. This paper focuses on the assessment of commonly used soft computing techniques such as Genetic Algorithms (GA), Neural Networks (NN), Fuzzy Logic, Support Vector Machines (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). This paper presents the working of soft computing

  18. De Novo Glutamine Synthesis

    Science.gov (United States)

    He, Qiao; Shi, Xinchong; Zhang, Linqi; Yi, Chang; Zhang, Xuezhen

    2016-01-01

    Purpose: The aim of this study was to investigate the role of de novo glutamine (Gln) synthesis in the proliferation of C6 glioma cells and its detection with 13N-ammonia. Methods: Chronic Gln-deprived C6 glioma (0.06C6) cells were established. The proliferation rates of C6 and 0.06C6 cells were measured under Gln deprivation, with or without the addition of ammonia or a glutamine synthetase (GS) inhibitor. 13N-ammonia uptake was assessed in C6 cells by gamma counting and in rats with C6 and 0.06C6 xenografts by micro–positron emission tomography (PET) scanning. The expression of GS in C6 cells and xenografts was assessed by Western blotting and immunohistochemistry, respectively. Results: The Gln-deprived C6 cells showed decreased proliferation ability but had a significant increase in GS expression. Furthermore, we found that a low concentration of ammonia was sufficient to maintain the proliferation of Gln-deprived C6 cells, and 13N-ammonia uptake in C6 cells showed a Gln-dependent decrease, whereas inhibition of GS markedly reduced the proliferation of C6 cells as well as the uptake of 13N-ammonia. Additionally, microPET/computed tomography showed that subcutaneous 0.06C6 xenografts had higher 13N-ammonia uptake and GS expression in contrast to C6 xenografts. Conclusion: De novo Gln synthesis through the ammonia–glutamate reaction plays an important role in the proliferation of C6 cells. 13N-ammonia can be a potential metabolic PET tracer for Gln-dependent tumors. PMID:27118759

  19. The origins of computer weather prediction and climate modeling

    International Nuclear Information System (INIS)

    Lynch, Peter

    2008-01-01

    Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed

  20. Osteotomy simulation and soft tissue prediction using computer tomography scans

    International Nuclear Information System (INIS)

    Teschner, M.; Girod, S.; Girod, B.

    1999-01-01

    In this paper, a system is presented that can be used to simulate osteotomies of the skull and to estimate the resulting soft tissue changes. Thus, the three-dimensional, photorealistic, postoperative appearance of a patient can be assessed. The system is based on a computer tomography scan and a photorealistic laser scan of the patient's face. In order to predict the postoperative appearance of a patient, the soft tissue must follow the movement of the underlying bone. In this paper, a multi-layer soft tissue model is proposed that is based on springs. It incorporates features like skin turgor, gravity and sliding bone contact. The prediction of soft tissue changes due to bone realignments is computed using a very efficient and robust optimization method. The system can handle individual patient data sets and has been tested with several clinical cases. (author)

  1. Generative Recurrent Networks for De Novo Drug Design.

    Science.gov (United States)

    Gupta, Anvita; Müller, Alex T; Huisman, Berend J H; Fuchs, Jens A; Schneider, Petra; Schneider, Gisbert

    2018-01-01

    Generative artificial intelligence models present a fresh approach to chemogenomics and de novo drug design, as they provide researchers with the ability to narrow down their search of the chemical space and focus on regions of interest. We present a method for molecular de novo design that utilizes generative recurrent neural networks (RNN) containing long short-term memory (LSTM) cells. This computational model captured the syntax of molecular representation in terms of SMILES strings with close to perfect accuracy. The learned pattern probabilities can be used for de novo SMILES generation. This molecular design concept eliminates the need for virtual compound library enumeration. By employing transfer learning, we fine-tuned the RNN's predictions for specific molecular targets. This approach enables virtual compound design without requiring secondary or external activity prediction, which could introduce error or unwanted bias. The results obtained advocate this generative RNN-LSTM system for high-impact use cases, such as low-data drug discovery, fragment based molecular design, and hit-to-lead optimization for diverse drug targets. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
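
    A minimal sketch of such a character-level SMILES generator is shown below, assuming a PyTorch LSTM, a toy three-molecule corpus and a single training pass; a real model of the kind described would be trained on a large SMILES corpus and then fine-tuned by transfer learning toward specific targets.

```python
import torch
import torch.nn as nn

# Toy SMILES corpus and character vocabulary; "^" and "$" mark start and end of a string.
smiles = ["CCO", "c1ccccc1", "CC(=O)O"]
chars = sorted(set("".join(smiles)) | {"^", "$"})
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

class SmilesLSTM(nn.Module):
    """Character-level LSTM that predicts the next SMILES character."""
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 64)
        self.lstm = nn.LSTM(64, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = SmilesLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training pass: at every position, predict the following character.
for s in smiles:
    seq = torch.tensor([[stoi[c] for c in "^" + s + "$"]])
    logits, _ = model(seq[:, :-1])
    loss = loss_fn(logits.reshape(-1, len(chars)), seq[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: start from "^" and draw characters until "$" (or a length cap) is reached.
with torch.no_grad():
    x, state, out = torch.tensor([[stoi["^"]]]), None, []
    for _ in range(80):
        logits, state = model(x, state)
        nxt = torch.multinomial(torch.softmax(logits[0, -1], dim=-1), 1).item()
        if itos[nxt] == "$":
            break
        out.append(itos[nxt])
        x = torch.tensor([[nxt]])
print("sampled string:", "".join(out))
```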

  2. Why do Reservoir Computing Networks Predict Chaotic Systems so Well?

    Science.gov (United States)

    Lu, Zhixin; Pathak, Jaideep; Girvan, Michelle; Hunt, Brian; Ott, Edward

    Recently a new type of artificial neural network, which is called a reservoir computing network (RCN), has been employed to predict the evolution of chaotic dynamical systems from measured data and without a priori knowledge of the governing equations of the system. The quality of these predictions has been found to be spectacularly good. Here, we present a dynamical-system-based theory for how RCN works. Basically a RCN is thought of as consisting of three parts, a randomly chosen input layer, a randomly chosen recurrent network (the reservoir), and an output layer. The advantage of the RCN framework is that training is done only on the linear output layer, making it computationally feasible for the reservoir dimensionality to be large. In this presentation, we address the underlying dynamical mechanisms of RCN function by employing the concepts of generalized synchronization and conditional Lyapunov exponents. Using this framework, we propose conditions on reservoir dynamics necessary for good prediction performance. By looking at the RCN from this dynamical systems point of view, we gain a deeper understanding of its surprising computational power, as well as insights on how to design a RCN. Supported by Army Research Office Grant Number W911NF1210101.
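
    A minimal echo state network, one common realization of reservoir computing, can be sketched in a few lines: fixed random input and reservoir weights, and a ridge-regression readout trained for one-step-ahead prediction. The reservoir size, spectral radius and toy signal below are illustrative assumptions, not the settings used in the work described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, leak, ridge = 400, 1.0, 1e-6

# Toy signal (a sine with a drifting phase) standing in for a chaotic time series.
t = np.arange(5000) * 0.02
signal = np.sin(t + 0.5 * np.sin(0.31 * t))

# Input and reservoir weights are random and never trained.
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # keep the spectral radius below 1

def run_reservoir(u):
    """Drive the fixed random reservoir with the input sequence u; return all states."""
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for i, v in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * v)
        states[i] = x
    return states

states = run_reservoir(signal[:-1])   # input at time t ...
targets = signal[1:]                  # ... predicts the signal at time t+1
n_train = 4000
S, y = states[:n_train], targets[:n_train]
# Only the linear readout is trained (ridge regression); reservoir weights stay fixed.
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
pred = states[n_train:] @ W_out
rmse = np.sqrt(np.mean((pred - targets[n_train:]) ** 2))
print(f"one-step-ahead RMSE on held-out data: {rmse:.4f}")
```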

  3. in silico Whole Genome Sequencer & Analyzer (iWGS): a computational pipeline to guide the design and analysis of de novo genome sequencing studies

    Science.gov (United States)

    The availability of genomes across the tree of life is highly biased toward vertebrates, pathogens, human disease models, and organisms with relatively small and simple genomes. Recent progress in genomics has enabled the de novo decoding of the genome of virtually any organism, greatly expanding it...

  4. Verifying a computational method for predicting extreme ground motion

    Science.gov (United States)

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  5. Integrated Computational Solution for Predicting Skin Sensitization Potential of Molecules.

    Directory of Open Access Journals (Sweden)

    Konda Leela Sarath Kumar

    Full Text Available Skin sensitization forms a major toxicological endpoint for dermatology and cosmetic products. The recent ban on animal testing for cosmetics demands alternative methods. We developed an integrated computational solution (SkinSense) that offers a robust solution and addresses the limitations of existing computational tools, i.e. high false-positive rates and/or limited coverage. The key components of our solution include: QSAR models selected from a combinatorial set, similarity information and literature-derived sub-structure patterns of known skin protein reactive groups. Its prediction performance on a challenge set of molecules showed accuracy = 75.32%, CCR = 74.36%, sensitivity = 70.00% and specificity = 78.72%, which is better than several existing tools, including VEGA (accuracy = 45.00% and CCR = 54.17% with 'High' reliability scoring), DEREK (accuracy = 72.73% and CCR = 71.44%) and TOPKAT (accuracy = 60.00% and CCR = 61.67%). Although TIMES-SS showed higher predictive power (accuracy = 90.00% and CCR = 92.86%), its coverage was very low (only 10 out of 77 molecules were predicted reliably). Owing to improved prediction performance and coverage, our solution can serve as a useful expert system towards Integrated Approaches to Testing and Assessment for skin sensitization. It would be invaluable to the cosmetic/dermatology industry for pre-screening their molecules, and reducing time, cost and animal testing.

  6. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    Science.gov (United States)

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-04-30

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.

  7. [Computational prediction of human immunodeficiency resistance to reverse transcriptase inhibitors].

    Science.gov (United States)

    Tarasova, O A; Filimonov, D A; Poroikov, V V

    2017-10-01

    Human immunodeficiency virus (HIV) causes acquired immunodeficiency syndrome (AIDS) and leads to over one million deaths annually. Highly active antiretroviral treatment (HAART) is the gold standard in HIV/AIDS therapy. Nucleoside and non-nucleoside inhibitors of HIV reverse transcriptase (RT) are important components of HAART, but their effect depends on HIV susceptibility/resistance. HIV resistance mainly occurs due to mutations leading to conformational changes in the three-dimensional structure of HIV RT. The aim of our work was to develop and test a computational method for predicting HIV resistance associated with mutations in HIV RT. Earlier, we developed a method for predicting HIV type 1 (HIV-1) resistance based on position-specific descriptors. These descriptors are generated from the particular amino acid residue and its position, where the position of each residue is determined from a multiple alignment. The training set consisted of more than 1900 sequences of HIV RT from the Stanford HIV Drug Resistance database; for these HIV RT variants, experimental data on their resistance to ten inhibitors are available. The balanced accuracy of prediction varies from 80% to 99% depending on the classification method (support vector machine, Naive Bayes, random forest, convolutional neural networks) and the drug for which resistance is predicted. Maximal balanced accuracy was obtained for prediction of resistance to zidovudine, stavudine, didanosine and efavirenz by the random forest classifier. The average prediction accuracy is 89%.
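
    The descriptor idea can be sketched as follows, assuming a one-hot encoding of each alignment column and a random forest classifier on toy sequences and labels; the actual study uses the Stanford HIV Drug Resistance database and several classifier families.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Position-specific descriptors from aligned RT fragments: one-hot encode the
# residue observed at every alignment column, then train on resistance labels.
AA = "ACDEFGHIKLMNPQRSTVWY-"

def encode(aligned_seq):
    """One-hot vector of length len(seq) * len(AA)."""
    vec = np.zeros(len(aligned_seq) * len(AA))
    for pos, res in enumerate(aligned_seq):
        vec[pos * len(AA) + AA.index(res)] = 1.0
    return vec

# Toy 4-column alignment and resistant (1) / susceptible (0) labels.
seqs = ["MKLV", "MKMV", "TKLV", "TKMV", "MKLI", "TKMI"]
labels = [0, 0, 1, 1, 0, 1]
X = np.array([encode(s) for s in seqs])
y = np.array(labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=3).mean())
```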

  8. Experimental and computational prediction of glass transition temperature of drugs.

    Science.gov (United States)

    Alzghoul, Ahmad; Alhalaweh, Amjad; Mahlin, Denny; Bergström, Christel A S

    2014-12-22

    Glass transition temperature (Tg) is an important inherent property of an amorphous solid material which is usually determined experimentally. In this study, the relation between Tg and melting temperature (Tm) was evaluated using a data set of 71 structurally diverse druglike compounds. Further, in silico models for prediction of Tg were developed based on calculated molecular descriptors and linear (multilinear regression, partial least-squares, principal component regression) and nonlinear (neural network, support vector regression) modeling techniques. The models based on Tm predicted Tg with an RMSE of 19.5 K for the test set. Among the five computational models developed herein the support vector regression gave the best result with RMSE of 18.7 K for the test set using only four chemical descriptors. Hence, two different models that predict Tg of drug-like molecules with high accuracy were developed. If Tm is available, a simple linear regression can be used to predict Tg. However, the results also suggest that support vector regression and calculated molecular descriptors can predict Tg with equal accuracy, already before compound synthesis.
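
    Both modeling routes can be sketched with scikit-learn, under the assumption of synthetic Tm values, descriptors and Tg labels; the real study used 71 measured compounds and specific calculated descriptors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Toy data standing in for the 71-compound set: melting temperature Tm (K),
# a few calculated descriptors, and the measured glass transition Tg (K).
rng = np.random.default_rng(0)
Tm = rng.uniform(350, 550, 60)
descriptors = rng.normal(size=(60, 4))                          # e.g. MW, logP, rotatable bonds, PSA
Tg = 0.7 * Tm + 5 * descriptors[:, 0] + rng.normal(0, 10, 60)   # synthetic relation for illustration

# Route 1: the simple empirical fit Tg ~ Tm (usable when Tm has been measured).
lin = LinearRegression().fit(Tm.reshape(-1, 1), Tg)

# Route 2: support vector regression on calculated descriptors only,
# usable before compound synthesis when no experimental Tm exists.
svr = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=1.0)).fit(descriptors, Tg)

print("linear Tg estimate from Tm = 450 K:", lin.predict([[450.0]])[0])
print("SVR Tg estimate from descriptors:", svr.predict(descriptors[:1])[0])
```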

  9. DEEP: a general computational framework for predicting enhancers

    KAUST Repository

    Kleftogiannis, Dimitrios A.

    2014-11-05

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for the bioinformatics research. Although existing methodologies increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell-lines, class imbalance within the learning sets and ad hoc rules for selecting enhancer candidates for supervised learning, are some key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancers' properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from VISTA database. DEEP-VISTA, when tested on an independent test set, achieved GM of 80.1% and accuracy of 89.64%. DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.

  10. DEEP: a general computational framework for predicting enhancers

    KAUST Repository

    Kleftogiannis, Dimitrios A.; Kalnis, Panos; Bajic, Vladimir B.

    2014-01-01

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for the bioinformatics research. Although existing methodologies increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell-lines, class imbalance within the learning sets and ad hoc rules for selecting enhancer candidates for supervised learning, are some key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancer's properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from VISTA database. DEEP-VISTA, when tested on an independent test set, achieved GM of 80.1% and accuracy of 89.64%. DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.

  11. Vehicular traffic noise prediction using soft computing approach.

    Science.gov (United States)

    Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek

    2016-12-01

    A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in the Patiala city in India. The input variables include the traffic volume per hour, percentage of heavy vehicles and average speed of vehicles. The performance of the four models is compared on the basis of performance criteria of coefficient of determination, mean square error and accuracy. 10-fold cross validation is done to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
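
    A hedged sketch of the best-performing route (random forest regression with 10-fold cross-validation) is given below; the traffic data are synthetic stand-ins for the Patiala measurements, and the noise relation used to generate them is an assumption for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Toy hourly observations: [traffic volume per hour, % heavy vehicles,
# average speed (km/h)] -> equivalent continuous sound pressure level Leq (dBA).
rng = np.random.default_rng(42)
volume = rng.uniform(200, 3000, 300)
heavy = rng.uniform(2, 30, 300)
speed = rng.uniform(20, 70, 300)
X = np.column_stack([volume, heavy, speed])
# Synthetic Leq with a rough logarithmic dependence on flow, plus noise.
leq = 45 + 10 * np.log10(volume) + 0.15 * heavy + 0.05 * speed + rng.normal(0, 1.5, 300)

model = RandomForestRegressor(n_estimators=300, random_state=0)
# 10-fold cross-validation, as in the study, to check model stability.
scores = cross_val_score(model, X, leq, cv=10, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.3f}")
```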

  12. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: a few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction; implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models; the MPC algorithms based on neural multi-models (inspired by the idea of predictive control); the MPC algorithms with neural approximation with no on-line linearization; the MPC algorithms with guaranteed stability and robustness; and cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...

  13. Mathematical modeling and computational prediction of cancer drug resistance.

    Science.gov (United States)

    Sun, Xiaoqiang; Hu, Bin

    2017-06-23

    Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues for successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational predictions using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic model of molecular networks, ordinary differential equation model of cellular dynamics, stochastic model, partial differential equation model, agent-based model, pharmacokinetic-pharmacodynamic model, etc.) and data-driven prediction methods (e.g. omics data-based conventional screening approach for node biomarkers, static network approach for edge biomarkers and module biomarkers, dynamic network approach for dynamic network biomarkers and dynamic module network biomarkers, etc.). Finally, we discuss several further questions and future directions for the use of
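
    As one concrete instance of the ODE-based modeling mentioned above, the sketch below integrates a two-population model of sensitive and resistant cells under a targeted agent; the equations and parameter values are illustrative assumptions, not a model taken from the review.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal ODE model of cellular dynamics under a targeted agent: sensitive cells (S)
# are killed by the drug, resistant cells (R) are not, and both grow logistically
# toward a shared carrying capacity. Parameter values are illustrative only.
r_s, r_r = 0.04, 0.03      # per-day growth rates
kill = 0.06                # drug-induced death rate of sensitive cells (per day)
K = 1e9                    # carrying capacity (cells)

def dynamics(t, y):
    S, R = y
    total = S + R
    dS = r_s * S * (1 - total / K) - kill * S
    dR = r_r * R * (1 - total / K)
    return [dS, dR]

# Start with a mostly sensitive tumor carrying a small pre-existing resistant clone.
sol = solve_ivp(dynamics, (0, 365), [1e8, 1e4])
S_end, R_end = sol.y[:, -1]
print(f"after one year: sensitive = {S_end:.2e} cells, resistant = {R_end:.2e} cells")
```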

  14. Computational prediction of protein-protein interactions in Leishmania predicted proteomes.

    Directory of Open Access Journals (Sweden)

    Antonio M Rezende

    Full Text Available The trypanosomatid parasites Leishmania braziliensis, Leishmania major and Leishmania infantum are important human pathogens. Despite years of study and genome availability, no effective vaccine has yet been developed, and the available chemotherapy is highly toxic. It is therefore clear that only interdisciplinary, integrated studies will succeed in the search for new targets for vaccine and drug development. An essential part of this rationale is the study of protein-protein interaction (PPI) networks, which can provide a better understanding of complex protein interactions in biological systems. Thus, we modeled PPIs for trypanosomatids through computational methods, using sequence comparison against public databases of protein or domain interactions for interaction prediction (Interolog Mapping), and developed a dedicated combined system score to address the robustness of the predictions. The confidence of the network prediction approach was evaluated using gold-standard positive and negative datasets, and the AUC value obtained was 0.94. As a result, 39,420, 43,531 and 45,235 interactions were predicted for L. braziliensis, L. major and L. infantum, respectively. For each predicted network, the top 20 proteins were ranked by the MCC topological index. In addition, information related to immunological potential, degree of protein sequence conservation among orthologs and degree of identity compared to proteins of potential parasite hosts was integrated. This information integration improves the understanding and usefulness of the predicted networks, which can be valuable for selecting new potential biological targets for drug and vaccine development. Network modularity analysis, which is key when one is interested in destabilizing PPIs for drug or vaccine purposes, was performed along with multiple alignments of the predicted PPIs, revealing patterns associated with protein turnover. In addition, around 50% of the hypothetical proteins present in the networks

  15. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  16. Application of large computers for predicting the oil field production

    Energy Technology Data Exchange (ETDEWEB)

    Philipp, W; Gunkel, W; Marsal, D

    1971-10-01

    The flank injection drive plays a dominant role in the exploitation of the BEB-oil fields. Therefore, 2-phase flow computer models were built up, adapted to a predominance of a single flow direction and combining a high accuracy of prediction with a low job time. Any case study starts with the partitioning of the reservoir into blocks. Then the statistics of the time-independent reservoir properties are analyzed by means of an IBM 360/25 unit. Using these results and the past production of oil, water and gas, a Fortran program running on a CDC-3300 computer yields oil recoveries and the ratios of the relative permeabilities as a function of the local oil saturation for all blocks penetrated by mobile water. In order to assign relative-permeability-ratio (k_w/k_o) functions to blocks not yet reached by the advancing water-front, correlation analysis is used to relate reservoir properties to k_w/k_o functions. All these results are used as input into a CDC-660 Fortran program, allowing short-, medium-, and long-term forecasts as well as the handling of special problems.

  17. A computational model that predicts behavioral sensitivity to intracortical microstimulation

    Science.gov (United States)

    Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.

    2017-02-01

    Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R^2 = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber’s law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.

  18. A Quantum Annealing Computer Team Addresses Climate Change Predictability

    Science.gov (United States)

    Halem, M. (Principal Investigator); LeMoigne, J.; Dorband, J.; Lomonaco, S.; Yesha, Ya.; Simpson, D.; Clune, T.; Pelissier, C.; Nearing, G.; Gentine, P.

    2016-01-01

    The near confluence of the successful launch of the Orbiting Carbon Observatory-2 (OCO-2) on July 2, 2014 and the acceptance on August 20, 2015 by Google, NASA Ames Research Center and USRA of a 1152 qubit D-Wave 2X Quantum Annealing Computer (QAC) offered an exceptional opportunity to explore the potential of this technology to address the scientific prediction of global annual carbon uptake by land surface processes. At UMBC, we have collected and processed 20 months of global Level 2 light CO2 data as well as fluorescence data. In addition we have collected ARM data at 2 sites in the US and Ameriflux data at more than 20 stations. J. Dorband has developed and implemented a multi-hidden layer Boltzmann Machine (BM) algorithm on the QAC. Employing the BM, we are calculating CO2 fluxes by training collocated OCO-2 level 2 CO2 data with ARM ground station tower data to infer measured CO2 flux data. We generate CO2 fluxes with a regression analysis using these BM-derived weights on the level 2 CO2 data for three Ameriflux sites distinct from the ARM stations. P. Gentine has negotiated access to the K34 Ameriflux data in the Amazon and is applying a neural net to infer the CO2 fluxes. N. Talik validated the accuracy of the BM performance on the QAC against a restricted BM implementation on the IBM Softlayer Cloud with the Nvidia co-processors utilizing the same data sets. G. Nearing and K. Harrison have extended the GSFC LIS model with the NCAR Noah photosynthetic parameterization and have run a 10-year global prediction of the net ecosystem exchange. C. Pellisier is preparing a BM implementation of the Kalman filter data assimilation of CO2 fluxes. At UMBC, R. Prouty is conducting OSSE experiments with the LISNoah model on the IBM iDataPlex to simulate the impact of CO2 fluxes to improve the prediction of global annual carbon uptake. J. LeMoigne and D. Simpson have developed a neural net image registration system that will be used for MODIS ENVI and will be

  19. Extreme-Scale De Novo Genome Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Georganas, Evangelos [Intel Corporation, Santa Clara, CA (United States); Hofmeyr, Steven [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.; Egan, Rob [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Buluc, Aydin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.; Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.; Rokhsar, Daniel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Yelick, Katherine [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.

    2017-09-26

    De novo whole genome assembly reconstructs genomic sequence from short, overlapping, and potentially erroneous DNA segments and is one of the most important computations in modern genomics. This work presents HipMer, a high-quality end-to-end de novo assembler designed for extreme-scale analysis via efficient parallelization of the Meraculous code. Genome assembly software has many components, each of which stresses different parts of a computer system. This chapter explains the computational challenges involved in each step of the HipMer pipeline, the key distributed data structures, and communication costs in detail. We present performance results of assembling the human genome and the large hexaploid wheat genome on large supercomputers up to tens of thousands of cores.
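
    For orientation, the first stages of such a pipeline (k-mer counting, error filtering and de Bruijn graph construction) can be sketched in a few lines of single-process code; HipMer implements these steps with distributed data structures across thousands of cores, which the toy reads and the error threshold below do not attempt to reproduce.

```python
from collections import defaultdict

def kmer_counts(reads, k):
    """Count every k-mer occurring in the reads."""
    counts = defaultdict(int)
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def de_bruijn(kmers):
    """Link k-mers into a de Bruijn graph keyed by their (k-1)-mer prefixes."""
    graph = defaultdict(list)
    for km in kmers:
        graph[km[:-1]].append(km[1:])   # edge: prefix (k-1)-mer -> suffix (k-1)-mer
    return graph

# Toy overlapping reads; a real assembly uses billions of error-prone reads.
reads = ["ACGTACGTGACC", "CGTACGTGACCA", "GTACGTGACCAT"]
counts = kmer_counts(reads, k=5)
solid = [km for km, c in counts.items() if c >= 2]   # drop likely sequencing errors
graph = de_bruijn(solid)
print(f"{len(solid)} solid 5-mers, {len(graph)} graph nodes")
```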

  20. mPUMA: a computational approach to microbiota analysis by de novo assembly of operational taxonomic units based on protein-coding barcode sequences.

    Science.gov (United States)

    Links, Matthew G; Chaban, Bonnie; Hemmingsen, Sean M; Muirhead, Kevin; Hill, Janet E

    2013-08-15

    Formation of operational taxonomic units (OTU) is a common approach to data aggregation in microbial ecology studies based on amplification and sequencing of individual gene targets. The de novo assembly of OTU sequences has been recently demonstrated as an alternative to widely used clustering methods, providing robust information from experimental data alone, without any reliance on an external reference database. Here we introduce mPUMA (microbial Profiling Using Metagenomic Assembly, http://mpuma.sourceforge.net), a software package for identification and analysis of protein-coding barcode sequence data. It was developed originally for Cpn60 universal target sequences (also known as GroEL or Hsp60). Using an unattended process that is independent of external reference sequences, mPUMA forms OTUs by DNA sequence assembly and is capable of tracking OTU abundance. mPUMA processes microbial profiles both in terms of the direct DNA sequence as well as in the translated amino acid sequence for protein coding barcodes. By forming OTUs and calculating abundance through an assembly approach, mPUMA is capable of generating inputs for several popular microbiota analysis tools. Using SFF data from sequencing of a synthetic community of Cpn60 sequences derived from the human vaginal microbiome, we demonstrate that mPUMA can faithfully reconstruct all expected OTU sequences and produce compositional profiles consistent with actual community structure. mPUMA enables analysis of microbial communities while empowering the discovery of novel organisms through OTU assembly.

  1. Predicting renal graft failure by sCD30 levels and de novo HLA antibodies at 1year post-transplantation.

    Science.gov (United States)

    Wang, Dong; Wu, Guojun; Chen, Jinhua; Yu, Ziqiang; Wu, Weizhen; Yang, Shunliang; Tan, Jianming

    2012-06-01

    HLA antibodies and sCD30 levels were detected in serum sampled from 620 renal graft recipients at 1 year post-transplantation, who were then followed up for 5 years. Six-year graft and patient survival rates were 81.6% and 91.0%. HLA antibodies were detected in 45 recipients (7.3%), of whom 14 had class I antibodies, 26 had class II, and 5 had both class I and II. Graft loss was much more frequent in recipients with HLA antibodies than in those without antibodies (60% vs. 15.1%). Higher sCD30 levels were recorded in recipients suffering graft loss than in the others (73.9±48.8 U/mL vs. 37.3±14.6 U/mL). Recipients with low sCD30 levels showed better graft survival, and the effect of sCD30 on graft survival was not only independent but also additive. Therefore, post-transplantation monitoring of HLA antibodies and sCD30 levels is necessary, and recipients with an elevated sCD30 level and/or de novo HLA antibodies should receive closer attention in order to achieve better graft survival. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Computational data sciences for assessment and prediction of climate extremes

    Science.gov (United States)

    Ganguly, A. R.

    2011-12-01

    Climate extremes may be defined inclusively as severe weather events or large shifts in global or regional weather patterns which may be caused or exacerbated by natural climate variability or climate change. This area of research arguably represents one of the largest knowledge gaps in climate science relevant for informing resource managers and policy makers. While physics-based climate models are essential in view of non-stationary and nonlinear dynamical processes, their current pace of uncertainty reduction may not be adequate for urgent stakeholder needs. The structure of the models may in some cases preclude reduction of uncertainty for critical processes at scales or for the extremes of interest. On the other hand, methods based on complex networks, extreme value statistics, machine learning, and space-time data mining have demonstrated significant promise to improve scientific understanding and generate enhanced predictions. When combined with conceptual process understanding at multiple spatiotemporal scales and designed to handle massive data, interdisciplinary data science methods and algorithms may complement or supplement physics-based models. Specific examples from the prior literature and our ongoing work suggest how data-guided improvements may be possible, for example, in the context of ocean meteorology, climate oscillators, teleconnections, and atmospheric process understanding, which in turn can improve projections of regional climate, precipitation extremes and tropical cyclones in a useful and interpretable fashion. A community-wide effort is motivated to develop and adapt computational data science tools for translating climate model simulations to information relevant for adaptation and policy, as well as for improving our scientific understanding of climate extremes from both observed and model-simulated data.

  3. Predictive modeling of liquid-sodium thermal–hydraulics experiments and computations

    International Nuclear Information System (INIS)

    Arslan, Erkan; Cacuci, Dan G.

    2014-01-01

    Highlights: • We applied the predictive modeling method of Cacuci and Ionescu-Bujor (2010). • We assimilated data from sodium flow experiments. • We used computational fluid dynamics simulations of sodium experiments. • The predictive modeling method greatly reduced uncertainties in predicted results. - Abstract: This work applies the predictive modeling procedure formulated by Cacuci and Ionescu-Bujor (2010) to assimilate data from liquid-sodium thermal–hydraulics experiments in order to systematically reduce the uncertainties in the predictions of computational fluid dynamics (CFD) simulations. The predicted CFD results for the best-estimate model parameters and results describing sodium-flow velocities and temperature distributions are shown to be significantly more precise than the original computations and experiments, in that the predicted uncertainties for the best-estimate results and model parameters are significantly smaller than both the originally computed and the experimental uncertainties.

  4. Computational Prediction of the Aerodynamic Characteristics of SSTO Vehicle Configurations

    OpenAIRE

    Keiichiro, FUJIMOTO; Kozo, FUJI

    2003-01-01

    Flow-fields around basic SSTO-rocket configurations are numerically simulated by Navier-Stokes computations. The study starts with simulations of the Apollo-type configuration, in which the simulated results are compared with NASA's experiments and the capability of the CFD approach is discussed. Computed aerodynamic coefficients of the Apollo configuration agree well with the experiments at subsonic, transonic and supersonic regimes at all angles of attack, and the present computational approach...

  5. The European computer model for optronic system performance prediction (ECOMOS)

    NARCIS (Netherlands)

    Kessler, S.; Bijl, P.; Labarre, L.; Repasi, E.; Wittenstein, W.; Bürsing, H.

    2017-01-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR).

  6. Testes Genéticos no Esporte: um Novo Modelo de Predição de Talentos? / Genetic Testing in Sport: a New Talent Prediction Model

    Directory of Open Access Journals (Sweden)

    Guilherme Giannini Artioli

    2015-03-01

    Full Text Available The movement sciences have undergone quite rapid advances in their body of knowledge in recent years. At least in part, these advances can be explained by the equally rapid development of molecular biology techniques, and especially by their use in studies involving sport and physical exercise. In this context, a new field of study is emerging within the sport and exercise sciences: genetics applied to motor activity. Unlike studies on the genomics of motor activity, which investigate the effects of different models of acute and chronic exercise on the regulation of gene and protein expression under the most diverse conditions, the genetics of motor activity is premised on identifying common genetic variations, whether structural (that is, differences in base-pair sequences) or functional (referring to inter-individual differences in gene function explained by mechanisms that do not involve changes in base-pair sequences), capable of explaining why people with similar characteristics show so many differences in the health-related components of physical fitness, in the physical capacities related to sporting performance, and in the physiological adaptations they display when subjected to acute exercise or chronic physical training. In other words, the genetics of motor activity seeks to identify genetic characteristics that explain the long-recognized, immense inter-individual variation in physical and sporting performance.

  7. Computational Embryology and Predictive Toxicology of Cleft Palate

    Science.gov (United States)

    The capacity to model and simulate key events in developmental toxicity using computational systems biology and biological knowledge brings hazard identification a step closer across the vast landscape of untested environmental chemicals. In this context, we chose cleft palate as a model ...

  8. Application of Generative Autoencoder in De Novo Molecular Design.

    Science.gov (United States)

    Blaschke, Thomas; Olivecrona, Marcus; Engkvist, Ola; Bajorath, Jürgen; Chen, Hongming

    2018-01-01

    A major challenge in computational chemistry is the generation of novel molecular structures with desirable pharmacological and physicochemical properties. In this work, we investigate the potential use of autoencoders, a deep learning methodology, for de novo molecular design. Various generative autoencoders were used to map molecule structures into a continuous latent space and vice versa, and their performance as structure generators was assessed. Our results show that the latent space preserves the chemical similarity principle and thus can be used for the generation of analogue structures. Furthermore, the latent space created by the autoencoders was searched systematically to generate novel compounds with predicted activity against dopamine receptor type 2, and compounds similar to known active compounds not included in the training set were identified. © 2018 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
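
    As a schematic illustration of mapping molecular strings into a continuous latent space and back, the sketch below trains a tiny dense autoencoder on one-hot encoded SMILES strings using plain numpy gradient descent; the toy molecule list, layer sizes and training settings are invented and bear no relation to the architectures evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

smiles = ["CCO", "CCN", "CCC", "COC", "CNC", "OCO"]        # toy molecules
alphabet = sorted(set("".join(smiles)))
max_len = max(len(s) for s in smiles)

def one_hot(s):
    """Pad to max_len and one-hot encode over the alphabet."""
    x = np.zeros((max_len, len(alphabet)))
    for i, ch in enumerate(s):
        x[i, alphabet.index(ch)] = 1.0
    return x.ravel()

X = np.stack([one_hot(s) for s in smiles])
n_in, n_latent = X.shape[1], 2

W1 = 0.1 * rng.standard_normal((n_in, n_latent))           # encoder weights
W2 = 0.1 * rng.standard_normal((n_latent, n_in))           # decoder weights
lr = 0.1

for _ in range(5000):                                       # plain gradient descent on MSE
    Z = np.tanh(X @ W1)                                     # latent codes
    X_hat = Z @ W2                                          # linear decoder
    err = X_hat - X
    grad_W2 = Z.T @ err / len(X)
    grad_Z = err @ W2.T * (1 - Z ** 2)
    grad_W1 = X.T @ grad_Z / len(X)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

Z = np.tanh(X @ W1)
print("latent coordinates:")
for s, z in zip(smiles, Z):
    print(f"  {s}: {np.round(z, 2)}")
```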

  9. THE COMPUTATIONAL INTELLIGENCE TECHNIQUES FOR PREDICTIONS - ARTIFICIAL NEURAL NETWORKS

    OpenAIRE

    Mary Violeta Bar

    2014-01-01

    The computational intelligence techniques are used for problems which cannot be solved by traditional techniques, when there are insufficient data to develop a model of the problem, or when the data contain errors. Computational intelligence, as Bezdek called it (Bezdek, 1992), aims at modeling biological intelligence. Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is solving problems that are too c...

  10. Computational Appliance for Rapid Prediction of Aircraft Trajectories, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Next generation air traffic management systems will be based to a greater degree on predicted trajectories of aircraft. Due to the iterative nature of future air...

  11. Can human experts predict solubility better than computers?

    Science.gov (United States)

    Boobier, Samuel; Osbourn, Anne; Mitchell, John B O

    2017-12-13

    In this study, we design and carry out a survey, asking human experts to predict the aqueous solubility of druglike organic compounds. We investigate whether these experts, drawn largely from the pharmaceutical industry and academia, can match or exceed the predictive power of algorithms. Alongside this, we implement 10 typical machine learning algorithms on the same dataset. The best algorithm, a variety of neural network known as a multi-layer perceptron, gave an RMSE of 0.985 log S units and an R² of 0.706. We would not have predicted the relative success of this particular algorithm in advance. We found that the best individual human predictor generated an almost identical prediction quality with an RMSE of 0.942 log S units and an R² of 0.723. The collection of algorithms contained a higher proportion of reasonably good predictors, nine out of ten compared with around half of the humans. We found that, for either humans or algorithms, combining individual predictions into a consensus predictor by taking their median generated excellent predictivity. While our consensus human predictor achieved very slightly better headline figures on various statistical measures, the difference between it and the consensus machine learning predictor was both small and statistically insignificant. We conclude that human experts can predict the aqueous solubility of druglike molecules essentially equally well as machine learning algorithms. We find that, for either humans or algorithms, combining individual predictions into a consensus predictor by taking their median is a powerful way of benefitting from the wisdom of crowds.
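
    The consensus-by-median idea is easy to make concrete. The snippet below, using invented measurements and predictors, scores individual predictors by RMSE and R² and combines them into a median consensus predictor.

```python
import numpy as np

rng = np.random.default_rng(2)

true_logS = rng.uniform(-6, 0, size=25)                      # invented "measured" solubilities
# Ten invented predictors, each equal to the truth plus its own bias and noise
preds = np.stack([true_logS + rng.normal(b, 0.8, size=25)
                  for b in rng.normal(0, 0.3, size=10)])

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

consensus = np.median(preds, axis=0)                          # "wisdom of crowds" predictor
print("individual RMSEs:", [round(rmse(true_logS, p), 2) for p in preds])
print("consensus RMSE:", round(rmse(true_logS, consensus), 2),
      "consensus R2:", round(r2(true_logS, consensus), 2))
```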

  12. Computed tomography scan based prediction of the vulnerable carotid plaque

    DEFF Research Database (Denmark)

    Diab, Hadi Mahmoud Haider; Rasmussen, Lars Melholt; Duvnjak, Stevo

    2017-01-01

    BACKGROUND: The primary aim was to validate a commercial semi-automated computed tomography angiography (CTA) software for vulnerable plaque detection against histology of carotid endarterectomy (CEA) specimens; the secondary aim was to validate calcification scores from in vivo CTA against ex vivo non-contrast enhanced computed tomography (NCCT). METHODS: From January 2014 to October 2016, 53 patients were included retrospectively, using a cross-sectional design. All patients underwent both CTA and CEA. Sixteen patients had their CEA specimens NCCT scanned. The semi-automated CTA software analyzed carotid stenosis...

  13. Performance predictions for solar-chemical convertors by computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Luttmer, J.D.; Trachtenberg, I.

    1985-08-01

    A computer model which simulates the operation of the Texas Instruments solar-chemical convertor (SCC) was developed. The model allows optimization of SCC processes, materials, and configuration by facilitating decisions on tradeoffs among ease of manufacturing, power conversion efficiency, and cost effectiveness. The model includes various algorithms which define the electrical, electrochemical, and resistance parameters and which describe the operation of the discrete components of the SCC. Results of the model which depict the effect of material and geometric changes on various parameters are presented. The computer-calculated operation is compared with experimentally observed hydrobromic acid electrolysis rates.

  14. Coupling of EIT with computational lung modeling for predicting patient-specific ventilatory responses.

    Science.gov (United States)

    Roth, Christian J; Becher, Tobias; Frerichs, Inéz; Weiler, Norbert; Wall, Wolfgang A

    2017-04-01

    Providing optimal personalized mechanical ventilation for patients with acute or chronic respiratory failure is still a challenge in the clinical setting, with each case addressed anew. In this article, we integrate electrical impedance tomography (EIT) monitoring into a powerful patient-specific computational lung model to create an approach for personalizing protective ventilatory treatment. The underlying computational lung model is based on a single computed tomography scan and is able to predict global airflow quantities, as well as local tissue aeration and strains for any ventilation maneuver. For validation, a novel "virtual EIT" module is added to our computational lung model, allowing us to simulate EIT images based on the patient's thorax geometry and the results of our numerically predicted tissue aeration. Clinically measured EIT images are not used to calibrate the computational model. Thus they provide an independent method to validate the computational predictions at high temporal resolution. The performance of this coupling approach has been tested in an example patient with acute respiratory distress syndrome. The method shows good agreement between computationally predicted and clinically measured airflow data and EIT images. These results imply that the proposed framework can be used for numerical prediction of patient-specific responses to certain therapeutic measures before applying them to an actual patient. In the long run, definition of patient-specific optimal ventilation protocols might be assisted by computational modeling. NEW & NOTEWORTHY In this work, we present a patient-specific computational lung model that is able to predict global and local ventilatory quantities for a given patient and any selected ventilation protocol. For the first time, such a predictive lung model is equipped with a virtual electrical impedance tomography module allowing real-time validation of the computed results with the patient measurements. First promising results…

  15. Advanced Computational Modeling Approaches for Shock Response Prediction

    Science.gov (United States)

    Derkevorkian, Armen; Kolaini, Ali R.; Peterson, Lee

    2015-01-01

    Motivation: (1) The activation of pyroshock devices such as explosives, separation nuts, pin-pullers, etc. produces high-frequency transient structural response, typically from a few tens of Hz to several hundreds of kHz. (2) Lack of reliable analytical tools makes the prediction of appropriate design and qualification test levels a challenge. (3) In the past few decades, several attempts have been made to develop methodologies that predict the structural responses to shock environments. (4) Currently, there is no validated approach that is viable to predict shock environments over the full frequency range (i.e., 100 Hz to 10 kHz). Scope: (1) Model, analyze, and interpret space structural systems with complex interfaces and discontinuities, subjected to shock loads. (2) Assess the viability of a suite of numerical tools to simulate transient, non-linear solid mechanics and structural dynamics problems, such as shock wave propagation.

  16. Predicting Lexical Proficiency in Language Learner Texts Using Computational Indices

    Science.gov (United States)

    Crossley, Scott A.; Salsbury, Tom; McNamara, Danielle S.; Jarvis, Scott

    2011-01-01

    The authors present a model of lexical proficiency based on lexical indices related to vocabulary size, depth of lexical knowledge, and accessibility to core lexical items. The lexical indices used in this study come from the computational tool Coh-Metrix and include word length scores, lexical diversity values, word frequency counts, hypernymy…

  17. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ruchi D. Chande

    2017-01-01

    Full Text Available Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.

  18. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks.

    Science.gov (United States)

    Chande, Ruchi D; Hargraves, Rosalyn Hobson; Ortiz-Robinson, Norma; Wayne, Jennifer S

    2017-01-01

    Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.

  19. Prediction of velocity and attitude of a yacht sailing upwind by computational fluid dynamics

    OpenAIRE

    Lee, Heebum; Park, Mi Yeon; Park, Sunho; Rhee, Shin Hyung

    2016-01-01

    One of the most important factors in sailing yacht design is accurate velocity prediction. Velocity prediction programs (VPPs) are widely used to predict the velocity of sailing yachts. VPPs, which are primarily based on experimental data and long years of experience, however, suffer limitations when applied in realistic conditions. Thus, in the present study, a high-fidelity velocity prediction method using computational fluid dynamics (CFD) was proposed. Using the developed method, velocity an...

  20. A new computer system for security analysis in power systems; Um novo sistema computacional para analise de seguranca em sistemas de energia eletrica

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Jose Vicente Canto dos; Gerhardt, Raul A.; Costa, Iverson F. [Universidade do Vale do Rio dos Sinos (PIPCA/UNISINOS), Sao Leopoldo, RS (Brazil). Programa de Pos-Graduacao em Computacao Aplicada], E-mail: jvcanto@unisinos.br

    2009-07-01

    In the electrical energy systems area, the problem of security analysis has been intensively studied in recent years. The main objective of this work has been the development of a new computational system for static security analysis. Moreover, the proposed system incorporates elements intended to increase its suitability for real use in electrical utilities. The extensive literature review carried out by the authors showed no analogous proposal. To validate the proposed system, the results of tests made on several power systems are described, including a real-life Brazilian system of large dimensions. (author)

  1. NUMERICAL COMPUTATION AND PREDICTION OF ELECTRICITY CONSUMPTION IN TOBACCO INDUSTRY

    Directory of Open Access Journals (Sweden)

    Mirjana Laković

    2017-12-01

    Full Text Available Electricity is a key energy source in every country and an important condition for economic development. It is necessary to use modern methods and tools to predict energy consumption for different types of systems and weather conditions. In every industrial plant, electricity consumption represents one of the greatest operating costs. Monitoring and forecasting of this parameter provide the opportunity to rationalize the use of electricity and thus significantly reduce costs. The paper proposes the prediction of energy consumption by a new time-series model. This involves time-series models that use a set of previously collected data to predict the future load. The most commonly used linear time-series models are the AR (Autoregressive), MA (Moving Average) and ARMA (Autoregressive Moving Average) models. The AR model is used in this paper. Using the AR model, the Monte Carlo simulation method is utilized for predicting and analyzing the change in energy consumption in the considered tobacco industrial plant. One of the main parts of the AR model is a seasonal pattern that takes into account the climatic conditions for a given geographical area. This part of the model was delineated by the Fourier transform and was used with the aim of avoiding excessive model complexity. As an example, numerical results were obtained for tobacco production in one industrial plant. A probabilistic range of input values is used to determine the future probabilistic level of energy consumption.
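
    As an illustration of the modelling chain described above, the sketch below fits a low-order AR model by least squares to a synthetic monthly consumption series containing a Fourier-type seasonal component, and then produces a Monte Carlo band of future consumption; the series, AR order and noise level are invented and are not the plant's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly electricity consumption: seasonal Fourier term plus AR(1)-like noise
t = np.arange(120)
season = 10 * np.sin(2 * np.pi * t / 12)
noise = np.zeros(len(t))
for i in range(1, len(t)):
    noise[i] = 0.7 * noise[i - 1] + rng.normal(0, 2)
y = 100 + season + noise

p = 2                                                       # AR order
Y = y[p:]
X = np.column_stack([np.ones(len(Y))] + [y[p - k:-k] for k in range(1, p + 1)])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)                # least-squares AR fit
resid_std = np.std(Y - X @ coef)

def simulate(history, horizon, n_paths=500):
    """Monte Carlo forecast: propagate the fitted AR model with random shocks."""
    paths = np.empty((n_paths, horizon))
    for j in range(n_paths):
        h = list(history[-p:])
        for i in range(horizon):
            nxt = coef[0] + sum(coef[k] * h[-k] for k in range(1, p + 1))
            nxt += rng.normal(0, resid_std)
            h.append(nxt)
            paths[j, i] = nxt
    return paths

paths = simulate(y, horizon=12)
lo, hi = np.percentile(paths, [5, 95], axis=0)
print("12-month 5-95% consumption band:", np.round(lo, 1), np.round(hi, 1))
```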

  2. The European computer model for optronic system performance prediction (ECOMOS)

    Science.gov (United States)

    Keßler, Stefan; Bijl, Piet; Labarre, Luc; Repasi, Endre; Wittenstein, Wolfgang; Bürsing, Helge

    2017-10-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defence and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses and combines well-accepted existing European tools to build up a strong competitive position. This includes two TA models: the analytical TRM4 model and the image-based TOD model. In addition, it uses the atmosphere model MATISSE. In this paper, the central idea of ECOMOS is exposed. The overall software structure and the underlying models are shown and elucidated. The status of the project development is given as well as a short discussion of validation tests and an outlook on the future potential of simulation for sensor assessment.

  3. Approximator: Predicting Interruptibility in Software Development with Commodity Computers

    DEFF Research Database (Denmark)

    Tell, Paolo; Jalaliniya, Shahram; Andersen, Kristian S. M.

    2015-01-01

    Assessing the presence and availability of a remote colleague is key in coordination in global software development but is not easily done using existing computer-mediated channels. Previous research has shown that automated estimation of interruptibility is feasible and can achieve a precision… These early but promising results represent a starting point for designing tools with support for interruptibility capable of improving distributed awareness and cooperation to be used in global software development…

  4. Mould growth prediction by computational simulation on historic buildings

    OpenAIRE

    Krus, M.; Kilian, R.; Sedlbauer, K.

    2007-01-01

    Historical buildings are often renovated with a high expenditure of time and money without investigating and considering the causes of the damages. In many cases historic buildings can only be maintained by changing their usage. This change of use may influence the interior climate enormously. To assess the effect on the risk of mould growth on building parts or historic monuments a predictive model has been developed recently, describing the hygrothermal behaviour of the spore. It allows for...

  5. Global discriminative learning for higher-accuracy computational gene prediction.

    Directory of Open Access Journals (Sweden)

    Axel Bernal

    2007-03-01

    Full Text Available Most ab initio gene predictors use a probabilistic sequence model, typically a hidden Markov model, to combine separately trained models of genomic signals and content. By combining separate models of relevant genomic features, such gene predictors can exploit small training sets and incomplete annotations, and can be trained fairly efficiently. However, that type of piecewise training does not optimize prediction accuracy and has difficulty in accounting for statistical dependencies among different parts of the gene model. With genomic information being created at an ever-increasing rate, it is worth investigating alternative approaches in which many different types of genomic evidence, with complex statistical dependencies, can be integrated by discriminative learning to maximize annotation accuracy. Among discriminative learning methods, large-margin classifiers have become prominent because of the success of support vector machines (SVMs) in many classification tasks. We describe CRAIG, a new program for ab initio gene prediction based on a conditional random field model with semi-Markov structure that is trained with an online large-margin algorithm related to multiclass SVMs. Our experiments on benchmark vertebrate datasets and on regions from the ENCODE project show significant improvements in prediction accuracy over published gene predictors that use intrinsic features only, particularly at the gene level and on genes with long introns.

  6. Torque converter transient characteristics prediction using computational fluid dynamics

    International Nuclear Information System (INIS)

    Yamaguchi, T; Tanaka, K

    2012-01-01

    The objective of this research is to investigate the transient torque converter performance used in an automobile. A new technique in computational fluid dynamics is introduced, which includes the inertia of the turbine in a three dimensional simulation of the torque converter during a launch condition. The simulation results are compared to experimental test data with good agreement across the range of data. In addition, the simulated flow structure inside the torque converter is visualized and compared to results from a steady-state calculation.

  7. Can clinical prediction tools predict the need for computed tomography in blunt abdominal trauma? A systematic review.

    Science.gov (United States)

    Sharples, Alistair; Brohi, Karim

    2016-08-01

    Blunt abdominal trauma is a common reason for admission to the Emergency Department. Early detection of injuries is an important goal but is often not straightforward, as physical examination alone is not a good predictor of serious injury. Computed tomography (CT) has become the primary method for assessing the stable trauma patient. It has high sensitivity and specificity, but there remains concern regarding the long-term consequences of high doses of radiation. Therefore an accurate and reliable method of assessing which patients are at higher risk of injury and hence require a CT would be clinically useful. We perform a systematic review to investigate the use of clinical prediction tools (CPTs) for the identification of abdominal injuries in patients suffering blunt trauma. A literature search was performed using Medline, Embase, The Cochrane Library and NHS Evidence up to August 2014. English language, prospective and retrospective studies were included if they derived, validated or assessed a CPT aimed at identifying intra-abdominal injuries or the need for intervention to treat an intra-abdominal injury after blunt trauma. Methodological quality was assessed using a 14-point scale. Performance was assessed predominantly by sensitivity. Seven relevant studies were identified. All studies were derivative studies and no CPT was validated in a separate study. There were large differences in the study design, composition of the CPTs, the outcomes analysed and the methodological quality of the included studies. Sensitivities ranged from 86 to 100%. The highest performing CPT had a lower limit of the 95% CI of 95.8% and was of high methodological quality (11 of 14). Had this rule been applied to the population then 25.1% of patients would have avoided a CT scan. Seven CPTs were identified of varying designs and methodological quality. All demonstrate relatively high sensitivity with some achieving very high sensitivity whilst still managing to reduce the number of CTs

  8. A survey on computational intelligence approaches for predictive modeling in prostate cancer

    OpenAIRE

    Cosma, G; Brown, D; Archer, M; Khan, M; Pockley, AG

    2017-01-01

    Predictive modeling in medicine involves the development of computational models which are capable of analysing large amounts of data in order to predict healthcare outcomes for individual patients. Computational intelligence approaches are suitable when the data to be modelled are too complex for conventional statistical techniques to process quickly and efficiently. These advanced approaches are based on mathematical models that have been especially developed for dealing with the uncertainty an...

  9. Preservice Teachers' Computer Use in Single Computer Training Courses; Relationships and Predictions

    Science.gov (United States)

    Zogheib, Salah

    2015-01-01

    Single computer courses offered at colleges of education are expected to provide preservice teachers with the skills and expertise needed to adopt computer technology in their future classrooms. However, preservice teachers still find difficulty adopting such technology. This research paper investigated relationships among preservice teachers'…

  10. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    Directory of Open Access Journals (Sweden)

    Qifan Kuang

    Full Text Available Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computation models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that have been earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, the final formulas of these algorithms were all converted to a linear model in form; based on this finding we propose a new algorithm called the general weighted profile method, and it yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.

  11. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    Science.gov (United States)

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computation models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that have been earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, the final formulas of these algorithms were all converted to a linear model in form; based on this finding we propose a new algorithm called the general weighted profile method, and it yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
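
    A weighted-profile prediction built on the Jaccard coefficient, in the spirit of the method described above, can be sketched as follows; the tiny drug-ADR matrix and the exact weighting are invented for illustration and should not be read as the authors' exact general weighted profile method.

```python
import numpy as np

# Toy drug-ADR association matrix (rows: drugs, columns: ADRs); 1 = known association
A = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [1, 0, 0, 1],
], dtype=float)

def jaccard(u, v):
    """Jaccard coefficient between two binary profiles."""
    inter = np.sum((u > 0) & (v > 0))
    union = np.sum((u > 0) | (v > 0))
    return inter / union if union else 0.0

n_drugs = A.shape[0]
S = np.array([[jaccard(A[i], A[j]) for j in range(n_drugs)] for i in range(n_drugs)])
np.fill_diagonal(S, 0.0)                      # a drug does not vote for itself

# Weighted-profile score: similarity-weighted average of the neighbours' ADR profiles
scores = S @ A / (S.sum(axis=1, keepdims=True) + 1e-9)
print("predicted drug-ADR association scores:")
print(np.round(scores, 2))
```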

  12. Brain systems for probabilistic and dynamic prediction: computational specificity and integration.

    Directory of Open Access Journals (Sweden)

    Jill X O'Reilly

    2013-09-01

    Full Text Available A computational approach to functional specialization suggests that brain systems can be characterized in terms of the types of computations they perform, rather than their sensory or behavioral domains. We contrasted the neural systems associated with two computationally distinct forms of predictive model: a reinforcement-learning model of the environment obtained through experience with discrete events, and continuous dynamic forward modeling. By manipulating the precision with which each type of prediction could be used, we caused participants to shift computational strategies within a single spatial prediction task. Hence (using fMRI) we showed that activity in two brain systems (typically associated with reward learning and motor control) could be dissociated in terms of the forms of computations that were performed there, even when both systems were used to make parallel predictions of the same event. A region in parietal cortex, which was sensitive to the divergence between the predictions of the models and anatomically connected to both computational networks, is proposed to mediate integration of the two predictive modes to produce a single behavioral output.

  13. A computational approach to mechanistic and predictive toxicology of pesticides

    DEFF Research Database (Denmark)

    Kongsbak, Kristine Grønning; Vinggaard, Anne Marie; Hadrup, Niels

    2014-01-01

    Emerging challenges of managing and interpreting large amounts of complex biological data have given rise to the growing field of computational biology. We investigated the applicability of an integrated systems toxicology approach on five selected pesticides to get an overview of their modes… of action in humans, to group them according to their modes of action, and to hypothesize on their potential effects on human health. We extracted human proteins associated to prochloraz, tebuconazole, epoxiconazole, procymidone, and mancozeb and enriched each protein set by using a high confidence human…, and procymidone exerted their effects mainly via interference with steroidogenesis and nuclear receptors. Prochloraz was associated to a large number of human diseases, and together with tebuconazole showed several significant associations to Testicular Dysgenesis Syndrome. Mancozeb showed a differential mode…

  14. Nanotoxicity prediction using computational modelling - review and future directions

    Science.gov (United States)

    Saini, Bhavna; Srivastava, Sumit

    2018-04-01

    Nanomaterials have stimulated various outlooks for the future in a number of industries and scientific ventures. Applications such as cosmetics, medicines, and electronics employ nanomaterials due to their various compelling properties. The unending growth of nanomaterials usage in our daily life has escalated health and environmental risks. Early nanotoxicity recognition is a big challenge. Much research is ongoing in the field of nanotoxicity, which faces several problems such as the inadequacy of proper datasets, the lack of appropriate rules, and the characterization of nanomaterials. Computational modelling would be a beneficial asset for nanomaterials researchers because it can foresee toxicity based on previous experimental data. In this study, we have reviewed work demonstrating a proper pathway to proceed with QSAR analysis of nanomaterials for toxicity modelling. The paper aims at providing comprehensive insight into nano-QSAR, the various theories, tools and approaches used, along with an outline of future research directions to work on.

  15. Prediction of the behavior of pedestrian bridges using computer models

    Directory of Open Access Journals (Sweden)

    Jonathan José Cala Monroy

    2017-07-01

    Full Text Available Introduction: This article presents a brief introduction to the issues related to low-frequency vibrations, identifying human walking as a relevant source that affects footbridge structures and causes inconvenience to pedestrian traffic. Objective: The main objective of this research paper is to explain the most common methods used by engineers for the evaluation of vibrations and their effects, as well as their limitations; furthermore, a computer modeling technique was developed in order to bring the analysis closer to the real behavior of vibrations in pedestrian bridges. Methodology: The work was divided into two main phases: the first was a conceptual bibliographical review of the subject of floor vibrations, focusing on the use of Design Guide No. 11 of the American Institute of Steel Construction; the second concerned the development of a computer model, which included the definition of variables, the elaboration of a dynamic model of the structure, the calibration of the model, the evaluation of the parameters under study, and the analysis of results and conclusions. Results: Following the preliminary stages, accelerations were obtained for different frequencies and different degrees of damping, showing that the chosen sample was potentially susceptible in the four to eight Hz range; hence, when resonance took place, the structure presented peak accelerations above the recommended human comfort threshold for pedestrian bridges. Conclusions: Through appropriate modeling techniques and finite elements, convenient and reliable results can be obtained to guide the design process of structures such as pedestrian bridges.

  16. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  17. Prediction of velocity and attitude of a yacht sailing upwind by computational fluid dynamics

    Directory of Open Access Journals (Sweden)

    Heebum Lee

    2016-01-01

    Full Text Available One of the most important factors in sailing yacht design is accurate velocity prediction. Velocity prediction programs (VPPs) are widely used to predict the velocity of sailing yachts. VPPs, which are primarily based on experimental data and long years of experience, however, suffer limitations when applied in realistic conditions. Thus, in the present study, a high-fidelity velocity prediction method using computational fluid dynamics (CFD) was proposed. Using the developed method, the velocity and attitude of a 30-foot sloop yacht, which was developed by the Korea Research Institute of Ship and Ocean (KRISO) and termed KORDY30, were predicted in upwind sailing conditions.

  18. Prediction of intestinal absorption and blood-brain barrier penetration by computational methods.

    Science.gov (United States)

    Clark, D E

    2001-09-01

    This review surveys the computational methods that have been developed with the aim of identifying drug candidates likely to fail later on the road to market. The specifications for such computational methods are outlined, including factors such as speed, interpretability, robustness and accuracy. Then, computational filters aimed at predicting "drug-likeness" in a general sense are discussed before methods for the prediction of more specific properties--intestinal absorption and blood-brain barrier penetration--are reviewed. Directions for future research are discussed and, in concluding, the impact of these methods on the drug discovery process, both now and in the future, is briefly considered.

  19. Computer prediction of subsurface radionuclide transport: an adaptive numerical method

    International Nuclear Information System (INIS)

    Neuman, S.P.

    1983-01-01

    Radionuclide transport in the subsurface is often modeled with the aid of the advection-dispersion equation. A review of existing computer methods for the solution of this equation shows that there is need for improvement. To answer this need, a new adaptive numerical method is proposed based on an Eulerian-Lagrangian formulation. The method is based on a decomposition of the concentration field into two parts, one advective and one dispersive, in a rigorous manner that does not leave room for ambiguity. The advective component of steep concentration fronts is tracked forward with the aid of moving particles clustered around each front. Away from such fronts the advection problem is handled by an efficient modified method of characteristics called single-step reverse particle tracking. When a front dissipates with time, its forward tracking stops automatically and the corresponding cloud of particles is eliminated. The dispersion problem is solved by an unconventional Lagrangian finite element formulation on a fixed grid which involves only symmetric and diagonal matrices. Preliminary tests against analytical solutions of one- and two-dimensional dispersion in a uniform steady state velocity field suggest that the proposed adaptive method can handle the entire range of Peclet numbers from 0 to infinity, with Courant numbers well in excess of 1.
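
    The decomposition of the advection-dispersion equation into an advective and a dispersive part can be illustrated in one dimension with a simple operator-splitting step: advect the concentration profile along characteristics, then apply an explicit dispersion update. The grid, velocity and dispersion coefficient below are arbitrary, and the scheme is a plain textbook split rather than the adaptive Eulerian-Lagrangian particle-tracking method proposed in the record.

```python
import numpy as np

# 1-D advection-dispersion: dc/dt + u dc/dx = D d2c/dx2
nx, L = 200, 100.0
dx = L / nx
x = np.linspace(0.0, L, nx)
u, D, dt, nsteps = 1.0, 0.2, 0.5, 60           # Courant number u*dt/dx = 1 here

c = np.exp(-((x - 20.0) ** 2) / 10.0)          # initial concentration pulse

for _ in range(nsteps):
    # Advection step along characteristics: shift the profile by u*dt
    shift = int(round(u * dt / dx))
    c = np.roll(c, shift)
    c[:shift] = 0.0                            # inflow boundary carries zero concentration
    # Dispersion step: explicit central difference (stable while D*dt/dx**2 <= 0.5)
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx ** 2
    c = c + D * dt * lap

print("peak location:", x[np.argmax(c)], "peak value:", round(float(c.max()), 3))
```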

  20. SOFT COMPUTING SINGLE HIDDEN LAYER MODELS FOR SHELF LIFE PREDICTION OF BURFI

    Directory of Open Access Journals (Sweden)

    Sumit Goyal

    2012-05-01

    Full Text Available Burfi is an extremely popular sweetmeat, which is prepared by desiccating standardized water buffalo milk. Soft computing feedforward single-layer models were developed for predicting the shelf life of burfi stored at 30 °C. The data of the product relating to moisture, titratable acidity, free fatty acids, tyrosine, and peroxide value were used as input variables, and the overall acceptability score as the output variable. The results showed excellent agreement between the experimental and the predicted data, suggesting that the developed soft computing model can alternatively be used for predicting the shelf life of burfi.

  1. Modeling fructose-load-induced hepatic de-novo lipogenesis by model simplification

    Directory of Open Access Journals (Sweden)

    Richard J Allen

    2017-03-01

    Full Text Available Hepatic de-novo lipogenesis is a metabolic process implicated in the pathogenesis of type 2 diabetes. Clinically, the rate of this process can be ascertained by use of labeled acetate and stimulation by fructose administration. A systems pharmacology model of this process is desirable because it facilitates the description, analysis, and prediction of this experiment. Due to the multiple enzymes involved in de-novo lipogenesis, and the limited data, it is desirable to use single functional expressions to encapsulate the flux between multiple enzymes. To accomplish this we developed a novel simplification technique which uses the available information about the properties of the individual enzymes to bound the parameters of a single governing ‘transfer function’. This method should be applicable to any model with linear chains of enzymes that are well stimulated. We validated this approach with computational simulations and analytical justification in a limiting case. Using this technique we generated a simple model of hepatic de-novo lipogenesis in these experimental conditions that matched prior data. This model can be used to assess pharmacological intervention at specific points on this pathway. We have demonstrated this with prospective simulation of acetyl-CoA carboxylase inhibition. This simplification technique suggests how the constituent properties of an enzymatic chain of reactions give rise to the sensitivity (to substrate) of the pathway as a whole.

  2. Modeling fructose-load-induced hepatic de-novo lipogenesis by model simplification.

    Science.gov (United States)

    Allen, Richard J; Musante, Cynthia J

    2017-01-01

    Hepatic de-novo lipogenesis is a metabolic process implicated in the pathogenesis of type 2 diabetes. Clinically, the rate of this process can be ascertained by use of labeled acetate and stimulation by fructose administration. A systems pharmacology model of this process is desirable because it facilitates the description, analysis, and prediction of this experiment. Due to the multiple enzymes involved in de-novo lipogenesis, and the limited data, it is desirable to use single functional expressions to encapsulate the flux between multiple enzymes. To accomplish this we developed a novel simplification technique which uses the available information about the properties of the individual enzymes to bound the parameters of a single governing 'transfer function'. This method should be applicable to any model with linear chains of enzymes that are well stimulated. We validated this approach with computational simulations and analytical justification in a limiting case. Using this technique we generated a simple model of hepatic de-novo lipogenesis in these experimental conditions that matched prior data. This model can be used to assess pharmacological intervention at specific points on this pathway. We have demonstrated this with prospective simulation of acetyl-CoA carboxylase inhibition. This simplification technique suggests how the constituent properties of an enzymatic chain of reactions give rise to the sensitivity (to substrate) of the pathway as a whole.
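
    To illustrate what collapsing a linear enzyme chain into a single 'transfer function' means in practice, the snippet below simulates a two-step Michaelis-Menten chain to steady state and fits a single Michaelis-Menten expression to the resulting flux over a range of substrate (fructose) loads; all kinetic constants are invented and the parameter-bounding procedure of the paper is not reproduced.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

# Two-step Michaelis-Menten chain: S -> I -> P, with invented kinetic constants
VMAX1, KM1 = 2.0, 1.0
VMAX2, KM2 = 3.0, 0.5

def chain(y, t, s):
    """ODE for the intermediate I at a fixed substrate (fructose load) s."""
    i = y[0]
    v1 = VMAX1 * s / (KM1 + s)        # production of I from S
    v2 = VMAX2 * i / (KM2 + i)        # consumption of I to P
    return [v1 - v2]

def steady_flux(s):
    """Integrate long enough to reach steady state; the pathway flux is then v2."""
    i_ss = odeint(chain, [0.0], np.linspace(0, 50, 500), args=(s,))[-1, 0]
    return VMAX2 * i_ss / (KM2 + i_ss)

substrate = np.linspace(0.1, 10, 25)
flux = np.array([steady_flux(s) for s in substrate])

# Fit a single Michaelis-Menten "transfer function" to the whole chain
mm = lambda s, vmax, km: vmax * s / (km + s)
(vmax_fit, km_fit), _ = curve_fit(mm, substrate, flux, p0=[2.0, 1.0])
print(f"effective transfer function: Vmax={vmax_fit:.2f}, Km={km_fit:.2f}")
```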

  3. Novel computational methods to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.

    2017-12-01

    Computational drug repurposing aims at finding new medical uses for existing drugs. The identification of novel drug-target interactions (DTIs) can be a useful part of such a task. Computational determination of DTIs is a convenient strategy for systematic screening of a large number of drugs in the attempt to identify new DTIs at low cost and with reasonable accuracy. This necessitates the development of accurate computational methods that can help focus the follow-up experimental validation on a smaller number of highly likely targets for a drug. Although many methods have been proposed for computational DTI prediction, they suffer from high false-positive prediction rates or they do not predict the effect that drugs exert on targets in DTIs. In this report, first, we present a comprehensive review of the recent progress in the field of DTI prediction from data-centric and algorithm-centric perspectives. The aim is to provide a comprehensive review of computational methods for identifying DTIs, which could help in constructing more reliable methods. Then, we present DDR, an efficient method to predict the existence of DTIs. DDR achieves significantly more accurate results compared to the other state-of-the-art methods. As supported by independent evidence, we verified as correct 22 out of the top 25 DDR DTI predictions. This validation proves the practical utility of DDR, suggesting that DDR can be used as an efficient method to identify correct DTIs. Finally, we present the DDR-FE method, which predicts the effect types of a drug on its target. On different representative datasets, under various test setups, and using different performance measures, we show that DDR-FE achieves extremely good performance. Using blind test data, we verified as correct 2,300 out of 3,076 DTI effects predicted by DDR-FE. This suggests that DDR-FE can be used as an efficient method to identify correct effects of a drug on its target.

  4. Five-year clinical and functional multislice computed tomography angiographic results after coronary implantation of the fully resorbable polymeric everolimus-eluting scaffold in patients with de novo coronary artery disease: the ABSORB cohort A trial.

    Science.gov (United States)

    Onuma, Yoshinobu; Dudek, Dariusz; Thuesen, Leif; Webster, Mark; Nieman, Koen; Garcia-Garcia, Hector M; Ormiston, John A; Serruys, Patrick W

    2013-10-01

    This study sought to demonstrate the 5-year clinical and functional multislice computed tomography angiographic results after implantation of the fully resorbable everolimus-eluting scaffold (Absorb BVS, Abbott Vascular, Santa Clara, California). Multimodality imaging of the first-in-humans trial using an Absorb BVS scaffold demonstrated at 2 years the bioresorption of the device while preventing restenosis. However, the long-term safety and efficacy of this therapy remain to be documented. In the ABSORB cohort A trial (ABSORB Clinical Investigation, Cohort A [ABSORB A] Everolimus-Eluting Coronary Stent System Clinical Investigation), 30 patients with a single de novo coronary artery lesion were treated with the fully resorbable everolimus-eluting Absorb scaffold at 4 centers. As an optional investigation in 3 of the 4 centers, the patients underwent multislice computed tomography (MSCT) angiography at 18 months and 5 years. Acquired MSCT data were analyzed at an independent core laboratory (Cardialysis, Rotterdam, the Netherlands) for quantitative analysis of lumen dimensions and were further processed for calculation of fractional flow reserve (FFR) at another independent core laboratory (Heart Flow, Redwood City, California). Five-year clinical follow-up is available for 29 patients. One patient withdrew consent after 6 months, but the vital status of this patient remains available. At 46 days, 1 patient experienced a single episode of chest pain and underwent a target lesion revascularization with a slight troponin increase after the procedure. At 5 years, the ischemia-driven major adverse cardiac event rate of 3.4% remained unchanged. Clopidogrel was discontinued in all but 1 patient. Scaffold thrombosis was not observed in any patient. Two noncardiac deaths were reported, 1 caused by duodenal perforation and the other from Hodgkin's disease. At 5 years, 18 patients underwent MSCT angiography. All scaffolds were patent, with a median minimal lumen area of 3

  5. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, based on an analysis of the characteristics and defects of the genetic algorithm and the support vector machine. In the cloud computing environment, the SVM parameters are first optimized by the parallel genetic algorithm, and then this optimized parallel SVM model is used to predict traffic flow. On the basis of the traffic flow data of Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and the parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
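
    A drastically simplified, single-machine version of the GA-SVM idea (without the cloud or MPI parallelization) can be written with scikit-learn: a small genetic algorithm searches over the SVR cost and kernel-width parameters using cross-validated error as the fitness. The synthetic traffic series and GA settings below are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Synthetic "traffic flow" series: daily cycle plus noise, predicted from lagged values
t = np.arange(500)
flow = 300 + 120 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 15, len(t))
lags = 4
X = np.column_stack([flow[i:len(flow) - lags + i] for i in range(lags)])
y = flow[lags:]

def fitness(ind):
    """Negative cross-validated MSE of an SVR with parameters (log2 C, log2 gamma)."""
    model = SVR(C=2.0 ** ind[0], gamma=2.0 ** ind[1])
    return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

pop = rng.uniform(-5, 5, size=(12, 2))                      # initial population
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:6]]                                # selection: keep the best half
    children = []
    for _ in range(6):                                      # crossover plus Gaussian mutation
        a, b = parents[rng.integers(6)], parents[rng.integers(6)]
        child = np.where(rng.random(2) < 0.5, a, b) + rng.normal(0, 0.3, 2)
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"best parameters: C=2^{best[0]:.2f}, gamma=2^{best[1]:.2f}")
```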

  6. Proposal of computation chart for general use for diffusion prediction of discharged warm water

    International Nuclear Information System (INIS)

    Wada, Akira; Kadoyu, Masatake

    1976-01-01

    The authors have developed a unique simulation analysis method using numerical models for the prediction of discharged warm water diffusion. At the present stage, the method is adopted for precise analysis in order to predict the diffusion of discharged warm water at each survey point, but instead of this method, it is strongly requested that some simple and easy prediction methods be established. For the purpose of meeting this demand, in this report a computation chart for general use is given to predict simply the diffusion range of discharged warm water, after classifying the semi-infinite sea region into several flow patterns according to the sea conditions and conducting a systematic simulation analysis with the numerical model of each pattern. (1) Establishment of the computation conditions: a special sea region, semi-infinite, facing the outer sea and along a rectilinear coastline, was selected from the many sea regions surrounding Japan as the area to be investigated, and from the viewpoint of the flow and diffusion characteristics, the sea region was classified into three patterns. In total, 51 cases with various parameters were obtained, and finally the simulation analysis was performed. (2) Drawing up the general-use chart: 28 sheets of the computation chart for general use were drawn, which are available for computing the approximate temperature rise caused by the discharged warm water diffusion. The example of the Anegasaki Thermal Power Station is given. (Kako, I.)

  7. Computer program for prediction of the deposition of material released from fixed and rotary wing aircraft

    Science.gov (United States)

    Teske, M. E.

    1984-01-01

    This is a user manual for the computer code "AGDISP" (AGricultural DISPersal) which has been developed to predict the deposition of material released from fixed and rotary wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are simultaneously predicted. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy and terrain on the deposition pattern.

  8. De novo pathway-based biomarker identification

    DEFF Research Database (Denmark)

    Alcaraz, Nicolas; List, Markus; Batra, Richa

    2017-01-01

    Gene expression profiles have been extensively discussed as an aid to guide the therapy by predicting disease outcome for the patients suffering from complex diseases, such as cancer. However, prediction models built upon single-gene (SG) features show poor stability and performance on independent datasets. Stratifying patients based on their molecular subtypes can provide a detailed view of the disease and lead to more personalized therapies. We propose and discuss a novel MG approach based on de novo pathways, which for the first time have been used as features in a multi-class setting to predict cancer subtypes. Comprehensive evaluation in a large cohort of breast cancer samples from The Cancer Genome Atlas (TCGA) revealed that MGs are considerably more stable than SG models, while also providing valuable insight into the cancer hallmarks that drive them. In addition, when tested on an independent benchmark non-TCGA dataset, MG features...

  9. Hidden Hearing Loss and Computational Models of the Auditory Pathway: Predicting Speech Intelligibility Decline

    Science.gov (United States)

    2016-11-28

    Christopher J. Smalt ... representation of speech intelligibility in noise. The auditory-periphery model of Zilany et al. (JASA 2009, 2014) is used to make predictions of auditory nerve (AN) responses to speech stimuli under a variety of difficult listening conditions. The resulting cochlear neurogram, a spectrogram...

  10. Three-dimensional computed tomographic volumetry precisely predicts the postoperative pulmonary function.

    Science.gov (United States)

    Kobayashi, Keisuke; Saeki, Yusuke; Kitazawa, Shinsuke; Kobayashi, Naohiro; Kikuchi, Shinji; Goto, Yukinobu; Sakai, Mitsuaki; Sato, Yukio

    2017-11-01

    It is important to accurately predict the patient's postoperative pulmonary function. The aim of this study was to compare the accuracy of predictions of the postoperative residual pulmonary function obtained with three-dimensional computed tomographic (3D-CT) volumetry with that of predictions obtained with the conventional segment-counting method. Fifty-three patients scheduled to undergo lung cancer resection, pulmonary function tests, and computed tomography were enrolled in this study. The postoperative residual pulmonary function was predicted based on the segment-counting and 3D-CT volumetry methods. The predicted postoperative values were compared with the results of postoperative pulmonary function tests. Regarding the linear correlation coefficients between the predicted postoperative values and the measured values, those obtained using the 3D-CT volumetry method tended to be higher than those acquired using the segment-counting method. In addition, the variations between the predicted and measured values were smaller with the 3D-CT volumetry method than with the segment-counting method. These results were more obvious in COPD patients than in non-COPD patients. Our findings suggested that the 3D-CT volumetry was able to predict the residual pulmonary function more accurately than the segment-counting method, especially in patients with COPD. This method might lead to the selection of appropriate candidates for surgery among patients with a marginal pulmonary function.

  11. Free energy minimization to predict RNA secondary structures and computational RNA design.

    Science.gov (United States)

    Churkin, Alexander; Weinbrand, Lina; Barash, Danny

    2015-01-01

    Determining the RNA secondary structure from sequence data by computational predictions is a long-standing problem. Its solution has been approached in two distinctive ways. If a multiple sequence alignment of a collection of homologous sequences is available, the comparative method uses phylogeny to determine conserved base pairs that are more likely to form as a result of billions of years of evolution than by chance. In the case of single sequences, recursive algorithms that compute free energy structures by using empirically derived energy parameters have been developed. This latter approach of RNA folding prediction by energy minimization is widely used to predict RNA secondary structure from sequence. For a significant number of RNA molecules, the secondary structure of the RNA molecule is indicative of its function and its computational prediction by minimizing its free energy is important for its functional analysis. A general method for free energy minimization to predict RNA secondary structures is dynamic programming, although other optimization methods have been developed as well along with empirically derived energy parameters. In this chapter, we introduce and illustrate by examples the approach of free energy minimization to predict RNA secondary structures.
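    As a concrete illustration of the dynamic-programming idea described above, the sketch below implements the classic Nussinov algorithm, which maximizes the number of nested base pairs; it is a toy stand-in for true free energy minimization, which instead uses empirically derived nearest-neighbour energy parameters.

        def nussinov_pairs(seq, min_loop=3):
            """Maximum number of nested base pairs (a toy stand-in for energy minimization)."""
            can_pair = {("A", "U"), ("U", "A"), ("G", "C"),
                        ("C", "G"), ("G", "U"), ("U", "G")}
            n = len(seq)
            dp = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):           # subsequence length minus one
                for i in range(n - span):
                    j = i + span
                    best = dp[i][j - 1]                    # case 1: j left unpaired
                    for k in range(i, j - min_loop):       # case 2: j paired with some k
                        if (seq[k], seq[j]) in can_pair:
                            left = dp[i][k - 1] if k > i else 0
                            best = max(best, left + dp[k + 1][j - 1] + 1)
                    dp[i][j] = best
            return dp[0][n - 1]

        print(nussinov_pairs("GGGAAAUCC"))   # small hairpin-like example sequence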

  12. Towards pattern generation and chaotic series prediction with photonic reservoir computers

    Science.gov (United States)

    Antonik, Piotr; Hermans, Michiel; Duport, François; Haelterman, Marc; Massar, Serge

    2016-03-01

    Reservoir Computing is a bio-inspired computing paradigm for processing time-dependent signals that is particularly well suited for analog implementations. Our team has demonstrated several photonic reservoir computers with performance comparable to digital algorithms on a series of benchmark tasks such as channel equalisation and speech recognition. Recently, we showed that our opto-electronic reservoir computer could be trained online with a simple gradient descent algorithm programmed on an FPGA chip. This setup makes it in principle possible to feed the output signal back into the reservoir, and thus highly enrich the dynamics of the system. This will make it possible to tackle complex prediction tasks in hardware, such as pattern generation and chaotic and financial series prediction, which have so far only been studied in digital implementations. Here we report simulation results of our opto-electronic setup with an FPGA chip and output feedback applied to pattern generation and Mackey-Glass chaotic series prediction. The simulations take into account the major aspects of our experimental setup. We find that pattern generation can be easily implemented on the current setup with very good results. The Mackey-Glass series prediction task is more complex and requires a large reservoir and a more elaborate training algorithm. With these adjustments promising results are obtained, and we now know what improvements are needed to match previously reported numerical results. These simulation results will serve as a basis of comparison for the experiments we will carry out in the coming months.
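    The opto-electronic hardware itself cannot be reproduced in a listing, but the core idea can be sketched in software: the hypothetical Python example below builds a small echo state network (a digital reservoir), drives it with a Mackey-Glass series generated by Euler integration, and trains a ridge-regression readout for one-step prediction; the FPGA, output feedback and analog aspects of the record are not modelled.

        import numpy as np

        rng = np.random.default_rng(0)

        # --- Mackey-Glass series via Euler integration (tau = 17) ---
        def mackey_glass(n, tau=17, dt=1.0):
            hist = [1.2] * (tau + 1)
            for _ in range(n):
                x_t, x_tau = hist[-1], hist[-tau - 1]
                hist.append(x_t + dt * (0.2 * x_tau / (1 + x_tau ** 10) - 0.1 * x_t))
            return np.array(hist[tau + 1:])

        series = mackey_glass(3000)
        u, y = series[:-1], series[1:]            # one-step-ahead prediction task

        # --- Random reservoir with spectral radius ~0.9 ---
        N = 300
        W_in = (rng.random((N, 1)) - 0.5) * 1.0
        W = rng.random((N, N)) - 0.5
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))

        states = np.zeros((len(u), N))
        x = np.zeros(N)
        for t, ut in enumerate(u):
            x = np.tanh(W_in[:, 0] * ut + W @ x)  # leaky-free state update
            states[t] = x

        # --- Ridge-regression readout trained on the first 2000 points ---
        washout, split = 100, 2000
        X_tr, y_tr = states[washout:split], y[washout:split]
        ridge = 1e-6
        W_out = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(N), X_tr.T @ y_tr)

        pred = states[split:] @ W_out
        nrmse = np.sqrt(np.mean((pred - y[split:]) ** 2)) / np.std(y[split:])
        print(f"one-step NRMSE on held-out data: {nrmse:.4f}")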

  13. Role of computer graphics in space telerobotics - Preview and predictive displays

    Science.gov (United States)

    Bejczy, Antal K.; Venema, Steven; Kim, Won S.

    1991-01-01

    The application of computer graphics in space telerobotics research and development work is briefly reviewed and illustrated by specific examples implemented in real time operation. The applications are discussed under the following four major categories: preview displays, predictive displays, sensor data displays, and control system status displays.

  14. Computer code to predict the heat of explosion of high energy materials

    International Nuclear Information System (INIS)

    Muthurajan, H.; Sivabalan, R.; Pon Saravanan, N.; Talawar, M.B.

    2009-01-01

    The computational approach to the thermochemical changes involved in the process of explosion of high energy materials (HEMs) vis-a-vis their molecular structure aids HEMs chemists/engineers in predicting important thermodynamic parameters such as the heat of explosion of the HEMs. Such computer-aided design is useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules that have significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products (including balanced explosion reactions), density of HEMs, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔHe) without any experimental data for different HEMs, and the predictions are comparable with experimental results reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. Linear regression analysis of all data points yields the correlation coefficient R2 = 0.9721 with a linear equation y = 0.9262x + 101.45. The correlation coefficient value of 0.9721 reveals that the computed values are in good agreement with experimental values and are useful for rapid hazard assessment of energetic materials.

  15. Prediction of intramuscular fat levels in Texel lamb loins using X-ray computed tomography scanning.

    Science.gov (United States)

    Clelland, N; Bunger, L; McLean, K A; Conington, J; Maltin, C; Knott, S; Lambe, N R

    2014-10-01

    For the consumer, tenderness, juiciness and flavour are often described as the most important factors for meat eating quality, all of which have a close association with intramuscular fat (IMF). X-ray computed tomography (CT) can measure fat, muscle and bone volumes and weights, in vivo in sheep and CT predictions of carcass composition have been used in UK sheep breeding programmes over the last few decades. This study aimed to determine the most accurate combination of CT variables to predict IMF percentage of M. longissimus lumborum in Texel lambs. As expected, predicted carcass fat alone accounted for a moderate amount of the variation (R2 = 0.51) in IMF. Prediction accuracies were significantly improved (adjusted R2 > 0.65) using information on fat and muscle densities measured from three CT reference scans, showing that CT can provide an accurate prediction of IMF in the loin of purebred Texel sheep. Copyright © 2014. Published by Elsevier Ltd.

  16. THE INTEGRATED USE OF COMPUTATIONAL CHEMISTRY, SCANNING PROBE MICROSCOPY, AND VIRTUAL REALITY TO PREDICT THE CHEMICAL REACTIVITY OF ENVIRONMENTAL SURFACES

    Science.gov (United States)

    In the last decade, three new techniques, scanning probe microscopy (SPM), virtual reality (VR) and computational chemistry, have emerged with the combined capability of a priori predicting the chemical reactivity of environmental surfaces. Computational chemistry provides the cap...

  17. Multiple-Swarm Ensembles: Improving the Predictive Power and Robustness of Predictive Models and Its Use in Computational Biology.

    Science.gov (United States)

    Alves, Pedro; Liu, Shuang; Wang, Daifeng; Gerstein, Mark

    2018-01-01

    Machine learning is an integral part of computational biology, and has already shown its use in various applications, such as prognostic tests. In the last few years in the non-biological machine learning community, ensembling techniques have shown their power in data mining competitions such as the Netflix challenge; however, such methods have not found wide use in computational biology. In this work, we endeavor to show how ensembling techniques can be applied to practical problems, including problems in the field of bioinformatics, and how they often outperform other machine learning techniques in both predictive power and robustness. Furthermore, we develop a methodology of ensembling, Multi-Swarm Ensemble (MSWE) by using multiple particle swarm optimizations and demonstrate its ability to further enhance the performance of ensembles.

  18. In silico toxicology: computational methods for the prediction of chemical toxicity

    KAUST Repository

    Raies, Arwa B.; Bajic, Vladimir B.

    2016-01-01

    Determining the toxicity of chemicals is necessary to identify their harmful effects on humans, animals, plants, or the environment. It is also one of the main steps in drug design. Animal models have been used for a long time for toxicity testing. However, in vivo animal tests are constrained by time, ethical considerations, and financial burden. Therefore, computational methods for estimating the toxicity of chemicals are considered useful. In silico toxicology is one type of toxicity assessment that uses computational methods to analyze, simulate, visualize, or predict the toxicity of chemicals. In silico toxicology aims to complement existing toxicity tests to predict toxicity, prioritize chemicals, guide toxicity tests, and minimize late-stage failures in drugs design. There are various methods for generating models to predict toxicity endpoints. We provide a comprehensive overview, explain, and compare the strengths and weaknesses of the existing modeling methods and algorithms for toxicity prediction with a particular (but not exclusive) emphasis on computational tools that can implement these methods and refer to expert systems that deploy the prediction models. Finally, we briefly review a number of new research directions in in silico toxicology and provide recommendations for designing in silico models.

  19. In silico toxicology: computational methods for the prediction of chemical toxicity

    KAUST Repository

    Raies, Arwa B.

    2016-01-06

    Determining the toxicity of chemicals is necessary to identify their harmful effects on humans, animals, plants, or the environment. It is also one of the main steps in drug design. Animal models have been used for a long time for toxicity testing. However, in vivo animal tests are constrained by time, ethical considerations, and financial burden. Therefore, computational methods for estimating the toxicity of chemicals are considered useful. In silico toxicology is one type of toxicity assessment that uses computational methods to analyze, simulate, visualize, or predict the toxicity of chemicals. In silico toxicology aims to complement existing toxicity tests to predict toxicity, prioritize chemicals, guide toxicity tests, and minimize late-stage failures in drugs design. There are various methods for generating models to predict toxicity endpoints. We provide a comprehensive overview, explain, and compare the strengths and weaknesses of the existing modeling methods and algorithms for toxicity prediction with a particular (but not exclusive) emphasis on computational tools that can implement these methods and refer to expert systems that deploy the prediction models. Finally, we briefly review a number of new research directions in in silico toxicology and provide recommendations for designing in silico models.

  20. Use of computer-assisted prediction of toxic effects of chemical substances

    International Nuclear Information System (INIS)

    Simon-Hettich, Brigitte; Rothfuss, Andreas; Steger-Hartmann, Thomas

    2006-01-01

    The current revision of the European policy for the evaluation of chemicals (REACH) has led to a controversy with regard to the need for additional animal safety testing. To avoid increases in animal testing but also to save time and resources, alternative in silico or in vitro tests for the assessment of toxic effects of chemicals are advocated. The draft of the original document, issued on 29 October 2003 by the European Commission, foresees the use of alternative methods but does not give further specification on which methods should be used. Computer-assisted prediction models, so-called predictive tools, besides in vitro models, will likely play an essential role in the proposed repertoire of 'alternative methods'. The current discussion has urged the Advisory Committee of the German Toxicology Society to present its position on the use of predictive tools in toxicology. Acceptable prediction models already exist for those toxicological endpoints which are based on well-understood mechanisms, such as mutagenicity and skin sensitization, whereas mechanistically more complex endpoints such as acute, chronic or organ toxicities currently cannot be satisfactorily predicted. A potential strategy to assess such complex toxicities will lie in their dissection into models for the different steps or pathways leading to the final endpoint. Integration of these models should result in a higher predictivity. Despite these limitations, computer-assisted prediction tools already today play a complementary role for the assessment of chemicals for which no data are available or for which toxicological testing is impractical due to the lack of availability of sufficient compounds for testing. Furthermore, predictive tools offer support in the screening and subsequent prioritization of compounds for further toxicological testing, as expected within the scope of the European REACH program. This program will also lead to the collection of high-quality data which will broaden the

  1. Computer-aided and predictive models for design of controlled release of pesticides

    DEFF Research Database (Denmark)

    Suné, Nuria Muro; Gani, Rafiqul

    2004-01-01

    In the field of pesticide controlled release technology, a computer based model that can predict the delivery of the Active Ingredient (AI) from fabricated units is important for purposes of product design and marketing. A model for the release of an AI from a microcapsule device is presented in this paper, together with a specific case study application to highlight its scope and significance. The paper also addresses the need for predictive models and proposes a computer aided modelling framework for achieving it through the development and introduction of reliable and predictive constitutive models. A group-contribution based model for one of the constitutive variables (AI solubility in polymers) is presented together with examples of application and validation.

  2. A community computational challenge to predict the activity of pairs of compounds.

    Science.gov (United States)

    Bansal, Mukesh; Yang, Jichen; Karan, Charles; Menden, Michael P; Costello, James C; Tang, Hao; Xiao, Guanghua; Li, Yajuan; Allen, Jeffrey; Zhong, Rui; Chen, Beibei; Kim, Minsoo; Wang, Tao; Heiser, Laura M; Realubit, Ronald; Mattioli, Michela; Alvarez, Mariano J; Shen, Yao; Gallahan, Daniel; Singer, Dinah; Saez-Rodriguez, Julio; Xie, Yang; Stolovitzky, Gustavo; Califano, Andrea

    2014-12-01

    Recent therapeutic successes have renewed interest in drug combinations, but experimental screening approaches are costly and often identify only small numbers of synergistic combinations. The DREAM consortium launched an open challenge to foster the development of in silico methods to computationally rank 91 compound pairs, from the most synergistic to the most antagonistic, based on gene-expression profiles of human B cells treated with individual compounds at multiple time points and concentrations. Using scoring metrics based on experimental dose-response curves, we assessed 32 methods (31 community-generated approaches and SynGen), four of which performed significantly better than random guessing. We highlight similarities between the methods. Although the accuracy of predictions was not optimal, we find that computational prediction of compound-pair activity is possible, and that community challenges can be useful to advance the field of in silico compound-synergy prediction.

  3. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network.

    Science.gov (United States)

    Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only at the confined levels of density, and there is no specific computational method for calculating thermal conductivity in the wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152 was predicted by these methods and the effectiveness of models was specified and compared. Results: The results show that the computational method is a usable method for predicting thermal conductivity at low levels of density. However, the efficiency of this model is considerably reduced in the mid-range of density. It means that this model cannot be used at density levels which are higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of ANN is achieved when the number of units is increased in the hidden layer. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density at higher densities is eliminated. It can develop a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity in wide ranges of density. Instead, a nonlinear approach such as, ANN is a valuable method for this purpose.
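    Purely as an illustration of the ANN approach (the refrigerant data of the study are not reproduced here), the following sketch fits a small multilayer perceptron to a synthetic, nonlinear conductivity-versus-density-and-temperature relation using scikit-learn; the functional form and noise level are invented.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)

        # Synthetic stand-in: conductivity as a nonlinear function of reduced density
        # and temperature (invented relation, not real refrigerant data).
        density = rng.uniform(0.1, 10.0, 2000)
        temperature = rng.uniform(250.0, 400.0, 2000)
        conductivity = (0.01 * np.sqrt(temperature)
                        + 0.002 * density ** 2.2
                        + rng.normal(0.0, 0.005, 2000))

        X = np.column_stack([density, temperature])
        X_train, X_test, y_train, y_test = train_test_split(
            X, conductivity, test_size=0.25, random_state=0)

        ann = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(32, 32),
                                         max_iter=5000, random_state=0))
        ann.fit(X_train, y_train)
        print("R^2 on held-out data:", round(ann.score(X_test, y_test), 3))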

  4. Computational Efficient Upscaling Methodology for Predicting Thermal Conductivity of Nuclear Waste forms

    International Nuclear Information System (INIS)

    Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2011-01-01

    This study evaluated different upscaling methods to predict thermal conductivity in loaded nuclear waste form, a heterogeneous material system. The efficiency and accuracy of these methods were compared. Thermal conductivity of the loaded nuclear waste form is an important property for the waste form Integrated Performance and Safety Code (IPSC). The effective thermal conductivity, obtained from microstructure information and the local thermal conductivity of the different components, is critical in predicting the life and performance of the waste form during storage: the heat generated during storage is directly related to thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance and aging performance. Several methods, including the Taylor model, Sachs model, self-consistent model, and statistical upscaling models, were developed and implemented. Due to the absence of experimental data, prediction results from the finite element method (FEM) were used as a reference to determine the accuracy of the different upscaling models. Micrographs from different loadings of nuclear waste were used in the prediction of thermal conductivity. The prediction results demonstrated that, in terms of efficiency, the boundary models (Taylor and Sachs) are better than the self-consistent model, the statistical upscaling method and FEM. Balancing computational resources and accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste forms.

  5. A computational environment for long-term multi-feature and multi-algorithm seizure prediction.

    Science.gov (United States)

    Teixeira, C A; Direito, B; Costa, R P; Valderrama, M; Feldwisch-Drentrup, H; Nikolopoulos, S; Le Van Quyen, M; Schelter, B; Dourado, A

    2010-01-01

    The daily life of epilepsy patients is constrained by the possibility of occurrence of seizures. Until now, seizures cannot be predicted with sufficient sensitivity and specificity. Most of the seizure prediction studies have been focused on a small number of patients, frequently assuming unrealistic hypotheses. This paper adopts the view that for an appropriate development of reliable predictors one should consider long-term recordings and several features and algorithms integrated in one software tool. A computational environment, based on Matlab®, is presented, aiming to be an innovative tool for seizure prediction. It results from the need for a powerful and flexible tool for long-term EEG/ECG analysis by multiple features and algorithms. After being extracted, features can be subjected to several reduction and selection methods, and then used for prediction. The predictions can be conducted based on optimized thresholds or by applying computational intelligence methods. One important aspect is the integrated evaluation of the seizure prediction characteristic of the developed predictors.

  6. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    Science.gov (United States)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
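    A minimal numerical sketch of the unscented transform described above, with a generic nonlinear map standing in for the valve end-of-life simulation: sigma points are drawn from the current state distribution, propagated through the nonlinearity, and recombined into an approximate mean and covariance, which can then be compared against a brute-force Monte Carlo estimate; the damage-state dimensions and the toy map are assumptions for illustration.

        import numpy as np

        def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=1.0):
            """Approximate the mean/covariance of f(x), x ~ N(mean, cov), via sigma points."""
            n = mean.size
            lam = alpha ** 2 * (n + kappa) - n
            sqrt_cov = np.linalg.cholesky((n + lam) * cov)
            sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])  # 2n+1 points
            w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
            w_c = w_m.copy()
            w_m[0] = lam / (n + lam)
            w_c[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
            ys = np.array([f(s) for s in sigma])
            y_mean = w_m @ ys
            diff = ys - y_mean
            y_cov = (w_c * diff.T) @ diff
            return y_mean, y_cov

        # Toy nonlinear "end-of-life" map from a 2-D damage state (illustrative only).
        f = lambda x: np.atleast_1d(100.0 / (1.0 + x[0] ** 2 + 0.5 * np.exp(x[1])))

        mean = np.array([1.0, 0.5])
        cov = np.array([[0.04, 0.01], [0.01, 0.09]])
        ut_mean, ut_cov = unscented_transform(mean, cov, f)

        samples = np.random.default_rng(0).multivariate_normal(mean, cov, 100_000)
        mc = f(samples.T)                          # vectorised Monte Carlo reference
        print("UT mean / var:", ut_mean[0], ut_cov[0, 0])
        print("MC mean / var:", mc.mean(), mc.var())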

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  8. Integrating Crop Growth Models with Whole Genome Prediction through Approximate Bayesian Computation.

    Directory of Open Access Journals (Sweden)

    Frank Technow

    Full Text Available Genomic selection, enabled by whole genome prediction (WGP) methods, is revolutionizing plant breeding. Existing WGP methods have been shown to deliver accurate predictions in the most common settings, such as prediction of across environment performance for traits with additive gene effects. However, prediction of traits with non-additive gene effects and prediction of genotype by environment interaction (G×E) continues to be challenging. Previous attempts to increase prediction accuracy for these particularly difficult tasks employed prediction methods that are purely statistical in nature. Augmenting the statistical methods with biological knowledge has been largely overlooked thus far. Crop growth models (CGMs) attempt to represent the impact of functional relationships between plant physiology and the environment in the formation of yield and similar output traits of interest. Thus, they can explain the impact of G×E and certain types of non-additive gene effects on the expressed phenotype. Approximate Bayesian computation (ABC), a novel and powerful computational procedure, allows the incorporation of CGMs directly into the estimation of whole genome marker effects in WGP. Here we provide a proof of concept study for this novel approach and demonstrate its use with synthetic data sets. We show that this novel approach can be considerably more accurate than the benchmark WGP method GBLUP in predicting performance in environments represented in the estimation set as well as in previously unobserved environments for traits determined by non-additive gene effects. We conclude that this proof of concept demonstrates that using ABC for incorporating biological knowledge in the form of CGMs into WGP is a very promising and novel approach to improving prediction accuracy for some of the most challenging scenarios in plant breeding and applied genetics.
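    The crop growth model and marker data of the study are not available here, but the core ABC idea can be sketched in a few lines of Python: candidate parameters are drawn from a prior, pushed through a simulator, and retained only if the simulated data fall close enough to the observations. In this hypothetical example the 'simulator' is a toy yield-versus-environment curve and the single parameter plays the role of a marker effect.

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate_yield(effect, n_env=8):
            """Toy 'crop growth model': yield across environments for one genetic effect."""
            stress = np.linspace(0.0, 1.0, n_env)               # environmental gradient
            return 10.0 + effect * (1.0 - stress) ** 2 + rng.normal(0.0, 0.3, n_env)

        true_effect = 2.5
        observed = simulate_yield(true_effect)

        # ABC rejection sampling: keep prior draws whose simulated data lie close
        # (in Euclidean distance) to the observed data.
        prior_draws = rng.uniform(0.0, 5.0, 50_000)
        tolerance = 1.5
        accepted = [e for e in prior_draws
                    if np.linalg.norm(simulate_yield(e) - observed) < tolerance]

        posterior = np.array(accepted)
        print(f"accepted {posterior.size} draws; "
              f"posterior mean {posterior.mean():.2f} (true effect {true_effect})")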

  9. POLYAR, a new computer program for prediction of poly(A sites in human sequences

    Directory of Open Access Journals (Sweden)

    Qamar Raheel

    2010-11-01

    Full Text Available Abstract Background mRNA polyadenylation is an essential step of pre-mRNA processing in eukaryotes. Accurate prediction of the pre-mRNA 3'-end cleavage/polyadenylation sites is important for defining the gene boundaries and understanding gene expression mechanisms. Results 28761 human mapped poly(A) sites have been classified into three classes containing different known forms of polyadenylation signal (PAS) or none of them (PAS-strong, PAS-weak and PAS-less, respectively), and a new computer program POLYAR for the prediction of poly(A) sites of each class was developed. In comparison with polya_svm (till date the most accurate computer program for prediction of poly(A) sites) while searching for PAS-strong poly(A) sites in human sequences, POLYAR had a significantly higher prediction sensitivity (80.8% versus 65.7%) and specificity (66.4% versus 51.7%). However, when a similar sort of search was conducted for PAS-weak and PAS-less poly(A) sites, both programs had a very low prediction accuracy, which indicates that our knowledge about factors involved in the determination of the poly(A) sites is not sufficient to identify such polyadenylation regions. Conclusions We present a new classification of polyadenylation sites into three classes and a novel computer program POLYAR for prediction of poly(A) sites/regions of each of the class. In tests, POLYAR shows high accuracy of prediction of the PAS-strong poly(A) sites, though this program's efficiency in searching for PAS-weak and PAS-less poly(A) sites is not very high but is comparable to other available programs. These findings suggest that additional characteristics of such poly(A) sites remain to be elucidated. POLYAR program with a stand-alone version for downloading is available at http://cub.comsats.edu.pk/polyapredict.htm.
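    POLYAR itself is available at the URL above; purely to illustrate the first ingredient of such predictors, the hypothetical sketch below scans a DNA sequence for the canonical AATAAA poly(A) signal and its common ATTAAA variant, which on its own is far weaker than the classifier described in the record.

        import re

        def find_pas_hexamers(seq):
            """Return (position, hexamer) hits for two common poly(A) signal variants."""
            seq = seq.upper()
            hits = []
            for motif in ("AATAAA", "ATTAAA"):
                for m in re.finditer(motif, seq):
                    hits.append((m.start(), motif))
            return sorted(hits)

        example = "CCGTAATAAAGTCTGTTTATTAAACGTGTCATGCA"
        for pos, motif in find_pas_hexamers(example):
            print(f"{motif} at position {pos}")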

  10. Computational Prediction of Human Salivary Proteins from Blood Circulation and Application to Diagnostic Biomarker Identification

    Science.gov (United States)

    Wang, Jiaxin; Liang, Yanchun; Wang, Yan; Cui, Juan; Liu, Ming; Du, Wei; Xu, Ying

    2013-01-01

    Proteins can move from blood circulation into salivary glands through active transportation, passive diffusion or ultrafiltration, some of which are then released into saliva and hence can potentially serve as biomarkers for diseases if accurately identified. We present a novel computational method for predicting salivary proteins that come from circulation. The basis for the prediction is a set of physiochemical and sequence features we found to be discerning between human proteins known to be movable from circulation to saliva and proteins deemed to be not in saliva. A classifier was trained based on these features using a support-vector machine to predict protein secretion into saliva. The classifier achieved 88.56% average recall and 90.76% average precision in 10-fold cross-validation on the training data, indicating that the selected features are informative. Considering the possibility that our negative training data may not be highly reliable (i.e., proteins predicted to be not in saliva), we have also trained a ranking method, aiming to rank the known salivary proteins from circulation as the highest among the proteins in the general background, based on the same features. This prediction capability can be used to predict potential biomarker proteins for specific human diseases when coupled with the information of differentially expressed proteins in diseased versus healthy control tissues and a prediction capability for blood-secretory proteins. Using such integrated information, we predicted 31 candidate biomarker proteins in saliva for breast cancer. PMID:24324552
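    A minimal sketch of the kind of workflow described: a support-vector classifier evaluated with 10-fold cross-validation and scored by precision and recall; the feature matrix and labels below are random placeholders rather than the curated physiochemical and sequence features used in the paper.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_validate

        rng = np.random.default_rng(3)

        # Placeholder feature matrix: 500 "proteins" x 20 physiochemical/sequence features.
        X = rng.normal(size=(500, 20))
        w = rng.normal(size=20)
        y = (X @ w + rng.normal(0, 0.5, 500) > 0).astype(int)   # 1 = "reaches saliva"

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        scores = cross_validate(clf, X, y, cv=10, scoring=("precision", "recall"))

        print("mean precision:", scores["test_precision"].mean().round(3))
        print("mean recall   :", scores["test_recall"].mean().round(3))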

  11. A Review of Computational Methods to Predict the Risk of Rupture of Abdominal Aortic Aneurysms

    Directory of Open Access Journals (Sweden)

    Tejas Canchi

    2015-01-01

    Full Text Available Computational methods have played an important role in health care in recent years, as determining parameters that affect a certain medical condition is not possible in experimental conditions in many cases. Computational fluid dynamics (CFD) methods have been used to accurately determine the nature of blood flow in the cardiovascular and nervous systems and air flow in the respiratory system, thereby giving the surgeon a diagnostic tool to plan treatment accordingly. Machine learning or data mining (MLD) methods are currently used to develop models that learn from retrospective data to make a prediction regarding factors affecting the progression of a disease. These models have also been successful in incorporating factors such as patient history and occupation. MLD models can be used as a predictive tool to determine rupture potential in patients with abdominal aortic aneurysms (AAA), along with CFD-based prediction of parameters like wall shear stress and pressure distributions. A combination of these computer methods can be pivotal in bridging the gap between translational and outcomes research in medicine. This paper reviews the use of computational methods in the diagnosis and treatment of AAA.

  12. Computational prediction of miRNA genes from small RNA sequencing data

    Directory of Open Access Journals (Sweden)

    Wenjing eKang

    2015-01-01

    Full Text Available Next-generation sequencing now for the first time allows researchers to gauge the depth and variation of entire transcriptomes. However, as rare transcripts can now be detected that are present in cells at single copies, more advanced computational tools are needed to accurately annotate and profile them. miRNAs are 22 nucleotide small RNAs (sRNAs) that post-transcriptionally reduce the output of protein coding genes. They have established roles in numerous biological processes, including cancers and other diseases. During miRNA biogenesis, the sRNAs are sequentially cleaved from precursor molecules that have a characteristic hairpin RNA structure. The vast majority of new miRNA genes that are discovered are mined from small RNA sequencing (sRNA-seq), which can detect more than a billion RNAs in a single run. However, given that many of the detected RNAs are degradation products from all types of transcripts, the accurate identification of miRNAs remains a non-trivial computational problem. Here we review the tools available to predict animal miRNAs from sRNA sequencing data. We present tools for generalist and specialist use cases, including prediction from massively pooled data or in species without a reference genome. We also present wet-lab methods used to validate predicted miRNAs, and approaches to computationally benchmark prediction accuracy. For each tool, we reference validation experiments and benchmarking efforts. Last, we discuss the future of the field.

  13. Predicting the Pullout Capacity of Small Ground Anchors Using Nonlinear Integrated Computing Techniques

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2017-01-01

    Full Text Available This study investigates predicting the pullout capacity of small ground anchors using nonlinear computing techniques. The input-output prediction model for the nonlinear Hammerstein-Wiener (NHW) and delay inputs for the adaptive neurofuzzy inference system (DANFIS) are developed and utilized to predict the pullout capacity. The results of the developed models are compared with previous studies that used artificial neural networks and least square support vector machine techniques for the same case study. The in situ data collection and statistical performances are used to evaluate the models' performance. Results show that the developed models enhance the precision of predicting the pullout capacity when compared with previous studies. Also, the DANFIS model performance is proven to be better than that of the other models used to detect the pullout capacity of ground anchors.

  14. Enhancing performance of next generation FSO communication systems using soft computing-based predictions.

    Science.gov (United States)

    Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori

    2006-06-12

    The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce the link availability and may introduce burst errors thus degrading the performance of the system. We investigate the suitability of utilizing soft-computing (SC) based tools for improving performance of free-space optical (FSO) communications systems. The SC based tools are used for the prediction of key parameters of a FSO communications system. Measured data collected from an experimental FSO communication system is used as training and testing data for a proposed multi-layer neural network predictor (MNNP) used to predict future parameter values. The predicted parameters are essential for reducing transmission errors by improving the antenna's accuracy of tracking data beams. This is particularly essential for periods considered to be of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with original measurements.

  15. Self-learning computers for surgical planning and prediction of postoperative alignment.

    Science.gov (United States)

    Lafage, Renaud; Pesenti, Sébastien; Lafage, Virginie; Schwab, Frank J

    2018-02-01

    In past decades, the role of sagittal alignment has been widely demonstrated in the setting of spinal conditions. As several parameters can be affected, identifying the driver of the deformity is the cornerstone of a successful treatment approach. Despite the importance of restoring sagittal alignment for optimizing outcome, this task remains challenging. Self-learning computers and optimized algorithms are of great interest in spine surgery in that they facilitate better planning and prediction of postoperative alignment. Nowadays, computer-assisted tools are part of surgeons' daily practice; however, the use of such tools remains time-consuming. NARRATIVE REVIEW AND RESULTS: Computer-assisted methods for the prediction of postoperative alignment consist of a three-step analysis: identification of anatomical landmarks, definition of alignment objectives, and simulation of surgery. Recently, complex rules for the prediction of alignment have been proposed. Even though this kind of work leads to more personalized objectives, the number of parameters involved renders it difficult for clinical use, stressing the importance of developing computer-assisted tools. The evolution of our current technology, including machine learning and other types of advanced algorithms, will provide powerful tools that could be useful in improving surgical outcomes and alignment prediction. These tools can combine different types of advanced technologies, such as image recognition and shape modeling, and using this technique, computer-assisted methods are able to predict spinal shape. The development of powerful computer-assisted methods involves the integration of several sources of information such as radiographic parameters (X-rays, MRI, CT scan, etc.), demographic information, and unusual non-osseous parameters (muscle quality, proprioception, gait analysis data). In using a larger set of data, these methods will aim to mimic what is actually done by spine surgeons, leading

  16. De Novo Construction of Redox Active Proteins.

    Science.gov (United States)

    Moser, C C; Sheehan, M M; Ennist, N M; Kodali, G; Bialas, C; Englander, M T; Discher, B M; Dutton, P L

    2016-01-01

    Relatively simple principles can be used to plan and construct de novo proteins that bind redox cofactors and participate in a range of electron-transfer reactions analogous to those seen in natural oxidoreductase proteins. These designed redox proteins are called maquettes. Heptad repeats of amino acids with hydrophobic/hydrophilic binary patterning, linked together in a single chain, self-assemble into 4-alpha-helix bundles. These bundles form a robust and adaptable frame for uncovering the default properties of protein-embedded cofactors independent of the complexities introduced by generations of natural selection and allow us to better understand what factors can be exploited by man or nature to manipulate the physical chemical properties of these cofactors. Anchoring of redox cofactors such as hemes, light active tetrapyrroles, FeS clusters, and flavins by His and Cys residues allows cofactors to be placed at positions in which electron-tunneling rates between cofactors within or between proteins can be predicted in advance. The modularity of heptad repeat designs facilitates the construction of electron-transfer chains and novel combinations of redox cofactors and new redox cofactor-assisted functions. Developing de novo designs that can support cofactor incorporation upon expression in a cell is needed to support a synthetic biology advance that integrates with natural bioenergetic pathways. © 2016 Elsevier Inc. All rights reserved.

  17. Computational prediction of drug-drug interactions based on drugs functional similarities.

    Science.gov (United States)

    Ferdousi, Reza; Safdari, Reza; Omidi, Yadollah

    2017-06-01

    Therapeutic activities of drugs are often influenced by co-administration of drugs that may cause inevitable drug-drug interactions (DDIs) and inadvertent side effects. Prediction and identification of DDIs are extremely vital for the patient safety and success of treatment modalities. A number of computational methods have been employed for the prediction of DDIs based on drugs structures and/or functions. Here, we report on a computational method for DDIs prediction based on functional similarity of drugs. The model was set based on key biological elements including carriers, transporters, enzymes and targets (CTET). The model was applied for 2189 approved drugs. For each drug, all the associated CTETs were collected, and the corresponding binary vectors were constructed to determine the DDIs. Various similarity measures were conducted to detect DDIs. Of the examined similarity methods, the inner product-based similarity measures (IPSMs) were found to provide improved prediction values. Altogether, 2,394,766 potential drug pairs interactions were studied. The model was able to predict over 250,000 unknown potential DDIs. Upon our findings, we propose the current method as a robust, yet simple and fast, universal in silico approach for identification of DDIs. We envision that this proposed method can be used as a practical technique for the detection of possible DDIs based on the functional similarities of drugs. Copyright © 2017. Published by Elsevier Inc.
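    To make the idea concrete, here is a small hypothetical sketch in which each drug is encoded as a binary vector over a handful of invented carrier/transporter/enzyme/target (CTET) elements, and an inner-product-style (cosine) similarity between the vectors is used to rank candidate drug pairs; the drug names and annotations are made up for illustration.

        import numpy as np

        # Hypothetical CTET vocabulary and drug annotations (illustrative only).
        ctet = ["CYP3A4", "CYP2D6", "P-gp", "OATP1B1", "Albumin", "hERG"]
        drugs = {
            "drugA": {"CYP3A4", "P-gp", "Albumin"},
            "drugB": {"CYP3A4", "OATP1B1"},
            "drugC": {"CYP2D6", "hERG"},
            "drugD": {"CYP3A4", "P-gp", "OATP1B1"},
        }

        names = list(drugs)
        vectors = np.array([[1 if feat in drugs[d] else 0 for feat in ctet] for d in names])

        def cosine(u, v):
            """Inner product normalised by vector lengths (one inner-product-based measure)."""
            return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

        pairs = [(names[i], names[j], cosine(vectors[i], vectors[j]))
                 for i in range(len(names)) for j in range(i + 1, len(names))]

        # Pairs with the highest functional similarity are flagged as DDI candidates.
        for a, b, score in sorted(pairs, key=lambda p: -p[2]):
            print(f"{a}-{b}: similarity {score:.2f}")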

  18. Predicting Flow Reversals in a Computational Fluid Dynamics Simulated Thermosyphon Using Data Assimilation.

    Science.gov (United States)

    Reagan, Andrew J; Dubief, Yves; Dodds, Peter Sheridan; Danforth, Christopher M

    2016-01-01

    A thermal convection loop is an annular chamber filled with water, heated on the bottom half and cooled on the top half. With sufficiently large forcing of heat, the direction of fluid flow in the loop oscillates chaotically, dynamics analogous to the Earth's weather. As is the case for state-of-the-art weather models, we only observe the statistics over a small region of state space, making prediction difficult. To overcome this challenge, data assimilation (DA) methods, and specifically ensemble methods, use the computational model itself to estimate the uncertainty of the model to optimally combine these observations into an initial condition for predicting the future state. Here, we build and verify four distinct DA methods, and then we perform a twin model experiment with the computational fluid dynamics simulation of the loop using the Ensemble Transform Kalman Filter (ETKF) to assimilate observations and predict flow reversals. We show that using adaptively shaped localized covariance outperforms static localized covariance with the ETKF, and allows for the use of fewer observations in predicting flow reversals. We also show that a Dynamic Mode Decomposition (DMD) of the temperature and velocity fields recovers the low dimensional system underlying reversals, finding specific modes which together are predictive of reversal direction.
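    The full ETKF twin experiment is beyond a short listing, but the ensemble idea can be illustrated with a stochastic ensemble Kalman filter analysis step (a simpler relative of the ETKF used in the paper) on a toy two-variable state in which only the first variable is observed; the state, noise levels and ensemble size are assumptions.

        import numpy as np

        rng = np.random.default_rng(5)

        def enkf_analysis(ensemble, y_obs, H, obs_var):
            """Stochastic EnKF update; ensemble has shape (n_members, n_state)."""
            n_members = ensemble.shape[0]
            X_mean = ensemble.mean(axis=0)
            A = ensemble - X_mean                          # ensemble anomalies
            P = A.T @ A / (n_members - 1)                  # sample covariance
            S = H @ P @ H.T + obs_var                      # innovation covariance (scalar obs)
            K = P @ H.T / S                                # Kalman gain
            perturbed = y_obs + rng.normal(0, np.sqrt(obs_var), n_members)
            innovations = perturbed - ensemble @ H
            return ensemble + np.outer(innovations, K)

        # Toy state: [temperature difference, flow velocity]; only the first is observed.
        truth = np.array([1.0, -0.5])
        H = np.array([1.0, 0.0])
        obs_var = 0.05
        y_obs = truth @ H + rng.normal(0, np.sqrt(obs_var))

        prior = truth + rng.normal(0, 1.0, size=(50, 2))   # scattered prior ensemble
        posterior = enkf_analysis(prior, y_obs, H, obs_var)

        print("prior mean    :", prior.mean(axis=0).round(3))
        print("posterior mean:", posterior.mean(axis=0).round(3))
        print("truth         :", truth)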

  19. Predicting Flow Reversals in a Computational Fluid Dynamics Simulated Thermosyphon Using Data Assimilation.

    Directory of Open Access Journals (Sweden)

    Andrew J Reagan

    Full Text Available A thermal convection loop is an annular chamber filled with water, heated on the bottom half and cooled on the top half. With sufficiently large forcing of heat, the direction of fluid flow in the loop oscillates chaotically, dynamics analogous to the Earth's weather. As is the case for state-of-the-art weather models, we only observe the statistics over a small region of state space, making prediction difficult. To overcome this challenge, data assimilation (DA) methods, and specifically ensemble methods, use the computational model itself to estimate the uncertainty of the model to optimally combine these observations into an initial condition for predicting the future state. Here, we build and verify four distinct DA methods, and then we perform a twin model experiment with the computational fluid dynamics simulation of the loop using the Ensemble Transform Kalman Filter (ETKF) to assimilate observations and predict flow reversals. We show that using adaptively shaped localized covariance outperforms static localized covariance with the ETKF, and allows for the use of fewer observations in predicting flow reversals. We also show that a Dynamic Mode Decomposition (DMD) of the temperature and velocity fields recovers the low dimensional system underlying reversals, finding specific modes which together are predictive of reversal direction.

  20. A computational method to predict fluid-structure interaction of pressure relief valves

    Energy Technology Data Exchange (ETDEWEB)

    Kang, S. K.; Lee, D. H.; Park, S. K.; Hong, S. R. [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    2004-07-01

    An effective CFD (computational fluid dynamics) method to predict important performance parameters, such as blowdown and chattering, for pressure relief valves in NPPs is provided in the present study. To calculate the valve motion, a 6DOF (six degrees of freedom) model is used. A chimera overset grid method is utilized in this study to eliminate the grid remeshing problem when the disk moves. Further, CFD-Fastran, which is developed by CFD-RC for compressible flow analysis, is applied to a 1' safety valve. The prediction results ensure the applicability of the method presented in this study.

  1. Cutting edge: identification of novel T cell epitopes in Lol p5a by computational prediction.

    Science.gov (United States)

    de Lalla, C; Sturniolo, T; Abbruzzese, L; Hammer, J; Sidoli, A; Sinigaglia, F; Panina-Bordignon, P

    1999-08-15

    Although atopic allergy affects Lol p5a allergen from rye grass. In vitro binding studies confirmed the promiscuous binding characteristics of these peptides. Moreover, most of the predicted ligands were novel T cell epitopes that were able to stimulate T cells from atopic patients. We generated a panel of Lol p5a-specific T cell clones, the majority of which recognized the peptides in a cross-reactive fashion. The computational prediction of DR ligands might thus allow the design of T cell epitopes with potential useful application in novel immunotherapy strategies.

  2. Applying Machine Learning and High Performance Computing to Water Quality Assessment and Prediction

    OpenAIRE

    Ruijian Zhang; Deren Li

    2017-01-01

    Water quality assessment and prediction is a more and more important issue. Traditional approaches either take a lot of time or can only perform assessments. In this research, by applying a machine learning algorithm to a long period of water attribute data, we can generate a decision tree so that it can predict the following day's water quality in an easy and efficient way. The idea is to combine the traditional ways and the computer algorithms together. Using machine learning algorithms, the ass...
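    As a minimal illustration of the decision-tree idea (with invented attribute names and synthetic data rather than the study's monitoring records), the sketch below predicts a next-day water quality class from the current day's attributes using scikit-learn.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(11)

        # Synthetic daily water attributes: dissolved oxygen, pH, turbidity, ammonia.
        n_days = 1500
        X = np.column_stack([
            rng.normal(7.0, 1.5, n_days),    # dissolved oxygen (mg/L)
            rng.normal(7.2, 0.5, n_days),    # pH
            rng.gamma(2.0, 2.0, n_days),     # turbidity (NTU)
            rng.gamma(1.5, 0.2, n_days),     # ammonia (mg/L)
        ])
        # Made-up rule for the "next day's quality" label: 1 = acceptable, 0 = poor.
        y = ((X[:, 0] > 5.5) & (X[:, 2] < 8.0) & (X[:, 3] < 0.6)).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        tree = DecisionTreeClassifier(max_depth=4, random_state=0)
        tree.fit(X_train, y_train)
        print("held-out accuracy:", round(tree.score(X_test, y_test), 3))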

  3. Pesquisa de novos elementos

    Directory of Open Access Journals (Sweden)

    Gil Mário de Macedo Grassi

    1978-11-01

    Full Text Available The present study deals with the discovery of new elements synthesized by man. The introduction discusses in general the theories about nuclear transmutation, which is the method employed in these syntheses. The study shows the importance of the Periodic Table, since it is through this table that one can arrive at a prediction of new elements and their properties. The discoveries of the transuranic elements already achieved successfully, together with the data of their first preparations, are also tabulated. The stability of these new elements is also discussed, and future speculations are presented.

  4. Prediction of monthly regional groundwater levels through hybrid soft-computing techniques

    Science.gov (United States)

    Chang, Fi-John; Chang, Li-Chiu; Huang, Chien-Wei; Kao, I.-Feng

    2016-10-01

    Groundwater systems are intrinsically heterogeneous with dynamic temporal-spatial patterns, which cause great difficulty in quantifying their complex processes, while reliable predictions of regional groundwater levels are commonly needed for managing water resources to ensure proper service of water demands within a region. In this study, we proposed a novel and flexible soft-computing technique that could effectively extract the complex high-dimensional input-output patterns of basin-wide groundwater-aquifer systems in an adaptive manner. The soft-computing models combined the Self-Organizing Map (SOM) and the Nonlinear Autoregressive with Exogenous Inputs (NARX) network for predicting monthly regional groundwater levels based on hydrologic forcing data. The SOM could effectively classify the temporal-spatial patterns of regional groundwater levels, the NARX could accurately predict the mean of regional groundwater levels for adjusting the selected SOM, Kriging was used to interpolate the predictions of the adjusted SOM into finer grids of locations, and consequently the prediction of a monthly regional groundwater level map could be obtained. The Zhuoshui River basin in Taiwan was the study case, and its monthly data sets collected from 203 groundwater stations, 32 rainfall stations and 6 flow stations between 2000 and 2013 were used for modelling purposes. The results demonstrated that the hybrid SOM-NARX model could reliably and suitably predict monthly basin-wide groundwater levels with high correlations (R2 > 0.9 in both training and testing cases). The proposed methodology presents a milestone in modelling regional environmental issues and offers an insightful and promising way to predict monthly basin-wide groundwater levels, which is beneficial to authorities for sustainable water resources management.

  5. Morphometric analysis - Cone beam computed tomography to predict bone quality and quantity.

    Science.gov (United States)

    Hohlweg-Majert, B; Metzger, M C; Kummer, T; Schulze, D

    2011-07-01

    Modified quantitative computed tomography is a method used to predict bone quality and quantify the bone mass of the jaw. The aim of this study was to determine whether bone quantity or quality was detected by cone beam computed tomography (CBCT) combined with image analysis. MATERIALS AND PROCEDURES: Different measurements recorded on two phantoms (Siemens phantom, Comac phantom) were evaluated on images taken with the Somatom VolumeZoom (Siemens Medical Solutions, Erlangen, Germany) and the NewTom 9000 (NIM s.r.l., Verona, Italy) in order to calculate a calibration curve. The spatial relationships of six sample cylinders and the repositioning from four pig skull halves relative to adjacent defined anatomical structures were assessed by means of three-dimensional visualization software. The calibration curves for computed tomography (CT) and cone beam computed tomography (CBCT) using the Siemens phantom showed a linear correlation in both modalities between the Hounsfield Units (HU) and bone morphology. A correction factor for CBCT was calculated. Exact information about the micromorphology of the bone cylinders was only available using micro-computed tomography. Cone beam computed tomography is a suitable choice for analysing bone mass, but it does not give any information about bone quality. 2010 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  6. Prediction of detonation and JWL eos parameters of energetic materials using EXPLO5 computer code

    CSIR Research Space (South Africa)

    Peter, Xolani

    2016-09-01

    Full Text Available Nowadays many numerical methods and programs are being used for carrying out thermodynamic calculations of the detonation parameters of condensed explosives, for example BKW Fortran (Mader, 1967), Ruby (Cowperthwaite and Zwisler, 1974), TIGER...

  7. Reducing usage of the computational resources by event driven approach to model predictive control

    Science.gov (United States)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time and optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented and the performance of the proposed method is evaluated in a simulated environment.
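    The paper's specific modification is not reproduced here; the hypothetical sketch below only illustrates the general flavour of event-driven model predictive control on a discrete-time double integrator: a short-horizon input sequence is found by brute force and is re-computed only when the measured state drifts too far from the prediction, which is where computation is saved.

        import itertools
        import numpy as np

        A = np.array([[1.0, 0.1], [0.0, 1.0]])        # double integrator, dt = 0.1
        B = np.array([0.005, 0.1])
        U = (-1.0, 0.0, 1.0)                          # small discrete input set
        H = 4                                         # prediction horizon

        def plan(x0):
            """Brute-force MPC: best length-H input sequence for a quadratic cost."""
            best_cost, best_seq = np.inf, None
            for seq in itertools.product(U, repeat=H):
                x, cost = x0.copy(), 0.0
                for u in seq:
                    x = A @ x + B * u
                    cost += x @ x + 0.01 * u * u
                if cost < best_cost:
                    best_cost, best_seq = cost, list(seq)
            return best_seq

        x = np.array([1.0, 0.0])                      # start away from the origin
        predicted = x.copy()
        seq, replans = plan(x), 1
        for step in range(60):
            # Event trigger: replan only if the old plan ran out or reality drifted.
            if not seq or np.linalg.norm(x - predicted) > 0.02:
                seq, replans = plan(x), replans + 1
                predicted = x.copy()
            u = seq.pop(0)
            predicted = A @ predicted + B * u                       # model prediction
            x = A @ x + B * u + np.random.default_rng(step).normal(0, 0.005, 2)  # plant
        print(f"final state {x.round(3)}, plans computed: {replans}")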

  8. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Full Text Available Soft computing techniques such as the Artificial Neural Network (ANN) have improved predictive capability and have found application in geotechnical engineering. The aim of this research is to utilize a soft computing technique and Multiple Regression Models (MLR) for forecasting the California bearing ratio (CBR) of soil from its index properties. The CBR of soil can be predicted from various soil characterization parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in Basrah districts. Data gained from the experimental results were used in the regression models and in the soft computing technique using an artificial neural network. The liquid limit, plasticity index, modified compaction test and the CBR test were determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the models that were developed were examined in terms of regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was noticed that all the proposed ANN models perform better than the MLR model. A specific ANN model with all input parameters reveals better outcomes than the other ANN models.

  9. Archaeology Through Computational Linguistics: Inscription Statistics Predict Excavation Sites of Indus Valley Artifacts.

    Science.gov (United States)

    Recchia, Gabriel L; Louwerse, Max M

    2016-11-01

    Computational techniques comparing co-occurrences of city names in texts allow the relative longitudes and latitudes of cities to be estimated algorithmically. However, these techniques have not been applied to estimate the provenance of artifacts with unknown origins. Here, we estimate the geographic origin of artifacts from the Indus Valley Civilization, applying methods commonly used in cognitive science to the Indus script. We show that these methods can accurately predict the relative locations of archeological sites on the basis of artifacts of known provenance, and we further apply these techniques to determine the most probable excavation sites of four sealings of unknown provenance. These findings suggest that inscription statistics reflect historical interactions among locations in the Indus Valley region, and they illustrate how computational methods can help localize inscribed archeological artifacts of unknown origin. The success of this method offers opportunities for the cognitive sciences in general and for computational anthropology specifically. Copyright © 2015 Cognitive Science Society, Inc.
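
    As a rough illustration of the general approach (not the authors' pipeline), co-occurrence counts of place names can be turned into dissimilarities and embedded with multidimensional scaling to recover relative positions; the place names and toy "texts" below are assumptions.

```python
# Toy sketch: treat co-occurrence of place names in short texts as a proximity
# signal, then recover relative 2-D positions with multidimensional scaling.
import numpy as np
from sklearn.manifold import MDS

places = ["Harappa", "Mohenjo-daro", "Dholavira", "Lothal"]
documents = [                                  # hypothetical inscriptions/texts
    "Harappa Mohenjo-daro trade seal", "Harappa Mohenjo-daro grain",
    "Dholavira Lothal bead workshop", "Lothal Dholavira dock",
    "Mohenjo-daro Dholavira copper",
]
n = len(places)
cooc = np.zeros((n, n))
for doc in documents:
    present = [i for i, p in enumerate(places) if p in doc]
    for i in present:
        for j in present:
            if i != j:
                cooc[i, j] += 1

dissimilarity = 1.0 / (1.0 + cooc)             # more co-occurrence -> closer
np.fill_diagonal(dissimilarity, 0.0)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)
for place, (px, py) in zip(places, coords):
    print(f"{place:>12}: relative position ({px:+.2f}, {py:+.2f})")
```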

  10. The prediction in computer color matching of dentistry based on GA+BP neural network.

    Science.gov (United States)

    Li, Haisheng; Lai, Long; Chen, Li; Lu, Cheng; Cai, Qiang

    2015-01-01

    Although the use of computer color matching can reduce the influence of subjective factors introduced by technicians, matching the color of a natural tooth with a ceramic restoration is still one of the most challenging topics in esthetic prosthodontics. The back propagation neural network (BPNN) has already been introduced into computer color matching in dentistry, but it has disadvantages such as instability and low accuracy. In our study, we adopt a genetic algorithm (GA) to optimize the initial weights and threshold values of the BPNN in order to improve the matching precision. To our knowledge, this is the first study to combine the BPNN with a GA for computer color matching in dentistry. Extensive experiments demonstrate that the proposed method improves the precision and prediction robustness of color matching in restorative dentistry.
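
    A compact numpy sketch of the GA-plus-backpropagation idea, under simplifying assumptions: a tiny one-hidden-layer network, truncation selection with uniform crossover, and a numerical gradient standing in for full backpropagation. Data, sizes, and hyperparameters are illustrative only.

```python
# GA-initialised training of a small network: a genetic algorithm searches for
# good initial weights, which are then fine-tuned by gradient descent
# (a numerical gradient is used here in place of analytic backpropagation).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 3))               # e.g. colour coordinates of a tooth
y = np.sin(X @ np.array([1.0, -2.0, 0.5]))     # assumed target "recipe" value

n_in, n_hid = 3, 6
n_w = n_in * n_hid + n_hid                     # weights of both layers (no biases)

def mse(w):
    W1 = w[: n_in * n_hid].reshape(n_in, n_hid)
    w2 = w[n_in * n_hid:]
    return np.mean((np.tanh(X @ W1) @ w2 - y) ** 2)

# Genetic algorithm over initial weight vectors
pop = rng.normal(0, 1, (40, n_w))
for gen in range(30):
    fitness = np.array([mse(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]    # truncation selection
    children = []
    for _ in range(30):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(n_w) < 0.5           # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.1, n_w))  # mutation
    pop = np.vstack([parents, np.array(children)])

w = pop[np.argmin([mse(ind) for ind in pop])].copy()
print(f"MSE after GA initialisation: {mse(w):.4f}")

# Gradient-descent fine-tuning of the GA-selected initial weights
for _ in range(200):
    grad = np.zeros(n_w)
    for i in range(n_w):
        e = np.zeros(n_w); e[i] = 1e-5
        grad[i] = (mse(w + e) - mse(w - e)) / 2e-5
    w -= 0.1 * grad
print(f"MSE after fine-tuning:       {mse(w):.4f}")
```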

  11. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
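
    A toy version of the workflow sketched above, with assumed stand-ins throughout: a cheap function plays the role of the expensive physics model, a Gaussian-process emulator is fitted to a small ensemble of its runs, and random-walk Metropolis then samples the posterior against the emulator rather than the model itself.

```python
# Emulator-based Bayesian calibration: fit a GP surrogate to an ensemble of
# "expensive" model runs, then run MCMC on the cheap surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(theta):                    # stand-in for the physics code
    return np.sin(3 * theta) + 0.5 * theta

rng = np.random.default_rng(0)
theta_true, sigma = 0.8, 0.05
y_obs = expensive_model(theta_true) + rng.normal(0, sigma)

design = np.linspace(0, 2, 12).reshape(-1, 1)  # the only expensive-model calls
runs = expensive_model(design.ravel())
emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                    normalize_y=True).fit(design, runs)

def log_post(theta):
    if not 0.0 <= theta <= 2.0:                # uniform prior on [0, 2]
        return -np.inf
    pred = emulator.predict(np.array([[theta]]))[0]
    return -0.5 * ((y_obs - pred) / sigma) ** 2

samples, theta, lp = [], 1.0, log_post(1.0)    # random-walk Metropolis
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[1000:])
print(f"posterior mean {post.mean():.3f}, 95% interval "
      f"[{np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f}]")
```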

  12. Prediction of surgical view of neurovascular decompression using interactive computer graphics.

    Science.gov (United States)

    Kin, Taichi; Oyama, Hiroshi; Kamada, Kyousuke; Aoki, Shigeki; Ohtomo, Kuni; Saito, Nobuhito

    2009-07-01

    To assess the value of an interactive visualization method for detecting the offending vessels in neurovascular compression syndrome in patients with facial spasm and trigeminal neuralgia. Computer graphics models are created by fusion of fast imaging employing steady-state acquisition and magnetic resonance angiography. High-resolution magnetic resonance angiography and fast imaging employing steady-state acquisition were performed preoperatively in 17 patients with neurovascular compression syndromes (facial spasm, n = 10; trigeminal neuralgia, n = 7) using a 3.0-T magnetic resonance imaging scanner. Computer graphics models were created with computer software and observed interactively for detection of offending vessels by rotation, enlargement, reduction, and retraction on a graphic workstation. Two-dimensional images were reviewed by 2 radiologists blinded to the clinical details, and 2 neurosurgeons predicted the offending vessel with the interactive visualization method before surgery. Predictions from the 2 imaging approaches were compared with surgical findings. The vessels identified during surgery were assumed to be the true offending vessels. Offending vessels were identified correctly in 16 of 17 patients (94%) using the interactive visualization method and in 10 of 17 patients using 2-dimensional images. These data demonstrated a significant difference (P = 0.015 by Fisher's exact method). The interactive visualization method data corresponded well with surgical findings (surgical field, offending vessels, and nerves). Virtual reality 3-dimensional computer graphics using fusion magnetic resonance angiography and fast imaging employing steady-state acquisition may be helpful for preoperative simulation.

  13. Predictive equations for lung volumes from computed tomography for size matching in pulmonary transplantation.

    Science.gov (United States)

    Konheim, Jeremy A; Kon, Zachary N; Pasrija, Chetan; Luo, Qingyang; Sanchez, Pablo G; Garcia, Jose P; Griffith, Bartley P; Jeudy, Jean

    2016-04-01

    Size matching for lung transplantation is widely accomplished using height comparisons between donors and recipients. This gross approximation allows for wide variation in lung size and, potentially, size mismatch. Three-dimensional computed tomography (3D-CT) volumetry comparisons could offer more accurate size matching. Although recipient CT scans are universally available, donor CT scans are rarely performed. Therefore, predicted donor lung volumes could be used for comparison to measured recipient lung volumes, but no such predictive equations exist. We aimed to use 3D-CT volumetry measurements from a normal patient population to generate equations for predicted total lung volume (pTLV), predicted right lung volume (pRLV), and predicted left lung volume (pLLV), for size-matching purposes. Chest CT scans of 400 normal patients were retrospectively evaluated. 3D-CT volumetry was performed to measure total lung volume, right lung volume, and left lung volume of each patient, and predictive equations were generated. The fitted model was tested in a separate group of 100 patients. The model was externally validated by comparison of total lung volume with total lung capacity from pulmonary function tests in a subset of those patients. Age, gender, height, and race were independent predictors of lung volume. In the test group, there were strong linear correlations between predicted and actual lung volumes measured by 3D-CT volumetry for pTLV (r = 0.72), pRLV (r = 0.72), and pLLV (r = 0.69). A strong linear correlation was also observed when comparing pTLV and total lung capacity (r = 0.82). We successfully created a predictive model for pTLV, pRLV, and pLLV. These may serve as reference standards and predict donor lung volume for size matching in lung transplantation. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  14. Prediction of infarction development after endovascular stroke therapy with dual-energy computed tomography.

    Science.gov (United States)

    Djurdjevic, Tanja; Rehwald, Rafael; Knoflach, Michael; Matosevic, Benjamin; Kiechl, Stefan; Gizewski, Elke Ruth; Glodny, Bernhard; Grams, Astrid Ellen

    2017-03-01

    After intraarterial recanalisation (IAR), the haemorrhage and the blood-brain barrier (BBB) disruption can be distinguished using dual-energy computed tomography (DECT). The aim of the present study was to investigate whether future infarction development can be predicted from DECT. DECT scans of 20 patients showing 45 BBB disrupted areas after IAR were assessed and compared with follow-up examinations. Receiver operator characteristic (ROC) analyses using densities from the iodine map (IM) and virtual non-contrast (VNC) were performed. Future infarction areas are denser than future non-infarction areas on IM series (23.44 ± 24.86 vs. 5.77 ± 2.77; p < 0.0001) and more hypodense on VNC series (29.71 ± 3.33 vs. 35.33 ± 3.50; p < 0.0001). ROC analyses for the IM series showed an area under the curve (AUC) of 0.99 (cut-off: <9.97 HU; p < 0.05; sensitivity 91.18 %; specificity 100.00 %; accuracy 0.93) for the prediction of future infarctions. The AUC for the prediction of haemorrhagic infarctions was 0.78 (cut-off >17.13 HU; p < 0.05; sensitivity 90.00 %; specificity 62.86 %; accuracy 0.69). The VNC series allowed prediction of infarction volume. Future infarction development after IAR can be reliably predicted with the IM series. The prediction of haemorrhages and of infarction size is less reliable. • The IM series (DECT) can predict future infarction development after IAR. • Later haemorrhages can be predicted using the IM and the BW series. • The volume of definable hypodense areas in VNC correlates with infarction volume.

  15. Prediction of infarction development after endovascular stroke therapy with dual-energy computed tomography

    International Nuclear Information System (INIS)

    Djurdjevic, Tanja; Gizewski, Elke Ruth; Grams, Astrid Ellen; Rehwald, Rafael; Glodny, Bernhard; Knoflach, Michael; Matosevic, Benjamin; Kiechl, Stefan

    2017-01-01

    After intraarterial recanalisation (IAR), the haemorrhage and the blood-brain barrier (BBB) disruption can be distinguished using dual-energy computed tomography (DECT). The aim of the present study was to investigate whether future infarction development can be predicted from DECT. DECT scans of 20 patients showing 45 BBB disrupted areas after IAR were assessed and compared with follow-up examinations. Receiver operator characteristic (ROC) analyses using densities from the iodine map (IM) and virtual non-contrast (VNC) were performed. Future infarction areas are denser than future non-infarction areas on IM series (23.44 ± 24.86 vs. 5.77 ± 2.77; p < 0.0001) and more hypodense on VNC series (29.71 ± 3.33 vs. 35.33 ± 3.50; p < 0.0001). ROC analyses for the IM series showed an area under the curve (AUC) of 0.99 (cut-off: <9.97 HU; p < 0.05; sensitivity 91.18 %; specificity 100.00 %; accuracy 0.93) for the prediction of future infarctions. The AUC for the prediction of haemorrhagic infarctions was 0.78 (cut-off >17.13 HU; p < 0.05; sensitivity 90.00 %; specificity 62.86 %; accuracy 0.69). The VNC series allowed prediction of infarction volume. Future infarction development after IAR can be reliably predicted with the IM series. The prediction of haemorrhages and of infarction size is less reliable. (orig.)
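
    For orientation, the kind of ROC analysis reported in these records can be reproduced in a few lines; the iodine-map densities below are synthetic values loosely mimicking the reported group means, not the study data.

```python
# ROC analysis with a Youden-index cutoff for iodine-map densities (synthetic).
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
hu_infarct = rng.normal(23.4, 10.0, 34)        # future-infarct areas (denser)
hu_no_infarct = rng.normal(5.8, 2.8, 11)       # areas without later infarction
density = np.concatenate([hu_infarct, hu_no_infarct])
label = np.concatenate([np.ones(34), np.zeros(11)])

fpr, tpr, thresholds = roc_curve(label, density)
best = np.argmax(tpr - fpr)                    # Youden index picks the cutoff
print(f"AUC = {auc(fpr, tpr):.2f}, cutoff = {thresholds[best]:.2f} HU, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```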

  16. Prediction of infarction development after endovascular stroke therapy with dual-energy computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Djurdjevic, Tanja; Gizewski, Elke Ruth; Grams, Astrid Ellen [Medical University of Innsbruck, Department of Neuroradiology, Innsbruck (Austria); Rehwald, Rafael; Glodny, Bernhard [Medical University of Innsbruck, Department of Radiology, Innsbruck (Austria); Knoflach, Michael; Matosevic, Benjamin; Kiechl, Stefan [Medical University of Innsbruck, Department of Neurology, Innsbruck (Austria)

    2017-03-15

    After intraarterial recanalisation (IAR), the haemorrhage and the blood-brain barrier (BBB) disruption can be distinguished using dual-energy computed tomography (DECT). The aim of the present study was to investigate whether future infarction development can be predicted from DECT. DECT scans of 20 patients showing 45 BBB disrupted areas after IAR were assessed and compared with follow-up examinations. Receiver operator characteristic (ROC) analyses using densities from the iodine map (IM) and virtual non-contrast (VNC) were performed. Future infarction areas are denser than future non-infarction areas on IM series (23.44 ± 24.86 vs. 5.77 ± 2.77; p < 0.0001) and more hypodense on VNC series (29.71 ± 3.33 vs. 35.33 ± 3.50; p < 0.0001). ROC analyses for the IM series showed an area under the curve (AUC) of 0.99 (cut-off: <9.97 HU; p < 0.05; sensitivity 91.18 %; specificity 100.00 %; accuracy 0.93) for the prediction of future infarctions. The AUC for the prediction of haemorrhagic infarctions was 0.78 (cut-off >17.13 HU; p < 0.05; sensitivity 90.00 %; specificity 62.86 %; accuracy 0.69). The VNC series allowed prediction of infarction volume. Future infarction development after IAR can be reliably predicted with the IM series. The prediction of haemorrhages and of infarction size is less reliable. (orig.)

  17. A computational approach for thermomechanical fatigue life prediction of dissimilarly welded superheater tubes

    Energy Technology Data Exchange (ETDEWEB)

    Krishnasamy, Ram-Kumar; Seifert, Thomas; Siegele, Dieter [Fraunhofer-Institut fuer Werkstoffmechanik (IWM), Freiburg im Breisgau (Germany)

    2010-07-01

    In this paper a computational approach for fatigue life prediction of dissimilarly welded superheater tubes is presented and applied to a dissimilar weld between tubes made of the nickel base alloy Alloy617 and the 12% chromium steel VM12. The approach comprises the calculation of the residual stresses in the welded tubes with a multi-pass dissimilar welding simulation, the relaxation of the residual stresses in a post weld heat treatment (PWHT) simulation and the fatigue life prediction using the remaining residual stresses as initial condition. A cyclic viscoplasticity model is used to calculate the transient stresses and strains under thermocyclic service loadings. The fatigue life is predicted with a damage parameter which is based on fracture mechanics. The adjustable parameters of the model are determined based on LCF and TMF experiments. The simulations show that the residual stresses that remain after PWHT further relax in the first loading cycles. The predicted fatigue lives depend on the residual stresses and, thus, on the choice of the loading cycle in which the damage parameter is evaluated. If the first loading cycle, in which residual stresses are still present, is considered, lower fatigue lives are predicted compared with predictions considering loading cycles with relaxed residual stresses. (orig.)

  18. The nature and use of prediction skills in a biological computer simulation

    Science.gov (United States)

    Lavoie, Derrick R.; Good, Ron

    The primary goal of this study was to examine the science process skill of prediction using qualitative research methodology. The think-aloud interview, modeled after Ericsson and Simon (1984), led to the identification of 63 program exploration and prediction behaviors. The performance of seven formal operational and seven concrete operational high-school biology students was videotaped during a three-phase learning sequence on water pollution. Subjects explored the effects of five independent variables on two dependent variables over time using a computer-simulation program. Predictions were made concerning the effect of the independent variables upon the dependent variables through time. Subjects were identified according to initial knowledge of the subject matter and success at solving three selected prediction problems. Successful predictors generally had high initial knowledge of the subject matter and were formal operational. Unsuccessful predictors generally had low initial knowledge and were concrete operational. High initial knowledge seemed to be more important to predictive success than stage of Piagetian cognitive development. Successful prediction behaviors involved systematic manipulation of the independent variables, note taking, identification and use of appropriate independent-dependent variable relationships, high interest and motivation, and, in general, higher-level thinking skills. Behaviors characteristic of unsuccessful predictors were nonsystematic manipulation of independent variables, lack of motivation and persistence, misconceptions, and the identification and use of inappropriate independent-dependent variable relationships.

  19. Reservoir computer predictions for the Three Meter magnetic field time evolution

    Science.gov (United States)

    Perevalov, A.; Rojas, R.; Lathrop, D. P.; Shani, I.; Hunt, B. R.

    2017-12-01

    The source of the Earth's magnetic field is the turbulent flow of liquid metal in the outer core. Our experiment's goal is to create an Earth-like dynamo, to explore the mechanisms and to understand the dynamics of the magnetic and velocity fields. Since it is a complicated system, prediction of the magnetic field is a challenging problem. We present results of mimicking the Three Meter experiment with a reservoir computer deep learning algorithm. The experiment consists of a three-meter diameter outer sphere and a one-meter diameter inner sphere, with the gap filled with liquid sodium. The spheres can rotate up to 4 and 14 Hz respectively, giving a Reynolds number near 10^8. Two external electromagnets apply magnetic fields, while an array of 31 external and 2 internal Hall sensors measures the resulting induced fields. We use these magnetic probe data to train a reservoir computer to predict the 3M time evolution and mimic waves in the experiment. Surprisingly accurate predictions can be made for several magnetic dipole time scales. This shows that the behavior of such a complicated MHD system can be predicted. We gratefully acknowledge support from NSF EAR-1417148.
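
    The reservoir-computer idea can be illustrated with a minimal echo state network; the synthetic waveform below stands in for a Hall-probe signal, and the reservoir size, spectral radius, and ridge penalty are assumed values.

```python
# Minimal echo state network: fixed random reservoir driven by the signal,
# ridge-regression readout trained for one-step-ahead prediction.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(4000) * 0.02
signal = np.sin(t) + 0.5 * np.sin(2.3 * t)     # stand-in for a magnetic probe trace

n_res, rho, ridge = 300, 0.9, 1e-6
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.normal(0, 1, (n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set the spectral radius

states = np.zeros((len(signal), n_res))        # reservoir state after each input
x = np.zeros(n_res)
for i, u in enumerate(signal[:-1]):
    x = np.tanh(W @ x + W_in * u)
    states[i + 1] = x

S, y = states[200:3000], signal[200:3000]      # skip the initial transient
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)

pred = states[3000:] @ W_out                   # one-step predictions, held-out part
rmse = np.sqrt(np.mean((pred - signal[3000:]) ** 2))
print(f"one-step prediction RMSE on the held-out segment: {rmse:.4f}")
```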

  20. Computer predictions on Rh-based double perovskites with unusual electronic and magnetic properties

    Science.gov (United States)

    Halder, Anita; Nafday, Dhani; Sanyal, Prabuddha; Saha-Dasgupta, Tanusri

    2018-03-01

    In the search for new magnetic materials, we make computer predictions of the structural, electronic and magnetic properties of yet-to-be synthesized Rh-based double perovskite compounds, Sr(Ca)2BRhO6 (B=Cr, Mn, Fe). We use a combination of an evolutionary algorithm, density functional theory, and statistical-mechanical tools for this purpose. We find that the unusual valence of Rh5+ may be stabilized in these compounds through formation of an oxygen ligand hole. Interestingly, while the Cr-Rh and Mn-Rh compounds are predicted to be ferromagnetic half-metals, the Fe-Rh compounds are found to be rare examples of an antiferromagnetic and metallic transition-metal oxide with a three-dimensional electronic structure. The computed magnetic transition temperatures of the predicted compounds, obtained from a finite temperature Monte Carlo study of the first principles-derived model Hamiltonian, are found to be reasonably high. The favorable growth conditions of the compounds, predicted in our study through extensive thermodynamic analysis, should be useful for future synthesis of this interesting class of materials with intriguing properties.

  1. Computational prediction and experimental validation of Ciona intestinalis microRNA genes

    Directory of Open Access Journals (Sweden)

    Pasquinelli Amy E

    2007-11-01

    Full Text Available Abstract Background This study reports the first collection of validated microRNA genes in the sea squirt, Ciona intestinalis. MicroRNAs are processed from hairpin precursors to ~22 nucleotide RNAs that base pair to target mRNAs and inhibit expression. As a member of the subphylum Urochordata (Tunicata), whose larval form has a notochord, the sea squirt is situated at the emergence of vertebrates, and therefore may provide information about the evolution of molecular regulators of early development. Results In this study, computational methods were used to predict 14 microRNA gene families in Ciona intestinalis. The microRNA prediction algorithm utilizes configurable microRNA sequence conservation and stem-loop specificity parameters, grouping by miRNA family, and phylogenetic conservation to the related species, Ciona savignyi. The expression of 8 out of the 9 putative microRNAs tested was validated in the adult tissue of Ciona intestinalis by Northern blot analyses. Additionally, a target prediction algorithm was implemented, which identified a high confidence list of 240 potential target genes. Over half of the predicted targets can be grouped into the gene ontology categories of metabolism, transport, regulation of transcription, and cell signaling. Conclusion The computational techniques implemented in this study can be applied to other organisms and serve to increase the understanding of the origins of non-coding RNAs, embryological and cellular developmental pathways, and the mechanisms for microRNA-controlled gene regulatory networks.

  2. The neural correlates of problem states: testing FMRI predictions of a computational model of multitasking.

    Directory of Open Access Journals (Sweden)

    Jelmer P Borst

    Full Text Available BACKGROUND: It has been shown that people can only maintain one problem state, or intermediate mental representation, at a time. When more than one problem state is required, for example in multitasking, performance decreases considerably. This effect has been explained in terms of a problem state bottleneck. METHODOLOGY: In the current study we use the complementary methodologies of computational cognitive modeling and neuroimaging to investigate the neural correlates of this problem state bottleneck. In particular, an existing computational cognitive model was used to generate a priori fMRI predictions for a multitasking experiment in which the problem state bottleneck plays a major role. Hemodynamic responses were predicted for five brain regions, corresponding to five cognitive resources in the model. Most importantly, we predicted the intraparietal sulcus to show a strong effect of the problem state manipulations. CONCLUSIONS: Some of the predictions were confirmed by a subsequent fMRI experiment, while others were not matched by the data. The experiment supported the hypothesis that the problem state bottleneck is a plausible cause of the interference in the experiment and that it could be located in the intraparietal sulcus.

  3. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
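
    The sensitivity the authors describe can be illustrated with something far simpler than a vertex model: the same overdamped force law integrated with forward Euler at different time steps gives noticeably different answers. The force law and numbers below are toy assumptions chosen only to make that point.

```python
# Toy illustration of time-step sensitivity: overdamped relaxation of an edge
# length toward a target, dx/dt = -k (x - target), integrated by forward Euler.
import numpy as np

def relax(dt, total_time=5.0, k=2.0, target=1.0, x0=3.0):
    x = x0
    for _ in range(int(total_time / dt)):
        x += dt * (-k * (x - target))
    return x

for dt in (0.01, 0.2, 0.9):
    print(f"dt = {dt:<4}: final length = {relax(dt):.4f} (continuum limit is 1.0)")
```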

  4. Applying Machine Learning and High Performance Computing to Water Quality Assessment and Prediction

    Directory of Open Access Journals (Sweden)

    Ruijian Zhang

    2017-12-01

    Full Text Available Water quality assessment and prediction is an increasingly important issue. Traditional approaches either take a lot of time or can only perform assessment. In this research, by applying a machine learning algorithm to a long period of water attribute data, we can generate a decision tree that predicts the next day's water quality in an easy and efficient way. The idea is to combine the traditional approaches and computer algorithms together. Using machine learning algorithms, the assessment of water quality becomes far more efficient, and by generating the decision tree, the prediction is quite accurate. The drawback of machine learning modeling is that the execution takes quite a long time, especially when we employ a more accurate but more time-consuming clustering algorithm. Therefore, we applied a high performance computing (HPC) system to deal with this problem. Up to now, the pilot experiments have achieved very promising preliminary results. The visualized water quality assessment and prediction obtained from this project will be published on an interactive website so that the public and environmental managers can use the information for their decision making.
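
    A hedged sketch of the core step described above: learn a small decision tree from historical water-attribute records and read off its rules. The feature names, synthetic records, and the labelling rule standing in for expert assessments are all assumptions.

```python
# Decision-tree classification of water quality from daily attribute records.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(5, 9, n),        # pH
    rng.uniform(2, 12, n),       # dissolved oxygen (mg/L)
    rng.uniform(0, 300, n),      # turbidity (NTU)
])
# Assumed rule standing in for historical expert quality assessments
y = ((X[:, 1] > 5) & (X[:, 2] < 150) & (X[:, 0] > 6) & (X[:, 0] < 8.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {tree.score(X_te, y_te):.2f}")
print(export_text(tree, feature_names=["pH", "DO", "turbidity"]))
```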

  5. Development of computer code for determining prediction parameters of radionuclide migration in soil layer

    International Nuclear Information System (INIS)

    Ogawa, Hiromichi; Ohnuki, Toshihiko

    1986-07-01

    A computer code (MIGSTEM-FIT) has been developed to determine the prediction parameters (retardation factor, water flow velocity, dispersion coefficient, etc.) of radionuclide migration in a soil layer from the concentration distribution of the radionuclide in the soil layer or in the effluent. In this code, the solution of the predicting equation for radionuclide migration is compared with the measured concentration distribution, and the most adequate parameter values can be determined by the flexible tolerance method. The validity of the finite difference method, one of the methods used to solve the predicting equation, was confirmed by comparison with the analytical solution, and the validity of the fitting method was confirmed by fitting a concentration distribution calculated from known parameters. From the examination of the error, it was found that the error of the parameters obtained using this code was smaller than that of the measured concentration distribution. (author)
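
    The fitting idea can be sketched as follows, with Nelder-Mead (a simplex search) standing in for the flexible tolerance method and a simplified instantaneous-pulse solution standing in for the code's predicting equation; the retardation factor and dispersion coefficient are fitted while the water flow velocity is treated as measured. All values are illustrative assumptions.

```python
# Fit migration parameters (retardation factor R, dispersion coefficient D) so a
# 1-D advection-dispersion pulse solution matches a measured concentration profile.
import numpy as np
from scipy.optimize import minimize

def profile(x, t, R, D, v, mass=1.0):
    """Simplified instantaneous-pulse solution with linear retardation."""
    v_eff, d_eff = v / R, D / R
    return (mass / np.sqrt(4 * np.pi * d_eff * t)
            * np.exp(-(x - v_eff * t) ** 2 / (4 * d_eff * t)))

rng = np.random.default_rng(0)
x = np.linspace(0, 50, 60)                     # depth in the soil layer (cm)
t_obs, v_meas = 30.0, 2.0                      # days after release, cm/day
R_true, D_true = 4.0, 1.5
c_obs = profile(x, t_obs, R_true, D_true, v_meas) + rng.normal(0, 0.002, x.size)

def misfit(params):
    R, D = params
    if min(R, D) <= 0:
        return np.inf
    return np.sum((profile(x, t_obs, R, D, v_meas) - c_obs) ** 2)

fit = minimize(misfit, x0=[3.0, 1.0], method="Nelder-Mead")
print(f"fitted R = {fit.x[0]:.2f}, D = {fit.x[1]:.2f} "
      f"(true values {R_true}, {D_true})")
```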

  6. Assessing the suitability of soft computing approaches for forest fires prediction

    Directory of Open Access Journals (Sweden)

    Samaher Al_Janabi

    2018-07-01

    Full Text Available Forest fires present one of the main causes of environmental hazards and have many negative consequences in different aspects of life. Therefore, early prediction, fast detection and rapid action are the key elements for controlling such phenomena and saving lives. In this work, 517 different entries recorded at different times for Montesinho Natural Park (MNP) in Portugal were selected to determine the best predictor for detecting forest fires. Principal component analysis (PCA) was applied to find the critical patterns, and the particle swarm optimization (PSO) technique was used to segment the fire regions (clusters). In the next stage, five soft computing (SC) techniques based on neural networks were used in parallel to identify the technique that would potentially give the most accurate and optimal results in predicting forest fires, namely: cascade correlation network (CCN), multilayer perceptron neural network (MPNN), polynomial neural network (PNN), radial basis function (RBF) and support vector machine (SVM). In the final stage, the predictors and their performance were evaluated based on five quality measures, including root mean squared error (RMSE), mean squared error (MSE), relative absolute error (RAE), mean absolute error (MAE) and information gain (IG). The results indicate that the SVM technique was more effective and efficient than the RBF, MPNN, PNN and CCN predictors. The results also show that the SVM algorithm provides more precise predictions than the other predictors, with small estimation error. The obtained results confirm that SVM improves the prediction accuracy and is suitable for forest fire prediction compared to the other methods. Keywords: Forest fires, Soft computing, Prediction, Principle component analysis, Particle swarm optimization, Cascade correlation network, Multilayer perceptron neural network, Polynomial neural networks, Radial basis function, Support vector machine
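
    Part of the pipeline described above can be sketched with a PCA step followed by an SVM regressor, scored with RMSE and MAE; the synthetic features below merely stand in for the 517 Montesinho records, and no claim is made that this reproduces the study.

```python
# PCA feature extraction followed by SVM regression for burned-area prediction.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
n = 517
X = np.column_stack([
    rng.uniform(0, 100, n),      # fine-fuel-moisture-like index
    rng.uniform(0, 300, n),      # duff-moisture-like index
    rng.uniform(0, 40, n),       # temperature (C)
    rng.uniform(0, 10, n),       # wind speed (km/h)
])
area = 0.02 * X[:, 0] + 0.01 * X[:, 1] + 0.3 * X[:, 3] + rng.gamma(2.0, 1.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, area, random_state=0)
model = make_pipeline(StandardScaler(), PCA(n_components=3), SVR(C=10.0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"RMSE = {np.sqrt(mean_squared_error(y_te, pred)):.2f}, "
      f"MAE = {mean_absolute_error(y_te, pred):.2f}")
```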

  7. A cyber-linked undergraduate research experience in computational biomolecular structure prediction and design.

    Science.gov (United States)

    Alford, Rebecca F; Leaver-Fay, Andrew; Gonzales, Lynda; Dolan, Erin L; Gray, Jeffrey J

    2017-12-01

    Computational biology is an interdisciplinary field, and many computational biology research projects involve distributed teams of scientists. To accomplish their work, these teams must overcome both disciplinary and geographic barriers. Introducing new training paradigms is one way to facilitate research progress in computational biology. Here, we describe a new undergraduate program in biomolecular structure prediction and design in which students conduct research at labs located at geographically-distributed institutions while remaining connected through an online community. This 10-week summer program begins with one week of training on computational biology methods development, transitions to eight weeks of research, and culminates in one week at the Rosetta annual conference. To date, two cohorts of students have participated, tackling research topics including vaccine design, enzyme design, protein-based materials, glycoprotein modeling, crowd-sourced science, RNA processing, hydrogen bond networks, and amyloid formation. Students in the program report outcomes comparable to students who participate in similar in-person programs. These outcomes include the development of a sense of community and increases in their scientific self-efficacy, scientific identity, and science values, all predictors of continuing in a science research career. Furthermore, the program attracted students from diverse backgrounds, which demonstrates the potential of this approach to broaden the participation of young scientists from backgrounds traditionally underrepresented in computational biology.

  8. A cyber-linked undergraduate research experience in computational biomolecular structure prediction and design.

    Directory of Open Access Journals (Sweden)

    Rebecca F Alford

    2017-12-01

    Full Text Available Computational biology is an interdisciplinary field, and many computational biology research projects involve distributed teams of scientists. To accomplish their work, these teams must overcome both disciplinary and geographic barriers. Introducing new training paradigms is one way to facilitate research progress in computational biology. Here, we describe a new undergraduate program in biomolecular structure prediction and design in which students conduct research at labs located at geographically-distributed institutions while remaining connected through an online community. This 10-week summer program begins with one week of training on computational biology methods development, transitions to eight weeks of research, and culminates in one week at the Rosetta annual conference. To date, two cohorts of students have participated, tackling research topics including vaccine design, enzyme design, protein-based materials, glycoprotein modeling, crowd-sourced science, RNA processing, hydrogen bond networks, and amyloid formation. Students in the program report outcomes comparable to students who participate in similar in-person programs. These outcomes include the development of a sense of community and increases in their scientific self-efficacy, scientific identity, and science values, all predictors of continuing in a science research career. Furthermore, the program attracted students from diverse backgrounds, which demonstrates the potential of this approach to broaden the participation of young scientists from backgrounds traditionally underrepresented in computational biology.

  9. De-novo discovery of differentially abundant transcription factor binding sites including their positional preference.

    Science.gov (United States)

    Keilwagen, Jens; Grau, Jan; Paponov, Ivan A; Posch, Stefan; Strickert, Marc; Grosse, Ivo

    2011-02-10

    Transcription factors are a main component of gene regulation as they activate or repress gene expression by binding to specific binding sites in promoters. The de-novo discovery of transcription factor binding sites in target regions obtained by wet-lab experiments is a challenging problem in computational biology, which has not been fully solved yet. Here, we present a de-novo motif discovery tool called Dispom for finding differentially abundant transcription factor binding sites that models existing positional preferences of binding sites and adjusts the length of the motif in the learning process. Evaluating Dispom, we find that its prediction performance is superior to existing tools for de-novo motif discovery for 18 benchmark data sets with planted binding sites, and for a metazoan compendium based on experimental data from micro-array, ChIP-chip, ChIP-DSL, and DamID as well as Gene Ontology data. Finally, we apply Dispom to find binding sites differentially abundant in promoters of auxin-responsive genes extracted from Arabidopsis thaliana microarray data, and we find a motif that can be interpreted as a refined auxin responsive element predominately positioned in the 250-bp region upstream of the transcription start site. Using an independent data set of auxin-responsive genes, we find in genome-wide predictions that the refined motif is more specific for auxin-responsive genes than the canonical auxin-responsive element. In general, Dispom can be used to find differentially abundant motifs in sequences of any origin. However, the positional distribution learned by Dispom is especially beneficial if all sequences are aligned to some anchor point like the transcription start site in case of promoter sequences. We demonstrate that the combination of searching for differentially abundant motifs and inferring a position distribution from the data is beneficial for de-novo motif discovery. Hence, we make the tool freely available as a component of the open

  10. Evaluation of computer imaging technique for predicting the SPAD readings in potato leaves

    Directory of Open Access Journals (Sweden)

    M.S. Borhan

    2017-12-01

    Full Text Available Facilitating non-contact measurement, a computer-imaging system was devised and evaluated to predict the chlorophyll content in potato leaves. A charge-coupled device (CCD) camera paired with two optical filters and a light chamber was used to acquire green (550 ± 40 nm) and red band (700 ± 40 nm) images from the same leaf. Potato leaves from 15 plants differing in coloration (green to yellow) and age were selected for this study. Histogram-based image features, such as the means and variances of the green and red band images, were extracted from the histogram. Regression analyses demonstrated that the variations in SPAD meter reading could be explained by the mean gray and variances of gray scale values. The fitted least square models based on the mean gray scale levels were inversely related to the chlorophyll content of the potato leaf, with an R2 of 0.87 using a green band image and an R2 of 0.79 using a red band image. With the four extracted image features, the developed multiple linear regression model predicted the chlorophyll content with a high R2 of 0.88. The multiple regression model (using all features) provided an average prediction accuracy of 85.08% and a maximum accuracy of 99.8%. The prediction model using only the mean gray value of the red band showed an average accuracy of 81.6% with a maximum accuracy of 99.14%. Keywords: Computer imaging, Chlorophyll, SPAD meter, Regression, Prediction accuracy
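
    The regression step described above amounts to a few lines once the band statistics are in hand; the synthetic leaf "images" and SPAD values below are assumptions chosen so that darker green-band pixels correspond to higher chlorophyll.

```python
# Predict SPAD readings from mean/variance of green- and red-band leaf images.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
features, spad = [], []
for _ in range(45):                                  # 45 synthetic leaves
    chlorophyll = rng.uniform(20, 60)                # assumed true SPAD value
    green = rng.normal(200 - 2.0 * chlorophyll, 8, (64, 64))   # darker if greener
    red = rng.normal(180 - 1.2 * chlorophyll, 10, (64, 64))
    features.append([green.mean(), green.var(), red.mean(), red.var()])
    spad.append(chlorophyll + rng.normal(0, 1.5))

X, y = np.array(features), np.array(spad)
model = LinearRegression().fit(X[:30], y[:30])       # fit on the first 30 leaves
pred = model.predict(X[30:])
print(f"R2 on held-out leaves: {r2_score(y[30:], pred):.2f}")
print(f"mean absolute prediction error: {np.abs(pred - y[30:]).mean():.2f} SPAD units")
```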

  11. Computational Approaches for Prediction of Pathogen-Host Protein-Protein Interactions

    Directory of Open Access Journals (Sweden)

    Esmaeil eNourani

    2015-02-01

    Full Text Available Infectious diseases are still among the major and most prevalent health problems, mostly because of the drug resistance of novel variants of pathogens. Molecular interactions between pathogens and their hosts are the key part of the infection mechanisms. Novel antimicrobial therapeutics to fight drug resistance are only possible in the case of a thorough understanding of pathogen-host interaction (PHI) systems. Existing databases, which contain experimentally verified PHI data, suffer from scarcity of reported interactions due to the technically challenging and time-consuming process of experiments. This has motivated many researchers to address the problem by proposing computational approaches for analysis and prediction of PHIs. The computational methods primarily utilize sequence information, protein structure and known interactions. Classic machine learning techniques are used when there are sufficient known interactions to be used as training data. In the opposite case, transfer and multi-task learning methods are preferred. Here, we present an overview of these computational approaches for PHI prediction, discussing their weaknesses and abilities, with future directions.

  12. Computational Prediction of MicroRNAs from Toxoplasma gondii Potentially Regulating the Hosts’ Gene Expression

    Directory of Open Access Journals (Sweden)

    Müşerref Duygu Saçar

    2014-10-01

    Full Text Available MicroRNAs (miRNAs) were discovered two decades ago, yet there is still a great need for further studies elucidating their genesis and targeting in different phyla. Since experimental discovery and validation of miRNAs is difficult, computational predictions are indispensable, and today most computational approaches employ machine learning. Toxoplasma gondii, a parasite residing within the cells of its hosts, such as humans, uses miRNAs for its post-transcriptional gene regulation. It may also regulate its hosts' gene expression, which has been shown in brain cancer. Since previous studies have shown that overexpressed miRNAs within the host are causal for disease onset, we hypothesized that T. gondii could export miRNAs into its host cell. We computationally predicted all hairpins from the genome of T. gondii and used mouse and human models to filter possible candidates. These were then further compared to known miRNAs in human and rodents, and their expression was examined for T. gondii grown in mouse and human hosts, respectively. We found that among the millions of potential hairpins in T. gondii, only a few thousand pass filtering using a human or mouse model and that even fewer of those are expressed. Since they are expressed and differentially expressed in rodents and human, we suggest that there is a chance that T. gondii may export miRNAs into its hosts for direct regulation.

  13. User's Self-Prediction of Performance in Motor Imagery Brain-Computer Interface.

    Science.gov (United States)

    Ahn, Minkyu; Cho, Hohyun; Ahn, Sangtae; Jun, Sung C

    2018-01-01

    Performance variation is a critical issue in motor imagery brain-computer interface (MI-BCI), and various neurophysiological, psychological, and anatomical correlates have been reported in the literature. Although the main aim of such studies is to predict MI-BCI performance for the prescreening of poor performers, studies which focus on the user's sense of the motor imagery process and directly estimate MI-BCI performance through the user's self-prediction are lacking. In this study, we first test each user's self-prediction idea on motor imagery experimental datasets. Fifty-two subjects participated in a classical, two-class motor imagery experiment and were asked to evaluate their easiness with motor imagery and to predict their own MI-BCI performance. During the motor imagery experiment, an electroencephalogram (EEG) was recorded; however, no feedback on motor imagery was given to subjects. From EEG recordings, the offline classification accuracy was estimated and compared with several questionnaire scores of subjects, as well as with each subject's self-prediction of MI-BCI performance. The subjects' performance predictions during the motor imagery task showed a high positive correlation (r = 0.64) with their actual performance, even without feedback information. This implies that the human brain is an active learning system and, by self-experiencing the endogenous motor imagery process, it can sense and adopt the quality of the process. Thus, it is believed that users may be able to predict MI-BCI performance, and these results may contribute to a better understanding of low performance and to advancing BCI.

  14. An integrated computational validation approach for potential novel miRNA prediction

    Directory of Open Access Journals (Sweden)

    Pooja Viswam

    2017-12-01

    Full Text Available MicroRNAs (miRNAs) are short, non-coding RNAs of 17-24 bp in length that regulate gene expression by targeting mRNA molecules. The regulatory functions of miRNAs are known to be majorly associated with disease phenotypes such as cancer, as well as with cell signaling, cell division, growth and other metabolic processes. Novel miRNAs are defined as sequences which do not have any similarity with existing known sequences and lack any experimental evidence. In recent decades, the advent of next-generation sequencing has allowed us to capture the small RNA molecules from cells and to develop methods to estimate their expression levels. Several computational algorithms are available to predict novel miRNAs from deep sequencing data. In this work, we integrated three novel miRNA prediction programs, miRDeep, miRanalyzer and miRPRo, to compare and validate their prediction efficiency. The dicer cleavage sites, alignment density, seed conservation, minimum free energy, AU-GC percentage, secondary loop scores, false discovery rates and confidence scores will be considered for comparison and evaluation. The efficiency of identifying isomiRs and base pair mismatches in a strand-specific manner will also be considered in the computational validation. Further, the criteria and parameters for the identification of the best possible novel miRNA with minimal false positive rates were deduced.

  15. An experimental study on prediction of gallstone composition by ultrasonography and computed tomography

    International Nuclear Information System (INIS)

    Lee, Jong Beum; Chung, Sae Yul; Kim, Kun Sang; Lee, Yong Chul; Han, Man Chung; Kim, Jin Kyu

    1992-01-01

    Prediction of the chemical composition of gallstones is a prerequisite when contemplating chemical dissolution or extracorporeal shock wave lithotripsy of gallstones. The authors retrospectively analysed the correlation between the quantitative chemical composition of gallstones and their ultrasonographic and computed tomographic findings. The ultrasonography (US) and computed tomography (CT) of 100 consecutive stones obtained from 100 patients were performed under in vitro conditions. Their US and CT findings were grouped into characteristic patterns, and each group was compared with the chemical composition of the stones. Stones with an entirely discernible circumference and homogeneous internal echo on US had high bilirubin and low cholesterol content. Acoustic shadows were frequently absent in these stones. Stones with variable internal echo on US had relatively high cholesterol content, but their distribution ranges were wide. There was no correlation between the cholesterol content and the CT number of the gallstones. There was a positive correlation between the calcium content and the CT number of gallstones. The near-totally calcified gallstones had very low cholesterol and high residue content. There was no relationship between the calcification type and the ultrasonographic pattern. In conclusion, stones with an entirely discernible circumference and homogeneous internal echo on US were pigment stones. In contrast, stones with variable internal echo had relatively high cholesterol content. CT could predict the calcium content from the CT number, but could not predict the cholesterol content.

  16. Prediction of scour caused by 2D horizontal jets using soft computing techniques

    Directory of Open Access Journals (Sweden)

    Masoud Karbasi

    2017-12-01

    Full Text Available This paper presents the application of five soft-computing techniques, artificial neural networks, support vector regression, gene expression programming, the group method of data handling (GMDH) neural network and the adaptive-network-based fuzzy inference system, to predict the maximum scour hole depth downstream of a sluice gate. The input parameters affecting the scour depth are the sediment size and its gradation, apron length, sluice gate opening, jet Froude number and the tail water depth. Six non-dimensional parameters were derived to define a functional relationship between the input and output variables. Published data from experimental research were used. The results of the soft-computing techniques were compared with empirical and regression-based equations. The results obtained from the soft-computing techniques are superior to those of the empirical and regression-based equations. Comparison of the soft-computing techniques showed that the accuracy of the ANN model is higher than that of the other models (RMSE = 0.869). A new GEP-based equation was proposed.

  17. FIRAC: a computer code to predict fire-accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Krause, F.R.; Tang, P.K.; Andrae, R.W.; Martin, R.A.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire

  18. Chapter 6 – Computer-Aided Molecular Design and Property Prediction

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Zhang, L.; Kalakul, Sawitree

    2017-01-01

    Today's society needs many chemical-based products for its survival, nutrition, health, transportation, agriculture, and the functioning of processes. Chemical-based products have to be designed/developed in order to meet these needs, while at the same time they must be innovative and sustainable ... for the initial stages of the design/development process. Therefore, computer-aided molecular design and property prediction techniques are two topics that play important roles in chemical product design, analysis, and application. In this chapter, an overview of the concepts, methods, and tools related to these two topics is given. In addition, a generic computer-aided framework for the design of molecules, mixtures, and blends is presented. The application of the framework is highlighted for molecular products through two case studies involving the design of refrigerants and surfactants.

  19. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downside reputational risk; statistical methods for research on human genome dynamics; inference in non-Euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  20. Chest computed tomography scores are predictive of survival in patients with cystic fibrosis awaiting lung transplantation

    DEFF Research Database (Denmark)

    Loeve, Martine; Hop, Wim C. J.; de Bruijne, Marleen

    2012-01-01

    Rationale: Up to a third of cystic fibrosis (CF) patients awaiting lung transplantation (LTX) die while waiting. Inclusion of computed tomography (CT) scores may improve survival prediction models such as the lung allocation score (LAS). Objectives: This study investigated the association between CT and survival in CF patients screened for LTX. Methods: Clinical data and chest CTs of 411 CF patients screened for LTX between 1990 and 2005 were collected from 17 centers. CTs were scored with the Severe Advanced Lung Disease (SALD) 4-category scoring system, including the components "infection/inflammation" (INF), air trapping/hypoperfusion (AT), normal/hyperperfusion (NOR) and bulla/cysts (BUL). The volume of each component was computed using semi-automated software. Survival analysis included Kaplan-Meier curves and Cox-regression models. Measurements and main results: 366 (186 males) out of 411...

  1. Prediction of Clinical Outcome After Acute Ischemic Stroke: The Value of Repeated Noncontrast Computed Tomography, Computed Tomographic Angiography, and Computed Tomographic Perfusion.

    Science.gov (United States)

    Dankbaar, Jan W; Horsch, Alexander D; van den Hoven, Andor F; Kappelle, L Jaap; van der Schaaf, Irene C; van Seeters, Tom; Velthuis, Birgitta K

    2017-09-01

    Early prediction of outcome in acute ischemic stroke is important for clinical management. This study aimed to evaluate the relationship between early follow-up multimodality computed tomographic (CT) imaging and clinical outcome at 90 days in a large multicenter stroke study. From the DUST study (Dutch Acute Stroke Study), patients were selected with (1) anterior circulation occlusion on CT angiography (CTA) and ischemic deficit on CT perfusion (CTP) on admission, and (2) day 3 follow-up noncontrast CT, CTP, and CTA. Follow-up infarct volume on noncontrast CT, poor recanalization on CTA, and poor reperfusion on CTP (mean transit time index ≤75%) were related to unfavorable outcome after 90 days defined as modified Rankin Scale 3 to 6. Four multivariable models were constructed: (1) only baseline variables (model 1), (2) model 1 with addition of infarct volume, (3) model 1 with addition of recanalization, and (4) model 1 with addition of reperfusion. The areas under the receiver operating characteristic curves of the models were compared using the DeLong test. A total of 242 patients were included. Poor recanalization was found in 21%, poor reperfusion in 37%, and unfavorable outcome in 44%. The area under the curve of the receiver operating characteristic curve without follow-up imaging was 0.81, with follow-up noncontrast CT 0.85 ( P =0.02), CTA 0.86 ( P =0.01), and CTP 0.86 ( P =0.01). All 3 follow-up imaging modalities improved outcome prediction compared with no imaging. There was no difference between the imaging models. Follow-up imaging after 3 days improves outcome prediction compared with prediction based on baseline variables alone. CTA recanalization and CTP reperfusion do not outperform noncontrast CT at this time point. URL: http://www.clinicaltrials.gov. Unique identifier: NCT00880113. © 2017 American Heart Association, Inc.

  2. Impact of cone-beam computed tomography on implant planning and on prediction of implant size

    International Nuclear Information System (INIS)

    Pedroso, Ludmila Assuncao de Mello; Silva, Maria Alves Garcia Santos; Garcia, Robson Rodrigues; Leles, Jose Luiz Rodrigues; Leles, Claudio Rodrigues

    2013-01-01

    The aim was to investigate the impact of cone-beam computed tomography (CBCT) on implant planning and on the prediction of final implant size. Consecutive patients referred for implant treatment were submitted to clinical examination, panoramic (PAN) radiography and a CBCT exam. Initial planning of implant length and width was assessed based on the clinical and PAN exams, and final planning on the CBCT exam to complement the diagnosis. The actual dimensions of the implants placed during surgery were compared with those obtained during initial and final planning, using the McNemar test (p 0.05). It was concluded that CBCT improves the ability to predict the actual implant length and reduces inaccuracy in surgical dental implant planning. (author)

  3. Cloud computing approaches for prediction of ligand binding poses and pathways.

    Science.gov (United States)

    Lawrenz, Morgan; Shukla, Diwakar; Pande, Vijay S

    2015-01-22

    We describe an innovative protocol for ab initio prediction of ligand crystallographic binding poses and highly effective analysis of large datasets generated for protein-ligand dynamics. We include a procedure for setup and performance of distributed molecular dynamics simulations on cloud computing architectures, a model for efficient analysis of simulation data, and a metric for evaluation of model convergence. We give accurate binding pose predictions for five ligands ranging in affinity from 7 nM to > 200 μM for the immunophilin protein FKBP12, for expedited results in cases where experimental structures are difficult to produce. Our approach goes beyond single, low energy ligand poses to give quantitative kinetic information that can inform protein engineering and ligand design.

  4. Computational prediction of muon stopping sites using ab initio random structure searching (AIRSS)

    Science.gov (United States)

    Liborio, Leandro; Sturniolo, Simone; Jochym, Dominik

    2018-04-01

    The stopping site of the muon in a muon-spin relaxation experiment is in general unknown. There are some techniques that can be used to guess the muon stopping site, but they often rely on approximations and are not generally applicable to all cases. In this work, we propose a purely theoretical method to predict muon stopping sites in crystalline materials from first principles. The method is based on a combination of ab initio calculations, random structure searching, and machine learning, and it has successfully predicted the MuT and MuBC stopping sites of muonium in Si, diamond, and Ge, as well as the muonium stopping site in LiF, without any recourse to experimental results. The method makes use of Soprano, a Python library developed to aid ab initio computational crystallography, that was publicly released and contains all the software tools necessary to reproduce our analysis.

  5. Seismic Response Prediction of Buildings with Base Isolation Using Advanced Soft Computing Approaches

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2017-01-01

    Full Text Available Modeling response of structures under seismic loads is an important factor in Civil Engineering as it crucially affects the design and management of structures, especially for the high-risk areas. In this study, novel applications of advanced soft computing techniques are utilized for predicting the behavior of centrically braced frame (CBF buildings with lead-rubber bearing (LRB isolation system under ground motion effects. These techniques include least square support vector machine (LSSVM, wavelet neural networks (WNN, and adaptive neurofuzzy inference system (ANFIS along with wavelet denoising. The simulation of a 2D frame model and eight ground motions are considered in this study to evaluate the prediction models. The comparison results indicate that the least square support vector machine is superior to other techniques in estimating the behavior of smart structures.

  6. A computer model to predict temperatures and gas flows during AGR fuel handling

    International Nuclear Information System (INIS)

    Bishop, D.C.; Bowler, P.G.

    1986-01-01

    The paper describes a comprehensive computer model (HOSTAGE) that has been developed for the Heysham II/Torness AGRs to predict temperature transients for all the important components during normal and fault conditions. It models not only the charge and discharge of fuel from an on-load reactor but also follows the fuel down the rest of the fuel route until it is dismantled. The main features of the physical model of gas and heat flow are described. Experimental results are used where appropriate and an indication is given of how the predictions by HOSTAGE correlate with operating AGR reactors. The role of HOSTAGE in the Heysham II/Torness safety case is briefly discussed. (author)

  7. TRANSENERGY S: computer codes for coolant temperature prediction in LMFBR cores during transient events

    International Nuclear Information System (INIS)

    Glazer, S.; Todreas, N.; Rohsenow, W.; Sonin, A.

    1981-02-01

    This document is intended as a user/programmer manual for the TRANSENERGY-S computer code. The code represents an extension of the steady state ENERGY model, originally developed by E. Khan, to predict coolant and fuel pin temperatures in a single LMFBR core assembly during transient events. Effects which may be modelled in the analysis include temporal variation in gamma heating in the coolant and duct wall, rod power production, coolant inlet temperature, coolant flow rate, and thermal boundary conditions around the single assembly. Numerical formulations of energy equations in the fuel and coolant are presented, and the solution schemes and stability criteria are discussed. A detailed description of the input deck preparation is presented, as well as code logic flowcharts, and a complete program listing. TRANSENERGY-S code predictions are compared with those of two different versions of COBRA, and partial results of a 61 pin bundle test case are presented

  8. Towards a general theory of neural computation based on prediction by single neurons.

    Directory of Open Access Journals (Sweden)

    Christopher D Fiorillo

    Full Text Available Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of

  9. Modification in the FUDA computer code to predict fuel performance at high burnup

    Energy Technology Data Exchange (ETDEWEB)

    Das, M; Arunakumar, B V; Prasad, P N [Nuclear Power Corp., Mumbai (India)

    1997-08-01

    The computer code FUDA (FUel Design Analysis) participated in the blind exercises organized by the IAEA CRP (Co-ordinated Research Programme) on FUMEX (Fuel Modelling at Extended Burnup). While the code predictions compared well with the experiments at Halden under various parametric and operating conditions, the fission gas release and fission gas pressure were found to be slightly over-predicted, particularly at high burnups. In view of the results of 6 FUMEX cases, the main models and submodels of the code were reviewed and necessary improvements were made. The new version of the code FUDA MOD 2 is now able to predict fuel performance parameters for burn-ups up to 50000 MWD/TeU. The validation field of the code has been extended to prediction of thorium oxide fuel performance. An analysis of local deformations at pellet interfaces and near the end caps is carried out considering the hourglassing of the pellet by the finite element technique. (author). 15 refs, 1 fig.

  10. Computational Prediction of Blood-Brain Barrier Permeability Using Decision Tree Induction

    Directory of Open Access Journals (Sweden)

    Jörg Huwyler

    2012-08-01

    Full Text Available Predicting blood-brain barrier (BBB) permeability is essential to drug development, as a molecule cannot exhibit pharmacological activity within the brain parenchyma without first transiting this barrier. Understanding the process of permeation, however, is complicated by a combination of both limited passive diffusion and active transport. Our aim here was to establish predictive models for BBB drug permeation that include both active and passive transport. A database of 153 compounds was compiled using in vivo surface permeability product (logPS) values in rats as a quantitative parameter for BBB permeability. The open source Chemistry Development Kit (CDK) was used to calculate physico-chemical properties and descriptors. Predictive computational models were implemented by machine learning paradigms (decision tree induction) on both descriptor sets. Models with a corrected classification rate (CCR) of 90% were established. Mechanistic insight into BBB transport was provided by an Ant Colony Optimization (ACO)-based binary classifier analysis to identify the most predictive chemical substructures. Decision trees revealed descriptors of lipophilicity (aLogP) and charge (polar surface area), which were also previously described in models of passive diffusion. However, measures of molecular geometry and connectivity were found to be related to an active drug transport component.
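
    A minimal sketch of the decision-tree step, assuming a precomputed descriptor matrix (with columns such as aLogP and polar surface area) and binary BBB+/BBB− labels; the data here are synthetic, and plain cross-validated accuracy stands in for the corrected classification rate used in the study.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        # Synthetic stand-in for 153 compounds x 6 CDK-style descriptors and BBB labels.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(153, 6))
        y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)

        # Shallow decision tree scored by 10-fold cross-validation.
        tree = DecisionTreeClassifier(max_depth=4, random_state=0)
        score = cross_val_score(tree, X, y, cv=10, scoring="accuracy").mean()
        print(f"Mean 10-fold classification accuracy (toy data): {score:.2f}")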

  11. COMPUTING THERAPY FOR PRECISION MEDICINE: COLLABORATIVE FILTERING INTEGRATES AND PREDICTS MULTI-ENTITY INTERACTIONS.

    Science.gov (United States)

    Regenbogen, Sam; Wilkins, Angela D; Lichtarge, Olivier

    2016-01-01

    Biomedicine produces copious information it cannot fully exploit. Specifically, there is considerable need to integrate knowledge from disparate studies to discover connections across domains. Here, we used a Collaborative Filtering approach, inspired by online recommendation algorithms, in which non-negative matrix factorization (NMF) predicts interactions among chemicals, genes, and diseases only from pairwise information about their interactions. Our approach, applied to matrices derived from the Comparative Toxicogenomics Database, successfully recovered Chemical-Disease, Chemical-Gene, and Disease-Gene networks in 10-fold cross-validation experiments. Additionally, we could predict each of these interaction matrices from the other two. Integrating all three CTD interaction matrices with NMF led to good predictions of STRING, an independent, external network of protein-protein interactions. Finally, this approach could integrate the CTD and STRING interaction data to improve Chemical-Gene cross-validation performance significantly, and, in a time-stamped study, it predicted information added to CTD after a given date, using only data prior to that date. We conclude that collaborative filtering can integrate information across multiple types of biological entities, and that as a first step towards precision medicine it can compute drug repurposing hypotheses.
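
    The core matrix-factorization idea can be sketched as follows: factor a sparse binary interaction matrix with NMF and flag high-scoring zero entries as candidate associations. The matrix here is random and the threshold is arbitrary; they are placeholders, not the CTD data or the authors' exact procedure.

        import numpy as np
        from sklearn.decomposition import NMF

        # Synthetic chemical-by-disease matrix (1 = known association, 0 = unknown).
        rng = np.random.default_rng(2)
        R = (rng.random((50, 40)) < 0.1).astype(float)

        nmf = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
        W = nmf.fit_transform(R)        # chemical factors
        H = nmf.components_             # disease factors
        R_hat = W @ H                   # reconstructed interaction matrix

        # High reconstruction scores on entries that were zero suggest candidate links.
        threshold = R_hat.mean() + 2 * R_hat.std()
        candidates = np.argwhere((R == 0) & (R_hat > threshold))
        print(f"{len(candidates)} candidate chemical-disease associations above threshold")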

  12. Modification in the FUDA computer code to predict fuel performance at high burnup

    International Nuclear Information System (INIS)

    Das, M.; Arunakumar, B.V.; Prasad, P.N.

    1997-01-01

    The computer code FUDA (FUel Design Analysis) participated in the blind exercises organized by the IAEA CRP (Co-ordinated Research Programme) on FUMEX (Fuel Modelling at Extended Burnup). While the code predictions compared well with the experiments at Halden under various parametric and operating conditions, the fission gas release and fission gas pressure were found to be slightly over-predicted, particularly at high burnups. In view of the results of 6 FUMEX cases, the main models and submodels of the code were reviewed and necessary improvements were made. The new version of the code FUDA MOD 2 is now able to predict fuel performance parameters for burn-ups up to 50000 MWD/TeU. The validation field of the code has been extended to prediction of thorium oxide fuel performance. An analysis of local deformations at pellet interfaces and near the end caps is carried out considering the hourglassing of the pellet by the finite element technique. (author). 15 refs, 1 fig

  13. Prediction of pork loin quality using online computer vision system and artificial intelligence model.

    Science.gov (United States)

    Sun, Xin; Young, Jennifer; Liu, Jeng-Hung; Newman, David

    2018-06-01

    The objective of this project was to develop a computer vision system (CVS) for objective measurement of pork loin under industry speed requirement. Color images of pork loin samples were acquired using a CVS. Subjective color and marbling scores were determined according to the National Pork Board standards by a trained evaluator. Instrument color measurement and crude fat percentage were used as control measurements. Image features (18 color features; 1 marbling feature; 88 texture features) were extracted from whole pork loin color images. Artificial intelligence prediction model (support vector machine) was established for pork color and marbling quality grades. The results showed that CVS with support vector machine modeling reached the highest prediction accuracy of 92.5% for measured pork color score and 75.0% for measured pork marbling score. This research shows that the proposed artificial intelligence prediction model with CVS can provide an effective tool for predicting color and marbling in the pork industry at online speeds. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Preliminary Computational Analysis of the (HIRENASD) Configuration in Preparation for the Aeroelastic Prediction Workshop

    Science.gov (United States)

    Chwalowski, Pawel; Florance, Jennifer P.; Heeg, Jennifer; Wieseman, Carol D.; Perry, Boyd P.

    2011-01-01

    This paper presents preliminary computational aeroelastic analysis results generated in preparation for the first Aeroelastic Prediction Workshop (AePW). These results were produced using FUN3D software developed at NASA Langley and are compared against the experimental data generated during the HIgh REynolds Number Aero-Structural Dynamics (HIRENASD) Project. The HIRENASD wind-tunnel model was tested in the European Transonic Windtunnel in 2006 by Aachen University's Department of Mechanics with funding from the German Research Foundation. The computational effort discussed here was performed (1) to obtain a preliminary assessment of the ability of the FUN3D code to accurately compute physical quantities experimentally measured on the HIRENASD model and (2) to translate the lessons learned from the FUN3D analysis of HIRENASD into a set of initial guidelines for the first AePW, which includes test cases for the HIRENASD model and its experimental data set. This paper compares the computational and experimental results obtained at Mach 0.8 for a Reynolds number of 7 million based on chord, corresponding to the HIRENASD test conditions No. 132 and No. 159. Aerodynamic loads and static aeroelastic displacements are compared at two levels of the grid resolution. Harmonic perturbation numerical results are compared with the experimental data using the magnitude and phase relationship between pressure coefficients and displacement. A dynamic aeroelastic numerical calculation is presented at one wind-tunnel condition in the form of the time history of the generalized displacements. Additional FUN3D validation results are also presented for the AGARD 445.6 wing data set. This wing was tested in the Transonic Dynamics Tunnel and is commonly used in the preliminary benchmarking of computational aeroelastic software.

  15. Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT

    International Nuclear Information System (INIS)

    Depeursinge, Adrien; Yanagawa, Masahiro; Leung, Ann N.; Rubin, Daniel L.

    2015-01-01

    Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts: Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features solely. For all models, the performance was found to be significantly higher when image attributes were based on the solid components solely versus using the entire tumors (p < 3.08 × 10⁻⁵). Conclusions: This study

  16. A Public Trial De Novo

    DEFF Research Database (Denmark)

    Vedel, Jane Bjørn; Gad, Christopher

    2011-01-01

    This article addresses the concept of “industrial interests” and examines its role in a topical controversy about a large research grant from a private foundation, the Novo Nordisk Foundation, to the University of Copenhagen. The authors suggest that the debate took the form of a “public trial” where the grant and close(r) intermingling between industry and public research was prosecuted and defended. First, the authors address how the grant was framed in the media. Second, they redescribe the case by introducing new “evidence” that, because of this framing, did not reach “the court.” The article ends with a discussion of some implications of the analysis, including that policy making, academic research, and public debates might benefit from more detailed accounts of interests and stakes.

  17. De novo transcriptome sequencing and digital gene expression analysis predict biosynthetic pathway of rhynchophylline and isorhynchophylline from Uncaria rhynchophylla, a non-model plant with potent anti-alzheimer's properties.

    Science.gov (United States)

    Guo, Qianqian; Ma, Xiaojun; Wei, Shugen; Qiu, Deyou; Wilson, Iain W; Wu, Peng; Tang, Qi; Liu, Lijun; Dong, Shoukun; Zu, Wei

    2014-08-12

    The major medicinal alkaloids isolated from Uncaria rhynchophylla (gouteng in chinese) capsules are rhynchophylline (RIN) and isorhynchophylline (IRN). Extracts containing these terpene indole alkaloids (TIAs) can inhibit the formation and destabilize preformed fibrils of amyloid β protein (a pathological marker of Alzheimer's disease), and have been shown to improve the cognitive function of mice with Alzheimer-like symptoms. The biosynthetic pathways of RIN and IRN are largely unknown. In this study, RNA-sequencing of pooled Uncaria capsules RNA samples taken at three developmental stages that accumulate different amount of RIN and IRN was performed. More than 50 million high-quality reads from a cDNA library were generated and de novo assembled. Sequences for all of the known enzymes involved in TIAs synthesis were identified. Additionally, 193 cytochrome P450 (CYP450), 280 methyltransferase and 144 isomerase genes were identified, that are potential candidates for enzymes involved in RIN and IRN synthesis. Digital gene expression profile (DGE) analysis was performed on the three capsule developmental stages, and based on genes possessing expression profiles consistent with RIN and IRN levels; four CYP450s, three methyltransferases and three isomerases were identified as the candidates most likely to be involved in the later steps of RIN and IRN biosynthesis. A combination of de novo transcriptome assembly and DGE analysis was shown to be a powerful method for identifying genes encoding enzymes potentially involved in the biosynthesis of important secondary metabolites in a non-model plant. The transcriptome data from this study provides an important resource for understanding the formation of major bioactive constituents in the capsule extract from Uncaria, and provides information that may aid in metabolic engineering to increase yields of these important alkaloids.

  18. DR2DI: a powerful computational tool for predicting novel drug-disease associations

    Science.gov (United States)

    Lu, Lu; Yu, Hua

    2018-05-01

    Finding new related candidate diseases for known drugs provides an effective method for fast-speed and low-risk drug development. However, experimental identification of drug-disease associations is expensive and time-consuming. This motivates the need for developing in silico computational methods that can infer true drug-disease pairs with high confidence. In this study, we presented a novel and powerful computational tool, DR2DI, for accurately uncovering the potential associations between drugs and diseases using high-dimensional and heterogeneous omics data as information sources. Based on a unified and extended similarity kernel framework, DR2DI inferred the unknown relationships between drugs and diseases using a Regularized Kernel Classifier. Importantly, DR2DI employed a semi-supervised and global learning algorithm which can be applied to uncover the diseases (drugs) associated with known and novel drugs (diseases). In silico global validation experiments showed that DR2DI significantly outperforms two recent approaches for predicting drug-disease associations. Detailed case studies further demonstrated that the therapeutic indications and side effects of drugs predicted by DR2DI could be validated by existing database records and literature, suggesting that DR2DI can serve as a useful bioinformatic tool for identifying the potential drug-disease associations and guiding drug repositioning. Our software and comparison codes are freely available at https://github.com/huayu1111/DR2DI.

  19. Binding Mode and Induced Fit Predictions for Prospective Computational Drug Design.

    Science.gov (United States)

    Grebner, Christoph; Iegre, Jessica; Ulander, Johan; Edman, Karl; Hogner, Anders; Tyrchan, Christian

    2016-04-25

    Computer-aided drug design plays an important role in medicinal chemistry to obtain insights into molecular mechanisms and to prioritize design strategies. Although significant improvement has been made in structure-based design, it still remains a key challenge to accurately model and predict induced fit mechanisms. Most of the currently available techniques either do not provide sufficient protein conformational sampling or are too computationally demanding to fit an industrial setting. The current study presents a systematic and exhaustive investigation of predicting binding modes for a range of systems using PELE (Protein Energy Landscape Exploration), an efficient and fast protein-ligand sampling algorithm. The systems analyzed (cytochrome P450, kinase, protease, and nuclear hormone receptor) exhibit different complexities of ligand induced fit mechanisms and protein dynamics. The results are compared with results from classical molecular dynamics simulations and (induced fit) docking. This study shows that ligand induced side chain rearrangements and smaller to medium backbone movements are captured well in PELE. Large secondary structure rearrangements, however, remain challenging for all employed techniques. Relevant binding modes (in terms of ligand heavy atom RMSD) were obtained with the PELE method within a few hours of simulation, positioning PELE as a tool applicable for rapid drug design cycles.

  20. Is there any role of positron emission tomography computed tomography for predicting resectability of gallbladder cancer?

    Science.gov (United States)

    Kim, Jaihwan; Ryu, Ji Kon; Kim, Chulhan; Paeng, Jin Chul; Kim, Yong-Tae

    2014-05-01

    The role of integrated (18)F-2-fluoro-2-deoxy-D-glucose positron emission tomography computed tomography (PET-CT) is uncertain in gallbladder cancer. The aim of this study was to show the role of PET-CT in gallbladder cancer patients. Fifty-three patients with gallbladder cancer underwent preoperative computed tomography (CT) and PET-CT scans. Their medical records were retrospectively reviewed. Twenty-six patients underwent resection. Based on the final outcomes, PET-CT was in good agreement (0.61 to 0.80) with resectability whereas CT was in acceptable agreement (0.41 to 0.60) with resectability. When the diagnostic accuracy of the predictions for resectability was calculated with the ROC curve, the accuracy of PET-CT was higher than that of CT in patients who underwent surgical resection (P=0.03); however, there was no difference when all patients were considered (P=0.12). CT and PET-CT had a discrepancy in assessing curative resection in nine patients. These consisted of two false negative and four false positive CT results (11.3%) and three false negative PET-CT results (5.1%). PET-CT was in good agreement with the final outcomes compared to CT. In a complementary role to CT, PET-CT tended to show better prediction of resectability than CT, especially due to unexpected distant metastases.

  1. DR2DI: a powerful computational tool for predicting novel drug-disease associations

    Science.gov (United States)

    Lu, Lu; Yu, Hua

    2018-04-01

    Finding new related candidate diseases for known drugs provides an effective method for fast-speed and low-risk drug development. However, experimental identification of drug-disease associations is expensive and time-consuming. This motivates the need for developing in silico computational methods that can infer true drug-disease pairs with high confidence. In this study, we presented a novel and powerful computational tool, DR2DI, for accurately uncovering the potential associations between drugs and diseases using high-dimensional and heterogeneous omics data as information sources. Based on a unified and extended similarity kernel framework, DR2DI inferred the unknown relationships between drugs and diseases using a Regularized Kernel Classifier. Importantly, DR2DI employed a semi-supervised and global learning algorithm which can be applied to uncover the diseases (drugs) associated with known and novel drugs (diseases). In silico global validation experiments showed that DR2DI significantly outperforms two recent approaches for predicting drug-disease associations. Detailed case studies further demonstrated that the therapeutic indications and side effects of drugs predicted by DR2DI could be validated by existing database records and literature, suggesting that DR2DI can serve as a useful bioinformatic tool for identifying the potential drug-disease associations and guiding drug repositioning. Our software and comparison codes are freely available at https://github.com/huayu1111/DR2DI.

  2. Motivation and emotion predict medical students' attention to computer-based feedback.

    Science.gov (United States)

    Naismith, Laura M; Lajoie, Susanne P

    2017-12-14

    Students cannot learn from feedback unless they pay attention to it. This study investigated relationships between the personal factors of achievement goal orientations, achievement emotions, and attention to feedback in BioWorld, a computer environment for learning clinical reasoning. Novice medical students (N = 28) completed questionnaires to measure their achievement goal orientations and then thought aloud while solving three endocrinology patient cases and reviewing corresponding expert solutions. Questionnaires administered after each case measured participants' experiences of five feedback emotions: pride, relief, joy, shame, and anger. Attention to individual text segments of the expert solutions was modelled using logistic regression and the method of generalized estimating equations. Participants did not attend to all of the feedback that was available to them. Performance-avoidance goals and shame positively predicted attention to feedback, and performance-approach goals and relief negatively predicted attention to feedback. Aspects of how the feedback was displayed also influenced participants' attention. Findings are discussed in terms of their implications for educational theory as well as the design and use of computer learning environments in medical education.
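
    A sketch of the described analysis: logistic regression for attention to each feedback segment, with generalized estimating equations to handle repeated segments per student. The variable names and simulated data below are hypothetical stand-ins, not the BioWorld dataset.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Simulated long-format data: one row per student x feedback segment.
        rng = np.random.default_rng(3)
        n_students, n_segments = 28, 20
        df = pd.DataFrame({
            "student": np.repeat(np.arange(n_students), n_segments),
            "perf_avoid": np.repeat(rng.normal(size=n_students), n_segments),
            "shame": np.repeat(rng.normal(size=n_students), n_segments),
        })
        logit = 0.8 * df["perf_avoid"] + 0.6 * df["shame"]
        df["attended"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        # Logistic GEE with an exchangeable working correlation within students.
        model = smf.gee("attended ~ perf_avoid + shame", groups="student", data=df,
                        family=sm.families.Binomial(),
                        cov_struct=sm.cov_struct.Exchangeable())
        print(model.fit().summary())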

  3. A comparative analysis among computational intelligence techniques for dissolved oxygen prediction in Delaware River

    Directory of Open Access Journals (Sweden)

    Ehsan Olyaie

    2017-05-01

    Full Text Available Most of the water quality models previously developed and used in dissolved oxygen (DO) prediction are complex. Moreover, reliable data available to develop/calibrate new DO models are scarce. Therefore, there is a need to study and develop models that can handle easily measurable parameters of a particular site, even with records of short length. In recent decades, computational intelligence techniques, as effective approaches for predicting complicated and significant indicators of the state of aquatic ecosystems such as DO, have created a great change in predictions. In this study, three different AI methods, comprising (1) two types of artificial neural networks (ANN), namely the multilayer perceptron (MLP) and radial basis function (RBF) networks; (2) an advancement of genetic programming, namely linear genetic programming (LGP); and (3) a support vector machine (SVM) technique, were used for DO prediction in the Delaware River located at Trenton, USA. For evaluating the performance of the proposed models, root mean square error (RMSE), Nash–Sutcliffe efficiency coefficient (NS), mean absolute relative error (MARE) and correlation coefficient (R) statistics were used to choose the best predictive model. The comparison of estimation accuracies of the various intelligence models illustrated that the SVM was able to develop the most accurate model for DO estimation in comparison to the other models. It was also found that the LGP model performs better than both ANN models. For example, the determination coefficient was 0.99 for the best SVM model, while it was 0.96, 0.91 and 0.81 for the best LGP, MLP and RBF models, respectively. In general, the results indicated that an SVM model could be employed satisfactorily for DO estimation.
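
    A compact sketch of this kind of model comparison, using scikit-learn stand-ins (SVR and a small MLP) in place of the study's exact SVM/LGP/ANN implementations; the water-quality data are simulated, and RMSE, NS, and R² are computed directly.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error, r2_score

        # Simulated inputs (e.g., temperature, pH, conductance, discharge) and DO target.
        rng = np.random.default_rng(4)
        X = rng.normal(size=(500, 4))
        y = 8.0 - 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        models = {"SVM": SVR(C=10.0, gamma="scale"),
                  "ANN": MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)}
        for name, m in models.items():
            pred = m.fit(X_tr, y_tr).predict(X_te)
            rmse = np.sqrt(mean_squared_error(y_te, pred))
            ns = 1.0 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
            print(f"{name}: RMSE={rmse:.2f}  NS={ns:.2f}  R2={r2_score(y_te, pred):.2f}")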

  4. Quantitative computed tomography versus spirometry in predicting air leak duration after major lung resection for cancer.

    Science.gov (United States)

    Ueda, Kazuhiro; Kaneda, Yoshikazu; Sudo, Manabu; Mitsutaka, Jinbo; Li, Tao-Sheng; Suga, Kazuyoshi; Tanaka, Nobuyuki; Hamano, Kimikazu

    2005-11-01

    Emphysema is a well-known risk factor for developing air leak or persistent air leak after pulmonary resection. Although quantitative computed tomography (CT) and spirometry are used to diagnose emphysema, it remains controversial whether these tests are predictive of the duration of postoperative air leak. Sixty-two consecutive patients who were scheduled to undergo major lung resection for cancer were enrolled in this prospective study to define the best predictor of postoperative air leak duration. Preoperative factors analyzed included spirometric variables and area of emphysema (proportion of the low-attenuation area) that was quantified in a three-dimensional CT lung model. Chest tubes were removed the day after disappearance of the air leak, regardless of pleural drainage. Univariate and multivariate proportional hazards analyses were used to determine the influence of preoperative factors on chest tube time (air leak duration). By univariate analysis, site of resection (upper, lower), forced expiratory volume in 1 second, predicted postoperative forced expiratory volume in 1 second, and area of emphysema (10% threshold) were significant predictors of air leak duration. By multivariate analysis, site of resection and area of emphysema were the best independent determinants of air leak duration. The results were similar for patients with a smoking history (n = 40), but neither forced expiratory volume in 1 second nor predicted postoperative forced expiratory volume in 1 second were predictive of air leak duration. Quantitative CT is superior to spirometry in predicting air leak duration after major lung resection for cancer. Quantitative CT may aid in the identification of patients, particularly among those with a smoking history, requiring additional preventive procedures against air leak.
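
    The proportional hazards analysis could be sketched as below with the lifelines package, modeling chest-tube time with emphysema area and resection site as covariates; the cohort here is simulated and the covariate names are illustrative only, not the study's data.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        # Simulated cohort of 62 patients; the "event" is cessation of the air leak.
        rng = np.random.default_rng(5)
        n = 62
        df = pd.DataFrame({
            "emphysema_over_10pct": rng.integers(0, 2, n),   # quantitative CT threshold
            "upper_lobe_resection": rng.integers(0, 2, n),
            "fev1_percent_pred": rng.normal(80, 15, n),
        })
        df["chest_tube_days"] = rng.exponential(3.0, n) * np.exp(0.6 * df["emphysema_over_10pct"])
        df["leak_stopped"] = 1                               # all air leaks eventually stop here

        cph = CoxPHFitter()
        cph.fit(df, duration_col="chest_tube_days", event_col="leak_stopped")
        cph.print_summary()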

  5. Computer-aided breast MR image feature analysis for prediction of tumor response to chemotherapy

    International Nuclear Information System (INIS)

    Aghaei, Faranak; Tan, Maxine; Liu, Hong; Zheng, Bin; Hollingsworth, Alan B.; Qian, Wei

    2015-01-01

    Purpose: To identify a new clinical marker based on quantitative kinetic image features analysis and assess its feasibility to predict tumor response to neoadjuvant chemotherapy. Methods: The authors assembled a dataset involving breast MR images acquired from 68 cancer patients before undergoing neoadjuvant chemotherapy. Among them, 25 patients had complete response (CR) and 43 had partial and nonresponse (NR) to chemotherapy based on the response evaluation criteria in solid tumors. The authors developed a computer-aided detection scheme to segment breast areas and tumors depicted on the breast MR images and computed a total of 39 kinetic image features from both tumor and background parenchymal enhancement regions. The authors then applied and tested two approaches to classify between CR and NR cases. The first one analyzed each individual feature and applied a simple feature fusion method that combines classification results from multiple features. The second approach tested an attribute selected classifier that integrates an artificial neural network (ANN) with a wrapper subset evaluator, which was optimized using a leave-one-case-out validation method. Results: In the pool of 39 features, 10 yielded relatively higher classification performance with the areas under receiver operating characteristic curves (AUCs) ranging from 0.61 to 0.78 to classify between CR and NR cases. Using a feature fusion method, the maximum AUC = 0.85 ± 0.05. Using the ANN-based classifier, AUC value significantly increased to 0.96 ± 0.03 (p < 0.01). Conclusions: This study demonstrated that quantitative analysis of kinetic image features computed from breast MR images acquired prechemotherapy has potential to generate a useful clinical marker in predicting tumor response to chemotherapy
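
    A minimal sketch of the leave-one-case-out evaluation of an ANN classifier on kinetic features; the feature matrix is random, the network is a generic scikit-learn MLP rather than the authors' attribute-selected classifier, and the AUC reported comes from toy data only.

        import numpy as np
        from sklearn.model_selection import LeaveOneOut
        from sklearn.neural_network import MLPClassifier
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.metrics import roc_auc_score

        # Synthetic stand-in: 68 patients x 39 kinetic image features, 25 CR vs 43 NR.
        rng = np.random.default_rng(6)
        X = rng.normal(size=(68, 39))
        y = np.r_[np.ones(25), np.zeros(43)]

        clf = make_pipeline(StandardScaler(),
                            MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
        scores = np.empty(len(y))
        for train, test in LeaveOneOut().split(X):
            clf.fit(X[train], y[train])
            scores[test] = clf.predict_proba(X[test])[:, 1]
        print(f"Leave-one-case-out AUC (toy data): {roc_auc_score(y, scores):.2f}")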

  6. Computer-aided breast MR image feature analysis for prediction of tumor response to chemotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Aghaei, Faranak; Tan, Maxine; Liu, Hong; Zheng, Bin, E-mail: Bin.Zheng-1@ou.edu [School of Electrical and Computer Engineering, University of Oklahoma, Norman, Oklahoma 73019 (United States); Hollingsworth, Alan B. [Mercy Women’s Center, Mercy Health Center, Oklahoma City, Oklahoma 73120 (United States); Qian, Wei [Department of Electrical and Computer Engineering, University of Texas, El Paso, Texas 79968 (United States)

    2015-11-15

    Purpose: To identify a new clinical marker based on quantitative kinetic image features analysis and assess its feasibility to predict tumor response to neoadjuvant chemotherapy. Methods: The authors assembled a dataset involving breast MR images acquired from 68 cancer patients before undergoing neoadjuvant chemotherapy. Among them, 25 patients had complete response (CR) and 43 had partial and nonresponse (NR) to chemotherapy based on the response evaluation criteria in solid tumors. The authors developed a computer-aided detection scheme to segment breast areas and tumors depicted on the breast MR images and computed a total of 39 kinetic image features from both tumor and background parenchymal enhancement regions. The authors then applied and tested two approaches to classify between CR and NR cases. The first one analyzed each individual feature and applied a simple feature fusion method that combines classification results from multiple features. The second approach tested an attribute selected classifier that integrates an artificial neural network (ANN) with a wrapper subset evaluator, which was optimized using a leave-one-case-out validation method. Results: In the pool of 39 features, 10 yielded relatively higher classification performance with the areas under receiver operating characteristic curves (AUCs) ranging from 0.61 to 0.78 to classify between CR and NR cases. Using a feature fusion method, the maximum AUC = 0.85 ± 0.05. Using the ANN-based classifier, AUC value significantly increased to 0.96 ± 0.03 (p < 0.01). Conclusions: This study demonstrated that quantitative analysis of kinetic image features computed from breast MR images acquired prechemotherapy has potential to generate a useful clinical marker in predicting tumor response to chemotherapy.

  7. Towards early software reliability prediction for computer forensic tools (case study).

    Science.gov (United States)

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
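
    The Markov-chain reliability idea can be illustrated with a toy absorbing chain: given a transition matrix over a tool's processing states, the probability of reaching the success state gives a reliability estimate. The states and probabilities below are invented for illustration and are not taken from the cited case study.

        import numpy as np

        # Hypothetical 4-state discrete-time Markov chain for a component-based tool:
        # states = [acquire, parse, report, failure]; row i gives transition probabilities.
        P = np.array([
            [0.00, 0.97, 0.00, 0.03],   # acquire -> parse or fail
            [0.00, 0.00, 0.95, 0.05],   # parse   -> report or fail
            [0.00, 0.00, 1.00, 0.00],   # report  (absorbing success state)
            [0.00, 0.00, 0.00, 1.00],   # failure (absorbing failure state)
        ])

        # Absorption probabilities for the transient states {acquire, parse}.
        Q = P[:2, :2]                    # transient-to-transient block
        R = P[:2, 2:]                    # transient-to-absorbing block
        N = np.linalg.inv(np.eye(2) - Q) # fundamental matrix
        B = N @ R                        # absorption probabilities
        print(f"Reliability (success probability from start): {B[0, 0]:.3f}")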

  8. Computational prediction and molecular confirmation of Helitron transposons in the maize genome

    Directory of Open Access Journals (Sweden)

    He Limei

    2008-01-01

    Full Text Available Abstract Background Helitrons represent a new class of transposable elements recently uncovered in plants and animals. One remarkable feature of Helitrons is their ability to capture gene sequences, which makes them of considerable potential evolutionary importance. However, because Helitrons lack the typical structural features of other DNA transposable elements, identifying them is a challenge. Currently, most researchers identify Helitrons manually by comparing sequences. With the maize whole genome sequencing project underway, an automated computational Helitron searching tool is needed. The characterization of Helitron activities in maize needs to be addressed in order to better understand the impact of Helitrons on the organization of the genome. Results We developed and implemented a heuristic searching algorithm in PERL for identifying Helitrons. Our HelitronFinder program will (i) take FASTA-formatted DNA sequences as input and identify the hairpin looping patterns, and (ii) exploit the consensus 5' and 3' end sequences of known Helitrons to identify putative ends. We randomly selected five predicted Helitrons from the program's high quality output for molecular verification. Four out of the five predicted Helitrons were confirmed by PCR assays and DNA sequencing in different maize inbred lines. The HelitronFinder program identified two head-to-head dissimilar Helitrons in a maize BAC sequence. Conclusion We have identified 140 new Helitron candidates in maize with our computational tool HelitronFinder by searching maize DNA sequences currently available in GenBank. Four out of five candidates were confirmed to be real by empirical methods, thus validating the predictions of HelitronFinder. Additional points to emerge from our study are that Helitrons do not always insert at an AT dinucleotide in the host sequences, that they can insert immediately adjacent to an existing Helitron, and that their movement may cause changes in the flanking

  9. Computational intelligence models to predict porosity of tablets using minimum features

    Directory of Open Access Journals (Sweden)

    Khalid MH

    2017-01-01

    Full Text Available The effects of different formulations and manufacturing process conditions on the physical properties of a solid dosage form are of importance to the pharmaceutical industry. It is vital to have in-depth understanding of the material properties and governing parameters of its processes in response to different formulations. Understanding the mentioned aspects will allow tighter control of the process, leading to implementation of quality-by-design (QbD) practices. Computational intelligence (CI) offers an opportunity to create empirical models that can be used to describe the system and predict future outcomes in silico. CI models can help explore the behavior of input parameters, unlocking deeper understanding of the system. This research endeavor presents CI models to predict the porosity of tablets created by roll-compacted binary mixtures, which were milled and compacted under systematically varying conditions. CI models were created using tree-based methods, artificial neural networks (ANNs), and symbolic regression trained on an experimental data set and screened using root-mean-square error (RMSE) scores. The experimental data were composed of proportion of microcrystalline cellulose (MCC) (in percentage), granule size fraction (in micrometers), and die compaction force (in kilonewtons) as inputs and porosity as an output. The resulting models show impressive generalization ability, with ANNs (normalized root-mean-square error [NRMSE] = 1%) and symbolic regression (NRMSE = 4%) as the best-performing methods, also exhibiting reliable predictive
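
    A sketch of one of the described model families: a small neural-network regressor trained on the three inputs and scored by NRMSE. All data are simulated and the network architecture is arbitrary, so this is only an outline of the workflow, not the study's models.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split

        # Simulated inputs: MCC proportion (%), granule size fraction (um), compaction force (kN).
        rng = np.random.default_rng(7)
        X = np.column_stack([rng.uniform(0, 100, 300),
                             rng.choice([90.0, 250.0, 500.0], 300),
                             rng.uniform(5, 25, 300)])
        y = 0.35 - 0.008 * X[:, 2] + 0.0005 * X[:, 0] + rng.normal(0, 0.01, 300)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        ann = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0))
        pred = ann.fit(X_tr, y_tr).predict(X_te)
        nrmse = np.sqrt(np.mean((y_te - pred) ** 2)) / (y_te.max() - y_te.min())
        print(f"NRMSE on held-out tablets (toy data): {100 * nrmse:.1f}%")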

  10. PREDICTING ATTENUATION OF VIRUSES DURING PERCOLATION IN SOILS: 2. USER'S GUIDE TO THE VIRULO 1.0 COMPUTER MODEL

    Science.gov (United States)

    In the EPA document Predicting Attenuation of Viruses During Percolation in Soils 1. Probabilistic Model the conceptual, theoretical, and mathematical foundations for a predictive screening model were presented. In this current volume we present a User's Guide for the computer mo...

  11. A comparison of computational models with and without genotyping for prediction of response to second-line HIV therapy

    NARCIS (Netherlands)

    Revell, A. D.; Boyd, M. A.; Wang, D.; Emery, S.; Gazzard, B.; Reiss, P.; van Sighem, A. I.; Montaner, J. S.; Lane, H. C.; Larder, B. A.

    2014-01-01

    We compared the use of computational models developed with and without HIV genotype vs. genotyping itself to predict effective regimens for patients experiencing first-line virological failure. Two sets of models predicted virological response for 99 three-drug regimens for patients on a failing

  12. Impact of cone-beam computed tomography on implant planning and on prediction of implant size

    Energy Technology Data Exchange (ETDEWEB)

    Pedroso, Ludmila Assuncao de Mello; Silva, Maria Alves Garcia Santos, E-mail: ludmilapedroso@hotmail.com [Universidade Federal de Goias (UFG), Goiania, GO (Brazil). Fac. de Odontologia; Garcia, Robson Rodrigues [Universidade Federal de Goias (UFG), Goiania, GO (Brazil). Fac. de Odontologia. Dept. de Medicina Oral; Leles, Jose Luiz Rodrigues [Universidade Paulista (UNIP), Goiania, GO (Brazil). Fac. de Odontologia. Dept. de Cirurgia; Leles, Claudio Rodrigues [Universidade Federal de Goias (UFG), Goiania, GO (Brazil). Fac. de Odontologia. Dept. de Prevencao e Reabilitacao Oral

    2013-11-15

    The aim was to investigate the impact of cone-beam computed tomography (CBCT) on implant planning and on prediction of final implant size. Consecutive patients referred for implant treatment were submitted to clinical examination, panoramic (PAN) radiography and a CBCT exam. Initial planning of implant length and width was assessed based on clinical and PAN exams, and final planning, on CBCT exam to complement diagnosis. The actual dimensions of the implants placed during surgery were compared with those obtained during initial and final planning, using the McNemar test (p < 0.05). The final sample comprised 95 implants in 27 patients, distributed over the maxilla and mandible. Agreement in implant length was 50.5% between initial and final planning, and correct prediction of the actual implant length was 40.0% and 69.5%, using PAN and CBCT exams, respectively. Agreement in implant width assessment ranged from 69.5% to 73.7%. A paired comparison of the frequency of changes between initial or final planning and implant placement (McNemar test) showed greater frequency of changes in initial planning for implant length (p < 0.001), but not for implant width (p = 0.850). The frequency of changes was not influenced by implant location at any stage of implant planning (chi-square test, p > 0.05). It was concluded that CBCT improves the ability of predicting the actual implant length and reduces inaccuracy in surgical dental implant planning. (author)
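
    The paired comparison described above can be reproduced in outline with a McNemar test on a 2x2 table of agreement between planning stages; the counts below are made up purely to show the call, not taken from the study.

        import numpy as np
        from statsmodels.stats.contingency_tables import mcnemar

        # Hypothetical paired 2x2 table for the 95 implants:
        # rows = initial (PAN-based) plan changed / unchanged at surgery,
        # cols = final (CBCT-based) plan changed / unchanged at surgery.
        table = np.array([[20, 36],
                          [10, 29]])
        result = mcnemar(table, exact=True)
        print(f"McNemar p-value (toy counts): {result.pvalue:.4f}")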

  13. COBRA: A Computational Brewing Application for Predicting the Molecular Composition of Organic Aerosols

    Energy Technology Data Exchange (ETDEWEB)

    Fooshee, David R.; Nguyen, Tran B.; Nizkorodov, Sergey A.; Laskin, Julia; Laskin, Alexander; Baldi, Pierre

    2012-05-08

    Atmospheric organic aerosols (OA) represent a significant fraction of airborne particulate matter and can impact climate, visibility, and human health. These mixtures are difficult to characterize experimentally due to the enormous complexity and dynamic nature of their chemical composition. We introduce a novel Computational Brewing Application (COBRA) and apply it to modeling oligomerization chemistry stemming from condensation and addition reactions of monomers pertinent to secondary organic aerosol (SOA) formed by photooxidation of isoprene. COBRA uses two lists as input: a list of chemical structures comprising the molecular starting pool, and a list of rules defining potential reactions between molecules. Reactions are performed iteratively, with products of all previous iterations serving as reactants for the next one. The simulation generated thousands of molecular structures in the mass range of 120-500 Da, and correctly predicted ~70% of the individual SOA constituents observed by high-resolution mass spectrometry (HR-MS). Selected predicted structures were confirmed with tandem mass spectrometry. Esterification and hemiacetal formation reactions were shown to play the most significant role in oligomer formation, whereas aldol condensation was shown to be insignificant. COBRA is not limited to atmospheric aerosol chemistry, but is broadly applicable to the prediction of reaction products in other complex mixtures for which reasonable reaction mechanisms and seed molecules can be supplied by experimental or theoretical methods.
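
    The iterative "brewing" loop can be sketched abstractly: start from a monomer pool, apply a condensation rule (combine two pool members and lose water) for several iterations, and keep products in the observed mass range. The sketch tracks exact masses only; real use would require structure-aware chemistry, and the monomer formulas and masses below are illustrative rather than the paper's inputs.

        from itertools import product

        # Toy stand-in for an iterative rule-based "brew": molecules are represented
        # only by exact mass, and a single water-loss condensation rule combines any
        # two pool members on each iteration.
        H2O = 18.0106
        monomers = {"C5H10O3": 118.0630, "C5H8O2": 100.0524, "C4H8O3": 104.0473}

        pool = dict(monomers)
        for iteration in range(3):                      # three brewing iterations
            new = {}
            for (n1, m1), (n2, m2) in product(pool.items(), repeat=2):
                name, mass = f"({n1})+({n2})-H2O", m1 + m2 - H2O
                if 120.0 <= mass <= 500.0:              # keep the observed SOA mass range
                    new[name] = round(mass, 4)
            pool.update(new)
        print(f"{len(pool)} candidate products after 3 iterations")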

  14. IAMBUS, a computer code for the design and performance prediction of fast breeder fuel rods

    International Nuclear Information System (INIS)

    Toebbe, H.

    1990-05-01

    IAMBUS is a computer code for the thermal and mechanical design, in-pile performance prediction and post-irradiation analysis of fast breeder fuel rods. The code deals with steady, non-steady and transient operating conditions and enables to predict in-pile behavior of fuel rods in power reactors as well as in experimental rigs. Great effort went into the development of a realistic account of non-steady fuel rod operating conditions. The main emphasis is placed on characterizing the mechanical interaction taking place between the cladding tube and the fuel as a result of contact pressure and friction forces, with due consideration of axial and radial crack configuration within the fuel as well as the gradual transition at the elastic/plastic interface in respect to fuel behavior. IAMBUS can be readily adapted to various fuel and cladding materials. The specific models and material correlations of the reference version deal with the actual in-pile behavior and physical properties of the KNK II and SNR 300 related fuel rod design, confirmed by comparison of the fuel performance model with post-irradiation data. The comparison comprises steady, non-steady and transient irradiation experiments within the German/Belgian fuel rod irradiation program. The code is further validated by comparison of model predictions with post-irradiation data of standard fuel and breeder rods of Phenix and PFR as well as selected LWR fuel rods in non-steady operating conditions

  15. Computational Prediction of Atomic Structures of Helical Membrane Proteins Aided by EM Maps

    Science.gov (United States)

    Kovacs, Julio A.; Yeager, Mark; Abagyan, Ruben

    2007-01-01

    Integral membrane proteins pose a major challenge for protein-structure prediction because only ≈100 high-resolution structures are available currently, thereby impeding the development of rules or empirical potentials to predict the packing of transmembrane α-helices. However, when an intermediate-resolution electron microscopy (EM) map is available, it can be used to provide restraints which, in combination with a suitable computational protocol, make structure prediction feasible. In this work we present such a protocol, which proceeds in three stages: 1), generation of an ensemble of α-helices by flexible fitting into each of the density rods in the low-resolution EM map, spanning a range of rotational angles around the main helical axes and translational shifts along the density rods; 2), fast optimization of side chains and scoring of the resulting conformations; and 3), refinement of the lowest-scoring conformations with internal coordinate mechanics, by optimizing the van der Waals, electrostatics, hydrogen bonding, torsional, and solvation energy contributions. In addition, our method implements a penalty term through a so-called tethering map, derived from the EM map, which restrains the positions of the α-helices. The protocol was validated on three test cases: GpA, KcsA, and MscL. PMID:17496035

  16. Blinded prospective evaluation of computer-based mechanistic schizophrenia disease model for predicting drug response.

    Directory of Open Access Journals (Sweden)

    Hugo Geerts

    Full Text Available The tremendous advances in understanding the neurobiological circuits involved in schizophrenia have not translated into more effective treatments. An alternative strategy is to use a recently published 'Quantitative Systems Pharmacology' computer-based mechanistic disease model of cortical/subcortical and striatal circuits based upon preclinical physiology, human pathology and pharmacology. The physiology of 27 relevant dopamine, serotonin, acetylcholine, norepinephrine, gamma-aminobutyric acid (GABA) and glutamate-mediated targets is calibrated using retrospective clinical data on 24 different antipsychotics. The model was challenged to predict quantitatively the clinical outcome in a blinded fashion of two experimental antipsychotic drugs: JNJ37822681, a highly selective low-affinity dopamine D(2) antagonist, and ocaperidone, a very high affinity dopamine D(2) antagonist, using only pharmacology and human positron emission tomography (PET) imaging data. The model correctly predicted the lower performance of JNJ37822681 on the positive and negative syndrome scale (PANSS) total score and the higher extra-pyramidal symptom (EPS) liability compared to olanzapine and the relative performance of ocaperidone against olanzapine, but did not predict the absolute PANSS total score outcome and EPS liability for ocaperidone, possibly due to placebo responses and EPS assessment methods. Because of its virtual nature, this modeling approach can support central nervous system research and development by accounting for unique human drug properties, such as human metabolites, exposure, genotypes and off-target effects, and can be a helpful tool for drug discovery and development.

  17. Targeted intervention: Computational approaches to elucidate and predict relapse in alcoholism.

    Science.gov (United States)

    Heinz, Andreas; Deserno, Lorenz; Zimmermann, Ulrich S; Smolka, Michael N; Beck, Anne; Schlagenhauf, Florian

    2017-05-01

    Alcohol use disorder (AUD) and addiction in general is characterized by failures of choice resulting in repeated drug intake despite severe negative consequences. Behavioral change is hard to accomplish and relapse after detoxification is common and can be promoted by consumption of small amounts of alcohol as well as exposure to alcohol-associated cues or stress. While those environmental factors contributing to relapse have long been identified, the underlying psychological and neurobiological mechanism on which those factors act are to date incompletely understood. Based on the reinforcing effects of drugs of abuse, animal experiments showed that drug, cue and stress exposure affect Pavlovian and instrumental learning processes, which can increase salience of drug cues and promote habitual drug intake. In humans, computational approaches can help to quantify changes in key learning mechanisms during the development and maintenance of alcohol dependence, e.g. by using sequential decision making in combination with computational modeling to elucidate individual differences in model-free versus more complex, model-based learning strategies and their neurobiological correlates such as prediction error signaling in fronto-striatal circuits. Computational models can also help to explain how alcohol-associated cues trigger relapse: mechanisms such as Pavlovian-to-Instrumental Transfer can quantify to which degree Pavlovian conditioned stimuli can facilitate approach behavior including alcohol seeking and intake. By using generative models of behavioral and neural data, computational approaches can help to quantify individual differences in psychophysiological mechanisms that underlie the development and maintenance of AUD and thus promote targeted intervention. Copyright © 2016 Elsevier Inc. All rights reserved.
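
    For readers unfamiliar with the modeling referenced here, a minimal prediction-error (Rescorla-Wagner / temporal-difference style) value update is sketched below; the learning rate and cue statistics are arbitrary, and real analyses would fit such parameters to trial-by-trial behavioral and neural data.

        import numpy as np

        # Minimal prediction-error-driven value update of the kind used to quantify
        # model-free learning in computational psychiatry.
        rng = np.random.default_rng(8)
        alpha = 0.2                      # learning rate (individual-difference parameter)
        V = 0.0                          # learned value of an alcohol-associated cue
        reward_prob = 0.8                # probability the cue is followed by reward

        for trial in range(100):
            reward = float(rng.random() < reward_prob)
            delta = reward - V           # prediction error (putative striatal signal)
            V += alpha * delta           # update the cue's learned value
        print(f"Learned cue value after 100 trials: {V:.2f} (expected ~{reward_prob})")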

  18. Passive Stretch Induces Structural and Functional Maturation of Engineered Heart Muscle as Predicted by Computational Modeling.

    Science.gov (United States)

    Abilez, Oscar J; Tzatzalos, Evangeline; Yang, Huaxiao; Zhao, Ming-Tao; Jung, Gwanghyun; Zöllner, Alexander M; Tiburcy, Malte; Riegler, Johannes; Matsa, Elena; Shukla, Praveen; Zhuge, Yan; Chour, Tony; Chen, Vincent C; Burridge, Paul W; Karakikes, Ioannis; Kuhl, Ellen; Bernstein, Daniel; Couture, Larry A; Gold, Joseph D; Zimmermann, Wolfram H; Wu, Joseph C

    2018-02-01

    The ability to differentiate human pluripotent stem cells (hPSCs) into cardiomyocytes (CMs) makes them an attractive source for repairing injured myocardium, disease modeling, and drug testing. Although current differentiation protocols yield hPSC-CMs to >90% efficiency, hPSC-CMs exhibit immature characteristics. With the goal of overcoming this limitation, we tested the effects of varying passive stretch on engineered heart muscle (EHM) structural and functional maturation, guided by computational modeling. Human embryonic stem cells (hESCs, H7 line) or human induced pluripotent stem cells (IMR-90 line) were differentiated to hPSC-derived cardiomyocytes (hPSC-CMs) in vitro using a small molecule based protocol. hPSC-CMs were characterized by troponin+ flow cytometry as well as electrophysiological measurements. Afterwards, 1.2 × 10⁶ hPSC-CMs were mixed with 0.4 × 10⁶ human fibroblasts (IMR-90 line) (3:1 ratio) and type-I collagen. The blend was cast into custom-made 12-mm long polydimethylsiloxane reservoirs to vary nominal passive stretch of EHMs to 5, 7, or 9 mm. EHM characteristics were monitored for up to 50 days, with EHMs having a passive stretch of 7 mm giving the most consistent formation. Based on our initial macroscopic observations of EHM formation, we created a computational model that predicts the stress distribution throughout EHMs, which is a function of cellular composition, cellular ratio, and geometry. Based on this predictive modeling, we show cell alignment by immunohistochemistry and coordinated calcium waves by calcium imaging. Furthermore, coordinated calcium waves and mechanical contractions were apparent throughout entire EHMs. The stiffness and active forces of hPSC-derived EHMs are comparable with rat neonatal cardiomyocyte-derived EHMs. Three-dimensional EHMs display increased expression of mature cardiomyocyte genes including sarcomeric protein troponin-T, calcium and potassium ion channels, β-adrenergic receptors, and t

  19. Computational predictions of damage propagation preceding dissection of ascending thoracic aortic aneurysms.

    Science.gov (United States)

    Mousavi, S Jamaleddin; Farzaneh, Solmaz; Avril, Stéphane

    2018-04-01

    Dissections of ascending thoracic aortic aneurysms (ATAAs) cause significant morbidity and mortality worldwide. They occur when a tear in the intima-media of the aorta permits the penetration of blood and the subsequent delamination and separation of the wall into two layers, forming a false channel. To computationally predict the risk of tear formation, stress analyses should be performed layer-specifically and should consider the internal or residual stresses that exist in the tissue. In the present paper, we propose a novel layer-specific damage model based on the constrained mixture theory, which intrinsically takes these internal stresses into account and can appropriately predict tear formation. The model is implemented in the commercial finite-element software Abaqus coupled with a user material subroutine. Its capability is tested by applying it to the simulation of different exemplary situations, ranging from in vitro bulge inflation experiments on aortic samples to in vivo overpressurization of patient-specific ATAAs. The simulations reveal that damage correctly starts from the intimal layer (luminal side) and propagates across the media as a tear but never reaches the adventitia. This scenario is typically the first stage of development of an acute dissection, which the model predicts at pressures of about 2.5 times the diastolic pressure after calibration of its parameters against experimental data obtained on collected ATAA samples. Further validations on a larger cohort of patients should hopefully confirm the potential of the model in predicting patient-specific damage evolution and possible risk of dissection during aneurysm growth for clinical applications. Copyright © 2017 John Wiley & Sons, Ltd.

  20. A linear programming computational framework integrates phosphor-proteomics and prior knowledge to predict drug efficacy.

    Science.gov (United States)

    Ji, Zhiwei; Wang, Bing; Yan, Ke; Dong, Ligang; Meng, Guanmin; Shi, Lei

    2017-12-21

    In recent years, the integration of 'omics' technologies, high-performance computation, and mathematical modeling of biological processes marks that systems biology has started to fundamentally impact the way drug discovery is approached. The LINCS public data warehouse provides detailed information about cell responses to various genetic and environmental stressors. It can be greatly helpful in developing new drugs and therapeutics, as well as in addressing the lack of effective drugs, drug resistance, and relapse in cancer therapies. In this study, we developed a Ternary status based Integer Linear Programming (TILP) method to infer cell-specific signaling pathway networks and predict compounds' treatment efficacy. The novelty of our study is that phosphoproteomic data and prior knowledge are combined for modeling and optimizing the signaling network. To test the power of our approach, a generic pathway network was constructed for the human breast cancer cell line MCF7, and the TILP model was used to infer MCF7-specific pathways with a set of phosphoproteomic data collected for ten representative small-molecule chemical compounds (most of them studied in breast cancer treatment). Cross-validation indicated that the MCF7-specific pathway network inferred by TILP was reliable in predicting a compound's efficacy. Finally, we applied TILP to re-optimize the inferred cell-specific pathways and predict the outcomes of five small compounds (carmustine, doxorubicin, GW-8510, daunorubicin, and verapamil), which are rarely used in the clinic for breast cancer. In the simulation, the proposed approach allowed us to assess a compound's treatment efficacy both qualitatively and quantitatively, and the cross-validation analysis indicated good accuracy in predicting the effects of the five compounds. In summary, the TILP model is useful for discovering new drugs for clinical use, and also for elucidating the potential mechanisms by which a compound acts on its targets.

  1. Predicted cancer risks induced by computed tomography examinations during childhood, by a quantitative risk assessment approach.

    Science.gov (United States)

    Journy, Neige; Ancelet, Sophie; Rehel, Jean-Luc; Mezzarobba, Myriam; Aubert, Bernard; Laurier, Dominique; Bernier, Marie-Odile

    2014-03-01

    The potential adverse effects associated with exposure to ionizing radiation from computed tomography (CT) in pediatrics must be characterized in relation to their expected clinical benefits. Additional epidemiological data are, however, still awaited to provide a lifelong overview of potential cancer risks. This paper gives predictions of the potential lifetime risks of cancer incidence that would be induced by CT examinations during childhood in routine French pediatric practice. Organ doses were estimated from standard radiological protocols in 15 hospitals. Excess risks of leukemia, brain/central nervous system, breast and thyroid cancers were predicted from dose-response models estimated in the Japanese atomic bomb survivors' dataset and studies of medical exposures. Uncertainty in predictions was quantified using Monte Carlo simulations. This approach predicts that 100,000 skull/brain scans in 5-year-old children would result in eight (90 % uncertainty interval (UI) 1-55) brain/CNS cancers and four (90 % UI 1-14) cases of leukemia and that 100,000 chest scans would lead to 31 (90 % UI 9-101) thyroid cancers, 55 (90 % UI 20-158) breast cancers, and one (90 % UI risks without exposure). Compared to background risks, radiation-induced risks would be low for individuals throughout life, but relative risks would be highest in the first decades of life. Heterogeneity in the radiological protocols across the hospitals implies that 5-10 % of CT examinations would be related to risks 1.4-3.6 times higher than those for the median doses. Overall excess relative risks in exposed populations would be 1-10 % depending on the site of cancer and the duration of follow-up. The results emphasize the potential cancer risks from standard CT examinations in pediatrics and underline the necessity of optimizing radiological protocols.
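
    The Monte Carlo step mentioned above can be illustrated with a minimal sketch. The following Python code is not the authors' risk model: it simply propagates assumed lognormal uncertainty in an organ dose and in an excess-risk coefficient into a 90 % uncertainty interval on the number of excess cases per 100,000 examinations; every numerical value is an invented placeholder.

        # Illustrative sketch only: propagating uncertainty in organ dose and in an
        # excess-risk coefficient to a 90% uncertainty interval on predicted excess
        # cases per 100,000 examinations. All distributions and values are invented.
        import numpy as np

        rng = np.random.default_rng(1)
        n_sim = 100_000

        # Hypothetical organ dose per examination (mGy); the lognormal spread stands
        # in for protocol heterogeneity across hospitals
        organ_dose_mgy = rng.lognormal(mean=np.log(30.0), sigma=0.3, size=n_sim)

        # Hypothetical excess cases per 100,000 examinations per mGy of organ dose,
        # with wide uncertainty on the dose-response coefficient
        risk_per_100k_per_mgy = rng.lognormal(mean=np.log(0.17), sigma=0.8, size=n_sim)

        excess_cases_per_100k = organ_dose_mgy * risk_per_100k_per_mgy

        lo, med, hi = np.percentile(excess_cases_per_100k, [5, 50, 95])
        print(f"predicted excess cases per 100,000 examinations: "
              f"{med:.1f} (90% UI {lo:.1f}-{hi:.1f})")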

  2. Computer model predicting breakthrough febrile urinary tract infection in children with primary vesicoureteral reflux.

    Science.gov (United States)

    Arlen, Angela M; Alexander, Siobhan E; Wald, Moshe; Cooper, Christopher S

    2016-10-01

    Factors influencing the decision to surgically correct vesicoureteral reflux (VUR) include the risk of breakthrough febrile urinary tract infection (fUTI) or renal scarring, and a decreased likelihood of spontaneous resolution. Improved identification of children at risk for recurrent fUTI may impact management decisions and allow for more individualized VUR management. We developed and investigated the accuracy of a multivariable computational model to predict the probability of breakthrough fUTI in children with primary VUR. Children with primary VUR and detailed clinical and voiding cystourethrogram (VCUG) data were identified. Patient demographics, VCUG findings including grade, laterality, and bladder volume at onset of VUR, UTI history, presence of bladder-bowel dysfunction (BBD), and breakthrough fUTI were assessed. The VCUG dataset was randomized into a training set of 288 with a separate representative cross-validation set of 96. Various model types and architectures were investigated using neUROn++, a set of C++ programs. Two hundred fifty-five children (208 girls, 47 boys) diagnosed with primary VUR at a mean age of 3.1 years (±2.6) met all inclusion criteria. A total of 384 VCUGs were analyzed. Median follow-up was 24 months (interquartile range 12-52 months). Sixty-eight children (26.7%) experienced 90 breakthrough fUTI events. Dilating VUR, reflux occurring at low bladder volumes, BBD, and a history of multiple infections/fUTI were associated with breakthrough fUTI. A 2-hidden-node neural network model had the best fit, with a receiver operating characteristic curve area of 0.755 for predicting breakthrough fUTI. The risk of recurrent febrile infections, renal parenchymal scarring, and likelihood of spontaneous resolution, as well as parental preference, all influence management of primary VUR. The genesis of UTI is multifactorial, making precise prediction of an individual child's risk of breakthrough fUTI challenging. Demonstrated risk factors for
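
    As a rough illustration of the modeling step described above (a small feed-forward network evaluated by the area under the ROC curve), the following Python/scikit-learn sketch trains a 2-hidden-node network on simulated risk factors; the study itself used the C++ package neUROn++ and real VCUG data, so everything below, including the feature definitions and coefficients, is an assumption for demonstration only.

        # Sketch with synthetic data (not the study's dataset or software): a small
        # 2-hidden-node neural network predicting breakthrough febrile UTI from a few
        # risk factors, evaluated by the area under the ROC curve.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)
        n = 384  # same order as the number of VCUGs analysed; data below are simulated

        X = np.column_stack([
            rng.integers(1, 6, n),          # reflux grade (1-5)
            rng.random(n),                  # bladder volume fraction at onset of VUR
            rng.integers(0, 2, n),          # bladder-bowel dysfunction (0/1)
            rng.integers(0, 4, n),          # number of prior febrile UTIs
        ])
        # Simulated outcome loosely driven by the same factors
        logit = -4 + 0.6 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] + 0.5 * X[:, 3]
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=96, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(2,), max_iter=5000, random_state=0)
        clf.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"ROC AUC on held-out set: {auc:.3f}")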

  3. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng

    2018-02-06

    Experimental determination of membrane protein (MP) structures is challenging, as they are often too large for nuclear magnetic resonance (NMR) experiments and difficult to crystallize. Currently there are only about 510 non-redundant MPs with solved structures in the Protein Data Bank (PDB). To elucidate MP structures computationally, we developed a novel web resource, denoted PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology and secondary structure, two-dimensional (2D) prediction of the contact/distance map, together with three-dimensional (3D) modeling of the MP structure in the lipid bilayer, for each MP target from a given model organism. The precision of the computationally constructed MP structures is achieved by leveraging state-of-the-art deep learning methods as well as cutting-edge modeling strategies. In particular, (i) we annotate 1D properties via DeepCNF (Deep Convolutional Neural Fields), which models not only the complex sequence-structure relationship but also the interdependency between adjacent property labels; (ii) we predict the 2D contact/distance map through Deep Transfer Learning, which learns the patterns as well as the complex relationship between contacts/distances and protein features from non-membrane proteins; and (iii) we model the 3D structure by feeding the predicted contacts and secondary structure to the Crystallography & NMR System (CNS) suite combined with a membrane burial potential that is residue-specific and depth-dependent. PredMP currently contains more than 2,200 multi-pass transmembrane proteins (length < 700 residues) from Human. These transmembrane proteins are classified according to the IUPHAR/BPS Guide, which provides a hierarchical organization of receptors, channels, transporters, enzymes and other drug targets according to their molecular relationships and physiological functions. Among these MPs, we estimated that our approach could predict correct folds for 1

  4. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values, because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead due to the consideration of multiple computer models that is suitable for the mixture-model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.
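
    A drastically simplified, non-Bayesian sketch of the central idea, representing the observed output as an input-dependent weighted mixture of two computer model outputs, is given below. The two toy models, the logistic weight function, and the least-squares fit are all illustrative assumptions; the actual method is fully Bayesian and additionally calibrates the models' own parameters.

        # Greatly simplified, non-Bayesian sketch of the mixture idea: the observed
        # output is modelled as w(x)*f1(x) + (1-w(x))*f2(x) with a logistic weight
        # function of the input, fitted by least squares. Models and data are invented.
        import numpy as np
        from scipy.optimize import least_squares

        def f1(x):  # hypothetical computer model 1 (accurate at small x)
            return np.sin(x)

        def f2(x):  # hypothetical computer model 2 (accurate at large x)
            return 0.9 * x - 0.15 * x**2

        rng = np.random.default_rng(3)
        x_obs = np.linspace(0, 3, 40)
        y_obs = np.where(x_obs < 1.5, f1(x_obs), f2(x_obs))
        y_obs = y_obs + rng.normal(0, 0.02, x_obs.size)   # noisy "field" observations

        def weight(params, x):                            # input-dependent mixture weight
            a, b = params
            return 1.0 / (1.0 + np.exp(-(a + b * x)))

        def residuals(params):
            w = weight(params, x_obs)
            return w * f1(x_obs) + (1 - w) * f2(x_obs) - y_obs

        fit = least_squares(residuals, x0=[0.0, 0.0])
        print("fitted weight-function parameters:", fit.x)
        print("weight on model 1 at x=0.5 and x=2.5:",
              weight(fit.x, np.array([0.5, 2.5])))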

  5. Rationale and design of the PREDICT (Plaque Registration and Evaluation Detected In Computed Tomography) registry.

    Science.gov (United States)

    Yamamoto, Hideya; Awai, Kazuo; Kuribayashi, Sachio; Kihara, Yasuki

    2014-01-01

    At least two-thirds of cases of acute coronary syndrome are caused by disruption of an atherosclerotic plaque. The natural history of individual plaques is unknown and needs to be established. The Plaque Registration and Evaluation Detected In Computed Tomography (PREDICT) registry is a prospective, multicenter, longitudinal, observational registry. This registry was designed to examine the relationships among coronary CT angiography (CTA) findings and clinical findings, mortality, and morbidity. The relationships among progression of coronary atherosclerosis (including changes in plaque characteristics on coronary CTA), serum lipid levels, and modification of coronary risk factors will also be evaluated. From October 2009 to December 2012, 3015 patients who underwent coronary CTA in 29 centers in Japan were enrolled. These patients were followed for 2 years. The primary end points were all-cause mortality and major cardiac events, including cardiac death, nonfatal myocardial infarction, and unstable angina requiring hospitalization. The secondary end points were heart failure requiring administration of diuretics, target vessel revascularization, cerebral infarction, peripheral arterial disease, and invasive coronary angiography. Blood pressure, serum lipid, and C-reactive protein levels and all cardiovascular events were recorded at 1 and 2 years. If the initial coronary CTA showed any stenosis or plaques, follow-up coronary CTA was scheduled at 2 years to determine changes in coronary lesions, including changes in plaque characteristics. Analysis of the PREDICT registry data will clarify the relationships between coronary CTA findings and cardiovascular mortality and morbidity in a collaborative multicenter fashion. This trial is registered at www.clinicaltrials.gov as NCT00991835. Copyright © 2014 Society of Cardiovascular Computed Tomography. All rights reserved.

  6. Operational mesoscale atmospheric dispersion prediction using high performance parallel computing cluster for emergency response

    International Nuclear Information System (INIS)

    Srinivas, C.V.; Venkatesan, R.; Muralidharan, N.V.; Das, Someshwar; Dass, Hari; Eswara Kumar, P.

    2005-08-01

    An operational atmospheric dispersion prediction system has been implemented on a cluster supercomputer for online emergency response at the Kalpakkam nuclear site. The numerical system constitutes a parallel version of a nested-grid mesoscale meteorological model, MM5, coupled to a random walk particle dispersion model, FLEXPART. The system provides a 48-hour forecast of the local weather and of radioactive plume dispersion due to hypothetical airborne releases within a range of 100 km around the site. The parallel code was implemented on different cluster configurations, such as distributed and shared memory systems. Results of MM5 runtime performance for a 1-day prediction are reported for all the machines available for testing. A fivefold reduction in runtime is achieved using 9 dual-Xeon nodes (18 physical/36 logical processors) compared with a single-node sequential run. Based on these runtime results, a cluster computing facility with 9 dual-Xeon nodes was commissioned at IGCAR for model operation. The runtime of a triple-nested-domain MM5 is about 4 h for a 24 h forecast. The system has been operated continuously for a few months and results were posted on the IMSc home page. Initial and periodic boundary condition data for MM5 are provided by NCMRWF, New Delhi. An alternative source is NCEP, USA. These two sources provide the input data to the operational models at different spatial and temporal resolutions and using different assimilation methods. A comparative study of the forecast results obtained from these two data sources is presented for present operational use. A slight improvement is noticed in rainfall, winds, geopotential heights and the vertical atmospheric structure when using NCEP data, probably because of its higher spatial and temporal resolution. (author)

  7. APPLICATION OF SOFT COMPUTING TECHNIQUES FOR PREDICTING COOLING TIME REQUIRED DROPPING INITIAL TEMPERATURE OF MASS CONCRETE

    Directory of Open Access Journals (Sweden)

    Santosh Bhattarai

    2017-07-01

    Full Text Available Minimizing thermal cracks in mass concrete at an early age can be achieved by removing the hydration heat as quickly as possible within the initial cooling period, before the next lift is placed. Knowing the time needed to remove the hydration heat within the initial cooling period helps in making an effective and efficient decision on the temperature control plan in advance. The thermal properties of the concrete, the water cooling parameters, and the construction parameter are the most influential factors involved in the process, and the relationships between these parameters are non-linear, complicated, and not well understood. Some attempts have been made to understand and formulate the relationship taking account of the thermal properties of concrete and the cooling water parameters. Thus, in this study, an effort has been made to formulate the relationship taking account of the thermal properties of concrete, the water cooling parameters, and the construction parameter, with the help of two soft computing techniques, namely Genetic Programming (GP, using the software “Eureqa”) and an Artificial Neural Network (ANN). Relationships were developed from data available from a recently constructed high double-curvature concrete arch dam. The values of R for the relationship between the predicted and real cooling times from the GP and ANN models are 0.8822 and 0.9146, respectively. The relative impact of the input parameters on the target parameter was evaluated through sensitivity analysis, and the results reveal that the construction parameter influences the target parameter significantly. Furthermore, during the testing phase of the proposed models with an independent set of data, the absolute and relative errors were significantly low, which indicates that the prediction power of the employed soft computing techniques is satisfactory compared with the measured data.

  8. LTRsift: a graphical user interface for semi-automatic classification and postprocessing of de novo detected LTR retrotransposons

    Directory of Open Access Journals (Sweden)

    Steinbiss Sascha

    2012-11-01

    Full Text Available Abstract Background Long terminal repeat (LTR) retrotransposons are a class of eukaryotic mobile elements characterized by a distinctive sequence similarity-based structure. Hence they are well suited for computational identification. Current software allows for a comprehensive genome-wide de novo detection of such elements. The obvious next step is the classification of newly detected candidates resulting in (super-)families. Such a de novo classification approach based on sequence-based clustering of transposon features has been proposed before, resulting in a preliminary assignment of candidates to families as a basis for subsequent manual refinement. However, such a classification workflow is typically split across a heterogeneous set of glue scripts and generic software (for example, spreadsheets), making it tedious for a human expert to inspect, curate and export the putative families produced by the workflow. Results We have developed LTRsift, an interactive graphical software tool for semi-automatic postprocessing of de novo predicted LTR retrotransposon annotations. Its user-friendly interface offers customizable filtering and classification functionality, displaying the putative candidate groups, their members and their internal structure in a hierarchical fashion. To ease manual work, it also supports graphical user interface-driven reassignment, splitting and further annotation of candidates. Export of grouped candidate sets in standard formats is possible. In two case studies, we demonstrate how LTRsift can be employed in the context of a genome-wide LTR retrotransposon survey effort. Conclusions LTRsift is a useful and convenient tool for semi-automated classification of newly detected LTR retrotransposons based on their internal features. Its efficient implementation allows for convenient and seamless filtering and classification in an integrated environment. Developed for life scientists, it is helpful in postprocessing and refining

  9. LTRsift: a graphical user interface for semi-automatic classification and postprocessing of de novo detected LTR retrotransposons.

    Science.gov (United States)

    Steinbiss, Sascha; Kastens, Sascha; Kurtz, Stefan

    2012-11-07

    Long terminal repeat (LTR) retrotransposons are a class of eukaryotic mobile elements characterized by a distinctive sequence similarity-based structure. Hence they are well suited for computational identification. Current software allows for a comprehensive genome-wide de novo detection of such elements. The obvious next step is the classification of newly detected candidates resulting in (super-)families. Such a de novo classification approach based on sequence-based clustering of transposon features has been proposed before, resulting in a preliminary assignment of candidates to families as a basis for subsequent manual refinement. However, such a classification workflow is typically split across a heterogeneous set of glue scripts and generic software (for example, spreadsheets), making it tedious for a human expert to inspect, curate and export the putative families produced by the workflow. We have developed LTRsift, an interactive graphical software tool for semi-automatic postprocessing of de novo predicted LTR retrotransposon annotations. Its user-friendly interface offers customizable filtering and classification functionality, displaying the putative candidate groups, their members and their internal structure in a hierarchical fashion. To ease manual work, it also supports graphical user interface-driven reassignment, splitting and further annotation of candidates. Export of grouped candidate sets in standard formats is possible. In two case studies, we demonstrate how LTRsift can be employed in the context of a genome-wide LTR retrotransposon survey effort. LTRsift is a useful and convenient tool for semi-automated classification of newly detected LTR retrotransposons based on their internal features. Its efficient implementation allows for convenient and seamless filtering and classification in an integrated environment. Developed for life scientists, it is helpful in postprocessing and refining the output of software for predicting LTR

  10. Computer Simulation in Predicting Biochemical Processes and Energy Balance at WWTPs

    Science.gov (United States)

    Drewnowski, Jakub; Zaborowska, Ewa; Hernandez De Vega, Carmen

    2018-02-01

    Nowadays, the use of mathematical models and computer simulation allows analysis of many different technological solutions as well as testing of various scenarios in a short time and at low financial cost, in order to simulate a scenario under conditions typical for the real system and help find the best solution in the design or operation process. The aim of the study was to evaluate different concepts of biochemical process and energy balance modelling using the simulation platform GPS-x and the comprehensive model Mantis2. The paper presents an example of the calibration and validation processes in the biological reactor, as well as scenarios showing the influence of operational parameters on the WWTP energy balance. The results of batch tests and a full-scale campaign obtained in earlier work were used to predict biochemical and operational parameters in a newly developed plant model. The model was extended with sludge treatment devices, including an anaerobic digester. Primary sludge removal efficiency was found to be a significant factor determining biogas production and further renewable energy production in cogeneration. Water and wastewater utilities, which run and control WWTPs, are interested in optimizing the process in order to protect the environment, save money, and decrease pollutant emissions to water and air. In this context, computer simulation can be the easiest and a very useful tool to improve efficiency without interfering with the actual process performance.

  11. FIRAC - a computer code to predict fire accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Foster, R.D.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire. A basic material transport capability that features the effects of convection, deposition, entrainment, and filtration of material is included. The interrelated effects of filter plugging, heat transfer, gas dynamics, and material transport are taken into account. In this paper the authors summarize the physical models used to describe the gas dynamics, material transport, and heat transfer processes. They also illustrate how a typical facility is modeled using the code

  12. Computer Simulation in Predicting Biochemical Processes and Energy Balance at WWTPs

    Directory of Open Access Journals (Sweden)

    Drewnowski Jakub

    2018-01-01

    Full Text Available Nowadays, the use of mathematical models and computer simulation allows analysis of many different technological solutions as well as testing of various scenarios in a short time and at low financial cost, in order to simulate a scenario under conditions typical for the real system and help find the best solution in the design or operation process. The aim of the study was to evaluate different concepts of biochemical process and energy balance modelling using the simulation platform GPS-x and the comprehensive model Mantis2. The paper presents an example of the calibration and validation processes in the biological reactor, as well as scenarios showing the influence of operational parameters on the WWTP energy balance. The results of batch tests and a full-scale campaign obtained in earlier work were used to predict biochemical and operational parameters in a newly developed plant model. The model was extended with sludge treatment devices, including an anaerobic digester. Primary sludge removal efficiency was found to be a significant factor determining biogas production and further renewable energy production in cogeneration. Water and wastewater utilities, which run and control WWTPs, are interested in optimizing the process in order to protect the environment, save money, and decrease pollutant emissions to water and air. In this context, computer simulation can be the easiest and a very useful tool to improve efficiency without interfering with the actual process performance.

  13. High Altitude Balloon Flight Path Prediction and Site Selection Based On Computer Simulations

    Science.gov (United States)

    Linford, Joel

    2010-10-01

    Interested in the upper atmosphere, the Weber State University Physics Department has developed the High Altitude Reconnaissance Balloon for Outreach and Research team, also known as HARBOR. HARBOR enables Weber State University to take a variety of measurements from ground level to altitudes as high as 100,000 feet. The flight paths of these balloons can extend as far as 100 miles from the launch zone, making the choice of where and when to fly critical. To ensure that the packages can be recovered in a reasonable amount of time, days and times are carefully selected using computer simulations that limit flight tracks to approximately 40 miles from the launch zone. The computer simulations use atmospheric data collected by the National Oceanic and Atmospheric Administration (NOAA) to plot what flights would have looked like in the past and to predict future flights. Using these simulations, a launch zone has been selected in Duchesne, Utah, which has hosted eight successful flights over the course of the last three years, all of which have been recovered. Several secondary launch zones in western Wyoming, southern Idaho, and northern Utah are also being considered.
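
    The core of such a flight-path simulation is a forward integration of horizontal drift through altitude-dependent wind layers. The toy Python sketch below is not the HARBOR code: the wind profile, ascent rate, and burst altitude are made-up placeholders, and the descent leg is omitted.

        # Toy drift prediction (not the HARBOR code): integrate horizontal displacement
        # while the balloon ascends through altitude layers with known wind vectors.
        # The wind profile, ascent rate, and burst altitude are made-up placeholders.
        import numpy as np

        ascent_rate = 5.0            # m/s, assumed constant
        burst_alt = 30_000.0         # m (roughly 100,000 ft)
        dt = 10.0                    # s

        # Hypothetical wind profile: (altitude_m, wind_east_mps, wind_north_mps)
        profile = np.array([
            [0,      3.0,  1.0],
            [5000,   8.0,  2.0],
            [10000, 20.0,  5.0],
            [15000, 30.0,  0.0],
            [20000, 15.0, -5.0],
            [30000,  5.0, -2.0],
        ])

        def wind_at(alt):
            """Linearly interpolate the wind vector at a given altitude."""
            east = np.interp(alt, profile[:, 0], profile[:, 1])
            north = np.interp(alt, profile[:, 0], profile[:, 2])
            return east, north

        alt, x_east, y_north = 0.0, 0.0, 0.0
        while alt < burst_alt:
            east, north = wind_at(alt)
            x_east += east * dt
            y_north += north * dt
            alt += ascent_rate * dt

        print(f"predicted drift at burst: {x_east/1609.34:.1f} mi east, "
              f"{y_north/1609.34:.1f} mi north")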

  14. Predicting Forearm Physical Exposures During Computer Work Using Self-Reports, Software-Recorded Computer Usage Patterns, and Anthropometric and Workstation Measurements.

    Science.gov (United States)

    Huysmans, Maaike A; Eijckelhof, Belinda H W; Garza, Jennifer L Bruno; Coenen, Pieter; Blatter, Birgitte M; Johnson, Peter W; van Dieën, Jaap H; van der Beek, Allard J; Dennerlein, Jack T

    2017-12-15

    Alternative techniques to assess physical exposures, such as prediction models, could facilitate more efficient epidemiological assessments in future large cohort studies examining physical exposures in relation to work-related musculoskeletal symptoms. The aim of this study was to evaluate two types of models that predict arm-wrist-hand physical exposures (i.e. muscle activity, wrist postures and kinematics, and keyboard and mouse forces) during computer use, which differed only with respect to the candidate predictor variables: (i) a full set of predictor variables, including self-reported factors, software-recorded computer usage patterns, and worksite measurements of anthropometrics and workstation set-up (full models); and (ii) a practical set of predictor variables, including only the self-reported factors and software-recorded computer usage patterns, which are relatively easy to assess (practical models). Prediction models were built using data from a field study among 117 office workers who were symptom-free at the time of measurement. Arm-wrist-hand physical exposures were measured for approximately two hours while workers performed their own computer work. Each worker's anthropometry and workstation set-up were measured by an experimenter, computer usage patterns were recorded using software, and self-reported factors (including individual factors, job characteristics, computer work behaviours, psychosocial factors, workstation set-up characteristics, and leisure-time activities) were collected by an online questionnaire. We determined the predictive quality of the models in terms of R2 and root mean squared (RMS) values and, for the practical models only, exposure classification agreement with low-, medium-, and high-exposure categories. The full models had R2 values that ranged from 0.16 to 0.80, whereas for the practical models values ranged from 0.05 to 0.43. Interquartile ranges were not that different for the two models, indicating that only for some
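
    The comparison of a full and a practical predictor set by R2 and RMS values can be illustrated on synthetic data. The Python/scikit-learn sketch below is not the study's analysis: the predictor blocks, coefficients, and noise level are invented so that the full model should, by construction, outperform the practical one.

        # Synthetic illustration (not the study data): compare a "full" and a
        # "practical" predictor set for one exposure outcome using R^2 and RMS error.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score, mean_squared_error

        rng = np.random.default_rng(4)
        n = 117                                   # number of workers in the field study
        self_report = rng.random((n, 3))          # practical predictors (questionnaire)
        usage = rng.random((n, 2))                # practical predictors (software logs)
        worksite = rng.random((n, 2))             # extra predictors (anthropometry, workstation)

        # Simulated exposure depends on all three blocks, so the full model should win
        y = (self_report @ [0.5, 0.2, 0.1] + usage @ [0.8, 0.3]
             + worksite @ [0.9, 0.4] + rng.normal(0, 0.2, n))

        X_full = np.hstack([self_report, usage, worksite])
        X_prac = np.hstack([self_report, usage])

        for name, X in [("full", X_full), ("practical", X_prac)]:
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
            model = LinearRegression().fit(X_tr, y_tr)
            pred = model.predict(X_te)
            rms = mean_squared_error(y_te, pred) ** 0.5
            print(f"{name:9s} model: R2={r2_score(y_te, pred):.2f}, RMS={rms:.2f}")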

  15. Metabolic flexibility of mitochondrial respiratory chain disorders predicted by computer modelling.

    Science.gov (United States)

    Zieliński, Łukasz P; Smith, Anthony C; Smith, Alexander G; Robinson, Alan J

    2016-11-01

    Mitochondrial respiratory chain dysfunction causes a variety of life-threatening diseases affecting about 1 in 4300 adults. These diseases are genetically heterogeneous, but have the same outcome: reduced activity of mitochondrial respiratory chain complexes, causing decreased ATP production and potentially toxic accumulation of metabolites. The severity and tissue specificity of these effects vary between patients by unknown mechanisms, and treatment options are limited. So far most research has focused on the complexes themselves, and the impact on overall cellular metabolism is largely unclear. To illustrate how computer modelling can be used to better understand the potential impact of these disorders and inspire new research directions and treatments, we simulated them using a computer model of human cardiomyocyte mitochondrial metabolism containing over 300 characterised reactions and transport steps, with experimental parameters taken from the literature. Overall, simulations were consistent with patient symptoms, supporting their biological and medical significance. These simulations predicted that complex I deficiencies could be compensated using multiple pathways; that complex II deficiencies had less metabolic flexibility because they impact both the TCA cycle and the respiratory chain; and that complex III and IV deficiencies caused the greatest decreases in ATP production, with metabolic consequences that parallel hypoxia. Our study demonstrates how results from computer models can be compared to a clinical phenotype and used as a tool for hypothesis generation for subsequent experimental testing. These simulations can enhance understanding of dysfunctional mitochondrial metabolism and suggest new avenues for research into the treatment of mitochondrial disease and other areas of mitochondrial dysfunction. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Experimental predictions drawn from a computational model of sign-trackers and goal-trackers.

    Science.gov (United States)

    Lesaint, Florian; Sigaud, Olivier; Clark, Jeremy J; Flagel, Shelly B; Khamassi, Mehdi

    2015-01-01

    Gaining a better understanding of the biological mechanisms underlying the individual variation observed in response to rewards and reward cues could help to identify and treat individuals more prone to disorders of impulsive control, such as addiction. Variation in response to reward cues is captured in rats undergoing autoshaping experiments, in which the appearance of a lever precedes food delivery. Although no response is required for food to be delivered, some rats (goal-trackers) learn to approach and avidly engage the magazine until food delivery, whereas other rats (sign-trackers) come to approach and avidly engage the lever. The impulsive and often maladaptive characteristics of the latter response are reminiscent of addictive behaviour in humans. In a previous article, we developed a computational model accounting for a set of experimental data regarding sign-trackers and goal-trackers. Here we show new simulations of the model to draw experimental predictions that could help further validate or refute the model. In particular, we apply the model to new experimental protocols, such as injecting flupentixol locally into the core of the nucleus accumbens rather than systemically, and lesioning the core of the nucleus accumbens before or after conditioning. In addition, we discuss the possibility of removing the food magazine during the inter-trial interval. The predictions from this revised model will help us better understand the role of different brain regions in the behaviours expressed by sign-trackers and goal-trackers. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Predicting hollow viscus injury in blunt abdominal trauma with computed tomography.

    Science.gov (United States)

    Bhagvan, Savitha; Turai, Matthew; Holden, Andrew; Ng, Alexander; Civil, Ian

    2013-01-01

    Evaluation of blunt abdominal trauma is controversial. Computed tomography (CT) of the abdomen is commonly used but has limitations, especially in excluding hollow viscus injury in the presence of solid organ injury. This study was undertaken to determine whether CT reports alone could be used to direct operative treatment in abdominal trauma. The trauma database at Auckland City Hospital was accessed for patients who had abdominal CT and subsequent laparotomy during a five-year period. The CT scans were re-evaluated by a consultant radiologist who was blinded to the operative findings, and the CT findings were correlated with the operative findings. Between January 2002 and December 2007, 1,250 patients were evaluated for blunt abdominal injury with CT. A subset of 78 patients underwent laparotomy, and this formed the study group. The sensitivity and specificity of CT in predicting hollow viscus injury were 55.33% and 92.06%, respectively. The positive and negative predictive values were 61.53% and 89.23%, respectively. The presence of free fluid on CT was sensitive in diagnosing hollow viscus injury (90%). Specific findings for hollow viscus injury on CT were free intraperitoneal air (93%), retroperitoneal air (100%), oral contrast extravasation (100%), bowel wall defect (98%), patchy bowel enhancement (97%), and mesenteric abnormality (94%). CT alone cannot be used as a screening tool for hollow viscus injury. The decision to operate in hollow viscus injury has to be based on the mechanism of injury and clinical findings together with radiological evidence.
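
    As a worked illustration of how the reported metrics are derived, the short Python snippet below computes sensitivity, specificity, and the positive and negative predictive values from a 2 × 2 table; the cell counts are hypothetical and are not the study's actual table.

        # Worked illustration of the reported metrics: how sensitivity, specificity,
        # PPV, and NPV are derived from a 2x2 table. The counts below are hypothetical,
        # not the actual study table.
        def diagnostic_metrics(tp, fn, fp, tn):
            sensitivity = tp / (tp + fn)      # true positives among all injuries
            specificity = tn / (tn + fp)      # true negatives among all non-injuries
            ppv = tp / (tp + fp)              # injuries among positive CT reports
            npv = tn / (tn + fn)              # non-injuries among negative CT reports
            return sensitivity, specificity, ppv, npv

        # Hypothetical counts for 78 laparotomy patients
        tp, fn, fp, tn = 8, 7, 5, 58
        sens, spec, ppv, npv = diagnostic_metrics(tp, fn, fp, tn)
        print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, PPV={ppv:.1%}, NPV={npv:.1%}")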

  18. Prediction of Separation Length of Turbulent Multiphase Flow Using Radiotracer and Computational Fluid Dynamics Simulation

    International Nuclear Information System (INIS)

    Sugiharto, S.; Kurniadi, R.; Abidin, Z.; Stegowski, Z.; Furman, L.

    2013-01-01

    Multiphase flow modeling presents great challenges due to its extreme importance in various industrial and environmental applications. In the present study, the separation length of multiphase flow is examined experimentally by injection of two kinds of iodine-based radiotracer solutions into a hydrocarbon transport pipeline (HCT) having an inner diameter of 24 in (60.96 cm). The main components of the fluids in the pipeline are water 95%, crude oil 3% and gas 2%. A radiotracing experiment was carried out at a segment of pipe located far from branch points, with the assumption that stratified flow was achieved in this segment. Two radiation detectors located at 80 and 100 m from the injection point were used to generate the residence time distribution (RTD) curves resulting from injection of the radiotracer solutions. Multiphase computational fluid dynamics (CFD) simulations using an Eulerian-Eulerian control volume and the commercial CFD package Fluent 6.2 were employed to simulate the separation length of the multiphase flow. The results of the study show that the flow velocity of water is higher than that of crude oil in the water-dominated system, despite the higher density of water compared with the crude oil. The separation length of the multiphase flow predicted by the Fluent mixture model is approximately 20 m, measured from the injection point. This result confirms that the placement of the first radiation detector at the distance of 80 m from the injection point was correct. (author)

  19. Prediction of Separation Length of Turbulent Multiphase Flow Using Radiotracer and Computational Fluid Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    S. Sugiharto

    2013-04-01

    Full Text Available Multiphase flow modeling presents great challenges due to its extreme importance in various industrial and environmental applications. In the present study, the separation length of multiphase flow is examined experimentally by injection of two kinds of iodine-based radiotracer solutions into a hydrocarbon transport pipeline (HCT) having an inner diameter of 24 in (60.96 cm). The main components of the fluids in the pipeline are water 95%, crude oil 3% and gas 2%. A radiotracing experiment was carried out at a segment of pipe located far from branch points, with the assumption that stratified flow was achieved in this segment. Two radiation detectors located at 80 and 100 m from the injection point were used to generate the residence time distribution (RTD) curves resulting from injection of the radiotracer solutions. Multiphase computational fluid dynamics (CFD) simulations using an Eulerian-Eulerian control volume and the commercial CFD package Fluent 6.2 were employed to simulate the separation length of the multiphase flow. The results of the study show that the flow velocity of water is higher than that of crude oil in the water-dominated system, despite the higher density of water compared with the crude oil. The separation length of the multiphase flow predicted by the Fluent mixture model is approximately 20 m, measured from the injection point. This result confirms that the placement of the first radiation detector at the distance of 80 m from the injection point was correct.

  20. An appraisal of wind speed distribution prediction by soft computing methodologies: A comparative study

    International Nuclear Information System (INIS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Anuar, Nor Badrul; Saboohi, Hadi; Abdul Wahab, Ainuddin Wahid; Protić, Milan; Zalnezhad, Erfan; Mirhashemi, Seyed Mohammad Amin

    2014-01-01

    Highlights: • Probabilistic distribution functions of wind speed. • Two-parameter Weibull probability distribution. • To build an effective prediction model of the distribution of wind speed. • Support vector regression applied as a probability function for wind speed. - Abstract: The probabilistic distribution of wind speed is among the more significant wind characteristics in examining wind energy potential and the performance of wind energy conversion systems. When the wind speed probability distribution is known, the wind energy distribution can be easily obtained. Therefore, the probability distribution of wind speed is a very important piece of information required in assessing wind energy potential. For this reason, a large number of studies have been carried out concerning the use of a variety of probability density functions to describe wind speed frequency distributions. Although the two-parameter Weibull distribution is a widely used and accepted method, solving for its parameters is very challenging. In this study, polynomial and radial basis function (RBF) kernels are applied in support vector regression (SVR) to estimate the two parameters of the Weibull distribution function according to previously established analytical methods. Rather than minimizing the observed training error, SVRpoly and SVRrbf attempt to minimize the generalization error bound, so as to achieve generalized performance. According to the experimental results, enhanced predictive accuracy and capability of generalization can be achieved using the SVR approach compared with other soft computing methodologies
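
    The underlying estimation task, recovering the two Weibull parameters from wind-speed samples, can be sketched in a few lines. The Python/scipy example below uses a standard maximum-likelihood fit rather than the SVR approach of the paper, and the wind-speed record is synthetic.

        # Sketch of the underlying task (not the paper's SVR method): estimate the
        # two-parameter Weibull shape (k) and scale (c) from wind-speed samples by
        # maximum likelihood, then use them to describe the speed distribution.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        true_k, true_c = 2.0, 7.5                         # hypothetical "true" parameters
        speeds = true_c * rng.weibull(true_k, size=2000)  # synthetic wind-speed record (m/s)

        # Maximum-likelihood fit with location fixed at zero (standard for wind speeds)
        k_hat, _, c_hat = stats.weibull_min.fit(speeds, floc=0)
        print(f"estimated shape k={k_hat:.2f}, scale c={c_hat:.2f} m/s")

        # Probability that wind speed exceeds an assumed turbine cut-in speed of 3 m/s
        p_above_cut_in = stats.weibull_min.sf(3.0, k_hat, loc=0, scale=c_hat)
        print(f"P(speed > 3 m/s) = {p_above_cut_in:.2f}")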

  1. Computational fluid dynamics (CFD) using porous media modeling predicts recurrence after coiling of cerebral aneurysms.

    Science.gov (United States)

    Umeda, Yasuyuki; Ishida, Fujimaro; Tsuji, Masanori; Furukawa, Kazuhiro; Shiba, Masato; Yasuda, Ryuta; Toma, Naoki; Sakaida, Hiroshi; Suzuki, Hidenori

    2017-01-01

    This study aimed to predict recurrence after coil embolization of unruptured cerebral aneurysms with computational fluid dynamics (CFD) using porous media modeling (porous media CFD). A total of 37 unruptured cerebral aneurysms treated with coiling were analyzed using follow-up angiograms, simulated CFD prior to coiling (control CFD), and porous media CFD. Coiled aneurysms were classified into stable or recurrence groups according to follow-up angiogram findings. Morphological parameters, coil packing density, and hemodynamic variables were evaluated for their correlations with aneurysmal recurrence. We also calculated residual flow volumes (RFVs), a novel hemodynamic parameter used to quantify the residual aneurysm volume after simulated coiling, which has a mean fluid domain > 1.0 cm/s. Follow-up angiograms showed 24 aneurysms in the stable group and 13 in the recurrence group. Mann-Whitney U test demonstrated that maximum size, dome volume, neck width, neck area, and coil packing density were significantly different between the two groups (P CFD and larger RFVs in the porous media CFD. Multivariate logistic regression analyses demonstrated that RFV was the only independently significant factor (odds ratio, 1.06; 95% confidence interval, 1.01-1.11; P = 0.016). The study findings suggest that RFV collected under porous media modeling predicts the recurrence of coiled aneurysms.

  2. Osteoporosis prediction from the mandible using cone-beam computed tomography

    International Nuclear Information System (INIS)

    Barngkgei, Imad; Al Haffar, Iyad; Khattab, Razan

    2014-01-01

    This study aimed to evaluate the use of dental cone-beam computed tomography (CBCT) in the diagnosis of osteoporosis among menopausal and postmenopausal women using only a CBCT viewer program. Thirty-eight menopausal and postmenopausal women who underwent dual-energy X-ray absorptiometry (DXA) examination of the hip and lumbar vertebrae were scanned using CBCT (field of view: 13 cm × 15 cm; voxel size: 0.25 mm). Slices from the body of the mandible as well as the ramus were selected, and several CBCT-derived variables, such as radiographic density (RD), were calculated as gray values. Pearson's correlation, one-way analysis of variance (ANOVA), and accuracy (sensitivity and specificity) evaluation based on linear and logistic regression were performed to choose the variable that best correlated with the lumbar and femoral neck T-scores. The RD of the whole bone area of the mandible was the variable that best correlated with and predicted both the femoral neck and the lumbar vertebrae T-scores; the Pearson's correlation coefficients were 0.5/0.6 (p value = 0.037/0.009). The sensitivity, specificity, and accuracy based on the logistic regression were 50%, 88.9%, and 78.4%, respectively, for the femoral neck, and 46.2%, 91.3%, and 75%, respectively, for the lumbar vertebrae. Lumbar vertebrae and femoral neck osteoporosis can be predicted with high accuracy from the RD value of the body of the mandible by using a CBCT viewer program.

  3. Prediction of Elastic Constants of the Fuzzy Fibre Reinforced Polymer Using Computational Micromechanics

    Science.gov (United States)

    Pawlik, Marzena; Lu, Yiling

    2018-05-01

    Computational micromechanics is a useful tool for predicting the properties of carbon fibre reinforced polymers. In this paper, a representative volume element (RVE) is used to investigate a fuzzy fibre reinforced polymer. The fuzzy fibre results from the introduction of nanofillers on the fibre surface. The composite studied contains three phases, namely the T650 carbon fibre, the carbon nanotube (CNT)-reinforced interphase, and the epoxy resin EPIKOTE 862. CNTs are radially grown on the surface of the carbon fibre, and thus the resultant interphase composed of nanotubes and matrix is transversely isotropic. The transversely isotropic properties of the interphase are numerically implemented in the ANSYS FEM software using the element orientation command. The obtained numerical predictions are compared with available analytical models. It is found that the CNT interphase significantly increases the transverse mechanical properties of the fuzzy fibre reinforced polymer. The extent of this enhancement changes monotonically with the carbon fibre volume fraction. The RVE model enables investigation of different orientations of CNTs in the fuzzy fibre model.

  4. Computational tools for experimental determination and theoretical prediction of protein structure

    Energy Technology Data Exchange (ETDEWEB)

    O'Donoghue, S.; Rost, B.

    1995-12-31

    This tutorial was one of eight tutorials selected to be presented at the Third International Conference on Intelligent Systems for Molecular Biology, which was held in the United Kingdom from July 16 to 19, 1995. The authors intend to review the state of the art in the experimental determination of protein 3D structure (with a focus on nuclear magnetic resonance), and in the theoretical prediction of protein function and of protein structure in 1D, 2D and 3D from sequence. All the atomic resolution structures determined so far have been derived from either X-ray crystallography (the majority so far) or Nuclear Magnetic Resonance (NMR) spectroscopy (becoming increasingly more important). The authors briefly describe the physical methods behind both of these techniques; the major computational methods involved will be covered in some detail. They highlight parallels and differences between the methods, and also the current limitations. Special emphasis will be given to techniques which have application to ab initio structure prediction. Large-scale sequencing techniques increase the gap between the number of known protein sequences and that of known protein structures. The authors describe the scope and principles of methods that contribute successfully to closing that gap. Emphasis will be given to the specification of adequate testing procedures to validate such methods.

  5. A control method for agricultural greenhouses heating based on computational fluid dynamics and energy prediction model

    International Nuclear Information System (INIS)

    Chen, Jiaoliao; Xu, Fang; Tan, Dapeng; Shen, Zheng; Zhang, Libin; Ai, Qinglin

    2015-01-01

    Highlights: • A novel control method for the heating of a greenhouse with SWSHPS is proposed. • CFD is employed to predict the priorities of FCU loops for thermal performance. • EPM acts as an on-line tool to predict the total energy demand of the greenhouse. • The CFD–EPM-based method can save energy and improve control accuracy. • The energy savings potential is between 8.7% and 15.1%. - Abstract: As heating is one of the main production costs, many efforts have been made to reduce the energy consumption of agricultural greenhouses. Herein, a novel control method for greenhouse heating using computational fluid dynamics (CFD) and an energy prediction model (EPM) is proposed for energy savings and system performance. Based on the low-Reynolds-number k–ε turbulence principle, a CFD model of the heated greenhouse is developed, applying the discrete ordinates model for the radiative heat transfer and a porous medium approach for the plants that accounts for their sensible and latent heat exchanges. The CFD simulations have been validated and used to analyze the greenhouse thermal performance and the priority of fan coil unit (FCU) loops under various heating conditions. According to the heating efficiency and temperature uniformity, the priorities of each FCU loop can be predicted to generate a database of priorities for the control system. The EPM is built on the thermal balance and used to predict and optimize the energy demand of the greenhouse online. Combining the priorities of FCU loops from offline CFD simulations, we have developed the CFD–EPM-based heating control system for a greenhouse with a surface water source heat pump system (SWSHPS). Compared with the conventional multi-zone independent control (CMIC) method, the energy savings potential is between 8.7% and 15.1%, and the control temperature deviation is decreased to between 0.1 °C and 0.6 °C in the investigated greenhouse. These results show the CFD–EPM-based method can improve system
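
    The EPM component can be pictured as a simple thermal balance. The minimal steady-state sketch below estimates the heating demand that the FCU loops would have to cover; the envelope area, U-value, set-points, and gain terms are invented placeholders rather than values from the study.

        # Minimal steady-state sketch of an energy prediction model (EPM) of the kind
        # described: a thermal balance estimating the heating demand the FCU loops must
        # cover. Geometry, U-value, set-points, and gain terms are invented placeholders.
        cover_area_m2 = 1200.0            # greenhouse envelope area
        u_value = 6.0                     # W/m2K, overall heat transfer coefficient of the cover
        t_inside, t_outside = 20.0, 2.0   # degC set-point and ambient temperature
        solar_gain_w = 15_000.0           # transmitted solar radiation
        crop_latent_w = 8_000.0           # latent heat removed by crop transpiration

        envelope_loss_w = u_value * cover_area_m2 * (t_inside - t_outside)
        heating_demand_w = envelope_loss_w + crop_latent_w - solar_gain_w

        print(f"envelope loss: {envelope_loss_w/1000:.1f} kW")
        print(f"predicted heating demand: {max(heating_demand_w, 0)/1000:.1f} kW")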

  6. Icarus: visualizer for de novo assembly evaluation.

    Science.gov (United States)

    Mikheenko, Alla; Valin, Gleb; Prjibelski, Andrey; Saveliev, Vladislav; Gurevich, Alexey

    2016-11-01

    Data visualization plays an increasingly important role in NGS data analysis. With advances in both sequencing and computational technologies, it has become a new bottleneck in genomics studies. Indeed, evaluation of de novo genome assemblies is one of the areas that can benefit from visualization. However, even though multiple quality assessment methods are now available, existing visualization tools are hardly suitable for this purpose. Here, we present Icarus, a novel genome visualizer for accurate assessment and analysis of genomic draft assemblies, which is based on the tool QUAST. Icarus can be used in studies where a related reference genome is available, as well as for non-model organisms. The tool is available online and as a standalone application. Availability: http://cab.spbu.ru/software/icarus. Contact: aleksey.gurevich@spbu.ru. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. UniNovo: a universal tool for de novo peptide sequencing.

    Science.gov (United States)

    Jeong, Kyowon; Kim, Sangtae; Pevzner, Pavel A

    2013-08-15

    Mass spectrometry (MS) instruments and experimental protocols are rapidly advancing, but de novo peptide sequencing algorithms to analyze tandem mass (MS/MS) spectra are lagging behind. Although existing de novo sequencing tools perform well on certain types of spectra [e.g. Collision Induced Dissociation (CID) spectra of tryptic peptides], their performance often deteriorates on other types of spectra, such as Electron Transfer Dissociation (ETD), Higher-energy Collisional Dissociation (HCD) spectra or spectra of non-tryptic digests. Thus, rather than developing a new algorithm for each type of spectra, we develop a universal de novo sequencing algorithm called UniNovo that works well for all types of spectra or even for spectral pairs (e.g. CID/ETD spectral pairs). UniNovo uses an improved scoring function that captures the dependencies between different ion types, where such dependencies are learned automatically using a modified offset frequency function. The performance of UniNovo is compared with that of PepNovo+, PEAKS and pNovo using various types of spectra. The results show that the performance of UniNovo is superior to the other tools for ETD spectra and superior or comparable to the others for CID and HCD spectra. UniNovo also estimates the probability that each reported reconstruction is correct, using simple statistics that are readily obtained from a small training dataset. We demonstrate that the estimation is accurate for all tested types of spectra (including CID, HCD, ETD, CID/ETD and HCD/ETD spectra of trypsin, LysC or AspN digested peptides). UniNovo is implemented in Java and tested on Windows, Ubuntu and OS X machines. UniNovo is available at http://proteomics.ucsd.edu/Software/UniNovo.html along with the manual.

  8. Automated prediction of tissue outcome after acute ischemic stroke in computed tomography perfusion images

    Science.gov (United States)

    Vos, Pieter C.; Bennink, Edwin; de Jong, Hugo; Velthuis, Birgitta K.; Viergever, Max A.; Dankbaar, Jan Willem

    2015-03-01

    Assessment of the extent of cerebral damage on admission in patients with acute ischemic stroke could play an important role in treatment decision making. Computed tomography perfusion (CTP) imaging can be used to determine the extent of damage. However, clinical application is hindered by differences among vendors and the methodology used. As a result, threshold-based methods and visual assessment of CTP images have not yet been shown to be useful in treatment decision making and predicting clinical outcome. Preliminary results in MR studies have shown the benefit of using supervised classifiers for predicting tissue outcome, but this has not been demonstrated for CTP. We present a novel method for the automatic prediction of tissue outcome by combining multi-parametric CTP images into a tissue outcome probability map. A supervised classification scheme was developed to extract absolute and relative perfusion values from processed CTP images, which are summarized by a trained classifier into a likelihood of infarction. Training was performed using follow-up CT scans of 20 acute stroke patients with complete recanalization of the vessel that was occluded on admission. Infarcted regions were annotated by expert neuroradiologists. Multiple classifiers were evaluated in a leave-one-patient-out strategy for their discriminating performance using receiver operating characteristic (ROC) statistics. Results showed that a RandomForest classifier performed optimally, with an area under the ROC curve of 0.90 for discriminating infarct tissue. The obtained results are an improvement over existing thresholding methods and are in line with results found in the literature where MR perfusion was used.
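
    A simplified stand-in for the described pipeline is sketched below in Python/scikit-learn: synthetic per-voxel "perfusion" features grouped by patient, a random forest classifier, and leave-one-patient-out evaluation scored by the area under the ROC curve. The feature model and labels are invented and are not derived from CTP data.

        # Simplified stand-in for the described pipeline (synthetic data, not CTP):
        # per-voxel perfusion features grouped by patient, a random forest classifier,
        # and leave-one-patient-out evaluation scored by the area under the ROC curve.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import LeaveOneGroupOut
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(6)
        n_patients, voxels_per_patient = 20, 200
        groups = np.repeat(np.arange(n_patients), voxels_per_patient)

        # Synthetic "perfusion" features standing in for CBF-, CBV-, MTT-like values
        X = rng.normal(size=(groups.size, 3))
        # Synthetic infarction labels loosely driven by the first two features
        y = ((X[:, 0] - X[:, 1] + rng.normal(0, 0.8, groups.size)) > 0.8).astype(int)

        aucs = []
        for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X[train_idx], y[train_idx])
            prob = clf.predict_proba(X[test_idx])[:, 1]
            if len(np.unique(y[test_idx])) == 2:      # AUC needs both classes present
                aucs.append(roc_auc_score(y[test_idx], prob))

        print(f"mean leave-one-patient-out ROC AUC: {np.mean(aucs):.2f}")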

  9. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    International Nuclear Information System (INIS)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A

    2011-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ∼0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.
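
    As a rough illustration of the path estimates being compared (not code from the study), the cubic spline path can be built from the entrance and exit positions and directions with the standard cubic Hermite form, and the straight-line path from the endpoints alone; all values below are invented.

        import numpy as np

        def cubic_spline_path(p_in, d_in, p_out, d_out, n_points=100):
            # cubic Hermite interpolation between entry and exit, using tracker directions as tangents
            p_in, d_in = np.asarray(p_in, float), np.asarray(d_in, float)
            p_out, d_out = np.asarray(p_out, float), np.asarray(d_out, float)
            t = np.linspace(0.0, 1.0, n_points)[:, None]
            h00 = 2 * t**3 - 3 * t**2 + 1
            h10 = t**3 - 2 * t**2 + t
            h01 = -2 * t**3 + 3 * t**2
            h11 = t**3 - t**2
            return h00 * p_in + h10 * d_in + h01 * p_out + h11 * d_out

        def straight_line_path(p_in, p_out, n_points=100):
            p_in, p_out = np.asarray(p_in, float), np.asarray(p_out, float)
            t = np.linspace(0.0, 1.0, n_points)[:, None]
            return (1 - t) * p_in + t * p_out

        # entry/exit positions (cm) and direction vectors scaled to the chord length (assumed values)
        csp = cubic_spline_path([0.0, 0.0], [20.0, 0.5], [20.0, 0.8], [20.0, 0.2])
        slp = straight_line_path([0.0, 0.0], [20.0, 0.8])
        print(np.max(np.abs(csp - slp), axis=0))   # largest separation between the two estimates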

  10. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Energy Technology Data Exchange (ETDEWEB)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A, E-mail: tome@humonc.wisc.edu [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705 (United States)

    2011-02-07

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.

  11. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Science.gov (United States)

    Wang, Dongxu; Mackie, T Rockwell

    2015-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472

  12. Computational models for residual creep life prediction of power plant components

    International Nuclear Information System (INIS)

    Grewal, G.S.; Singh, A.K.; Ramamoortry, M.

    2006-01-01

    All high temperature - high pressure power plant components are prone to irreversible visco-plastic deformation by the phenomenon of creep. The steady state creep response as well as the total creep life of a material is related to the operational component temperature through, respectively, the exponential and inverse exponential relationships. Minor increases in the component temperature can thus have serious consequences as far as the creep life and dimensional stability of a plant component are concerned. In high temperature steam tubing in power plants, one mechanism by which a significant temperature rise can occur is by the growth of a thermally insulating oxide film on its steam side surface. In the present paper, an elegantly simple and computationally efficient technique is presented for predicting the residual creep life of steel components subjected to continual steam side oxide film growth. Similarly, fabrication of high temperature power plant components involves extensive use of welding as the fabrication process of choice. Naturally, issues related to the creep life of weldments have to be seriously addressed for safe and continual operation of the welded plant component. Unfortunately, a typical weldment in an engineering structure is a zone of complex microstructural gradation comprising a number of distinct sub-zones with distinct meso-scale and micro-scale morphology of the phases and (even) chemistry, and its creep life prediction presents considerable challenges. The present paper presents a stochastic algorithm, which can be used for developing experimental creep-cavitation intensity versus residual life correlations for welded structures. Apart from estimates of the residual life in a mean field sense, the model can be used for predicting the reliability of the plant component in a rigorous probabilistic setting. (author)
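
    The temperature dependence described above can be illustrated with a generic sketch (this is not the paper's algorithm): creep-rupture life is taken to fall off exponentially with inverse temperature, the steam-side oxide film is assumed to raise the metal temperature by a fixed amount per year, and life consumption is accumulated with a simple time-fraction rule. All constants are hypothetical.

        import math

        A, Q, R = 1.0e-6, 1.8e5, 8.314        # hypothetical rupture-life constants (h, J/mol, J/mol/K)

        def rupture_life_hours(temperature_k):
            # inverse-exponential (Arrhenius-type) dependence of creep life on temperature
            return A * math.exp(Q / (R * temperature_k))

        base_temperature_k = 823.0            # assumed tube metal temperature without oxide
        oxide_rise_per_year = 2.0             # assumed temperature rise per year of oxide growth, K

        consumed, years = 0.0, 0
        while consumed < 1.0 and years < 60:
            temperature = base_temperature_k + oxide_rise_per_year * years
            consumed += 8760.0 / rupture_life_hours(temperature)   # fraction of life used this year
            years += 1

        print("estimated residual creep life of roughly", years, "years under these assumptions")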

  13. The role of perfusion computed tomography in the prediction of cerebral hyperperfusion syndrome.

    Directory of Open Access Journals (Sweden)

    Chien Hung Chang

    Full Text Available Hyperperfusion syndrome (HPS) following carotid angioplasty with stenting (CAS) is associated with significant morbidity and mortality. At present, there are no reliable parameters to predict HPS. The aim of this study was to clarify whether perfusion computed tomography (CT) is a feasible and reliable tool in predicting HPS after CAS. We performed a retrospective case-control study of 54 patients (11 HPS patients and 43 non-HPS) with unilateral severe stenosis of the carotid artery who underwent CAS. We compared the prevalence of vascular risk factors and perfusion CT parameters, including regional cerebral blood volume (rCBV), regional cerebral blood flow (rCBF), and time to peak (TTP), within seven days prior to CAS. Demographic information, risk factors for atherosclerosis, and perfusion CT parameters were evaluated by multivariable logistic regression analysis. The rCBV index was calculated as [(ipsilateral rCBV - contralateral rCBV)/contralateral rCBV], and indices of rCBF and TTP were similarly calculated. We found that eleven patients had HPS, including five with intracranial hemorrhages (ICHs), of whom three died. After a comparison with non-HPS control subjects, independent predictors of HPS included the severity of ipsilateral carotid artery stenosis, 3-hour mean systolic blood pressure (3 h SBP) after CAS, pre-stenting rCBV index >0.15 and TTP index >0.22. The combination of severe ipsilateral carotid stenosis, 3 h SBP after CAS, rCBV index and TTP index provides a potential screening tool for predicting HPS in patients with unilateral carotid stenosis receiving CAS. In addition, adequate management of post-stenting blood pressure is the most important treatable factor in preventing HPS in these high risk patients.
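
    A small worked illustration of the two imaging indices defined above and the reported pre-stenting thresholds; the perfusion values are invented, not patient data.

        def side_index(ipsilateral, contralateral):
            # [(ipsilateral - contralateral) / contralateral], as defined for rCBV, rCBF and TTP
            return (ipsilateral - contralateral) / contralateral

        rcbv_index = side_index(ipsilateral=4.1, contralateral=3.4)   # hypothetical rCBV values
        ttp_index = side_index(ipsilateral=9.8, contralateral=7.9)    # hypothetical TTP values

        at_risk = rcbv_index > 0.15 and ttp_index > 0.22              # thresholds reported in the abstract
        print(round(rcbv_index, 2), round(ttp_index, 2), at_risk)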

  14. The benefit of non contrast-enhanced magnetic resonance angiography for predicting vascular access surgery outcome: a computer model perspective.

    Directory of Open Access Journals (Sweden)

    Maarten A G Merkx

    Full Text Available INTRODUCTION: Vascular access (VA) surgery, a prerequisite for hemodialysis treatment of end-stage renal disease (ESRD) patients, is hampered by complication rates, which are frequently related to flow enhancement. To assist in VA surgery planning, a patient-specific computer model for postoperative flow enhancement was developed. The purpose of this study is to assess the benefit of non contrast-enhanced magnetic resonance angiography (NCE-MRA) data as patient-specific geometrical input for the model-based prediction of surgery outcome. METHODS: 25 ESRD patients were included in this study. All patients received an NCE-MRA examination of the upper extremity blood vessels in addition to routine ultrasound (US). Local arterial radii were assessed from NCE-MRA and converted to model input using a linear fit per artery. Venous radii were determined with US. The effect of radius measurement uncertainty on model predictions was accounted for by performing Monte-Carlo simulations. The resulting flow prediction interval of the computer model was compared with the postoperative flow obtained from US. Patients with no overlap between model-based prediction and postoperative measurement were further analyzed to determine whether an increase in geometrical detail improved computer model prediction. RESULTS: Overlap between postoperative flows and model-based predictions was obtained for 71% of patients. Detailed inspection of non-overlapping cases revealed that the geometrical details that could be assessed from NCE-MRA explained most of the differences, and moreover, upon addition of these details in the computer model the flow predictions improved. CONCLUSIONS: The results demonstrate clearly that NCE-MRA does provide valuable geometrical information for VA surgery planning. Therefore, it is recommended to use this modality, at least for patients at risk for local or global narrowing of the blood vessels as well as for patients for whom an US-based model
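
    A toy Monte-Carlo illustration (not the patient-specific model) of how radius measurement uncertainty widens a predicted flow interval when hemodynamic resistance scales with 1/r^4 (Poiseuille); every number below is an assumption.

        import numpy as np

        rng = np.random.default_rng(1)
        pressure_drop = 80 * 133.3                   # Pa, assumed arteriovenous pressure difference
        viscosity = 3.5e-3                           # Pa*s, assumed blood viscosity
        length = 0.25                                # m, assumed effective vessel length

        radius_mean, radius_sd = 1.5e-3, 0.15e-3     # m, measured radius and its uncertainty (assumed)
        radii = rng.normal(radius_mean, radius_sd, size=10000)
        flows = np.pi * radii**4 * pressure_drop / (8 * viscosity * length)   # m^3/s, Poiseuille

        low, high = np.percentile(flows * 6e7, [2.5, 97.5])                   # convert to ml/min
        print("95%% flow prediction interval: %.0f - %.0f ml/min" % (low, high))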

  15. Automated interpretable computational biology in the clinic: a framework to predict disease severity and stratify patients from clinical data

    Directory of Open Access Journals (Sweden)

    Soumya Banerjee

    2017-10-01

    Full Text Available We outline an automated computational and machine learning framework that predicts disease severity and stratifies patients. We apply our framework to available clinical data. Our algorithm automatically generates insights and predicts disease severity with minimal operator intervention. The computational framework presented here can be used to stratify patients, predict disease severity and propose novel biomarkers for disease. Insights from machine learning algorithms coupled with clinical data may help guide therapy, personalize treatment and help clinicians understand the change in disease over time. Computational techniques like these can be used in translational medicine in close collaboration with clinicians and healthcare providers. Our models are also interpretable, allowing clinicians with minimal machine learning experience to engage in model building. This work is a step towards automated machine learning in the clinic.

  16. Five-year clinical and functional multislice computed tomography angiographic results after coronary implantation of the fully resorbable polymeric everolimus-eluting scaffold in patients with de novo coronary artery disease

    DEFF Research Database (Denmark)

    Onuma, Yoshinobu; Dudek, Dariusz; Thuesen, Leif

    2013-01-01

    This study sought to demonstrate the 5-year clinical and functional multislice computed tomography angiographic results after implantation of the fully resorbable everolimus-eluting scaffold (Absorb BVS, Abbott Vascular, Santa Clara, California).

  17. Predictive computational modeling of the mucosal immune responses during Helicobacter pylori infection.

    Directory of Open Access Journals (Sweden)

    Adria Carbo

    Full Text Available T helper (Th) cells play a major role in the immune response and pathology at the gastric mucosa during Helicobacter pylori infection. There is a limited mechanistic understanding regarding the contributions of CD4+ T cell subsets to gastritis development during H. pylori colonization. We used two computational approaches, ordinary differential equation (ODE)-based and agent-based modeling (ABM), to study the mechanisms underlying cellular immune responses to H. pylori and how CD4+ T cell subsets influenced initiation, progression and outcome of disease. To calibrate the model, in vivo experimentation was performed by infecting C57BL/6 mice intragastrically with H. pylori and assaying immune cell subsets in the stomach and gastric lymph nodes (GLN) on days 0, 7, 14, 30 and 60 post-infection. Our computational model reproduced the dynamics of effector and regulatory pathways in the gastric lamina propria (LP) in silico. Simulation results show the induction of a Th17 response and a dominant Th1 response, together with a regulatory response characterized by high levels of mucosal Treg cells. We also investigated the potential role of peroxisome proliferator-activated receptor γ (PPARγ) activation in the modulation of host responses to H. pylori by using loss-of-function approaches. Specifically, in silico results showed a predominance of Th1 and Th17 cells in the stomach of the cell-specific PPARγ knockout system when compared to the wild-type simulation. Spatio-temporal, object-oriented ABM approaches suggested similar dynamics in induction of host responses, showing analogous T cell distributions to ODE modeling and facilitating tracking of lesion formation. In addition, sensitivity analysis predicted a crucial contribution of Th1 and Th17 effector responses as mediators of histopathological changes in the gastric mucosa during chronic stages of infection, which were experimentally validated in mice. These integrated immunoinformatics approaches
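
    A deliberately simplified ODE sketch in the spirit of the approach described above (this is not the calibrated model): bacterial load induces Th1 and Th17 responses, regulatory T cells suppress both, and Th1 cells help clear the bacterium; every rate constant is invented for illustration.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y):
            hp, th1, th17, treg = y                          # H. pylori load and CD4+ T cell subsets
            d_hp = 0.3 * hp * (1 - hp) - 0.4 * hp * th1      # logistic growth, cleared by Th1
            d_th1 = 0.5 * hp - 0.2 * th1 - 0.3 * treg * th1
            d_th17 = 0.3 * hp - 0.2 * th17 - 0.3 * treg * th17
            d_treg = 0.2 * (th1 + th17) - 0.1 * treg         # regulatory response driven by effectors
            return [d_hp, d_th1, d_th17, d_treg]

        sol = solve_ivp(rhs, (0, 60), [0.1, 0.0, 0.0, 0.0], t_eval=np.linspace(0, 60, 7))
        print(sol.y[:, -1])    # simulated subset levels at day 60 post-infection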

  18. Computational prediction of dust production in graphite moderated pebble bed reactors

    Science.gov (United States)

    Rostamian, Maziar

    The scope of the work reported here, which is the computational study of graphite wear behavior, supports the Nuclear Engineering University Programs project "Experimental Study and Computational Simulations of Key Pebble Bed Thermomechanics Issues for Design and Safety" funded by the US Department of Energy. In this work, modeling and simulating the contact mechanics, as anticipated in a PBR configuration, is carried out for the purpose of assessing the amount of dust generated during a full power operation year of a PBR. A methodology that encompasses finite element analysis (FEA) and micromechanics of wear is developed to address the issue of dust production and its quantification. Particularly, the phenomenon of wear and change of its rate with sliding length is the main focus of this dissertation. This work studies the wear properties of graphite by simulating pebble motion and interactions of a specific type of nuclear grade graphite, IG-11. This study consists of two perspectives: macroscale stress analysis and microscale analysis of wear mechanisms. The first is a set of FEA simulations considering pebble-pebble frictional contact. In these simulations, the mass of generated graphite particulates due to frictional contact is calculated by incorporating FEA results into Archard's equation, which is a linear correlation between wear mass and wear length. However, the experimental data by Johnson, University of Idaho, revealed that the wear rate of graphite decreases with sliding length. This is because the surfaces of the graphite pebbles become smoother over time, which results in a gradual decrease in wear rate. In order to address the change in wear rate, a more detailed analysis of wear mechanisms at room temperature is presented. In this microscale study, the wear behavior of graphite at the asperity level is studied by simulating the contact between asperities of facing surfaces. By introducing the effect of asperity removal on wear rate, a nonlinear
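
    The macroscale step described above rests on Archard's relation between wear volume, load and sliding length; a minimal sketch with illustrative constants (not IG-11 data) is:

        def archard_wear_volume(load_newton, sliding_length_m, wear_coefficient, hardness_pa):
            # Archard's equation: V = k * F * s / H
            return wear_coefficient * load_newton * sliding_length_m / hardness_pa

        graphite_density = 1750.0    # kg/m^3, typical of nuclear graphite (assumed)
        volume = archard_wear_volume(load_newton=50.0, sliding_length_m=1200.0,
                                     wear_coefficient=1e-4, hardness_pa=40e6)
        print("dust mass per contact:", volume * graphite_density, "kg")

    A wear rate that decreases with sliding length, as observed experimentally, would correspond to letting the wear coefficient decay as the contacting surfaces smoothen, rather than keeping it constant as in this sketch.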

  19. Computational modeling to predict nitrogen balance during acute metabolic decompensation in patients with urea cycle disorders.

    Science.gov (United States)

    MacLeod, Erin L; Hall, Kevin D; McGuire, Peter J

    2016-01-01

    Nutritional management of acute metabolic decompensation in amino acid inborn errors of metabolism (AA IEM) aims to restore nitrogen balance. While nutritional recommendations have been published, they have never been rigorously evaluated. Furthermore, despite these recommendations, there is a wide variation in the nutritional strategies employed amongst providers, particularly regarding the inclusion of parenteral lipids for protein-free caloric support. Since randomized clinical trials during acute metabolic decompensation are difficult and potentially dangerous, mathematical modeling of metabolism can serve as a surrogate for the preclinical evaluation of nutritional interventions aimed at restoring nitrogen balance during acute decompensation in AA IEM. A validated computational model of human macronutrient metabolism was adapted to predict nitrogen balance in response to various nutritional interventions in a simulated patient with a urea cycle disorder (UCD) during acute metabolic decompensation due to dietary non-adherence or infection. The nutritional interventions were constructed from published recommendations as well as clinical anecdotes. Overall, dextrose alone (DEX) was predicted to be better at restoring nitrogen balance and limiting nitrogen excretion during dietary non-adherence and infection scenarios, suggesting that the published recommended nutritional strategy involving dextrose and parenteral lipids (ISO) may be suboptimal. The implications for patients with AA IEM are that the medical course during acute metabolic decompensation may be influenced by the choice of protein-free caloric support. These results are also applicable to intensive care patients undergoing catabolism (postoperative phase or sepsis), where parenteral nutritional support aimed at restoring nitrogen balance may be more tailored regarding metabolic fuel selection.

  20. Serotonergic modulation of spatial working memory: predictions from a computational network model

    Directory of Open Access Journals (Sweden)

    Maria eCano-Colino

    2013-09-01

    Full Text Available Serotonin (5-HT) receptors of types 1A and 2A are massively expressed in prefrontal cortex (PFC) neurons, an area associated with cognitive function. Hence, 5-HT could be effective in modulating prefrontal-dependent cognitive functions, such as spatial working memory (SWM). However, a direct association between 5-HT and SWM has proved elusive in psycho-pharmacological studies. Recently, a computational network model of the PFC microcircuit was used to explore the relationship between 5-HT and SWM (Cano-Colino et al. 2013). This study found that both excessive and insufficient 5-HT levels lead to impaired SWM performance in the network, and it concluded that analyzing behavioral responses based on confidence reports could facilitate the experimental identification of SWM behavioral effects of 5-HT neuromodulation. Such analyses may have confounds based on our limited understanding of metacognitive processes. Here, we extend these results by deriving three additional predictions from the model that do not rely on confidence reports. Firstly, only excessive levels of 5-HT should result in SWM deficits that increase with delay duration. Secondly, excessive 5-HT baseline concentration makes the network vulnerable to distractors at distances that were robust to distraction in control conditions, while the network still ignores distractors efficiently for low 5-HT levels that impair SWM. Finally, 5-HT modulates neuronal memory fields in neurophysiological experiments: neurons should be better tuned to the cued stimulus than to the behavioral report for excessive 5-HT levels, while the reverse should happen for low 5-HT concentrations. In all our simulations, agonists of 5-HT1A receptors and antagonists of 5-HT2A receptors produced behavioral and physiological effects in line with global 5-HT level increases. Our model makes specific predictions to be tested experimentally and advance our understanding of the neural basis of SWM and its neuromodulation.

  1. Osteoporosis prediction from the mandible using cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Barngkgei, Imad; Al Haffar, Iyad [Dept. of Oral Medicine, Faculty of Dentistry, Damascus University, Damascus (Syrian Arab Republic); Khattab, Razan [Dept. of Periodontology, Faculty of Dentistry, Damascus University, Damascus (Syrian Arab Republic)

    2014-12-15

    This study aimed to evaluate the use of dental cone-beam computed tomography (CBCT) in the diagnosis of osteoporosis among menopausal and postmenopausal women by using only a CBCT viewer program. Thirty-eight menopausal and postmenopausal women who underwent dual-energy X-ray absorptiometry (DXA) examination of the hip and lumbar vertebrae were scanned using CBCT (field of view: 13 cm x 15 cm; voxel size: 0.25 mm). Slices from the body of the mandible as well as the ramus were selected, and CBCT-derived variables, such as radiographic density (RD), were calculated as gray values. Pearson's correlation, one-way analysis of variance (ANOVA), and accuracy (sensitivity and specificity) evaluation based on linear and logistic regression were performed to choose the variable that best correlated with the lumbar and femoral neck T-scores. RD of the whole bone area of the mandible was the variable that best correlated with and predicted both the femoral neck and the lumbar vertebrae T-scores; the Pearson's correlation coefficients were 0.5/0.6 (p value=0.037/0.009). The sensitivity, specificity, and accuracy based on the logistic regression were 50%, 88.9%, and 78.4%, respectively, for the femoral neck, and 46.2%, 91.3%, and 75%, respectively, for the lumbar vertebrae. Lumbar vertebrae and femoral neck osteoporosis can be predicted with high accuracy from the RD value of the body of the mandible by using a CBCT viewer program.
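
    A hypothetical sketch of the kind of logistic-regression screening reported above, with mandibular gray values predicting an osteoporotic DXA T-score; the data are simulated, not the study's measurements.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        rd = np.concatenate([rng.normal(550, 60, 25), rng.normal(450, 60, 13)])   # gray values (simulated)
        osteoporotic = np.concatenate([np.zeros(25), np.ones(13)])                # 1 = DXA T-score <= -2.5

        model = LogisticRegression().fit(rd.reshape(-1, 1), osteoporotic)
        pred = model.predict(rd.reshape(-1, 1))

        tp = np.sum((pred == 1) & (osteoporotic == 1))
        tn = np.sum((pred == 0) & (osteoporotic == 0))
        print("sensitivity:", tp / osteoporotic.sum(), "specificity:", tn / (osteoporotic == 0).sum())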

  2. Computational investigation of kinetics of cross-linking reactions in proteins: importance in structure prediction.

    Science.gov (United States)

    Bandyopadhyay, Pradipta; Kuntz, Irwin D

    2009-01-01

    The determination of protein structure using distance constraints is a new and promising field of study. One implementation involves attaching residues of a protein using a cross-linking agent, followed by protease digestion, analysis of the resulting peptides by mass spectrometry, and finally sequence threading to detect the protein folds. In the present work, we carry out computational modeling of the kinetics of cross-linking reactions in proteins using the master equation approach. The rate constants of the cross-linking reactions are estimated using the pKas and the solvent-accessible surface areas of the residues involved. This model is tested with fibroblast growth factor (FGF) and cytochrome C. It is consistent with the initial experimental rate data for individual lysine residues for cytochrome C. Our model captures all observed cross-links for FGF and almost 90% of the observed cross-links for cytochrome C, although it also predicts cross-links that were not observed experimentally (false positives). However, the analysis of the false positive results is complicated by the fact that experimental detection of cross-links can be difficult and may depend on specific experimental conditions such as pH and ionic strength. Receiver operating characteristic plots showed that our model does a good job of predicting the observed cross-links. Molecular dynamics simulations showed that for cytochrome C, in general, the two lysines come closer for the observed cross-links as compared to the false positive ones. For FGF, no such clear pattern exists. The kinetic model and MD simulations can be used to study proposed cross-linking protocols.
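
    A hedged sketch of the reactivity idea described above (the paper's actual functional form may differ): a lysine's contribution to the cross-linking rate is scaled by the fraction of its amine that is deprotonated at the working pH (from its pKa) and by its fractional solvent exposure.

        def relative_rate(pka, sasa_fraction, ph=7.4):
            fraction_neutral = 1.0 / (1.0 + 10 ** (pka - ph))   # Henderson-Hasselbalch
            return fraction_neutral * sasa_fraction

        # hypothetical residues: (pKa, fractional solvent-accessible surface area)
        lysines = {"K27": (10.4, 0.65), "K72": (9.8, 0.30), "K99": (10.9, 0.80)}
        for name, (pka, sasa) in lysines.items():
            print(name, "%.2e" % relative_rate(pka, sasa))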

  3. On the Predictability of Computer simulations: Advances in Verification and Validation

    KAUST Repository

    Prudhomme, Serge

    2014-01-06

    We will present recent advances on the topics of Verification and Validation in order to assess the reliability and predictability of computer simulations. The first part of the talk will focus on goal-oriented error estimation for nonlinear boundary-value problems and nonlinear quantities of interest, in which case the error representation consists of two contributions: 1) a first contribution, involving the residual and the solution of the linearized adjoint problem, which quantifies the discretization or modeling error; and 2) a second contribution, combining higher-order terms that describe the linearization error. The linearization error contribution is in general neglected with respect to the discretization or modeling error. However, when nonlinear effects are significant, it is unclear whether ignoring linearization effects may produce poor convergence of the adaptive process. The objective will be to show how both contributions can be estimated and employed in an adaptive scheme that simultaneously controls the two errors in a balanced manner. In the second part of the talk, we will present a novel approach for calibration of model parameters. The proposed inverse problem not only involves the minimization of the misfit between experimental observables and their theoretical estimates, but also an objective function that takes into account some design goals on specific design scenarios. The method can be viewed as a regularization approach to the inverse problem, one, however, that best respects some design goals for which mathematical models are intended. The inverse problem is solved by a Bayesian method to account for uncertainties in the data. We will show that it shares the same structure as the deterministic problem that one would obtain by multi-objective optimization theory. The method is illustrated on an example of heat transfer in a two-dimensional fin. The proposed approach has the main benefit that it increases the confidence in predictive

  4. Osteoporosis prediction from the mandible using cone-beam computed tomography

    Science.gov (United States)

    Al Haffar, Iyad; Khattab, Razan

    2014-01-01

    Purpose This study aimed to evaluate the use of dental cone-beam computed tomography (CBCT) in the diagnosis of osteoporosis among menopausal and postmenopausal women by using only a CBCT viewer program. Materials and Methods Thirty-eight menopausal and postmenopausal women who underwent dual-energy X-ray absorptiometry (DXA) examination of the hip and lumbar vertebrae were scanned using CBCT (field of view: 13 cm×15 cm; voxel size: 0.25 mm). Slices from the body of the mandible as well as the ramus were selected, and CBCT-derived variables, such as radiographic density (RD), were calculated as gray values. Pearson's correlation, one-way analysis of variance (ANOVA), and accuracy (sensitivity and specificity) evaluation based on linear and logistic regression were performed to choose the variable that best correlated with the lumbar and femoral neck T-scores. Results RD of the whole bone area of the mandible was the variable that best correlated with and predicted both the femoral neck and the lumbar vertebrae T-scores; the Pearson's correlation coefficients were 0.5/0.6 (p value=0.037/0.009). The sensitivity, specificity, and accuracy based on the logistic regression were 50%, 88.9%, and 78.4%, respectively, for the femoral neck, and 46.2%, 91.3%, and 75%, respectively, for the lumbar vertebrae. Conclusion Lumbar vertebrae and femoral neck osteoporosis can be predicted with high accuracy from the RD value of the body of the mandible by using a CBCT viewer program. PMID:25473633

  5. De Novo Discovery of Structured ncRNA Motifs in Genomic Sequences

    DEFF Research Database (Denmark)

    Ruzzo, Walter L; Gorodkin, Jan

    2014-01-01

    De novo discovery of "motifs" capturing the commonalities among related structured noncoding RNAs (ncRNAs) is among the most difficult problems in computational biology. This chapter outlines the challenges presented by this problem, together with some approaches towards solving them, with an emphasis on an approach based on the CMfinder program as a case study. Applications to genomic screens for novel de novo structured ncRNAs, including structured RNA elements in untranslated portions of protein-coding genes, are presented.

  6. From prediction error to incentive salience: mesolimbic computation of reward motivation

    Science.gov (United States)

    Berridge, Kent C.

    2011-01-01

    Reward contains separable psychological components of learning, incentive motivation and pleasure. Most computational models have focused only on the learning component of reward, but the motivational component is equally important in reward circuitry, and even more directly controls behavior. Modeling the motivational component requires recognition of additional control factors besides learning. Here I will discuss how mesocorticolimbic mechanisms generate the motivation component of incentive salience. Incentive salience takes Pavlovian learning and memory as one input and as an equally important input takes neurobiological state factors (e.g., drug states, appetite states, satiety states) that can vary independently of learning. Neurobiological state changes can produce unlearned fluctuations or even reversals in the ability of a previously-learned reward cue to trigger motivation. Such fluctuations in cue-triggered motivation can dramatically depart from all previously learned values about the associated reward outcome. Thus a consequence of the difference between incentive salience and learning can be to decouple cue-triggered motivation of the moment from previously learned values of how good the associated reward has been in the past. Another consequence can be to produce irrationally strong motivation urges that are not justified by any memories of previous reward values (and without distorting associative predictions of future reward value). Such irrationally strong motivation may be especially problematic in addiction. To comprehend these phenomena, future models of mesocorticolimbic reward function should address the neurobiological state factors that participate to control generation of incentive salience. PMID:22487042

  7. Cyclic behavior of 316L steel predicted by means of finite element computations

    International Nuclear Information System (INIS)

    Liu, J.; Sauzay, M.; Robertson, C.; Liu, J.

    2011-01-01

    The cyclic behavior of 316L steels is predicted based on crystalline elastoplastic constitutive laws. Calculations are performed with the finite element software CAST3M, using a polycrystalline mesh where the individual grains are modeled as cubes with random crystallographic orientations. At the grain scale, the constitutive law parameters are adjusted using single crystal cyclic stress-strain curves (CSSCs) from the literature. Calculations are performed for three loading conditions (uniaxial tension-compression, biaxial tension-compression and alternated torsion) and a large range of remote plastic strain amplitudes, yielding three close macroscopic CSSCs. Somewhat lower stresses are obtained in torsion, particularly at high plastic strain amplitude. Our results are in agreement with all the published experimental data. The mean plastic strain is computed in each grain, yielding a particular polycrystalline mean grain plastic strain distribution for each loading condition and remote plastic strain. The plastic strain scatter increases for decreasing macroscopic strains. The number of cycles to the first micro-crack initiation corresponding to the aforesaid plastic strain distributions is then calculated using a surface-roughness-based initiation criterion. The effect of the different loading conditions is finally discussed. (authors)

  8. Abdominal fat distribution on computed tomography predicts ureteric calculus fragmentation by shock wave lithotripsy

    International Nuclear Information System (INIS)

    Juan, Hsu-Cheng; Chou, Yii-Her; Lin, Hung-Yu; Yang, Yi-Hsin; Shih, Paul Ming-Chen; Chuang, Shu-Mien; Shen, Jung-Tsung; Juan, Yung-Shun

    2012-01-01

    To assess the effects of abdominal fat on shock wave lithotripsy (SWL). We used pre-SWL unenhanced computed tomography (CT) to evaluate the impact of abdominal fat distribution and calculus characteristics on the outcome of SWL. One hundred and eighty-five patients with a solitary ureteric calculus treated with SWL were retrospectively reviewed. Each patient underwent unenhanced CT within 1 month before SWL treatment. Treatment outcomes were evaluated 1 month later. Unenhanced CT parameters, including calculus surface area, Hounsfield unit (HU) density, abdominal fat area and skin to calculus distance (SSD) were analysed. One hundred and twenty-eight of the 185 patients were found to be calculus-free following treatment. HU density, total fat area, visceral fat area and SSD were identified as significant variables on multivariate logistic regression analysis. The receiver-operating characteristic analyses showed that total fat area, para/perirenal fat area and visceral fat area were sensitive predictors of SWL outcomes. This study revealed that higher quantities of abdominal fat, especially visceral fat, are associated with a lower calculus-free rate following SWL treatment. Unenhanced CT is a convenient technique for diagnosing the presence of a calculus, assessing the intra-abdominal fat distribution and thereby helping to predict the outcome of SWL. (orig.)

  9. Simple area-based measurement for multidetector computed tomography to predict left ventricular size

    International Nuclear Information System (INIS)

    Schlett, Christopher L.; Kwait, Dylan C.; Mahabadi, Amir A.; Hoffmann, Udo; Bamberg, Fabian; O'Donnell, Christopher J.; Fox, Caroline S.

    2010-01-01

    Measures of left ventricular (LV) mass and dimensions are independent predictors of morbidity and mortality. We determined whether an axial area-based method by computed tomography (CT) provides an accurate estimate of LV mass and volume. A total of 45 subjects (49% female, 56.0 ± 12 years) with a wide range of LV geometry underwent contrast-enhanced 64-slice CT. LV mass and volume were derived from 3D data. 2D images were analysed to determine LV area, the direct transverse cardiac diameter (dTCD) and the cardiothoracic ratio (CTR). Furthermore, feasibility was confirmed in 100 Framingham Offspring Cohort subjects. 2D measures of LV area, dTCD and CTR were 47.3 ± 8 cm², 14.7 ± 1.5 cm and 0.54 ± 0.05, respectively. 3D-derived LV volume (end-diastolic) and mass were 148.9 ± 45 cm³ and 124.2 ± 34 g, respectively. Excellent inter- and intra-observer agreement was shown for 2D LV area measurements (both intraclass correlation coefficients (ICC) = 0.99, p 0.27). Compared with the traditionally used CTR, LV size can be accurately predicted based on a simple and highly reproducible axial LV area-based measurement. (orig.)

  10. Semiquantitative dynamic computed tomography to predict response to anti-platelet therapy in acute cerebral infarction

    International Nuclear Information System (INIS)

    Chokyu, K.; Shimizu, K.; Fukumoto, M.; Mori, T.; Mokudai, T.; Mori, K.

    2002-01-01

    We investigated whether dynamic computed tomography (CT) in patients with acute cerebral infarction could identify patients likely to respond to anti-platelet therapy. Seventy patients underwent semiquantitative dynamic CT within 6 h as well as cerebral angiography. All then received anti-platelet therapy with a thromboxane A2 synthetase inhibitor. The peak value (PV) and time-to-peak (TP) of the time-density curves for the Sylvian fissure were extracted from the dynamic CT data and, after standardizing interpatient data, two indices, the PV/TP index and the TP index, were prepared in a standard semiquantitative manner. Both the PV/TP index and the TP index were effective in discriminating between 48 responders (modified Rankin scale (mRS): 0 to 2) and 22 non-responders (mRS: 3 to 5, or death: 6; both P 1.1) and non-compensated rCBF. Intermediate PV/TP values could not predict outcome. Dynamic CT prior to therapy can identify patients with acute cerebral infarction who are treatable with anti-platelet therapy alone. (orig.)

  11. Abdominal fat distribution on computed tomography predicts ureteric calculus fragmentation by shock wave lithotripsy

    Energy Technology Data Exchange (ETDEWEB)

    Juan, Hsu-Cheng; Chou, Yii-Her [Kaohsiung Medical University Hospital, Department of Urology, Kaohsiung (China); Lin, Hung-Yu [Kaohsiung Medical University, Graduate Institute of Medicine, Kaohsiung (China); E-Da Hospital/ I-Shou University, Department of Urology, Kaohsiung (China); Yang, Yi-Hsin [Kaohsiung Medical University, Institute of Oral Health Sciences, Kaohsiung (China); Shih, Paul Ming-Chen [Kaohsiung Municipal Hsiao-Kang Hospital, Department of Radiology, Kaohsiung (China); Kaohsiung Medical University, Department of Radiology, Kaohsiung (China); Chuang, Shu-Mien [Yuh-Ing Junior College of Health Care and Management, Kaohsiung (China); Shen, Jung-Tsung [Kaohsiung Municipal Hsiao-Kang Hospital, Department of Urology, Kaohsiung (China); Juan, Yung-Shun [Kaohsiung Medical University Hospital, Department of Urology, Kaohsiung (China); Kaohsiung Medical University, Graduate Institute of Medicine, Kaohsiung (China); Kaohsiung Medical University, Department of Urology, Faculty of Medicine, Kaohsiung (China)

    2012-08-15

    To assess the effects of abdominal fat on shock wave lithotripsy (SWL). We used pre-SWL unenhanced computed tomography (CT) to evaluate the impact of abdominal fat distribution and calculus characteristics on the outcome of SWL. One hundred and eighty-five patients with a solitary ureteric calculus treated with SWL were retrospectively reviewed. Each patient underwent unenhanced CT within 1 month before SWL treatment. Treatment outcomes were evaluated 1 month later. Unenhanced CT parameters, including calculus surface area, Hounsfield unit (HU) density, abdominal fat area and skin to calculus distance (SSD) were analysed. One hundred and twenty-eight of the 185 patients were found to be calculus-free following treatment. HU density, total fat area, visceral fat area and SSD were identified as significant variables on multivariate logistic regression analysis. The receiver-operating characteristic analyses showed that total fat area, para/perirenal fat area and visceral fat area were sensitive predictors of SWL outcomes. This study revealed that higher quantities of abdominal fat, especially visceral fat, are associated with a lower calculus-free rate following SWL treatment. Unenhanced CT is a convenient technique for diagnosing the presence of a calculus, assessing the intra-abdominal fat distribution and thereby helping to predict the outcome of SWL. (orig.)

  12. Computational tools for genome-wide miRNA prediction and study

    KAUST Repository

    Malas, T.B.; Ravasi, Timothy

    2012-01-01

    MicroRNAs (miRNAs) are single-stranded non-coding RNAs, usually 22 nucleotides in length, that play an important post-transcriptional regulation role in many organisms. MicroRNAs bind through a seed sequence to the 3'-untranslated region (UTR) of the target messenger RNA (mRNA), inducing degradation or inhibition of translation and resulting in a reduction in the protein level. This regulatory mechanism is central to many biological processes, and its perturbation could lead to diseases such as cancer. Given the biological importance of miRNAs, there is a great need to identify and study their targets and functions. However, miRNAs are very difficult to clone in the lab, and this has hindered the identification of novel miRNAs. Next-generation sequencing coupled with new computational tools has recently evolved to help researchers efficiently identify large numbers of novel miRNAs. In this review, we describe recent miRNA prediction tools and discuss their priorities, advantages and disadvantages.

  13. Evaluation of chest computed tomography in patients after pneumonectomy to predict contralateral pneumothorax

    International Nuclear Information System (INIS)

    Maniwa, Tomohiro; Saito, Yukihito; Saito, Tomohito; Kaneda, Hiroyuki; Imamura, Hiroji

    2009-01-01

    Contralateral pneumothorax is a severe complication after pneumonectomy. We evaluated the mediastinal shift and the residual lung in patients who had undergone pneumonectomy to predict the incidence of contralateral pneumothorax. We evaluated 21 cases of pneumonectomy performed from 1996 to 2006. For this study, we excluded patients with recurrent neoplasm, empyema, or hemothorax. We reviewed the computed tomography (CT) results of 13 patients who had undergone pneumonectomy to compare the bullae in the residual lungs, carina shifts, and herniation of the residual lungs before and after pneumonectomy. When evaluating the degree of herniation 4-6 cm below the carina, the anterior and posterior pulmonary hernias were classified as grade A, B, or C. We also investigated the preoperative respiratory function in all 13 patients. Two patients suffered contralateral pneumothorax after left pneumonectomy. Both patients who suffered contralateral pneumothorax after pneumonectomy had bullae. The percentage forced expiratory volume in 1 s (FEV 1.0% ) was <70% in these two patients. Carina shifts and lung herniation were found to be greater after left pneumonectomy than after right pneumonectomy. The bullae in the lung and obstructive pulmonary disease are associated not only with spontaneous pneumothorax but also with contralateral pneumothorax after pneumonectomy. Lung herniation and mediastinal shift are greater after left pneumonectomy than after right pneumonectomy, which may be related to contralateral pneumothorax after pneumonectomy. (author)

  14. Analysis and computer program for rupture-risk prediction of abdominal aortic aneurysms

    Directory of Open Access Journals (Sweden)

    Li Zhonghua

    2006-03-01

    Full Text Available Abstract Background Ruptured abdominal aortic aneurysms (AAAs) are the 13th leading cause of death in the United States. While AAA rupture may occur without significant warning, its risk assessment is generally based on critical values of the maximum AAA diameter (>5 cm) and AAA-growth rate (>0.5 cm/year). These criteria may be insufficient for reliable AAA-rupture risk assessment, especially when predicting possible rupture of smaller AAAs. Methods Based on clinical evidence, eight biomechanical factors with associated weighting coefficients were determined and summed up in terms of a dimensionless, time-dependent severity parameter, SP(t). The most important factor is the maximum wall stress, for which a semi-empirical correlation has been developed. Results The patient-specific SP(t) indicates the risk level of AAA rupture and provides a threshold value when surgical intervention becomes necessary. The severity parameter was validated with four clinical cases and its application is demonstrated for two AAA cases. Conclusion As part of computational AAA-risk assessment and medical management, a patient-specific severity parameter 0
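
    An illustrative weighted-sum calculation of the kind of severity parameter described above; the factor names, normalized values, weights and decision threshold are placeholders, not the coefficients developed in the paper.

        def severity_parameter(factors, weights):
            # dimensionless SP(t) as a weighted sum of normalized (0..1) biomechanical risk factors
            return sum(weights[name] * value for name, value in factors.items())

        factors = {"wall_stress": 0.70, "max_diameter": 0.55, "growth_rate": 0.40, "asymmetry": 0.30}
        weights = {"wall_stress": 0.40, "max_diameter": 0.25, "growth_rate": 0.20, "asymmetry": 0.15}

        sp = severity_parameter(factors, weights)
        print("SP =", round(sp, 2), "- consider intervention" if sp > 0.45 else "- continue surveillance")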

  15. Computational tools for genome-wide miRNA prediction and study

    KAUST Repository

    Malas, T.B.

    2012-11-02

    MicroRNAs (miRNAs) are single-stranded non-coding RNAs, usually 22 nucleotides in length, that play an important post-transcriptional regulation role in many organisms. MicroRNAs bind through a seed sequence to the 3'-untranslated region (UTR) of the target messenger RNA (mRNA), inducing degradation or inhibition of translation and resulting in a reduction in the protein level. This regulatory mechanism is central to many biological processes, and its perturbation could lead to diseases such as cancer. Given the biological importance of miRNAs, there is a great need to identify and study their targets and functions. However, miRNAs are very difficult to clone in the lab, and this has hindered the identification of novel miRNAs. Next-generation sequencing coupled with new computational tools has recently evolved to help researchers efficiently identify large numbers of novel miRNAs. In this review, we describe recent miRNA prediction tools and discuss their priorities, advantages and disadvantages.

  16. Computability, Gödel's incompleteness theorem, and an inherent limit on the predictability of evolution.

    Science.gov (United States)

    Day, Troy

    2012-04-07

    The process of evolutionary diversification unfolds in a vast genotypic space of potential outcomes. During the past century, there have been remarkable advances in the development of theory for this diversification, and the theory's success rests, in part, on the scope of its applicability. A great deal of this theory focuses on a relatively small subset of the space of potential genotypes, chosen largely based on historical or contemporary patterns, and then predicts the evolutionary dynamics within this pre-defined set. To what extent can such an approach be pushed to a broader perspective that accounts for the potential open-endedness of evolutionary diversification? There have been a number of significant theoretical developments along these lines but the question of how far such theory can be pushed has not been addressed. Here a theorem is proven demonstrating that, because of the digital nature of inheritance, there are inherent limits on the kinds of questions that can be answered using such an approach. In particular, even in extremely simple evolutionary systems, a complete theory accounting for the potential open-endedness of evolution is unattainable unless evolution is progressive. The theorem is closely related to Gödel's incompleteness theorem, and to the halting problem from computability theory.

  17. Computational modeling for prediction of the shear stress of three-dimensional isotropic and aligned fiber networks.

    Science.gov (United States)

    Park, Seungman

    2017-09-01

    Interstitial flow (IF) is a creeping flow through the interstitial space of the extracellular matrix (ECM). IF plays a key role in diverse biological functions, such as tissue homeostasis, cell function and behavior. Currently, most studies that have characterized IF have focused on the permeability of ECM or shear stress distribution on the cells, but less is known about the prediction of shear stress on the individual fibers or fiber networks despite its significance in the alignment of matrix fibers and cells observed in fibrotic or wound tissues. In this study, I developed a computational model to predict shear stress for different structured fibrous networks. To generate isotropic models, a random growth algorithm and a second-order orientation tensor were employed. Then, a three-dimensional (3D) solid model was created using computer-aided design (CAD) software for the aligned models (i.e., parallel, perpendicular and cubic models). Subsequently, a tetrahedral unstructured mesh was generated and flow solutions were calculated by solving equations for mass and momentum conservation for all models. Through the flow solutions, I estimated permeability using Darcy's law. Average shear stress (ASS) on the fibers was calculated by averaging the wall shear stress of the fibers. By using nonlinear surface fitting of permeability, viscosity, velocity, porosity and ASS, I devised new computational models. Overall, the developed models showed that higher porosity induced higher permeability, as previous empirical and theoretical models have shown. For comparison of the permeability, the present computational models were matched well with previous models, which justify our computational approach. ASS tended to increase linearly with respect to inlet velocity and dynamic viscosity, whereas permeability was almost the same. Finally, the developed model nicely predicted the ASS values that had been directly estimated from computational fluid dynamics (CFD). The present
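
    A simple sketch of the Darcy estimate mentioned above (the CFD flow solution itself is not reproduced): permeability follows from the superficial velocity, fluid viscosity, domain length and pressure drop, here with made-up values on an interstitial-flow scale.

        def darcy_permeability(velocity, viscosity, length, pressure_drop):
            # Darcy's law for creeping flow through a porous domain: k = u * mu * L / dP
            return velocity * viscosity * length / pressure_drop

        k = darcy_permeability(velocity=1e-6,        # m/s, assumed superficial velocity
                               viscosity=1e-3,       # Pa*s, water-like interstitial fluid
                               length=50e-6,         # m, assumed fiber-network domain length
                               pressure_drop=0.05)   # Pa, assumed pressure drop across the domain
        print("permeability ~ %.2e m^2" % k)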

  18. Predicting knee replacement damage in a simulator machine using a computational model with a consistent wear factor.

    Science.gov (United States)

    Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J

    2008-02-01

    Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can predict accurate TKR damage measured in a simulator machine, and second, to investigate how choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affect the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. Choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. Surface evolution method was important only during the initial cycles, where variable step was needed to capture rapid geometry changes due to the creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that surface evolution method matters only during the initial
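
    A conceptual sketch of such an iterative damage loop (not the authors' model): each batch of gait cycles removes a wear depth proportional to contact pressure and sliding distance through a constant pin-on-plate wear factor, after which the contact state is allowed to evolve; all numbers are assumptions.

        import numpy as np

        wear_factor = 1.0e-16            # m^2/N, assumed pin-on-plate wear factor for the material pair
        cycles_total, cycles_per_step = 5_000_000, 500_000
        pressure = np.full(100, 8.0e6)   # Pa, assumed contact pressure over insert surface patches
        sliding_per_cycle = 0.02         # m of sliding per gait cycle (assumed)
        depth = np.zeros_like(pressure)  # accumulated damage depth per patch

        for _ in range(cycles_total // cycles_per_step):
            depth += wear_factor * pressure * sliding_per_cycle * cycles_per_step   # Archard-type update
            pressure *= 0.98   # crude stand-in for contact redistribution as the geometry evolves

        print("max damage depth after 5 million cycles: %.3f mm" % (depth.max() * 1e3))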

  19. The predictive value of single-photon emission computed tomography/computed tomography for sentinel lymph node localization in head and neck cutaneous malignancy.

    Science.gov (United States)

    Remenschneider, Aaron K; Dilger, Amanda E; Wang, Yingbing; Palmer, Edwin L; Scott, James A; Emerick, Kevin S

    2015-04-01

    Preoperative localization of sentinel lymph nodes in head and neck cutaneous malignancies can be aided by single-photon emission computed tomography/computed tomography (SPECT/CT); however, its true predictive value for identifying lymph nodes intraoperatively remains unquantified. This study aims to understand the sensitivity, specificity, and positive and negative predictive values of SPECT/CT in sentinel lymph node biopsy for cutaneous malignancies of the head and neck. Blinded retrospective imaging review with comparison to intraoperative gamma probe confirmed sentinel lymph nodes. A consecutive series of patients with a head and neck cutaneous malignancy underwent preoperative SPECT/CT followed by sentinel lymph node biopsy with a gamma probe. Two nuclear medicine physicians, blinded to clinical data, independently reviewed each SPECT/CT. Activity within radiographically defined nodal basins was recorded and compared to intraoperative gamma probe findings. Sensitivity, specificity, and negative and positive predictive values were calculated with subgroup stratification by primary tumor site. Ninety-two imaging reads were performed on 47 patients with cutaneous malignancy who underwent SPECT/CT followed by sentinel lymph node biopsy. Overall sensitivity was 73%, specificity 92%, positive predictive value 54%, and negative predictive value 96%. The predictive ability of SPECT/CT to identify the basin or an adjacent basin containing the single hottest node was 92%. SPECT/CT overestimated uptake by an average of one nodal basin. In the head and neck, SPECT/CT has higher reliability for primary lesions of the eyelid, scalp, and cheek. SPECT/CT has high sensitivity, specificity, and negative predictive value, but may overestimate relevant nodal basins in sentinel lymph node biopsy. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
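
    The four reported metrics follow directly from a 2x2 table of SPECT/CT-positive basins versus gamma-probe-confirmed sentinel nodes; the counts below are invented purely to illustrate the arithmetic (they happen to reproduce figures close to those reported above).

        def diagnostic_metrics(tp, fp, tn, fn):
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # hypothetical basin-level counts
        print(diagnostic_metrics(tp=22, fp=19, tn=210, fn=8))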

  20. Pathological fracture prediction in patients with metastatic lesions can be improved with quantitative computed tomography based computer models

    NARCIS (Netherlands)

    Tanck, Esther; van Aken, Jantien B.; van der Linden, Yvette M.; Schreuder, H.W. Bart; Binkowski, Marcin; Huizenga, Henk; Verdonschot, Nico

    2009-01-01

    Purpose: In clinical practice, there is an urgent need to improve the prediction of fracture risk for cancer patients with bone metastases. The methods that are currently used to estimate fracture risk are dissatisfying, hence affecting the quality of life of patients with a limited life expectancy.

  1. Energy Consumption and Indoor Environment Predicted by a Combination of Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm

    2003-01-01

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution is introduced for improvement of the predictions of both the energy consumption and the indoor environment. The article describes a calculation

  2. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes

    International Nuclear Information System (INIS)

    Keshavarz, Mohammad Hossein; Gharagheizi, Farhad; Shokrolahi, Arash; Zakinejad, Sajjad

    2012-01-01

    Highlights: ► A novel method is introduced for desk calculation of the toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results for 58 compounds are more reliable than those predicted by the QSTR method. ► The present method gives good predictions for a further 324 benzoic acid compounds. - Abstract: Most benzoic acid derivatives are toxic, which may cause serious public health and environmental problems. Two novel, simple and reliable models are introduced for desk calculation of the toxicity of benzoic acid compounds in mice via oral LD50, with as much reliance on their answers as one could attach to the outputs of more complex methods. They require only the elemental composition and molecular fragments, without using any computer codes. The first model is based on only the number of carbon and hydrogen atoms, and it can be improved by several molecular fragments in the second model. For 57 benzoic compounds, for which the computed results of a quantitative structure-toxicity relationship (QSTR) were recently reported, the predicted results of the two simple models of the present method are more reliable than the QSTR computations. The present simple method was also tested with a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model.

  3. ''de novo'' aneurysms following endovascular procedures

    International Nuclear Information System (INIS)

    Briganti, F.; Cirillo, S.; Caranci, F.; Esposito, F.; Maiuri, F.

    2002-01-01

    Two personal cases of ''de novo'' aneurysms of the anterior communicating artery (ACoA) occurring 9 and 4 years, respectively, after endovascular carotid occlusion are described. A review of the 30 reported cases (including our own two) of ''de novo'' aneurysms after occlusion of the major cerebral vessels has shown some features, including a rather long time interval after the endovascular procedure of up to 20-25 years (average 9.6 years), a preferential ACoA (36.3%) and internal carotid artery-posterior communicating artery (ICA-PCoA) (33.3%) location of the ''de novo'' aneurysms, and a 10% rate of multiple aneurysms. These data are compared with those of the group of reported spontaneous ''de novo'' aneurysms after SAH or previous aneurysm clipping. We agree that the frequency of ''de novo'' aneurysms after major-vessel occlusion (two among ten procedures in our series, or 20%) is higher than commonly reported (0 to 11%). For this reason, we suggest that patients who have been submitted to endovascular major-vessel occlusion be followed up for up to 20-25 years after the procedure, using non-invasive imaging studies such as MR angiography and high-resolution CT angiography. On the other hand, periodic digital angiography has a questionable risk-benefit ratio; it may be used when a ''de novo'' aneurysm is detected or suspected on non-invasive studies. The progressive enlargement of the ACoA after carotid occlusion, as described in our case 1, must be considered a radiological finding of risk for ''de novo'' aneurysm formation. (orig.)

  4. Soft Computing Approach to Evaluate and Predict Blast-Induced Ground Vibration

    Science.gov (United States)

    Khandelwal, Manoj

    2010-05-01

    For the same excavation site, different predictors give different values of safe PPV vis-à-vis safe charge per delay, and there is no uniformity in the results predicted by the different predictors. All vibration predictor equations have their own site-specific constants, so they cannot be used in a generalized way with confidence and zero level of risk. To overcome this limitation, soft computing tools such as the artificial neural network (ANN) have attracted attention because of their ability to learn from previously acquired patterns. An ANN is a highly interconnected network of a large number of processing elements, called neurons, in an architecture inspired by the brain. It can be massively parallel and is hence said to exhibit parallel distributed processing. Once the network has been trained with a sufficient number of sample data sets, it can make reliable and trustworthy predictions, on the basis of its previous learning, about the output for a new input data set of similar pattern. This paper deals with the application of ANN to the prediction of ground vibration, taking into consideration the maximum charge per delay and the distance from the blast face to the monitoring point. To investigate the appropriateness of this approach, the predictions by ANN have also been compared with other vibration predictor equations.
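
    A minimal sketch of the idea described above is shown below, assuming a standard feed-forward network (scikit-learn's MLPRegressor) trained on the two inputs named in the abstract, maximum charge per delay and distance from the blast face to the monitoring point. The training records are synthetic, generated from a conventional attenuation-type relation purely for illustration; they are not the paper's data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic monitoring records: [max charge per delay (kg), distance (m)] -> PPV (mm/s).
rng = np.random.default_rng(0)
charge = rng.uniform(50, 500, 200)
dist = rng.uniform(100, 1000, 200)
ppv = 1000 * (dist / np.sqrt(charge)) ** -1.6 * rng.lognormal(0.0, 0.1, 200)

X = np.column_stack([charge, dist])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10, 10),
                                   max_iter=5000, random_state=0))
model.fit(X, ppv)

# Predict ground vibration for a planned blast: 300 kg per delay, monitored at 400 m.
print("predicted PPV (mm/s):", model.predict([[300.0, 400.0]])[0])
```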

  5. Frequency Domain Computer Programs for Prediction and Analysis of Rail Vehicle Dynamics : Volume 1. Technical Report

    Science.gov (United States)

    1975-12-01

    Frequency domain computer programs developed or acquired by TSC for the analysis of rail vehicle dynamics are described in two volumes. Volume I defines the general analytical capabilities required for computer programs applicable to single rail vehi...

  6. Simulation and high performance computing-Building a predictive capability for fusion

    International Nuclear Information System (INIS)

    Strand, P.I.; Coelho, R.; Coster, D.; Eriksson, L.-G.; Imbeaux, F.; Guillerminet, Bernard

    2010-01-01

    The Integrated Tokamak Modelling Task Force (ITM-TF) is developing an infrastructure where the validation needs, as being formulated in terms of multi-device data access and detailed physics comparisons aiming for inclusion of synthetic diagnostics in the simulation chain, are key components. As the activity and the modelling tools are aimed for general use, although focused on ITER plasmas, a device independent approach to data transport and a standardized approach to data management (data structures, naming, and access) is being developed in order to allow cross-validation between different fusion devices using a single toolset. Extensive work has already gone into, and is continuing to go into, the development of standardized descriptions of the data (Consistent Physical Objects). The longer term aim is a complete simulation platform which is expected to last and be extended in different ways for the coming 30 years. The technical underpinning is therefore of vital importance. In particular the platform needs to be extensible and open-ended to be able to take full advantage of not only today's most advanced technologies but also be able to marshal future developments. As a full level comprehensive prediction of ITER physics rapidly becomes expensive in terms of computing resources, the simulation framework needs to be able to use both grid and HPC computing facilities. Hence data access and code coupling technologies are required to be available for a heterogeneous, possibly distributed, environment. The developments in this area are pursued in a separate project-EUFORIA (EU Fusion for ITER Applications) which is providing about 15 professional person year (ppy) per annum from 14 different institutes. The range and size of the activity is not only technically challenging but is providing some unique management challenges in that a large and geographically distributed team (a truly pan-European set of researchers) need to be coordinated on a fairly detailed

  7. Algorithm for selection of optimized EPR distance restraints for de novo protein structure determination

    Science.gov (United States)

    Kazmier, Kelli; Alexander, Nathan S.; Meiler, Jens; Mchaourab, Hassane S.

    2010-01-01

    A hybrid protein structure determination approach combining sparse Electron Paramagnetic Resonance (EPR) distance restraints and Rosetta de novo protein folding has been previously demonstrated to yield high quality models (Alexander et al., 2008). However, widespread application of this methodology to proteins of unknown structures is hindered by the lack of a general strategy to place spin label pairs in the primary sequence. In this work, we report the development of an algorithm that optimally selects spin labeling positions for the purpose of distance measurements by EPR. For the α-helical subdomain of T4 lysozyme (T4L), simulated restraints that maximize sequence separation between the two spin labels while simultaneously ensuring pairwise connectivity of secondary structure elements yielded vastly improved models by Rosetta folding. 50% of all these models have the correct fold compared to only 21% and 8% correctly folded models when randomly placed restraints or no restraints are used, respectively. Moreover, the improvements in model quality require a limited number of optimized restraints, the number of which is determined by the pairwise connectivities of T4L α-helices. The predicted improvement in Rosetta model quality was verified by experimental determination of distances between spin label pairs selected by the algorithm. Overall, our results reinforce the rationale for the combined use of sparse EPR distance restraints and de novo folding. By alleviating the experimental bottleneck associated with restraint selection, this algorithm sets the stage for extending computational structure determination to larger, traditionally elusive protein topologies of critical structural and biochemical importance. PMID:21074624
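
    The selection principle described above, maximizing the sequence separation of each label pair while keeping every pair of secondary structure elements connected by a restraint, can be sketched as a simple enumeration over helix pairs. The helix boundaries below are hypothetical and the scoring is a simplification of the published algorithm.

```python
from itertools import combinations

# Hypothetical helix boundaries (residue ranges) for an alpha-helical domain.
helices = {
    "H1": (60, 80),
    "H2": (83, 92),
    "H3": (93, 106),
    "H4": (108, 113),
}

def best_pair(range_a, range_b):
    """Residue pair, one per helix, with the largest sequence separation."""
    candidates = [(i, j) for i in range_a for j in range_b]
    return max(candidates, key=lambda p: abs(p[0] - p[1]))

# One restraint per helix pair guarantees pairwise connectivity of the elements.
restraints = []
for (name_a, (sa, ea)), (name_b, (sb, eb)) in combinations(helices.items(), 2):
    i, j = best_pair(range(sa, ea + 1), range(sb, eb + 1))
    restraints.append((name_a, name_b, i, j, abs(i - j)))

for name_a, name_b, i, j, sep in restraints:
    print(f"{name_a}-{name_b}: label residues {i} and {j} (separation {sep})")
```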

  8. Accuracy of cone-beam computed tomography in predicting the diameter of unerupted teeth.

    Science.gov (United States)

    Nguyen, Emerald; Boychuk, Darrell; Orellana, Maria

    2011-08-01

    An accurate prediction of the mesiodistal diameter (MDD) of the erupting permanent teeth is essential in orthodontic diagnosis and treatment planning during the mixed dentition period. Our objective was to test the accuracy and reproducibility of cone-beam computed tomography (CBCT) in predicting the MDD of unerupted teeth. Our secondary objective was to determine the accuracy and reproducibility of 3 viewing methods by using 2 CBCT software programs, InVivoDental (version 4.0; Anatomage, San Jose, Calif) and CBWorks (version 3.0, CyberMed, Seoul, Korea) in measuring the MDD of teeth in models simulating unerupted teeth. CBCT data were collected on the CB MercuRay (Hitachi Medical Corporation, Tokyo, Japan). Models of unerupted teeth (n = 25), created by embedding 25 tooth samples into a polydimethylsiloxane polymer with a similar density to tissues surrounding teeth, were scanned and measured by 2 investigators. Repeated MDD measurements of each sample were made by using 3 CBCT viewing methods: InVivo Section, InVivo Volume Render (both Anatomage), and CBWorks Volume Render (version 3.0, CyberMed). These measurements were then compared with the MDD physically measured by digital calipers before the teeth were embedded and scanned. All 3 of the new methods had mean measurements that were statistically significantly less (P <0.0001) than the physical method, adjusting for investigator and tooth effects. Specifically, InVivo Section measurements were 0.3 mm (95% CI, -0.4 to -0.2) less than the measurements with calipers, InVivo Volume Render measurements were 0.5 mm less (95% CI, -0.6 to -0.4) than those with calipers, and CBWorks Volume Render measurements were 0.4 mm less (95% CI, -0.4 to -0.3) than those with calipers. Overall, there were high correlation values among the 3 viewing methods, indicating that CBCT can be used to measure the MDD of unerupted teeth. The InVivo Section method had the greatest correlation with the calipers. Copyright © 2011 American
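
    The reported differences between CBCT and caliper measurements are paired mean biases with 95% confidence intervals. The sketch below shows one way such a paired comparison can be computed; the study itself adjusted for investigator and tooth effects, which this simplified example does not, and the measurements here are simulated.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (mm) of the same 25 teeth: calipers vs. one CBCT viewing method.
rng = np.random.default_rng(1)
caliper = rng.normal(9.5, 1.0, 25)
cbct = caliper - 0.3 + rng.normal(0.0, 0.2, 25)   # simulated systematic underestimate

diff = cbct - caliper
bias = diff.mean()
sem = stats.sem(diff)
ci_low, ci_high = stats.t.interval(0.95, len(diff) - 1, loc=bias, scale=sem)
t_stat, p_value = stats.ttest_rel(cbct, caliper)

print(f"mean bias {bias:.2f} mm (95% CI {ci_low:.2f} to {ci_high:.2f}), p={p_value:.4f}")
```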

  9. A Computational Gene Expression Score for Predicting Immune Injury in Renal Allografts.

    Directory of Open Access Journals (Sweden)

    Tara K Sigdel

    Full Text Available Whole genome microarray meta-analyses of 1030 kidney, heart, lung and liver allograft biopsies identified a common immune response module (CRM) of 11 genes that define acute rejection (AR) across different engrafted tissues. We evaluated if the CRM genes can provide a molecular microscope to quantify graft injury in acute rejection (AR) and predict risk of progressive interstitial fibrosis and tubular atrophy (IFTA) in histologically normal kidney biopsies. Computational modeling was done on tissue qPCR-based gene expression measurements for the 11 CRM genes in 146 independent renal allografts from 122 unique patients with AR (n = 54) and no-AR (n = 92). 24 demographically matched patients with no-AR had 6- and 24-month paired protocol biopsies; all had histologically normal 6-month biopsies, and 12 had evidence of progressive IFTA (pIFTA) on their 24-month biopsies. Results were correlated with demographic, clinical and pathology variables. The 11-gene qPCR-based tissue CRM score (tCRM) was significantly increased in AR (5.68 ± 0.91) when compared to STA (1.29 ± 0.28; p < 0.001) and in pIFTA (7.94 ± 2.278 versus 2.28 ± 0.66; p = 0.04), with greatest significance for CXCL9 and CXCL10 in AR (p < 0.001) and CD6 (p < 0.01), CXCL9 (p < 0.05), and LCK (p < 0.01) in pIFTA. tCRM was a significant independent correlate of biopsy-confirmed AR (p < 0.001; AUC of 0.900; 95% CI = 0.705-0.903). Gene expression modeling of 6-month biopsies across 7/11 genes (CD6, INPP5D, ISG20, NKG7, PSMB9, RUNX3, and TAP1) significantly (p = 0.037) predicted the development of pIFTA at 24 months. Genome-wide tissue gene expression data mining has supported the development of a tCRM-qPCR based assay for evaluating graft immune inflammation. The tCRM score quantifies injury in AR and stratifies patients at increased risk of future pIFTA prior to any perturbation of graft function or histology.
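
    The exact weighting behind the tCRM score is not given in this record. The sketch below only illustrates the general idea of collapsing qPCR expression of the 11 CRM genes into a composite score and assessing its discrimination of AR by AUC; the expression values are invented and the scoring (a mean of per-gene z-scores) is a stand-in for the published model.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical relative expression (e.g., 2^-dCt) for the 11 CRM genes in a few biopsies.
# Rows = biopsies, columns = genes; labels: 1 = acute rejection, 0 = stable.
expr = np.array([
    [5.1, 4.8, 6.2, 3.9, 5.5, 4.1, 3.8, 4.9, 5.7, 4.4, 5.0],   # AR
    [1.2, 0.9, 1.5, 1.1, 1.3, 0.8, 1.0, 1.4, 1.2, 0.9, 1.1],   # stable
    [4.4, 5.2, 5.8, 4.1, 4.9, 3.7, 4.2, 5.1, 5.3, 4.0, 4.6],   # AR
    [1.0, 1.3, 0.7, 1.2, 0.9, 1.1, 0.8, 1.0, 1.2, 1.1, 0.9],   # stable
])
labels = np.array([1, 0, 1, 0])

# Simplified composite score: mean of per-gene z-scores (the published tCRM weighting may differ).
z = (expr - expr.mean(axis=0)) / expr.std(axis=0)
score = z.mean(axis=1)

print("scores:", np.round(score, 2))
print("AUC:", roc_auc_score(labels, score))
```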

  10. From prediction error to incentive salience: mesolimbic computation of reward motivation.

    Science.gov (United States)

    Berridge, Kent C

    2012-04-01

    Reward contains separable psychological components of learning, incentive motivation and pleasure. Most computational models have focused only on the learning component of reward, but the motivational component is equally important in reward circuitry, and even more directly controls behavior. Modeling the motivational component requires recognition of additional control factors besides learning. Here I discuss how mesocorticolimbic mechanisms generate the motivation component of incentive salience. Incentive salience takes Pavlovian learning and memory as one input and as an equally important input takes neurobiological state factors (e.g. drug states, appetite states, satiety states) that can vary independently of learning. Neurobiological state changes can produce unlearned fluctuations or even reversals in the ability of a previously learned reward cue to trigger motivation. Such fluctuations in cue-triggered motivation can dramatically depart from all previously learned values about the associated reward outcome. Thus, one consequence of the difference between incentive salience and learning can be to decouple cue-triggered motivation of the moment from previously learned values of how good the associated reward has been in the past. Another consequence can be to produce irrationally strong motivation urges that are not justified by any memories of previous reward values (and without distorting associative predictions of future reward value). Such irrationally strong motivation may be especially problematic in addiction. To understand these phenomena, future models of mesocorticolimbic reward function should address the neurobiological state factors that participate to control generation of incentive salience. © 2012 The Author. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  11. Abdominal fat distribution on computed tomography predicts ureteric calculus fragmentation by shock wave lithotripsy.

    Science.gov (United States)

    Juan, Hsu-Cheng; Lin, Hung-Yu; Chou, Yii-Her; Yang, Yi-Hsin; Shih, Paul Ming-Chen; Chuang, Shu-Mien; Shen, Jung-Tsung; Juan, Yung-Shun

    2012-08-01

    To assess the effects of abdominal fat on shock wave lithotripsy (SWL). We used pre-SWL unenhanced computed tomography (CT) to evaluate the impact of abdominal fat distribution and calculus characteristics on the outcome of SWL. One hundred and eighty-five patients with a solitary ureteric calculus treated with SWL were retrospectively reviewed. Each patient underwent unenhanced CT within 1 month before SWL treatment. Treatment outcomes were evaluated 1 month later. Unenhanced CT parameters, including calculus surface area, Hounsfield unit (HU) density, abdominal fat area and skin to calculus distance (SSD) were analysed. One hundred and twenty-eight of the 185 patients were found to be calculus-free following treatment. HU density, total fat area, visceral fat area and SSD were identified as significant variables on multivariate logistic regression analysis. The receiver-operating characteristic analyses showed that total fat area, para/perirenal fat area and visceral fat area were sensitive predictors of SWL outcomes. This study revealed that higher quantities of abdominal fat, especially visceral fat, are associated with a lower calculus-free rate following SWL treatment. Unenhanced CT is a convenient technique for diagnosing the presence of a calculus, assessing the intra-abdominal fat distribution and thereby helping to predict the outcome of SWL. • Unenhanced CT is now widely used to assess ureteric calculi. • The same CT protocol can provide measurements of abdominal fat distribution. • Ureteric calculi are usually treated by shock wave lithotripsy (SWL). • Greater intra-abdominal fat stores are generally associated with poorer SWL results.

  12. Survival Prediction in Pancreatic Ductal Adenocarcinoma by Quantitative Computed Tomography Image Analysis.

    Science.gov (United States)

    Attiyeh, Marc A; Chakraborty, Jayasree; Doussot, Alexandre; Langdon-Embry, Liana; Mainarich, Shiana; Gönen, Mithat; Balachandran, Vinod P; D'Angelica, Michael I; DeMatteo, Ronald P; Jarnagin, William R; Kingham, T Peter; Allen, Peter J; Simpson, Amber L; Do, Richard K

    2018-04-01

    Pancreatic cancer is a highly lethal cancer with no established a priori markers of survival. Existing nomograms rely mainly on post-resection data and are of limited utility in directing surgical management. This study investigated the use of quantitative computed tomography (CT) features to preoperatively assess survival for pancreatic ductal adenocarcinoma (PDAC) patients. A prospectively maintained database identified consecutive chemotherapy-naive patients with CT angiography and resected PDAC between 2009 and 2012. Variation in CT enhancement patterns was extracted from the tumor region using texture analysis, a quantitative image analysis tool previously described in the literature. Two continuous survival models were constructed, with 70% of the data (training set) using Cox regression, first based only on preoperative serum cancer antigen (CA) 19-9 levels and image features (model A), and then on CA19-9, image features, and the Brennan score (composite pathology score; model B). The remaining 30% of the data (test set) were reserved for independent validation. A total of 161 patients were included in the analysis. Training and test sets contained 113 and 48 patients, respectively. Quantitative image features combined with CA19-9 achieved a c-index of 0.69 [integrated Brier score (IBS) 0.224] on the test data, while combining CA19-9, imaging, and the Brennan score achieved a c-index of 0.74 (IBS 0.200) on the test data. We present two continuous survival prediction models for resected PDAC patients. Quantitative analysis of CT texture features is associated with overall survival. Further work includes applying the model to an external dataset to increase the sample size for training and to determine its applicability.
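
    The abstract describes Cox regression models built from CA19-9 and CT texture features and evaluated by the concordance index (c-index). A minimal sketch of that workflow, using the lifelines library on synthetic data with hypothetical feature names, is shown below.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Hypothetical preoperative data: CA19-9 plus two CT texture features,
# with overall survival time (months) and a death indicator (synthetic throughout).
rng = np.random.default_rng(2)
n = 120
df = pd.DataFrame({
    "ca19_9": rng.lognormal(4.0, 1.0, n),
    "texture_entropy": rng.normal(5.0, 0.7, n),
    "texture_correlation": rng.normal(0.4, 0.1, n),
})
risk = 0.002 * df["ca19_9"] + 0.8 * df["texture_entropy"]
df["time"] = rng.exponential(60.0 / np.exp(risk - risk.mean()))
df["event"] = rng.integers(0, 2, n)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")

# The c-index measures how well the predicted risk orders the observed survival times.
c_index = concordance_index(df["time"], -cph.predict_partial_hazard(df), df["event"])
print("c-index:", round(c_index, 3))
```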

  13. Absolute Hounsfield unit measurement on noncontrast computed tomography cannot accurately predict struvite stone composition.

    Science.gov (United States)

    Marchini, Giovanni Scala; Gebreselassie, Surafel; Liu, Xiaobo; Pynadath, Cindy; Snyder, Grace; Monga, Manoj

    2013-02-01

    The purpose of our study was to determine, in vivo, whether single-energy noncontrast computed tomography (NCCT) can accurately predict the presence/percentage of struvite stone composition. We retrospectively searched for all patients with struvite components on stone composition analysis between January 2008 and March 2012. Inclusion criteria were NCCT prior to stone analysis and stone size ≥4 mm. A single urologist, blinded to stone composition, reviewed all NCCT to acquire stone location, dimensions, and Hounsfield unit (HU). HU density (HUD) was calculated by dividing mean HU by the stone's largest transverse diameter. Stone analysis was performed via Fourier transform infrared spectrometry. Independent sample Student's t-test and analysis of variance (ANOVA) were used to compare HU/HUD among groups. Spearman's correlation test was used to determine the correlation between HU and stone size and also HU/HUD to % of each component within the stone. Significance was considered at p < 0.05. HU correlated only weakly with stone size (R = 0.017; p = 0.912) and negatively with HUD (R = -0.20; p = 0.898). Comparing pure struvite stones (n = 5) with other miscellaneous stones (n = 39), no difference was found for HU (p = 0.09), but HUD was significantly lower for pure stones (27.9 ± 23.6 v 72.5 ± 55.9, respectively; p = 0.006). Again, significant overlaps were seen. Pure struvite stones have significantly lower HUD than mixed struvite stones, but overlap exists. A low HUD may increase the suspicion for a pure struvite calculus.
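
    HU density is defined in the abstract as the mean HU divided by the stone's largest transverse diameter; a direct transcription with illustrative values:

```python
def hounsfield_unit_density(mean_hu: float, largest_transverse_diameter_mm: float) -> float:
    """HU density (HUD) as defined above: mean HU / largest transverse diameter."""
    return mean_hu / largest_transverse_diameter_mm

# Illustrative values: a 700 HU stone measuring 9 mm across.
print(hounsfield_unit_density(700.0, 9.0))   # ~77.8 HU/mm
```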

  14. Clinical manifestations that predict abnormal brain computed tomography (CT) in children with minor head injury

    Directory of Open Access Journals (Sweden)

    Nesrin Alharthy

    2015-01-01

    Full Text Available Background: Computed tomography (CT) is used in pediatric traumatic brain injury (TBI) to ascertain neurological manifestations. Nevertheless, this practice is associated with adverse effects; reports in the literature suggest incidents of morbidity and mortality in children due to exposure to radiation. Hence, it is imperative to search for a reliable alternative. Objectives: The aim of this study is to find a reliable clinical alternative to detect an intracranial injury without resorting to CT. Materials and Methods: A retrospective cross-sectional study was undertaken in patients (1-14 years) with blunt head injury and a Glasgow Coma Scale (GCS) of 13-15 who had CT performed on them. Using statistical analysis, the correlation between clinical examination and positive CT manifestation was analyzed for different age groups and various mechanisms of injury. Results: No statistically significant association could be noted between parameters such as loss of consciousness, 'fall' as mechanism of injury, motor vehicle accidents (MVA), or more than two discrete episodes of vomiting and the CT finding of intracranial injury. The analyzed data suggest that a GCS of 13 at presentation is the only important clinical predictor of intracranial injury. Conclusion: Retrospective data, the small sample size and the limited number of factors available for assessing clinical manifestations might place constraints on the predictive rule derived from this review. Such limitations notwithstanding, the decision to determine which patients should undergo neuroimaging is encouraged to be based on clinical judgment. Further analysis with larger sample sizes may be required to authenticate and validate the findings.

  15. Incidental breast masses detected by computed tomography: are any imaging features predictive of malignancy?

    Energy Technology Data Exchange (ETDEWEB)

    Porter, G. [Primrose Breast Care Unit, Derriford Hospital, Plymouth (United Kingdom)], E-mail: Gareth.Porter@phnt.swest.nhs.uk; Steel, J.; Paisley, K.; Watkins, R. [Primrose Breast Care Unit, Derriford Hospital, Plymouth (United Kingdom); Holgate, C. [Department of Histopathology, Derriford Hospital, Plymouth (United Kingdom)

    2009-05-15

    Aim: To review the outcome of further assessment of breast abnormalities detected incidentally by multidetector computed tomography (MDCT) and to determine whether any MDCT imaging features were predictive of malignancy. Material and methods: The outcome of 34 patients referred to the Primrose Breast Care Unit with breast abnormalities detected incidentally using MDCT was prospectively recorded. Women with a known diagnosis of breast cancer were excluded. CT imaging features and histological diagnoses were recorded and the correlation assessed using Fisher's exact test. Results: Of the 34 referred patients a malignant diagnosis was noted in 11 (32%). There were 10 breast malignancies (seven invasive ductal carcinomas, one invasive lobular carcinoma, two metastatic lesions) and one axillary lymphoma. CT features suggestive of breast malignancy were spiculation [6/10 (60%) versus 0/24 (0%) p = 0.0002] and associated axillary lymphadenopathy [3/10 (33%) versus 0/20 (0%) p = 0.030]. Conversely, a well-defined mass was suggestive of benign disease [10/24 (42%) versus 0/10 (0%); p = 0.015]. Associated calcification, ill-definition, heterogeneity, size, and multiplicity of lesions were not useful discriminating CT features. There was a non-significant trend for lesions in involuted breasts to be more frequently malignant than in dense breasts [6/14 (43%) versus 4/20 (20%) p = 0.11]. Conclusion: In the present series there was a significant rate (32%) of malignancy in patients referred to the breast clinic with CT-detected incidental breast lesions. The CT features of spiculation or axillary lymphadenopathy are strongly suggestive of malignancy.

  16. Characteristics detected on computed tomography angiography predict coronary artery plaque progression in non-culprit lesions

    Energy Technology Data Exchange (ETDEWEB)

    Tan, Ya Hang; Zhou, Jia Zhou; Zhou, Ying; Yang, Xiaobo; Yang, Jun Jie; Chen, Yun Dai [Dept. of Cardiology, Chinese PLA General Hospital, Beijing (China)

    2017-06-15

    This study sought to determine whether variables detected on coronary computed tomography angiography (CCTA) would predict plaque progression in non-culprit lesions (NCL). In this single-center trial, we analyzed 103 consecutive patients who were undergoing CCTA and percutaneous coronary intervention (PCI) for culprit lesions. Follow-up CCTA was scheduled 12 months after the PCI, and all patients were followed for 3 years after their second CCTA examination. High-risk plaque features and epicardial adipose tissue (EAT) volume were assessed by CCTA. Each NCL stenosis grade was compared visually between two CCTA scans to detect plaque progression, and patients were stratified into two groups based on this. Logistic regression analysis was used to evaluate the factors that were independently associated with plaque progression in NCLs. Time-to-event curves were compared using the log-rank statistic. Overall, 34 of 103 patients exhibited NCL plaque progression (33%). Logistic regression analyses showed that the NCL progression was associated with a history of ST-elevated myocardial infarction (odds ratio [OR] = 5.855, 95% confidence interval [CI] = 1.391–24.635, p = 0.016), follow-up low-density lipoprotein cholesterol level (OR = 6.832, 95% CI = 2.103–22.200, p = 0.001), baseline low-attenuation plaque (OR = 7.311, 95% CI = 1.242–43.028, p = 0.028) and EAT (OR = 1.015, 95% CI = 1.000–1.029, p = 0.044). Following the second CCTA examination, major adverse cardiac events (MACEs) were observed in 12 patients, and NCL plaque progression was significantly associated with future MACEs (log rank p = 0.006). Noninvasive assessment of NCLs by CCTA has potential prognostic value.

  17. Use of computed tomography assessed kidney length to predict split renal GFR in living kidney donors

    Energy Technology Data Exchange (ETDEWEB)

    Gaillard, Francois; Fournier, Catherine; Leon, Carine; Legendre, Christophe [Paris Descartes University, AP-HP, Hopital Necker-Enfants Malades, Renal Transplantation Department, Paris (France); Pavlov, Patrik [Linkoeping University, Linkoeping (Sweden); Tissier, Anne-Marie; Correas, Jean-Michel [Paris Descartes University, AP-HP, Hopital Necker-Enfants Malades, Radiology Department, Paris (France); Harache, Benoit; Hignette, Chantal; Weinmann, Pierre [Paris Descartes University, AP-HP, Hopital Europeen Georges Pompidou, Nuclear Medicine Department, Paris (France); Eladari, Dominique [Paris Descartes University, and INSERM, Unit 970, AP-HP, Hopital Europeen Georges Pompidou, Physiology Department, Paris (France); Timsit, Marc-Olivier; Mejean, Arnaud [Paris Descartes University, AP-HP, Hopital Europeen Georges Pompidou, Urology Department, Paris (France); Friedlander, Gerard; Courbebaisse, Marie [Paris Descartes University, and INSERM, Unit 1151, AP-HP, Hopital Europeen Georges Pompidou, Physiology Department, Paris (France); Houillier, Pascal [Paris Descartes University, INSERM, Unit umrs1138, and CNRS Unit erl8228, AP-HP, Hopital Europeen Georges Pompidou, Physiology Department, Paris (France)

    2017-02-15

    Screening of living kidney donors may require scintigraphy to split glomerular filtration rate (GFR). To determine the usefulness of computed tomography (CT) to split GFR, we compared scintigraphy-split GFR to CT-split GFR. We evaluated CT-split GFR as a screening test to detect scintigraphy-split GFR lower than 40 mL/min/1.73 m²/kidney. This was a monocentric retrospective study on 346 potential living donors who had GFR measurement, renal scintigraphy, and CT. We predicted GFR for each kidney by splitting GFR using the following formula: Volume-split GFR for a given kidney = measured GFR*[volume of this kidney/(volume of this kidney + volume of the opposite kidney)]. The same formula was used for length-split GFR. We compared length- and volume-split GFR to scintigraphy-split GFR at donation and with a 4-year follow-up. A better correlation was observed between length-split GFR and scintigraphy-split GFR (r = 0.92) than between volume-split GFR and scintigraphy-split GFR (r = 0.89). A length-split GFR threshold of 45 mL/min/1.73 m²/kidney had a sensitivity of 100 % and a specificity of 75 % to detect scintigraphy-split GFR less than 40 mL/min/1.73 m²/kidney. Both techniques with their respective thresholds detected living donors with similar eGFR evolution during follow-up. Length-split GFR can be used to detect patients requiring scintigraphy. (orig.)
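
    The splitting formula quoted above applies the same proportional rule whether kidney volume or kidney length is used as the size measure; a direct transcription with illustrative numbers:

```python
def split_gfr(measured_gfr: float, size_this_kidney: float, size_opposite_kidney: float) -> float:
    """Split a measured total GFR between kidneys in proportion to CT-assessed size.

    'size' is either kidney volume (volume-split GFR) or kidney length (length-split GFR),
    following the formula quoted in the abstract.
    """
    return measured_gfr * size_this_kidney / (size_this_kidney + size_opposite_kidney)

# Illustrative example: measured GFR 95 mL/min/1.73 m2, kidney lengths 112 mm and 104 mm.
left = split_gfr(95.0, 112.0, 104.0)
right = split_gfr(95.0, 104.0, 112.0)
print(round(left, 1), round(right, 1))   # ~49.3 and ~45.7 mL/min/1.73 m2 per kidney
```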

  18. Use of computed tomography assessed kidney length to predict split renal GFR in living kidney donors

    International Nuclear Information System (INIS)

    Gaillard, Francois; Fournier, Catherine; Leon, Carine; Legendre, Christophe; Pavlov, Patrik; Tissier, Anne-Marie; Correas, Jean-Michel; Harache, Benoit; Hignette, Chantal; Weinmann, Pierre; Eladari, Dominique; Timsit, Marc-Olivier; Mejean, Arnaud; Friedlander, Gerard; Courbebaisse, Marie; Houillier, Pascal

    2017-01-01

    Screening of living kidney donors may require scintigraphy to split glomerular filtration rate (GFR). To determine the usefulness of computed tomography (CT) to split GFR, we compared scintigraphy-split GFR to CT-split GFR. We evaluated CT-split GFR as a screening test to detect scintigraphy-split GFR lower than 40 mL/min/1.73 m²/kidney. This was a monocentric retrospective study on 346 potential living donors who had GFR measurement, renal scintigraphy, and CT. We predicted GFR for each kidney by splitting GFR using the following formula: Volume-split GFR for a given kidney = measured GFR*[volume of this kidney/(volume of this kidney + volume of the opposite kidney)]. The same formula was used for length-split GFR. We compared length- and volume-split GFR to scintigraphy-split GFR at donation and with a 4-year follow-up. A better correlation was observed between length-split GFR and scintigraphy-split GFR (r = 0.92) than between volume-split GFR and scintigraphy-split GFR (r = 0.89). A length-split GFR threshold of 45 mL/min/1.73 m²/kidney had a sensitivity of 100 % and a specificity of 75 % to detect scintigraphy-split GFR less than 40 mL/min/1.73 m²/kidney. Both techniques with their respective thresholds detected living donors with similar eGFR evolution during follow-up. Length-split GFR can be used to detect patients requiring scintigraphy. (orig.)

  19. Robust de novo pathway enrichment with KeyPathwayMiner 5

    DEFF Research Database (Denmark)

    Alcaraz, Nicolas; List, Markus; Dissing-Hansen, Martin

    2016-01-01

    Identifying functional modules or novel active pathways, recently termed de novo pathway enrichment, is a computational systems biology challenge that has gained much attention during the last decade. Given a large biological interaction network, KeyPathwayMiner extracts connected subnetworks tha...

  20. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.
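
    A toy, one-step illustration of the relax-then-quantize idea described above is sketched below: the discrete-valued input is treated as a continuous "virtual" input, the relaxed problem is solved, and the virtual input is then snapped to the nearest admissible discrete value. The dynamics, cost weights and discrete set are invented for demonstration, and scipy is used in place of a dedicated MPC solver.

```python
import numpy as np
from scipy.optimize import minimize

# Toy one-step problem: scalar state, continuous input u_c in [-1, 1],
# discrete input u_d restricted to {0.0, 0.5, 1.0}.
x_now = 2.0
discrete_set = np.array([0.0, 0.5, 1.0])

def cost(z):
    u_c, v = z                                   # v is the relaxed ("virtual") discrete input
    x_next = 0.9 * x_now - 0.5 * u_c - 0.8 * v   # toy dynamics
    return x_next**2 + 0.1 * u_c**2 + 1.0 * v**2

# Step 1: solve the relaxed problem over continuous variables only.
res = minimize(cost, [0.0, 0.5], bounds=[(-1.0, 1.0), (0.0, 1.0)])
u_c_opt, v_opt = res.x

# Step 2: quantize the virtual input to the nearest admissible discrete value.
u_d = discrete_set[np.argmin(np.abs(discrete_set - v_opt))]

print("continuous input:", round(u_c_opt, 3), "virtual:", round(v_opt, 3), "discrete:", u_d)
```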

  1. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, J. D. (Prostat, Mesa, AZ); Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

    2006-10-01

    Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
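
    In evidence theory, each uncertain input is described by focal elements (here, intervals) carrying basic probability assignments, and the belief and plausibility of an output event bound its probability. The sketch below illustrates a sampling-based propagation of that structure through a toy model; it conveys only the core idea and is not the cited strategy in detail.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

def model(x, y):
    """Toy model whose output we want to bound."""
    return x**2 + 2.0 * y

# Focal elements (intervals) with basic probability assignments (BPAs) for each input.
focal_x = [((0.0, 1.0), 0.6), ((0.5, 2.0), 0.4)]
focal_y = [((1.0, 2.0), 0.7), ((0.0, 3.0), 0.3)]

threshold = 1.5          # event of interest: model output exceeds this value
n_samples = 2000         # samples used to approximate min/max over each focal-element box

belief = 0.0
plausibility = 0.0
for ((xlo, xhi), mx), ((ylo, yhi), my) in itertools.product(focal_x, focal_y):
    xs = rng.uniform(xlo, xhi, n_samples)
    ys = rng.uniform(ylo, yhi, n_samples)
    out = model(xs, ys)
    mass = mx * my                            # joint BPA assuming independent inputs
    if out.min() > threshold:                 # whole box exceeds threshold -> supports belief
        belief += mass
    if out.max() > threshold:                 # box can exceed threshold -> contributes to plausibility
        plausibility += mass

print(f"Bel(output > {threshold}) ~= {belief:.2f}, Pl(output > {threshold}) ~= {plausibility:.2f}")
```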

  2. Applying a computer-aided scheme to detect a new radiographic image marker for prediction of chemotherapy outcome

    International Nuclear Information System (INIS)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; Moore, Kathleen; Liu, Hong; Zheng, Bin

    2016-01-01

    To investigate the feasibility of automated segmentation of visceral and subcutaneous fat areas from computed tomography (CT) images of ovarian cancer patients and of applying the computed adiposity-related image features to predict chemotherapy outcome. A computerized image processing scheme was developed to segment visceral and subcutaneous fat areas and compute adiposity-related image features. Then, logistic regression models were applied to analyze the association between the scheme-generated assessment scores and progression-free survival (PFS) of patients, using a leave-one-case-out cross-validation method and a dataset involving 32 patients. The correlation coefficients between automated and radiologist's manual segmentation of visceral and subcutaneous fat areas were 0.76 and 0.89, respectively. The scheme-generated prediction scores using adiposity-related radiographic image features were significantly associated with patients' PFS (p < 0.01). Using a computerized scheme enables more efficient and robust segmentation of visceral and subcutaneous fat areas. The computed adiposity-related image features also have potential to improve accuracy in predicting chemotherapy outcome.
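
    The leave-one-case-out evaluation mentioned above can be sketched with scikit-learn: each patient is scored by a logistic regression model trained on the remaining patients, and the held-out scores are pooled into a single discrimination estimate. The features, labels and outcome definition below are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

# Hypothetical adiposity features per patient: [visceral fat area, subcutaneous fat area] (cm^2),
# with a binary outcome (e.g., progression within a given interval).
rng = np.random.default_rng(4)
n = 32
X = np.column_stack([rng.normal(150, 40, n), rng.normal(200, 50, n)])
y = (X[:, 0] + rng.normal(0, 30, n) > 150).astype(int)

# Leave-one-case-out cross-validation: score each patient with a model trained on the others.
scores = np.empty(n)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[train_idx], y[train_idx])
    scores[test_idx] = clf.predict_proba(X[test_idx])[:, 1]

print("LOO AUC:", round(roc_auc_score(y, scores), 3))
```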

  3. Prediction of 5-year overall survival in cervical cancer patients treated with radical hysterectomy using computational intelligence methods.

    Science.gov (United States)

    Obrzut, Bogdan; Kusy, Maciej; Semczuk, Andrzej; Obrzut, Marzanna; Kluska, Jacek

    2017-12-12

    Computational intelligence methods, including non-linear classification algorithms, can be used in medical research and practice as a decision-making tool. This study aimed to evaluate the usefulness of artificial intelligence models for 5-year overall survival prediction in patients with cervical cancer treated by radical hysterectomy. The data set was collected from 102 patients with cervical cancer FIGO stage IA2-IIB who underwent primary surgical treatment. Twenty-three demographic, tumor-related parameters and selected perioperative data of each patient were collected. The simulations involved six computational intelligence methods: the probabilistic neural network (PNN), multilayer perceptron network, gene expression programming classifier, support vector machines algorithm, radial basis function neural network and k-Means algorithm. The prediction ability of the models was determined based on the accuracy, sensitivity and specificity, as well as the area under the receiver operating characteristic curve. The results of the computational intelligence methods were compared with the results of linear regression analysis as a reference model. The best results were obtained by the PNN model. This neural network provided very high prediction ability with an accuracy of 0.892 and sensitivity of 0.975. The area under the receiver operating characteristics curve of PNN was also high, 0.818. The outcomes obtained by other classifiers were markedly worse. The PNN model is an effective tool for predicting 5-year overall survival in cervical cancer patients treated with radical hysterectomy.

  4. My4Sight: A Human Computation Platform for Improving Flu Predictions

    OpenAIRE

    Akupatni, Vivek Bharath

    2015-01-01

    While many human computation (human-in-the-loop) systems exist in the field of Artificial Intelligence (AI) to solve problems that can't be solved by computers alone, comparatively fewer platforms exist for collecting human knowledge, and evaluation of various techniques for harnessing human insights in improving forecasting models for infectious diseases, such as Influenza and Ebola. In this thesis, we present the design and implementation of My4Sight, a human computation system develope...

  5. Computer mouse use predicts acute pain but not prolonged or chronic pain in the neck and shoulder

    DEFF Research Database (Denmark)

    Andersen, Johan Hviid; Harhoff, Mette; Grimstrup, Søren

    2007-01-01

    BACKGROUND: Computer use is one of the commonest work place exposures in modern society. An adverse effect on musculoskeletal outcomes has been claimed for decades, mainly on the basis of self reports of exposure. The purpose of this study was to assess the risk of neck and shoulder pain associat...... psychosocial factors predicted the risk of prolonged pain. CONCLUSIONS: From the NUDATA-study we can conclude that most computer workers have no or minor neck and shoulder pain, few experience prolonged pain, and even fewer, chronic neck and shoulder pain....

  6. Novel prediction model of renal function after nephrectomy from automated renal volumetry with preoperative multidetector computed tomography (MDCT).

    Science.gov (United States)

    Isotani, Shuji; Shimoyama, Hirofumi; Yokota, Isao; Noma, Yasuhiro; Kitamura, Kousuke; China, Toshiyuki; Saito, Keisuke; Hisasue, Shin-ichi; Ide, Hisamitsu; Muto, Satoru; Yamaguchi, Raizo; Ukimura, Osamu; Gill, Inderbir S; Horie, Shigeo

    2015-10-01

    A predictive model of postoperative renal function may impact the planning of nephrectomy. The aims were to develop a novel predictive model using a combination of clinical indices with computer volumetry to measure the preserved renal cortex volume (RCV) on multidetector computed tomography (MDCT), and to prospectively validate the performance of the model. A total of 60 patients undergoing radical nephrectomy from 2011 to 2013 participated, including a development cohort of 39 patients and an external validation cohort of 21 patients. RCV was calculated by voxel count using software (Vincent, FUJIFILM). Renal function before and after radical nephrectomy was assessed via the estimated glomerular filtration rate (eGFR). Factors affecting postoperative eGFR were examined by regression analysis to develop the novel model for predicting postoperative eGFR with a backward elimination method. The predictive model was externally validated and its performance was compared with that of previously reported models. The postoperative eGFR value was associated with age, preoperative eGFR, preserved renal parenchymal volume (RPV), preserved RCV, % of RPV alteration, and % of RCV alteration. A predictive model combining computer volumetry and clinical indices might yield an important tool for predicting postoperative renal function.

  7. Web Access to Digitised Content of the Exhibition Novo Mesto 1848-1918 at the Dolenjska Museum, Novo Mesto

    Directory of Open Access Journals (Sweden)

    Majda Pungerčar

    2013-09-01

    individually or in a group of visitors as a quick test at the end of a visit to the exhibition. The computer game Cray Fish Hunting on the River of Krka was tailored to young visitors. It described the geographical position of Novo mesto on the Krka peninsula and some objects on the riverside (wooden baths, the old bridge, the dam and the mill). Hunting noble cray-fish was very extensive in the Novo mesto region until the outburst of the cray-fish plague. It is said that up to 100,000 cray-fish were caught annually and transported to all major towns of the then Austrian Empire. The original museum collections were preserved by scanning precious items which were kept in depots. Digitised materials accessible on the museum website met the needs of the majority of users and provided expanded access to the museum collection through presentation of graphic materials in a virtual exhibition on the web, which was much more comprehensive than a printed catalogue. There was also the possibility of remote access to the museum website, which was very important for tourists and museum visitors with special needs. It was possible to design customer-tailored presentations on specific themes (postcards from the period of national awakening, local societies, middle-class garments, etc.). Digital content was also available to school teachers to discuss the themes of interest with their pupils or students either before or after visiting the exhibition. There are many pitfalls and limitations of digitisation, the two biggest being staff shortages and a shrinking budget. Furthermore, there are difficulties with the technical support, gathering information on the implementation process, seeking partners to digitise materials, etc. Besides, the question of long-term digital archiving is of vital importance.

  8. Computed tomography angiography spot sign predicts intraprocedural aneurysm rupture in subarachnoid hemorrhage.

    Science.gov (United States)

    Burkhardt, Jan-Karl; Neidert, Marian Christoph; Stienen, Martin Nikolaus; Schöni, Daniel; Fung, Christian; Roethlisberger, Michel; Corniola, Marco Vincenzo; Bervini, David; Maduri, Rodolfo; Valsecchi, Daniele; Tok, Sina; Schatlo, Bawarjan; Bijlenga, Philippe; Schaller, Karl; Bozinov, Oliver; Regli, Luca

    2017-07-01

    To analyze whether the computed tomography angiography (CTA) spot sign predicts the intraprocedural rupture rate and outcome in patients with aneurysmal subarachnoid hemorrhage (aSAH). From a prospective nationwide multicenter registry database, 1023 patients with aneurysmal subarachnoid hemorrhage (aSAH) were analyzed retrospectively. Descriptive statistics and logistic regression analysis were used to compare spot sign-positive and -negative patients with aneurysmal intracerebral hemorrhage (aICH) for baseline characteristics, aneurysmal and ICH imaging characteristics, treatment and admission status as well as outcome at discharge and 1-year follow-up (1YFU) using the modified Rankin Scale (mRS). A total of 218 out of 1023 aSAH patients (21%) presented with aICH, including 23/218 (11%) patients with spot sign. Baseline characteristics were comparable between spot sign-positive and -negative patients. There was a higher clip-to-coil ratio in patients with than without aICH (both spot sign positive and negative). Median aICH volume was significantly higher in the spot sign-positive group (50 ml, 13-223 ml) than in the spot sign-negative group (18 ml, 1-416 ml). Patients with spot sign-positive aICH thus were three times as likely as those with spot sign-negative aICH to show an intraoperative aneurysm rupture [odds ratio (OR) 3.04, 95% confidence interval (CI) 1.04-8.92, p = 0.046]. Spot sign-positive aICH patients showed a significantly worse mRS at discharge (p = 0.039) than patients with spot sign-negative aICH (median mRS 5 vs. 4). Logistic regression analysis showed that the spot sign was an aICH volume-dependent predictor for outcome. Both spot sign-positive and -negative aICH patients showed comparable rates of hospital death, death at 1YFU and mRS at 1YFU. In this multicenter data analysis, patients with spot sign-positive aICH showed higher aICH volumes and a higher rate of intraprocedural aneurysm rupture, but comparable long-term outcome to patients with spot sign-negative aICH.

  9. Development of a computer model to predict aortic rupture due to impact loading.

    Science.gov (United States)

    Shah, C S; Yang, K H; Hardy, W; Wang, H K; King, A I

    2001-11-01

    Aortic injuries during blunt thoracic impacts can lead to life threatening hemorrhagic shock and potential exsanguination. Experimental approaches designed to study the mechanism of aortic rupture such as the testing of cadavers is not only expensive and time consuming, but has also been relatively unsuccessful. The objective of this study was to develop a computer model and to use it to predict modes of loading that are most likely to produce aortic ruptures. Previously, a 3D finite element model of the human thorax was developed and validated against data obtained from lateral pendulum tests. The model included a detailed description of the heart, lungs, rib cage, sternum, spine, diaphragm, major blood vessels and intercostal muscles. However, the aorta was modeled as a hollow tube using shell elements with no fluid within, and its material properties were assumed to be linear and isotropic. In this study fluid elements representing blood have been incorporated into the model in order to simulate pressure changes inside the aorta due to impact. The current model was globally validated against experimental data published in the literature for both frontal and lateral pendulum impact tests. Simulations of the validated model for thoracic impacts from a number of directions indicate that the ligamentum arteriosum, subclavian artery, parietal pleura and pressure changes within the aorta are factors that could influence aortic rupture. The model suggests that a right-sided impact to the chest is potentially more hazardous with respect to aortic rupture than any other impact direction simulated in this study. The aortic isthmus was the most likely site of aortic rupture regardless of impact direction. The reader is cautioned that this model could only be validated on a global scale. Validation of the kinematics and dynamics of the aorta at the local level could not be done due to a lack of experimental data. It is hoped that this model will be used to design

  10. BOW. A computer code to predict lateral deflections of composite beams

    Energy Technology Data Exchange (ETDEWEB)

    Tayal, M.

    1987-08-15

    Arrays of tubes are used in many engineered structures, such as in nuclear fuel bundles and in steam generators. The tubes can bend (bow) due to in-service temperatures and loads. Assessments of bowing of nuclear fuel elements can help demonstrate the integrity of fuel and of surrounding components, as a function of operating conditions such as channel power. The BOW code calculates the bending of composite beams such as fuel elements, due to gradients of temperature and due to hydraulic forces. The deflections and rotations are calculated in both lateral directions, for given conditions of temperatures. Wet and dry operation of the sheath can be simulated. BOW accounts for the following physical phenomena: circumferential and axial variations in the temperatures of the sheath and of the pellet; cracking of pellets; grip and slip between the pellets and the sheath; hydraulic drag; restraints from endplates, from neighbouring elements, and from the pressure-tube; gravity; concentric or eccentric welds between endcap and endplate; neutron flux gradients; and variations of material properties with temperature. The code is based on fundamental principles of mechanics. The governing equations are solved numerically using the finite element method. Several comparisons with closed-form equations show that the solutions of BOW are accurate. BOW's predictions for initial in-reactor bow are also consistent with two post-irradiation measurements.

  11. DeNovoGUI: an open source graphical user interface for de novo sequencing of tandem mass spectra.

    Science.gov (United States)

    Muth, Thilo; Weilnböck, Lisa; Rapp, Erdmann; Huber, Christian G; Martens, Lennart; Vaudel, Marc; Barsnes, Harald

    2014-02-07

    De novo sequencing is a popular technique in proteomics for identifying peptides from tandem mass spectra without having to rely on a protein sequence database. Despite the strong potential of de novo sequencing algorithms, their adoption threshold remains quite high. We here present a user-friendly and lightweight graphical user interface called DeNovoGUI for running parallelized versions of the freely available de novo sequencing software PepNovo+, greatly simplifying the use of de novo sequencing in proteomics. Our platform-independent software is freely available under the permissible Apache2 open source license. Source code, binaries, and additional documentation are available at http://denovogui.googlecode.com .

  12. Novel 3-D Computer Model Can Help Predict Pathogens’ Roles in Cancer | Poster

    Science.gov (United States)

    To understand how bacterial and viral infections contribute to human cancers, four NCI at Frederick scientists turned not to the lab bench, but to a computer. The team has created the world’s first—and currently, only—3-D computational approach for studying interactions between pathogen proteins and human proteins based on a molecular adaptation known as interface mimicry.

  13. Computational prediction of binding affinity for CYP1A2-ligand complexes using empirical free energy calculations

    DEFF Research Database (Denmark)

    Poongavanam, Vasanthanathan; Olsen, Lars; Jørgensen, Flemming Steen

    2010-01-01

    Predicting binding affinities for receptor-ligand complexes is still one of the challenging processes in computational structure-based ligand design. Many computational methods have been developed to achieve this goal, such as docking and scoring methods, the linear interaction energy (LIE) method, and methods based on statistical mechanics. In the present investigation, we started from an LIE model to predict the binding free energy of structurally diverse compounds of cytochrome P450 1A2 ligands, one of the important human metabolizing isoforms of the cytochrome P450 family. The data set includes both substrates and inhibitors. It appears that the electrostatic contribution to the binding free energy becomes negligible in this particular protein and a simple empirical model was derived, based on a training set of eight compounds. The root mean square error for the training set was 3.7 kJ/mol. Subsequent...
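
    For reference, LIE-type models usually express the binding free energy as a weighted sum of ensemble-averaged ligand-surrounding interaction energy differences between the bound and free states; the abstract indicates that for CYP1A2 the electrostatic term became negligible. A standard Åqvist-type form (the coefficients are fitted, and the exact model used in this work may differ) is:

```latex
\Delta G_{\mathrm{bind}} \approx
  \alpha \left( \langle V_{\mathrm{vdW}} \rangle_{\mathrm{bound}} - \langle V_{\mathrm{vdW}} \rangle_{\mathrm{free}} \right)
+ \beta  \left( \langle V_{\mathrm{el}}  \rangle_{\mathrm{bound}} - \langle V_{\mathrm{el}}  \rangle_{\mathrm{free}} \right)
+ \gamma
```

    Here V_vdW and V_el denote the ligand-surrounding van der Waals and electrostatic interaction energies, averaged over simulations of the ligand in the bound and free states.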

  14. Combining on-chip synthesis of a focused combinatorial library with computational target prediction reveals imidazopyridine GPCR ligands.

    Science.gov (United States)

    Reutlinger, Michael; Rodrigues, Tiago; Schneider, Petra; Schneider, Gisbert

    2014-01-07

    Using the example of the Ugi three-component reaction we report a fast and efficient microfluidic-assisted entry into the imidazopyridine scaffold, where building block prioritization was coupled to a new computational method for predicting ligand-target associations. We identified an innovative GPCR-modulating combinatorial chemotype featuring ligand-efficient adenosine A1/2B and adrenergic α1A/B receptor antagonists. Our results suggest the tight integration of microfluidics-assisted synthesis with computer-based target prediction as a viable approach to rapidly generate bioactivity-focused combinatorial compound libraries with high success rates. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Computational modeling to predict mechanical function of joints: application to the lower leg with simulation of two cadaver studies.

    Science.gov (United States)

    Liacouras, Peter C; Wayne, Jennifer S

    2007-12-01

    Computational models of musculoskeletal joints and limbs can provide useful information about joint mechanics. Validated models can be used as predictive devices for understanding joint function and serve as clinical tools for predicting the outcome of surgical procedures. A new computational modeling approach was developed for simulating joint kinematics that are dictated by bone/joint anatomy, ligamentous constraints, and applied loading. Three-dimensional computational models of the lower leg were created to illustrate the application of this new approach. Model development began with generating three-dimensional surfaces of each bone from CT images and then importing into the three-dimensional solid modeling software SOLIDWORKS and motion simulation package COSMOSMOTION. Through SOLIDWORKS and COSMOSMOTION, each bone surface file was filled to create a solid object and positioned necessary components added, and simulations executed. Three-dimensional contacts were added to inhibit intersection of the bones during motion. Ligaments were represented as linear springs. Model predictions were then validated by comparison to two different cadaver studies, syndesmotic injury and repair and ankle inversion following ligament transection. The syndesmotic injury model was able to predict tibial rotation, fibular rotation, and anterior/posterior displacement. In the inversion simulation, calcaneofibular ligament extension and angles of inversion compared well. Some experimental data proved harder to simulate accurately, due to certain software limitations and lack of complete experimental data. Other parameters that could not be easily obtained experimentally can be predicted and analyzed by the computational simulations. In the syndesmotic injury study, the force generated in the tibionavicular and calcaneofibular ligaments reduced with the insertion of the staple, indicating how this repair technique changes joint function. After transection of the calcaneofibular

  16. Precise detection of de novo single nucleotide variants in human genomes.

    Science.gov (United States)

    Gómez-Romero, Laura; Palacios-Flores, Kim; Reyes, José; García, Delfino; Boege, Margareta; Dávila, Guillermo; Flores, Margarita; Schatz, Michael C; Palacios, Rafael

    2018-05-07

    The precise determination of de novo genetic variants has enormous implications across different fields of biology and medicine, particularly personalized medicine. Currently, de novo variations are identified by mapping sample reads from a parent-offspring trio to a reference genome, allowing for a certain degree of differences. While widely used, this approach often introduces false-positive (FP) results due to misaligned reads and mischaracterized sequencing errors. In a previous study, we developed an alternative approach to accurately identify single nucleotide variants (SNVs) using only perfect matches. However, this approach could be applied only to haploid regions of the genome and was computationally intensive. In this study, we present a unique approach, coverage-based single nucleotide variant identification (COBASI), which allows the exploration of the entire genome using second-generation short sequence reads without extensive computing requirements. COBASI identifies SNVs using changes in coverage of exactly matching unique substrings, and is particularly suited for pinpointing de novo SNVs. Unlike other approaches that require population frequencies across hundreds of samples to filter out any methodological biases, COBASI can be applied to detect de novo SNVs within isolated families. We demonstrate this capability through extensive simulation studies and by studying a parent-offspring trio we sequenced using short reads. Experimental validation of all 58 candidate de novo SNVs and a selection of non-de novo SNVs found in the trio confirmed zero FP calls. COBASI is available as open source at https://github.com/Laura-Gomez/COBASI for any researcher to use. Copyright © 2018 the Author(s). Published by PNAS.
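
    The core signal COBASI exploits, a change in the coverage of exactly matching unique substrings of the reference, can be illustrated with a toy example: reference k-mers that vanish from the read k-mer counts mark a candidate variant region. The sketch below is only a caricature of that signal (tiny k, no uniqueness filtering, no trio logic) and is not the published pipeline.

```python
from collections import Counter

K = 5  # toy k-mer size; real analyses use much longer unique substrings

reference = "ACGTTGCATGCCGGATCCGATTACA"
# Toy reads drawn from a sample carrying a substitution (G->T) relative to the reference.
reads = [
    "ACGTTGCATGCCGTATCCGATTACA"[i:i + 12]
    for i in range(0, 15, 2)
]

# Count exact k-mer occurrences in the reads.
read_kmers = Counter()
for read in reads:
    for i in range(len(read) - K + 1):
        read_kmers[read[i:i + K]] += 1

# Coverage signal along the reference: positions whose k-mer vanishes from the reads
# indicate a nearby variant (coverage of exactly matching substrings drops to zero).
for pos in range(len(reference) - K + 1):
    kmer = reference[pos:pos + K]
    cov = read_kmers[kmer]
    flag = "  <- coverage drop (candidate variant region)" if cov == 0 else ""
    print(f"pos {pos:2d} {kmer} coverage {cov}{flag}")
```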

  17. How Adverse Outcome Pathways Can Aid the Development and Use of Computational Prediction Models for Regulatory Toxicology.

    Science.gov (United States)

    Wittwehr, Clemens; Aladjov, Hristo; Ankley, Gerald; Byrne, Hugh J; de Knecht, Joop; Heinzle, Elmar; Klambauer, Günter; Landesmann, Brigitte; Luijten, Mirjam; MacKay, Cameron; Maxwell, Gavin; Meek, M E Bette; Paini, Alicia; Perkins, Edward; Sobanski, Tomasz; Villeneuve, Dan; Waters, Katrina M; Whelan, Maurice

    2017-02-01

    Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumulated mechanistic understanding. The adverse outcome pathway (AOP) framework provides a systematic approach for organizing knowledge that may support such inference. Likewise, computational models of biological systems at various scales provide another means and platform to integrate current biological understanding to facilitate inference and extrapolation. We argue that the systematic organization of knowledge into AOP frameworks can inform and help direct the design and development of computational prediction models that can further enhance the utility of mechanistic and in silico data for chemical safety assessment. This concept was explored as part of a workshop on AOP-Informed Predictive Modeling Approaches for Regulatory Toxicology held September 24-25, 2015. Examples of AOP-informed model development and its application to the assessment of chemicals for skin sensitization and multiple modes of endocrine disruption are provided. The role of problem formulation, not only as a critical phase of risk assessment, but also as guide for both AOP and complementary model development is described. Finally, a proposal for actively engaging the modeling community in AOP-informed computational model development is made. The contents serve as a vision for how AOPs can be leveraged to facilitate development of computational prediction models needed to support the next generation of chemical safety assessment. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology.

  18. Predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography intensity values

    OpenAIRE

    Alkhader, Mustafa; Hudieb, Malik; Khader, Yousef

    2017-01-01

    Objective: The aim of this study was to investigate the predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography (CBCT) intensity values. Materials and Methods: CBCT cross-sectional images for 436 posterior mandibular implant sites were selected for the study. Using Invivo software (Anatomage, San Jose, California, USA), two observers classified the bone density into three categories: low, intermediate, and high, and CBCT intensity values were g...

  19. An Efficient Computational Model to Predict Protonation at the Amide Nitrogen and Reactivity along the C–N Rotational Pathway

    Science.gov (United States)

    Szostak, Roman; Aubé, Jeffrey

    2015-01-01

    N-protonation of amides is critical in numerous biological processes, including amide bond proteolysis and protein folding, as well as in organic synthesis as a method to activate amide bonds towards unconventional reactivity. A computational model enabling prediction of protonation at the amide bond nitrogen atom along the C–N rotational pathway is reported. Notably, this study provides a blueprint for the rational design and application of amides with a controlled degree of rotation in synthetic chemistry and biology. PMID:25766378

  20. CNN-PROMOTER, NEW CONSENSUS PROMOTER PREDICTION PROGRAM BASED ON NEURAL NETWORKS CNN-PROMOTER, NUEVO PROGRAMA PARA LA PREDICCIÓN DE PROMOTORES BASADO EN REDES NEURONALES CNN-PROMOTER, NOVO PROGRAMA PARA A PREDIÇÃO DE PROMOTORES BASEADO EM REDES NEURONAIS

    Directory of Open Access Journals (Sweden)

    Óscar Bedoya

    2011-06-01

    Full Text Available A new promoter prediction program called CNN-Promoter is presented. CNN-Promoter allows DNA sequences to be submitted and predicts them as promoter or non-promoter. Several methods have been developed to predict the promoter regions of genomes in eukaryotic organisms, including algorithms based on Markov models, decision trees, and statistical methods. Although many programs have been proposed, there is still a need to improve the sensitivity and specificity values. In this paper, a new program is proposed; it is based on the consensus strategy of combining experts to make a better prediction, with the experts implemented as neural networks. During the training process, the sensitivity and specificity were 100%, and during the test process the model reached a sensitivity of 74.5% and a specificity of 82.7%.
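    A hedged sketch of a consensus-of-experts promoter classifier in the spirit described above: several small neural networks vote on each sequence. The k-mer count encoding, layer sizes, and toy sequences are assumptions for illustration, not details of CNN-Promoter.

        # Ensemble ("consensus") of small neural networks for promoter vs.
        # non-promoter classification; 3-mer counts are used as toy features
        # (non-overlapping counts, good enough for this sketch).
        from itertools import product
        from sklearn.ensemble import VotingClassifier
        from sklearn.neural_network import MLPClassifier

        KMERS = ["".join(p) for p in product("ACGT", repeat=3)]

        def kmer_counts(seq):
            return [seq.count(k) for k in KMERS]

        def build_consensus_model():
            experts = [(f"nn{i}", MLPClassifier(hidden_layer_sizes=(h,),
                                                max_iter=2000, random_state=i))
                       for i, h in enumerate((8, 16, 32))]
            return VotingClassifier(estimators=experts, voting="soft")

        # Toy usage: y = 1 for promoter-like sequences, 0 otherwise.
        X = [kmer_counts("TATAAAAGGCTATAAAAGGC"), kmer_counts("GCGCGCGCATATGCGCGCGC"),
             kmer_counts("TATAAATAGCTATAAATAGC"), kmer_counts("CCGGCCGGTTAACCGGCCGG")]
        y = [1, 0, 1, 0]
        model = build_consensus_model().fit(X, y)
        print(model.predict([kmer_counts("TATAAAAGGCTATATAAGGC")]))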

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  2. Prediction of sentinel lymph node status using single-photon emission computed tomography (SPECT)/computed tomography (CT) imaging of breast cancer.

    Science.gov (United States)

    Tomiguchi, Mai; Yamamoto-Ibusuki, Mutsuko; Yamamoto, Yutaka; Fujisue, Mamiko; Shiraishi, Shinya; Inao, Touko; Murakami, Kei-ichi; Honda, Yumi; Yamashita, Yasuyuki; Iyama, Ken-ichi; Iwase, Hirotaka

    2016-02-01

    Single-photon emission computed tomography (SPECT)/computed tomography (CT) improves the anatomical identification of sentinel lymph nodes (SNs). We aimed to evaluate the possibility of predicting the SN status using SPECT/CT. SN mapping using a SPECT/CT system was performed in 381 cases of clinically node-negative, operable invasive breast cancer. We evaluated and compared the values of SN mapping on SPECT/CT, the findings of other modalities and clinicopathological factors in predicting the SN status. Patients with SNs located in the Level I area were evaluated. Of the 355 lesions (94.8 %) assessed, six cases (1.6 %) were not detected using any imaging method. According to the final histological diagnosis, 298 lesions (78.2 %) were node negative and 83 lesions (21.7 %) were node positive. The univariate analysis showed that SN status was significantly correlated with the number of SNs detected on SPECT/CT in the Level I area (P = 0.0048), total number of SNs detected on SPECT/CT (P = 0.011), findings of planar lymphoscintigraphy (P = 0.011) and findings of a handheld gamma probe during surgery (P = 0.012). According to the multivariate analysis, the detection of multiple SNs on SPECT/CT imaging helped to predict SN metastasis. The number of SNs located in the Level I area detected using the SPECT/CT system may be a predictive factor for SN metastasis.

  3. How Adverse Outcome Pathways Can Aid the Development and Use of Computational Prediction Models for Regulatory Toxicology

    Energy Technology Data Exchange (ETDEWEB)

    Wittwehr, Clemens; Aladjov, Hristo; Ankley, Gerald; Byrne, Hugh J.; de Knecht, Joop; Heinzle, Elmar; Klambauer, Günter; Landesmann, Brigitte; Luijten, Mirjam; MacKay, Cameron; Maxwell, Gavin; Meek, M. E. (Bette); Paini, Alicia; Perkins, Edward; Sobanski, Tomasz; Villeneuve, Dan; Waters, Katrina M.; Whelan, Maurice

    2016-12-19

    Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumulated mechanistic understanding. The adverse outcome pathway (AOP) framework has emerged as a systematic approach for organizing knowledge that supports such inference. We argue that this systematic organization of knowledge can inform and help direct the design and development of computational prediction models that can further enhance the utility of mechanistic and in silico data for chemical safety assessment. Examples of AOP-informed model development and its application to the assessment of chemicals for skin sensitization and multiple modes of endocrine disruption are provided. The role of problem formulation, not only as a critical phase of risk assessment, but also as a guide for both AOP and complementary model development, is described. Finally, a proposal for actively engaging the modeling community in AOP-informed computational model development is made. The contents serve as a vision for how AOPs can be leveraged to facilitate development of computational prediction models needed to support the next generation of chemical safety assessment.

  4. Applying a new computer-aided detection scheme generated imaging marker to predict short-term breast cancer risk

    Science.gov (United States)

    Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Patel, Bhavika; Heidari, Morteza; Liu, Hong; Zheng, Bin

    2018-05-01

    This study aims to investigate the feasibility of identifying a new quantitative imaging marker based on false-positives generated by a computer-aided detection (CAD) scheme to help predict short-term breast cancer risk. An image dataset including four view mammograms acquired from 1044 women was retrospectively assembled. All mammograms were originally interpreted as negative by radiologists. In the next subsequent mammography screening, 402 women were diagnosed with breast cancer and 642 remained negative. An existing CAD scheme was applied ‘as is’ to process each image. From CAD-generated results, four detection features including the total number of (1) initial detection seeds and (2) the final detected false-positive regions, (3) average and (4) sum of detection scores, were computed from each image. Then, by combining the features computed from two bilateral images of left and right breasts from either craniocaudal or mediolateral oblique view, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method to predict the likelihood of each testing case being positive in the next subsequent screening. The new prediction model yielded the maximum prediction accuracy with an area under a ROC curve of AUC = 0.65 ± 0.017 and the maximum adjusted odds ratio of 4.49 with a 95% confidence interval of (2.95, 6.83). The results also showed an increasing trend in the adjusted odds ratio and risk prediction scores (p  breast cancer risk.
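    A minimal sketch of the leave-one-case-out evaluation of a logistic regression risk model described above. The feature values below are synthetic placeholders, not the CAD-derived features from the study.

        # Leave-one-case-out cross-validation of a logistic regression model
        # built on a handful of per-case image features; data are synthetic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 4))           # 4 detection features per case
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=60) > 0).astype(int)

        scores = np.zeros(len(y))
        for train_idx, test_idx in LeaveOneOut().split(X):
            model = LogisticRegression().fit(X[train_idx], y[train_idx])
            scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]

        print("leave-one-case-out AUC:", roc_auc_score(y, scores))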

  5. An outlook on robust model predictive control algorithms : Reflections on performance and computational aspects

    NARCIS (Netherlands)

    Saltik, M.B.; Özkan, L.; Ludlage, J.H.A.; Weiland, S.; Van den Hof, P.M.J.

    2018-01-01

    In this paper, we discuss the model predictive control algorithms that are tailored for uncertain systems. Robustness notions with respect to both deterministic (or set based) and stochastic uncertainties are discussed and contributions are reviewed in the model predictive control literature. We

  6. Predicting field weed emergence with empirical models and soft computing techniques

    Science.gov (United States)

    Seedling emergence is the most important phenological process that influences the success of weed species; therefore, predicting weed emergence timing plays a critical role in scheduling weed management measures. Important efforts have been made in the attempt to develop models to predict seedling e...

  7. A computation method for mass flowrate predictions in critical flows of initially subcooled liquid in long channels

    International Nuclear Information System (INIS)

    Celata, G.P.; D'Annibale, F.; Farello, G.E.

    1985-01-01

    A fast and accurate computation method is suggested for the prediction of mass flowrate in critical flows of initially subcooled liquid from "long" discharge channels (high L/D values). Starting from a very simple correlation previously proposed by the authors, further improvements in the model extend the method's reliability up to initial saturation conditions. A comparison of computed values with 145 experimental data points from several investigations carried out at the Heat Transfer Laboratory (TERM/ISP, ENEA Casaccia) shows excellent agreement. The deviation of the computed data from the experimental values is within ±10% for almost all data, with a slight increase towards low inlet subcoolings. The average error, for all the considered data, is 4.6%

  8. Modular Engineering Concept at Novo Nordisk Engineering

    DEFF Research Database (Denmark)

    Moelgaard, Gert; Miller, Thomas Dedenroth

    1997-01-01

    This report describes the concept of a new engineering method at Novo Nordisk Engineering: Modular Engineering (ME). Three tools are designed to support project phases with different levels of detailing and abstraction. ME supports a standard, cross-functional breakdown of projects that facilitates...

  9. The limits of de novo DNA motif discovery.

    Directory of Open Access Journals (Sweden)

    David Simcha

    Full Text Available A major challenge in molecular biology is reverse-engineering the cis-regulatory logic that plays a major role in the control of gene expression. This program includes searching through DNA sequences to identify "motifs" that serve as the binding sites for transcription factors or, more generally, are predictive of gene expression across cellular conditions. Several approaches have been proposed for de novo motif discovery-searching sequences without prior knowledge of binding sites or nucleotide patterns. However, unbiased validation is not straightforward. We consider two approaches to unbiased validation of discovered motifs: testing the statistical significance of a motif using a DNA "background" sequence model to represent the null hypothesis and measuring performance in predicting membership in gene clusters. We demonstrate that the background models typically used are "too null," resulting in overly optimistic assessments of significance, and argue that performance in predicting TF binding or expression patterns from DNA motifs should be assessed by held-out data, as in predictive learning. Applying this criterion to common motif discovery methods resulted in universally poor performance, although there is a marked improvement when motifs are statistically significant against real background sequences. Moreover, on synthetic data where "ground truth" is known, discriminative performance of all algorithms is far below the theoretical upper bound, with pronounced "over-fitting" in training. A key conclusion from this work is that the failure of de novo discovery approaches to accurately identify motifs is basically due to statistical intractability resulting from the fixed size of co-regulated gene clusters, and thus such failures do not necessarily provide evidence that unfound motifs are not active biologically. Consequently, the use of prior knowledge to enhance motif discovery is not just advantageous but necessary. An implementation of

  10. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar in ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity discussing the impact and for addressing issues and solutions to the main challenges facing CMS computing. The lack of manpower is particul...

  12. De novo status epilepticus with isolated aphasia.

    Science.gov (United States)

    Flügel, Dominique; Kim, Olaf Chan-Hi; Felbecker, Ansgar; Tettenborn, Barbara

    2015-08-01

    Sudden onset of aphasia is usually due to stroke. Rapid diagnostic workup is necessary if reperfusion therapy is considered. Ictal aphasia is a rare condition but has to be excluded. Perfusion imaging may differentiate acute ischemia from other causes. In dubious cases, EEG is required but is time-consuming and laborious. We report a case where we considered de novo status epilepticus as a cause of aphasia without any lesion even at follow-up. A 62-year-old right-handed woman presented to the emergency department after nurses found her aphasic. She had undergone operative treatment of varicosis 3 days earlier. Apart from hypertension and obesity, no cardiovascular risk factors and no intake of medication other than paracetamol were reported. Neurological examination revealed global aphasia and right pronation in the upper extremity position test. Computed tomography with angiography and perfusion showed no abnormalities. Electroencephalogram performed after the CT scan showed left-sided slowing with high-voltage rhythmic 2/s delta waves but no clear ictal pattern. Intravenous lorazepam did improve EEG slightly, while aphasia did not change. Lumbar puncture was performed which likely excluded encephalitis. Magnetic resonance imaging showed cortical pathological diffusion imaging (restriction) and cortical hyperperfusion in the left parietal region. Intravenous anticonvulsant therapy under continuous EEG resolved neurological symptoms. The patient was kept on anticonvulsant therapy. Magnetic resonance imaging after 6 months showed no abnormalities along with no clinical abnormalities. Magnetic resonance imaging findings were only subtle, and EEG was without clear ictal pattern, so the diagnosis of aphasic status remains with some uncertainty. However, status epilepticus can mimic stroke symptoms and has to be considered in patients with aphasia even when no previous stroke or structural lesions are detectable and EEG shows no epileptic discharges. Epileptic origin is

  13. A network integration approach for drug-target interaction prediction and computational drug repositioning from heterogeneous information.

    Science.gov (United States)

    Luo, Yunan; Zhao, Xinbin; Zhou, Jingtian; Yang, Jinglin; Zhang, Yanqing; Kuang, Wenhua; Peng, Jian; Chen, Ligong; Zeng, Jianyang

    2017-09-18

    The emergence of large-scale genomic, chemical and pharmacological data provides new opportunities for drug discovery and repositioning. In this work, we develop a computational pipeline, called DTINet, to predict novel drug-target interactions from a constructed heterogeneous network, which integrates diverse drug-related information. DTINet focuses on learning a low-dimensional vector representation of features, which accurately explains the topological properties of individual nodes in the heterogeneous network, and then makes predictions based on these representations via a vector space projection scheme. DTINet achieves substantial performance improvement over other state-of-the-art methods for drug-target interaction prediction. Moreover, we experimentally validate the novel interactions between three drugs and the cyclooxygenase proteins predicted by DTINet, and demonstrate the new potential applications of these identified cyclooxygenase inhibitors in preventing inflammatory diseases. These results indicate that DTINet can provide a practically useful tool for integrating heterogeneous information to predict new drug-target interactions and repurpose existing drugs. Network-based data integration for drug-target prediction is a promising avenue for drug repositioning, but performance is wanting. Here, the authors introduce DTINet, whose performance is enhanced in the face of noisy, incomplete and high-dimensional biological data by learning low-dimensional vector representations.

  14. Predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography intensity values.

    Science.gov (United States)

    Alkhader, Mustafa; Hudieb, Malik; Khader, Yousef

    2017-01-01

    The aim of this study was to investigate the predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography (CBCT) intensity values. CBCT cross-sectional images for 436 posterior mandibular implant sites were selected for the study. Using Invivo software (Anatomage, San Jose, California, USA), two observers classified the bone density into three categories: low, intermediate, and high, and CBCT intensity values were generated. Based on the consensus of the two observers, 15.6% of sites were of low bone density, 47.9% were of intermediate density, and 36.5% were of high density. Receiver-operating characteristic analysis showed that CBCT intensity values had a high predictive power for predicting high density sites (area under the curve [AUC] =0.94, P < 0.005) and intermediate density sites (AUC = 0.81, P < 0.005). The best cut-off value for intensity to predict intermediate density sites was 218 (sensitivity = 0.77 and specificity = 0.76) and the best cut-off value for intensity to predict high density sites was 403 (sensitivity = 0.93 and specificity = 0.77). CBCT intensity values are considered useful for predicting bone density at posterior mandibular implant sites.
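    The cut-off values reported above are the kind produced by receiver-operating characteristic analysis; a minimal sketch follows, using synthetic intensity values and Youden's J statistic as the cut-off criterion (the criterion and data are assumptions for illustration, not taken from the study).

        # ROC analysis with an optimal cut-off chosen by Youden's J statistic
        # (sensitivity + specificity - 1); intensity values are synthetic.
        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(1)
        intensity = np.concatenate([rng.normal(180, 60, 200),    # lower-density sites
                                    rng.normal(450, 80, 120)])   # high-density sites
        is_high_density = np.concatenate([np.zeros(200), np.ones(120)])

        fpr, tpr, thresholds = roc_curve(is_high_density, intensity)
        best = np.argmax(tpr - fpr)                               # Youden's J
        print("AUC:", roc_auc_score(is_high_density, intensity))
        print("best cut-off:", thresholds[best],
              "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])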

  15. Comparison of Computational Electromagnetic Codes for Prediction of Low-Frequency Radar Cross Section

    National Research Council Canada - National Science Library

    Lash, Paul C

    2006-01-01

    The goal of this research is to compare the capabilities of three computational electromagnetic codes for use in production of RCS signature assessments at low frequencies in terms of performance...

  16. Piv Method and Numerical Computation for Prediction of Liquid Steel Flow Structure in Tundish

    Directory of Open Access Journals (Sweden)

    Cwudziński A.

    2015-04-01

    Full Text Available This paper presents the results of computer simulations and laboratory experiments carried out to describe the motion of steel flow in the tundish. The facility under investigation is a single-nozzle tundish designed for casting concast slabs. For the validation of the numerical model and verification of the hydrodynamic conditions occurring in the examined tundish furniture variants, obtained from the computer simulations, a physical model of the tundish was employed. State-of-the-art vector flow field analysis measuring systems developed by Lavision were used in the laboratory tests. Computer simulations of liquid steel flow were performed using the commercial program Ansys-Fluent®. In order to obtain a complete hydrodynamic picture in the tundish furniture variants tested, the computer simulations were performed for both isothermal and non-isothermal conditions.

  17. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    Science.gov (United States)

    Xiao, WenBo; Nazario, Gina; Wu, HuaMing; Zhang, HuaMing; Cheng, Feng

    2017-01-01

    In this article, we introduced an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline. The prediction results are very close to the experimental data and were also influenced by the number of hidden neurons. The order of the solar generation power output influenced by the external conditions, from smallest to biggest, is: multi-, mono-, and amor- crystalline silicon cells. In addition, the dependence of the power prediction on the number of hidden neurons was studied. For the multi- and amorphous crystalline cells, three or four hidden layer units resulted in a high correlation coefficient and low MSEs. For the mono-crystalline cell, the best results were achieved with 8 hidden layer units.
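    A sketch of how the dependence on hidden-neuron count might be screened with a small feed-forward network. scikit-learn's MLPRegressor stands in for the ANN used in the paper, and the environmental inputs and power values below are synthetic placeholders.

        # Screen the number of hidden neurons for a small ANN that maps
        # environmental inputs (irradiance, temperature) to PV output power.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(2)
        X = rng.uniform([100, 10], [1000, 45], size=(300, 2))   # W/m2, deg C
        power = 0.15 * X[:, 0] * (1 - 0.004 * (X[:, 1] - 25)) + rng.normal(0, 2, 300)

        X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)
        for n_hidden in (2, 4, 8, 16):
            model = make_pipeline(
                StandardScaler(),
                MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=5000,
                             random_state=0)).fit(X_tr, y_tr)
            print(n_hidden, "hidden neurons -> MSE:",
                  round(mean_squared_error(y_te, model.predict(X_te)), 2))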

  18. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    Directory of Open Access Journals (Sweden)

    WenBo Xiao

    Full Text Available In this article, we introduced an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline. The prediction results are very close to the experimental data and were also influenced by the number of hidden neurons. The order of the solar generation power output influenced by the external conditions, from smallest to biggest, is: multi-, mono-, and amor- crystalline silicon cells. In addition, the dependence of the power prediction on the number of hidden neurons was studied. For the multi- and amorphous crystalline cells, three or four hidden layer units resulted in a high correlation coefficient and low MSEs. For the mono-crystalline cell, the best results were achieved with 8 hidden layer units.

  19. Predictor - Predictive Reaction Design via Informatics, Computation and Theories of Reactivity

    Science.gov (United States)

    2017-10-10

    RPPR Final Report as of 24-Nov-2017; Principal Investigator: Dean J. Tantillo (djtantillo@ucdavis.edu); Distribution Statement: Approved for public release. ... meaningful queries is finding a balance between the amount of details in the metadata and computed results stored in the database vs. writing data ...

  20. Application of Neural Network Optimized by Mind Evolutionary Computation in Building Energy Prediction

    Science.gov (United States)

    Song, Chen; Zhong-Cheng, Wu; Hong, Lv

    2018-03-01

    Building energy forecasting plays an important role in energy management and planning. Using the mind evolutionary algorithm to find the optimal network weights and thresholds for the BP neural network can overcome the tendency of BP training to become trapped in a local minimum. The optimized network is used for time-series prediction and for same-month forecasting, yielding two predictive values; these two predictive values are then fed into a neural network to obtain the final forecast value. The effectiveness of the method was verified by experiments with the energy values of three buildings in Hefei.

  1. Potential carcinogenicity predicted by computational toxicity evaluation of thiophosphate pesticides using QSTR/QSCarciAR model.

    Science.gov (United States)

    Petrescu, Alina-Maria; Ilia, Gheorghe

    2017-07-01

    This study presents the in silico prediction of toxic activities and carcinogenicity, represented by the potential carcinogenicity DSSTox/DBS, based on vector regression with a new kernel, and correlates the predicted toxicity values through a QSAR model, namely QSTR/QSCarciAR (quantitative structure-toxicity relationship/quantitative structure-carcinogenicity activity relationship), described by 2D and 3D descriptors and biological descriptors. The results showed a connection between carcinogenicity (compared to the structure of a compound) and toxicity, as a basis for future studies on this subject, but each prediction is based on structurally similar compounds and the reactivation of the substructures of these compounds.

  2. PREDICTION OF DISTURBANCES IN VEHICLE CONTROL SYSTEMS BASED ON THE METHODS OF COMPUTATIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    L. Lyubchik

    2009-01-01

    Full Text Available The problem of forecasting disturbances in vehicle control systems is considered in this article. On the basis of kernel-based recurrence, algorithms for the identification and prediction of disturbance time series have been obtained.

  3. Prediction of BP Reactivity to Talking Using Hybrid Soft Computing Approaches

    Directory of Open Access Journals (Sweden)

    Gurmanik Kaur

    2014-01-01

    Full Text Available High blood pressure (BP) is associated with an increased risk of cardiovascular diseases. Therefore, optimal precision in measurement of BP is appropriate in clinical and research studies. In this work, anthropometric characteristics including age, height, weight, body mass index (BMI), and arm circumference (AC) were used as independent predictor variables for the prediction of BP reactivity to talking. Principal component analysis (PCA) was fused with artificial neural network (ANN), adaptive neurofuzzy inference system (ANFIS), and least square-support vector machine (LS-SVM) model to remove the multicollinearity effect among anthropometric predictor variables. The statistical tests in terms of coefficient of determination (R2), root mean square error (RMSE), and mean absolute percentage error (MAPE) revealed that the PCA based LS-SVM (PCA-LS-SVM) model produced a more efficient prediction of BP reactivity as compared to other models. This assessment presents the importance and advantages posed by PCA fused prediction models for prediction of biological variables.

  4. Prediction of BP reactivity to talking using hybrid soft computing approaches.

    Science.gov (United States)

    Kaur, Gurmanik; Arora, Ajat Shatru; Jain, Vijender Kumar

    2014-01-01

    High blood pressure (BP) is associated with an increased risk of cardiovascular diseases. Therefore, optimal precision in measurement of BP is appropriate in clinical and research studies. In this work, anthropometric characteristics including age, height, weight, body mass index (BMI), and arm circumference (AC) were used as independent predictor variables for the prediction of BP reactivity to talking. Principal component analysis (PCA) was fused with artificial neural network (ANN), adaptive neurofuzzy inference system (ANFIS), and least square-support vector machine (LS-SVM) model to remove the multicollinearity effect among anthropometric predictor variables. The statistical tests in terms of coefficient of determination (R2), root mean square error (RMSE), and mean absolute percentage error (MAPE) revealed that the PCA based LS-SVM (PCA-LS-SVM) model produced a more efficient prediction of BP reactivity as compared to other models. This assessment presents the importance and advantages posed by PCA fused prediction models for prediction of biological variables.
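    A hedged sketch of the PCA-fused pipeline described above: principal components of correlated anthropometric predictors feed a kernel regression model. scikit-learn's RBF-kernel SVR is used here only as a stand-in for LS-SVM, and all data are synthetic.

        # PCA removes multicollinearity among anthropometric predictors before
        # a kernel support vector regressor predicts BP reactivity.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        n = 120
        age = rng.uniform(20, 70, n)
        height = rng.normal(165, 10, n)
        weight = rng.normal(70, 12, n)
        bmi = weight / (height / 100) ** 2              # correlated with weight/height
        arm_circ = 0.3 * weight + rng.normal(0, 2, n)   # correlated with weight
        X = np.column_stack([age, height, weight, bmi, arm_circ])
        bp_reactivity = 0.1 * age + 0.3 * bmi + rng.normal(0, 3, n)

        model = make_pipeline(StandardScaler(), PCA(n_components=3),
                              SVR(kernel="rbf", C=10))
        print("cross-validated R^2:",
              cross_val_score(model, X, bp_reactivity, cv=5).mean())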

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, not only by using opportunistic resources like the San Diego Supercomputer Center which was accessible, to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  6. De novo transcriptome assembly of Setaria italica variety Taejin

    Directory of Open Access Journals (Sweden)

    Yeonhwa Jo

    2016-06-01

    Full Text Available Foxtail millet (Setaria italica), belonging to the family Poaceae, is an important millet that is widely cultivated in East Asia. Of the cultivated millets, the foxtail millet has the longest history and is one of the main food crops in South India and China. Moreover, foxtail millet is a model plant system for biofuel generation utilizing the C4 photosynthetic pathway. In this study, we carried out de novo transcriptome assembly for the foxtail millet variety Taejin collected from Korea using next-generation sequencing. We obtained a total of 8.676 GB of raw data by paired-end sequencing. The raw data from this study are available in the NCBI SRA database under accession number SRR3406552. The Trinity program was used to de novo assemble 145,332 transcripts. Using the TransDecoder program, we predicted 82,925 putative proteins. BLASTP was performed against the Swiss-Prot protein sequence database to annotate the functions of the identified proteins, resulting in 20,555 potentially novel proteins. Taken together, this study provides transcriptome data for the foxtail millet variety Taejin obtained by RNA-Seq.

  7. De novo transcriptome assembly of the mycoheterotrophic plant Monotropa hypopitys

    Directory of Open Access Journals (Sweden)

    Alexey V. Beletsky

    2017-03-01

    Full Text Available Monotropa hypopitys (pinesap) is a non-photosynthetic, obligately mycoheterotrophic plant of the family Ericaceae. It obtains carbon and other nutrients from the roots of surrounding autotrophic trees through the associated mycorrhizal fungi. In order to understand the evolutionary changes in the plant genome associated with the transition to a heterotrophic lifestyle, we performed de novo transcriptomic analysis of M. hypopitys using next-generation sequencing. We obtained the RNA-Seq data from flowers, flower bracts and roots with haustoria using the Illumina HiSeq2500 platform. The raw data obtained in this study are available in the NCBI SRA database under accession number SRP069226. A total of 10.3 GB of raw sequence data were obtained, corresponding to 103,357,809 raw reads. A total of 103,025,683 reads remained after removing low-quality reads and trimming the adapter sequences. The Trinity program was used to de novo assemble 98,349 unigenes with an N50 of 1342 bp. Using the TransDecoder program, we predicted 43,505 putative proteins. 38,416 unigenes were annotated in the Swiss-Prot protein sequence database using BLASTX. The obtained transcriptomic data will be useful for further studies of the evolution of plant genomes upon transition to a non-photosynthetic lifestyle and the loss of photosynthesis-related functions.
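    Assembly statistics such as the N50 quoted above are straightforward to recompute from the assembled contig lengths; a minimal sketch of the N50 calculation follows (the length list is a placeholder, not the actual assembly).

        # N50: the contig length at which half of the total assembled bases are
        # contained in contigs of that length or longer.
        def n50(lengths):
            total = sum(lengths)
            running = 0
            for length in sorted(lengths, reverse=True):
                running += length
                if running >= total / 2:
                    return length
            return 0

        # Toy example; in practice the lengths come from the assembler's FASTA output.
        print(n50([5000, 3000, 2000, 1500, 1000, 800, 500, 200]))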

  8. Permeability Surface of Deep Middle Cerebral Artery Territory on Computed Tomographic Perfusion Predicts Hemorrhagic Transformation After Stroke.

    Science.gov (United States)

    Li, Qiao; Gao, Xinyi; Yao, Zhenwei; Feng, Xiaoyuan; He, Huijin; Xue, Jing; Gao, Peiyi; Yang, Lumeng; Cheng, Xin; Chen, Weijian; Yang, Yunjun

    2017-09-01

    Permeability surface (PS) on computed tomographic perfusion reflects blood-brain barrier permeability and is related to hemorrhagic transformation (HT). HT of deep middle cerebral artery (MCA) territory can occur after recanalization of proximal large-vessel occlusion. We aimed to determine the relationship between HT and PS of deep MCA territory. We retrospectively reviewed 70 consecutive acute ischemic stroke patients presenting with occlusion of the distal internal carotid artery or M1 segment of the MCA. All patients underwent computed tomographic perfusion within 6 hours after symptom onset. Computed tomographic perfusion data were postprocessed to generate maps of different perfusion parameters. Risk factors were identified for increased deep MCA territory PS. Receiver operating characteristic curve analysis was performed to calculate the optimal PS threshold to predict HT of deep MCA territory. Increased PS was associated with HT of deep MCA territory. After adjustments for age, sex, onset time to computed tomographic perfusion, and baseline National Institutes of Health Stroke Scale, poor collateral status (odds ratio, 7.8; 95% confidence interval, 1.67-37.14; P =0.009) and proximal MCA-M1 occlusion (odds ratio, 4.12; 95% confidence interval, 1.03-16.52; P =0.045) were independently associated with increased deep MCA territory PS. Relative PS most accurately predicted HT of deep MCA territory (area under curve, 0.94; optimal threshold, 2.89). Increased PS can predict HT of deep MCA territory after recanalization therapy for cerebral proximal large-vessel occlusion. Proximal MCA-M1 complete occlusion and distal internal carotid artery occlusion in conjunction with poor collaterals elevate deep MCA territory PS. © 2017 American Heart Association, Inc.

  9. Using Combined Computational Techniques to Predict the Glass Transition Temperatures of Aromatic Polybenzoxazines

    Science.gov (United States)

    Mhlanga, Phumzile; Wan Hassan, Wan Aminah; Hamerton, Ian; Howlin, Brendan J.

    2013-01-01

    The Molecular Operating Environment software (MOE) is used to construct a series of benzoxazine monomers for which a variety of parameters relating to the structures (e.g. water accessible surface area, negative van der Waals surface area, hydrophobic volume and the sum of atomic polarizabilities, etc.) are obtained and quantitative structure property relationships (QSPR) models are formulated. Three QSPR models (formulated using up to 5 descriptors) are first used to make predictions for the initiator data set (n = 9) and compared to published thermal data; in all of the QSPR models there is a high level of agreement between the actual data and the predicted data (within 0.63–1.86 K of the entire dataset). The water accessible surface area is found to be the most important descriptor in the prediction of Tg. Molecular modelling simulations of the benzoxazine polymer (minus initiator) carried out at the same time using the Materials Studio software suite provide an independent prediction of Tg. Predicted Tg values from molecular modelling fall in the middle of the range of the experimentally determined Tg values, indicating that the structure of the network is influenced by the nature of the initiator used. Hence both techniques can provide predictions of glass transition temperatures and provide complementary data for polymer design. PMID:23326419
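    The QSPR models described above amount to regressions of Tg on a small number of molecular descriptors, validated on a small data set; a hedged sketch with ordinary least squares and leave-one-out validation follows. The descriptor values are synthetic placeholders, not the MOE descriptors from the study.

        # QSPR-style regression: predict glass transition temperature (Tg, K)
        # from a few molecular descriptors; all numbers below are synthetic.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(4)
        n_monomers = 9
        # columns: water-accessible surface area, vdW surface area, polarizability sum
        descriptors = rng.normal(size=(n_monomers, 3))
        tg = 450 + 12 * descriptors[:, 0] - 5 * descriptors[:, 2] \
             + rng.normal(0, 1, n_monomers)

        predicted = cross_val_predict(LinearRegression(), descriptors, tg,
                                      cv=LeaveOneOut())
        print("max absolute error (K):", np.abs(predicted - tg).max())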

  10. Using combined computational techniques to predict the glass transition temperatures of aromatic polybenzoxazines.

    Directory of Open Access Journals (Sweden)

    Phumzile Mhlanga

    Full Text Available The Molecular Operating Environment software (MOE) is used to construct a series of benzoxazine monomers for which a variety of parameters relating to the structures (e.g. water accessible surface area, negative van der Waals surface area, hydrophobic volume and the sum of atomic polarizabilities, etc.) are obtained and quantitative structure property relationships (QSPR) models are formulated. Three QSPR models (formulated using up to 5 descriptors) are first used to make predictions for the initiator data set (n = 9) and compared to published thermal data; in all of the QSPR models there is a high level of agreement between the actual data and the predicted data (within 0.63-1.86 K of the entire dataset). The water accessible surface area is found to be the most important descriptor in the prediction of Tg. Molecular modelling simulations of the benzoxazine polymer (minus initiator) carried out at the same time using the Materials Studio software suite provide an independent prediction of Tg. Predicted Tg values from molecular modelling fall in the middle of the range of the experimentally determined Tg values, indicating that the structure of the network is influenced by the nature of the initiator used. Hence both techniques can provide predictions of glass transition temperatures and provide complementary data for polymer design.

  11. Computer-aided global breast MR image feature analysis for prediction of tumor response to chemotherapy: performance assessment

    Science.gov (United States)

    Aghaei, Faranak; Tan, Maxine; Hollingsworth, Alan B.; Zheng, Bin; Cheng, Samuel

    2016-03-01

    Dynamic contrast-enhanced breast magnetic resonance imaging (DCE-MRI) has been used increasingly in breast cancer diagnosis and assessment of cancer treatment efficacy. In this study, we applied a computer-aided detection (CAD) scheme to automatically segment breast regions depicted on MR images and used the kinetic image features computed from the global breast MR images acquired before neoadjuvant chemotherapy to build a new quantitative model to predict the response of breast cancer patients to the chemotherapy. To assess the performance and robustness of this new prediction model, an image dataset involving breast MR images acquired from 151 cancer patients before undergoing neoadjuvant chemotherapy was retrospectively assembled and used. Among them, 63 patients had a "complete response" (CR) to chemotherapy, in which the enhanced contrast levels inside the tumor volume (pre-treatment) were reduced to the level of the normal enhanced background parenchymal tissues (post-treatment), while 88 patients had a "partial response" (PR), in which high contrast enhancement remained in the tumor regions after treatment. We analyzed the correlation among the 22 global kinetic image features and then selected a set of 4 optimal features. Applying an artificial neural network trained with the fusion of these 4 kinetic image features, the prediction model yielded an area under the ROC curve (AUC) of 0.83 ± 0.04. This study demonstrated that, by avoiding tumor segmentation, which is often difficult and unreliable, the fusion of kinetic image features computed from global breast MR images can also generate a useful clinical marker for predicting the efficacy of chemotherapy.

  12. Novel application of quantitative single-photon emission computed-tomography/computed tomography to predict early response to methimazole in Graves' disease

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyun Joo; Bang, Ji In; Kim, Ji Young; Moon, Jae Hoon [Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam (Korea, Republic of); So, Young [Dept. of Nuclear Medicine, Konkuk University Medical Center, Seoul (Korea, Republic of); Lee, Won Woo [Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul (Korea, Republic of)

    2017-06-15

    Since Graves' disease (GD) is resistant to antithyroid drugs (ATDs), an accurate quantitative thyroid function measurement is required for the prediction of early responses to ATD. Quantitative parameters derived from the novel technology, single-photon emission computed tomography/computed tomography (SPECT/CT), were investigated for the prediction of achievement of euthyroidism after methimazole (MMI) treatment in GD. A total of 36 GD patients (10 males, 26 females; mean age, 45.3 ± 13.8 years) were enrolled for this study, from April 2015 to January 2016. They underwent quantitative thyroid SPECT/CT 20 minutes post-injection of 99mTc-pertechnetate (5 mCi). The association between the time to biochemical euthyroidism after MMI treatment and uptake, standardized uptake value (SUV), functional thyroid mass (SUVmean × thyroid volume) from the SPECT/CT, and clinical/biochemical variables was investigated. GD patients had a significantly greater %uptake (6.9 ± 6.4%) than historical control euthyroid patients (n = 20, 0.8 ± 0.5%, p < 0.001) from the same quantitative SPECT/CT protocol. Euthyroidism was achieved in 14 patients at 156 ± 62 days post-MMI treatment, but 22 patients had still not achieved euthyroidism by the last follow-up time-point (208 ± 80 days). In the univariate Cox regression analysis, the initial MMI dose (p = 0.014), %uptake (p = 0.015), and functional thyroid mass (p = 0.016) were significant predictors of euthyroidism in response to MMI treatment. However, only uptake remained significant in a multivariate Cox regression analysis (p = 0.034). An uptake cutoff of 5.0% dichotomized the faster responding versus the slower responding GD patients (p = 0.006). A novel parameter of thyroid uptake from quantitative SPECT/CT is a predictive indicator of an early response to MMI in GD patients.

  13. Computational Models of Anterior Cingulate Cortex: At the Crossroads between Prediction and Effort

    Directory of Open Access Journals (Sweden)

    Eliana Vassena

    2017-06-01

    Full Text Available In the last two decades the anterior cingulate cortex (ACC has become one of the most investigated areas of the brain. Extensive neuroimaging evidence suggests countless functions for this region, ranging from conflict and error coding, to social cognition, pain and effortful control. In response to this burgeoning amount of data, a proliferation of computational models has tried to characterize the neurocognitive architecture of ACC. Early seminal models provided a computational explanation for a relatively circumscribed set of empirical findings, mainly accounting for EEG and fMRI evidence. More recent models have focused on ACC's contribution to effortful control. In parallel to these developments, several proposals attempted to explain within a single computational framework a wider variety of empirical findings that span different cognitive processes and experimental modalities. Here we critically evaluate these modeling attempts, highlighting the continued need to reconcile the array of disparate ACC observations within a coherent, unifying framework.

  14. Computer model for predicting the effect of inherited sterility on population growth

    International Nuclear Information System (INIS)

    Carpenter, J.E.; Layton, R.C.

    1993-01-01

    A Fortran based computer program was developed to facilitate modelling different inherited sterility data sets under various paradigms. The model was designed to allow variable input for several different parameters, such as rate of increase per generation, release ratio and initial population levels, reproductive rates and sex ratios resulting from different matings, and the number of nights a female is active in mating and oviposition. The model and computer program should be valuable tools for recognizing areas in which information is lacking and for identifying the effect that different parameters can have on the efficacy of the inherited sterility method. (author). 8 refs, 4 figs
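    A hedged sketch of the kind of generation-by-generation bookkeeping such a model performs, with a release ratio and mating-derived fertility as inputs. The parameter values and the simple mixing assumptions are illustrative only, and the full inherited-sterility effect (sterile F1 progeny) is not modelled here.

        # Simple deterministic model of population growth under releases of
        # partially sterile males; all parameters are illustrative.
        def simulate(generations=10, wild_start=1000.0, release_ratio=5.0,
                     rate_of_increase=5.0, fertility_of_treated_cross=0.3):
            wild_females = wild_start
            for g in range(1, generations + 1):
                wild_males = wild_females                   # assume a 1:1 sex ratio
                released_males = release_ratio * wild_males
                # fraction of females mating a released (partially sterile) male
                p_treated = released_males / (released_males + wild_males)
                offspring_per_female = rate_of_increase * (
                    (1 - p_treated) + p_treated * fertility_of_treated_cross)
                wild_females *= offspring_per_female / 2    # half of offspring female
                print(f"generation {g}: wild females = {wild_females:.1f}")

        simulate()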

  15. Added value of delayed computed tomography angiography in primary intracranial hemorrhage and hematoma size for predicting spot sign.

    Science.gov (United States)

    Wu, Te Chang; Chen, Tai Yuan; Shiue, Yow Ling; Chen, Jeon Hor; Hsieh, Tsyh-Jyi; Ko, Ching Chung; Lin, Ching Po

    2018-04-01

    Background The computed tomography angiography (CTA) spot sign represents active contrast extravasation within acute primary intracerebral hemorrhage (ICH) and is an independent predictor of hematoma expansion (HE) and poor clinical outcomes. The spot sign can be detected on first-pass CTA (fpCTA) or delayed CTA (dCTA). Purpose To investigate the additional benefits of the dCTA spot sign in primary ICH and of hematoma size for predicting the spot sign. Material and Methods This is a retrospective study of 100 patients who underwent non-contrast CT (NCCT) and CTA within 24 h of onset of primary ICH. The presence of the spot sign on fpCTA or dCTA, and hematoma size on NCCT, were recorded. The value of the spot sign on fpCTA or dCTA for predicting significant HE, in-hospital mortality, and poor clinical outcomes (mRS ≥ 4) was calculated. The hematoma size for prediction of the CTA spot sign was also analyzed. Results Only the spot sign on dCTA could predict high risk of significant HE and poor clinical outcomes as on fpCTA ( P spot sign on fpCTA or dCTA in the absence of intraventricular and subarachnoid hemorrhage. Conclusion This study clarifies that dCTA imaging could improve the predictive performance of CTA in primary ICH. Furthermore, the XY value is the best predictor for the CTA spot sign.

  16. Computer models versus reality: how well do in silico models currently predict the sensitization potential of a substance.

    Science.gov (United States)

    Teubner, Wera; Mehling, Anette; Schuster, Paul Xaver; Guth, Katharina; Worth, Andrew; Burton, Julien; van Ravenzwaay, Bennard; Landsiedel, Robert

    2013-12-01

    National legislations for the assessment of the skin sensitization potential of chemicals are increasingly based on the globally harmonized system (GHS). In this study, experimental data on 55 non-sensitizing and 45 sensitizing chemicals were evaluated according to GHS criteria and used to test the performance of computer (in silico) models for the prediction of skin sensitization. Statistic models (Vega, Case Ultra, TOPKAT), mechanistic models (Toxtree, OECD (Q)SAR toolbox, DEREK) or a hybrid model (TIMES-SS) were evaluated. Between three and nine of the substances evaluated were found in the individual training sets of various models. Mechanism based models performed better than statistical models and gave better predictivities depending on the stringency of the domain definition. Best performance was achieved by TIMES-SS, with a perfect prediction, whereby only 16% of the substances were within its reliability domain. Some models offer modules for potency; however predictions did not correlate well with the GHS sensitization subcategory derived from the experimental data. In conclusion, although mechanistic models can be used to a certain degree under well-defined conditions, at the present, the in silico models are not sufficiently accurate for broad application to predict skin sensitization potentials. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. DDR: Efficient computational method to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.

    2017-11-23

    Motivation Finding drug-target interactions (DTIs) computationally is a convenient strategy to identify new DTIs at low cost with reasonable accuracy. However, current DTI prediction methods suffer from a high false-positive prediction rate. Results We developed DDR, a novel method that improves the DTI prediction accuracy. DDR is based on the use of a heterogeneous graph that contains known DTIs with multiple similarities between drugs and multiple similarities between target proteins. DDR applies a non-linear similarity fusion method to combine different similarities. Before fusion, DDR performs a pre-processing step where a subset of similarities is selected in a heuristic process to obtain an optimized combination of similarities. Then, DDR applies a random forest model using different graph-based features extracted from the DTI heterogeneous graph. Using five repeats of 10-fold cross-validation, three testing setups, and the weighted average of area under the precision-recall curve (AUPR) scores, we show that DDR significantly reduces the AUPR score error relative to the next best state-of-the-art method for predicting DTIs by 34% when the drugs are new, by 23% when targets are new, and by 34% when the drugs and the targets are known but not all DTIs between them are known. Using independent sources of evidence, we verify as correct 22 out of the top 25 DDR novel predictions. This suggests that DDR can be used as an efficient method to identify correct DTIs.
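    A minimal sketch of the final stage described above: a random forest classifier trained on per-pair feature vectors and scored with the area under the precision-recall curve. The features and labels below are random placeholders, not DDR's actual graph-derived features.

        # Random forest over per-pair feature vectors, evaluated with the area
        # under the precision-recall curve (AUPR); data are synthetic.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import StratifiedKFold
        from sklearn.metrics import average_precision_score

        rng = np.random.default_rng(5)
        X = rng.normal(size=(500, 12))     # 12 graph-based features per drug-target pair
        y = (X[:, 0] + X[:, 3] + rng.normal(0, 1, 500) > 1.5).astype(int)

        auprs = []
        for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True,
                                                   random_state=0).split(X, y):
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X[train_idx], y[train_idx])
            prob = clf.predict_proba(X[test_idx])[:, 1]
            auprs.append(average_precision_score(y[test_idx], prob))
        print("mean AUPR:", np.mean(auprs))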

  18. Crystal engineering of ibuprofen compounds: From molecule to crystal structure to morphology prediction by computational simulation and experimental study

    Science.gov (United States)

    Zhang, Min; Liang, Zuozhong; Wu, Fei; Chen, Jian-Feng; Xue, Chunyu; Zhao, Hong

    2017-06-01

    We selected crystal structures of ibuprofen with seven common space groups (Cc, P21/c, P212121, P21, Pbca, Pna21, and Pbcn), which were generated from the ibuprofen molecule by molecular simulation. The predicted crystal structure of ibuprofen with space group P21/c has the lowest total energy and the largest density, and is nearly indistinguishable from the experimental result. In addition, the XRD patterns for the predicted crystal structure are highly consistent with those of ibuprofen recrystallized from solvent. This indicates that the simulation can accurately predict the crystal structure of ibuprofen from the molecule. Furthermore, based on this crystal structure, we predicted the crystal habit in vacuum using the attachment energy (AE) method and considered solvent effects in a systematic way using the modified attachment energy (MAE) model. The simulation can thus construct a complete process from molecule to crystal structure to morphology prediction. Experimentally, we observed crystal morphologies in four solvents of different polarity (ethanol, acetonitrile, ethyl acetate, and toluene). The aspect ratios of the crystal habits in this ibuprofen system were found to decrease with increasing solvent relative polarity. In addition, the modified crystal morphologies are in good agreement with the observed experimental morphologies. Finally, this work may guide the computer-aided design of desirable crystal morphologies.

  19. Quantitative analysis and prediction of regional lymph node status in rectal cancer based on computed tomography imaging

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Chunyan; Liu, Lizhi; Li, Li [Sun Yat-sen University, State Key Laboratory of Oncology in Southern China, Imaging Diagnosis and Interventional Center, Cancer Center, Guangzhou, Guangdong (China); Cai, Hongmin; Tian, Haiying [Sun Yat-Sen University, Department of Automation, School of Science Information and Technology, Guangzhou (China); Li, Liren [Sun Yat-sen University, State Key Laboratory of Oncology in Southern China, Department of Abdominal (colon and rectal) Surgery, Cancer Center, Guangzhou (China)

    2011-11-15

    To quantitatively evaluate regional lymph nodes in rectal cancer patients using an automated, computer-aided approach, and to assess the accuracy of this approach in differentiating benign and malignant lymph nodes. Patients (228) with newly diagnosed, biopsy-confirmed rectal cancer underwent enhanced computed tomography (CT). Patients were assigned to the benign-node or malignant-node group according to histopathological analysis of node samples. All CT-detected lymph nodes were segmented using an edge detection method, and seven quantitative parameters of each node were measured. To increase prediction accuracy, a hierarchical model combining the merits of support vector and relevance vector machines was proposed. Of the 220 lymph nodes evaluated, 125 were positive and 95 were negative for metastases. The fractal dimension obtained by the Minkowski box-counting approach was higher in malignant nodes than in benign nodes, and there was a significant difference in heterogeneity between metastatic and non-metastatic lymph nodes. The proposed model achieved an overall accuracy as high as 88% using morphological characterisation of the lymph nodes. Computer-aided quantitative analysis can improve the prediction of node status in rectal cancer. (orig.)
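
    The fractal-dimension feature mentioned above can be illustrated with a generic Minkowski box-counting estimate on a binary node mask; this is a minimal sketch under the assumption of a pre-segmented 2D image, not the pipeline used in the study.

```python
# Illustrative box-counting estimate of fractal dimension for a binary lymph-node mask.
# This is a generic sketch, not the method implementation used in the paper.
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        # Trim so the image divides evenly into s x s boxes, then count occupied boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # Slope of log(count) vs log(1/size) gives the fractal dimension estimate.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# Example: a filled circle as a stand-in for a segmented node region.
yy, xx = np.mgrid[:128, :128]
mask = (xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2
print("estimated fractal dimension:", round(box_counting_dimension(mask), 2))
```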

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger at roughly 11 MB per event of RAW. The central collisions are more complex and...

  1. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  2. Accurate prediction of stability changes in protein mutants by combining machine learning with structure based computational mutagenesis.

    Science.gov (United States)

    Masso, Majid; Vaisman, Iosif I

    2008-09-15

    Accurate predictive models for the impact of single amino acid substitutions on protein stability provide insight into protein structure and function. Such models are also valuable for the design and engineering of new proteins. Previously described methods have utilized properties of protein sequence or structure to predict the free energy change of mutants due to thermal (DeltaDeltaG) and denaturant (DeltaDeltaG(H2O)) denaturations, as well as mutant thermal stability (DeltaT(m)), through the application of either computational energy-based approaches or machine learning techniques. However, accuracy associated with applying these methods separately is frequently far from optimal. We detail a computational mutagenesis technique based on a four-body, knowledge-based, statistical contact potential. For any mutation due to a single amino acid replacement in a protein, the method provides an empirical normalized measure of the ensuing environmental perturbation occurring at every residue position. A feature vector is generated for the mutant by considering perturbations at the mutated position and its six nearest neighbors, ordered by distance, in the three-dimensional (3D) protein structure. These predictors of stability change are evaluated by applying machine learning tools to large training sets of mutants derived from diverse proteins that have been experimentally studied and described. Predictive models based on our combined approach are either comparable to, or in many cases significantly outperform, previously published results. A web server with supporting documentation is available at http://proteins.gmu.edu/automute.
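
    A minimal sketch of the feature construction described above is given below, assuming residue-level perturbation scores and coordinates are already available (here they are random placeholders rather than the output of the four-body statistical potential); it only illustrates how the mutated position and its six nearest neighbors form a feature vector for a machine learning model, and is not the AUTO-MUTE implementation.

```python
# Sketch of the mutant feature construction (hypothetical inputs, not AUTO-MUTE itself).
# Assumes `perturbation` holds a residue-level environmental perturbation score for a mutant,
# and `coords` holds C-alpha coordinates; both would normally come from a structure and a
# contact potential, but are random placeholders here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_res = 120
coords = rng.random((n_res, 3)) * 50.0           # placeholder 3D coordinates
perturbation = rng.normal(size=n_res)            # placeholder perturbation scores

def mutant_feature_vector(mut_pos, coords, perturbation, k=6):
    # Distances from the mutated residue to all others; take the k nearest neighbors.
    d = np.linalg.norm(coords - coords[mut_pos], axis=1)
    neighbors = np.argsort(d)[1:k + 1]
    # Feature vector: perturbation at the mutated site followed by its ordered neighbors.
    return np.concatenate(([perturbation[mut_pos]], perturbation[neighbors]))

# Toy training set: one feature vector per simulated mutant, with a made-up stability target.
X = np.array([mutant_feature_vector(p, coords, perturbation) for p in range(n_res)])
y = X.sum(axis=1) + rng.normal(scale=0.1, size=n_res)   # synthetic stability change
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print("feature vector length:", X.shape[1])
```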

  3. Recent advances, and unresolved issues, in the application of computational modelling to the prediction of the biological effects of nanomaterials

    International Nuclear Information System (INIS)

    Winkler, David A.

    2016-01-01

    Nanomaterials research is one of the fastest growing contemporary research areas. The unprecedented properties of these materials have meant that they are being incorporated into products very quickly. Regulatory agencies are concerned they cannot assess the potential hazards of these materials adequately, as data on the biological properties of nanomaterials are still relatively limited and expensive to acquire. Computational modelling methods have much to offer in helping understand the mechanisms by which toxicity may occur, and in predicting the likelihood of adverse biological impacts of materials not yet tested experimentally. This paper reviews the progress these methods, particularly QSAR-based approaches, have made in understanding and predicting potentially adverse biological effects of nanomaterials, as well as the limitations and pitfalls of these methods. - Highlights: • Nanomaterials regulators need good information to make good decisions. • Nanomaterials and their interactions with biology are very complex. • Computational methods use existing data to predict properties of new nanomaterials. • Statistical, data driven modelling methods have been successfully applied to this task. • Much more must be learnt before robust toolkits will be widely usable by regulators.

  4. Computational Prediction and Rationalization, and Experimental Validation of Handedness Induction in Helical Aromatic Oligoamide Foldamers.

    Science.gov (United States)

    Liu, Zhiwei; Hu, Xiaobo; Abramyan, Ara M; Mészáros, Ádám; Csékei, Márton; Kotschy, András; Huc, Ivan; Pophristic, Vojislava

    2017-03-13

    Metadynamics simulations were used to describe the conformational energy landscapes of several helically folded aromatic quinoline carboxamide oligomers bearing a single chiral group at either the C or N terminus. The calculations allowed the prediction of whether a helix handedness bias occurs under the influence of the chiral group and gave insight into the interactions (sterics, electrostatics, hydrogen bonds) responsible for a particular helix sense preference. In the case of camphanyl-based and morpholine-based chiral groups, experimental data confirming the validity of the calculations were already available. New chiral groups with a proline residue were also investigated and were predicted to induce handedness. This prediction was verified experimentally through the synthesis of proline-containing monomers, their incorporation into an oligoamide sequence by solid phase synthesis and the investigation of handedness induction by NMR spectroscopy and circular dichroism. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. A Computer Aided System for Correlation and Prediction of Phase Equilibrium Data

    DEFF Research Database (Denmark)

    Nielsen, T.L.; Gani, Rafiqul

    2001-01-01

    based on mathematical programming. This paper describes the development of a computer aided system for the systematic derivation of appropriate property models to be used in the service role for a specified problem. As a first step, a library of well-known property models has been developed...

  6. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng; Fei, Shiyang; Zongan, Wang; Li, Yu; Zhao, Feng; Gao, Xin

    2018-01-01

    structures in Protein Data Bank (PDB). To elucidate the MP structures computationally, we developed a novel web resource, denoted as PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology

  7. The boat hull model : adapting the roofline model to enable performance prediction for parallel computing

    NARCIS (Netherlands)

    Nugteren, C.; Corporaal, H.

    2012-01-01

    Multi-core and many-core were already major trends for the past six years, and are expected to continue for the next decades. With these trends of parallel computing, it becomes increasingly difficult to decide on which architecture to run a given application. In this work, we use an algorithm

  8. The boat hull model : enabling performance prediction for parallel computing prior to code development

    NARCIS (Netherlands)

    Nugteren, C.; Corporaal, H.

    2012-01-01

    Multi-core and many-core were already major trends for the past six years and are expected to continue for the next decade. With these trends of parallel computing, it becomes increasingly difficult to decide on which processor to run a given application, mainly because the programming of these

  9. Computational prediction of neoantigens: do we need more data or new approaches?

    DEFF Research Database (Denmark)

    Eklund, Aron Charles; Szallasi, Zoltan Imre

    2018-01-01

    Personalized cancer immunotherapy may benefit from improved computational algorithms for identifying neoantigens. Recent results demonstrate that machine learning can improve accuracy. Additional improvements may require more genomic data paired with in vitro T cell reactivity measurements, and more sophisticated algorithms that take into account T cell receptor specificity.

  10. Predicting the usefulness of therapeutic drug monitoring of mycophenolic acid: a computer simulation

    NARCIS (Netherlands)

    van Hest, Reinier; Mathot, Ron; Vulto, Arnold; Weimar, Willem; van Gelder, Teun

    2005-01-01

    The usefulness of therapeutic drug monitoring (TDM) of mycophenolate mofetil (MMF) was investigated with a computer simulation model. For a fixed-dose (FD) and a concentration-controlled (CC) MMF dosing regimen exposure to mycophenolic acid (MPA) was compared. A nonlinear mixed-effects model

  11. Use of Standardized Test Scores to Predict Success in a Computer Applications Course

    Science.gov (United States)

    Harris, Robert V.; King, Stephanie B.

    2016-01-01

    The purpose of this study was to see if a relationship existed between American College Testing (ACT) scores (i.e., English, reading, mathematics, science reasoning, and composite) and student success in a computer applications course at a Mississippi community college. The study showed that while the ACT scores were excellent predictors of…

  12. A heuristic model for computational prediction of human branch point sequence.

    Science.gov (United States)

    Wen, Jia; Wang, Jue; Zhang, Qing; Guo, Dianjing

    2017-10-24

    Pre-mRNA splicing is the removal of introns from precursor mRNAs (pre-mRNAs) and the concurrent ligation of the flanking exons to generate mature mRNA. This process is catalyzed by the spliceosome, where the splicing factor 1 (SF1) specifically recognizes the seven-nucleotide branch point sequence (BPS) and the U2 snRNP later displaces the SF1 and binds to the BPS. In mammals, the degeneracy of BPS motifs together with the lack of a large set of experimentally verified BPSs complicates the task of BPS prediction in silico. In this paper, we develop a simple yet efficient heuristic model for human BPS prediction based on a novel scoring scheme, which quantifies the splicing strength of putative BPSs. The candidate BPS is restricted exclusively to a defined BPS search region to avoid the influence of other elements in the intron, and the prediction accuracy is thereby improved. Moreover, using two types of relative frequencies for human BPS prediction, we demonstrate that our model outperforms other current implementations on experimentally verified human introns. We propose that the binding energy contributes to the molecular recognition involved in human pre-mRNA splicing. In addition, a genome-wide human BPS prediction is carried out. The characteristics of the predicted BPSs are in accordance with experimentally verified human BPSs, and branch site positions relative to the 3'ss and the 5' end of the shortened AGEZ are consistent with the results of published papers. A web server for the BPS predictor is freely available at http://biocomputer.bio.cuhk.edu.hk/BPS.
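
    The scan-and-score idea behind such a heuristic BPS model can be sketched as follows, assuming a hypothetical position-specific frequency table for the seven-nucleotide motif and a uniform background; the paper's actual scoring scheme, which uses two types of relative frequencies, is not reproduced here.

```python
# Simplified branch-point scoring sketch: scan 7-mers in a window upstream of the 3' splice
# site and score them with log relative frequencies. The paper's scoring scheme differs in
# detail; this only illustrates the scan-and-score idea with made-up frequencies.
import math

# Hypothetical position-specific base frequencies for a 7-nt BPS motif (columns sum to 1).
motif_freq = [
    {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3},    # pos 1
    {"A": 0.2, "C": 0.3, "G": 0.1, "T": 0.4},    # pos 2
    {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},    # pos 3
    {"A": 0.5, "C": 0.2, "G": 0.2, "T": 0.1},    # pos 4
    {"A": 0.9, "C": 0.03, "G": 0.04, "T": 0.03}, # pos 5: branch-point adenosine
    {"A": 0.2, "C": 0.3, "G": 0.2, "T": 0.3},    # pos 6
    {"A": 0.1, "C": 0.4, "G": 0.2, "T": 0.3},    # pos 7
]
background = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}

def score_heptamer(seq):
    return sum(math.log(motif_freq[i][b] / background[b]) for i, b in enumerate(seq))

def best_bps(search_region):
    # Scan every 7-mer in the defined search region and return the best-scoring candidate.
    candidates = [(score_heptamer(search_region[i:i + 7]), i)
                  for i in range(len(search_region) - 6)]
    return max(candidates)

score, pos = best_bps("TTTCTAACTTCCTTTTTTCAG")
print(f"best candidate at offset {pos} with score {score:.2f}")
```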

  13. Computationally Efficient Amplitude Modulated Sinusoidal Audio Coding using Frequency-Domain Linear Prediction

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jensen, Søren Holdt

    2006-01-01

    A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. This is based on a subband processing system, where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, and this shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency...
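
    A generic sketch of frequency-domain linear prediction follows: linear prediction applied to the DCT of a subband signal models its temporal amplitude envelope. This is only an illustration of the principle, with a synthetic amplitude-modulated sinusoid standing in for one subband; the codec structure, envelope reconstruction and quantization of the paper are not reproduced.

```python
# Generic frequency-domain linear prediction (FDLP) sketch: linear prediction applied to the
# DCT of a (sub)band signal approximates its temporal amplitude envelope.
import numpy as np
from scipy.fft import dct
from scipy.linalg import solve_toeplitz

def fdlp_coefficients(x, order=12):
    # Linear prediction in the DCT domain: autocorrelate the DCT coefficients and solve
    # the normal (Yule-Walker) equations for the prediction coefficients.
    c = dct(x, type=2, norm="ortho")
    r = np.correlate(c, c, mode="full")[len(c) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])
    return a

# Toy amplitude-modulated sinusoid as a stand-in for one subband signal.
t = np.arange(2048) / 16000.0
x = (1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 440 * t)
a = fdlp_coefficients(x, order=12)
print("prediction coefficients:", np.round(a, 3))
```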

  14. Prediction of the thickness of the compensator filter in radiation therapy using computational intelligence

    International Nuclear Information System (INIS)

    Dehlaghi, Vahab; Taghipour, Mostafa; Haghparast, Abbas; Roshani, Gholam Hossein; Rezaei, Abbas; Shayesteh, Sajjad Pashootan; Adineh-Vand, Ayoub; Karimi, Gholam Reza

    2015-01-01

    In this study, artificial neural networks (ANNs) and adaptive neuro-fuzzy inference system (ANFIS) are investigated to predict the thickness of the compensator filter in radiation therapy. In the proposed models, the input parameters are field size (S), off-axis distance, and relative dose (D/D0), and the output is the thickness of the compensator. The obtained results show that the proposed ANN and ANFIS models are useful, reliable, and cheap tools to predict the thickness of the compensator filter in intensity-modulated radiation therapy.
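
    A minimal sketch of such a regression setup is shown below, using a small feed-forward network on synthetic (field size, off-axis distance, relative dose) data; the paper's measured dataset and the exact ANN and ANFIS configurations are not reproduced, and the synthetic thickness rule is purely illustrative.

```python
# Hedged sketch: a small feed-forward ANN mapping (field size, off-axis distance, relative
# dose) to compensator thickness, trained on synthetic data only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500
field_size = rng.uniform(5, 25, n)          # cm
off_axis = rng.uniform(0, 10, n)            # cm
relative_dose = rng.uniform(0.3, 1.0, n)    # D/D0

# Synthetic ground truth: thickness grows as the required attenuation (-ln(D/D0)) grows.
thickness = -np.log(relative_dose) / 0.5 + 0.02 * field_size + 0.01 * off_axis

X = np.column_stack([field_size, off_axis, relative_dose])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0))
model.fit(X, thickness)
print("predicted thickness for S=10, off-axis=2, D/D0=0.6:",
      round(float(model.predict([[10.0, 2.0, 0.6]])[0]), 3))
```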

  15. Fast computational methods for predicting protein structure from primary amino acid sequence

    Science.gov (United States)

    Agarwal, Pratul Kumar [Knoxville, TN

    2011-07-19

    The present invention provides a method utilizing primary amino acid sequence of a protein, energy minimization, molecular dynamics and protein vibrational modes to predict three-dimensional structure of a protein. The present invention also determines possible intermediates in the protein folding pathway. The present invention has important applications to the design of novel drugs as well as protein engineering. The present invention predicts the three-dimensional structure of a protein independent of size of the protein, overcoming a significant limitation in the prior art.

  16. Prediction of the thickness of the compensator filter in radiation therapy using computational intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Dehlaghi, Vahab; Taghipour, Mostafa; Haghparast, Abbas [Department of Biomedical Engineering, Kermanshah University of Medical Sciences, Kermanshah (Iran, Islamic Republic of); Roshani, Gholam Hossein [School of Energy, Kermanshah University of Technology, Kermanshah (Iran, Islamic Republic of); Rezaei, Abbas [Department of Electrical Engineering, Kermanshah University of Technology, Kermanshah (Iran, Islamic Republic of); Shayesteh, Sajjad Pashootan [Department of Biomedical Engineering, Kermanshah University of Medical Sciences, Kermanshah (Iran, Islamic Republic of); Adineh-Vand, Ayoub [Department of Computer Engineering, Islamic Azad University, Kermanshah (Iran, Islamic Republic of); Department of Electrical Engineering, Razi University, Kermanshah (Iran, Islamic Republic of); Karimi, Gholam Reza, E-mail: ghkarimi@razi.ac.ir [Department of Electrical Engineering, Razi University, Kermanshah (Iran, Islamic Republic of)

    2015-04-01

    In this study, artificial neural networks (ANNs) and adaptive neuro-fuzzy inference system (ANFIS) are investigated to predict the thickness of the compensator filter in radiation therapy. In the proposed models, the input parameters are field size (S), off-axis distance, and relative dose (D/D0), and the output is the thickness of the compensator. The obtained results show that the proposed ANN and ANFIS models are useful, reliable, and cheap tools to predict the thickness of the compensator filter in intensity-modulated radiation therapy.

  17. Developing a computer-controlled simulated digestion system to predict the concentration of metabolizable energy of feedstuffs for rooster.

    Science.gov (United States)

    Zhao, F; Ren, L Q; Mi, B M; Tan, H Z; Zhao, J T; Li, H; Zhang, H F; Zhang, Z Y

    2014-04-01

    Four experiments were conducted to evaluate the effectiveness of a computer-controlled simulated digestion system (CCSDS) for predicting apparent metabolizable energy (AME) and true metabolizable energy (TME) using in vitro digestible energy (IVDE) content of feeds for roosters. In Exp. 1, the repeatability of the IVDE assay was tested in corn, wheat, rapeseed meal, and cottonseed meal with 3 assays of each sample and each with 5 replicates of the same sample. In Exp. 2, the additivity of IVDE concentration in corn, soybean meal, and cottonseed meal was tested by comparing determined IVDE values of the complete diet with values predicted from measurements on individual ingredients. In Exp. 3, linear models to predict AME and TME based on IVDE were developed with 16 calibration samples. In Exp. 4, the accuracy of prediction models was tested by the differences between predicted and determined values for AME or TME of 6 ingredients and 4 diets. In Exp. 1, the mean CV of IVDE was 0.88% (range = 0.20 to 2.14%) for corn, wheat, rapeseed meal, and cottonseed meal. No difference in IVDE was observed between 3 assays of an ingredient, indicating that the IVDE assay is repeatable under these conditions. In Exp. 2, minimal differences (<21 kcal/kg) were observed between determined and calculated IVDE of 3 complete diets formulated with corn, soybean meal, and cottonseed meal, demonstrating that the IVDE values are additive in a complete diet. In Exp. 3, linear relationships between AME and IVDE and between TME and IVDE were observed in 16 calibration samples: AME = 1.062 × IVDE - 530 (R² = 0.97, residual standard deviation [RSD] = 146 kcal/kg, P < 0.001) and TME = 1.050 × IVDE - 16 (R² = 0.97, RSD = 148 kcal/kg, P < 0.001). Differences of less than 100 kcal/kg were observed between determined and predicted values in 10 and 9 of the 16 calibration samples for AME and TME, respectively. In Exp. 4, differences of less than 100 kcal/kg between determined and predicted
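
    The reported calibration equations can be applied directly to a measured IVDE value; the helper below simply encodes them (the example IVDE value is illustrative, not taken from the paper).

```python
# Direct application of the regression equations reported above (all values in kcal/kg).
def predict_ame(ivde):
    return 1.062 * ivde - 530

def predict_tme(ivde):
    return 1.050 * ivde - 16

ivde = 3400  # example in vitro digestible energy value (illustrative number)
print(f"AME ≈ {predict_ame(ivde):.0f} kcal/kg, TME ≈ {predict_tme(ivde):.0f} kcal/kg")
```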

  18. Wegener's granulomatosis occurring de novo during pregnancy.

    Science.gov (United States)

    Alfhaily, F; Watts, R; Leather, A

    2009-01-01

    Wegener's granulomatosis (WG) is rarely diagnosed during the reproductive years and uncommonly manifests for the first time during pregnancy. We report a case of de novo WG presenting at 30 weeks gestation with classical symptoms of WG (ENT, pulmonary). The diagnosis was confirmed by radiological, laboratory, and histological investigations. With a multidisciplinary approach, she had a successful vaginal delivery of a healthy baby. She was treated successfully by a combination of steroids, azathioprine and intravenous immunoglobulin in the active phase of disease for induction of remission and by azathioprine and steroids for maintenance of remission. The significant improvement in her symptoms allowed us to continue her pregnancy to 37 weeks when delivery was electively induced. Transplacental transmission of PR3-ANCA occurred but the neonate remained well. This case of de novo WG during pregnancy highlights the seriousness of this disease and the challenge in management of such patients.

  19. Surgical outcome prediction in patients with advanced ovarian cancer using computed tomography scans and intraoperative findings

    Directory of Open Access Journals (Sweden)

    Ha-Jeong Kim

    2014-09-01

    Conclusion: The combination of omental extension to the stomach or spleen and involvement of inguinal or pelvic lymph nodes in preoperative CT scans is considered predictive of suboptimal cytoreduction. These patients may be more appropriately treated with neoadjuvant chemotherapy followed by surgical cytoreduction.

  20. Comparative Analysis of Soft Computing Models in Prediction of Bending Rigidity of Cotton Woven Fabrics

    Science.gov (United States)

    Guruprasad, R.; Behera, B. K.

    2015-10-01

    Quantitative prediction of fabric mechanical properties is an essential requirement for design engineering of textile and apparel products. In this work, the possibility of predicting the bending rigidity of cotton woven fabrics has been explored with the application of Artificial Neural Network (ANN) and two hybrid methodologies, namely Neuro-genetic modeling and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling. For this purpose, a set of cotton woven grey fabrics was desized, scoured and relaxed. The fabrics were then conditioned and tested for bending properties. With the database thus created, a neural network model was first developed using back propagation as the learning algorithm. The second model was developed by applying a hybrid learning strategy, in which a genetic algorithm was first used as a learning algorithm to optimize the number of neurons and connection weights of the neural network. The genetic-algorithm-optimized network structure was then further trained using the back propagation algorithm. In the third model, an ANFIS modeling approach was attempted to map the input-output data. The prediction performances of the models were compared and a sensitivity analysis was reported. The results show that the predictions by the neuro-genetic and ANFIS models were better than those of the back propagation neural network model.

  1. Computer aided simulation for developing a simple model to predict cooling of packaged foods

    DEFF Research Database (Denmark)

    Christensen, Martin Gram; Feyissa, Aberham Hailu; Adler-Nissen, Jens

    A new equation to predict equilibrium temperatures for cooling operations of packaged foods has been deducted from the traditional 1st order solution to Fourier’s heat transfer equations. The equation is analytical in form and only requires measurable parameters, in form of area vs. volume ratio (A...

  2. The Nature and Use of Prediction Skills in a Biological Computer Simulation.

    Science.gov (United States)

    Lavoie, Derrick R.; Good, Ron

    1988-01-01

    Describes mechanisms of thought associated with making predictions. Concludes that successful predictors had high initial knowledge of the subject matter and were formally operational. Unsuccessful predictors had low initial knowledge and were concretely operational. Systematic manipulation, note taking, and higher-level thinking skills were…

  3. Computing confidence and prediction intervals of industrial equipment degradation by bootstrapped support vector regression

    International Nuclear Information System (INIS)

    Lins, Isis Didier; Droguett, Enrique López; Moura, Márcio das Chagas; Zio, Enrico; Jacinto, Carlos Magno

    2015-01-01

    Data-driven learning methods for predicting the evolution of the degradation processes affecting equipment are becoming increasingly attractive in reliability and prognostics applications. Among these, we consider here Support Vector Regression (SVR), which has provided promising results in various applications. Nevertheless, the predictions provided by SVR are point estimates whereas in order to take better informed decisions, an uncertainty assessment should be also carried out. For this, we apply bootstrap to SVR so as to obtain confidence and prediction intervals, without having to make any assumption about probability distributions and with good performance even when only a small data set is available. The bootstrapped SVR is first verified on Monte Carlo experiments and then is applied to a real case study concerning the prediction of degradation of a component from the offshore oil industry. The results obtained indicate that the bootstrapped SVR is a promising tool for providing reliable point and interval estimates, which can inform maintenance-related decisions on degrading components. - Highlights: • Bootstrap (pairs/residuals) and SVR are used as an uncertainty analysis framework. • Numerical experiments are performed to assess accuracy and coverage properties. • More bootstrap replications does not significantly improve performance. • Degradation of equipment of offshore oil wells is estimated by bootstrapped SVR. • Estimates about the scale growth rate can support maintenance-related decisions
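
    A pairs-bootstrap version of this idea can be sketched as follows: refit an SVR on resampled (input, output) pairs and take percentiles of the bootstrap predictions as an approximate confidence band. This is a generic illustration on a toy degradation-like signal, not the study's setup, and it shows only confidence (not prediction) intervals.

```python
# Pairs-bootstrap SVR sketch: refit the SVR on resampled (x, y) pairs and use percentiles
# of the bootstrap predictions as an approximate confidence band. Illustrative only.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 80)
y = 0.5 * x + np.sin(x) + rng.normal(scale=0.3, size=x.size)   # toy degradation signal
X = x.reshape(-1, 1)

B = 200
grid = np.linspace(0, 10, 100).reshape(-1, 1)
boot_preds = np.empty((B, grid.shape[0]))
for b in range(B):
    idx = rng.integers(0, x.size, x.size)                      # resample pairs with replacement
    model = SVR(C=10.0, gamma=0.5).fit(X[idx], y[idx])
    boot_preds[b] = model.predict(grid)

lower, upper = np.percentile(boot_preds, [2.5, 97.5], axis=0)
print("95% confidence band width near x=5:",
      round(float(upper[50] - lower[50]), 3))
```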

  4. Computer simulation for prediction of performance and thermodynamic parameters of high energy materials

    International Nuclear Information System (INIS)

    Muthurajan, H.; Sivabalan, R.; Talawar, M.B.; Asthana, S.N.

    2004-01-01

    A new code, viz. Linear Output Thermodynamic User-friendly Software for Energetic Systems (LOTUSES), developed during this work predicts theoretical performance parameters such as density, detonation factor, velocity of detonation and detonation pressure, as well as thermodynamic properties such as heat of detonation, heat of explosion and volume of gaseous explosion products. The same code also assists in predicting the possible explosive decomposition products after explosion and the power index. The developed code has been validated by calculating the parameters of standard explosives such as TNT, PETN, RDX, and HMX. Theoretically predicted parameters are accurate to within about ±5% deviation. To the best of our knowledge, no such code has been reported in the literature that can predict a wide range of characteristics of known/unknown explosives with minimum input parameters. The code can be used to obtain thermochemical and performance parameters of high energy materials (HEMs) with reasonable accuracy. The code has been developed in Visual Basic with an enhanced Windows environment, and thereby has advantages over conventional codes written in Fortran. The theoretically predicted HEM performance can be printed directly as well as stored in text (.txt), HTML (.htm), Microsoft Word (.doc) or Adobe Acrobat (.pdf) format on the hard disk. The output can also be copied to the clipboard as text, which can be imported or pasted into other software, as with other codes.

  5. Computation of piecewise affine terminal cost functions for model predictive control

    NARCIS (Netherlands)

    Brunner, F.D.; Lazar, M.; Allgöwer, F.; Fränzle, Martin; Lygeros, John

    2014-01-01

    This paper proposes a method for the construction of piecewise affine terminal cost functions for model predictive control (MPC). The terminal cost function is constructed on a predefined partition by solving a linear program for a given piecewise affine system, a stabilizing piecewise affine

  6. Transcription-based prediction of response to IFNbeta using supervised computational methods.

    Directory of Open Access Journals (Sweden)

    Sergio E Baranzini

    2005-01-01

    Full Text Available Changes in cellular functions in response to drug therapy are mediated by specific transcriptional profiles resulting from the induction or repression of the activity of a number of genes, thereby modifying the preexisting gene activity pattern of the drug-targeted cell(s). Recombinant human interferon beta (rIFNbeta) is routinely used to control exacerbations in multiple sclerosis patients with only partial success, mainly because of adverse effects and a relatively large proportion of nonresponders. We applied advanced data-mining and predictive modeling tools to a longitudinal 70-gene expression dataset generated by kinetic reverse-transcription PCR from 52 multiple sclerosis patients treated with rIFNbeta to discover higher-order predictive patterns associated with treatment outcome and to define the molecular footprint that rIFNbeta engraves on peripheral blood mononuclear cells. We identified nine sets of gene triplets whose expression, when tested before the initiation of therapy, can predict the response to interferon beta with up to 86% accuracy. In addition, time-series analysis revealed potential key players involved in a good or poor response to interferon beta. Statistical testing of a random outcome class and tolerance to noise was carried out to establish the robustness of the predictive models. Large-scale kinetic reverse-transcription PCR, coupled with advanced data-mining efforts, can effectively reveal preexisting and drug-induced gene expression signatures associated with therapeutic effects.

  7. Extending and Applying Spartan to Perform Temporal Sensitivity Analyses for Predicting Changes in Influential Biological Pathways in Computational Models.

    Science.gov (United States)

    Alden, Kieran; Timmis, Jon; Andrews, Paul S; Veiga-Fernandes, Henrique; Coles, Mark

    2017-01-01

    Through integrating real time imaging, computational modelling, and statistical analysis approaches, previous work has suggested that the induction of and response to cell adhesion factors is the key initiating pathway in early lymphoid tissue development, in contrast to the previously accepted view that the process is triggered by chemokine mediated cell recruitment. These model derived hypotheses were developed using spartan, an open-source sensitivity analysis toolkit designed to establish and understand the relationship between a computational model and the biological system that model captures. Here, we extend the functionality available in spartan to permit the production of statistical analyses that contrast the behavior exhibited by a computational model at various simulated time-points, enabling a temporal analysis that could suggest whether the influence of biological mechanisms changes over time. We exemplify this extended functionality by using the computational model of lymphoid tissue development as a time-lapse tool. By generating results at twelve-hour intervals, we show how the extensions to spartan have been used to suggest that lymphoid tissue development could be biphasic, and predict the time-point when a switch in the influence of biological mechanisms might occur.

  8. CompNanoTox2015: novel perspectives from a European conference on computational nanotoxicology on predictive nanotoxicology

    Energy Technology Data Exchange (ETDEWEB)

    Bañares, Miguel A. [Instituto de Catalisis y Petroleoquimica, ICP, CSIC, Madrid, Spain,; Haase, Andrea [German Federal Institute for Risk Assessment (BfR), Department of Chemical and Product Safety, Berlin, Germany,; Tran, Lang [Statistics and Toxicology Section, Institute of Occupational Medicine, Edinburgh, UK,; Lobaskin, Vladimir [School of Physics, University College Dublin, Dublin, Ireland,; Oberdörster, Günter [Department of Environmental Medicine, University of Rochester, Rochester, NY, USA,; Rallo, Robert [Departament d’Enginyeria Informatica i Matematiques, Universitat Rovira i Virgili, Tarragona, Spain,; Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, Richland, WA, USA,; Leszczynski, Jerzy [Interdisciplinary Nanotoxicity Center, Jackson State University, Jackson, MS, USA,; Hoet, Peter [Public Health, Unit Lung Toxicology, K. U. Leuven, Leuven, Belgium,; Korenstein, Rafi [Faculty of Medicine, Tel-Aviv University, Tel Aviv, Israel,; Hardy, Barry [Technology Park Basel, Douglas Connect GmbH, Basel, Switzerland,; Puzyn, Tomasz [Laboratory of Environmental Chemometrics, Faculty of Chemistry, University of Gdansk, Gdansk, Poland

    2017-08-09

    A first European Conference on Computational Nanotoxicology, CompNanoTox, was held in November 2015 in Benahavís, Spain with the objectives to disseminate and integrate results from the European modeling and database projects (NanoPUZZLES, ModENPTox, PreNanoTox, MembraneNanoPart, MODERN, eNanoMapper and EU COST TD1204 MODENA) as well as to create synergies within the European NanoSafety Cluster. This conference was supported by the COST Action TD1204 MODENA on developing computational methods for toxicological risk assessment of engineered nanoparticles and provided a unique opportunity for cross-fertilization among complementary disciplines. The efforts to develop and validate computational models crucially depend on high quality experimental data and relevant assays, which will be the basis for identifying relevant descriptors. The ambitious overarching goal of this conference was to promote predictive nanotoxicology, which can only be achieved by a close collaboration between the computational scientists (e.g. database experts, modeling experts for structure, (eco)toxicological effects, performance and interaction of nanomaterials) and experimentalists from different areas (in particular toxicologists, biologists, chemists and material scientists, among others). The main outcomes and new perspectives of this conference are summarized here.

  9. Validating computationally predicted TMS stimulation areas using direct electrical stimulation in patients with brain tumors near precentral regions.

    Science.gov (United States)

    Opitz, Alexander; Zafar, Noman; Bockermann, Volker; Rohde, Veit; Paulus, Walter

    2014-01-01

    The spatial extent of transcranial magnetic stimulation (TMS) is of paramount interest for all studies employing this method. It is generally assumed that the induced electric field is the crucial parameter to determine which cortical regions are excited. While it is difficult to directly measure the electric field, one usually relies on computational models to estimate the electric field distribution. Direct electrical stimulation (DES) is a local brain stimulation method generally considered the gold standard to map structure-function relationships in the brain. Its application is typically limited to patients undergoing brain surgery. In this study we compare the computationally predicted stimulation area in TMS with the DES area in six patients with tumors near precentral regions. We combine a motor evoked potential (MEP) mapping experiment for both TMS and DES with realistic individual finite element method (FEM) simulations of the electric field distribution during TMS and DES. On average, stimulation areas in TMS and DES show an overlap of up to 80%, thus validating our computational physiology approach to estimate TMS excitation volumes. Our results can help in understanding the spatial spread of TMS effects and in optimizing stimulation protocols to more specifically target certain cortical regions based on computational modeling.

  10. Effect of computational grid on accurate prediction of a wind turbine rotor using delayed detached-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bangga, Galih; Weihing, Pascal; Lutz, Thorsten; Krämer, Ewald [University of Stuttgart, Stuttgart (Germany)

    2017-05-15

    The present study focuses on the impact of the computational grid on accurate prediction of the MEXICO rotor under stalled conditions. Two different blade mesh topologies, O and C-H meshes, and two different grid resolutions are tested for several time step sizes. The simulations are carried out using Delayed detached-eddy simulation (DDES) with two eddy viscosity RANS turbulence models, namely Spalart-Allmaras (SA) and Menter Shear stress transport (SST) k-ω. A high order spatial discretization, the WENO (Weighted essentially non-oscillatory) scheme, is used in these computations. The results are validated against measurement data with regard to the sectional loads and the chordwise pressure distributions. The C-H mesh topology is observed to give the best results employing the SST k-ω turbulence model, but the computational cost is more expensive as the grid contains a wake block that increases the number of cells.

  11. Predicting Transport of 3,5,6-Trichloro-2-Pyridinol Into Saliva Using a Combination Experimental and Computational Approach

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Jordan Ned; Carver, Zana A.; Weber, Thomas J.; Timchalk, Charles

    2017-04-11

    A combination experimental and computational approach was developed to predict chemical transport into saliva. A serous-acinar chemical transport assay was established to measure chemical transport with non-physiological (standard cell culture medium) and physiological (using surrogate plasma and saliva medium) conditions using 3,5,6-trichloro-2-pyridinol (TCPy), a metabolite of the pesticide chlorpyrifos. High levels of TCPy protein binding were observed in cell culture medium and rat plasma, resulting in different TCPy transport behaviors in the two experimental conditions. In the non-physiological transport experiment, TCPy reached equilibrium at equivalent concentrations in apical and basolateral chambers. At higher TCPy doses, increased unbound TCPy was observed, and TCPy concentrations in apical and basolateral chambers reached equilibrium faster than at lower doses, suggesting only unbound TCPy is able to cross the cellular monolayer. In the physiological experiment, TCPy transport was slower than under non-physiological conditions, and equilibrium was achieved at different concentrations in apical and basolateral chambers at a comparable ratio (0.034) to what was previously measured in rats dosed with TCPy (saliva:blood ratio: 0.049). A cellular transport computational model was developed based on TCPy protein binding kinetics and accurately simulated all transport experiments using different permeability coefficients for the two experimental conditions (1.4 vs 0.4 cm/hr for non-physiological and physiological experiments, respectively). The computational model was integrated into a physiologically based pharmacokinetic (PBPK) model and accurately predicted TCPy concentrations in saliva of rats dosed with TCPy. Overall, this study demonstrates an approach to predict chemical transport in saliva, potentially increasing the utility of salivary biomonitoring in the future.

  12. Predicting Transport of 3,5,6-Trichloro-2-Pyridinol Into Saliva Using a Combination Experimental and Computational Approach.

    Science.gov (United States)

    Smith, Jordan Ned; Carver, Zana A; Weber, Thomas J; Timchalk, Charles

    2017-06-01

    A combination experimental and computational approach was developed to predict chemical transport into saliva. A serous-acinar chemical transport assay was established to measure chemical transport with nonphysiological (standard cell culture medium) and physiological (using surrogate plasma and saliva medium) conditions using 3,5,6-trichloro-2-pyridinol (TCPy) a metabolite of the pesticide chlorpyrifos. High levels of TCPy protein binding were observed in cell culture medium and rat plasma resulting in different TCPy transport behaviors in the 2 experimental conditions. In the nonphysiological transport experiment, TCPy reached equilibrium at equivalent concentrations in apical and basolateral chambers. At higher TCPy doses, increased unbound TCPy was observed, and TCPy concentrations in apical and basolateral chambers reached equilibrium faster than lower doses, suggesting only unbound TCPy is able to cross the cellular monolayer. In the physiological experiment, TCPy transport was slower than nonphysiological conditions, and equilibrium was achieved at different concentrations in apical and basolateral chambers at a comparable ratio (0.034) to what was previously measured in rats dosed with TCPy (saliva:blood ratio: 0.049). A cellular transport computational model was developed based on TCPy protein binding kinetics and simulated all transport experiments reasonably well using different permeability coefficients for the 2 experimental conditions (1.14 vs 0.4 cm/h for nonphysiological and physiological experiments, respectively). The computational model was integrated into a physiologically based pharmacokinetic model and accurately predicted TCPy concentrations in saliva of rats dosed with TCPy. Overall, this study demonstrates an approach to predict chemical transport in saliva, potentially increasing the utility of salivary biomonitoring in the future. © The Author 2017. Published by Oxford University Press on behalf of the Society of Toxicology. All rights

  13. De novo identification of viral pathogens from cell culture hologenomes

    Directory of Open Access Journals (Sweden)

    Patowary Ashok

    2012-01-01

    Full Text Available Abstract Background Fast, specific identification and surveillance of pathogens is the cornerstone of any outbreak response system, especially in the case of emerging infectious diseases and viral epidemics. This process is generally tedious and time-consuming, thus making it ineffective in traditional settings. The added complexity in these situations is the non-availability of pure isolates of pathogens, as they are present as mixed genomes or hologenomes. Next-generation sequencing approaches offer an attractive solution in this scenario as they provide adequate depth of sequencing at fast and affordable costs, apart from making it possible to decipher complex interactions between genomes at a scale that was not possible before. The widespread application of next-generation sequencing in this field has been limited by the non-availability of an efficient computational pipeline to systematically analyze data to delineate pathogen genomes from a mixed population of genomes or hologenomes. Findings We applied next-generation sequencing to a sample containing a mixed population of genomes from an epidemic, with appropriate processing and enrichment. The data were analyzed using an extensive computational pipeline involving mapping to reference genome sets and de novo assembly. In-depth analysis of the data generated revealed the presence of sequences corresponding to Japanese encephalitis virus. The genome of the virus was also independently de novo assembled. The presence of the virus was additionally verified using standard molecular biology techniques. Conclusions Our approach can accurately identify causative pathogens from cell culture hologenome samples containing mixed populations of genomes and in principle can be applied to patient hologenome samples without any background information. This methodology could be widely applied to identify and isolate pathogen genomes and understand their genomic variability during outbreaks.

  14. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  15. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  1. COMETHE III J a computer code for predicting mechanical and thermal behaviour of a fuel pin

    International Nuclear Information System (INIS)

    Verbeek, P.; Hoppe, N.

    1976-01-01

    The design of fuel pins for power reactors requires a realistic evaluation of their thermal and mechanical performances throughout their irradiation life. This evaluation involves the knowledge of a number of parameters, very intricate and interconnected, for example, the temperature, the restructuring and the swelling rates of the fuel pellets, the dimensions, the stresses and the strains in the clad, the composition and the properties of gases, the inner gas pressure etc. This complex problem can only be properly handled by a computer programme which analyses the fuel pin thermal and mechanical behaviour at successive steps of its irradiation life. This report presents an overall description of the COMETHE III-J computer programme, designed to calculate the integral performance of oxide fuel pins with cylindrical metallic cladding irradiated in thermal or fast flux. (author)

  2. Computation of Unsteady Flow in Flame Trench For Prediction of Ignition Overpressure Waves

    Science.gov (United States)

    Kwak, Dochan; Kris, Cetin

    2010-01-01

    Computational processes and issues in supporting mission tasks are discussed using an example from launch environment simulation. The entire CFD process is discussed using an existing code. STS-124 conditions were revisited to support the wall repair effort for the STS-125 flight; when water bags were not included, the computed results indicate that the peak IOP waves were reflected from the SRB's own exhaust hole. ARES-1X simulations show that a shock wave passes through the unused exhaust hole, but it plays a secondary role. All three ARES-1X cases and the STS-1 simulations showed very similar IOP magnitudes and patterns on the vehicle. The addition of water bags and water injection will further diminish the IOP effects.

  3. Identification of a novel Plasmopara halstedii elicitor protein combining de novo peptide sequencing algorithms and RACE-PCR

    Directory of Open Access Journals (Sweden)

    Madlung Johannes

    2010-05-01

    Full Text Available Abstract Background Often, high-quality MS/MS spectra of tryptic peptides do not match any database entry because of only partially sequenced genomes, and therefore protein identification requires de novo peptide sequencing. To achieve protein identification in the economically important but still unsequenced plant pathogenic oomycete Plasmopara halstedii, we first evaluated the performance of three different de novo peptide sequencing algorithms applied to protein digests of standard proteins using a quadrupole TOF (QStar Pulsar i). Results The performance order of the algorithms was PEAKS online > PepNovo > CompNovo. In summary, PEAKS online correctly predicted 45% of the measured peptides for a protein test data set. All three de novo peptide sequencing algorithms were used to identify MS/MS spectra of tryptic peptides of an unknown 57 kDa protein of P. halstedii. We found ten de novo sequenced peptides that showed homology to a Phytophthora infestans protein, an organism closely related to P. halstedii. Employing a second, complementary approach, peptide prediction and protein identification were verified by the creation of degenerate primers for RACE-PCR, which led to an ORF of 1,589 bp for a hypothetical phosphoenolpyruvate carboxykinase. Conclusions Our study demonstrated that identification of proteins from minute amounts of sample material improves significantly when sensitive LC-MS methods are combined with different de novo peptide sequencing algorithms. In addition, this is the first study that verified protein prediction from MS data by also employing a second, complementary approach, in which RACE-PCR led to the identification of a novel elicitor protein in P. halstedii.

  4. Application of Soft Computing Tools for Wave Prediction at Specific Locations in the Arabian Sea Using Moored Buoy Observations

    Directory of Open Access Journals (Sweden)

    J. Vimala

    2012-12-01

    Full Text Available The knowledge of design and operational values of significant wave heights is perhaps the single most important input needed in ocean engineering studies. Conventionally, such information is obtained using classical statistical analysis and stochastic methods. As the causative variables are innumerable and the underlying physics is too complicated, the results obtained from numerical models may not always be very satisfactory. Soft computing tools like the Artificial Neural Network (ANN) and the Adaptive Network-based Fuzzy Inference System (ANFIS) may therefore be useful to predict significant wave heights in some situations. The study is aimed at forecasting significant wave height values in real time over a period of 24 hrs at certain locations in Indian seas using ANN and ANFIS models. The data for the work were collected by the National Institute of Ocean Technology, Chennai. It was found that both methods can predict wave heights with comparable efficiency and satisfactory results.

  5. An intelligent and secure system for predicting and preventing Zika virus outbreak using Fog computing

    Science.gov (United States)

    Sareen, Sanjay; Gupta, Sunil Kumar; Sood, Sandeep K.

    2017-10-01

    Zika virus is a mosquito-borne disease that spreads very quickly in different parts of the world. In this article, we propose a system to prevent and control the spread of Zika virus disease using an integration of Fog computing, cloud computing, mobile phones and Internet of Things (IoT)-based sensor devices. Fog computing is used as an intermediary layer between the cloud and end users to reduce the latency time and the extra communication cost that is usually high in cloud-based systems. A fuzzy k-nearest neighbour classifier is used to diagnose possibly infected users, and the Google Maps web service is used to provide geographic positioning system (GPS)-based risk assessment to prevent the outbreak. It is used to represent each Zika virus (ZikaV)-infected user, mosquito-dense sites and breeding sites on the map, which helps government healthcare authorities to control such risk-prone areas effectively and efficiently. The proposed system is deployed on the Amazon EC2 cloud to evaluate its performance and accuracy using a data set of 2 million users. Our system provides a high accuracy of 94.5% for the initial diagnosis of different users according to their symptoms and appropriate GPS-based risk assessment.
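
    A minimal sketch of a fuzzy k-nearest-neighbour classifier of the kind mentioned above is given below. The symptom encoding, labels and parameters are invented for illustration; this is not the authors' implementation.

```python
# Illustrative fuzzy k-nearest-neighbour classification (crisp training
# labels, Keller-style distance weighting). Feature vectors stand in for
# encoded user symptoms; labels mark possibly infected users.
import numpy as np

def fuzzy_knn_memberships(X_train, y_train, x, k=5, m=2.0):
    """Return class-membership degrees of sample x among the k neighbours."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1.0))  # distance weights
    memberships = {}
    for c in np.unique(y_train):
        memberships[int(c)] = float(np.sum(w * (y_train[idx] == c)) / np.sum(w))
    return memberships

# Toy example with two symptom features (hypothetical encoding).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9], [0.85, 0.7]])
y = np.array([0, 0, 1, 1, 1])            # 0 = healthy, 1 = possibly infected
print(fuzzy_knn_memberships(X, y, np.array([0.75, 0.8]), k=3))
```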

  6. De novo transcriptome assembly of shrimp Palaemon serratus

    Directory of Open Access Journals (Sweden)

    Alejandra Perina

    2017-03-01

    Full Text Available The shrimp Palaemon serratus is a coastal decapod crustacean with a high commercial value, harvested for human consumption. In this study, we used Illumina sequencing technology (HiSeq 2000) to sequence, assemble and annotate the transcriptome of P. serratus. RNA was isolated from the muscle of adult individuals and from a pool of larvae. A total of four cDNA libraries were constructed using the TruSeq RNA Sample Preparation Kit v2. The raw data in this study were deposited in the NCBI SRA database under study accession number SRP090769. The obtained data were subjected to de novo transcriptome assembly using the Trinity software, and coding regions were predicted by TransDecoder. We used Blastp and Sma3s to annotate the identified proteins. The transcriptome data could provide some insight into the genes involved in larval development and metamorphosis.

  7. De novo assembly of a haplotype-resolved human genome.

    Science.gov (United States)

    Cao, Hongzhi; Wu, Honglong; Luo, Ruibang; Huang, Shujia; Sun, Yuhui; Tong, Xin; Xie, Yinlong; Liu, Binghang; Yang, Hailong; Zheng, Hancheng; Li, Jian; Li, Bo; Wang, Yu; Yang, Fang; Sun, Peng; Liu, Siyang; Gao, Peng; Huang, Haodong; Sun, Jing; Chen, Dan; He, Guangzhu; Huang, Weihua; Huang, Zheng; Li, Yue; Tellier, Laurent C A M; Liu, Xiao; Feng, Qiang; Xu, Xun; Zhang, Xiuqing; Bolund, Lars; Krogh, Anders; Kristiansen, Karsten; Drmanac, Radoje; Drmanac, Snezana; Nielsen, Rasmus; Li, Songgang; Wang, Jian; Yang, Huanming; Li, Yingrui; Wong, Gane Ka-Shu; Wang, Jun

    2015-06-01

    The human genome is diploid, and knowledge of the variants on each chromosome is important for the interpretation of genomic information. Here we report the assembly of a haplotype-resolved diploid genome without using a reference genome. Our pipeline relies on fosmid pooling together with whole-genome shotgun strategies, based solely on next-generation sequencing and hierarchical assembly methods. We applied our sequencing method to the genome of an Asian individual and generated a 5.15-Gb assembled genome with a haplotype N50 of 484 kb. Our analysis identified previously undetected indels and 7.49 Mb of novel coding sequences that could not be aligned to the human reference genome, which include at least six predicted genes. This haplotype-resolved genome represents the most complete de novo human genome assembly to date. Application of our approach to identify individual haplotype differences should aid in translating genotypes to phenotypes for the development of personalized medicine.

  8. Visuo-motor coordination ability predicts performance with brain-computer interfaces controlled by modulation of sensorimotor rhythms (SMR)

    Directory of Open Access Journals (Sweden)

    Eva Maria Hammer

    2014-08-01

    Full Text Available Modulation of sensorimotor rhythms (SMR) has been suggested as a control signal for brain-computer interfaces (BCI). Yet there is a population of users, estimated at between 10 and 50%, who are not able to achieve reliable control, and only about 20% of users achieve high (80-100%) performance. Predicting performance prior to BCI use would facilitate selection of the most feasible system for an individual, thus constituting a practical benefit for the user, and would increase our knowledge about the correlates of BCI control. In a recent study, we predicted SMR-BCI performance from psychological variables that were assessed prior to the BCI sessions, where BCI control was supported with machine-learning techniques. We described two significant psychological predictors, namely visuo-motor coordination ability and the ability to concentrate on the task. The purpose of the current study was to replicate these results, thereby validating these predictors within a neurofeedback-based SMR-BCI that involved no machine learning. Thirty-three healthy BCI novices participated in a calibration session and three further neurofeedback training sessions. Two variables were related to mean SMR-BCI performance: (1) a measure of the accuracy of fine motor skills, i.e. a proxy for a person's visuo-motor control ability, and (2) the subject's attentional impulsivity. In a linear regression they accounted for almost 20% of the variance in SMR-BCI performance, but predictor (1) failed significance. Nevertheless, on the basis of our prior regression model for sensorimotor control ability, we could predict current SMR-BCI performance with an average prediction error of M = 12.07%. In more than 50% of the participants, the prediction error was smaller than 10%. Hence, psychological variables played a moderate role in predicting SMR-BCI performance in a neurofeedback approach that involved no machine learning. Future studies are needed to further consolidate (or reject) the present predictors.
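
    The two-predictor linear regression described above can be sketched as follows. The data are synthetic and the variable names are placeholders for the study's psychological scales; the sketch only shows the mechanics of fitting the model and computing a percentage prediction error.

```python
# Sketch of a two-predictor linear regression relating psychological
# variables to SMR-BCI performance. Data are synthetic; coefficients and
# noise levels are assumptions, not the study's estimates.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 33                                           # sample size as in the study
visuomotor = rng.normal(0, 1, n)                 # fine motor accuracy measure
impulsivity = rng.normal(0, 1, n)                # attentional impulsivity
performance = 70 + 4 * visuomotor - 3 * impulsivity + rng.normal(0, 8, n)

X = np.column_stack([visuomotor, impulsivity])
reg = LinearRegression().fit(X, performance)

pred = reg.predict(X)
abs_err = np.abs(pred - performance)             # error in performance points (%)
print("R^2:", round(reg.score(X, performance), 2))
print("mean prediction error (%):", round(abs_err.mean(), 2))
print("share of errors < 10%:", round(float(np.mean(abs_err < 10)), 2))
```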

  9. Computational methods using weighed-extreme learning machine to predict protein self-interactions with protein evolutionary information.

    Science.gov (United States)

    An, Ji-Yong; Zhang, Lei; Zhou, Yong; Zhao, Yu-Jun; Wang, Da-Fu

    2017-08-18

    Self-interacting proteins (SIPs) are important for their biological activity owing to the inherent interaction among their secondary structures or domains. However, because of the limitations of experimental self-interaction detection, one major challenge in the study of SIP prediction is how to exploit computational approaches for SIP detection based on the evolutionary information contained in the protein sequence. In this work, we present a novel computational approach named WELM-LAG, which combines the Weighed-Extreme Learning Machine (WELM) classifier with Local Average Group (LAG) features to predict SIPs from the protein sequence. The major improvement of our method lies in an effective feature extraction method used to represent candidate self-interacting proteins by exploring the evolutionary information embedded in the PSI-BLAST-constructed position-specific scoring matrix (PSSM), and in employing a reliable and robust WELM classifier to carry out classification. In addition, the Principal Component Analysis (PCA) approach is used to reduce the impact of noise. The WELM-LAG method gave very high average accuracies of 92.94 and 96.74% on yeast and human datasets, respectively. Meanwhile, we compared it with the state-of-the-art support vector machine (SVM) classifier and other existing methods on the human and yeast datasets. Comparative results indicated that our approach is very promising and may provide a cost-effective alternative for predicting SIPs. In addition, we developed a freely available web server called WELM-LAG-SIPs to predict SIPs. The web server is available at http://219.219.62.123:8888/WELMLAG/ .
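
    The following sketch outlines the general shape of such a pipeline: PSSM-derived features, PCA for noise reduction, and a class-weighted extreme learning machine (random hidden layer plus a weighted ridge solution for the output weights). It is an illustration of the method class under assumed dimensions and weighting, not the WELM-LAG code.

```python
# Rough sketch of a PSSM-feature + PCA + weighted ELM pipeline.
# Feature dimensions, hidden-layer size and class weighting are assumptions.
import numpy as np
from sklearn.decomposition import PCA

def train_welm(X, y, n_hidden=200, C=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    # Per-sample weights: up-weight the minority class (few true SIPs).
    w = np.where(y == 1, (y == 0).sum() / max((y == 1).sum(), 1), 1.0)
    Hw = H * w[:, None]
    beta = np.linalg.solve(H.T @ Hw + np.eye(n_hidden) / C, H.T @ (w * y))
    return (W, b, beta)

def predict_welm(model, X):
    W, b, beta = model
    return (np.tanh(X @ W + b) @ beta > 0.5).astype(int)

# Synthetic stand-in for PSSM-derived feature vectors (400-dimensional, say).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 400))
y = (rng.random(500) < 0.1).astype(int)
X_red = PCA(n_components=50).fit_transform(X)      # noise-reduction step
model = train_welm(X_red, y)
print("training accuracy:", (predict_welm(model, X_red) == y).mean())
```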

  10. Quantitative computed tomography for the prediction of pulmonary function after lung cancer surgery: a simple method using simulation software.

    Science.gov (United States)

    Ueda, Kazuhiro; Tanaka, Toshiki; Li, Tao-Sheng; Tanaka, Nobuyuki; Hamano, Kimikazu

    2009-03-01

    The prediction of pulmonary functional reserve is mandatory in therapeutic decision-making for patients with resectable lung cancer, especially those with underlying lung disease. Volumetric analysis in combination with densitometric analysis of the affected lung lobe or segment with quantitative computed tomography (CT) helps to identify residual pulmonary function, although the utility of this modality needs investigation. The subjects of this prospective study were 30 patients with resectable lung cancer. A three-dimensional CT lung model was created with voxels representing normal lung attenuation (-600 to -910 Hounsfield units). Residual pulmonary function was predicted by drawing a boundary line between the lung to be preserved and that to be resected, directly on the lung model. The predicted values were correlated with the postoperatively measured values. The predicted and measured values corresponded well (r=0.89). This simple method thus predicted pulmonary function after lung cancer surgery and helped to identify patients whose functional reserves are likely to be underestimated. Hence, this modality should be utilized for patients with marginal pulmonary function.
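
    The voxel-counting idea behind this quantitative-CT method can be sketched as follows: count voxels with normal lung attenuation (-600 to -910 HU) in the lung region to be preserved and scale the preoperative function by that fraction. The CT volume, the masks and the FEV1 value below are synthetic placeholders, not patient data.

```python
# Minimal sketch of the quantitative-CT idea: fraction of "normal lung"
# attenuation voxels inside the preserved region versus the whole lung,
# used to scale a preoperative function value. All inputs are synthetic.
import numpy as np

rng = np.random.default_rng(0)
ct_hu = rng.integers(-1000, 100, size=(64, 64, 64))      # fake CT volume in HU
whole_lung = np.ones(ct_hu.shape, dtype=bool)             # lung segmentation mask
preserved = np.zeros(ct_hu.shape, dtype=bool)
preserved[:, :, :48] = True                                # region left after resection

functional = (ct_hu <= -600) & (ct_hu >= -910)             # normal-attenuation voxels
fraction_preserved = (functional & preserved).sum() / functional[whole_lung].sum()

preop_fev1 = 2.4                                           # litres, hypothetical value
print("predicted postoperative FEV1 (L):", round(preop_fev1 * fraction_preserved, 2))
```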

  11. De novo structural modeling and computational sequence analysis ...

    African Journals Online (AJOL)

    Jane

    2011-07-25

    Jul 25, 2011 ... fold recognition and ab initio protein structures, classification of structural motifs and ... stringent cross-validation method to evaluate the method's performance ...

  12. Computational and experimental prediction of dust production in pebble bed reactors, Part II

    Energy Technology Data Exchange (ETDEWEB)

    Hiruta, Mie; Johnson, Gannon [Department of Mechanical Engineering, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83401 (United States); Rostamian, Maziar, E-mail: mrostamian@asme.org [Department of Mechanical Engineering, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83401 (United States); Potirniche, Gabriel P. [Department of Mechanical Engineering, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83401 (United States); Ougouag, Abderrafi M. [Idaho National Laboratory, 2525 N Fremont Avenue, Idaho Falls, ID 83401 (United States); Bertino, Massimo; Franzel, Louis [Department of Physics, Virginia Commonwealth University, Richmond, VA 23284 (United States); Tokuhiro, Akira [Department of Mechanical Engineering, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83401 (United States)

    2013-10-15

    Highlights: • A custom-built high-temperature, high-pressure tribometer is designed. • Two different wear phenomena at high temperatures are observed. • Experimental wear results for graphite are presented. • The graphite wear dust production in a typical Pebble Bed Reactor is predicted. -- Abstract: This paper is the continuation of Part I, which describes high-temperature, high-pressure helium-environment wear tests of graphite–graphite frictional contact. In the present work, the Pebble Bed Reactor core environment is simulated more closely than in Part I. The experimental apparatus, a custom-designed tribometer, is capable of performing wear tests at PBR-relevant temperatures and pressures under a helium environment, which allows the wear mass loss of graphite as dust particulates from the pebble bed to be predicted. The experimental results in the high-temperature helium environment are used to estimate the amount of wear dust produced in a pebble-bed nuclear reactor.

  13. An experimental and computational investigation of electrical resistivity imaging for prediction ahead of tunnel boring machines

    Science.gov (United States)

    Schaeffer, Kevin P.

    Tunnel boring machines (TBMs) are routinely used for the excavation of tunnels across a range of ground conditions, from hard rock to soft ground. In complex ground conditions and in urban environments, the TBM is susceptible to damage because of uncertainty about what lies ahead of the tunnel face. The research presented here explores the application of electrical resistivity theory in the TBM tunneling environment to detect changing conditions ahead of the machine. Electrical resistivity offers a real-time and continuous imaging solution to increase the resolution of information along the tunnel alignment and may even unveil previously unknown geologic or man-made features ahead of the TBM. The studies presented herein break down the tunneling environment and the electrical system to understand how their fundamental parameters can be isolated and tested, identifying how they influence the ability to predict changes ahead of the tunnel face. A proof-of-concept, scaled experimental model was constructed in order to assess its ability to predict a metal pipe (or rod) ahead of the face as the TBM excavates through saturated sand. The model shows that a prediction of up to three tunnel diameters could be achieved, but the unique presence of the pipe (or rod) could not be concluded with certainty. Full-scale finite element models were developed in order to evaluate the various influences on the ability to detect changing conditions ahead of the face. Results show that TBM/tunnel geometry, TBM type, and electrode geometry can drastically influence prediction ahead of the face by tens of meters. In certain conditions (i.e., small TBM diameter, low cover depth, large material contrasts), changes can be detected over 100 meters in front of the TBM. Various electrode arrays were considered and show that, in order to better detect more finite differences (e.g., boulder, lens, pipe), the use of individual cutting tools as electrodes is highly advantageous to increase spatial ...

  14. Spliceman2: a computational web server that predicts defects in pre-mRNA splicing.

    Science.gov (United States)

    Cygan, Kamil Jan; Sanford, Clayton Hendrick; Fairbrother, William Guy

    2017-09-15

    Most pre-mRNA transcripts in eukaryotic cells must undergo splicing to remove introns and join exons, and splicing elements present a large mutational target for disease-causing mutations. Splicing elements are strongly position dependent with respect to the transcript annotations. In 2012, we presented Spliceman, an online tool that used positional dependence to predict how likely distant mutations around annotated splice sites were to disrupt splicing. Here, we present an improved version of the previous tool that will be more useful for predicting the likelihood of splicing mutations. We have added industry-standard input options (i.e. Spliceman now accepts variant call format files), which allow much larger inputs than previously possible. The tool can also visualize the locations, within exons and introns, of the sequence variants to be analyzed and the predicted effects on splicing of the pre-mRNA transcript. In addition, Spliceman2 integrates with RNAcompete motif libraries to provide a prediction of which trans-acting factor binding sites are disrupted or created, and links out to the UCSC genome browser. In summary, the new features in Spliceman2 will allow scientists and physicians to better understand the effects of single nucleotide variations on splicing. Freely available on the web at http://fairbrother.biomed.brown.edu/spliceman2 . Website implemented in the PHP framework Laravel 5, PostgreSQL, Apache, and Perl, with all major browsers supported. william_fairbrother@brown.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  15. Quantitative structure activity relationship for the computational prediction of nitrocompounds carcinogenicity

    International Nuclear Information System (INIS)

    Morales, Aliuska Helguera; Perez, Miguel Angel Cabrera; Combes, Robert D.; Gonzalez, Maykel Perez

    2006-01-01

    Several nitrocompounds have been screened for carcinogenicity in rodents, but this is a lengthy and expensive process, taking two years, typically costing 2.5 million dollars, and using large numbers of animals. There is, therefore, much impetus to develop suitable alternative methods. One possible way of predicting carcinogenicity is to use quantitative structure-activity relationships (QSARs). QSARs have been widely utilized for toxicity testing, thereby contributing to a reduction in the need for experimental animals. This paper describes the results of applying a TOPological substructural molecular design (TOPS-MODE) approach to predicting the rodent carcinogenicity of nitrocompounds. The model described 79.10% of the experimental variance, with a standard deviation of 0.424. The predictive power of the model was validated by leave-one-out cross-validation, with a determination coefficient of 0.666. In addition, this approach enabled the contribution of different fragments to carcinogenic potency to be assessed, thereby making the relationships between structure and carcinogenicity transparent. It was found that the carcinogenic activity of the chemicals analysed was increased by the presence of a primary amine group bonded to the aromatic ring, in a manner proportional to the ring's aromaticity. A nitro group bonded to an aromatic carbon atom is a more important determinant of carcinogenicity than a nitro group bonded to an aliphatic carbon. Finally, the TOPS-MODE approach was compared with four other predictive models, none of which could explain more than 66% of the variance in the carcinogenic potency with the same number of variables.

  16. Prediction of oral disintegration time of fast disintegrating tablets using texture analyzer and computational optimization.

    Science.gov (United States)

    Szakonyi, G; Zelkó, R

    2013-05-20

    One of the promising approaches to predicting the in vivo disintegration time of orally disintegrating tablets (ODT) is the use of a texture analyzer instrument. Once the method is able to provide a good in vitro-in vivo correlation (IVIVC) for different tablets, it might be able to predict the oral disintegration time of similar products. However, there are many tablet parameters that influence the in vivo and the in vitro disintegration time of ODT products. Therefore, the measured in vitro and in vivo disintegration times can occasionally differ, even if they coincide for most of the investigated products, and the in vivo disintegration times may also change if the target patient group suffers from a particular illness. If the method is no longer able to provide a good IVIVC, then the modification of a single instrumental parameter may not be successful, and the in vitro method must be re-set in a complex manner in order to provide satisfactory results. In the present experiment, an optimization process based on texture analysis measurements was developed using five different tablets in order to predict their in vivo disintegration times, and the optimized texture analysis method was evaluated using independent tablets. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. FREC-4A: a computer program to predict fuel rod performance under normal reactor operation

    International Nuclear Information System (INIS)

    Harayama, Yasuo; Izumi, Fumio

    1981-10-01

    The program FREC-4A (Fuel Reliability Evaluation Code-version 4A) is used for predicting fuel rod performance in normal reactor operation. The performance is calculated in accordance with the irradiation history of the fuel rods. Emphasis is placed on the prediction of the axial elongation of claddings induced by pellet-cladding mechanical interaction, including the influence of the initially preloaded springs inserted in the fuel rod lower plenums. In FREC-4A, a fuel rod is divided into axial segments. In each segment, it is assumed that the temperature, stress and strain are axi-symmetrical, and that the axial strain is constant in the fuel pellets and in the cladding, though the values in the pellets and in the cladding differ. The calculation of the contact load and the clearance along the length of a fuel rod, and of the stress and strain in each segment, is explained. The method adopted in FREC-4A is simple and suitable for predicting the deformation of fuel rods over their full length. This report describes the outline of the program, the method of solving the stiffness equations, the calculation models, the input data such as irradiation history, output distribution, material properties and pores, and the printing-out of input data and calculated results. (Kako, I.)

  18. The Prediction of Bandwidth On Need Computer Network Through Artificial Neural Network Method of Backpropagation

    Directory of Open Access Journals (Sweden)

    Ikhthison Mekongga

    2014-02-01

    Full Text Available The need for bandwidth has been increasing recently. This is because the development of internet infrastructure is also increasing, so an economical and efficient provider system is needed. This can be achieved through good planning and a proper system. The prediction of bandwidth consumption is one of the factors that support the planning of an efficient internet service provider system. Bandwidth consumption is predicted using an ANN. An ANN is an information processing system which has characteristics similar to those of a biological neural network. The ANN is chosen to predict bandwidth consumption because it approximates non-linearity well. The variable used in the ANN is the historical load data. A bandwidth consumption information system was built using neural networks with a backpropagation algorithm to make the use of bandwidth more efficient in the future, both in the rental rate of the bandwidth and in the usage of the bandwidth. Keywords: Forecasting, Bandwidth, Backpropagation
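
    A toy backpropagation network for one-step-ahead bandwidth forecasting from historical load is sketched below. The synthetic load series, network size and learning rate are assumptions made for illustration; they do not reproduce the authors' network.

```python
# Toy backpropagation sketch for one-step-ahead bandwidth forecasting from
# historical load (synthetic data; a stand-in for the ANN described above).
import numpy as np

rng = np.random.default_rng(0)
load = 50 + 20 * np.sin(np.arange(2000) / 24) + rng.normal(0, 2, 2000)   # Mbps
load = (load - load.min()) / (load.max() - load.min())                   # scale to [0, 1]

lags = 24
X = np.array([load[t - lags:t] for t in range(lags, len(load) - 1)])
y = load[lags + 1:]

W1 = rng.normal(0, 0.1, (lags, 10)); b1 = np.zeros(10)
W2 = rng.normal(0, 0.1, (10, 1));    b2 = np.zeros(1)
lr = 0.05
for epoch in range(200):
    h = np.tanh(X @ W1 + b1)                     # forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    # backward pass (mean-squared-error gradients)
    gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y);           gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
print("final training MSE:", round(float(np.mean(err ** 2)), 5))
```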

  19. Computational prediction of the spectroscopic parameters of methanediol, an elusive molecule for interstellar detection

    Energy Technology Data Exchange (ETDEWEB)

    Barrientos, Carmen; Redondo, Pilar; Largo, Antonio [Departamento de Química Física y Química Inorgánica, Facultad de Ciencias, Universidad de Valladolid, Campus Miguel Delibes, Paseo de Belén 7, E-47011 Valladolid (Spain); Martínez, Henar, E-mail: alargo@qf.uva.es [Departamento de Química Orgánica, Escuela de Ingenierías Industriales, Universidad de Valladolid, Campus Esgueva, Paseo del Cauce 59, E-47011 Valladolid (Spain)

    2014-04-01

    The molecular structure of methanediol has been investigated by means of quantum chemical calculations. Two conformers, of C2 and Cs symmetry, respectively, were considered. The C2 conformer is found to lie about 1.7 (at 298 K) or 2.3 (at 0 K) kcal mol-1 below the Cs conformer. Predictions for their rotational constants, vibrational frequencies, IR intensities, and dipole moments are provided. The lowest-lying isomer has a very low dipole moment, around 0.03 D, whereas the Cs conformer has a relatively high dipole moment, namely 2.7 D. The barrier for the Cs → C2 process is predicted to be around 0.7-1 kcal mol-1. Based on the energetic results, the proportion of the Cs conformer is likely to be negligible under low-temperature conditions, such as in the interstellar medium. Therefore, it is predicted that detection of methanediol by radioastronomy would be rather unlikely.
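
    The claim that the Cs conformer should be negligible at low temperature follows from a two-state Boltzmann estimate, sketched below with the energy gaps quoted above (1.7 kcal/mol at 298 K, 2.3 kcal/mol near 0 K) and an assumed cold-cloud temperature of 10 K.

```python
# Back-of-the-envelope check: Boltzmann population of the higher-lying Cs
# conformer relative to C2. The 10 K temperature is an assumed value typical
# of cold interstellar clouds, not taken from the paper.
import math

R = 1.987e-3  # gas constant in kcal mol^-1 K^-1

def cs_fraction(delta_e_kcal: float, temperature_k: float) -> float:
    """Two-state Boltzmann fraction of the higher-energy (Cs) conformer."""
    boltz = math.exp(-delta_e_kcal / (R * temperature_k))
    return boltz / (1.0 + boltz)

print("Cs fraction at 298 K:", round(cs_fraction(1.7, 298.0), 3))   # roughly 5%
print("Cs fraction at 10 K: ", cs_fraction(2.3, 10.0))               # vanishingly small
```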

  20. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  1. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  2. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  3. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  4. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and the samples were run through the High Level Trigger (HLT). The data were then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  7. Prediction of effectiveness of shunting in patients with normal pressure hydrocephalus by cerebral blood flow measurement and computed tomography cisternography

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chia-Cheng; Kuwana, Nobumasa; Ito, Susumu; Ikegami, Tadashi [Yokohama Minami Kyosai Hospital (Japan)

    1999-11-01

    Measurement of cerebral blood flow (CBF) and computed tomography (CT) cisternography were performed in 37 patients with a tentative diagnosis of normal pressure hydrocephalus (NPH) to predict their surgical outcome. The mean CBF of the whole brain was measured quantitatively by single photon emission computed tomography with technetium-99m-hexamethylpropylene amine oxime before surgery. The results of CT cisternography were classified into four patterns: type I, no ventricular stasis at 24 hours; type II, no ventricular stasis with delayed clearance of cerebral blush; type III, persistent ventricular stasis with prominent cerebral blush; type IV, persistent ventricular stasis with diminished cerebral blush and/or asymmetrical filling of the sylvian fissures. The mean CBF was significantly lower than that of age-matched controls (p<0.005). Patients with a favorable outcome had a significantly higher mean CBF than patients with an unfavorable outcome (p<0.005). Patients with the type I pattern did not respond to shunting. Some patients with type II and III patterns responded to shunting but improvement was unsatisfactory. Patients with the type IV pattern responded well to shunting, and those with a mean CBF of 35 ml/100 g/min or over achieved a favorable outcome. The combination of CBF measurement and CT cisternography can improve the prediction of surgical outcome in patients with suspected NPH. (author)

  8. Prediction of effectiveness of shunting in patients with normal pressure hydrocephalus by cerebral blood flow measurement and computed tomography cisternography

    International Nuclear Information System (INIS)

    Chang, Chia-Cheng; Kuwana, Nobumasa; Ito, Susumu; Ikegami, Tadashi

    1999-01-01

    Measurement of cerebral blood flow (CBF) and computed tomography (CT) cisternography were performed in 37 patients with a tentative diagnosis of normal pressure hydrocephalus (NPH) to predict their surgical outcome. The mean CBF of the whole brain was measured quantitatively by single photon emission computed tomography with technetium-99m-hexamethylpropylene amine oxime before surgery. The results of CT cisternography were classified into four patterns: type I, no ventricular stasis at 24 hours; type II, no ventricular stasis with delayed clearance of cerebral blush; type III, persistent ventricular stasis with prominent cerebral blush; type IV, persistent ventricular stasis with diminished cerebral blush and/or asymmetrical filling of the sylvian fissures. The mean CBF was significantly lower than that of age-matched controls (p<0.005). Patients with a favorable outcome had a significantly higher mean CBF than patients with an unfavorable outcome (p<0.005). Patients with the type I pattern did not respond to shunting. Some patients with type II and III patterns responded to shunting but improvement was unsatisfactory. Patients with the type IV pattern responded well to shunting, and those with a mean CBF of 35 ml/100 g/min or over achieved a favorable outcome. The combination of CBF measurement and CT cisternography can improve the prediction of surgical outcome in patients with suspected NPH. (author)

  9. Development of computer program ENMASK for prediction of residual environmental masking-noise spectra, from any three independent environmental parameters

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Y.-S.; Liebich, R. E.; Chun, K. C.

    2000-03-31

    Residual environmental sound can mask intrusive (unwanted) sound. It is a factor that can affect noise impacts and must be considered both in noise-impact studies and in noise-mitigation designs. Models for quantitative prediction of sensation level (audibility) and psychological effects of intrusive noise require an input with 1/3 octave-band spectral resolution of the environmental masking noise. However, the majority of published residual environmental masking-noise data are given with either octave-band frequency resolution or only single A-weighted decibel values. A model has been developed that enables estimation of 1/3 octave-band residual environmental masking-noise spectra and relates certain environmental parameters to the A-weighted sound level. This model provides a correlation among three environmental conditions: measured residual A-weighted sound-pressure level, proximity to a major roadway, and population density. Cited field-study data were used to compute the most probable 1/3 octave-band sound-pressure spectrum corresponding to any selected one of these three inputs. In turn, such spectra can be used as input to models for prediction of noise impacts. This paper discusses the specific algorithms included in the newly developed computer program ENMASK. In addition, the relative audibility of the environmental masking-noise spectra at different A-weighted sound levels is discussed, which is determined by using the methodology of program ENAUDIBL.

  10. Computational Analysis of Epidermal Growth Factor Receptor Mutations Predicts Differential Drug Sensitivity Profiles toward Kinase Inhibitors.

    Science.gov (United States)

    Akula, Sravani; Kamasani, Swapna; Sivan, Sree Kanth; Manga, Vijjulatha; Vudem, Dashavantha Reddy; Kancha, Rama Krishna

    2018-05-01

    A significant proportion of patients with lung cancer carry mutations in the EGFR kinase domain. The presence of a deletion mutation in exon 19 or L858R point mutation in the EGFR kinase domain has been shown to cause enhanced efficacy of inhibitor treatment in patients with NSCLC. Several less frequent (uncommon) mutations in the EGFR kinase domain with potential implications in treatment response have also been reported. The role of a limited number of uncommon mutations in drug sensitivity was experimentally verified. However, a huge number of these mutations remain uncharacterized for inhibitor sensitivity or resistance. A large-scale computational analysis of clinically reported 298 point mutants of EGFR kinase domain has been performed, and drug sensitivity profiles for each mutant toward seven kinase inhibitors has been determined by molecular docking. In addition, the relative inhibitor binding affinity toward each drug as compared with that of adenosine triphosphate was calculated for each mutant. The inhibitor sensitivity profiles predicted in this study for a set of previously characterized mutants correlated well with the published clinical, experimental, and computational data. Both the single and compound mutations displayed differential inhibitor sensitivity toward first- and next-generation kinase inhibitors. The present study provides predicted drug sensitivity profiles for a large panel of uncommon EGFR mutations toward multiple inhibitors, which may help clinicians in deciding mutant-specific treatment strategies. Copyright © 2018 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.

  11. Progress Toward Analytic Predictions of Supersonic Hydrocarbon-Air Combustion: Computation of Ignition Times and Supersonic Mixing Layers

    Science.gov (United States)

    Sexton, Scott Michael

    Combustion in scramjet engines is faced with the limitation of brief residence time in the combustion chamber, requiring fuel and preheated air streams to mix and ignite in a matter of milliseconds. Accurate predictions of autoignition times are needed to design reliable supersonic combustion chambers. Most efforts in estimating non-premixed autoignition times have been devoted to hydrogen-air mixtures. The present work addresses hydrocarbon-air combustion, which is of interest for future scramjet engines. Computation of ignition in supersonic flows requires adequate characterization of ignition chemistry and description of the flow, both of which are derived in this work. In particular, we have shown that activation energy asymptotics combined with a previously derived reduced chemical kinetic mechanism provides analytic predictions of autoignition times in homogeneous systems. Results are compared with data from shock tube experiments, and previous expressions which employ a fuel depletion criterion. Ignition in scramjet engines has a strong dependence on temperature, which is found by perturbing the chemically frozen mixing layer solution. The frozen solution is obtained here, accounting for effects of viscous dissipation between the fuel and air streams. We investigate variations of thermodynamic and transport properties, and compare these to simplified mixing layers which neglect these variations. Numerically integrating the mixing layer problem reveals a nonmonotonic temperature profile, with a peak occurring inside the shear layer for sufficiently high Mach numbers. These results will be essential in computation of ignition distances in supersonic combustion chambers.

  12. Computational multi-fluid dynamics predictions of critical heat flux in boiling flow

    International Nuclear Information System (INIS)

    Mimouni, S.; Baudry, C.; Guingo, M.; Lavieville, J.; Merigoux, N.; Mechitoua, N.

    2016-01-01

    Highlights: • A new mechanistic model dedicated to DNB has been implemented in the Neptune_CFD code. • The model has been validated against 150 tests. • Neptune_CFD code is a CFD tool dedicated to boiling flows. - Abstract: Extensive efforts have been made in the last five decades to evaluate the boiling heat transfer coefficient and the critical heat flux in particular. Boiling crisis remains a major limiting phenomenon for the analysis of operation and safety of both nuclear reactors and conventional thermal power systems. As a consequence, models dedicated to boiling flows have been improved. For example, the Reynolds Stress Transport Model, polydispersion and a two-phase flow wall law have recently been implemented. In a previous work, we evaluated computational fluid dynamics results against single-phase liquid water tests equipped with a mixing vane and against two-phase boiling cases. The objective of this paper is to propose a new mechanistic model in a computational multi-fluid dynamics tool leading to wall temperature excursion and the onset of boiling crisis. The critical heat flux is calculated against 150 tests, and the mean relative error between calculations and experimental values is equal to 8.3%. The model tested covers a large physics scope in terms of mass flux, pressure, quality and channel diameter. Water and the R12 refrigerant fluid are considered. Furthermore, it was found that the sensitivity to the grid refinement was acceptable.

  13. Computational multi-fluid dynamics predictions of critical heat flux in boiling flow

    Energy Technology Data Exchange (ETDEWEB)

    Mimouni, S., E-mail: stephane.mimouni@edf.fr; Baudry, C.; Guingo, M.; Lavieville, J.; Merigoux, N.; Mechitoua, N.

    2016-04-01

    Highlights: • A new mechanistic model dedicated to DNB has been implemented in the Neptune-CFD code. • The model has been validated against 150 tests. • Neptune-CFD code is a CFD tool dedicated to boiling flows. - Abstract: Extensive efforts have been made in the last five decades to evaluate the boiling heat transfer coefficient and the critical heat flux in particular. Boiling crisis remains a major limiting phenomenon for the analysis of operation and safety of both nuclear reactors and conventional thermal power systems. As a consequence, models dedicated to boiling flows have been improved. For example, the Reynolds Stress Transport Model, polydispersion and a two-phase flow wall law have recently been implemented. In a previous work, we evaluated computational fluid dynamics results against single-phase liquid water tests equipped with a mixing vane and against two-phase boiling cases. The objective of this paper is to propose a new mechanistic model in a computational multi-fluid dynamics tool leading to wall temperature excursion and the onset of boiling crisis. The critical heat flux is calculated against 150 tests, and the mean relative error between calculations and experimental values is equal to 8.3%. The model tested covers a large physics scope in terms of mass flux, pressure, quality and channel diameter. Water and the R12 refrigerant fluid are considered. Furthermore, it was found that the sensitivity to the grid refinement was acceptable.

  14. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fitting, assessing and adjusting a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set, and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9%, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
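
    The flavour of such an a priori strategy comparison can be sketched as follows: two logistic modelling strategies (weak versus strong ridge shrinkage) compared by cross-validated Brier score on synthetic data. The data, shrinkage levels and fold count are assumptions; the paper's framework compares strategies by their likelihoods rather than by this exact score.

```python
# Sketch of comparing two logistic modelling strategies on the same data
# before committing to one. Synthetic data; settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import brier_score_loss

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

strategies = {
    "weak shrinkage (C=1e6)":  LogisticRegression(C=1e6, max_iter=5000),
    "ridge shrinkage (C=0.1)": LogisticRegression(C=0.1, max_iter=5000),
}
for name, model in strategies.items():
    p = cross_val_predict(model, X, y, cv=10, method="predict_proba")[:, 1]
    print(f"{name}: Brier score = {brier_score_loss(y, p):.3f}")
```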

  15. Computational Methods to Predict the Regioselectivity of Electrophilic Aromatic Substitution Reactions of Heteroaromatic Systems

    DEFF Research Database (Denmark)

    Kruszyk, Monika; Jessing, Mikkel; Kristensen, Jesper L

    2016-01-01

    The validity of calculated NMR shifts to predict the outcome of electrophilic aromatic substitution reactions on different heterocyclic compounds has been examined. Based on an analysis of >130 literature examples, it was found that the lowest calculated 13C and/or 1H chemical shift of a heterocycle ... correlates qualitatively with the regiochemical outcome of halogenation reactions in >80% of the investigated cases. In the remaining cases, the site of electrophilic aromatic substitution can be explained by the calculated HOMO orbitals obtained using density functional theory. Using a combination...
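
    The decision rule described above reduces to picking the ring position with the lowest calculated chemical shift, as in the toy snippet below; the shift values are made-up placeholders, not computed data from the paper.

```python
# Toy illustration of the regioselectivity rule: the CH position with the
# lowest calculated shift is the predicted halogenation site.
calc_shifts_13c = {"C2": 142.1, "C3": 107.4, "C4": 121.9, "C5": 118.0}  # ppm, invented

predicted_site = min(calc_shifts_13c, key=calc_shifts_13c.get)
print("predicted halogenation site:", predicted_site)
```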

  16. A prognostic scoring model for survival after locoregional therapy in de novo stage IV breast cancer.

    Science.gov (United States)

    Kommalapati, Anuhya; Tella, Sri Harsha; Goyal, Gaurav; Ganti, Apar Kishor; Krishnamurthy, Jairam; Tandra, Pavan Kumar

    2018-05-02

    The role of locoregional treatment (LRT) remains controversial in de novo stage IV breast cancer (BC). We sought to analyze the role of LRT and the prognostic factors of overall survival (OS) in de novo stage IV BC patients treated with LRT, utilizing the National Cancer Data Base (NCDB). The objective of the current study is to create and internally validate a prognostic scoring model to predict long-term OS for de novo stage IV BC patients treated with LRT. We included de novo stage IV BC patients reported to the NCDB between 2004 and 2015. Patients were divided into LRT and no-LRT subsets. We randomized the LRT subset into training and validation cohorts. In the training cohort, a seventeen-point prognostic scoring system was developed based on the hazard ratios calculated using the Cox proportional hazards method. We stratified both the training and validation cohorts into two groups [group 1 (0-7 points) and group 2 (7-17 points)]. The Kaplan-Meier method and log-rank test were used to compare OS between the two groups. Our prognostic score was validated internally by comparing the OS between the respective groups in the training and validation cohorts. Among 67,978 patients, the LRT subset (21,200) had better median OS than the no-LRT subset (45 vs. 24 months; p < 0.0001). Group 1 and group 2 in the training cohort showed a significant difference in 3-year OS (68 vs. 26%; p < 0.0001). On internal validation, comparable OS was seen between the respective groups in each cohort (p = 0.77). Our prognostic scoring system will help oncologists to predict the prognosis of de novo stage IV BC patients treated with LRT. Although firm treatment-related conclusions cannot be made because of the retrospective nature of the study, LRT appears to be associated with better OS in specific subgroups.
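
    Applying a point-based score of this kind amounts to summing points per risk factor and thresholding at 7, as sketched below. The factors and point values are invented placeholders; the published seventeen-point system is not reproduced here.

```python
# Illustrative point-based prognostic scoring and stratification.
# The factor-to-points mapping is hypothetical, not the study's model.
POINTS = {
    "triple_negative": 5,
    "visceral_metastases": 4,
    "grade_3": 3,
    "age_over_70": 2,
    "no_systemic_therapy": 3,
}

def prognostic_group(patient: dict) -> str:
    score = sum(points for factor, points in POINTS.items() if patient.get(factor))
    return "group 1 (better prognosis)" if score <= 7 else "group 2 (worse prognosis)"

patient = {"triple_negative": False, "visceral_metastases": True, "grade_3": True}
print(prognostic_group(patient))   # 4 + 3 = 7 points -> group 1
```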

  17. Automated de novo phasing and model building of coiled-coil proteins.

    Science.gov (United States)

    Rämisch, Sebastian; Lizatović, Robert; André, Ingemar

    2015-03-01

    Models generated by de novo structure prediction can be very useful starting points for molecular replacement for systems where suitable structural homologues cannot be readily identified. Protein-protein complexes and de novo-designed proteins are examples of systems that can be challenging to phase. In this study, the potential of de novo models of protein complexes for use as starting points for molecular replacement is investigated. The approach is demonstrated using homomeric coiled-coil proteins, which are excellent model systems for oligomeric systems. Despite the stereotypical fold of coiled coils, initial phase estimation can be difficult and many structures have to be solved with experimental phasing. A method was developed for automatic structure determination of homomeric coiled coils from X-ray diffraction data. In a benchmark set of 24 coiled coils, ranging from dimers to pentamers with resolutions down to 2.5 Å, 22 systems were automatically solved, 11 of which had previously been solved by experimental phasing. The generated models contained 71-103% of the residues present in the deposited structures, had the correct sequence and had free R values that deviated on average by 0.01 from those of the respective reference structures. The electron-density maps were of sufficient quality that only minor manual editing was necessary to produce final structures. The method, named CCsolve, combines methods for de novo structure prediction, initial phase estimation and automated model building into one pipeline. CCsolve is robust against errors in the initial models and can readily be modified to make use of alternative crystallographic software. The results demonstrate the feasibility of de novo phasing of protein-protein complexes, an approach that could also be employed for other small systems beyond coiled coils.

  18. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    Science.gov (United States)

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes were compared with those of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
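
    The power-fit ingredient of the general linear model can be illustrated by fitting log(V_t) against log(V_0) across patients, as in the sketch below. The volumes are synthetic stand-ins for the daily CT contours, and the fitted coefficients are not those reported in the study.

```python
# Sketch of the "power fit" relationship between daily and initial tumour
# volumes: V_t ~ a_t * V_0^b_t, fitted in log-log space. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
v0 = rng.uniform(5.0, 60.0, 35)                        # initial volumes (cm^3)
day = 15
v_t = 0.9 * v0 ** 0.95 * np.exp(rng.normal(0, 0.05, v0.size))  # day-15 volumes

# Least-squares fit of log(v_t) = log(a) + b * log(v0)
b, log_a = np.polyfit(np.log(v0), np.log(v_t), 1)
a = np.exp(log_a)
print(f"day {day}: V_t ~ {a:.2f} * V0^{b:.2f}")

# Predict the day-15 volume of a new tumour with V0 = 20 cm^3
print("predicted volume:", round(a * 20.0 ** b, 1), "cm^3")
```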

  19. Ice–ocean coupled computations for sea-ice prediction to support ice navigation in Arctic sea routes

    Directory of Open Access Journals (Sweden)

    Liyanarachchi Waruna Arampath De Silva

    2015-11-01

    Full Text Available With the recent rapid decrease in summer sea ice in the Arctic Ocean extending the navigation period in the Arctic sea routes (ASR), the precise prediction of ice distribution is crucial for safe and efficient navigation in the Arctic Ocean. In general, however, most of the available numerical models have exhibited significant uncertainties in short-term and narrow-area predictions, especially in marginal ice zones such as the ASR. In this study, we predict short-term sea-ice conditions in the ASR by using a mesoscale eddy-resolving ice–ocean coupled model that explicitly treats ice floe collisions in marginal ice zones. First, numerical issues associated with collision rheology in the ice–ocean coupled model (ice–Princeton Ocean Model [POM]) are discussed and resolved. A model for the whole of the Arctic Ocean with a coarser resolution (about 25 km) was developed to investigate the performance of the ice–POM model by examining the reproducibility of seasonal and interannual sea-ice variability. It was found that this coarser-resolution model can reproduce seasonal and interannual sea-ice variations compared to observations, but it cannot be used to predict variations over the short term, such as one to two weeks. Therefore, second, high-resolution (about 2.5 km) regional models were set up along the ASR to investigate the accuracy of short-term sea-ice predictions. High-resolution computations were able to reasonably reproduce the sea-ice extent compared to Advanced Microwave Scanning Radiometer–Earth Observing System satellite observations because of the improved expression of the ice–albedo feedback process and the ice–eddy interaction process.

  20. Validating computational predictions of night-time ventilation in Stanford's Y2E2 building

    Science.gov (United States)

    Chen, Chen; Lamberti, Giacomo; Gorle, Catherine

    2017-11-01

    Natural ventilation can significantly reduce building energy consumption, but robust design is a challenging task. We previously presented predictions of natural ventilation performance in Stanford's Y2E2 building using two models with different levels of fidelity, embedded in an uncertainty quantification framework to identify the dominant uncertain parameters and predict quantified confidence intervals. The results showed a slightly high cooling rate for the volume-averaged temperature, and the initial thermal mass temperature and the window discharge coefficients were found to have an important influence on the results. To further investigate the potential role of these parameters in the observed discrepancies, the current study performs additional measurements in the Y2E2 building. Wall temperatures are recorded throughout the night flush using thermocouples; flow rates through windows are measured using hotwires; and the spatial variability of the air temperature is explored. The measured wall temperatures are found to be within the range of our model assumptions, and the measured velocities agree reasonably well with our CFD predictions. Considerable local variations in the indoor air temperature have been recorded, largely explaining the discrepancies in our earlier validation study. Future work will therefore focus on a local validation of the CFD results against the measurements. Center for Integrated Facility Engineering (CIFE).

  1. Predictive multiscale computational model of shoe-floor coefficient of friction.

    Science.gov (United States)

    Moghaddam, Seyed Reza M; Acharya, Arjun; Redfern, Mark S; Beschorner, Kurt E

    2018-01-03

Understanding the frictional interactions between the shoe and floor during walking is critical to the prevention of slips and falls, particularly when contaminants are present. A multiscale finite element model of shoe-floor-contaminant friction was developed that takes into account the surface and material characteristics of the shoe and flooring at microscopic and macroscopic scales. The model calculates the shoe-floor coefficient of friction (COF) in the boundary lubrication regime, where the effects of adhesion friction and hydrodynamic pressures are negligible. The validity of the model outputs was assessed by comparing model predictions to the experimental results from mechanical COF testing. The multiscale model estimates were linearly related to the experimental results (p < 0.0001). The model predicted 73% of the variability in experimentally measured shoe-floor-contaminant COF. The results demonstrate the potential of multiscale finite element modeling to aid slip-resistant shoe and flooring design and to reduce slip and fall injuries. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
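
    The validation step described above, regressing experimentally measured COF against the multiscale model estimates and quantifying the fraction of explained variability, can be illustrated with a minimal sketch; the two COF arrays below are placeholders, not data from the study.

        import numpy as np
        from scipy import stats

        # Placeholder arrays: COF predicted by a multiscale model and COF measured
        # mechanically for the same shoe-floor-contaminant conditions.
        cof_model = np.array([0.05, 0.08, 0.12, 0.15, 0.21, 0.25, 0.31])
        cof_measured = np.array([0.06, 0.07, 0.14, 0.13, 0.23, 0.24, 0.33])

        # Linear regression of measured COF on model predictions; r**2 plays the role
        # of the "73% of variability explained" figure quoted in the abstract.
        slope, intercept, r, p_value, stderr = stats.linregress(cof_model, cof_measured)
        print(f"R^2 = {r**2:.2f}, p = {p_value:.2g}")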

  2. Structure Prediction of Outer Membrane Protease Protein of Salmonella typhimurium Using Computational Techniques

    Directory of Open Access Journals (Sweden)

    Rozina Tabassum

    2016-03-01

Full Text Available Salmonella typhimurium, a facultative gram-negative intracellular pathogen belonging to the family Enterobacteriaceae, is the most frequent cause of human gastroenteritis worldwide. The PgtE gene product, an outer membrane protease, is important in the intracellular phases of salmonellosis. The pgtE gene product of S. typhimurium was predicted to be capable of proteolyzing T7 RNA polymerase and to localize in the outer membrane of these gram-negative bacteria. The PgtE product of S. enterica and OmpT of E. coli, which share high sequence similarity, have been shown to degrade macrophages, causing salmonellosis and other diseases. The three-dimensional structure of the protein was not available in the Protein Data Bank (PDB), leaving a lack of structural information about the PgtE protein. In our study, the three-dimensional structure of the outer membrane protease was generated by comparative model building with MODELLER (9v8), using the backbone of the crystal structure of Pla of Yersinia pestis retrieved from the PDB. The quality of the model was assessed with the validation tool PROCHECK, and web servers such as ERRAT and ProSA were used to certify the reliability of the predicted model. This information might offer clues for a better understanding of the PgtE protein and consequently for the development of better therapeutic treatments against the pathogenic role of this protein in salmonellosis and other diseases.
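
    As a rough illustration of the comparative model building described above, a MODELLER run of this kind can be sketched as follows; the alignment file name, entry codes, and number of candidate models are assumptions, not the authors' actual script.

        # Minimal comparative-modelling sketch with MODELLER (assumed inputs: an
        # alignment file 'pgte_pla.ali', a template entry 'pla_template', and the
        # target sequence identifier 'pgte').
        from modeller import *              # canonical MODELLER import style
        from modeller.automodel import *

        env = environ()
        env.io.atom_files_directory = ['.']        # directory holding the template PDB file

        a = automodel(env,
                      alnfile='pgte_pla.ali',      # assumed PgtE-Pla alignment file
                      knowns='pla_template',       # assumed code of the Pla template entry
                      sequence='pgte')             # target sequence to be modelled
        a.starting_model = 1
        a.ending_model = 5                         # build five candidate models
        a.make()
        # Candidate models would then be checked with PROCHECK, ERRAT, and ProSA as in the study.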

  3. A computational model to predict rat ovarian steroid secretion from in vitro experiments with endocrine disruptors.

    Directory of Open Access Journals (Sweden)

    Nadia Quignot

Full Text Available A finely tuned balance between estrogens and androgens controls reproductive functions, and the last step of steroidogenesis plays a key role in maintaining that balance. Environmental toxicants are a serious health concern, and numerous studies have been devoted to studying the effects of endocrine disrupting chemicals (EDCs). The effects of EDCs on steroidogenic enzymes may influence steroid secretion and thus lead to reproductive toxicity. To predict hormonal balance disruption on the basis of data on aromatase activity and mRNA level modulation obtained in vitro on granulosa cells, we developed a mathematical model for the last gonadal steps of the sex steroid synthesis pathway. The model can simulate the ovarian synthesis and secretion of estrone, estradiol, androstenedione, and testosterone, and their response to endocrine disruption. The model is able to predict ovarian sex steroid concentrations during the normal estrous cycle in female rats, and ovarian estradiol concentrations in adult female rats exposed to atrazine, bisphenol A, metabolites of methoxychlor or vinclozolin, and letrozole.
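
    A heavily simplified sketch of such a model for the last steps of the pathway (androstenedione converted to testosterone, and both aromatized toward estrone and estradiol) might look like the following; the rate constants, the production term, and the aromatase-inhibition factor are illustrative placeholders, not the parameters of the published model.

        import numpy as np
        from scipy.integrate import solve_ivp

        # State: [androstenedione, testosterone, estrone, estradiol] in arbitrary units.
        k_17b, k_arom, k_out = 0.8, 0.5, 0.2   # placeholder 17beta-HSD, aromatase, secretion rates
        inhibition = 0.5                       # fraction of aromatase activity left after EDC exposure

        def steroid_rhs(t, y, production=1.0):
            A, T, E1, E2 = y
            arom = k_arom * inhibition
            dA = production - (k_17b + arom + k_out) * A   # androstenedione balance
            dT = k_17b * A - (arom + k_out) * T            # testosterone balance
            dE1 = arom * A - k_out * E1                    # estrone balance
            dE2 = arom * T - k_out * E2                    # estradiol balance
            return [dA, dT, dE1, dE2]

        sol = solve_ivp(steroid_rhs, (0.0, 48.0), [1.0, 0.5, 0.1, 0.1])
        print(sol.y[:, -1])   # approximate steady-state concentrations after 48 h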

  4. An Automatic Prediction of Epileptic Seizures Using Cloud Computing and Wireless Sensor Networks.

    Science.gov (United States)

    Sareen, Sanjay; Sood, Sandeep K; Gupta, Sunil Kumar

    2016-11-01

Epilepsy is one of the most common neurological disorders; it is characterized by the spontaneous and unforeseeable occurrence of seizures. Automatic prediction of seizures can protect patients from accidents and save their lives. In this article, we proposed a mobile-based framework that automatically predicts seizures using the information contained in electroencephalography (EEG) signals. Wireless sensor technology is used to capture the EEG signals of patients. Cloud-based services are used to collect and analyze the EEG data from the patient's mobile phone. The features of the EEG signal are extracted using the fast Walsh-Hadamard transform (FWHT). Higher Order Spectral Analysis (HOSA) is applied to the FWHT coefficients in order to select the feature set relevant to the normal, preictal, and ictal states of seizure. We subsequently exploit the selected features as input to a k-means classifier to detect epileptic seizure states in a reasonable time. The performance of the proposed model is tested on the Amazon EC2 cloud and compared in terms of execution time and accuracy. The findings show that, with the selected HOS-based features, we were able to achieve a classification accuracy of 94.6%.
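
    A bare-bones version of the feature-extraction and clustering chain described above could look like the sketch below; the window length, the use of a Hadamard-matrix product in place of the butterfly FWHT algorithm, and simple higher-order statistics standing in for a full HOSA pipeline are all assumptions.

        import numpy as np
        from scipy.linalg import hadamard
        from scipy.stats import kurtosis, skew
        from sklearn.cluster import KMeans

        def fwht_features(window):
            """Walsh-Hadamard transform of a power-of-two EEG window, reduced to a few
            higher-order statistics as a crude stand-in for HOSA-based selection."""
            n = len(window)
            coeffs = hadamard(n) @ window / n
            return [np.mean(np.abs(coeffs)), np.std(coeffs), skew(coeffs), kurtosis(coeffs)]

        rng = np.random.default_rng(0)
        eeg = rng.standard_normal(256 * 100)          # placeholder EEG trace
        windows = eeg.reshape(-1, 256)                # 256-sample windows
        features = np.array([fwht_features(w) for w in windows])

        # Three clusters as a proxy for the normal, preictal, and ictal states.
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
        print(np.bincount(labels))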

  5. Using X-ray computed tomography to predict carcass leanness in pigs

    International Nuclear Information System (INIS)

Horn, P.; Kover, Gy.; Paszthy, Gy.; Berenyi, E.; Repa, I.; Kovacs, G.

    1996-01-01

Just one hundred years ago, Wilhelm Conrad Röntgen published his paper in Würzburg describing X-rays and their effects, earning him the first Nobel Prize in Physics, presented in 1901. X-ray based diagnostic equipment revolutionized human diagnostics. A new milestone in the development of radiology came when Godfrey Hounsfield proposed to obtain additional anatomical information from a cross-sectional plane of the body by computer-aided mathematical synthesis of an image from X-ray transmission data obtained from many different angles through the plane under consideration. The idea of X-ray Computer Aided Tomography (CAT) was born, and the dramatic development of X-ray CAT imaging technology began, enhancing the efficiency and scope of diagnostics in human medicine. The numbers used to characterize the X-ray absorption in each picture element (pixel) of the CAT image are called ''CT'' or Hounsfield numbers (Hounsfield, 1979). The Nobel Prize was awarded to Hounsfield in 1979. The X-ray Computer Aided Tomography systems and software used were developed for human medical purposes, mainly to detect anatomical and physiological disorders, by all the leading high-tech manufacturers in the world. As early as 1980, the potential of the new technique to be utilized in animal science was first recognized in Norway (Skjervold et al., 1981). In 1982, the Agricultural University of Norway acquired a Siemens Somatom 2 CAT system, and pioneering work started to apply X-ray CAT techniques in animal science. Based on the promising results obtained and published by the ''Norwegian school'' (Sehested, 1984; Vangen, 1984; Allen and Vangen, 1984; Vangen and Standal, 1984), a research project proposal was prepared for the Hungarian Ministry of Agriculture and Food and the World Bank in 1985 to set up a digital imaging center at our Institute. The project was approved in 1986 (Horn, 1991a). The new digital imaging and diagnostic center started its operation in 1990, equipped first with a Siemens

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. A variety of activities are still ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and the success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS installation and its components are now deployed at CERN, adding to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  7. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    Science.gov (United States)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression, and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a modern reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and the absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were produced. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error, and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
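
    The workflow described above (a logarithmic transform of the permeability target, an 80/20 split, and a feed-forward network) can be sketched roughly as follows; the synthetic inputs stand in for the published Arab D dataset, which is not reproduced here.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(1)
        # Placeholder inputs: air porosity, grain density, and three Thomeer-style parameters.
        X = rng.uniform([0.05, 2.6, 0.1, 0.5, 1.0],
                        [0.30, 2.9, 10.0, 3.0, 100.0], size=(500, 5))
        # Synthetic corrected air permeability (mD), roughly log-linear in porosity.
        k_air = 10 ** (8 * X[:, 0] - 1 + 0.1 * rng.standard_normal(500))

        # Log-transform the target and hold out 20% of the data, as in the study.
        X_train, X_test, y_train, y_test = train_test_split(
            X, np.log10(k_air), test_size=0.2, random_state=0)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(20, 10),
                                           max_iter=5000, random_state=0))
        model.fit(X_train, y_train)
        print("R^2 on log10(k):", r2_score(y_test, model.predict(X_test)))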

  8. Enzyme-like replication de novo in a microcontroller environment.

    Science.gov (United States)

    Tangen, Uwe

    2010-01-01

    The desire to start evolution from scratch inside a computer memory is as old as computing. Here we demonstrate how viable computer programs can be established de novo in a Precambrian environment without supplying any specific instantiation, just starting with random bit sequences. These programs are not self-replicators, but act much more like catalysts. The microcontrollers used in the end are the result of a long series of simplifications. The objective of this simplification process was to produce universal machines with a human-readable interface, allowing software and/or hardware evolution to be studied. The power of the instruction set can be modified by introducing a secondary structure-folding mechanism, which is a state machine, allowing nontrivial replication to emerge with an instruction width of only a few bits. This state-machine approach not only attenuates the problems of brittleness and encoding functionality (too few bits available for coding, and too many instructions needed); it also enables the study of hardware evolution as such. Furthermore, the instruction set is sufficiently powerful to permit external signals to be processed. This information-theoretic approach forms one vertex of a triangle alongside artificial cell research and experimental research on the creation of life. Hopefully this work helps develop an understanding of how information—in a similar sense to the account of functional information described by Hazen et al.—is created by evolution and how this information interacts with or is embedded in its physico-chemical environment.

  9. Predicting Short-Term Electricity Demand by Combining the Advantages of ARMA and XGBoost in Fog Computing Environment

    Directory of Open Access Journals (Sweden)

    Chuanbin Li

    2018-01-01

Full Text Available With the rapid development of IoT, the disadvantages of the Cloud framework have been exposed, such as high latency, network congestion, and low reliability. Therefore, the Fog Computing framework has emerged, with an extended Fog Layer between the Cloud and terminals. To address real-time prediction of electricity demand, we propose an approach based on XGBoost and ARMA in a Fog Computing environment. Taking advantage of the Fog Computing framework, we first propose a prototype-based clustering algorithm to divide enterprise users into several categories based on their total electricity consumption; we then propose a model selection approach that analyzes users' historical records of electricity consumption and identifies the most important features. Generally speaking, if the historical records pass the stationarity and white-noise tests, ARMA is used to model the user's electricity consumption as a time series; otherwise, if the records do not pass the tests and discrete features such as weather and whether the day is a weekend are the most important, XGBoost is used. The experimental results show that the proposed approach, combining the advantages of ARMA and XGBoost, is more accurate than the classical models.
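
    The model-selection rule described above can be sketched as follows; the interpretation of the stationarity and white-noise tests (ADF and Ljung-Box with a 0.05 threshold), the ARMA order, and the feature names are assumptions rather than the authors' exact choices.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.stattools import adfuller
        from statsmodels.stats.diagnostic import acorr_ljungbox
        from statsmodels.tsa.arima.model import ARIMA
        from xgboost import XGBRegressor

        def choose_and_fit(consumption, exog_features):
            """Use ARMA when the series is stationary and significantly autocorrelated,
            otherwise fall back to XGBoost on discrete features (weather, weekend, ...)."""
            adf_p = adfuller(consumption)[1]                                   # stationarity test
            lb_p = acorr_ljungbox(consumption, lags=[24])["lb_pvalue"].iloc[0]
            if adf_p < 0.05 and lb_p < 0.05:
                return ARIMA(consumption, order=(2, 0, 1)).fit()               # ARMA(2,1) = ARIMA with d=0
            return XGBRegressor(n_estimators=200).fit(exog_features, consumption)

        # Placeholder hourly consumption for one cluster of enterprise users.
        rng = np.random.default_rng(2)
        idx = pd.date_range("2018-01-01", periods=24 * 60, freq="h")
        demand = pd.Series(100 + 10 * np.sin(np.arange(len(idx)) * 2 * np.pi / 24)
                           + rng.normal(0, 2, len(idx)), index=idx)
        feats = pd.DataFrame({"is_weekend": (idx.dayofweek >= 5).astype(int),
                              "temperature": 5 + 10 * rng.random(len(idx))}, index=idx)
        model = choose_and_fit(demand, feats)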

  10. Development of an international matrix-solver prediction system on a French-Japanese international grid computing environment

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Kushida, Noriyuki; Tatekawa, Takayuki; Teshima, Naoya; Caniou, Yves; Guivarch, Ronan; Dayde, Michel; Ramet, Pierre

    2010-01-01

The 'Research and Development of International Matrix-Solver Prediction System (REDIMPS)' project aimed at improving the TLSE sparse linear algebra expert website by establishing an international grid computing environment between Japan and France. To help users in identifying the best solver or sparse linear algebra tool for their problems, we have developed an interoperable environment between the French and Japanese grid infrastructures (respectively managed by DIET and AEGIS). Two main issues were considered. The first issue is how to submit a job from DIET to AEGIS. The second issue is how to bridge the difference in security between DIET and AEGIS. To overcome these issues, we developed APIs to communicate between the different grid infrastructures by improving the client API of AEGIS. By developing a server daemon program (SeD) of DIET that behaves like an AEGIS user, DIET can call functions in AEGIS: authentication, file transfer, job submission, and so on. To strengthen security, we also developed functionalities to authenticate DIET sites and DIET users in order to access AEGIS computing resources. Through this study, the set of software and computers available within TLSE to find an appropriate solver has been enlarged across France (DIET) and Japan (AEGIS). (author)

  11. Computational methods and implementation of the 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction

    International Nuclear Information System (INIS)

    Aragones, J.M.; Ahnert, C.

    1995-01-01

New computational methods have been developed in our 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction. They improve the accuracy and efficiency of the coupled neutronic-thermalhydraulic solution and extend its scope to provide, mainly, the calculation of: the fission reaction rates at the incore mini-detectors; the responses at the excore detectors (power range); the temperatures at the thermocouple locations; and the in-vessel distribution of the loop cold-leg inlet coolant conditions in the reflector and core channels, and to the hot-leg outlets per loop. The functional capabilities implemented in the extended SIMTRAN code for online utilization include: online surveillance, incore-excore calibration, evaluation of peak power factors and thermal margins, nominal update and cycle follow, prediction of maneuvers, and diagnosis of fast transients and oscillations. The new code has been installed at the Vandellos-II PWR unit in Spain since the startup of its cycle 7 in mid-June 1994. The computational implementation has been performed on HP-700 workstations under the HP-UX Unix system, including the man-machine interfaces for online acquisition of measured data and interactive graphical utilization, written in C and X11. The agreement of the simulated results with the measured data, during the startup tests and the first months of actual operation, is well within the accuracy requirements. The performance and usefulness shown during the testing and demo phase, to be extended along this cycle, have proved that SIMTRAN and its man-machine graphical user interface have the qualities needed for fast, accurate, user-friendly, reliable, detailed, and comprehensive online core surveillance and prediction.

  12. Benchmark studies of computer prediction techniques for equilibrium chemistry and radionuclide transport in groundwater flow

    International Nuclear Information System (INIS)

    Broyd, T.W.

    1988-01-01

A brief review of two recent benchmark exercises is presented. These were separately concerned with the equilibrium chemistry of groundwater and the geosphere migration of radionuclides, and involved the use of a total of 19 computer codes by 11 organisations in Europe and Canada. A similar methodology was followed for each exercise, in that a series of hypothetical test cases was used to explore the limits of each code's application and so provide an overview of current modelling potential. Aspects of the user-friendliness of individual codes were also considered. The benchmark studies have benefited participating organisations by providing a means of verifying current codes, and have provided problem data sets against which future models may be compared. (author)

  13. Optimal dose reduction in computed tomography methodologies predicted from real-time dosimetry

    Science.gov (United States)

    Tien, Christopher Jason

    Over the past two decades, computed tomography (CT) has become an increasingly common and useful medical imaging technique. CT is a noninvasive imaging modality with three-dimensional volumetric viewing abilities, all in sub-millimeter resolution. Recent national scrutiny on radiation dose from medical exams has spearheaded an initiative to reduce dose in CT. This work concentrates on dose reduction of individual exams through two recently-innovated dose reduction techniques: organ dose modulation (ODM) and tube current modulation (TCM). ODM and TCM tailor the phase and amplitude of x-ray current, respectively, used by the CT scanner during the scan. These techniques are unique because they can be used to achieve patient dose reduction without any appreciable loss in image quality. This work details the development of the tools and methods featuring real-time dosimetry which were used to provide pioneering measurements of ODM or TCM in dose reduction for CT.

  14. The use of computational thermodynamics to predict properties of multicomponent materials for nuclear applications

    International Nuclear Information System (INIS)

    Sundman, B.; Gueneau, C.

    2013-01-01

Computational Thermodynamics is based on physically realistic models that describe metallic and oxide crystalline phases as well as the liquid and gas phases in a consistent manner. The models are used to assess experimental and theoretical data for many different materials, and several thermodynamic databases have been developed for steels, ceramics, and semiconductor materials, as well as materials for nuclear applications. Within CEA, long-term work is ongoing to develop a database for the properties of nuclear fuels and structural materials. An overview of the modelling technique is given, together with several examples of applying the database to different problems, both traditional phase diagram calculations and the simulation of phase transformations. The diagrams (Fig. 1, Fig. 2, and Fig. 3) show calculations in the U-Pu-O system. (authors)

  15. [Adverse Effect Predictions Based on Computational Toxicology Techniques and Large-scale Databases].

    Science.gov (United States)

    Uesawa, Yoshihiro

    2018-01-01

Understanding the features of chemical structures related to the adverse effects of drugs is useful for identifying potential adverse effects of new drugs. This can be based on the limited information available from post-marketing surveillance, assessment of the potential toxicities of metabolites and illegal drugs with unclear characteristics, screening of lead compounds at the drug discovery stage, and identification of leads for the discovery of new pharmacological mechanisms. The present paper describes techniques used in computational toxicology to investigate the content of large-scale spontaneous-report databases of adverse effects, illustrated with examples. Furthermore, volcano plotting, a new visualization method for clarifying the relationships between drugs and adverse effects via comprehensive analyses, will be introduced. These analyses may produce a great amount of data that can be applied to drug repositioning.
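
    Volcano plotting in this context typically places an association measure from the spontaneous-report database (for example, a log reporting odds ratio) on the horizontal axis against statistical significance on the vertical axis; a minimal sketch with placeholder contingency counts is shown below.

        import numpy as np
        from scipy.stats import fisher_exact
        import matplotlib.pyplot as plt

        # Placeholder 2x2 counts per drug-adverse-event pair: (reports with drug and
        # event, drug without event, event without drug, neither).
        pairs = {"drugA-rash": (30, 970, 200, 98800),
                 "drugB-nausea": (5, 995, 400, 98600),
                 "drugC-qt_prolongation": (60, 940, 150, 98850)}

        x, y, names = [], [], []
        for name, (a, b, c, d) in pairs.items():
            odds_ratio, p = fisher_exact([[a, b], [c, d]])
            x.append(np.log2(odds_ratio))   # effect size: log reporting odds ratio
            y.append(-np.log10(p))          # significance
            names.append(name)

        plt.scatter(x, y)
        for xi, yi, name in zip(x, y, names):
            plt.annotate(name, (xi, yi))
        plt.xlabel("log2 reporting odds ratio")
        plt.ylabel("-log10 p-value (Fisher's exact test)")
        plt.show()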

  16. Predicting the Noise of High Power Fluid Targets Using Computational Fluid Dynamics

    Science.gov (United States)

    Moore, Michael; Covrig Dusa, Silviu

The 2.5 kW liquid hydrogen (LH2) target used in the Qweak parity violation experiment is the highest-power LH2 target in the world and the first to be designed with Computational Fluid Dynamics (CFD) at Jefferson Lab. The Qweak experiment determined the weak charge of the proton by measuring the parity-violating elastic scattering asymmetry of longitudinally polarized electrons from unpolarized liquid hydrogen at small momentum transfer (Q2 = 0.025 GeV2). This target satisfied its design goals, and the CFD simulations were benchmarked with the Qweak target data. This work is an essential component of future designs of very high-power, low-noise targets like MOLLER (5 kW, target noise asymmetry contribution < 25 ppm) and MESA (4.5 kW).

  17. A computer code PACTOLE to predict activation and transport of corrosion products in a PWR

    International Nuclear Information System (INIS)

    Beslu, P.; Frejaville, G.; Lalet, A.

    1978-01-01

Theoretical studies on the activation and transport of corrosion products in a PWR primary circuit have been concentrated, at CEA, on the development of a computer code: PACTOLE. This code takes into account the major phenomena that govern corrosion product transport: 1. Ion solubility is obtained from the usual thermodynamic laws as a function of water chemistry; the pH at operating temperature is calculated by the code. 2. Release rates of base metals, dissolution rates of deposits, and precipitation rates of soluble products are derived from solubility variations. 3. Deposition of solid particles is treated by a model taking into account particle size, Brownian and turbulent diffusion, and inertial effects. Erosion of deposits is accounted for by a semi-empirical model. After a review of the calculational models, an application of PACTOLE is presented with a view to analyzing the distribution of corrosion products in the core. (author)
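
    The first two mechanisms listed above amount to a mass-transfer rate driven by the difference between the local soluble concentration and the temperature- and pH-dependent solubility; a toy sketch of that logic is given below, with coefficients and the solubility law as placeholders rather than the PACTOLE correlations.

        def solubility(temperature_c, ph):
            """Placeholder solubility of a corrosion product (kg/m3) versus temperature and pH."""
            return 1e-6 * (1.0 + 0.01 * (300.0 - temperature_c)) * (7.0 / ph)

        def mass_transfer_rate(concentration, temperature_c, ph,
                               k_release=1e-4, k_precip=5e-5):
            """Positive = release/dissolution into the coolant, negative = precipitation."""
            s = solubility(temperature_c, ph)
            if concentration < s:                        # under-saturated coolant dissolves deposits
                return k_release * (s - concentration)
            return -k_precip * (concentration - s)       # super-saturated coolant precipitates

        # Example: the same soluble concentration under hot-leg and cold-leg conditions.
        print(mass_transfer_rate(8e-7, 320.0, 7.2))
        print(mass_transfer_rate(8e-7, 290.0, 7.2))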

  18. Evaluation of MOSTAS