WorldWideScience

Sample records for enable accurate prediction

  1. Surface temperatures in New York City: Geospatial data enables the accurate prediction of radiative heat transfer.

    Science.gov (United States)

    Ghandehari, Masoud; Emig, Thorsten; Aghamohamadnia, Milad

    2018-02-02

    Despite decades of research seeking to derive the urban energy budget, the dynamics of thermal exchange in the densely constructed environment are not yet well understood. Using New York City as a study site, we present a novel hybrid experimental-computational approach for a better understanding of radiative heat transfer in complex urban environments. The aim of this work is to contribute to the calculation of the urban energy budget, particularly the stored energy. We focus our attention on surface thermal radiation. An improved understanding of urban thermodynamics that incorporates the interaction of various bodies, particularly in high-rise cities, will have implications for energy conservation at the building scale, and for human health and comfort at the urban scale. The platform presented is based on longwave hyperspectral imaging of nearly 100 blocks of Manhattan, combined with a geospatial radiosity model that describes the collective radiative heat exchange between multiple buildings. Despite assumptions about the surface emissivity and thermal conductivity of building walls, the close agreement between temperatures derived from measurements and computations is promising. The results imply that the presented geospatial thermodynamic model of urban structures can enable accurate, high-resolution analysis of instantaneous urban surface temperatures.
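The radiosity model described above solves a coupled balance in which each surface's outgoing radiation includes reflected contributions from every surface it sees. A minimal sketch of that balance for a two-surface enclosure follows; all temperatures, emissivities, and view factors are illustrative values, not data from the study:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiosity_two_surfaces(T, eps, F):
    """Solve B_i = eps_i*SIGMA*T_i^4 + (1 - eps_i) * sum_j F_ij * B_j
    for a two-surface enclosure (Cramer's rule on the 2x2 system)."""
    e = [eps[i] * SIGMA * T[i] ** 4 for i in range(2)]
    a11 = 1.0 - (1.0 - eps[0]) * F[0][0]
    a12 = -(1.0 - eps[0]) * F[0][1]
    a21 = -(1.0 - eps[1]) * F[1][0]
    a22 = 1.0 - (1.0 - eps[1]) * F[1][1]
    det = a11 * a22 - a12 * a21
    return [(e[0] * a22 - a12 * e[1]) / det,
            (a11 * e[1] - a21 * e[0]) / det]

# Two facing walls at 300 K and 290 K that see only each other
B = radiosity_two_surfaces([300.0, 290.0], [0.95, 0.90], [[0.0, 1.0], [1.0, 0.0]])
```

For the city-scale model the same linear system simply grows to one row per building facet, with view factors computed from the geospatial geometry.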

  2. Integrating metabolic performance, thermal tolerance, and plasticity enables more accurate predictions of species vulnerability to acute and chronic effects of global warming.

    Science.gov (United States)

    Magozzi, Sarah; Calosi, Piero

    2015-01-01

    Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species' physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach that includes the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions of species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to elucidate more accurately the relative vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days; their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of reaction norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as high metabolic control. In general, integrating multiple proxies for species' physiological acute and chronic responses to increasing
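Two of the quantities named above have simple, standard definitions that can be computed directly: the Q10 temperature coefficient of metabolism, and, as one common reading of reaction-norm plasticity, the slope of a trait across incubation temperatures. A sketch (the slope-based plasticity index is an assumption, not necessarily the exact index used in the study):

```python
def q10(rate1, rate2, t1, t2):
    """Temperature coefficient: the factor by which metabolic rate changes
    over a 10 degree-C increase, from rates measured at t1 and t2."""
    return (rate2 / rate1) ** (10.0 / (t2 - t1))

def reaction_norm_slope(temps, traits):
    """Plasticity as the least-squares slope of a trait across incubation
    temperatures (an assumed, simple reading of reaction-norm plasticity)."""
    n = len(temps)
    mt, mx = sum(temps) / n, sum(traits) / n
    return (sum((t - mt) * (x - mx) for t, x in zip(temps, traits))
            / sum((t - mt) ** 2 for t in temps))

# A metabolic rate that doubles between 15 and 25 degrees C has Q10 = 2
```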

  3. Highly Accurate Prediction of Jobs Runtime Classes

    OpenAIRE

    Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi

    2016-01-01

    Separating short jobs from long ones is a known technique for improving scheduling performance. In this paper we describe a method we developed for accurately predicting the runtime classes of jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions to train a CART classifier that provides the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...
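Given fitted parameters for two overlapping Gaussian runtime classes, a natural short/long threshold is the point where the weighted class densities intersect. The abstract is truncated before describing how the paper actually sets its threshold, so the closed-form intersection below is an illustrative stand-in:

```python
import math

def gaussian_threshold(mu1, s1, w1, mu2, s2, w2):
    """Runtime at which the two weighted Gaussian class densities intersect;
    jobs below it are classified short, above it long (illustrative rule)."""
    a = 1.0 / s1 ** 2 - 1.0 / s2 ** 2
    b = -2.0 * (mu1 / s1 ** 2 - mu2 / s2 ** 2)
    c = (mu1 ** 2 / s1 ** 2 - mu2 ** 2 / s2 ** 2
         + 2.0 * math.log(s1 * w2 / (s2 * w1)))
    if abs(a) < 1e-12:                 # equal variances: a single crossing
        return -c / b
    disc = math.sqrt(b * b - 4.0 * a * c)
    roots = [(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)]
    # keep the crossing that lies between the two class means
    return min(roots, key=lambda r: abs(r - 0.5 * (mu1 + mu2)))
```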

  4. Hydrologic Prediction Through Earthcube Enabled Hydrogeophysical Cyberinfrastructure

    Science.gov (United States)

    Versteeg, R. J.; Johnson, D.

    2012-12-01

    Accurate prediction of hydrologic processes is contingent on the successful interaction of multiple components, including (1) accurate conceptual and numerical models describing physical, chemical and biological processes, (2) a numerical framework for the integration of such processes, and (3) multidisciplinary temporal data streams which feed such models. Over the past ten years the main focus in the hydrogeophysical community has been the advancement and development of conceptual and numerical models. While this advancement still poses numerous challenges (e.g. the in silico modeling of microbiological processes and the coupling of models across different interfaces), there is now a fairly good high-level understanding of the types of, scales of, and interplay between processes. In parallel with this advancement there have been rapid developments in data acquisition capabilities (ranging from satellite-based remote sensing to low-cost sensor networks) and the associated cyberinfrastructure, which allows for mash-ups of data from heterogeneous and independent sensor networks. The tools for this have generally come from outside the hydrogeophysical community - partly specific scientific tools developed through NSF, DOE and NASA funding, and partly general web 2.0 tools or tools developed under commercial initiatives (e.g. the IBM Smarter Planet initiative). One challenge facing the hydrogeophysical community is how to effectively harness all these tools to develop hydrologic prediction tools. One of the primary opportunities for this is the NSF-funded EarthCube effort (http://earthcube.ning.com/). The goal of EarthCube is to transform the conduct of research by supporting the development of community-guided cyberinfrastructure to integrate data and information for knowledge management across the Geosciences.
    Note that EarthCube is part of a larger NSF effort, Cyberinfrastructure for the 21st Century (CIF21), and that EarthCube is driven by the vision

  5. Accurate predictions for the LHC made easy

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    The data recorded by the LHC experiments are of very high quality. To get the most out of the data, precise theory predictions, including uncertainty estimates, are needed to reduce as much as possible the theoretical bias in the experimental analyses. Recently, significant progress has been made in performing Next-to-Leading-Order (NLO) computations, including matching to the parton shower, that allow for such accurate, hadron-level predictions. I shall discuss one of these efforts, the MadGraph5_aMC@NLO program, which aims at the complete automation of predictions at NLO accuracy within the SM as well as New Physics theories. I will illustrate some of the theoretical ideas behind this program, show some selected applications to LHC physics, and describe the future plans.

  6. Functional neuroimaging of visuospatial working memory tasks enables accurate detection of attention deficit and hyperactivity disorder

    Directory of Open Access Journals (Sweden)

    Rubi Hammer

    2015-01-01

    Finding neurobiological markers for neurodevelopmental disorders, such as attention deficit and hyperactivity disorder (ADHD), is a major objective of clinicians and neuroscientists. We examined whether functional magnetic resonance imaging (fMRI) data from a few distinct visuospatial working memory (VSWM) tasks enables accurate detection of ADHD cases. We tested 20 boys with ADHD combined type and 20 typically developed (TD) boys on four VSWM tasks that differed in feedback availability (feedback, no feedback) and reward size (large, small). We used a multimodal analysis based on brain activity in 16 regions of interest significantly activated or deactivated in the four VSWM tasks (based on the entire participant sample). The dimensionality of the data was reduced into 10 principal components that were used as the input variables to a logistic regression classifier. fMRI data from the four VSWM tasks enabled a classification accuracy of 92.5%, with high predicted ADHD probability values for most clinical cases and low predicted ADHD probabilities for most TDs. This accuracy level was higher than those achieved by using the fMRI data of any single task, or the respective behavioral data. This indicates that task-based fMRI data acquired while participants perform a few distinct VSWM tasks enables improved detection of clinical cases.
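At prediction time, the pipeline described (ROI activations projected onto principal components, then a logistic regression readout) reduces to a projection followed by a sigmoid. A minimal sketch with hypothetical, already-fitted components, weights, and bias:

```python
import math

def adhd_probability(roi_activity, components, weights, bias):
    """Project ROI activations onto principal components, then apply a
    logistic readout; components/weights/bias are hypothetical fitted values."""
    pcs = [sum(c * a for c, a in zip(comp, roi_activity)) for comp in components]
    score = bias + sum(w * p for w, p in zip(weights, pcs))
    return 1.0 / (1.0 + math.exp(-score))
```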

  7. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures....

  8. Accurate torque-speed performance prediction for brushless dc motors

    Science.gov (United States)

    Gipper, Patrick D.

    Desirable characteristics of the brushless dc motor (BLDCM) have resulted in its application in electrohydrostatic (EH) and electromechanical (EM) actuation systems. Effectively applying the BLDCM, however, requires accurate prediction of its performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current, and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional- to integral-HP motor sizes. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.

  9. Adaptive through-thickness integration for accurate springback prediction

    NARCIS (Netherlands)

    Burchitz, I.A.; Meinders, Vincent T.

    2007-01-01

    Accurate numerical prediction of springback in sheet metal forming is essential for the automotive industry. Numerous factors influence the accuracy with which this complex phenomenon is predicted using the finite element method. One of them is the numerical integration through the thickness of shell
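The through-thickness integration in question evaluates stress resultants by numerical quadrature over the shell thickness. A sketch of a fixed (non-adaptive) 2-point Gauss-Legendre rule, optionally subdivided into equal sub-intervals as a crude stand-in for the adaptive scheme the paper develops:

```python
import math

def gauss2_through_thickness(f, t, sections=1):
    """2-point Gauss-Legendre integration of a through-thickness profile f(z)
    over z in [-t/2, t/2], optionally split into equal sub-intervals (a crude
    stand-in for the paper's adaptive scheme)."""
    total, h = 0.0, t / sections
    node = 1.0 / math.sqrt(3.0)
    for i in range(sections):
        mid = -t / 2.0 + (i + 0.5) * h
        total += h / 2.0 * (f(mid - node * h / 2.0) + f(mid + node * h / 2.0))
    return total
```

Adaptivity matters for springback because the stress profile develops sharp reversals through the thickness; a fixed rule can miss them.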

  10. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

    Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement of intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multistep prediction can forecast traffic state trends over a certain period in the future; from the perspective of dynamic decision-making, this is far more important than the current traffic condition alone. Thus, in this paper an accurate multistep traffic flow prediction model based on SVM is proposed, in which the input vectors comprise actual traffic volumes; four different types of input vectors were compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model has a good ability for traffic flow prediction and that the SVM-HPT model outperformed the other three models.
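Multistep prediction of the kind described feeds each one-step forecast back in as the input for the next step. A sketch using a least-squares AR(1) regressor as a simple stand-in for the paper's SVM model:

```python
def fit_ar1(series):
    """Least-squares fit of x[t+1] ~ a*x[t] + b (a simple stand-in regressor)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def predict_multistep(series, steps):
    """Iterate one-step predictions, feeding each forecast back in as input."""
    a, b = fit_ar1(series)
    out, x = [], series[-1]
    for _ in range(steps):
        x = a * x + b
        out.append(x)
    return out
```

The iterated scheme is what makes input-vector design matter: any one-step error is fed back and compounds over the horizon.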

  11. Consistent Multigroup Theory Enabling Accurate Coarse-Group Simulation of Gen IV Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Rahnema, Farzad; Haghighat, Alireza; Ougouag, Abderrafi

    2013-11-29

    The objective of this proposal is the development of a consistent multi-group theory that accurately accounts for the energy-angle coupling associated with collapsed-group cross sections. This will allow for coarse-group transport and diffusion theory calculations that exhibit continuous-energy accuracy and implicitly treat cross-section resonances. This is of particular importance when considering the highly heterogeneous and optically thin reactor designs within the Next Generation Nuclear Plant (NGNP) framework. In such reactors, ignoring the influence of anisotropy in the angular flux on the collapsed cross section, especially at the interface between core and reflector near which control rods are located, results in inaccurate estimates of the rod worth, a serious safety concern. The scope of this project will include the development and verification of a new multi-group theory enabling high-fidelity transport and diffusion calculations in coarse groups, as well as a methodology for the implementation of this method in existing codes. This will allow for a higher-accuracy solution of reactor problems while using fewer groups and will reduce the computational expense. The proposed research represents a fundamental advancement in the understanding and improvement of multi-group theory for reactor analysis.
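Collapsed-group cross sections are conventionally generated by flux weighting over fine groups; it is exactly this angle-independent weighting whose limitations the proposed consistent theory addresses. A sketch of the standard collapse:

```python
def collapse_xs(sigma_fine, flux_fine, groups):
    """Flux-weighted collapse of fine-group cross sections into coarse groups;
    groups is a list of (start, stop) index ranges into the fine structure.
    Note this weighting uses the scalar flux only, discarding the energy-angle
    coupling that the proposed consistent theory retains."""
    coarse = []
    for lo, hi in groups:
        total_flux = sum(flux_fine[lo:hi])
        coarse.append(sum(s * f for s, f in
                          zip(sigma_fine[lo:hi], flux_fine[lo:hi])) / total_flux)
    return coarse
```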

  12. Acoustic Effects Accurately Predict an Extreme Case of Biological Morphology

    Science.gov (United States)

    Zhang, Zhiwei; Truong, Son Nguyen; Müller, Rolf

    2009-07-01

    The biosonar system of bats utilizes physical baffle shapes around the sites of ultrasound emission for diffraction-based beam forming. Among these shapes, some extreme cases have evolved that include a long noseleaf protrusion (sella) in a species of horseshoe bat. We have evaluated the acoustic cost function associated with sella length with a computational physics approach and found that the extreme length can be predicted accurately from a fiducial point on this function. This suggests that some extreme cases of biological morphology can be explained from their physical function alone.

  13. Accurate prediction of defect properties in density functional supercell calculations

    International Nuclear Information System (INIS)

    Lany, Stephan; Zunger, Alex

    2009-01-01

    The theoretical description of defects and impurities in semiconductors is largely based on density functional theory (DFT) employing supercell models. The literature discussion of uncertainties that limit the predictivity of this approach has focused mostly on two issues: (1) finite-size effects, in particular for charged defects; (2) the band-gap problem in local or semi-local DFT approximations. Here we describe how the finite-size effects (1) in the formation energy of charged defects can be accurately corrected in a simple way, i.e. by potential alignment in conjunction with a scaling of the Madelung-like screened first-order correction term. The scaling factor depends only on the dielectric constant and the shape of the supercell, and quite accurately accounts for the full third-order correction according to Makov and Payne. We further discuss in some detail the background and justification for this correction method, and also address the effect of ionic screening on the magnitude of the image charge energy. In regard to (2), the band-gap problem, we discuss the merits of non-local external potentials that are added to the DFT Hamiltonian and allow for an empirical band-gap correction without significantly increasing the computational demand over that of standard DFT calculations. In combination with LDA + U, these potentials are further instrumental for the prediction of polaronic defects with localized holes in anion-p orbitals, such as the metal-site acceptors in wide-gap oxide semiconductors.
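The scaled first-order correction described can be sketched as follows. The Madelung constant and shape factor shown are illustrative values for a simple-cubic supercell, assumptions rather than numbers quoted from the paper; energies are in atomic units, with q in units of e and L in bohr:

```python
def image_charge_correction(q, eps, L, alpha_m=2.8373, c_sh=-0.342):
    """Scaled Madelung-like first-order finite-size correction for a charged
    defect: (1 + c_sh*(1 - 1/eps)) * q^2 * alpha_m / (2*eps*L). alpha_m and
    c_sh are illustrative simple-cubic values, not numbers from the paper."""
    first_order = q * q * alpha_m / (2.0 * eps * L)
    return (1.0 + c_sh * (1.0 - 1.0 / eps)) * first_order
```

In the unscreened limit (eps = 1) the scaling factor reduces to 1 and the plain first-order image term is recovered.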

  14. DeepCpG: accurate prediction of single-cell DNA methylation states using deep learning.

    Science.gov (United States)

    Angermueller, Christof; Lee, Heather J; Reik, Wolf; Stegle, Oliver

    2017-04-11

    Recent technological advances have enabled DNA methylation to be assayed at single-cell resolution. However, current protocols are limited by incomplete CpG coverage and hence methods to predict missing methylation states are critical to enable genome-wide analyses. We report DeepCpG, a computational approach based on deep neural networks to predict methylation states in single cells. We evaluate DeepCpG on single-cell methylation data from five cell types generated using alternative sequencing protocols. DeepCpG yields substantially more accurate predictions than previous methods. Additionally, we show that the model parameters can be interpreted, thereby providing insights into how sequence composition affects methylation variability.
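DeepCpG's inputs include the methylation states of neighboring CpG sites. A toy stand-in for that idea (not the actual neural network) predicts a missing state as a distance-weighted average of observed neighbors:

```python
import math

def impute_cpg(neighbors, decay=1000.0):
    """Predict a missing CpG methylation state as the distance-weighted average
    of observed neighbor states; `neighbors` is a list of (distance_bp, state)
    pairs with state in {0, 1}. The exponential decay length is an assumed
    hyperparameter, and the whole scheme is a toy stand-in for DeepCpG."""
    weights = [math.exp(-abs(d) / decay) for d, _ in neighbors]
    return sum(w * s for w, (_, s) in zip(weights, neighbors)) / sum(weights)
```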

  15. Accurate prediction of peptide binding sites on protein surfaces.

    Directory of Open Access Journals (Sweden)

    Evangelia Petsalaki

    2009-03-01

    Many important protein-protein interactions are mediated by the binding of a short peptide stretch in one protein to a large globular segment in another. Recent efforts have provided hundreds of examples of new peptides binding to proteins for which a three-dimensional structure is available (either known experimentally or readily modeled), but where no structure of the protein-peptide complex is known. To address this gap, we present an approach that can accurately predict peptide binding sites on protein surfaces. For peptides known to bind a particular protein, the method predicts binding sites with great accuracy, and the specificity of the approach means that it can also be used to predict whether or not a putative or predicted peptide partner will bind. We used known protein-peptide complexes to derive preferences, in the form of spatial position-specific scoring matrices, which describe the binding-site environment in globular proteins for each type of amino acid in bound peptides. We then scan the surface of a putative binding protein for sites for each of the amino acids present in a peptide partner, and search for combinations of high-scoring amino acid sites that satisfy constraints deduced from the peptide sequence. The method performed well in a benchmark and largely agreed with experimental data mapping binding sites for several recently discovered interactions mediated by peptides, including RG-rich proteins with SMN domains, Epstein-Barr virus LMP1 with TRADD domains, DBC1 with Sir2, and the Ago hook with the Argonaute PIWI domain. The method, and the associated statistics, is an excellent tool for predicting and studying binding sites for newly discovered peptides mediating critical events in biology.
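The scanning step described can be sketched as aligning a peptide against per-site amino-acid preference scores and keeping the highest-scoring window. The flat list of sites below is a simplification: the paper searches a 3D surface under sequence-derived constraints, not a linear array:

```python
def scan_sites(peptide, site_scores):
    """site_scores[i][aa] scores how favorable surface site i is for amino
    acid aa (a flattened stand-in for the paper's spatial PSSMs). Returns the
    (start_index, score) of the best-scoring contiguous window of sites."""
    k = len(peptide)

    def window_score(s):
        return sum(site_scores[s + j][peptide[j]] for j in range(k))

    best = max(range(len(site_scores) - k + 1), key=window_score)
    return best, window_score(best)
```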

  16. A new, accurate predictive model for incident hypertension.

    Science.gov (United States)

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-11-01

    Data mining represents an alternative approach to identifying new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures. The primary study population consisted of 1605 normotensive individuals aged 20-79 years with 5-year follow-up from the population-based Study of Health in Pomerania (SHIP). The initial set was randomly split into a training and a testing set. We used a probabilistic graphical model applying a Bayesian network to create a predictive model for incident hypertension, and compared its predictive performance with the established Framingham risk score for hypertension. Finally, the model was validated in 2887 participants from INTER99, a Danish community-based intervention study. In the training set of SHIP data, the Bayesian network used a small subset of relevant baseline features including age, mean arterial pressure, rs16998073, serum glucose and urinary albumin concentrations. Furthermore, we detected relevant interactions between age and serum glucose as well as between rs16998073 and urinary albumin concentrations [area under the receiver operating characteristic curve (AUC) 0.76]. The model was confirmed in the SHIP validation set (AUC 0.78) and externally replicated in INTER99 (AUC 0.77). Compared to the established Framingham risk score for hypertension, the predictive performance of the new model was similar in the SHIP validation set and moderately better in INTER99. Data mining procedures identified a predictive model for incident hypertension that includes innovative and easy-to-measure variables. The findings promise great applicability in screening settings and clinical practice.
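The AUC values reported here estimate the probability that a randomly chosen incident case scores higher than a randomly chosen non-case, which can be computed directly from the two score lists (Mann-Whitney form):

```python
def auroc(case_scores, control_scores):
    """AUC as P(case score > control score), with ties counted as 1/2
    (equivalent to the Mann-Whitney U statistic divided by n1*n2)."""
    wins = sum(1.0 if c > k else 0.5 if c == k else 0.0
               for c in case_scores for k in control_scores)
    return wins / (len(case_scores) * len(control_scores))
```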

  17. WGS accurately predicts antimicrobial resistance in Escherichia coli.

    Science.gov (United States)

    Tyson, Gregory H; McDermott, Patrick F; Li, Cong; Chen, Yuansha; Tadesse, Daniel A; Mukherjee, Sampa; Bodeis-Jones, Sonya; Kabera, Claudine; Gaines, Stuart A; Loneragan, Guy H; Edrington, Tom S; Torrence, Mary; Harhay, Dayna M; Zhao, Shaohua

    2015-10-01

    The objective of this study was to determine the effectiveness of WGS in identifying resistance genotypes of MDR Escherichia coli and whether these correlate with observed phenotypes. Seventy-six E. coli strains were isolated from farm cattle and measured for phenotypic resistance to 15 antimicrobials with the Sensititre(®) system. Isolates with resistance to at least four antimicrobials in three classes were selected for WGS using an Illumina MiSeq. Genotypic analysis was conducted with in-house Perl scripts using BLAST analysis to identify known genes and mutations associated with clinical resistance. Over 30 resistance genes and a number of resistance mutations were identified among the E. coli isolates. Resistance genotypes correlated with 97.8% specificity and 99.6% sensitivity to the identified phenotypes. The majority of discordant results were attributable to the aminoglycoside streptomycin, whereas there was a perfect genotype-phenotype correlation for most antibiotic classes such as tetracyclines, quinolones and phenicols. WGS also revealed information about rare resistance mechanisms, such as structural mutations in chromosomal copies of ampC conferring third-generation cephalosporin resistance. WGS can provide comprehensive resistance genotypes and is capable of accurately predicting resistance phenotypes, making it a valuable tool for surveillance. Moreover, the data presented here showing the ability to accurately predict resistance suggest that WGS may be used as a screening tool in selecting anti-infective therapy, especially as costs drop and methods improve. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
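The reported 99.6% sensitivity and 97.8% specificity summarize genotype-phenotype concordance; the two quantities can be computed from paired resistance calls as follows (the data here are toy values, not the study's isolates):

```python
def genotype_phenotype_concordance(genotype, phenotype):
    """Sensitivity and specificity of genotype-predicted resistance (True =
    resistance gene/mutation found) against observed phenotypes (True =
    phenotypically resistant)."""
    tp = sum(1 for g, p in zip(genotype, phenotype) if g and p)
    tn = sum(1 for g, p in zip(genotype, phenotype) if not g and not p)
    fn = sum(1 for g, p in zip(genotype, phenotype) if not g and p)
    fp = sum(1 for g, p in zip(genotype, phenotype) if g and not p)
    return tp / (tp + fn), tn / (tn + fp)
```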

  18. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
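The probabilistic rating of candidate source configurations via Bayes' Theorem can be sketched over a discrete candidate set with a Gaussian measurement model; the detector-response numbers below are hypothetical, and a real forward model would come from a transport code:

```python
import math

def rate_configurations(candidates, predicted, measured, sigma, prior=None):
    """Posterior plausibility of each candidate holdup configuration given
    detector readings, assuming independent Gaussian measurement errors.
    predicted[name] is the forward-model response list for that candidate."""
    prior = prior or {n: 1.0 / len(candidates) for n in candidates}
    logpost = {}
    for n in candidates:
        ll = sum(-0.5 * ((m - p) / sigma) ** 2
                 for m, p in zip(measured, predicted[n]))
        logpost[n] = math.log(prior[n]) + ll
    mx = max(logpost.values())          # subtract max for numerical stability
    w = {n: math.exp(lp - mx) for n, lp in logpost.items()}
    z = sum(w.values())
    return {n: w[n] / z for n in candidates}

# Hypothetical example: configuration "A" matches the readings, "B" is 2 sigma off
post = rate_configurations(
    ["A", "B"],
    {"A": [10.0, 10.0], "B": [12.0, 12.0]},
    measured=[10.0, 10.0], sigma=1.0)
```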

  19. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because the predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on the infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode. Second, this mathematical model was not able to accurately describe the change in the experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
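The modified model described, with an eclipse class of cells transitioning into virus production, can be sketched as four coupled ODEs integrated with forward Euler; all parameter values below are illustrative, not the fitted values from the study:

```python
def simulate_siv(beta, k, delta, p, c, T0, V0, days=10, dt=0.001):
    """Target cells T, eclipse-phase cells E (transitioning into production),
    productive cells I, free virus V:
      T' = -beta*T*V, E' = beta*T*V - k*E, I' = k*E - delta*I, V' = p*I - c*V
    integrated with forward Euler; returns the daily viral load."""
    T, E, I, V = T0, 0.0, 0.0, V0
    daily_v = [V]
    steps_per_day = int(round(1.0 / dt))
    for step in range(int(round(days / dt))):
        dT = -beta * T * V
        dE = beta * T * V - k * E
        dI = k * E - delta * I
        dV = p * I - c * V
        T += dT * dt
        E += dE * dt
        I += dI * dt
        V += dV * dt
        if (step + 1) % steps_per_day == 0:
            daily_v.append(V)
    return daily_v

# Illustrative parameters giving R0 = p*beta*T0/(c*delta) = 2, so V should grow
v = simulate_siv(beta=2e-7, k=4.0, delta=1.0, p=100.0, c=10.0, T0=1e6, V0=1.0)
```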

  20. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    International Nuclear Information System (INIS)

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-01-01

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost, accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, and semi-local and hybrid exchange-correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods, and extends the utility of high-throughput property predictions in a significant way.

  1. Artificial neural network accurately predicts hepatitis B surface antigen seroclearance.

    Directory of Open Access Journals (Sweden)

    Ming-Hua Zheng

    BACKGROUND & AIMS: Hepatitis B surface antigen (HBsAg) seroclearance and seroconversion are regarded as favorable outcomes of chronic hepatitis B (CHB). This study aimed to develop artificial neural networks (ANNs) that could accurately predict HBsAg seroclearance or seroconversion on the basis of available serum variables. METHODS: Data from 203 untreated, HBeAg-negative CHB patients with spontaneous HBsAg seroclearance (63 with HBsAg seroconversion) and 203 age- and sex-matched HBeAg-negative controls were analyzed. ANNs and logistic regression models (LRMs) were built and tested according to HBsAg seroclearance and seroconversion. Predictive accuracy was assessed with the area under the receiver operating characteristic curve (AUROC). RESULTS: Serum quantitative HBsAg (qHBsAg) and HBV DNA levels, and qHBsAg and HBV DNA reduction, were related to HBsAg seroclearance (P<0.001) and were used for building the ANN/LRM-HBsAg seroclearance models, whereas qHBsAg reduction was not associated with ANN-HBsAg seroconversion (P = 0.197), and LRM-HBsAg seroconversion was based solely on qHBsAg (P = 0.01). For HBsAg seroclearance, the AUROCs of the ANN were 0.96, 0.93 and 0.95 for the training, testing and genotype B subgroups, respectively. They were significantly higher than those of the LRM, qHBsAg and HBV DNA (all P<0.05). Although the performance of ANN-HBsAg seroconversion (AUROC 0.757) was inferior to that for HBsAg seroclearance, it tended to be better than those of the LRM, qHBsAg and HBV DNA. CONCLUSIONS: The ANN identifies spontaneous HBsAg seroclearance in HBeAg-negative CHB patients with better accuracy, on the basis of easily available serum data. More useful predictors of HBsAg seroconversion still need to be explored in the future.

  2. Do risk calculators accurately predict surgical site occurrences?

    Science.gov (United States)

    Mitchell, Thomas O; Holihan, Julie L; Askenasy, Erik P; Greenberg, Jacob A; Keith, Jerrod N; Martindale, Robert G; Roth, John Scott; Liang, Mike K

    2016-06-01

    -NSQIP (P < 0.05) stratified for SSI. In both databases, VHRS, VHWG, and Centers for Disease Control and Prevention overestimated risk of SSO and SSI, whereas HW-RAT and ACS-NSQIP underestimated risk for all groups. All five existing predictive models have limited ability to risk-stratify patients and accurately assess risk of SSO. However, both the VHRS and ACS-NSQIP demonstrate modest success in identifying patients at risk for SSI. Continued model refinement is needed to improve the two highest performing models (VHRS and ACS-NSQIP) along with investigation to determine whether modifications to perioperative management based on risk stratification can improve outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. New models for energy beam machining enable accurate generation of free forms.

    Science.gov (United States)

    Axinte, Dragos; Billingham, John; Bilbao Guillerna, Aitor

    2017-09-01

    We demonstrate that, despite differences in their nature, many energy beam controlled-depth machining processes (for example, waterjet, pulsed laser, focused ion beam) can be modeled using the same mathematical framework: a partial differential evolution equation that requires only simple calibrations to capture the physics of each process. The inverse problem can be solved efficiently through the numerical solution of the adjoint problem and leads to beam paths that generate prescribed three-dimensional features with minimal error. The viability of this modeling approach has been demonstrated by generating accurate free-form surfaces using three processes that operate at very different length scales and with different physical principles for material removal: waterjet, pulsed laser, and focused ion beam machining. Our approach can be used to accurately machine materials that are hard to process by other means for scalable applications in a wide variety of industries.
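
    The inverse problem above can be illustrated, under strong simplifying assumptions, by a linearized discrete analogue of the paper's PDE model: the etched depth is taken as a linear superposition of Gaussian beam footprints weighted by dwell times, and the dwell times are recovered by least squares. The footprint width, path discretization, and target profile below are invented for illustration; the actual method solves an adjoint problem for a nonlinear evolution equation.

```python
import numpy as np

# Minimal linearized sketch (not the paper's PDE model): assume the etched
# depth is a linear superposition of Gaussian beam footprints weighted by
# the dwell time at each path position, then invert for the dwell times.
n = 50
x = np.linspace(0.0, 1.0, n)           # positions along a 1D beam path
sigma = 0.05                           # assumed beam footprint width

# A[i, j] = material removed at x[i] per unit dwell time at x[j]
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * sigma**2))

target = 0.5 + 0.3 * np.sin(2.0 * np.pi * x)   # prescribed depth profile

# Least-squares inverse problem: find dwell times reproducing the target.
dwell, *_ = np.linalg.lstsq(A, target, rcond=None)
achieved = A @ dwell
max_err = float(np.max(np.abs(achieved - target)))
```

    A production solver would additionally enforce non-negative dwell times and regularize the beam path, which is where the adjoint formulation in the paper earns its keep.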

  4. Development of a setup to enable stable and accurate flow conditions for membrane biofouling studies

    KAUST Repository

    Bucs, Szilard

    2015-07-10

    Systematic laboratory studies on membrane biofouling require experimental conditions that are well defined and representative of practice. Hydrodynamics and flow rate variations affect biofilm formation, morphology, and detachment, and impact membrane performance parameters such as feed channel pressure drop. There is a suite of available monitors to study biofouling, but systems to operate these monitors have not been well designed to achieve the accurate, constant water flow required for a reliable determination of biomass accumulation and feed channel pressure drop increase. Studies were done with membrane fouling simulators operated in parallel with manual and automated flow control, with and without dosage of a biodegradable substrate to the feedwater to enhance the biofouling rate. High flow rate variations were observed for the manual water flow system (up to ≈9%) compared to the automatic flow control system (<1%). The flow rate variation in the manual system was strongly increased by biofilm accumulation, while the automatic system maintained an accurate and constant water flow in the monitor. The flow rate influences both the biofilm accumulation and the impact of accumulated biofilm on membrane performance: the effect of the same amount of accumulated biomass on the pressure drop increase was related to the linear flow velocity. Stable and accurate feedwater flow rates are essential for biofouling studies under well-defined conditions in membrane systems. © 2015 Balaban Desalination Publications. All rights reserved.

  5. Nuclear medicine consultation: A useful tool in primary care to enable more accurate diagnosis

    Science.gov (United States)

    Fakhri, Asif Ali

    2017-01-01

    In the high-volume general practitioner setting, there is a need for triaging clinical questions pertaining to specific patients into ones that are readily answered and those that would be answered more effectively and efficiently by a subspecialist. In this way, the nuclear medicine consultation provides the general practitioner with a physiologic imaging specialist's perspective for answering complex clinical questions. A formal nuclear medicine consultation can be a valuable tool in identifying a targeted molecular imaging approach to a specific clinical question and can prove useful both in appropriate diagnostic study selection and in accurate image interpretation. PMID:29302517

  6. Consistency of VDJ Rearrangement and Substitution Parameters Enables Accurate B Cell Receptor Sequence Annotation.

    Science.gov (United States)

    Ralph, Duncan K; Matsen, Frederick A

    2016-01-01

    VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM "factorization" strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM.

  7. Consistency of VDJ Rearrangement and Substitution Parameters Enables Accurate B Cell Receptor Sequence Annotation.

    Directory of Open Access Journals (Sweden)

    Duncan K Ralph

    2016-01-01

    Full Text Available VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM "factorization" strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM.
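
    The per-allele categorical (table-driven) transition and emission probabilities described above still plug into standard HMM machinery. A toy Viterbi decoder over three annotation states sketches the idea; the states, alphabet, and probabilities are invented for illustration and are far simpler than partis's actual model.

```python
import math

# Toy 3-state HMM over V/N/J annotation states with categorical
# (table-driven) transition and emission probabilities, as opposed to
# parametric distributions. All numbers here are illustrative.
states = ["V", "N", "J"]
start = {"V": 1.0, "N": 0.0, "J": 0.0}
trans = {"V": {"V": 0.8, "N": 0.2, "J": 0.0},
         "N": {"V": 0.0, "N": 0.5, "J": 0.5},
         "J": {"V": 0.0, "N": 0.0, "J": 1.0}}
emit = {"V": {"A": 0.9, "C": 0.05, "G": 0.05},
        "N": {"A": 0.05, "C": 0.9, "G": 0.05},
        "J": {"A": 0.05, "C": 0.05, "G": 0.9}}

def log(p):
    # log(0) -> -inf so impossible paths are never selected
    return math.log(p) if p > 0 else float("-inf")

def viterbi(obs):
    # v[s] = best log-probability of any state path ending in state s
    v = {s: log(start[s]) + log(emit[s][obs[0]]) for s in states}
    back = []
    for sym in obs[1:]:
        prev, v, ptr = v, {}, {}
        for s in states:
            best = max(states, key=lambda r: prev[r] + log(trans[r][s]))
            ptr[s] = best
            v[s] = prev[best] + log(trans[best][s]) + log(emit[s][sym])
        back.append(ptr)
    # trace the best path backwards through the stored pointers
    last = max(states, key=lambda s: v[s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

    For the toy tables above, `viterbi("AACGG")` recovers the path V, V, N, J, J. partis replaces these hand-written tables with per-allele tables learned from data, compiled by its HMM compiler.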

  8. Late enhanced computed tomography in Hypertrophic Cardiomyopathy enables accurate left-ventricular volumetry

    Energy Technology Data Exchange (ETDEWEB)

    Langer, Christoph; Lutz, M.; Kuehl, C.; Frey, N. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany); Partner Site Hamburg/Kiel/Luebeck, DZHK (German Centre for Cardiovascular Research), Kiel (Germany); Both, M.; Sattler, B.; Jansen, O; Schaefer, P. [Christian-Albrechts-Universitaet Kiel, Department of Diagnostic Radiology, University Medical Center Schleswig-Holstein (Germany); Harders, H.; Eden, M. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany)

    2014-10-15

    Late enhancement (LE) multi-slice computed tomography (leMDCT) was introduced for the visualization of (intra-) myocardial fibrosis in Hypertrophic Cardiomyopathy (HCM). LE is associated with adverse cardiac events. This analysis focuses on leMDCT-derived LV muscle mass (LV-MM), which may be related to LE, yielding an LE proportion for potential risk stratification in HCM. N=26 HCM patients underwent leMDCT (64-slice CT) and cardiovascular magnetic resonance (CMR). In leMDCT, iodine contrast (Iopromid, 350 mg/mL; 150 mL) was injected 7 minutes before imaging. Reconstructed short cardiac axis views served for planimetry. The study group was divided into three groups of varying LV-contrast. LeMDCT was correlated with CMR. The mean age was 64.2 ± 14 years. The groups of varying contrast differed in weight and body mass index (p < 0.05). In the group with good LV-contrast, assessment of LV-MM resulted in 147.4 ± 64.8 g in leMDCT vs. 147.1 ± 65.9 g in CMR (p > 0.05). In the group with sufficient contrast, LV-MM appeared as 172 ± 30.8 g in leMDCT vs. 165.9 ± 37.8 g in CMR (p > 0.05). Overall intra-/inter-observer variability of the semiautomatic assessment of LV-MM showed an accuracy of 0.9 ± 8.6 g and 0.8 ± 9.2 g in leMDCT. All leMDCT measures correlated well with CMR (r > 0.9). LeMDCT, primarily performed for LE visualization in HCM, allows for accurate LV volumetry including LV-MM in > 90 % of cases. (orig.)

  9. Machine learning and predictive data analytics enabling metrology and process control in IC fabrication

    Science.gov (United States)

    Rana, Narender; Zhang, Yunlin; Wall, Donald; Dirahoui, Bachir; Bailey, Todd C.

    2015-03-01

    Integrated circuit (IC) technology is going through multiple changes in terms of patterning techniques (multiple patterning, EUV and DSA), device architectures (FinFET, nanowire, graphene) and patterning scale (a few nanometers). These changes require tight control of processes and measurements to achieve the required device performance, and challenge metrology and process control in terms of capability and quality. Multivariate data with complex nonlinear trends and correlations generally cannot be described well by mathematical or parametric models, but can be relatively easily learned by computing machines and used to predict or extrapolate. This paper introduces the predictive metrology approach, which has been applied to three different applications. Machine learning and predictive analytics have been leveraged to accurately predict dimensions of EUV resist patterns down to 18 nm half pitch by leveraging resist shrinkage patterns; these patterns could not be directly and accurately measured due to metrology tool limitations. Machine learning has also been applied to predict electrical performance early in the process pipeline for deep trench capacitance and metal line resistance. As a wafer goes through various processes its associated cost multiplies, and it may take days to weeks to get the electrical performance readout. Predicting the electrical performance early on can be very valuable in enabling timely, actionable decisions, such as rework, scrap, or feeding predicted information (or information derived from it) forward or back to improve or monitor processes. This paper provides a general overview of machine learning and advanced analytics applications in advanced semiconductor development and manufacturing.
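
    As a minimal stand-in for the predictive-metrology idea, the sketch below learns a correction from a measurable proxy (a shrunk resist CD) back to the quantity of interest with closed-form least squares. The paper's models are richer machine-learning regressors; the calibration data and shrinkage law here are synthetic.

```python
# Minimal sketch of "predictive metrology": learn a correction from a
# measurable proxy (shrunk resist CD) to the quantity of interest (true CD).
# The paper uses richer ML models; ordinary least squares is shown for
# clarity, on synthetic data (shrinkage law invented for illustration).

def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

# Synthetic calibration set: true CD (nm) vs. measured shrunk CD (nm),
# assuming shrinkage removes ~10% plus a 1.5 nm offset.
true_cd = [18.0, 20.0, 22.0, 24.0, 26.0]
measured = [0.9 * cd - 1.5 for cd in true_cd]

a, b = fit_line(measured, true_cd)
predict = lambda m: a + b * m   # correction model for new measurements
```

    In production the same pattern applies with more features (dose, focus, tool state) and a model validated against reference metrology.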

  10. An Overview of Practical Applications of Protein Disorder Prediction and Drive for Faster, More Accurate Predictions

    Directory of Open Access Journals (Sweden)

    Xin Deng

    2015-07-01

    Full Text Available Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and have been widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically-disordered proteins and some human diseases has played a significant role in disorder prediction in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery with an emphasis on the disordered-to-ordered transition in the disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale.

  11. An Overview of Practical Applications of Protein Disorder Prediction and Drive for Faster, More Accurate Predictions.

    Science.gov (United States)

    Deng, Xin; Gumm, Jordan; Karki, Suman; Eickholt, Jesse; Cheng, Jianlin

    2015-07-07

    Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and have been widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically-disordered proteins and some human diseases has played a significant role in disorder prediction in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery with an emphasis on the disordered-to-ordered transition in the disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale.
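
    A bare-bones version of window-based residue disorder prediction can be sketched as a sliding-window composition score. The residue set and threshold below are illustrative assumptions drawn from common propensity scales, not the authors' learned features.

```python
# Toy sliding-window disorder scorer. Real predictors use learned models
# over wide windows; here we only score the fraction of disorder-promoting
# residues (set chosen for illustration) in a window around each position.
DISORDER_PRONE = set("PESQKAGR")  # assumption, not the paper's feature set

def disorder_scores(seq, window=15):
    half = window // 2
    scores = []
    for i in range(len(seq)):
        lo, hi = max(0, i - half), min(len(seq), i + half + 1)
        win = seq[lo:hi]
        scores.append(sum(r in DISORDER_PRONE for r in win) / len(win))
    return scores

def predict_disorder(seq, window=15, threshold=0.6):
    """Per-residue boolean disorder calls from the window score."""
    return [s >= threshold for s in disorder_scores(seq, window)]
```

    Because the score at each residue depends only on its window, the computation streams over a sequence in linear time, which is what makes window methods practical at genomic scale.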

  12. Accurate Prediction of Coronary Artery Disease Using Bioinformatics Algorithms

    Directory of Open Access Journals (Sweden)

    Hajar Shafiee

    2016-06-01

    Full Text Available Background and Objectives: Cardiovascular disease is one of the main causes of death in developed and Third World countries. According to the World Health Organization, deaths due to heart disease are predicted to rise to 23 million by 2030. According to the latest statistics reported by Iran’s Ministry of Health, 3.39% of all deaths are attributed to cardiovascular diseases and 19.5% are related to myocardial infarction. The aim of this study was to predict coronary artery disease using data mining algorithms. Methods: In this study, various bioinformatics algorithms, such as decision trees, neural networks, support vector machines and clustering, were used to predict coronary heart disease. The data used in this study were taken from several valid databases (including 14 data). Results: In this research, data mining techniques were effectively used to diagnose different diseases, including coronary artery disease. Also, for the first time, a prediction system based on a support vector machine with the best possible accuracy was introduced. Conclusion: The results showed that, among the features, the thallium scan variable is the most important feature in the diagnosis of heart disease. Machine prediction models such as the support vector machine learning algorithm can differentiate between sick and healthy individuals with 100% accuracy.

  13. Predicting accurate absolute binding energies in aqueous solution

    DEFF Research Database (Denmark)

    Jensen, Jan Halborg

    2015-01-01

    Recent predictions of absolute binding free energies of host-guest complexes in aqueous solution using electronic structure theory have been encouraging for some systems, while other systems remain problematic. In this paper I summarize some of the many factors that could easily contribute 1-3 kcal...

  14. Accurate prediction of secondary metabolite gene clusters in filamentous fungi

    DEFF Research Database (Denmark)

    Andersen, Mikael Rørdam; Nielsen, Jakob Blæsbjerg; Klitgaard, Andreas

    2013-01-01

    supporting enzymes for key synthases one cluster at a time. In this study, we design and apply a DNA expression array for Aspergillus nidulans in combination with legacy data to form a comprehensive gene expression compendium. We apply a guilt-by-association-based analysis to predict the extent...

  15. Third trimester ultrasound soft-tissue measurements accurately predicts macrosomia.

    Science.gov (United States)

    Maruotti, Giuseppe Maria; Saccone, Gabriele; Martinelli, Pasquale

    2017-04-01

    To evaluate the accuracy of sonographic measurements of fetal soft tissue in the prediction of macrosomia. Electronic databases were searched from their inception until September 2015 with no language restriction. We included only studies assessing the accuracy of sonographic measurements of fetal soft tissue in the abdomen or thigh in the prediction of macrosomia at ≥34 weeks of gestation. The primary outcome was the accuracy of sonographic measurements of fetal soft tissue in the prediction of macrosomia. We generated forest plots for the pooled sensitivity and specificity with 95% confidence intervals (CI). Additionally, summary receiver-operating characteristic (ROC) curves were plotted and the area under the curve (AUC) was computed to evaluate the overall performance of the diagnostic test. Three studies, including 287 singleton gestations, were analyzed. The pooled sensitivity of sonographic measurements of abdominal or thigh fetal soft tissue in the prediction of macrosomia was 80% (95% CI: 66-89%) and the pooled specificity was 95% (95% CI: 91-97%). The AUC was 0.92, suggesting high diagnostic accuracy. Third-trimester sonographic measurements of fetal soft tissue after 34 weeks may help to detect macrosomia with a high degree of accuracy. The pooled detection rate was 80%. Standardization of measurement criteria, reproducibility testing, reference charts of fetal subcutaneous tissue, and large studies to determine the optimal cutoff of fetal adipose thickness are necessary before fetal soft-tissue markers are introduced into clinical practice.
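
    The pooling step can be illustrated by naively summing 2x2 counts across studies. Note that diagnostic-accuracy meta-analyses such as this one normally fit bivariate random-effects models rather than simple pooling, and the per-study counts below are invented for illustration.

```python
# Naive pooled sensitivity/specificity across studies by summing 2x2 counts.
# (Diagnostic meta-analyses normally use bivariate random-effects models;
# the per-study counts below are invented for illustration.)
studies = [
    # (TP, FN, TN, FP)
    (20, 5, 60, 4),
    (15, 4, 55, 2),
    (12, 2, 50, 3),
]

tp = sum(s[0] for s in studies)
fn = sum(s[1] for s in studies)
tn = sum(s[2] for s in studies)
fp = sum(s[3] for s in studies)

sensitivity = tp / (tp + fn)   # true-positive rate among macrosomic fetuses
specificity = tn / (tn + fp)   # true-negative rate among the rest
```

    Random-effects pooling would additionally weight studies by size and model between-study heterogeneity, which simple count-summing ignores.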

  16. Change in BMI accurately predicted by social exposure to acquaintances.

    Directory of Open Access Journals (Sweden)

    Rahman O Oloritun

    Full Text Available Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes the change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, using social interaction data automatically captured via mobile phones and survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, together with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R². This study found a model that explains 68% (p<0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, and personal health-related information to explain change in BMI. This is the first study taking into account both interactions with different levels of social interaction and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also the significance of social interactions with people we are exposed to, even people we may not consider as
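
    The LASSO selection step described above can be sketched with cyclic coordinate descent and soft-thresholding on synthetic data, where the L1 penalty zeroes out features unrelated to the response. The data, penalty, and iteration count are illustrative, not from the study.

```python
import random

# Compact LASSO via cyclic coordinate descent with soft-thresholding,
# on synthetic data: y depends only on feature 0, so the L1 penalty
# should zero out the two noise features. This only illustrates the
# variable-selection step the study applied to its exposure variables.
random.seed(0)
n, p, lam = 40, 3, 4.0
X = [[random.uniform(-1, 1) for _ in range(p)] for _ in range(n)]
y = [2.0 * row[0] + random.gauss(0, 0.05) for row in X]

def soft(z, t):
    """Soft-thresholding operator: shrink z toward zero by t."""
    return (z - t) if z > t else (z + t) if z < -t else 0.0

w = [0.0] * p
for _ in range(200):                      # cyclic coordinate descent
    for j in range(p):
        # partial residual excluding feature j's current contribution
        r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
             for i in range(n)]
        rho = sum(X[i][j] * r[i] for i in range(n))
        norm = sum(X[i][j] ** 2 for i in range(n))
        w[j] = soft(rho, lam) / norm
```

    After convergence the noise-feature coefficients are exactly zero while the true coefficient survives (shrunk slightly toward zero by the penalty), which is what makes LASSO usable as a selection method before fitting the final regression.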

  17. Are we accurately predicting bladder capacity in infants?

    Science.gov (United States)

    Costa, Daniel F.G.; Lavallée, Luke T.; Dubois, Claude; Leonard, Michael; Guerra, Luis

    2014-01-01

    Introduction: Estimating bladder capacity is an important component in the evaluation of many urological disorders. For estimates to be of clinical value, precise reference ranges are needed. While accepted reference ranges have been established in adults and older children, none have been validated in infants. We endeavour to determine the normal bladder capacity of children less than 1 year of age. Methods: We retrospectively reviewed the charts of children aged 0 to 12 months with cutaneous stigmata of spinal dysraphism who were referred to the urology clinic to rule out tethered cord between October 2004 and July 2011. Patients with normal urologic assessment, who did not have surgery during the time they were followed, were included in the study cohort. Urodynamic studies were performed using the Laborie Medical Technologies UDS-600. Bladder filling occurred via a catheter at a rate of 10% of the expected total bladder capacity/minute. Bladder capacity was defined as the volume of filling when the child voided around the catheter. We collected data, including age at urodynamics, bladder capacity, detrusor pressure at capacity, bladder compliance and length of follow-up. Result: In total, 46% (84/183) of patients had a normal urologic assessment and met the inclusion criteria. The median age was 9.0 months (interquartile range [IQR] 6.8–11.0). The average bladder capacity was 48.9 mL (standard deviation [SD] 32.8) and the mean detrusor pressure at capacity was 8.5 cmH2O (SD 10.0). Mean compliance was 14.1 mL/cmH2O (SD 13.6). The average length of follow-up was 40.7 months (SD 26.2) and during this interval no patients were found to have urologic or neurologic abnormalities and none underwent tethered cord release. Conclusion: Bladder capacity in infants with a median age of 9.0 months was found to be 48.9 mL. This is less than half of the volume predicted by a commonly employed formula. A novel method of estimating bladder capacity in infants is required
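
    The comparison against age-based formulas can be reproduced arithmetically. The abstract does not name the formula it used, so the two commonly cited pediatric formulas below (Koff; Kaefer for children under 2 years) are assumptions:

```python
# Predicted bladder capacity from two commonly cited pediatric formulas.
# Which formula the study compared against is not stated in the abstract,
# so both are shown as assumptions.
OZ_TO_ML = 29.57  # fluid ounces to millilitres

def koff_ml(age_years):
    """Koff: capacity (oz) = age (yr) + 2, for children."""
    return (age_years + 2.0) * OZ_TO_ML

def kaefer_ml(age_years):
    """Kaefer: capacity (oz) = 2 * age (yr) + 2, for children < 2 years."""
    return (2.0 * age_years + 2.0) * OZ_TO_ML

age = 9.0 / 12.0        # median cohort age: 9.0 months
observed = 48.9         # mean measured capacity (mL) reported in the study
predicted = kaefer_ml(age)
```

    Under the Kaefer-style formula a 9-month-old is predicted to hold about 104 mL, so the measured 48.9 mL is indeed less than half the predicted volume, consistent with the abstract's conclusion.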

  18. Tumor-Specific Fluorescent Antibody Imaging Enables Accurate Staging Laparoscopy in an Orthotopic Model of Pancreatic Cancer

    Science.gov (United States)

    Cao, Hop S Tran; Kaushal, Sharmeela; Metildi, Cristina A; Menen, Rhiana S; Lee, Claudia; Snyder, Cynthia S; Messer, Karen; Pu, Minya; Luiken, George A; Talamini, Mark A; Hoffman, Robert M; Bouvet, Michael

    2014-01-01

    Background/Aims Laparoscopy is important in staging pancreatic cancer, but false negatives remain problematic. Making tumors fluorescent has the potential to improve the accuracy of staging laparoscopy. Methodology Orthotopic and carcinomatosis models of pancreatic cancer were established with BxPC-3 human pancreatic cancer cells in nude mice. Alexa488-anti-CEA conjugates were injected via tail vein 24 hours prior to laparoscopy. Mice were examined under bright-field laparoscopic (BL) and fluorescence laparoscopic (FL) modes. Outcomes measured included time to identification of the primary tumor for the orthotopic model and number of metastases identified within 2 minutes for the carcinomatosis model. Results FL enabled more rapid and accurate identification and localization of primary tumors and metastases than BL; identification took significantly longer under BL than under FL. More metastatic lesions were detected and localized under FL than under BL, and with greater accuracy, with sensitivities of 96% vs. 40%, respectively, when compared to control. FL was sensitive enough to detect metastatic lesions. Laparoscopy with tumors labeled with fluorophore-conjugated anti-CEA antibody permits rapid detection and accurate localization of primary and metastatic pancreatic cancer in an orthotopic model. The results of the present report demonstrate the future clinical potential of fluorescence laparoscopy. PMID:22369743

  19. ILT based defect simulation of inspection images accurately predicts mask defect printability on wafer

    Science.gov (United States)

    Deep, Prakash; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2016-05-01

    printability of defects at wafer level and automates the process of defect dispositioning from images captured using a high-resolution inspection machine. It first eliminates false defects due to registration, focus errors, image capture errors and random noise caused during inspection. For the remaining real defects, actual mask-like contours are generated using the Calibre® ILT solution [1][2], which is enhanced to predict the actual mask contours from high-resolution defect images. It enables accurate prediction of defect contours, which is not possible from images captured using the inspection machine because some information is already lost due to optical effects. Calibre's simulation engine is used to generate images at wafer level using scanner optical conditions and mask-like contours as input. The tool then analyses simulated images and predicts defect printability. It automatically calculates maximum CD variation and decides which defects are severe enough to affect patterns on wafer. In this paper, we assess the printability of defects for masks of advanced technology nodes. In particular, we compare the recovered mask contours with contours extracted from the SEM image of the mask and compare simulation results with AIMS™ for a variety of defects and patterns. The results of printability assessment and the accuracy of comparison are presented in this paper. We also suggest how this method can be extended to predict printability of defects identified on EUV photomasks.

  20. Technologies That Enable Accurate and Precise Nano- to Milliliter-Scale Liquid Dispensing of Aqueous Reagents Using Acoustic Droplet Ejection.

    Science.gov (United States)

    Sackmann, Eric K; Majlof, Lars; Hahn-Windgassen, Annett; Eaton, Brent; Bandzava, Temo; Daulton, Jay; Vandenbroucke, Arne; Mock, Matthew; Stearns, Richard G; Hinkson, Stephen; Datwani, Sammy S

    2016-02-01

    Acoustic liquid handling uses high-frequency acoustic signals that are focused on the surface of a fluid to eject droplets with high accuracy and precision for various life science applications. Here we present a multiwell source plate, the Echo Qualified Reservoir (ER), which can acoustically transfer over 2.5 mL of fluid per well in 25-nL increments using an Echo 525 liquid handler. We demonstrate two Labcyte technologies, Dynamic Fluid Analysis (DFA) methods and a high-voltage (HV) grid, that are required to maintain accurate and precise fluid transfers from the ER at this volume scale. DFA methods were employed to dynamically assess the energy requirements of the fluid and adjust the acoustic ejection parameters to maintain a constant-velocity droplet. Furthermore, we demonstrate that the HV grid enhances droplet velocity and coalescence at the destination plate. These technologies enabled 5-µL per destination well transfers to a 384-well plate, with accuracy and precision values better than 4%. Last, we used the ER and Echo 525 liquid handler to perform a quantitative polymerase chain reaction (qPCR) assay to demonstrate an application that benefits from the flexibility and larger volume capabilities of the ER. © 2015 Society for Laboratory Automation and Screening.

  1. Genome-enabled predictions for binomial traits in sugar beet populations.

    Science.gov (United States)

    Biscarini, Filippo; Stevanato, Piergiorgio; Broccanello, Chiara; Stella, Alessandra; Saccomani, Massimo

    2014-07-22

    Genomic information can be used to predict not only continuous but also categorical (e.g. binomial) traits. Several traits of interest in human medicine and agriculture present a discrete distribution of phenotypes (e.g. disease status). Root vigor in sugar beet (B. vulgaris) is an example of a binomial trait of agronomic importance. In this paper, a panel of 192 SNPs (single nucleotide polymorphisms) was used to genotype 124 individual sugar beet plants from 18 lines, and to classify them as showing "high" or "low" root vigor. A threshold model was used to fit the relationship between binomial root vigor and SNP genotypes, through the matrix of genomic relationships between individuals, in a genomic BLUP (G-BLUP) approach. From a 5-fold cross-validation scheme, 500 testing subsets were generated. The estimated average cross-validation error rate was 0.000731 (0.073%). Only 9 out of 12326 test observations (500 replicates with an average test set size of 24.65) were misclassified. The estimated prediction accuracy was quite high. Such accurate predictions may be related to the high estimated heritability for root vigor (0.783) and to the few genes with large effect underlying the trait. Despite the sparse SNP panel, there was sufficient within-scaffold LD where SNPs with large effect on root vigor were located to allow genome-enabled predictions to work.

  2. Accurate Prediction of Motor Failures by Application of Multi CBM Tools: A Case Study

    Science.gov (United States)

    Dutta, Rana; Singh, Veerendra Pratap; Dwivedi, Jai Prakash

    2018-02-01

    Motor failures are very difficult to predict accurately with a single condition-monitoring tool, as the electrical and mechanical systems are closely related. Electrical problems, like phase unbalance and stator winding insulation failures, can at times lead to vibration problems, and at the same time mechanical failures, like bearing failure, lead to rotor eccentricity. In this case study of a 550 kW blower motor it has been shown that a rotor bar crack was detected by current signature analysis and vibration monitoring confirmed the same. In later months, in a similar motor, vibration monitoring predicted a bearing failure and current signature analysis confirmed the same. In both cases, after dismantling the motor, the predictions were found to be accurate. In this paper we discuss the accurate prediction of motor failures through the use of multiple condition-monitoring tools, with two case studies.

  3. Influential Factors for Accurate Load Prediction in a Demand Response Context

    DEFF Research Database (Denmark)

    Wollsen, Morten Gill; Kjærgaard, Mikkel Baun; Jørgensen, Bo Nørregaard

    2016-01-01

    Accurate prediction of a building's electricity load is crucial to respond to Demand Response events with an assessable load change. However, previous work on load prediction fails to consider a wider set of possible data sources. In this paper we study different data scenarios to map their influence on the prediction. ... Next, the time of day that is being predicted greatly influences the prediction, which is related to the weather pattern. By presenting these results we hope to improve the modeling of building loads and algorithms for Demand Response planning.

  4. Heart rate during basketball game play and volleyball drills accurately predicts oxygen uptake and energy expenditure.

    Science.gov (United States)

    Scribbans, T D; Berg, K; Narazaki, K; Janssen, I; Gurd, B J

    2015-09-01

    There is currently little information regarding the ability of metabolic prediction equations to accurately predict oxygen uptake and exercise intensity from heart rate (HR) during intermittent sport. The purpose of the present study was to develop and cross-validate equations for accurately predicting oxygen cost (VO2) and energy expenditure from HR during intermittent sport participation. Eleven healthy adult males (19.9±1.1 yrs) were recruited to establish the relationship between %VO2peak and %HRmax during low-intensity steady-state endurance (END), moderate-intensity interval (MOD) and high-intensity interval (HI) exercise, as performed on a cycle ergometer. Three equations (END, MOD, and HI) for predicting %VO2peak from %HRmax were developed. HR and VO2 were directly measured during basketball games (6 male, 20.8±1.0 yrs; 6 female, 20.0±1.3 yrs) and volleyball drills (12 female; 20.8±1.0 yrs). Comparisons were made between measured and predicted VO2 and energy expenditure using the 3 equations developed and 2 previously published equations. The END and MOD equations accurately predicted VO2 and energy expenditure, while the HI equation underestimated, and the previously published equations systematically overestimated, VO2 and energy expenditure. Intermittent sport VO2 and energy expenditure can be accurately predicted from heart rate data using either the END (%VO2peak = %HRmax × 1.008 − 17.17) or MOD (%VO2peak = %HRmax × 1.2 − 32) equation. These two simple equations provide an accessible and cost-effective method for accurate estimation of exercise intensity and energy expenditure during intermittent sport.
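The two equations reported in this abstract can be applied directly. The sketch below (the function name and example values are mine, not from the paper) converts a measured heart rate into an estimated %VO2peak using the published END and MOD coefficients:

```python
# Applying the two published heart-rate equations from this abstract:
#   END: %VO2peak = %HRmax * 1.008 - 17.17
#   MOD: %VO2peak = %HRmax * 1.2   - 32

def percent_vo2peak(hr: float, hr_max: float, equation: str = "END") -> float:
    """Estimate %VO2peak from heart rate using the END or MOD equation."""
    pct_hr_max = 100.0 * hr / hr_max
    if equation == "END":
        return pct_hr_max * 1.008 - 17.17
    if equation == "MOD":
        return pct_hr_max * 1.2 - 32.0
    raise ValueError("equation must be 'END' or 'MOD'")

# Example: HR of 160 bpm for an athlete with HRmax of 200 bpm (80 %HRmax)
print(round(percent_vo2peak(160, 200, "END"), 2))  # prints 63.47
print(round(percent_vo2peak(160, 200, "MOD"), 2))  # prints 64.0
```

The HI equation is not reproduced here because the abstract does not give its coefficients.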

  5. Rtips: fast and accurate tools for RNA 2D structure prediction using integer programming.

    Science.gov (United States)

    Kato, Yuki; Sato, Kengo; Asai, Kiyoshi; Akutsu, Tatsuya

    2012-07-01

    We present a web-based tool set Rtips for fast and accurate prediction of RNA 2D complex structures. Rtips comprises two computational tools based on integer programming, IPknot for predicting RNA secondary structures with pseudoknots and RactIP for predicting RNA-RNA interactions with kissing hairpins. Both servers can run much faster than existing services with the same purpose on large data sets as well as being at least comparable in prediction accuracy. The Rtips web server along with the stand-alone programs is freely accessible at http://rna.naist.jp/.

  6. Complete Soil Texture is Accurately Predicted by Visible Near-Infrared Spectroscopy

    DEFF Research Database (Denmark)

    Hermansen, Cecilie; Knadel, Maria; Møldrup, Per

    2017-01-01

    Core Ideas: Two PSC models are fitted to detailed measurements of clay, silt, and sand fractions. Both models describe well the PSCs of a broad soil database. Within- and between-field variations in PSC and OM are well predicted by vis-NIRS. The Fredlund model performs slightly better in data-fitting and vis-NIRS-predicted PSCs. A new vis-NIRS concept enables soil type classification in any texture system worldwide.

  7. A more accurate method of predicting soft tissue changes after mandibular setback surgery.

    Science.gov (United States)

    Suh, Hee-Yeon; Lee, Shin-Jae; Lee, Yun-Sik; Donatelli, Richard E; Wheeler, Timothy T; Kim, Soo-Hwan; Eo, Soo-Heang; Seo, Byoung-Moo

    2012-10-01

    To propose a more accurate method to predict the soft tissue changes after orthognathic surgery. The subjects included 69 patients who had undergone surgical correction of Class III mandibular prognathism by mandibular setback. Two multivariate methods of forming prediction equations were examined using 134 predictor and 36 soft tissue response variables: the ordinary least-squares (OLS) and the partial least-squares (PLS) methods. After fitting the equation, the bias and a mean absolute prediction error were calculated. To evaluate the predictive performance of the prediction equations, a 10-fold cross-validation method was used. The multivariate PLS method showed significantly better predictive performance than the conventional OLS method. The bias pattern was more favorable and the absolute prediction accuracy was significantly better with the PLS method than with the OLS method. The multivariate PLS method was more satisfactory than the conventional OLS method in accurately predicting the soft tissue profile change after Class III mandibular setback surgery. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  8. Accurate prediction of adsorption energies on graphene, using a dispersion-corrected semiempirical method including solvation.

    Science.gov (United States)

    Vincent, Mark A; Hillier, Ian H

    2014-08-25

    The accurate prediction of the adsorption energies of unsaturated molecules on graphene in the presence of water is essential for the design of molecules that can modify its properties and that can aid its processability. We here show that a semiempirical MO method corrected for dispersive interactions (PM6-DH2) can predict the adsorption energies of unsaturated hydrocarbons and the effect of substitution on these values to an accuracy comparable to DFT values and in good agreement with the experiment. The adsorption energies of TCNE, TCNQ, and a number of sulfonated pyrenes are also predicted, along with the effect of hydration using the COSMO model.

  9. Accurate disulfide-bonding network predictions improve ab initio structure prediction of cysteine-rich proteins.

    Science.gov (United States)

    Yang, Jing; He, Bao-Ji; Jang, Richard; Zhang, Yang; Shen, Hong-Bin

    2015-12-01

    Cysteine-rich proteins cover many important families in nature but there are currently no methods specifically designed for modeling the structure of these proteins. The accuracy of disulfide connectivity pattern prediction, particularly for the proteins of higher-order connections, e.g., >3 bonds, is too low to effectively assist structure assembly simulations. We propose a new hierarchical order reduction protocol called Cyscon for disulfide-bonding prediction. The most confident disulfide bonds are first identified and bonding prediction is then focused on the remaining cysteine residues based on SVR training. Compared with purely machine learning-based approaches, Cyscon improved the average accuracy of connectivity pattern prediction by 21.9%. For proteins with more than 5 disulfide bonds, Cyscon improved the accuracy by 585% on the benchmark set of PDBCYS. When applied to 158 non-redundant cysteine-rich proteins, Cyscon predictions helped increase (or decrease) the TM-score (or RMSD) of the ab initio QUARK modeling by 12.1% (or 14.4%). This result demonstrates a new avenue to improve the ab initio structure modeling for cysteine-rich proteins. http://www.csbio.sjtu.edu.cn/bioinf/Cyscon/ zhng@umich.edu or hbshen@sjtu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. LocARNA-P: Accurate boundary prediction and improved detection of structural RNAs

    DEFF Research Database (Denmark)

    Will, Sebastian; Joshi, Tejal; Hofacker, Ivo L.

    2012-01-01

    Current genomic screens for noncoding RNAs (ncRNAs) predict a large number of genomic regions containing potential structural ncRNAs. The analysis of these data requires highly accurate prediction of ncRNA boundaries and discrimination of promising candidate ncRNAs from weak predictions. Existing methods struggle with these goals because they rely on sequence-based multiple sequence alignments, which regularly misalign RNA structure and therefore do not support identification of structural similarities. To overcome this limitation, we compute columnwise and global reliabilities of alignments based... ...ARNA/LocARNA-P, and the software package, including documentation and a pipeline for refining screens for structural ncRNA, at http://www.bioinf.uni-freiburg.de/Supplements/LocARNA-P/.

  11. Accurate wavelength prediction of photonic crystal resonant reflection and applications in refractive index measurement

    DEFF Research Database (Denmark)

    Hermannsson, Pétur Gordon; Vannahme, Christoph; Smith, Cameron L. C.

    2014-01-01

    In the past decade, photonic crystal resonant reflectors have been increasingly used as the basis for label-free biochemical assays in lab-on-a-chip applications. In both designing and interpreting experimental results, an accurate model describing the optical behavior of such structures is essential. Here, an analytical method for precisely predicting the absolute positions of resonantly reflected wavelengths is presented. The model is experimentally verified to be highly accurate using nanoreplicated, polymer-based photonic crystal grating reflectors with varying grating periods... and superstrate materials. The importance of accounting for material dispersion in order to obtain accurate simulation results is highlighted, and a method for doing so using an iterative approach is demonstrated. Furthermore, an application for the model is demonstrated, in which the material dispersion...

  12. How Accurately do Leading and Lagging Indicators Predict F-16 Aircraft Availability (AA)

    Science.gov (United States)

    2016-08-01

    AIR COMMAND AND STAFF COLLEGE, AIR UNIVERSITY: How accurately do leading and lagging indicators predict F-16 aircraft availability (AA)? Cited sources include Air Combat Command Instruction 21-118, Logistics Maintenance Performance Indicator Reporting Procedures, 2 August 2012.

  13. Rapid and accurate prediction and scoring of water molecules in protein binding sites.

    Directory of Open Access Journals (Sweden)

    Gregory A Ross

    Full Text Available Water plays a critical role in ligand-protein interactions. However, it is still challenging to predict accurately not only where water molecules prefer to bind, but also which of those water molecules might be displaceable. The latter is often seen as a route to optimizing affinity of potential drug candidates. Using a protocol we call WaterDock, we show that the freely available AutoDock Vina tool can be used to predict accurately the binding sites of water molecules. WaterDock was validated using data from X-ray crystallography, neutron diffraction and molecular dynamics simulations and correctly predicted 97% of the water molecules in the test set. In addition, we combined data-mining, heuristic and machine learning techniques to develop probabilistic water molecule classifiers. When applied to WaterDock predictions in the Astex Diverse Set of protein ligand complexes, we could identify whether a water molecule was conserved or displaced to an accuracy of 75%. A second model predicted whether water molecules were displaced by polar groups or by non-polar groups to an accuracy of 80%. These results should prove useful for anyone wishing to undertake rational design of new compounds where the displacement of water molecules is being considered as a route to improved affinity.

  14. Rapid and Accurate Prediction and Scoring of Water Molecules in Protein Binding Sites

    Science.gov (United States)

    Ross, Gregory A.; Morris, Garrett M.; Biggin, Philip C.

    2012-01-01

    Water plays a critical role in ligand-protein interactions. However, it is still challenging to predict accurately not only where water molecules prefer to bind, but also which of those water molecules might be displaceable. The latter is often seen as a route to optimizing affinity of potential drug candidates. Using a protocol we call WaterDock, we show that the freely available AutoDock Vina tool can be used to predict accurately the binding sites of water molecules. WaterDock was validated using data from X-ray crystallography, neutron diffraction and molecular dynamics simulations and correctly predicted 97% of the water molecules in the test set. In addition, we combined data-mining, heuristic and machine learning techniques to develop probabilistic water molecule classifiers. When applied to WaterDock predictions in the Astex Diverse Set of protein ligand complexes, we could identify whether a water molecule was conserved or displaced to an accuracy of 75%. A second model predicted whether water molecules were displaced by polar groups or by non-polar groups to an accuracy of 80%. These results should prove useful for anyone wishing to undertake rational design of new compounds where the displacement of water molecules is being considered as a route to improved affinity. PMID:22396746

  15. Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle

    Science.gov (United States)

    Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.

    2017-04-01

    Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster, and hence on caster set points such as casting speed and cooling rates. Changes to the caster set points are typically made based on temperature measurements at the tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages, and can therefore be used to make accurate decisions about the caster set points in real time. However, this requires thermal prediction models that are both fast and accurate. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using a design-of-experiments technique. A regression method is used to train the predictor. The model instantaneously predicts the stratified temperature profile for a given set of process parameters, such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 pct of the predicted values are within an error range of ±5 K (±5 °C) when compared against the corresponding CFD results. Given its accuracy and computational efficiency, the model can be extended to thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as the tundish and caster.
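The surrogate-modeling workflow described above (fit a cheap regression to a pre-computed design of experiments, then query it instantaneously instead of re-running CFD) can be sketched in miniature. The data points and the one-variable linear form below are invented for illustration; the authors' model takes multiple process parameters as input:

```python
# Toy surrogate model: replace slow CFD runs with a regression fitted to a
# pre-computed design of experiments. Data and model form are invented.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical DoE: holding time (min) -> steel temperature drop (K) from CFD
times = [10, 20, 30, 40, 60]
drops = [4.1, 8.2, 11.9, 16.1, 24.0]
a, b = fit_linear(times, drops)

# Instant surrogate prediction for an unseen holding time of 50 min:
print(round(a + b * 50, 1))
```

The same idea scales to multivariate regression over all process parameters; only the fitting step touches CFD data.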

  16. Can Triage Nurses Accurately Predict Patient Dispositions in the Emergency Department?

    Science.gov (United States)

    Alexander, Danette; Abbott, Lincoln; Zhou, Qiuping; Staff, Ilene

    2016-11-01

    Contemporary emergency departments experience crowded conditions with poor patient outcomes. If triage nurses could accurately predict admission, one theoretical intervention to reduce crowding would be to place patients in the admission queue on arrival to the emergency department. The purpose of this study was to determine whether triage nurses could accurately predict patient dispositions. This prospective study was conducted in a tertiary academic hospital's emergency department using a data collection tool embedded in the ED electronic information system. Study variables included the predicted and actual disposition, as well as level of care, gender, age, and Emergency Severity Index level. Data were collected for 28 consecutive days from September 17 through October 9, 2013. Sensitivity and specificity, positive and negative predictive values, and accuracy of prediction, as well as the associations between patient characteristics and nurse prediction, were calculated. A total of 5,135 cases were included in the analysis. The triage nurses predicted admissions with a sensitivity of 71.5% and discharges with a specificity of 88.0%. Accuracy was significantly higher for younger patients and for patients at very low or very high severity levels. Although the nurses' ability to predict admissions at triage was not adequate to support a change in the bed procurement process, a specificity of 88.0% could have implications for rapid ED discharges or other low-acuity processes designed within the emergency department. Further studies in additional settings and on alternative interventions are needed. Copyright © 2016 Emergency Nurses Association. Published by Elsevier Inc. All rights reserved.
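For readers unfamiliar with the metrics reported in this abstract, the sketch below computes sensitivity, specificity, and predictive values from a confusion matrix, treating admission as the positive class. The counts are invented so that the rates match the reported 71.5% sensitivity and 88.0% specificity; only the metric definitions are standard:

```python
# Standard triage-prediction metrics from a confusion matrix
# (admission = positive, discharge = negative). Counts are invented.

def triage_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),        # correctly predicted admissions
        "specificity": tn / (tn + fp),        # correctly predicted discharges
        "ppv": tp / (tp + fp),                # positive predictive value
        "npv": tn / (tn + fn),                # negative predictive value
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }

# Hypothetical counts chosen to reproduce the reported 71.5% / 88.0% rates:
m = triage_metrics(tp=1430, fn=570, tn=2640, fp=360)
print({k: round(v, 3) for k, v in m.items()})
```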

  17. RactIP: fast and accurate prediction of RNA-RNA interaction using integer programming.

    Science.gov (United States)

    Kato, Yuki; Sato, Kengo; Hamada, Michiaki; Watanabe, Yoshihide; Asai, Kiyoshi; Akutsu, Tatsuya

    2010-09-15

    Considerable attention has been focused on predicting RNA-RNA interaction, since it is key to identifying possible targets of non-coding small RNAs that regulate gene expression post-transcriptionally. A number of computational studies have so far been devoted to predicting joint secondary structures or binding sites under a specific class of interactions. In general, there is a trade-off between the range of interaction types and the efficiency of a prediction algorithm, and thus efficient computational methods for predicting comprehensive types of interaction are still awaited. We present RactIP, a fast and accurate prediction method for RNA-RNA interaction of general type using integer programming. RactIP can integrate approximate information on an ensemble of equilibrium joint structures into the objective function of integer programming using posterior internal and external base-pairing probabilities. Experimental results on real interaction data show that the prediction accuracy of RactIP is at least comparable to that of several state-of-the-art methods for RNA-RNA interaction prediction. Moreover, we demonstrate that RactIP runs incomparably faster than competitive methods for predicting joint secondary structures. RactIP is implemented in C++, and the source code is available at http://www.ncrna.org/software/ractip/.

  18. Can phenological models predict tree phenology accurately in the future? The unrevealed hurdle of endodormancy break.

    Science.gov (United States)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2016-10-01

    The onset of the growing season of trees has advanced by 2.3 days per decade over the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development: on the one hand, low temperatures are necessary to break bud endodormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in recent decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at a species' equatorward range limits, leading to a delay or even an inability to flower or set new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date, because such information is very scarce. Here, we evaluated the ability of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for model parameterization yields a much more accurate prediction of the latter, albeit with a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates under climate scenarios, compared with models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore strongest after 2050 in the southernmost regions. Our results highlight the urgent need for large-scale measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. © 2016 John Wiley & Sons Ltd.

  19. Highly Accurate Structure-Based Prediction of HIV-1 Coreceptor Usage Suggests Intermolecular Interactions Driving Tropism.

    Science.gov (United States)

    Kieslich, Chris A; Tamamis, Phanourios; Guzman, Yannis A; Onel, Melis; Floudas, Christodoulos A

    2016-01-01

    HIV-1 entry into host cells is mediated by interactions between the V3-loop of viral glycoprotein gp120 and chemokine receptor CCR5 or CXCR4, collectively known as HIV-1 coreceptors. Accurate genotypic prediction of coreceptor usage is of significant clinical interest and determination of the factors driving tropism has been the focus of extensive study. We have developed a method based on nonlinear support vector machines to elucidate the interacting residue pairs driving coreceptor usage and provide highly accurate coreceptor usage predictions. Our models utilize centroid-centroid interaction energies from computationally derived structures of the V3-loop:coreceptor complexes as primary features, while additional features based on established rules regarding V3-loop sequences are also investigated. We tested our method on 2455 V3-loop sequences of various lengths and subtypes, and produce a median area under the receiver operator curve of 0.977 based on 500 runs of 10-fold cross validation. Our study is the first to elucidate a small set of specific interacting residue pairs between the V3-loop and coreceptors capable of predicting coreceptor usage with high accuracy across major HIV-1 subtypes. The developed method has been implemented as a web tool named CRUSH, CoReceptor USage prediction for HIV-1, which is available at http://ares.tamu.edu/CRUSH/.

  20. More accurate recombination prediction in HIV-1 using a robust decoding algorithm for HMMs

    Directory of Open Access Journals (Sweden)

    Brown Daniel G

    2011-05-01

    Full Text Available Background: Identifying recombinations in HIV is important for studying the epidemiology of the virus and aids in the design of potential vaccines and treatments. The previous widely used tool for this task uses the Viterbi algorithm in a hidden Markov model to model recombinant sequences. Results: We apply a new decoding algorithm for this HMM that improves prediction accuracy. Exactly locating breakpoints is usually impossible, since different subtypes are highly conserved in some sequence regions. Our algorithm identifies these sites up to a certain error tolerance. Our new algorithm is more accurate in predicting the location of recombination breakpoints. Our implementation of the algorithm is available at http://www.cs.uwaterloo.ca/~jmtruszk/jphmm_balls.tar.gz. Conclusions: By explicitly accounting for uncertainty in breakpoint positions, our algorithm offers more reliable predictions of recombination breakpoints in HIV-1. We also document a new domain of use for our new decoding approach in HMMs.

  1. More accurate recombination prediction in HIV-1 using a robust decoding algorithm for HMMs.

    Science.gov (United States)

    Truszkowski, Jakub; Brown, Daniel G

    2011-05-17

    Identifying recombinations in HIV is important for studying the epidemiology of the virus and aids in the design of potential vaccines and treatments. The previous widely-used tool for this task uses the Viterbi algorithm in a hidden Markov model to model recombinant sequences. We apply a new decoding algorithm for this HMM that improves prediction accuracy. Exactly locating breakpoints is usually impossible, since different subtypes are highly conserved in some sequence regions. Our algorithm identifies these sites up to a certain error tolerance. Our new algorithm is more accurate in predicting the location of recombination breakpoints. Our implementation of the algorithm is available at http://www.cs.uwaterloo.ca/~jmtruszk/jphmm_balls.tar.gz. By explicitly accounting for uncertainty in breakpoint positions, our algorithm offers more reliable predictions of recombination breakpoints in HIV-1. We also document a new domain of use for our new decoding approach in HMMs.
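The notion of decoding an HMM while accounting for per-position uncertainty can be illustrated with textbook forward-backward posterior decoding for a toy two-state model (two "subtypes" with rare switches standing in for recombination). This is generic HMM machinery, not the authors' robust decoding algorithm, and all probabilities below are invented:

```python
# Forward-backward posterior decoding for a toy 2-state HMM. The posteriors
# quantify per-position uncertainty about the hidden state, which is the kind
# of information uncertainty-aware decoders exploit near breakpoints.

def posteriors(obs, start, trans, emit):
    """Posterior state probabilities P(state | all observations) per position."""
    n, states = len(obs), range(len(start))
    fwd = [[0.0] * len(start) for _ in range(n)]
    bwd = [[0.0] * len(start) for _ in range(n)]
    for s in states:                          # forward pass
        fwd[0][s] = start[s] * emit[s][obs[0]]
    for t in range(1, n):
        for s in states:
            fwd[t][s] = emit[s][obs[t]] * sum(fwd[t-1][r] * trans[r][s] for r in states)
    for s in states:                          # backward pass
        bwd[n-1][s] = 1.0
    for t in range(n - 2, -1, -1):
        for s in states:
            bwd[t][s] = sum(trans[s][r] * emit[r][obs[t+1]] * bwd[t+1][r] for r in states)
    out = []
    for t in range(n):                        # normalize per position
        z = sum(fwd[t][s] * bwd[t][s] for s in states)
        out.append([fwd[t][s] * bwd[t][s] / z for s in states])
    return out

# Two "subtypes"; switching (recombination) is rare, each favors one symbol
start = [0.5, 0.5]
trans = [[0.9, 0.1], [0.1, 0.9]]
emit  = [[0.8, 0.2], [0.2, 0.8]]
post = posteriors([0, 0, 1, 1, 1], start, trans, emit)
print([round(p[1], 2) for p in post])   # P(state 1) rises across the switch
```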

  2. Tensor-decomposed vibrational coupled-cluster theory: Enabling large-scale, highly accurate vibrational-structure calculations

    Science.gov (United States)

    Madsen, Niels Kristian; Godtliebsen, Ian H.; Losilla, Sergio A.; Christiansen, Ove

    2018-01-01

    A new implementation of vibrational coupled-cluster (VCC) theory is presented, where all amplitude tensors are represented in the canonical polyadic (CP) format. The CP-VCC algorithm solves the non-linear VCC equations without ever constructing the amplitudes or error vectors in full dimension but still formally includes the full parameter space of the VCC[n] model in question resulting in the same vibrational energies as the conventional method. In a previous publication, we have described the non-linear-equation solver for CP-VCC calculations. In this work, we discuss the general algorithm for evaluating VCC error vectors in CP format including the rank-reduction methods used during the summation of the many terms in the VCC amplitude equations. Benchmark calculations for studying the computational scaling and memory usage of the CP-VCC algorithm are performed on a set of molecules including thiadiazole and an array of polycyclic aromatic hydrocarbons. The results show that the reduced scaling and memory requirements of the CP-VCC algorithm allows for performing high-order VCC calculations on systems with up to 66 vibrational modes (anthracene), which indeed are not possible using the conventional VCC method. This paves the way for obtaining highly accurate vibrational spectra and properties of larger molecules.

  3. An accurate model for numerical prediction of piezoelectric energy harvesting from fluid structure interaction problems

    International Nuclear Information System (INIS)

    Amini, Y; Emdad, H; Farid, M

    2014-01-01

    Piezoelectric energy harvesting (PEH) from ambient energy sources, particularly vibrations, has attracted considerable interest throughout the last decade. Since fluid flow has a high energy density, it is one of the best candidates for PEH. Indeed, piezoelectric energy harvesting from fluid flow takes the form of a natural three-way coupling of the turbulent fluid flow, the electromechanical effect of the piezoelectric material, and the electrical circuit. Some experimental and numerical studies of piezoelectric energy harvesting from fluid flow exist in the literature. Nevertheless, accurate modeling for predicting the characteristics of this three-way coupling has not yet been developed. In the present study, accurate modeling for this triple coupling is developed and validated against experimental results. A new code based on this modeling is developed on the OpenFOAM platform. (paper)

  4. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    International Nuclear Information System (INIS)

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; Lilienfeld, O. Anatole von; Müller, Klaus-Robert; Tkatchenko, Alexandre

    2015-01-01

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the 'holy grail' of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies

  5. A Novel Method for Accurate Operon Predictions in All SequencedProkaryotes

    Energy Technology Data Exchange (ETDEWEB)

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.

    2004-12-01

    We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
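The core distance signal this method exploits, that same-operon gene pairs tend to have short intergenic distances, can be illustrated with a toy log-odds score. The exponential class models and their mean distances below are hypothetical illustrations, not the authors' genome-specific, automatically calibrated distributions:

```python
# Toy log-odds score for the intergenic-distance signal: adjacent same-strand
# gene pairs with short spacing are more likely to share an operon.
# The class distributions (exponential) and means are invented.
import math

def distance_log_odds(d: float, same_mean: float = 20.0, diff_mean: float = 150.0) -> float:
    """Log-odds that a gene pair with intergenic distance d (bp) shares an
    operon, modeling distances in each class as exponential."""
    ll_same = -math.log(same_mean) - d / same_mean   # log p(d | same operon)
    ll_diff = -math.log(diff_mean) - d / diff_mean   # log p(d | different operons)
    return ll_same - ll_diff

for d in (5, 50, 300):
    print(d, round(distance_log_odds(d), 2))   # positive = same operon favored
```

In the actual method this kind of distance evidence is combined with comparative genomic measures rather than used alone.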

  6. Accurate prediction of emission energies with TD-DFT methods for platinum and iridium OLED materials.

    Science.gov (United States)

    Morello, Glenn R

    2017-06-01

    Accurate prediction of triplet excitation energies for transition metal complexes has proven to be a difficult task when confronted with a variety of metal centers and ligand types. Specifically, phosphorescent transition metal light emitters, typically based on iridium or platinum, often give calculated results of varying accuracy when compared to experimentally determined T1 emission values. Developing a computational protocol for reliably calculating OLED emission energies will allow for the prediction of a complex's color prior to synthesis, saving time and resources in the laboratory. A comprehensive investigation into the dependence of the DFT functional, basis set, and solvent model is presented here, with the aim of identifying an accurate method while remaining computationally cost-effective. A protocol that uses TD-DFT excitation energies on ground-state geometries was used to predict triplet emission values of 34 experimentally characterized complexes, using a combination of gas phase B3LYP/LANL2dz for optimization and B3LYP/CEP-31G/PCM(THF) for excitation energies. Results show excellent correlation with experimental emission values of iridium and platinum complexes for a wide range of emission energies. The set of complexes tested includes neutral and charged complexes, as well as a variety of different ligand types.

  7. The MIDAS touch for Accurately Predicting the Stress-Strain Behavior of Tantalum

    Energy Technology Data Exchange (ETDEWEB)

    Jorgensen, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-03-02

    Testing the behavior of metals in extreme environments is not always feasible, so material scientists use models to try and predict the behavior. To achieve accurate results it is necessary to use the appropriate model and material-specific parameters. This research evaluated the performance of six material models available in the MIDAS database [1] to determine at which temperatures and strain-rates they perform best, and to determine to which experimental data their parameters were optimized. Additionally, parameters were optimized for the Johnson-Cook model using experimental data from Lassila et al [2].
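As context for the model comparison above, the Johnson-Cook flow stress is a compact empirical formula combining strain hardening, strain-rate sensitivity, and thermal softening. A minimal sketch follows; the parameter values are illustrative placeholders, not the optimized tantalum parameters from this work.

```python
import math

def johnson_cook_stress(strain, strain_rate, T,
                        A, B, n, C, m,
                        ref_rate=1.0, T_room=298.0, T_melt=3290.0):
    """Johnson-Cook flow stress:
    (strain hardening) x (rate sensitivity) x (thermal softening)."""
    T_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
    return (A + B * strain**n) \
         * (1.0 + C * math.log(strain_rate / ref_rate)) \
         * (1.0 - T_star**m)

# Illustrative (not fitted) parameters, stress in MPa
sigma = johnson_cook_stress(strain=0.1, strain_rate=1000.0, T=298.0,
                            A=140.0, B=310.0, n=0.4, C=0.016, m=1.0)
print(round(sigma, 1))
```

Note the limiting behavior: at room temperature the thermal term is 1, and at the melting point the predicted flow stress vanishes.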

  8. Accurate bearing remaining useful life prediction based on Weibull distribution and artificial neural network

    Science.gov (United States)

    Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat

    2015-05-01

    Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition based maintenance to improve reliability and decrease machine breakdown and maintenance cost. Bearings are among the most important components in industry and need to be monitored so that users can predict their RUL. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL by Prognostics and Health Management (PHM) techniques. In this paper, the proposed method is based on the data-driven prognostic approach. The combination of Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and Weibull distribution (WD) is explored. WD is used just in the training phase to fit measurements and to avoid areas of fluctuation in the time domain. The SFAM training process is based on fitted measurements at present and previous inspection time points as input, whereas the SFAM testing process is based on real measurements at present and previous inspections. Thanks to the fuzzy learning process, SFAM has a strong ability and good performance in learning nonlinear time series. As output, seven classes are defined: healthy bearing and six states of bearing degradation. In order to find the optimal RUL prediction, a smoothing phase is proposed in this paper. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) based on vibration signals. The proposed prediction approach can be applied to the prognostics of various other mechanical assets.
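The Weibull-fitting step used to smooth raw degradation measurements can be sketched with a standard linearized least-squares fit (median-rank regression). This is a generic sketch under stated assumptions, not the paper's exact fitting procedure, and the sample data are synthetic.

```python
import math

def fit_weibull(times):
    """Estimate shape k and scale lam by least squares on the linearized
    Weibull CDF: ln(-ln(1-F)) = k*ln(t) - k*ln(lam)."""
    n = len(times)
    xs, ys = [], []
    for i, t in enumerate(sorted(times), start=1):
        F = (i - 0.3) / (n + 0.4)              # median-rank plotting position
        xs.append(math.log(t))
        ys.append(math.log(-math.log(1.0 - F)))
    mx, my = sum(xs) / n, sum(ys) / n
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    lam = math.exp(mx - my / k)
    return k, lam

# Synthetic failure times taken deterministically from the quantiles
# of a Weibull(k=2, lam=100) distribution
sample = [100.0 * (-math.log(1.0 - (i - 0.5) / 200.0)) ** 0.5
          for i in range(1, 201)]
k_hat, lam_hat = fit_weibull(sample)
print(k_hat, lam_hat)
```

Median-rank regression is a common closed-form alternative to maximum likelihood; on the synthetic quantile data above it recovers the generating parameters closely.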

  9. Accurate prediction of severe allergic reactions by a small set of environmental parameters (NDVI, temperature).

    Science.gov (United States)

    Notas, George; Bariotakis, Michail; Kalogrias, Vaios; Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

    Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the life of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could also probably be used for the prediction of other environment-related diseases and conditions.
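A minimal sketch of a two-predictor model of this kind (NDVI plus temperature) is below. All numbers are invented for illustration, and ordinary least squares stands in for whatever model form the authors actually fitted; the point is only how two environmental covariates can drive an alert threshold.

```python
import numpy as np

# Hypothetical sketch: regress weekly severe-allergy hospital visits on NDVI
# and mean temperature, then flag weeks whose predicted load exceeds an
# alert threshold. All values are invented for illustration.
ndvi = np.array([0.20, 0.35, 0.50, 0.65, 0.80])
temp_c = np.array([12.0, 15.0, 18.0, 21.0, 24.0])
visits = np.array([3.0, 5.0, 8.0, 12.0, 15.0])

A = np.column_stack([ndvi, temp_c, np.ones(len(ndvi))])  # add intercept
coef, *_ = np.linalg.lstsq(A, visits, rcond=None)

def predicted_visits(n, t):
    """Predicted weekly visit count for NDVI n and temperature t."""
    return float(coef @ np.array([n, t, 1.0]))

alert = predicted_visits(0.75, 23.0) > 10.0   # threshold chosen arbitrarily
print(alert)
```

In a deployed module, the threshold would be calibrated on the validation period rather than chosen by hand.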

  10. An Interpretable Machine Learning Model for Accurate Prediction of Sepsis in the ICU.

    Science.gov (United States)

    Nemati, Shamim; Holder, Andre; Razmi, Fereshteh; Stanley, Matthew D; Clifford, Gari D; Buchman, Timothy G

    2018-04-01

    Sepsis is among the leading causes of morbidity, mortality, and cost overruns in critically ill patients. Early intervention with antibiotics improves survival in septic patients. However, no clinically validated system exists for real-time prediction of sepsis onset. We aimed to develop and validate an Artificial Intelligence Sepsis Expert algorithm for early prediction of sepsis. Observational cohort study. Academic medical center from January 2013 to December 2015. Over 31,000 admissions to the ICUs at two Emory University hospitals (development cohort), in addition to over 52,000 ICU patients from the publicly available Medical Information Mart for Intensive Care-III ICU database (validation cohort). Patients who met the Third International Consensus Definitions for Sepsis (Sepsis-3) prior to or within 4 hours of their ICU admission were excluded, resulting in roughly 27,000 and 42,000 patients within our development and validation cohorts, respectively. None. High-resolution vital signs time series and electronic medical record data were extracted. A set of 65 features (variables) was calculated on an hourly basis and passed to the Artificial Intelligence Sepsis Expert algorithm to predict onset of sepsis in the following T hours (where T = 12, 8, 6, or 4) and to produce a list of the most significant contributing factors. For the 12-, 8-, 6-, and 4-hour-ahead prediction of sepsis, Artificial Intelligence Sepsis Expert achieved an area under the receiver operating characteristic curve in the range of 0.83-0.85. Performance of the Artificial Intelligence Sepsis Expert on the development and validation cohorts was indistinguishable. Using data available in the ICU in real-time, Artificial Intelligence Sepsis Expert can accurately predict the onset of sepsis in an ICU patient 4-12 hours prior to clinical recognition. A prospective study is necessary to determine the
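The headline metric here, area under the receiver operating characteristic curve (AUROC), has a simple probabilistic reading: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A self-contained sketch with toy risk scores (not the study's data):

```python
def auroc(scores_pos, scores_neg):
    """AUROC = probability a random positive outranks a random negative
    (ties count one half). O(n*m) pairwise form, fine for small sets."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy hourly risk scores: septic patients tend to score higher
septic = [0.9, 0.8, 0.55]
non_septic = [0.6, 0.4, 0.3, 0.2]
print(auroc(septic, non_septic))  # 11 of 12 pairs correctly ordered
```

An AUROC of 0.83-0.85, as reported, means roughly five of every six such patient pairs are ranked correctly.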

  11. Fast and accurate covalent bond predictions using perturbation theory in chemical space

    Science.gov (United States)

    Chang, Kuang-Yu; von Lilienfeld, Anatole

    I will discuss the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among systems of different chemical composition. We have investigated single, double, and triple bonds occurring in small sets of iso-valence-electronic molecular species with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order estimates of covalent bonding potentials can achieve chemical accuracy (within 1 kcal/mol) if the alchemical interpolation is vertical (fixed geometry) among chemical elements from third and fourth row of the periodic table. When applied to nonbonded systems of molecular dimers or solids such as III-V semiconductors, alanates, alkali halides, and transition metals, similar observations hold, enabling rapid predictions of van der Waals energies, defect energies, band-structures, crystal structures, and lattice constants.

  12. Microbiome Data Accurately Predicts the Postmortem Interval Using Random Forest Regression Models

    Directory of Open Access Journals (Sweden)

    Aeriel Belk

    2018-02-01

    Full Text Available Death investigations often include an effort to establish the postmortem interval (PMI) in cases in which the time of death is uncertain. The postmortem interval can lead to the identification of the deceased and the validation of witness statements and suspect alibis. Recent research has demonstrated that microbes provide an accurate clock that starts at death and relies on ecological change in the microbial communities that normally inhabit a body and its surrounding environment. Here, we explore how to build the most robust Random Forest regression models for prediction of PMI by testing models built on different sample types (gravesoil, skin of the torso, skin of the head), gene markers (16S ribosomal RNA (rRNA), 18S rRNA, internal transcribed spacer regions (ITS)), and taxonomic levels (sequence variants, species, genus, etc.). We also tested whether particular suites of indicator microbes were informative across different datasets. Generally, results indicate that the most accurate models for predicting PMI were built using gravesoil and skin data using the 16S rRNA genetic marker at the taxonomic level of phyla. Additionally, several phyla consistently contributed highly to model accuracy and may be candidate indicators of PMI.
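A minimal regression sketch of the idea follows, with an invented phylum-abundance table. Ordinary least squares stands in for the paper's Random Forest so the example stays dependency-light; the workflow (train on samples with known PMI, predict for a new community profile) is the same.

```python
import numpy as np

# Hypothetical sketch: predict postmortem interval (days) from phylum-level
# relative abundances. The abundance table and PMI values are invented.
X = np.array([
    # [Proteobacteria, Firmicutes] relative abundance per sample
    [0.10, 0.60],
    [0.25, 0.45],
    [0.40, 0.30],
    [0.55, 0.20],
    [0.70, 0.10],
])
pmi_days = np.array([2.0, 7.0, 12.0, 17.5, 22.0])   # known PMI per sample

A = np.column_stack([X, np.ones(len(X))])           # add intercept column
coef, *_ = np.linalg.lstsq(A, pmi_days, rcond=None)

new_sample = np.array([0.33, 0.38, 1.0])            # unseen community profile
print(float(new_sample @ coef))
```

A Random Forest, as used in the paper, additionally captures nonlinear succession dynamics and yields per-taxon importance scores, which is how the candidate indicator phyla were identified.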

  13. Combining transcription factor binding affinities with open-chromatin data for accurate gene expression prediction.

    Science.gov (United States)

    Schmidt, Florian; Gasparoni, Nina; Gasparoni, Gilles; Gianmoena, Kathrin; Cadenas, Cristina; Polansky, Julia K; Ebert, Peter; Nordström, Karl; Barann, Matthias; Sinha, Anupam; Fröhler, Sebastian; Xiong, Jieyi; Dehghani Amirabad, Azim; Behjati Ardakani, Fatemeh; Hutter, Barbara; Zipprich, Gideon; Felder, Bärbel; Eils, Jürgen; Brors, Benedikt; Chen, Wei; Hengstler, Jan G; Hamann, Alf; Lengauer, Thomas; Rosenstiel, Philip; Walter, Jörn; Schulz, Marcel H

    2017-01-09

    The binding and contribution of transcription factors (TF) to cell specific gene expression is often deduced from open-chromatin measurements to avoid costly TF ChIP-seq assays. Thus, it is important to develop computational methods for accurate TF binding prediction in open-chromatin regions (OCRs). Here, we report a novel segmentation-based method, TEPIC, to predict TF binding by combining sets of OCRs with position weight matrices. TEPIC can be applied to various open-chromatin data, e.g. DNaseI-seq and NOMe-seq. Additionally, Histone-Marks (HMs) can be used to identify candidate TF binding sites. TEPIC computes TF affinities and uses open-chromatin/HM signal intensity as quantitative measures of TF binding strength. Using machine learning, we find low affinity binding sites to improve our ability to explain gene expression variability compared to the standard presence/absence classification of binding sites. Further, we show that both footprints and peaks capture essential TF binding events and lead to a good prediction performance. In our application, gene-based scores computed by TEPIC with one open-chromatin assay nearly reach the quality of several TF ChIP-seq data sets. Finally, these scores correctly predict known transcriptional regulators as illustrated by the application to novel DNaseI-seq and NOMe-seq data for primary human hepatocytes and CD4+ T-cells, respectively. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
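The key modeling choice described above, scoring TF affinity rather than calling sites present or absent, can be sketched as a Boltzmann-weighted sum of PWM matches over every window of an open-chromatin region, weighted by the region's signal. This is a generic TRAP-style sketch, not TEPIC's exact scoring function; the PWM and sequence are invented for illustration.

```python
import math

# Toy position weight matrix: per-position log-likelihood ratios for a
# 3-bp motif "TGA". Values are invented for illustration.
PWM = {
    0: {'A': -2.0, 'C': -2.0, 'G': -2.0, 'T': 1.0},
    1: {'A': -2.0, 'C': -2.0, 'G': 1.0, 'T': -2.0},
    2: {'A': 1.0, 'C': -2.0, 'G': -2.0, 'T': -2.0},
}

def tf_affinity(seq, pwm, scale=1.0):
    """Sum of exp(score) over all windows: strong sites dominate, but
    low-affinity sites still contribute (the point made in the paper)."""
    width = len(pwm)
    total = 0.0
    for i in range(len(seq) - width + 1):
        score = sum(pwm[j][seq[i + j]] for j in range(width))
        total += math.exp(scale * score)
    return total

region_signal = 2.5    # open-chromatin read-depth weight for this region
score = region_signal * tf_affinity("CCTGACCTTA", PWM)
print(score)
```

A gene-level score then aggregates such region scores over all open regions near the gene's transcription start site.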

  14. Microarray Я US: a user-friendly graphical interface to Bioconductor tools that enables accurate microarray data analysis and expedites comprehensive functional analysis of microarray results

    Directory of Open Access Journals (Sweden)

    Dai Yilin

    2012-06-01

    Full Text Available Abstract Background Microarray data analysis presents a significant challenge to researchers who are unable to use the powerful Bioconductor and its numerous tools due to their lack of knowledge of R language. Among the few existing software programs that offer a graphic user interface to Bioconductor packages, none have implemented a comprehensive strategy to address the accuracy and reliability issue of microarray data analysis due to the well known probe design problems associated with many widely used microarray chips. There is also a lack of tools that would expedite the functional analysis of microarray results. Findings We present Microarray Я US, an R-based graphical user interface that implements over a dozen popular Bioconductor packages to offer researchers a streamlined workflow for routine differential microarray expression data analysis without the need to learn R language. In order to enable a more accurate analysis and interpretation of microarray data, we incorporated the latest custom probe re-definition and re-annotation for Affymetrix and Illumina chips. A versatile microarray results output utility tool was also implemented for easy and fast generation of input files for over 20 of the most widely used functional analysis software programs. Conclusion Coupled with a well-designed user interface, Microarray Я US leverages cutting edge Bioconductor packages for researchers with no knowledge in R language. It also enables a more reliable and accurate microarray data analysis and expedites downstream functional analysis of microarray results.

  15. Third generation dual-source CT enables accurate diagnosis of coronary restenosis in all size stents with low radiation dose and preserved image quality.

    Science.gov (United States)

    Li, Yuehua; Yu, Mengmeng; Li, Wenbin; Lu, Zhigang; Wei, Meng; Zhang, Jiayin

    2018-01-18

    To investigate the diagnostic performance of low dose stent imaging in patients with large (≥ 3 mm) and small (< 3 mm) stents using third-generation dual-source CT. Symptomatic patients suspected of having in-stent restenosis (ISR) were prospectively enrolled. Coronary computed tomography angiography (CCTA) and invasive coronary angiography (ICA) were performed within 1 month for correlation. Binary ISR was defined as an in-stent neointimal proliferation with diameter stenosis ≥ 50%. The radiation dose and image quality of CCTA were also assessed. Sixty-nine patients with 140 stents were ultimately included for analysis. The mean total radiation dose of CCTA was 1.3 ± 0.72 mSv in all patients and 0.95 ± 0.17 mSv in patients with high pitch acquisition. The overall diagnostic accuracy of CCTA stent imaging in patient-based, lesion-based and stent-based analysis was 95.7%, 94.1% and 94.3%, respectively. Further, the diagnostic accuracy of CCTA in the small calibre stent group (diameter < 3 mm) was 88.5%. Third-generation dual-source CT enables accurate diagnosis of coronary ISR of both large and small calibre stents. Low radiation dose could be achieved with preserved image quality. • Third-generation DSCT enables accurate diagnosis of coronary ISR of all size stents. • Low radiation dose could be achieved with preserved image quality. • The diagnostic accuracy of CCTA of small calibre stents was 88.5%.

  16. Fast and accurate prediction of proton affinities: revisiting the extended Koopmans' theorem for protons.

    Science.gov (United States)

    Pedraza-González, Laura; Charry, Jorge; Quintero, William; Alí-Torres, Jorge; Reyes, Andrés

    2017-09-27

    In this work we propose schemes based on the extended Koopmans' theorem for quantum nuclei (eKT), in the framework of the any particle molecular orbital approach (APMO/KT), for the quantitative prediction of gas phase proton affinities (PAs). The performance of these schemes has been tested on a set of 300 organic molecules containing diverse functional groups. The APMO/KT scheme scaled by functional group (APMO/KT-SC-FG) displays an overall mean absolute error of 1.1 kcal mol -1 with respect to experimental data. Its performance in PA calculations is similar to that of post-Hartree-Fock composite methods or that of the APMO second order proton propagator (APMO/PP2) approach. The APMO/KT-SC-FG scheme is also employed to predict PAs of polyfunctional molecules such as the Nerve Agent VX and the 20 common α-amino acids, finding excellent agreement with available theoretical and/or experimental data. The accuracy of the predictions demonstrates that the APMO/KT-SC-FG scheme is a low-cost alternative to adiabatic methods for the calculation of accurate PAs. One of the most appealing features of the APMO/KT-SC-FG scheme, is that PAs can be derived from one single-point APMO Hartree-Fock calculation.

  17. Differential contribution of visual and auditory information to accurately predict the direction and rotational motion of a visual stimulus.

    Science.gov (United States)

    Park, Seoung Hoon; Kim, Seonjin; Kwon, MinHyuk; Christou, Evangelos A

    2016-03-01

    Visual and auditory information are critical for perception and enhance the ability of an individual to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identifying the direction and rotational motion of the stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of the stimulus based on visual and auditory information. In this study, we recruited 9 expert table-tennis players and used table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information. The goal was to anticipate the direction of the service (left or right) and the rotational motion of the service (topspin, sidespin, or cut). We recorded their responses and quantified the following outcomes: (i) directional accuracy and (ii) rotational motion accuracy. Response accuracy was the number of accurate predictions relative to the total number of trials. The ability of the participants to predict the direction of the service accurately increased with additional visual information but not with auditory information. In contrast, the ability of the participants to predict the rotational motion of the service accurately increased with the addition of auditory information to visual information but not with additional visual information alone. In conclusion, this finding demonstrates that visual information enhances the ability of an individual to accurately predict the direction of the stimulus, whereas additional auditory information enhances the ability of an individual to accurately predict the rotational motion of the stimulus.

  18. Improvement of a land surface model for accurate prediction of surface energy and water balances

    International Nuclear Information System (INIS)

    Katata, Genki

    2009-02-01

    In order to predict energy and water balances between the biosphere and atmosphere accurately, sophisticated schemes to calculate evaporation and adsorption processes in the soil and cloud (fog) water deposition on vegetation were implemented in the one-dimensional atmosphere-soil-vegetation model including the CO2 exchange process (SOLVEG2). Performance tests in arid areas showed that the above schemes have a significant effect on surface energy and water balances. The framework of the above schemes incorporated in SOLVEG2 and instructions for running the model are documented. With further modifications of the model to implement carbon exchanges between vegetation and soil, deposition processes of materials on the land surface, vegetation stress-growth-dynamics, etc., the model is suited to evaluating the effect of environmental loads on ecosystems from atmospheric pollutants and radioactive substances under climate changes such as global warming and drought. (author)

  19. Watershed area ratio accurately predicts daily streamflow in nested catchments in the Catskills, New York

    Directory of Open Access Journals (Sweden)

    Chris C. Gianfagna

    2015-09-01

    New hydrological insights for the region: Watershed area ratio was the most important basin parameter for estimating flow at upstream sites based on downstream flow. The area ratio alone explained 93% of the variance in the slopes of relationships between upstream and downstream flows. Regression analysis indicated that flow at any upstream point can be estimated by multiplying the flow at a downstream reference gage by the watershed area ratio. This method accurately predicted upstream flows at area ratios as low as 0.005. We also observed a very strong relationship (R2 = 0.79) between area ratio and flow–flow slopes in non-nested catchments. Our results indicate that a simple flow estimation method based on watershed area ratios is justifiable, and indeed preferred, for the estimation of daily streamflow in ungaged watersheds in the Catskills region.
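The estimation rule described above is a one-line scaling; a minimal sketch with invented basin areas and discharge:

```python
# Area-ratio method: upstream flow is estimated as downstream flow scaled
# by the ratio of the watershed areas. Values below are invented.

def upstream_flow(downstream_flow_cms, upstream_area_km2, downstream_area_km2):
    """Estimate discharge at an ungaged upstream point from a downstream
    reference gage, assuming runoff scales with drainage area."""
    area_ratio = upstream_area_km2 / downstream_area_km2
    return area_ratio * downstream_flow_cms

# A 38 km2 headwater catchment inside a 493 km2 gaged basin at 12.0 m3/s
q = upstream_flow(12.0, 38.0, 493.0)
print(round(q, 3))
```

The paper's finding is that this proportionality holds well in nested Catskills catchments down to area ratios of about 0.005.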

  20. In vitro transcription accurately predicts lac repressor phenotype in vivo in Escherichia coli

    Directory of Open Access Journals (Sweden)

    Matthew Almond Sochor

    2014-07-01

    Full Text Available A multitude of studies have looked at the in vivo and in vitro behavior of the lac repressor binding to DNA and effector molecules in order to study transcriptional repression; however, these studies are not always reconcilable. Here we use in vitro transcription to directly mimic the in vivo system in order to build a self-consistent set of experiments to directly compare in vivo and in vitro genetic repression. A thermodynamic model of the lac repressor binding to operator DNA and effector is used to link DNA occupancy to either normalized in vitro mRNA product or normalized in vivo fluorescence of a regulated gene, YFP. Accurate measurements of repressor, DNA and effector concentrations were made both in vivo and in vitro, allowing for direct modeling of the entire thermodynamic equilibrium. In vivo repression profiles are accurately predicted from the given in vitro parameters when molecular crowding is considered. Interestingly, our measured repressor–operator DNA affinity differs significantly from previous in vitro measurements. The literature values are unable to replicate in vivo binding data. We therefore conclude that the repressor–DNA affinity is much weaker than previously thought. This finding suggests that in vitro techniques specifically designed to mimic the in vivo process may be necessary to replicate the native system.
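The logic of such a thermodynamic model can be sketched with two equilibrium expressions: effector (IPTG) binding depletes the DNA-binding-competent repressor pool, and expression is proportional to the fraction of time the operator is repressor-free. This is a hedged two-state sketch; the concentrations and dissociation constants are illustrative, not the paper's fitted values, and the real model includes crowding corrections.

```python
# Minimal two-state thermodynamic repression sketch; all numbers invented.

def fraction_unbound(repressor_nM, kd_nM):
    """Equilibrium probability that the operator is free of repressor."""
    return 1.0 / (1.0 + repressor_nM / kd_nM)

def active_repressor(total_nM, iptg_uM, kd_effector_uM):
    """Effector (IPTG) binding depletes the DNA-binding-competent pool."""
    return total_nM / (1.0 + iptg_uM / kd_effector_uM)

# Expression with and without inducer, normalized to the unrepressed level
no_iptg = fraction_unbound(active_repressor(50.0, 0.0, 10.0), 0.1)
full_iptg = fraction_unbound(active_repressor(50.0, 1000.0, 10.0), 0.1)
print(no_iptg, full_iptg)
```

The paper's conclusion amounts to saying that the operator Kd in this kind of expression must be larger (weaker binding) than literature in vitro values for the model to reproduce in vivo repression profiles.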

  1. A Machine Learned Classifier That Uses Gene Expression Data to Accurately Predict Estrogen Receptor Status

    Science.gov (United States)

    Bastani, Meysam; Vos, Larissa; Asgarian, Nasimeh; Deschenes, Jean; Graham, Kathryn; Mackey, John; Greiner, Russell

    2013-01-01

    Background Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER) status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER-status based on RNA expression can provide more objective, quantitative and reproducible test results. Methods To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER-status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin fixed tumor. Results This produced a three-gene classifier that can predict the ER-status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER-status. Conclusions Our efficient and parsimonious classifier lends itself to high throughput, highly accurate and low-cost RNA-based assessments of ER-status, suitable for routine high-throughput clinical use. This analytic method provides a proof-of-principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions. PMID:24312637
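The shape of a parsimonious three-gene classifier of this kind can be sketched as a logistic score over three expression values. The gene names, weights, and cutoff below are invented to illustrate the idea; they are not the classifier learned in the paper.

```python
import math

# Hypothetical three-gene logistic classifier; weights and bias invented.
WEIGHTS = {"ESR1": 1.8, "SCUBE2": 0.9, "C6orf211": 0.6}
BIAS = -14.0

def er_positive_probability(expression):
    """Logistic score over three log2 expression values."""
    z = BIAS + sum(WEIGHTS[g] * expression[g] for g in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

tumor = {"ESR1": 7.5, "SCUBE2": 3.0, "C6orf211": 2.0}   # log2 expression
p = er_positive_probability(tumor)
print(p > 0.5)   # classify as ER-positive if probability exceeds 0.5
```

The appeal of such a rule is exactly what the abstract emphasizes: three measurements and a fixed formula, cheap to run on any expression platform.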

  2. A machine learned classifier that uses gene expression data to accurately predict estrogen receptor status.

    Directory of Open Access Journals (Sweden)

    Meysam Bastani

    Full Text Available BACKGROUND: Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER) status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER-status based on RNA expression can provide more objective, quantitative and reproducible test results. METHODS: To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER-status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin fixed tumor. RESULTS: This produced a three-gene classifier that can predict the ER-status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER-status. CONCLUSIONS: Our efficient and parsimonious classifier lends itself to high throughput, highly accurate and low-cost RNA-based assessments of ER-status, suitable for routine high-throughput clinical use. This analytic method provides a proof-of-principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions.

  3. Highly accurate prediction of food challenge outcome using routinely available clinical data.

    Science.gov (United States)

    DunnGalvin, Audrey; Daly, Deirdre; Cullinane, Claire; Stenke, Emily; Keeton, Diane; Erlewyn-Lajeunesse, Mich; Roberts, Graham C; Lucas, Jane; Hourihane, Jonathan O'B

    2011-03-01

    Serum specific IgE or skin prick tests are less useful at levels below accepted decision points. We sought to develop and validate a model to predict food challenge outcome by using routinely collected data in a diverse sample of children considered suitable for food challenge. The proto-algorithm was generated by using a limited data set from 1 service (phase 1). We retrospectively applied, evaluated, and modified the initial model by using an extended data set in another center (phase 2). Finally, we prospectively validated the model in a blind study in a further group of children undergoing food challenge for peanut, milk, or egg in the second center (phase 3). Allergen-specific models were developed for peanut, egg, and milk. Phase 1 (N = 429) identified 5 clinical factors associated with diagnosis of food allergy by food challenge. In phase 2 (N = 289), we examined the predictive ability of 6 clinical factors: skin prick test, serum specific IgE, total IgE minus serum specific IgE, symptoms, sex, and age. In phase 3 (N = 70), 97% of cases were accurately predicted as positive and 94% as negative. Our model showed an advantage in clinical prediction compared with serum specific IgE only, skin prick test only, and serum specific IgE and skin prick test (92% accuracy vs 57%, and 81%, respectively). Our findings have implications for the improved delivery of food allergy-related health care, enhanced food allergy-related quality of life, and economized use of health service resources by decreasing the number of food challenges performed. Copyright © 2011 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.

  4. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    Science.gov (United States)

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been widely used in the medicine and health care sector. Within AI, machine learning methods for classification and prediction form a major field. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most widely used machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  5. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    Science.gov (United States)

    Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish

    2016-04-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy.
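The linear-nonlinear idea above, and the claim that ERF-proportional stimulation beats equal-amplitude stimulation at fixed power, can be sketched numerically. The ERF weights and nonlinearity parameters below are invented for illustration; in the paper the subspace is estimated from recordings by principal components analysis rather than specified by hand.

```python
import numpy as np

# Hedged linear-nonlinear sketch: project a 4-electrode stimulus onto an
# invented "electrical receptive field", then apply a sigmoid nonlinearity.
erf = np.array([0.7, 0.5, 0.1, 0.0])       # sensitivity to 4 electrodes

def spike_probability(stimulus, gain=4.0, threshold=1.0):
    drive = float(erf @ stimulus)           # linear stage: 1-D projection
    return 1.0 / (1.0 + np.exp(-gain * (drive - threshold)))  # nonlinear stage

# Two stimuli with identical power (unit Euclidean norm):
equal = np.full(4, 0.5)                     # equal amplitude on all electrodes
matched = erf / np.linalg.norm(erf)         # amplitudes proportional to ERF
p_equal = spike_probability(equal)
p_matched = spike_probability(matched)
print(p_matched > p_equal)
```

By Cauchy-Schwarz, the ERF-aligned stimulus maximizes the linear drive at fixed power, which is why the matched pattern is predicted to be more effective, mirroring the model's suggestion in the abstract.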

  6. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    Directory of Open Access Journals (Sweden)

    Matias I Maturana

    2016-04-01

    Full Text Available Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy.

  7. ChIP-seq Accurately Predicts Tissue-Specific Activity of Enhancers

    Energy Technology Data Exchange (ETDEWEB)

    Visel, Axel; Blow, Matthew J.; Li, Zirong; Zhang, Tao; Akiyama, Jennifer A.; Holt, Amy; Plajzer-Frick, Ingrid; Shoukry, Malak; Wright, Crystal; Chen, Feng; Afzal, Veena; Ren, Bing; Rubin, Edward M.; Pennacchio, Len A.

    2009-02-01

    A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover since they are scattered amongst the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here, we performed chromatin immunoprecipitation with the enhancer-associated protein p300, followed by massively-parallel sequencing, to map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain, and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases revealed reproducible enhancer activity in those tissues predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities and suggest that such datasets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale.

  8. Accurate First-Principles Spectra Predictions for Planetological and Astrophysical Applications at Various T-Conditions

    Science.gov (United States)

    Rey, M.; Nikitin, A. V.; Tyuterev, V.

    2014-06-01

    Knowledge of near infrared intensities of rovibrational transitions of polyatomic molecules is essential for the modeling of various planetary atmospheres, brown dwarfs and for other astrophysical applications [1,2,3]. For example, to analyze exoplanets, atmospheric models have been developed, creating a need for accurate spectroscopic data. Consequently, the spectral characterization of such planetary objects relies on having adequate and reliable molecular data in extreme conditions (temperature, optical path length, pressure). On the other hand, in the modeling of astrophysical opacities, millions of lines are generally involved and line-by-line extraction is clearly not feasible in laboratory measurements. This large amount of data can thus be interpreted only with reliable theoretical predictions. There exist essentially two theoretical approaches for the computation and prediction of spectra. The first is based on empirically-fitted effective spectroscopic models. The other computes energies, line positions and intensities from global variational calculations using ab initio surfaces. These do not yet reach spectroscopic accuracy stricto sensu but implicitly account for all intramolecular interactions, including resonance couplings, in a wide spectral range. The final aim of this work is to provide reliable predictions which are quantitatively accurate with respect to the precision of available observations and as complete as possible. All this requires extensive first-principles quantum mechanical calculations based on three necessary ingredients: (i) accurate intramolecular potential energy surface and dipole moment surface components well-defined in a large range of vibrational displacements and (ii) efficient computational methods combined with suitable choices of coordinates to account for molecular symmetry properties and to achieve a good numerical

  9. Accurate and robust genomic prediction of celiac disease using statistical learning.

    Directory of Open Access Journals (Sweden)

    Gad Abraham

    2014-02-01

    Full Text Available Practical application of genomic-based risk stratification to clinical diagnosis is appealing yet performance varies widely depending on the disease and genomic risk score (GRS) method. Celiac disease (CD), a common immune-mediated illness, is strongly genetically determined and requires specific HLA haplotypes. HLA testing can exclude diagnosis but has low specificity, providing little information suitable for clinical risk stratification. Using six European cohorts, we provide a proof-of-concept that statistical learning approaches which simultaneously model all SNPs can generate robust and highly accurate predictive models of CD based on genome-wide SNP profiles. The high predictive capacity replicated both in cross-validation within each cohort (AUC of 0.87-0.89) and in independent replication across cohorts (AUC of 0.86-0.9), despite differences in ethnicity. The models explained 30-35% of disease variance and up to ∼43% of heritability. The GRS's utility was assessed in different clinically relevant settings. Comparable to HLA typing, the GRS can be used to identify individuals without CD with ≥99.6% negative predictive value; however, unlike HLA typing, fine-scale stratification of individuals into categories of higher-risk for CD can identify those that would benefit from more invasive and costly definitive testing. The GRS is flexible and its performance can be adapted to the clinical situation by adjusting the threshold cut-off. Despite explaining a minority of disease heritability, our findings indicate a genomic risk score provides clinically relevant information to improve upon current diagnostic pathways for CD and support further studies evaluating the clinical utility of this approach in CD and other complex diseases.
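A minimal sketch of the idea above (simultaneous modeling of all SNPs plus a threshold cut-off tuned for negative predictive value), using synthetic genotypes and a ridge-penalized logistic model fitted by gradient descent as a simplified stand-in for the sparse penalized methods used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 150                                     # hypothetical cohort / SNP panel
X = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 minor-allele counts
X = (X - X.mean(axis=0)) / X.std(axis=0)            # standardize markers
w_true = np.zeros(p); w_true[:8] = 0.9              # a few synthetic causal SNPs
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ w_true)))).astype(float)

# Fit all SNPs simultaneously: ridge-penalized logistic regression
# by gradient descent.
w = np.zeros(p)
for _ in range(2000):
    prob = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (prob - y) / n + 0.01 * w)

grs = X @ w  # genomic risk score

# A low threshold cut-off trades sensitivity for negative predictive
# value, mirroring the "rule-out" use of the score.
cutoff = np.quantile(grs, 0.2)
npv = float(np.mean(y[grs <= cutoff] == 0))
```

Moving the cutoff up or down adapts the score to the clinical setting, exactly the flexibility the abstract highlights.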

  10. Does the emergency surgery score accurately predict outcomes in emergent laparotomies?

    Science.gov (United States)

    Peponis, Thomas; Bohnen, Jordan D; Sangji, Naveen F; Nandan, Anirudh R; Han, Kelsey; Lee, Jarone; Yeh, D Dante; de Moya, Marc A; Velmahos, George C; Chang, David C; Kaafarani, Haytham M A

    2017-08-01

    The emergency surgery score is a mortality-risk calculator for emergency general operation patients. We sought to examine whether the emergency surgery score predicts 30-day morbidity and mortality in a high-risk group of patients undergoing emergent laparotomy. Using the 2011-2012 American College of Surgeons National Surgical Quality Improvement Program database, we identified all patients who underwent emergent laparotomy using (1) the American College of Surgeons National Surgical Quality Improvement Program definition of "emergent," and (2) all Current Procedural Terminology codes denoting a laparotomy, excluding aortic aneurysm rupture. Multivariable logistic regression analyses were performed to measure the correlation (c-statistic) between the emergency surgery score and (1) 30-day mortality, and (2) 30-day morbidity after emergent laparotomy. As sensitivity analyses, the correlation between the emergency surgery score and 30-day mortality was also evaluated in prespecified subgroups based on Current Procedural Terminology codes. A total of 26,410 emergent laparotomy patients were included. Thirty-day mortality and morbidity were 10.2% and 43.8%, respectively. The emergency surgery score correlated well with mortality (c-statistic = 0.84); scores of 1, 11, and 22 correlated with mortalities of 0.4%, 39%, and 100%, respectively. Similarly, the emergency surgery score correlated well with morbidity (c-statistic = 0.74); scores of 0, 7, and 11 correlated with complication rates of 13%, 58%, and 79%, respectively. The morbidity rates plateaued for scores higher than 11. Sensitivity analyses demonstrated that the emergency surgery score effectively predicts mortality in patients undergoing emergent (1) splenic, (2) gastroduodenal, (3) intestinal, (4) hepatobiliary, or (5) incarcerated ventral hernia operation. The emergency surgery score accurately predicts outcomes in all types of emergent laparotomy patients and may prove valuable as a bedside decision
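The c-statistic reported above is simply the ROC AUC: the probability that a randomly chosen death carries a higher risk score than a randomly chosen survivor (ties counting one half). A minimal rank-based computation with made-up scores and outcomes, not the study's data:

```python
import numpy as np

def c_statistic(scores, outcomes):
    """Rank-based AUC: P(score of a random death > score of a random
    survivor), with ties counted as 1/2."""
    scores = np.asarray(scores, float)
    outcomes = np.asarray(outcomes, int)
    pos = scores[outcomes == 1]
    neg = scores[outcomes == 0]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

# Hypothetical emergency-surgery scores and 30-day mortality flags.
scores = [1, 2, 3, 5, 7, 8, 11, 14, 18, 22]
deaths = [0, 0, 0, 0, 1, 0,  1,  1,  1,  1]
auc = c_statistic(scores, deaths)
```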

  11. How accurate is anatomic limb alignment in predicting mechanical limb alignment after total knee arthroplasty?

    Science.gov (United States)

    Lee, Seung Ah; Choi, Sang-Hee; Chang, Moon Jong

    2015-10-27

    Anatomic limb alignment often differs from mechanical limb alignment after total knee arthroplasty (TKA). We sought to assess the accuracy, specificity, and sensitivity for each of three commonly used ranges for anatomic limb alignment (3-9°, 5-10° and 2-10°) in predicting an acceptable range (neutral ± 3°) for mechanical limb alignment after TKA. We also assessed whether the accuracy of anatomic limb alignment was affected by anatomic variation. This retrospective study included 314 primary TKAs. The alignment of the limb was measured with both anatomic and mechanical methods of measurement. We also measured anatomic variation, including the femoral bowing angle, tibial bowing angle, and neck-shaft angle of the femur. All angles were measured on the same full-length standing anteroposterior radiographs. The accuracy, specificity, and sensitivity for each range of anatomic limb alignment were calculated and compared using mechanical limb alignment as the reference standard. The associations between the accuracy of anatomic limb alignment and anatomic variation were also determined. The range of 2-10° for anatomic limb alignment showed the highest accuracy, but it was only 73% (3-9°, 65%; 5-10°, 67%). The specificity of the 2-10° range was 81%, which was higher than that of the other ranges (3-9°, 69%; 5-10°, 67%). However, the sensitivity of the 2-10° range to predict varus malalignment was only 16% (3-9°, 35%; 5-10°, 68%). In addition, the sensitivity of the 2-10° range to predict valgus malalignment was only 43% (3-9°, 71%; 5-10°, 43%). The accuracy of anatomic limb alignment was lower for knees with greater femoral (odds ratio = 1.2) and tibial (odds ratio = 1.2) bowing. Anatomic limb alignment did not accurately predict mechanical limb alignment after TKA, and its accuracy was affected by anatomic variation. Thus, alignment after TKA should be assessed by measuring mechanical alignment rather than anatomic alignment.

  12. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction.

    Science.gov (United States)

    Bandeira E Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose

    2017-06-07

    Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models were fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear kernel Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK, and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. Copyright © 2017 Bandeira e Sousa et al.
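The two kernels compared above can be written down directly. In this sketch the marker matrix is synthetic, and the bandwidth h and median-distance normalization q are common illustrative choices rather than the study's exact parametrization:

```python
import numpy as np

rng = np.random.default_rng(2)
n_lines, n_markers = 100, 500
X = rng.normal(size=(n_lines, n_markers))   # stand-in centered marker matrix

# Linear (GBLUP) kernel: GB = XX' / p.
GB = X @ X.T / n_markers

# Gaussian kernel: GK[i, j] = exp(-d_ij^2 / (h * q)), where d_ij is the
# Euclidean distance between marker profiles, q a normalizing constant
# (the median squared distance here) and h a bandwidth parameter.
sq = np.sum(X**2, axis=1)
d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)
q = np.median(d2[np.triu_indices(n_lines, k=1)])
h = 1.0
GK = np.exp(-d2 / (h * q))
```

Either kernel can then be plugged into the same mixed-model machinery; the nonlinear GK simply replaces the genomic relationship matrix GB.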

  14. Accurate, predictable, repeatable micro-assembly technology for polymer, microfluidic modules.

    Science.gov (United States)

    Lee, Tae Yoon; Han, Kyudong; Barrett, Dwhyte O; Park, Sunggook; Soper, Steven A; Murphy, Michael C

    2018-01-01

    A method for the design, construction, and assembly of modular, polymer-based, microfluidic devices using simple micro-assembly technology was demonstrated to build an integrated fluidic system consisting of vertically stacked modules for carrying out multi-step molecular assays. As an example of the utility of the modular system, point mutation detection using the ligase detection reaction (LDR) following amplification by the polymerase chain reaction (PCR) was carried out. Fluid interconnects and standoffs ensured that temperatures in the vertically stacked reactors were within ± 0.2 °C at the center of the temperature zones and ± 1.1 °C overall. The vertical spacing between modules was confirmed using finite element models (ANSYS, Inc., Canonsburg, PA) to simulate the steady-state temperature distribution for the assembly. Passive alignment structures, including a hemispherical pin-in-hole, a hemispherical pin-in-slot, and a plate-plate lap joint, were developed using screw theory to enable accurate exactly constrained assembly of the microfluidic reactors, cover sheets, and fluid interconnects to facilitate the modular approach. The mean mismatch between the centers of adjacent through holes was 64 ± 7.7 μm, significantly reducing the dead volume necessary to accommodate manufacturing variation. The microfluidic components were easily assembled by hand and the assembly of several different configurations of microfluidic modules for executing the assay was evaluated. Temperatures were measured in the desired range in each reactor. The biochemical performance was comparable to that obtained with benchtop instruments, but took less than 45 min to execute, half the time.

  15. A Novel Calculator for Esophageal Adenocarcinoma Accurately Predicts Overall Survival Benefit from Neoadjuvant Chemoradiation

    Science.gov (United States)

    Gabriel, Emmanuel; Attwood, Kristopher; Shah, Rupen; Hochwald, Steven; Kukar, Moshim; Nurkin, Steven

    2018-01-01

    Introduction: Our group recently published that patients with clinically node negative (cN−) esophageal adenocarcinoma do not derive an overall survival (OS) benefit from neoadjuvant chemoradiation (nCRT) compared to clinically node positive (cN+) patients. The aim of this study was to develop a calculator which could more precisely identify which patients derive OS benefit from nCRT. Methods: Using the National Cancer Data Base (NCDB) from 2006–2012, patients with clinical stage T1b, N1–N3 or T2–T4a, N−/+, M0 adenocarcinoma of the middle or lower esophagus who underwent surgical resection were selected. The primary endpoint was overall survival (OS). Of this cohort, 80% was used to create the prediction model using Cox regression. The remaining 20% of the cohort was used to internally validate the model, the performance of which was measured using the receiver operating characteristic (ROC) curve and the associated area under the curve (AUC). Results: A total of 8,974 patients were used to generate the model. It incorporated patient age, Charlson-Deyo comorbidity score, tumor grade, clinical T and N stage, and nCRT prior to surgery. Each of these variables was independently associated with OS on multivariable analysis. Factors associated with an increased risk of death included advanced age, higher grade, and higher T or N stage. Receipt of nCRT was associated with improved survival. Model performance showed an AUC of 0.644 and 0.689 for 1-year and 3-year OS, respectively. Conclusions: A novel OS calculator incorporating patient and treatment factors was developed for esophageal adenocarcinoma, which accurately predicts which patient subsets are expected to derive OS benefit from nCRT. This tool can be very helpful in generating real-time estimates of OS benefit from nCRT in an individualized manner in order to assist with patient care decision making. PMID:28147252

  16. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    Science.gov (United States)

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
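In the same spirit, a toy reduced-order surrogate can be sketched in a few lines: build a reduced basis from "expensive" training waveforms via an SVD, then fit the projection coefficients as smooth functions of the parameter. The damped-oscillation family, sizes, and polynomial fits below are purely illustrative stand-ins for the NR waveforms and the paper's fitting machinery.

```python
import numpy as np

# Toy stand-in for expensive simulations: damped oscillations whose
# decay and phase depend on one parameter q (the mass ratio in the
# paper; this functional form is purely illustrative).
def expensive_waveform(q, t):
    return np.exp(-t / (2.0 + q)) * np.sin((1.0 + 0.05 * q) * t)

t = np.linspace(0.0, 10.0, 400)
q_train = np.linspace(1.0, 10.0, 25)
training = np.array([expensive_waveform(q, t) for q in q_train])

# Reduced basis from an SVD of the training set; keep dominant modes.
_, s, Vt = np.linalg.svd(training, full_matrices=False)
k = 8
basis = Vt[:k]
coeffs = training @ basis.T

# Fit each projection coefficient as a smooth function of q; evaluating
# the surrogate is then a few polynomial evaluations plus a matmul,
# orders of magnitude cheaper than the "simulation" itself.
fits = [np.polyfit(q_train, coeffs[:, j], 7) for j in range(k)]

def surrogate(q):
    c = np.array([np.polyval(f, q) for f in fits])
    return c @ basis
```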

  17. Accurate prediction of band gaps and optical properties of HfO2

    International Nuclear Information System (INIS)

    Ondračka, Pavel; Zajíčková, Lenka; Holec, David; Nečas, David

    2016-01-01

    We report on optical properties of various polymorphs of hafnia predicted within the framework of density functional theory. The full potential linearised augmented plane wave method was employed together with the Tran–Blaha modified Becke–Johnson potential (TB-mBJ) for exchange and local density approximation for correlation. Unit cells of monoclinic, cubic and tetragonal crystalline, and a simulated annealing-based model of amorphous hafnia were fully relaxed with respect to internal positions and lattice parameters. Electronic structures and band gaps for monoclinic, cubic, tetragonal and amorphous hafnia were calculated using three different TB-mBJ parametrisations and the results were critically compared with the available experimental and theoretical reports. Conceptual differences between a straightforward comparison of experimental measurements to a calculated band gap on the one hand and to a whole electronic structure (density of electronic states) on the other hand, were pointed out, suggesting the latter should be used whenever possible. Finally, dielectric functions were calculated at two levels, using the random phase approximation without local field effects and with a more accurate Bethe–Salpeter equation (BSE) to account for excitonic effects. We conclude that a satisfactory agreement with experimental data for HfO2 was obtained only in the latter case. (paper)

  18. ROCK I Has More Accurate Prognostic Value than MET in Predicting Patient Survival in Colorectal Cancer.

    Science.gov (United States)

    Li, Jian; Bharadwaj, Shruthi S; Guzman, Grace; Vishnubhotla, Ramana; Glover, Sarah C

    2015-06-01

    Colorectal cancer remains the second leading cause of death in the United States despite improvements in incidence rates and advancements in screening. The present study evaluated the prognostic value of two tumor markers, MET and ROCK I, which have been noted in other cancers to provide more accurate prognoses of patient outcomes than tumor staging alone. We constructed a tissue microarray from surgical specimens of adenocarcinomas from 108 colorectal cancer patients. Using immunohistochemistry, we examined the expression levels of tumor markers MET and ROCK I, with a pathologist blinded to patient identities and clinical outcomes providing the scoring of MET and ROCK I expression. We then used retrospective analysis of patients' survival data to provide correlations with expression levels of MET and ROCK I. Both MET and ROCK I were significantly over-expressed in colorectal cancer tissues, relative to the unaffected adjacent mucosa. Kaplan-Meier survival analysis revealed that patients' 5-year survival was inversely correlated with levels of expression of ROCK I. In contrast, MET was less strongly correlated with five-year survival. ROCK I provides better efficacy in predicting patient outcomes, compared to either tumor staging or MET expression. As a result, ROCK I may provide a less invasive method of assessing patient prognoses and directing therapeutic interventions. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
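The survival analysis above rests on the Kaplan–Meier product-limit estimator, which is simple to compute directly: at each observed death time, multiply the running survival estimate by (1 − deaths/at-risk). The follow-up times below are invented for illustration, not the study's patient data.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up (e.g. months); events: 1 = death observed, 0 = censored.
    Returns a list of (time, survival) points at each death time."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    surv, curve = 1.0, []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)
        deaths = np.sum((times == t) & (events == 1))
        surv *= 1.0 - deaths / at_risk
        curve.append((float(t), surv))
    return curve

# Hypothetical follow-up for a high-expression group (months, event flags).
curve = kaplan_meier([6, 13, 21, 30, 31, 37, 38, 47, 52, 60],
                     [1, 1, 0, 1, 1, 0, 1, 1, 0, 0])
```

Comparing such curves between high- and low-expression groups (e.g. with a log-rank test) is what underlies the reported inverse correlation between ROCK I expression and 5-year survival.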

  19. Identification of core promoter modules in Drosophila and their application in accurate transcription start site prediction.

    Science.gov (United States)

    Ohler, Uwe

    2006-01-01

    The reliable recognition of eukaryotic RNA polymerase II core promoters, and the associated transcription start sites (TSSs) of genes, has been an ongoing challenge for computational biology. High throughput experimental methods such as tiling arrays or 5' SAGE/EST sequencing have recently led to much larger datasets of core promoters, and to the assessment that the well-known core promoter sequence elements such as the TATA box appear to be much less frequent than thought. Here, we address the co-occurrence of several previously identified core promoter sequence motifs in Drosophila melanogaster to determine frequently occurring core promoter modules. We then use this in a new strategy to model core promoters as a set of alternative submodels for different core promoter architectures reflecting these different motif modules. We show that this system improves greatly on computational promoter recognition and leads to highly accurate in silico TSS prediction. Our results indicate that at least for the case of the fruit fly, we are getting closer to an understanding of how the beginning of a gene is defined in a eukaryotic genome.
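To make the "alternative submodels" idea concrete, here is a heavily simplified sketch: each submodel is a set of motifs with preferred offsets from the TSS, and a candidate position is scored under its best-matching module. The consensus strings, offsets, and match scoring are illustrative placeholders, not the trained Drosophila models.

```python
import random

# Two hypothetical core-promoter submodels, each a set of motifs with
# preferred offsets relative to the TSS (purely illustrative).
SUBMODELS = {
    "TATA+Inr": [("TATAAA", -30), ("TCAGTT", 0)],
    "Inr+DPE":  [("TCAGTT", 0), ("AGACGT", 28)],
}

def match_score(seq, motif, pos):
    # Count matching bases of a motif placed at pos (0 if out of range).
    if pos < 0 or pos + len(motif) > len(seq):
        return 0
    return sum(a == b for a, b in zip(seq[pos:pos + len(motif)], motif))

def score_tss(seq, tss):
    # A candidate TSS is scored under each alternative module; the best
    # submodel wins, mirroring the set-of-submodels strategy.
    return max(
        sum(match_score(seq, m, tss + off) for m, off in motifs)
        for motifs in SUBMODELS.values()
    )

random.seed(3)
bg = "".join(random.choice("ACGT") for _ in range(200))
# Plant a TATA box at -30 and an initiator at position 100 (the "TSS").
seq = bg[:70] + "TATAAA" + bg[76:100] + "TCAGTT" + bg[106:]
best = max(range(len(seq)), key=lambda p: score_tss(seq, p))
```

Scanning `score_tss` along a sequence and taking the maximum recovers the planted TSS; a trained version would use position weight matrices and calibrated thresholds instead of consensus matching.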

  20. Cluster abundance in chameleon f(R) gravity I: toward an accurate halo mass function prediction

    Energy Technology Data Exchange (ETDEWEB)

    Cataneo, Matteo; Rapetti, David [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, 2100 Copenhagen (Denmark); Lombriser, Lucas [Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh, EH9 3HJ (United Kingdom); Li, Baojiu, E-mail: matteoc@dark-cosmology.dk, E-mail: drapetti@dark-cosmology.dk, E-mail: llo@roe.ac.uk, E-mail: baojiu.li@durham.ac.uk [Institute for Computational Cosmology, Department of Physics, Durham University, South Road, Durham DH1 3LE (United Kingdom)

    2016-12-01

    We refine the mass and environment dependent spherical collapse model of chameleon f(R) gravity by calibrating a phenomenological correction inspired by the parameterized post-Friedmann framework against high-resolution N-body simulations. We employ our method to predict the corresponding modified halo mass function, and provide fitting formulas to calculate the enhancement of the f(R) halo abundance with respect to that of General Relativity (GR) within a precision of ≲5% from the results obtained in the simulations. Similar accuracy can be achieved for the full f(R) mass function on the condition that the modeling of the reference GR abundance of halos is accurate at the percent level. We use our fits to forecast constraints on the additional scalar degree of freedom of the theory, finding that upper bounds competitive with current Solar System tests are within reach of cluster number count analyses from ongoing and upcoming surveys at much larger scales. Importantly, the flexibility of our method allows also for this to be applied to other scalar-tensor theories characterized by a mass and environment dependent spherical collapse.

  1. Unilateral Prostate Cancer Cannot be Accurately Predicted in Low-Risk Patients

    International Nuclear Information System (INIS)

    Isbarn, Hendrik; Karakiewicz, Pierre I.; Vogel, Susanne

    2010-01-01

    Purpose: Hemiablative therapy (HAT) is increasing in popularity for treatment of patients with low-risk prostate cancer (PCa). The validity of this therapeutic modality, which exclusively treats PCa within a single prostate lobe, rests on accurate staging. We tested the accuracy of unilaterally unremarkable biopsy findings in cases of low-risk PCa patients who are potential candidates for HAT. Methods and Materials: The study population consisted of 243 men with clinical stage ≤T2a, a prostate-specific antigen (PSA) concentration of <10 ng/ml, a biopsy-proven Gleason sum of ≤6, and a maximum of 2 ipsilateral positive biopsy results out of 10 or more cores. All men underwent a radical prostatectomy, and pathology stage was used as the gold standard. Univariable and multivariable logistic regression models were tested for significant predictors of unilateral, organ-confined PCa. These predictors consisted of PSA, %fPSA (defined as the quotient of free [uncomplexed] PSA divided by the total PSA), clinical stage (T2a vs. T1c), gland volume, and number of positive biopsy cores (2 vs. 1). Results: Despite unilateral stage at biopsy, bilateral or even non-organ-confined PCa was reported in 64% of all patients. In multivariable analyses, no variable could clearly and independently predict the presence of unilateral PCa. This was reflected in an overall accuracy of 58% (95% confidence interval, 50.6-65.8%). Conclusions: Two-thirds of patients with unilateral low-risk PCa, confirmed by clinical stage and biopsy findings, have bilateral or non-organ-confined PCa at radical prostatectomy. This alarming finding questions the safety and validity of HAT.
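For reference, the reported overall accuracy with its confidence interval can be sanity-checked with a standard normal-approximation (Wald) interval for a proportion; the paper's exact interval method is not stated, so this is only an approximation.

```python
import math

def accuracy_ci(correct, total, z=1.96):
    """Wald (normal-approximation) 95% CI for a proportion."""
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)
    return p, p - z * se, p + z * se

# Applied to the reported 58% overall accuracy among 243 patients.
p, lo, hi = accuracy_ci(round(0.58 * 243), 243)
```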

  2. Early serum creatinine accurately predicts acute kidney injury post cardiac surgery.

    Science.gov (United States)

    Grynberg, Keren; Polkinghorne, Kevan R; Ford, Sharon; Stenning, Fiona; Lew, Thomas E; Barrett, Jonathan A; Summers, Shaun A

    2017-03-16

    Acute kidney injury (AKI) is a well-recognized complication of cardiac surgery. It is associated with significant morbidity and mortality. The aims of our study are twofold: (1) to define the incidence of AKI post cardiac surgery; and (2) to identify pre-morbid and operative risk factors for developing AKI and to determine if immediate post operative serum creatinine (IPOsCr) accurately predicts the development of AKI. We prospectively studied 196 consecutive patients undergoing elective (on-pump) cardiac surgery. Baseline patient characteristics, including medical co-morbidities, proteinuria, procedural data and kidney function (serum creatinine, sCr), were collected. Internationally standardised criteria for AKI were used (sCr >1.5 times baseline, or elevation in sCr >26.4 μmol/L (0.3 mg/dl)). Measurements were collected pre-operatively, within 2 h of surgical completion (IPOsCr) and daily for two days. Logistic regression was used to assess predictive factors for AKI including IPOsCr. Model discrimination was assessed using ROC AUC curves. Forty (20.4%) patients developed AKI postoperatively. Hypertension (OR 2.64, p = 0.02), diabetes (OR 2.25, p = 0.04), proteinuria (OR 2.48, p = 0.02) and a lower baseline eGFR (OR 0.74, p = 0.002) were associated with AKI in univariate analysis. A multivariate logistic model with preoperative and surgical factors (age, gender, eGFR, proteinuria, hypertension, diabetes and type of cardiac surgery) demonstrated moderate discrimination for AKI (ROC AUC 0.76). The addition of IPOsCr improved model discrimination for AKI (AUC 0.82, p = 0.07 versus baseline AUC) and was independently associated with AKI (OR 7.17; 95% CI 1.27-40.32; p = 0.025). One in 5 patients developed AKI post cardiac surgery. These patients have significantly increased morbidity and mortality. IPOsCr is significantly associated with the development of AKI, providing a cheap, readily available prognostic marker.
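The creatinine-based AKI definition quoted in this record is easy to encode directly; a minimal helper (inputs in μmol/L), assuming the two criteria combine with a logical OR as stated:

```python
def is_aki(baseline_scr, postop_scr):
    """AKI per the criteria in the abstract: sCr rising to more than
    1.5x baseline, or an absolute rise of more than 26.4 umol/L
    (0.3 mg/dL). Inputs in umol/L."""
    return (postop_scr > 1.5 * baseline_scr
            or postop_scr - baseline_scr > 26.4)

# e.g. a rise from 80 to 125 umol/L meets both criteria.
flagged = is_aki(80.0, 125.0)
```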

  3. Enabling Persistent Autonomy for Underwater Gliders with Ocean Model Predictions and Terrain Based Navigation

    Directory of Open Access Journals (Sweden)

    Andrew Stuntz

    2016-04-01

    Full Text Available Effective study of ocean processes requires sampling over the duration of long (weeks to months) oscillation patterns. Such sampling requires persistent, autonomous underwater vehicles that have a similarly long deployment duration. The spatiotemporal dynamics of the ocean environment, coupled with limited communication capabilities, make navigation and localization difficult, especially in coastal regions where the majority of interesting phenomena occur. In this paper, we consider the combination of two methods for reducing navigation and localization error: a predictive approach based on ocean model predictions and a prior information approach derived from terrain-based navigation. The motivation for this work is not only real-time state estimation, but also accurately reconstructing the actual path that the vehicle traversed to contextualize the gathered data with respect to the science question at hand. We present an application for the practical use of priors and predictions for large-scale ocean sampling. This combined approach builds upon previous works by the authors, and accurately localizes the traversed path of an underwater glider over long-duration ocean deployments. The proposed method takes advantage of the reliable, short-term predictions of an ocean model, and the utility of priors used in terrain-based navigation over areas of significant bathymetric relief, to bound uncertainty error in dead-reckoning navigation. This method improves upon our previously published works by (1) demonstrating the utility of our terrain-based navigation method with multiple field trials, and (2) presenting a hybrid algorithm that combines both approaches to bound navigational error and uncertainty for long-term deployments of underwater vehicles. We demonstrate the approach by examining data from actual field trials with autonomous underwater gliders, and demonstrate an ability to estimate geographical location of an underwater glider to 2

  4. Towards Accurate Prediction of Unbalance Response, Oil Whirl and Oil Whip of Flexible Rotors Supported by Hydrodynamic Bearings

    Directory of Open Access Journals (Sweden)

    Rob Eling

    2016-09-01

    Full Text Available Journal bearings are used to support rotors in a wide range of applications. In order to ensure reliable operation, accurate analyses of these rotor-bearing systems are crucial. Coupled analysis of the rotor and the journal bearing is essential in the case that the rotor is flexible. The accuracy of prediction of the model at hand depends on its comprehensiveness. In this study, we construct three bearing models of increasing modeling comprehensiveness and use these to predict the response of two different rotor-bearing systems. The main goal is to evaluate the correlation with measurement data as a function of modeling comprehensiveness: 1D versus 2D pressure prediction, distributed versus lumped thermal model, Newtonian versus non-Newtonian fluid description and non-mass-conservative versus mass-conservative cavitation description. We conclude that all three models predict the existence of critical speeds and whirl for both rotor-bearing systems. However, the two more comprehensive models in general show better correlation with measurement data in terms of frequency and amplitude. Furthermore, we conclude that a thermal network model comprising temperature predictions of the bearing surroundings is essential to obtain accurate predictions. The results of this study aid in developing accurate and computationally-efficient models of flexible rotors supported by plain journal bearings.

  5. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    Science.gov (United States)

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models MDs and MDe, with the random intercepts of the lines and the GK method, were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe), including the random intercepts of the lines with the GK method, offered important savings in computing time compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. PMID:29476023
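The Gaussian kernel (GK) underlying these comparisons can be illustrated in a few lines. This is a minimal sketch, not the authors' implementation: the bandwidth h, the mean-distance scaling q, and the toy 0/1/2 marker matrix are all assumptions made here for illustration.

```python
import math

def gaussian_kernel(markers, h=1.0):
    """Build a Gaussian kernel matrix K[i][j] = exp(-h * d2_ij / q), where
    d2_ij is the squared Euclidean distance between the marker vectors of
    lines i and j, and q rescales distances (here: their mean off-diagonal)."""
    n = len(markers)
    d2 = [[sum((a - b) ** 2 for a, b in zip(markers[i], markers[j]))
           for j in range(n)] for i in range(n)]
    # Scale by the mean off-diagonal squared distance so h ~ 1 is a sensible default.
    off = [d2[i][j] for i in range(n) for j in range(n) if i != j]
    q = sum(off) / len(off)
    return [[math.exp(-h * d2[i][j] / q) for j in range(n)] for i in range(n)]

# Toy marker matrix (lines x SNPs, genotypes coded 0/1/2)
X = [[0, 1, 2], [0, 1, 1], [2, 2, 0]]
K = gaussian_kernel(X)
```

The resulting K plays the role of a genomic relationship matrix in the kernel models: similar lines get entries near 1, dissimilar lines decay toward 0.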

  6. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding.

    Science.gov (United States)

    Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill

    2017-01-01

    Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed theoretical reasoning on the effectiveness of these markers in a trait-specific prediction, and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding.

  7. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding

    Directory of Open Access Journals (Sweden)

    Yong-Bi Fu

    2017-07-01

    Full Text Available Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed theoretical reasoning on the effectiveness of these markers in a trait-specific prediction, and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding.

  8. Enabling Technology for Monitoring & Predicting Gas Turbine Health & Performance in COAL IGCC Powerplants

    Energy Technology Data Exchange (ETDEWEB)

    Kenneth A. Yackly

    2004-09-30

    The "Enabling & Information Technology To Increase RAM for Advanced Powerplants" program, by DOE request, has been re-directed, de-scoped to two tasks, shortened to a 2-year period of performance, and refocused to develop, validate and accelerate the commercial use of enabling materials technologies and sensors for Coal IGCC powerplants. The new program has been re-titled "Enabling Technology for Monitoring & Predicting Gas Turbine Health & Performance in IGCC Powerplants" to better match the new scope. This technical progress report summarizes the work accomplished in the reporting period April 1, 2004 to August 31, 2004 on the revised Re-Directed and De-Scoped program activity. The program Tasks are: Task 1--IGCC Environmental Impact on High Temperature Materials: This first materials task has been refocused to address Coal IGCC environmental impacts on high temperature materials used in gas turbines and remains in the program. This task will screen material performance and quantify the effects of high temperature erosion and corrosion of hot gas path materials in Coal IGCC applications. The materials of interest will include those in current service as well as advanced, high-performance alloys and coatings. Task 2--Material In-Service Health Monitoring: This second task develops and demonstrates new sensor technologies to determine the in-service health of advanced technology Coal IGCC powerplants, and remains in the program with a reduced scope. Its focus is now on only two critical sensor need areas for advanced Coal IGCC gas turbines: (1) a Fuel Quality Sensor for detection of fuel impurities that could lead to rapid component degradation, and a Fuel Heating Value Sensor to rapidly determine the fuel heating value for more precise control of the gas turbine, and (2) an Infra-Red Pyrometer to continuously measure the temperature of gas turbine buckets, nozzles, and combustor hardware.

  9. Towards more accurate wind and solar power prediction by improving NWP model physics

    Science.gov (United States)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two projects concerning research in the field of renewable energy, namely ORKA and EWeLiNE. Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post-processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m height above ground are used for the estimation of the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  10. Combining Evolutionary Information and an Iterative Sampling Strategy for Accurate Protein Structure Prediction.

    Directory of Open Access Journals (Sweden)

    Tatjana Braun

    2015-12-01

    Full Text Available Recent work has shown that the accuracy of ab initio structure prediction can be significantly improved by integrating evolutionary information in form of intra-protein residue-residue contacts. Following this seminal result, much effort is put into the improvement of contact predictions. However, there is also a substantial need to develop structure prediction protocols tailored to the type of restraints gained by contact predictions. Here, we present a structure prediction protocol that combines evolutionary information with the resolution-adapted structural recombination approach of Rosetta, called RASREC. Compared to the classic Rosetta ab initio protocol, RASREC achieves improved sampling, better convergence and higher robustness against incorrect distance restraints, making it the ideal sampling strategy for the stated problem. To demonstrate the accuracy of our protocol, we tested the approach on a diverse set of 28 globular proteins. Our method is able to converge for 26 out of the 28 targets and improves the average TM-score of the entire benchmark set from 0.55 to 0.72 when compared to the top ranked models obtained by the EVFold web server using identical contact predictions. Using a smaller benchmark, we furthermore show that the prediction accuracy of our method is only slightly reduced when the contact prediction accuracy is comparatively low. This observation is of special interest for protein sequences that only have a limited number of homologs.

  11. IPknot: fast and accurate prediction of RNA secondary structures with pseudoknots using integer programming.

    Science.gov (United States)

    Sato, Kengo; Kato, Yuki; Hamada, Michiaki; Akutsu, Tatsuya; Asai, Kiyoshi

    2011-07-01

    Pseudoknots found in secondary structures of a number of functional RNAs play various roles in biological processes. Recent methods for predicting RNA secondary structures cover certain classes of pseudoknotted structures, but only a few of them achieve satisfying predictions in terms of both speed and accuracy. We propose IPknot, a novel computational method for predicting RNA secondary structures with pseudoknots based on maximizing expected accuracy of a predicted structure. IPknot decomposes a pseudoknotted structure into a set of pseudoknot-free substructures and approximates a base-pairing probability distribution that considers pseudoknots, leading to the capability of modeling a wide class of pseudoknots and running quite fast. In addition, we propose a heuristic algorithm for refining base-pairing probabilities to improve the prediction accuracy of IPknot. The problem of maximizing expected accuracy is solved by using integer programming with threshold cut. We also extend IPknot so that it can predict the consensus secondary structure with pseudoknots when a multiple sequence alignment is given. IPknot is validated through extensive experiments on various datasets, showing that IPknot achieves better prediction accuracy and faster running time as compared with several competitive prediction methods. The program of IPknot is available at http://www.ncrna.org/software/ipknot/. IPknot is also available as a web server at http://rna.naist.jp/ipknot/. Contact: satoken@k.u-tokyo.ac.jp; ykato@is.naist.jp. Supplementary data are available at Bioinformatics online.

  12. The Cancer Cell Line Encyclopedia enables predictive modeling of anticancer drug sensitivity

    Science.gov (United States)

    Barretina, Jordi; Caponigro, Giordano; Stransky, Nicolas; Venkatesan, Kavitha; Margolin, Adam A.; Kim, Sungjoon; Wilson, Christopher J.; Lehár, Joseph; Kryukov, Gregory V.; Sonkin, Dmitriy; Reddy, Anupama; Liu, Manway; Murray, Lauren; Berger, Michael F.; Monahan, John E.; Morais, Paula; Meltzer, Jodi; Korejwa, Adam; Jané-Valbuena, Judit; Mapa, Felipa A.; Thibault, Joseph; Bric-Furlong, Eva; Raman, Pichai; Shipway, Aaron; Engels, Ingo H.; Cheng, Jill; Yu, Guoying K.; Yu, Jianjun; Aspesi, Peter; de Silva, Melanie; Jagtap, Kalpana; Jones, Michael D.; Wang, Li; Hatton, Charles; Palescandolo, Emanuele; Gupta, Supriya; Mahan, Scott; Sougnez, Carrie; Onofrio, Robert C.; Liefeld, Ted; MacConaill, Laura; Winckler, Wendy; Reich, Michael; Li, Nanxin; Mesirov, Jill P.; Gabriel, Stacey B.; Getz, Gad; Ardlie, Kristin; Chan, Vivien; Myer, Vic E.; Weber, Barbara L.; Porter, Jeff; Warmuth, Markus; Finan, Peter; Harris, Jennifer L.; Meyerson, Matthew; Golub, Todd R.; Morrissey, Michael P.; Sellers, William R.; Schlegel, Robert; Garraway, Levi A.

    2012-01-01

    The systematic translation of cancer genomic data into knowledge of tumor biology and therapeutic avenues remains challenging. Such efforts should be greatly aided by robust preclinical model systems that reflect the genomic diversity of human cancers and for which detailed genetic and pharmacologic annotation is available. Here we describe the Cancer Cell Line Encyclopedia (CCLE): a compilation of gene expression, chromosomal copy number, and massively parallel sequencing data from 947 human cancer cell lines. When coupled with pharmacologic profiles for 24 anticancer drugs across 479 of the lines, this collection allowed identification of genetic, lineage, and gene expression-based predictors of drug sensitivity. In addition to known predictors, we found that plasma cell lineage correlated with sensitivity to IGF1 receptor inhibitors; AHR expression was associated with MEK inhibitor efficacy in NRAS-mutant lines; and SLFN11 expression predicted sensitivity to topoisomerase inhibitors. Altogether, our results suggest that large, annotated cell line collections may help to enable preclinical stratification schemata for anticancer agents. The generation of genetic predictions of drug response in the preclinical setting and their incorporation into cancer clinical trial design could speed the emergence of “personalized” therapeutic regimens. PMID:22460905

  13. The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity.

    Science.gov (United States)

    Barretina, Jordi; Caponigro, Giordano; Stransky, Nicolas; Venkatesan, Kavitha; Margolin, Adam A; Kim, Sungjoon; Wilson, Christopher J; Lehár, Joseph; Kryukov, Gregory V; Sonkin, Dmitriy; Reddy, Anupama; Liu, Manway; Murray, Lauren; Berger, Michael F; Monahan, John E; Morais, Paula; Meltzer, Jodi; Korejwa, Adam; Jané-Valbuena, Judit; Mapa, Felipa A; Thibault, Joseph; Bric-Furlong, Eva; Raman, Pichai; Shipway, Aaron; Engels, Ingo H; Cheng, Jill; Yu, Guoying K; Yu, Jianjun; Aspesi, Peter; de Silva, Melanie; Jagtap, Kalpana; Jones, Michael D; Wang, Li; Hatton, Charles; Palescandolo, Emanuele; Gupta, Supriya; Mahan, Scott; Sougnez, Carrie; Onofrio, Robert C; Liefeld, Ted; MacConaill, Laura; Winckler, Wendy; Reich, Michael; Li, Nanxin; Mesirov, Jill P; Gabriel, Stacey B; Getz, Gad; Ardlie, Kristin; Chan, Vivien; Myer, Vic E; Weber, Barbara L; Porter, Jeff; Warmuth, Markus; Finan, Peter; Harris, Jennifer L; Meyerson, Matthew; Golub, Todd R; Morrissey, Michael P; Sellers, William R; Schlegel, Robert; Garraway, Levi A

    2012-03-28

    The systematic translation of cancer genomic data into knowledge of tumour biology and therapeutic possibilities remains challenging. Such efforts should be greatly aided by robust preclinical model systems that reflect the genomic diversity of human cancers and for which detailed genetic and pharmacological annotation is available. Here we describe the Cancer Cell Line Encyclopedia (CCLE): a compilation of gene expression, chromosomal copy number and massively parallel sequencing data from 947 human cancer cell lines. When coupled with pharmacological profiles for 24 anticancer drugs across 479 of the cell lines, this collection allowed identification of genetic, lineage, and gene-expression-based predictors of drug sensitivity. In addition to known predictors, we found that plasma cell lineage correlated with sensitivity to IGF1 receptor inhibitors; AHR expression was associated with MEK inhibitor efficacy in NRAS-mutant lines; and SLFN11 expression predicted sensitivity to topoisomerase inhibitors. Together, our results indicate that large, annotated cell-line collections may help to enable preclinical stratification schemata for anticancer agents. The generation of genetic predictions of drug response in the preclinical setting and their incorporation into cancer clinical trial design could speed the emergence of 'personalized' therapeutic regimens.

  14. Nurses and physicians in a medical admission unit can accurately predict mortality of acutely admitted patients

    DEFF Research Database (Denmark)

    Brabrand, Mikkel; Hallas, Jesper; Knudsen, Torben

    2014-01-01

    BACKGROUND: There exist several risk stratification systems for predicting mortality of emergency patients. However, some are complex in clinical use and others have been developed using suboptimal methodology. The objective was to evaluate the capability of the staff at a medical admission unit...... (MAU) to use clinical intuition to predict in-hospital mortality of acutely admitted patients. METHODS: This is an observational prospective cohort study of adult patients (15 years or older) admitted to a MAU at a regional teaching hospital. The nursing staff and physicians predicted in...... admitted. The nursing staff assessed 2,404 admissions and predicted mortality in 1,820 (63.9%). AUROC was 0.823 (95% CI: 0.762-0.884) and calibration poor. Physicians assessed 738 admissions and predicted mortality in 734 (25.8% of all admissions). AUROC was 0.761 (95% CI: 0.657-0.864) and calibration poor...

  15. A machine learning approach to the accurate prediction of multi-leaf collimator positional errors

    Science.gov (United States)

    Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; In Park, Jong; Choi, Yunseok; Ye, Sung-Joon

    2016-03-01

    Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD = 1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose-volume histograms (DVHs) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose-volume parameters were in closer agreement with the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be
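As a toy sketch of the regression idea described above (the actual work uses richer plan-file features and machine-learning models; the leaf speeds and lag values below are invented for illustration), a one-feature least-squares fit can map planned leaf speed to the expected positional lag:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y ≈ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical training data: leaf speed (mm/s) vs. observed positional
# lag (planned minus delivered, mm) from synchronized plan/log files.
speeds = [5, 10, 15, 20, 25]
lags = [0.11, 0.19, 0.32, 0.41, 0.52]
a, b = fit_linear(speeds, lags)

def predict_delivered(planned_mm, speed):
    """Correct a planned leaf position by the lag predicted for this speed."""
    return planned_mm - (a + b * speed)
```

Feeding such corrected positions back into dose calculation is what produces the more realistic gamma passing rates reported in the abstract.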

  16. Time-Accurate Numerical Prediction of Free Flight Aerodynamics of a Finned Projectile

    National Research Council Canada - National Science Library

    Sahu, Jubaraj

    2005-01-01

    This report describes a new multi-disciplinary computational study undertaken to compute the flight trajectories and to simultaneously predict the unsteady free flight aerodynamics of a finned projectile configuration...

  17. Accurate microRNA target prediction correlates with protein repression levels

    Directory of Open Access Journals (Sweden)

    Simossis Victor A

    2009-09-01

    Full Text Available Abstract Background MicroRNAs are small endogenously expressed non-coding RNA molecules that regulate target gene expression through translation repression or messenger RNA degradation. MicroRNA regulation is performed through pairing of the microRNA to sites in the messenger RNA of protein-coding genes. Since experimental identification of miRNA target genes poses difficulties, computational microRNA target prediction is one of the key means of deciphering the role of microRNAs in development and disease. Results DIANA-microT 3.0 is an algorithm for microRNA target prediction which is based on several parameters calculated individually for each microRNA and combines conserved and non-conserved microRNA recognition elements into a final prediction score, which correlates with protein production fold change. Specifically, for each predicted interaction the program reports a signal-to-noise ratio and a precision score which can be used as an indication of the false positive rate of the prediction. Conclusion Recently, several computational target prediction programs were benchmarked based on a set of microRNA target genes identified by the pSILAC method. In this assessment DIANA-microT 3.0 was found to achieve the highest precision among the most widely used microRNA target prediction programs, reaching approximately 66%. The DIANA-microT 3.0 prediction results are available online in a user-friendly web server at http://www.microrna.gr/microT.

  18. Sensor Data Fusion for Accurate Cloud Presence Prediction Using Dempster-Shafer Evidence Theory

    Directory of Open Access Journals (Sweden)

    Jesse S. Jin

    2010-10-01

    Full Text Available Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach in a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. The potential application areas of the algorithm include renewable power in virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data that were recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent.
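Dempster's rule of combination, the core of the evidence-theory approach above, can be sketched for two sensors over a two-hypothesis frame {cloud, clear}; the mass values below are illustrative assumptions, not the paper's data.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for mass functions over a common
    frame of discernment. Focal elements are frozensets; mass assigned to
    empty intersections (conflict) is discarded and the rest renormalized."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    k = 1.0 - conflict  # normalization constant
    return {s: v / k for s, v in combined.items()}

# Hypothetical masses from two radiation sensors; mass on the full
# frame expresses each sensor's ignorance ("unknown").
CLOUD, CLEAR = frozenset({"cloud"}), frozenset({"clear"})
FRAME = CLOUD | CLEAR
m1 = {CLOUD: 0.6, CLEAR: 0.1, FRAME: 0.3}
m2 = {CLOUD: 0.5, CLEAR: 0.2, FRAME: 0.3}
fused = combine(m1, m2)
```

Note how fusion concentrates mass on "cloud" and shrinks the mass left on the full frame, which is exactly the reduction of the "unknown" rate the abstract reports.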

  19. How accurate and statistically robust are catalytic site predictions based on closeness centrality?

    Directory of Open Access Journals (Sweden)

    Livesay Dennis R

    2007-05-01

    Full Text Available Abstract Background We examine the accuracy of enzyme catalytic residue predictions from a network representation of protein structure. In this model, amino acid α-carbons specify vertices within a graph and edges connect vertices that are proximal in structure. Closeness centrality, which has shown promise in previous investigations, is used to identify important positions within the network. Closeness centrality, a global measure of network centrality, is calculated as the reciprocal of the average distance between vertex i and all other vertices. Results We benchmark the approach against 283 structurally unique proteins within the Catalytic Site Atlas. Our results, which are in line with previous investigations of smaller datasets, indicate closeness centrality predictions are statistically significant. However, unlike previous approaches, we specifically focus on residues with the very best scores. Over the top five closeness centrality scores, we observe an average true to false positive rate ratio of 6.8 to 1. As demonstrated previously, adding a solvent accessibility filter significantly improves predictive power; the average ratio is increased to 15.3 to 1. We also demonstrate (for the first time) that filtering the predictions by residue identity improves the results even more than accessibility filtering. Here, we simply eliminate residues with physicochemical properties unlikely to be compatible with catalytic requirements from consideration. Residue identity filtering improves the average true to false positive rate ratio to 26.3 to 1. Combining the two filters together has little effect on the results. Calculated p-values for the three prediction schemes range from 2.7E-9 to less than 8.8E-134. Finally, the sensitivity of the predictions to structure choice and slight perturbations is examined. Conclusion Our results resolutely confirm that closeness centrality is a viable prediction scheme whose predictions are statistically significant.
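The closeness-centrality definition quoted above (reciprocal of the average distance from a vertex to all others) can be computed directly by breadth-first search on an unweighted contact graph. The five-residue path graph below is an invented toy, standing in for a real Cα-contact network.

```python
from collections import deque

def closeness(adj):
    """Closeness centrality: reciprocal of the mean shortest-path distance
    from each vertex to all others, via BFS on an unweighted graph."""
    scores = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        others = [d for node, d in dist.items() if node != src]
        scores[src] = len(others) / sum(others)  # == 1 / mean distance
    return scores

# Toy residue-contact graph: a path A-B-C-D-E. The middle residue C
# scores highest, mimicking a structurally central catalytic site.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
       "D": ["C", "E"], "E": ["D"]}
c = closeness(adj)
```

The accessibility and residue-identity filters described in the abstract would then be applied on top of these raw scores.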

  20. Enabling Technology for Monitoring & Predicting Gas Turbine Health & Performance in IGCC Powerplants

    Energy Technology Data Exchange (ETDEWEB)

    Kenneth A. Yackly

    2005-12-01

    The "Enabling & Information Technology To Increase RAM for Advanced Powerplants" program, by DOE request, was re-directed, de-scoped to two tasks, shortened to a 2-year period of performance, and refocused to develop, validate and accelerate the commercial use of enabling materials technologies and sensors for coal/IGCC powerplants. The new program was re-titled "Enabling Technology for Monitoring & Predicting Gas Turbine Health & Performance in IGCC Powerplants". This final report summarizes the work accomplished from March 1, 2003 to March 31, 2004 on the four original tasks, and the work accomplished from April 1, 2004 to July 30, 2005 on the two re-directed tasks. The program Tasks are summarized below: Task 1--IGCC Environmental Impact on High Temperature Materials: The first task was refocused to address IGCC environmental impacts on high temperature materials used in gas turbines. This task screened material performance and quantified the effects of high temperature erosion and corrosion of hot gas path materials in coal/IGCC applications. The materials of interest included those in current service as well as advanced, high-performance alloys and coatings. Task 2--Material In-Service Health Monitoring: The second task was reduced in scope to demonstrate new technologies to determine the in-service health of advanced technology coal/IGCC powerplants. The task focused on two critical sensing needs for advanced coal/IGCC gas turbines: (1) a Fuel Quality Sensor to rapidly determine the fuel heating value for more precise control of the gas turbine, and detection of fuel impurities that could lead to rapid component degradation; (2) an Infra-Red Pyrometer to continuously measure the temperature of gas turbine buckets, nozzles, and combustor hardware. Task 3--Advanced Methods for Combustion Monitoring and Control: The third task was originally to develop and validate advanced monitoring and control methods for coal/IGCC gas

  1. A random protein-creatinine ratio accurately predicts baseline proteinuria in early pregnancy.

    Science.gov (United States)

    Hirshberg, Adi; Draper, Jennifer; Curley, Cara; Sammel, Mary D; Schwartz, Nadav

    2014-12-01

    Data surrounding the use of a random urine protein:creatinine ratio (PCR) in the diagnosis of preeclampsia are conflicting. We sought to determine whether the PCR in early pregnancy can replace the 24-hour urine collection as the primary screening test in patients at risk for baseline proteinuria. Women requiring a baseline evaluation for proteinuria supplied a urine sample the morning after their 24-hour collection. The PCR was analyzed as a predictor of significant proteinuria (≥150 mg). A regression equation to estimate the 24-hour protein value from the PCR was then developed. Sixty of 135 subjects enrolled completed the study. The median 24-hour urine protein and PCR were 90 mg (IQR: 50-145) and 0.063 (IQR: 0.039-0.083), respectively. Fifteen patients (25%) had significant proteinuria. The PCR was strongly correlated with the 24-hour protein value (r = 0.99). The regression equation [protein = 46.5 + 904.2*PCR] accurately estimates the actual 24-hour protein (95% CI: ±88 mg). A random urine PCR accurately estimates the 24-hour protein excretion in the first half of pregnancy and can be used as the primary screening test for baseline proteinuria in at-risk patients.
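The reported regression line can be applied directly. The sketch below uses the equation from the abstract; the helper names and the 0.12 example PCR are assumptions of this illustration, and each point estimate carries the stated ±88 mg confidence interval.

```python
def estimate_24h_protein(pcr):
    """Estimate 24-hour urine protein (mg) from a random
    protein:creatinine ratio, per the study's fitted equation."""
    return 46.5 + 904.2 * pcr

def significant_proteinuria(pcr, threshold_mg=150):
    """Flag predicted baseline proteinuria at the study's >=150 mg cutoff."""
    return estimate_24h_protein(pcr) >= threshold_mg

# The study's median PCR of 0.063 maps to roughly 103 mg/24 h,
# comfortably below the 150 mg threshold.
estimate = estimate_24h_protein(0.063)
```

A hypothetical PCR of 0.12 would map to about 155 mg/24 h and be flagged, while the ±88 mg interval is a reminder that values near the cutoff still warrant a confirmatory 24-hour collection.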

  2. Recent Shift in Climate Relationship Enables Prediction of the Timing of Bird Breeding.

    Directory of Open Access Journals (Sweden)

    Shelley A Hinsley

    Full Text Available Large-scale climate processes influence many aspects of ecology including breeding phenology, reproductive success and survival across a wide range of taxa. Some effects are direct, for example, in temperate-zone birds, ambient temperature is an important cue enabling breeding effort to coincide with maximum food availability, and earlier breeding in response to warmer springs has been documented in many species. In other cases, time-lags of up to several years in ecological responses have been reported, with effects mediated through biotic mechanisms such as growth rates or abundance of food supplies. Here we use 23 years of data for a temperate woodland bird species, the great tit (Parus major), breeding in deciduous woodland in eastern England to demonstrate a time-lagged linear relationship between the onset of egg laying and the winter index of the North Atlantic Oscillation such that timing can be predicted from the winter index for the previous year. Thus the timing of bird breeding (and, by inference, the timing of spring events in general) can be predicted one year in advance. We also show that the relationship with the winter index appears to arise through an abiotic time-lag with local spring warmth in our study area. Examining this link between local conditions and larger-scale processes in the longer term showed that, in the past, significant relationships with the immediately preceding winter index were more common than those with the time-lagged index, and especially so from the late 1930s to the early 1970s. However, from the mid 1970s onwards, the time-lagged relationship has become the most significant, suggesting a recent change in climate patterns. The strength of the current time-lagged relationship suggests that it might have relevance for other temperature-dependent ecological relationships.
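A minimal sketch of the one-year-lagged linear fit described above, on synthetic data: laying date in year t+1 is regressed on the winter index of year t. The coefficients (110, −3), noise level, and variable names are invented for demonstration and are not the study's fitted values.

```python
import numpy as np

# Synthetic illustration of a time-lagged linear relationship: the response in
# year t+1 depends linearly on the winter index of year t. All numbers invented.
rng = np.random.default_rng(0)
nao_winter = rng.normal(0.0, 1.5, size=23)   # 23 years of winter index
noise = rng.normal(0.0, 1.0, size=22)

x = nao_winter[:-1]               # winter index in year t
y = 110.0 - 3.0 * x + noise      # laying date (day of year) in year t + 1

slope, intercept = np.polyfit(x, y, 1)                 # fit the lagged relationship
predicted_next = intercept + slope * nao_winter[-1]    # forecast one year ahead
print(round(slope, 2), round(intercept, 2))
```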

  3. Achieving accurate and efficient prediction of HVAC diaphragm noise at realistic Reynolds and Mach numbers

    NARCIS (Netherlands)

    Guilloud, G.; Schram, C.; Golliard, J.

    2009-01-01

    Despite the aeroacoustic expertise reached nowadays in air and ground transportation, energy sector or domestic appliances, reaching a decibel accuracy of an acoustic prediction for industrial cases is still challenging. Strong investments are made nowadays by oil and gas companies to determine and

  4. Accurate prediction of the ammonia probes of a variable proton-to-electron mass ratio

    Science.gov (United States)

    Owens, A.; Yurchenko, S. N.; Thiel, W.; Špirko, V.

    2015-07-01

    A comprehensive study of the mass sensitivity of the vibration-rotation-inversion transitions of 14NH3, 15NH3, 14ND3 and 15ND3 is carried out variationally using the TROVE approach. Variational calculations are robust and accurate, offering a new way to compute sensitivity coefficients. Particular attention is paid to the Δk = ±3 transitions between the accidentally coinciding rotation-inversion energy levels of the ν2 = 0+, 0-, 1+ and 1- states, and the inversion transitions in the ν4 = 1 state affected by the 'giant' l-type doubling effect. These transitions exhibit highly anomalous sensitivities, thus appearing as promising probes of a possible cosmological variation of the proton-to-electron mass ratio μ. Moreover, a simultaneous comparison of the calculated sensitivities reveals a sizeable isotopic dependence which could aid an exclusive ammonia detection.

  5. Safe surgery: how accurate are we at predicting intra-operative blood loss?

    LENUS (Irish Health Repository)

    2012-02-01

    Introduction Preoperative estimation of intra-operative blood loss by both anaesthetist and operating surgeon is a criterion of the World Health Organization's surgical safety checklist. The checklist requires specific preoperative planning when anticipated blood loss is greater than 500 mL. The aim of this study was to assess the accuracy of surgeons and anaesthetists at predicting intra-operative blood loss. Methods A 6-week prospective study of intermediate and major operations in an academic medical centre was performed. An independent observer interviewed surgical and anaesthetic consultants and registrars, preoperatively asking each to predict expected blood loss in millilitres. Intra-operative blood loss was measured and compared with these predictions. Parameters including the use of anticoagulation and anti-platelet therapy as well as intra-operative hypothermia and hypotension were recorded. Results One hundred sixty-eight operations were included in the study, including 142 elective and 26 emergency operations. Blood loss was predicted to within 500 mL of measured blood loss in 89% of cases. Consultant surgeons tended to underestimate blood loss, doing so in 43% of all cases, while consultant anaesthetists were more likely to overestimate (60% of all operations). Twelve patients (7%) had underestimation of blood loss of more than 500 mL by both surgeon and anaesthetist. Thirty per cent (n = 6/20) of patients requiring transfusion of a blood product within 24 hours of surgery had blood loss underestimated by more than 500 mL by both surgeon and anaesthetist. There was no significant difference in prediction between patients on anti-platelet or anticoagulation therapy preoperatively and those not on the said therapies. Conclusion Predicted intra-operative blood loss was within 500 mL of measured blood loss in 89% of operations. In 30% of patients who ultimately receive a blood transfusion, both the surgeon and anaesthetist significantly underestimate blood loss.

  6. A machine learning approach to the accurate prediction of monitor units for a compact proton machine.

    Science.gov (United States)

    Sun, Baozhou; Lam, Dao; Yang, Deshan; Grantham, Kevin; Zhang, Tiezhi; Mutic, Sasa; Zhao, Tianyu

    2018-03-03

    Clinical treatment planning systems for proton therapy currently do not calculate monitor units (MUs) in passive scatter proton therapy due to the complexity of the beam delivery systems. Physical phantom measurements are commonly employed to determine the field-specific output factors (OFs) but are often subject to limited machine time, measurement uncertainties and intensive labor. In this study, a machine learning-based approach was developed to predict output (cGy/MU) and derive MUs, incorporating the dependencies on gantry angle and field size for a single-room proton therapy system. The goal of this study was to develop a secondary check tool for OF measurements and eventually eliminate patient-specific OF measurements. The OFs of 1754 fields previously measured in a water phantom with calibrated ionization chambers and electrometers for patient-specific fields with various range and modulation width combinations for 23 options were included in this study. The training data sets for machine learning models in three different methods (Random Forest, XGBoost and Cubist) included 1431 (~81%) OFs. Ten-fold cross-validation was used to prevent "overfitting" and to validate each model. The remaining 323 (~19%) OFs were used to test the trained models. The difference between the measured and predicted values from machine learning models was analyzed. Model prediction accuracy was also compared with that of the semi-empirical model developed by Kooy (Phys. Med. Biol. 50, 2005). Additionally, gantry angle dependence of OFs was measured for three groups of options categorized on the selection of the second scatters. Field size dependence of OFs was investigated for the measurements with and without patient-specific apertures. All three machine learning methods showed higher accuracy than the semi-empirical model, which showed discrepancies of up to 7.7% for the treatment fields with full range and full modulation width. The Cubist-based solution
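A sketch of the regression setup on synthetic stand-ins: a Random Forest (one of the three methods the abstract names) predicting output from gantry angle, range, and modulation width, with an ~81%/19% train/test split as in the study. The generating formula, feature ranges, and all numeric values below are invented for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the 1754 measured output factors (OFs); the smooth
# dependence on gantry angle, range and modulation width is invented.
rng = np.random.default_rng(7)
n = 1754
gantry = rng.uniform(0, 360, n)           # degrees
field_range = rng.uniform(5, 25, n)       # cm, hypothetical
mod_width = rng.uniform(2, 20, n)         # cm, hypothetical
X = np.column_stack([gantry, field_range, mod_width])
y = (1.0 + 0.02 * np.sin(np.radians(gantry)) + 0.005 * field_range
     - 0.003 * mod_width + rng.normal(0, 0.002, n))   # "measured" OF + noise

split = int(0.81 * n)                     # ~81% train / ~19% test, as in the study
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(round(float(mae), 4))
```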

  7. Are predictive equations for estimating resting energy expenditure accurate in Asian Indian male weightlifters?

    Directory of Open Access Journals (Sweden)

    Mini Joseph

    2017-01-01

    Full Text Available Background: The accuracy of existing predictive equations to determine the resting energy expenditure (REE) of professional weightlifters remains scarcely studied. Our study aimed at assessing the REE of male Asian Indian weightlifters with indirect calorimetry and comparing the measured REE (mREE) with published equations. A new equation using potential anthropometric variables to predict REE was also evaluated. Materials and Methods: REE was measured on 30 male professional weightlifters aged between 17 and 28 years using indirect calorimetry and compared with eight predictive equations: Harris–Benedict, Mifflin-St. Jeor, FAO/WHO/UNU, ICMR, Cunningham, Owen, Katch-McArdle, and Nelson. Pearson correlation coefficient, intraclass correlation coefficient, and multiple linear regression analysis were carried out to study the agreement between the different methods, the association with anthropometric variables, and to formulate a new prediction equation for this population. Results: Pearson correlation coefficients between mREE and the anthropometric variables showed positive significance with suprailiac skinfold thickness, lean body mass (LBM), waist circumference, hip circumference, bone mineral mass, and body mass. All eight predictive equations underestimated the REE of the weightlifters when compared with the mREE. The highest mean difference was 636 kcal/day (Owen, 1986) and the lowest difference was 375 kcal/day (Cunningham, 1980). Stepwise multiple linear regression showed that LBM was the only significant determinant of REE in this group of sportspersons. A new equation using LBM as the independent variable for calculating REE was computed: REE for weightlifters = −164.065 + 0.039 (LBM) (confidence interval: −1122.984, 794.854). This new equation reduced the mean difference with mREE to 2.36 ± 369.15 kcal/day (standard error = 67.40). Conclusion: The significant finding of this study was that all the prediction equations underestimated the REE of the weightlifters.

  8. FATHMM-XF: accurate prediction of pathogenic point mutations via extended features.

    Science.gov (United States)

    Rogers, Mark F; Shihab, Hashem A; Mort, Matthew; Cooper, David N; Gaunt, Tom R; Campbell, Colin

    2018-02-01

    We present FATHMM-XF, a method for predicting pathogenic point mutations in the human genome. Drawing on an extensive feature set, FATHMM-XF outperforms competitors on benchmark tests, particularly in non-coding regions where the majority of pathogenic mutations are likely to be found. The FATHMM-XF web server is available at http://fathmm.biocompute.org.uk/fathmm-xf/, and as tracks on the Genome Tolerance Browser: http://gtb.biocompute.org.uk. Predictions are provided for human genome version GRCh37/hg19. The data used for this project can be downloaded from: http://fathmm.biocompute.org.uk/fathmm-xf/. mark.rogers@bristol.ac.uk or c.campbell@bristol.ac.uk. Supplementary data are available at Bioinformatics online.

  9. Using an Allometric Equation to Accurately Predict the Energy Expenditure of Children and Adolescents With Nonalcoholic Fatty Liver Disease.

    Science.gov (United States)

    Martincevic, Inez; Mouzaki, Marialena

    2017-03-01

    Pediatric patients with nonalcoholic fatty liver disease (NAFLD) require targeted nutrition therapy that relies on calculating energy needs. Common energy equations are inaccurate in predicting resting energy expenditure (REE), influencing total energy expenditure (TEE) estimates. Equations based on allometric scaling are simple, accurate, void of subjective activity and/or stress factor bias, and they estimate TEE. To investigate the predictive accuracy of an allometric energy equation (AEE) in predicting TEE of children and adolescents with NAFLD. Retrospective study performed in a single institution. The allometric equation was used to calculate AEE, and the results were compared with TEE calculated using indirect calorimetry data (measured REE) multiplied by an activity factor (AF) of 1.5 or 1.7. Fifty-six patients with a mean age of 13 years were included in this study. The agreement between TEE (using an AF of 1.5) and AEE was -96 kcal/d (confidence interval, -29 to 221). The predictive accuracy of the allometric equation was not different between obese and nonobese patients. Allometric equations allow for accurate estimation of TEE in children with NAFLD.

  10. Ability to predict repetitions to momentary failure is not perfectly accurate, though improves with resistance training experience

    Directory of Open Access Journals (Sweden)

    James Steele

    2017-11-01

    Full Text Available ‘Repetitions in Reserve’ (RIR) scales in resistance training (RT) are used to control effort but assume people accurately predict performance a priori (i.e. the number of possible repetitions to momentary failure (MF)). This study examined the ability of trainees with different experience levels to predict the number of repetitions to MF. One hundred and forty-one participants underwent a full body RT session involving single sets to MF and were asked to predict the number of repetitions they could complete before reaching MF on each exercise. Participants underpredicted the number of repetitions they could perform to MF (standard error of measurement [95% confidence intervals] for the combined sample ranged between 2.64 [2.36–2.99] and 3.38 [3.02–3.83]). There was a tendency towards improved accuracy with greater experience. The ability to predict repetitions to MF is not perfectly accurate among most trainees though may improve with experience. Thus, RIR should be used cautiously in the prescription of RT. Trainers and trainees should be aware of this as it may have implications for the attainment of training goals, particularly muscular hypertrophy.
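The underprediction this record measures can be illustrated with toy numbers: the signed error (actual minus predicted repetitions) is positive when a trainee underpredicts. The data below are fabricated for demonstration; only the direction of the effect follows the abstract.

```python
import statistics

# Fabricated predicted vs. actually performed repetitions to momentary failure.
predicted = [8, 10, 12, 9, 11, 7]
actual = [10, 13, 14, 9, 14, 9]   # trainees tend to underpredict

errors = [a - p for p, a in zip(predicted, actual)]   # positive = underprediction
mean_underprediction = statistics.mean(errors)
spread = statistics.stdev(errors)
print(mean_underprediction, round(spread, 2))
```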

  11. Ability to predict repetitions to momentary failure is not perfectly accurate, though improves with resistance training experience

    Science.gov (United States)

    Endres, Andreas; Fisher, James; Gentil, Paulo; Giessing, Jürgen

    2017-01-01

    ‘Repetitions in Reserve’ (RIR) scales in resistance training (RT) are used to control effort but assume people accurately predict performance a priori (i.e. the number of possible repetitions to momentary failure (MF)). This study examined the ability of trainees with different experience levels to predict number of repetitions to MF. One hundred and forty-one participants underwent a full body RT session involving single sets to MF and were asked to predict the number of repetitions they could complete before reaching MF on each exercise. Participants underpredicted the number of repetitions they could perform to MF (Standard error of measurements [95% confidence intervals] for combined sample ranged between 2.64 [2.36–2.99] and 3.38 [3.02–3.83]). There was a tendency towards improved accuracy with greater experience. Ability to predict repetitions to MF is not perfectly accurate among most trainees though may improve with experience. Thus, RIR should be used cautiously in prescription of RT. Trainers and trainees should be aware of this as it may have implications for the attainment of training goals, particularly muscular hypertrophy. PMID:29204323

  12. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    Science.gov (United States)

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communications across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins which contain domains in more than one protein architectural context. Using predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions. © 2013 Wiley Periodicals, Inc.
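A toy sketch of the classification step: a naïve Bayes model labeling residues as interface or non-interface from per-residue features. The data, feature count, and class balance below are synthetic stand-ins; the abstract's ~85% accuracy and ~95% specificity were obtained on real domain-domain interfaces.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic per-residue features; the minority class stands in for (rarer)
# interface residues. All parameters are invented for illustration.
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           weights=[0.8, 0.2], random_state=3)
clf = GaussianNB()
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
print(round(scores.mean(), 2))
```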

  13. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.

    Directory of Open Access Journals (Sweden)

    Yasser El-Manzalawy

    Full Text Available A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational effort needed for generating PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile, the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, and the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein
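The 1% uniform random sampling step the abstract describes can be sketched directly. The toy list below stands in for the more than 50 million UniRef100 sequences; identifiers are invented.

```python
import random

# Draw a 1% uniform random reference subset from a large sequence collection,
# as described in the abstract. The "database" here is a small stand-in.
random.seed(0)
database = [f"seq_{i}" for i in range(100_000)]           # stand-in for UniRef100
sample = random.sample(database, k=len(database) // 100)  # 1%, without replacement
print(len(sample))
```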

  14. Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens

    Energy Technology Data Exchange (ETDEWEB)

    Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn

    2013-11-01

    Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor to control the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol and we went another step further using a blind test with a wild-type peptide and two highly immunogenic mutants, which predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were configured to be accessible to solvent, forming a prominent surface, while the residue of the wild-type peptide pointed laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set, using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observation strongly supports a positive association of the surface morphology of a peptide–MHC complex to its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC binding immunogens.

  15. Accurate Structure Prediction of Peptide-MHC Complexes for Identifying Highly Immunogenic Antigens

    Science.gov (United States)

    Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn

    2013-01-01

    Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor to control the breadth of T cell responses is the surface morphology of a peptide-MHC complex. Here, we present an in silico protocol for predicting peptide-MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol and we went another step further using a blind test with a wild-type peptide and two highly immunogenic mutants, which predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were configured to be accessible to solvent, forming a prominent surface, while the residue of the wild-type peptide pointed laterally towards the side of the binding cleft. We then experimentally determined the structures of the blind test set, using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observation strongly supports a positive association of the surface morphology of a peptide-MHC complex to its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC binding immunogens. PMID:23688437

  16. Accurate cut-offs for predicting endoscopic activity and mucosal healing in Crohn's disease with fecal calprotectin

    Directory of Open Access Journals (Sweden)

    Juan María Vázquez-Morón

    Full Text Available Background: Fecal biomarkers, especially fecal calprotectin, are useful for predicting endoscopic activity in Crohn's disease; however, the cut-off point remains unclear. The aim of this paper was to analyze whether fecal calprotectin and M2 pyruvate kinase are good tools for generating highly accurate scores for the prediction of the state of endoscopic activity and mucosal healing. Methods: The simple endoscopic score for Crohn's disease and the Crohn's disease activity index were calculated for 71 patients diagnosed with Crohn's disease. Fecal calprotectin and M2-PK were measured by the enzyme-linked immunosorbent assay test. Results: A fecal calprotectin cut-off concentration of ≥ 170 µg/g (sensitivity 77.6%, specificity 95.5% and likelihood ratio +17.06) predicts a high probability of endoscopic activity, and a fecal calprotectin cut-off of ≤ 71 µg/g (sensitivity 95.9%, specificity 52.3% and likelihood ratio -0.08) predicts a high probability of mucosal healing. Three clinical groups were identified according to the data obtained: endoscopic activity (calprotectin ≥ 170), mucosal healing (calprotectin ≤ 71) and uncertainty (71 < calprotectin < 170), with significant differences in endoscopic values (F = 26.407, p < 0.01). Clinical activity or remission modified the probabilities of presenting endoscopic activity (100% vs 89%) or mucosal healing (75% vs 87%) in the diagnostic scores generated. M2-PK was insufficiently accurate to determine scores. Conclusions: The highly accurate scores for fecal calprotectin provide a useful tool for interpreting the probabilities of presenting endoscopic activity or mucosal healing, and are valuable in the specific clinical context.
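The reported cut-offs imply a simple three-way decision rule. The thresholds (170 and 71 µg/g) come from the abstract; the function itself is an illustration of that rule, not a clinical tool.

```python
# Three-way decision rule implied by the abstract's fecal calprotectin cut-offs:
# >= 170 ug/g suggests endoscopic activity, <= 71 ug/g suggests mucosal healing,
# values in between are indeterminate. Illustrative only.

def classify_calprotectin(fc_ug_per_g: float) -> str:
    if fc_ug_per_g >= 170:
        return "endoscopic activity"
    if fc_ug_per_g <= 71:
        return "mucosal healing"
    return "uncertain"

print(classify_calprotectin(200))
print(classify_calprotectin(50))
print(classify_calprotectin(100))
```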

  17. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons.

    Science.gov (United States)

    Oyeyemi, Victor B; Krisiloff, David B; Keith, John A; Libisch, Florian; Pavone, Michele; Carter, Emily A

    2014-01-28

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.

  18. DisoMCS: Accurately Predicting Protein Intrinsically Disordered Regions Using a Multi-Class Conservative Score Approach.

    Directory of Open Access Journals (Sweden)

    Zhiheng Wang

    Full Text Available The precise prediction of protein intrinsically disordered regions, which play a crucial role in biological processes, is a necessary prerequisite to further the understanding of the principles and mechanisms of protein function. Here, we propose a novel predictor, DisoMCS, which is a more accurate predictor of protein intrinsically disordered regions. DisoMCS is based on an original multi-class conservative score (MCS) obtained by sequence-order/disorder alignment. Initially, near-disorder regions are defined on fragments located at either terminus of an ordered region connecting a disordered region. Then the multi-class conservative score is generated by sequence alignment against a known structure database and represented as order, near-disorder and disorder conservative scores. The MCS of each amino acid has three elements: order, near-disorder and disorder profiles. Finally, the MCS is exploited as features to identify disordered regions in sequences. DisoMCS utilizes a non-redundant data set as the training set, MCS and predicted secondary structure as features, and a conditional random field as the classification algorithm. In predicted near-disorder regions a residue is determined as order or disorder according to the optimized decision threshold. DisoMCS was evaluated by cross-validation, large-scale prediction, independent tests and CASP (Critical Assessment of Techniques for Protein Structure Prediction) tests. All results confirmed that DisoMCS was very competitive in terms of accuracy of prediction when compared with well-established publicly available disordered region predictors. It also indicated our approach was more accurate when a query has higher homology with the knowledge database. DisoMCS is available at http://cal.tongji.edu.cn/disorder/.

  19. Heat capacities of xenotime-type ceramics: An accurate ab initio prediction

    Science.gov (United States)

    Ji, Yaqi; Beridze, George; Bosbach, Dirk; Kowalski, Piotr M.

    2017-10-01

    Because of their ability to incorporate actinides into their structure, the lanthanide phosphate ceramics (LnPO4) are considered as potential matrices for the disposal of nuclear waste. Here we present a highly reliable ab initio prediction of the variation of heat capacities and the standard entropies of these compounds in the zircon structure along the lanthanide series (Ln = Dy, …, Lu) and validate them against the existing experimental data. These data are helpful for the assessment of thermodynamic parameters of these materials in the context of using them as matrices for the immobilization of radionuclides for the purpose of nuclear waste management.

  20. Accurate prediction of hot spot residues through physicochemical characteristics of amino acid sequences

    KAUST Repository

    Chen, Peng

    2013-07-23

    Hot spot residues of proteins are fundamental interface residues that help proteins perform their functions. Detecting hot spots by experimental methods is costly and time-consuming. Sequential and structural information has been widely used in the computational prediction of hot spots. However, structural information is not always available. In this article, we investigated the problem of identifying hot spots using only physicochemical characteristics extracted from amino acid sequences. We first extracted 132 relatively independent physicochemical features from a set of the 544 properties in AAindex1, an amino acid index database. Each feature was utilized to train a classification model with a novel encoding schema for hot spot prediction by the IBk algorithm, an extension of the K-nearest neighbor algorithm. The combinations of the individual classifiers were explored and the classifiers that appeared frequently in the top performing combinations were selected. The hot spot predictor was built on an ensemble of these classifiers and works in a voting manner. Experimental results demonstrated that our method effectively exploited the feature space and allowed flexible weights of features for different queries. On the commonly used hot spot benchmark sets, our method significantly outperformed other machine learning algorithms and state-of-the-art hot spot predictors. The program is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013 Wiley Periodicals, Inc.
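A minimal sketch of the voting-ensemble idea: K-nearest-neighbor base classifiers (IBk is a KNN variant), each trained on a different feature group, combined by majority vote. The data, feature groupings, and all parameters below are synthetic stand-ins, not the paper's 132-feature setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic two-class data standing in for hot spot / non-hot spot residues.
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# Each base learner sees a different (invented) group of features.
feature_groups = [slice(0, 4), slice(4, 8), slice(8, 12)]
members = [(fg, KNeighborsClassifier(n_neighbors=5).fit(X_tr[:, fg], y_tr))
           for fg in feature_groups]

votes = np.stack([clf.predict(X_te[:, fg]) for fg, clf in members])
majority = (votes.sum(axis=0) >= 2).astype(int)   # simple majority of 3 voters
accuracy = (majority == y_te).mean()
print(round(float(accuracy), 2))
```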

  1. Volumetric analysis of central body fat accurately predicts incidence of diabetes and hypertension in adults.

    Science.gov (United States)

    Gibby, Jacob T; Njeru, Dennis K; Cvetko, Steven T; Merrill, Ray M; Bikman, Benjamin T; Gibby, Wendell A

    2015-01-01

    Central adipose tissue is appreciated as a risk factor for cardiometabolic disorders. The purpose of this study was to determine the efficacy of a volumetric 3D analysis of central adipose tissue in predicting disease. Full body computerized tomography (CT) scans were obtained from 1225 female (518) and male (707) subjects, aged 18-88. Percent central body fat (%cBF) was determined by quantifying the adipose tissue volume from the dome of the liver to the pubic symphysis. Calcium score was determined from the calcium content of the coronary arteries. Relationships between %cBF, BMI, and several cardiometabolic disorders were assessed controlling for age, sex, and race. %cBF was significantly greater for those with type 2 diabetes and hypertension, but not stroke or hypercholesterolemia. Simple anthropometric determination of BMI correlated with diabetes and hypertension as well as central body fat did. Calcium scoring significantly correlated with all measurements of cardiovascular health, including hypertension, hypercholesterolemia, and heart disease. Central body fat and BMI equally and highly predict the incidence of hypertension and type 2 diabetes.

  2. Deep Learning Accurately Predicts Estrogen Receptor Status in Breast Cancer Metabolomics Data.

    Science.gov (United States)

    Alakwaa, Fadhl M; Chaudhary, Kumardeep; Garmire, Lana X

    2018-01-05

    Metabolomics holds promise as a new technology to diagnose highly heterogeneous diseases. Conventionally, metabolomics data analysis for diagnosis is done using various statistical and machine learning based classification methods. However, it remains unknown whether deep neural networks, a class of increasingly popular machine learning methods, are suitable for classifying metabolomics data. Here we use a cohort of 271 breast cancer tissues, 204 estrogen receptor positive (ER+) and 67 estrogen receptor negative (ER-), to test the accuracies of feed-forward networks, a deep learning (DL) framework, as well as six widely used machine learning models, namely random forest (RF), support vector machines (SVM), recursive partitioning and regression trees (RPART), linear discriminant analysis (LDA), prediction analysis for microarrays (PAM), and generalized boosted models (GBM). The DL framework has the highest area under the curve (AUC) of 0.93 in classifying ER+/ER- patients, compared to the other six machine learning algorithms. Furthermore, the biological interpretation of the first hidden layer reveals eight commonly enriched significant metabolomics pathways (adjusted P-value < 0.05) also identified by the machine learning methods. Among them, the protein digestion and absorption and ATP-binding cassette (ABC) transporters pathways are also confirmed in integrated analysis between metabolomics and gene expression data in these samples. In summary, the deep learning method shows advantages for metabolomics based breast cancer ER status classification, with both the highest prediction accuracy (AUC = 0.93) and better revelation of disease biology. We encourage the adoption of feed-forward network based deep learning methods in the metabolomics research community for classification.
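As a rough sketch of the feed-forward architecture the study evaluates (the authors' actual network, data, and hyperparameters differ), the following trains a one-hidden-layer network by gradient descent on synthetic two-class data standing in for metabolite intensities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for metabolite intensities: 200 samples x 5 features,
# where class 1 ("ER+") has shifted means in the first two metabolites.
X0 = rng.normal(0.0, 1.0, (100, 5))
X1 = rng.normal(0.0, 1.0, (100, 5))
X1[:, :2] += 2.0
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer (8 tanh units) with a sigmoid output, trained by plain
# batch gradient descent on the logistic loss.
W1 = rng.normal(0, 0.1, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                  # hidden activations
    p = sigmoid(H @ W2 + b2).ravel()          # predicted P(class 1)
    g = (p - y)[:, None] / len(y)             # dLoss/dlogit, averaged
    gH = (g @ W2.T) * (1 - H ** 2)            # backprop through tanh
    W2 -= lr * H.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gH; b1 -= lr * gH.sum(0)

p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
acc = float(((p > 0.5) == y).mean())
print(f"training accuracy: {acc:.2f}")
```

A real metabolomics workflow would add held-out evaluation (e.g. cross-validated AUC) rather than reporting training accuracy; the point here is only the shape of the forward and backward passes.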

  3. Size matters. The width and location of a ureteral stone accurately predict the chance of spontaneous passage

    Energy Technology Data Exchange (ETDEWEB)

    Jendeberg, Johan; Geijer, Haakan; Alshamari, Muhammed; Liden, Mats [Oerebro University Hospital, Department of Radiology, Faculty of Medicine and Health, Oerebro (Sweden)]; Cierzniak, Bartosz [Oerebro University, Department of Surgery, Faculty of Medicine and Health, Oerebro (Sweden)]

    2017-11-15

    To determine how to most accurately predict the chance of spontaneous passage of a ureteral stone using information in the diagnostic non-enhanced computed tomography (NECT), and to create predictive models with smaller stone size intervals than previously possible. Retrospectively, 392 consecutive patients with a ureteric stone on NECT were included. Three radiologists independently measured the stone size. Stone location, side, hydronephrosis, CRP, medical expulsion therapy (MET) and all follow-up radiology until stone expulsion or 26 weeks were recorded. Logistic regressions were performed with spontaneous stone passage in 4 weeks and 20 weeks as the dependent variable. The spontaneous passage rate in 20 weeks was 312 out of 392 stones: 98% for 0-2 mm, 98% for 3 mm, 81% for 4 mm, 65% for 5 mm, 33% for 6 mm and 9% for ≥6.5 mm wide stones. The stone size and location predicted spontaneous ureteric stone passage. The side and the grade of hydronephrosis only predicted stone passage in specific subgroups. Spontaneous passage of a ureteral stone can be predicted with high accuracy with the information available in the NECT. We present a prediction method based on stone size and location. (orig.)
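The size-dependence reported above can be illustrated with a toy logistic fit. This is not the authors' per-patient regression (which also uses stone location, and individual-level data that are not available here); it simply fits a logistic curve to the aggregate 20-week passage rates quoted in the abstract, on the logit scale by ordinary least squares:

```python
import math

# Reported 20-week spontaneous passage rates by stone width (mm).
sizes = [1.0, 3.0, 4.0, 5.0, 6.0, 6.5]
rates = [0.98, 0.98, 0.81, 0.65, 0.33, 0.09]

# Fit logit(P) = a + b*size by least squares on the logit scale,
# a stand-in for the per-patient logistic regression in the paper.
logits = [math.log(r / (1 - r)) for r in rates]
n = len(sizes)
mx = sum(sizes) / n
my = sum(logits) / n
b = (sum((x - mx) * (l - my) for x, l in zip(sizes, logits))
     / sum((x - mx) ** 2 for x in sizes))
a = my - b * mx

def passage_probability(width_mm):
    """Estimated probability of spontaneous passage within 20 weeks."""
    return 1.0 / (1.0 + math.exp(-(a + b * width_mm)))

print(f"P(pass | 4.5 mm) = {passage_probability(4.5):.2f}")
```

The fitted slope is negative, reproducing the steep drop in passage probability between 4 mm and 6.5 mm stones.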

  4. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three-dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER- patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic

  5. ABC/2 Method Does not Accurately Predict Cerebral Arteriovenous Malformation Volume.

    Science.gov (United States)

    Roark, Christopher; Vadlamudi, Venu; Chaudhary, Neeraj; Gemmete, Joseph J; Seinfeld, Joshua; Thompson, B Gregory; Pandey, Aditya S

    2018-02-01

    Stereotactic radiosurgery (SRS) is a treatment option for cerebral arteriovenous malformations (AVMs) to prevent intracranial hemorrhage. The decision to proceed with SRS is usually based on calculated nidal volume. Physicians commonly use the ABC/2 formula, based on digital subtraction angiography (DSA), when counseling patients for SRS. To determine whether AVM volume calculated using the ABC/2 method on DSA is accurate when compared to the exact volume calculated from thin-cut axial sections used for SRS planning. Retrospective search of a neurovascular database to identify AVMs treated with SRS from 1995 to 2015. Maximum nidal diameters in orthogonal planes on DSA images were recorded to determine volume using the ABC/2 formula. Nidal target volume was extracted from operative reports of SRS. Volumes were then compared using descriptive statistics and paired t-tests. Ninety intracranial AVMs were identified. Median volume was 4.96 cm3 [interquartile range (IQR) 1.79-8.85] with SRS planning methods and 6.07 cm3 (IQR 1.3-13.6) with the ABC/2 methodology. Moderate correlation was seen between SRS and ABC/2 volumes (r = 0.662), but the volumes differed significantly (t = -3.2; P = .002). When AVMs were dichotomized based on ABC/2 volume, significant differences remained (t = 3.1, P = .003 for ABC/2 volume < 7 cm3, and for ABC/2 volume > 7 cm3). The ABC/2 method overestimates cerebral AVM volume when compared to volumetric analysis from SRS planning software. For AVMs > 7 cm3, the overestimation is even greater. SRS planning techniques were also significantly different than values derived from equations for cones and cylinders. Copyright © 2017 by the Congress of Neurological Surgeons
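For context, the ABC/2 formula approximates the volume of an ellipsoid with orthogonal diameters A, B, and C by replacing the exact factor π/6 ≈ 0.524 with 1/2. For a true ellipsoid the two agree within about 5%, so the overestimation reported above reflects the irregular, non-ellipsoidal shape of real AVM nidi rather than the constant itself. A quick comparison (the diameters are arbitrary example values):

```python
import math

def abc_over_2(a_cm, b_cm, c_cm):
    """ABC/2 bedside approximation of an ellipsoid volume (cm^3)."""
    return a_cm * b_cm * c_cm / 2.0

def ellipsoid_volume(a_cm, b_cm, c_cm):
    """Exact volume of an ellipsoid with *diameters* a, b, c: (pi/6)*a*b*c."""
    return math.pi / 6.0 * a_cm * b_cm * c_cm

a, b, c = 3.0, 2.5, 2.0
approx = abc_over_2(a, b, c)
exact = ellipsoid_volume(a, b, c)
print(f"ABC/2 = {approx:.2f} cm^3, ellipsoid = {exact:.2f} cm^3, "
      f"ratio = {approx / exact:.4f}")
```

Note that ABC/2 slightly *under*estimates a perfect ellipsoid (ratio 3/π ≈ 0.955); the systematic overestimation in the study therefore comes from measuring maximum diameters of an irregular nidus.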

  6. The admixed population structure in Danish Jersey dairy cattle challenges accurate genomic predictions

    DEFF Research Database (Denmark)

    Thomasen, Jørn Rind; Sørensen, Anders Christian; Su, Guosheng

    2013-01-01

    The main purpose of this study is to evaluate whether the population structure in Danish Jersey known from the history of the breed is also reflected in the markers. This is done by comparing the linkage disequilibrium and persistence of phase for subgroups of Jersey animals with high proportions of Danish or US origin. Furthermore, it is investigated whether a model explicitly incorporating breed origin of animals, inferred either through the known pedigree or from SNP marker data, leads to improved genomic predictions compared to a model ignoring breed origin. Models incorporating breed origin were analyzed and compared to a basic genomic model that assumes a homogeneous breed structure. The main finding in this study is that the importation of germ plasma from the US Jersey population is readily reflected in the genomes of modern Danish Jersey animals. Firstly, linkage disequilibrium...

  7. Accurate prediction of thermodynamic properties of alkyl peroxides by combining density functional theory calculation with least-square calibration.

    Science.gov (United States)

    Liu, Cun-Xi; Li, Ze-Rong; Zhou, Chong-Wen; Li, Xiang-Yuan

    2009-05-01

    Owing to its significance in kinetic modeling of the oxidation and combustion mechanisms of hydrocarbons, a fast and relatively accurate method was developed for the prediction of ΔfH°298 of alkyl peroxides. By this method, a raw ΔfH°298 value is calculated from the optimized geometry and vibration frequencies at the B3LYP/6-31G(d,p) level, and an accurate ΔfH°298 value is then obtained by a least-squares procedure. The least-squares procedure is a six-parameter linear equation and is validated by a leave-one-out technique, giving a cross-validation squared correlation coefficient q² of 0.97 and a squared correlation coefficient of 0.98 for the final model. Calculated results demonstrated that the least-squares calibration leads to a remarkable reduction of error and to accurate ΔfH°298 values within the chemical accuracy of 8 kJ mol⁻¹, except for (CH3)2CHCH2CH2CH2OOH, which has an error of 8.69 kJ mol⁻¹. Comparison of the results by CBS-Q, CBS-QB3, G2, and G3 revealed that B3LYP/6-31G(d,p) in combination with a least-squares calibration is reliable for the accurate prediction of the standard enthalpies of formation of alkyl peroxides. Standard entropies at 298 K and heat capacities in the temperature range of 300-1500 K for alkyl peroxides were also calculated using the rigid rotor-harmonic oscillator approximation. © 2008 Wiley Periodicals, Inc.
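The calibration step can be illustrated in stripped-down form. The paper's model is a six-parameter linear equation in molecular descriptors; the sketch below fits only the simplest possible correction (slope and intercept) relating hypothetical raw DFT enthalpies to reference values (all numbers are invented for illustration):

```python
import numpy as np

# Hypothetical raw B3LYP enthalpies of formation (kJ/mol) and reference
# values for a small training set (illustrative numbers only).
raw = np.array([-210.0, -185.5, -160.2, -240.8, -130.9, -255.1])
ref = np.array([-204.3, -180.1, -155.6, -233.9, -127.0, -247.8])

# Fit the calibration ref ~ a*raw + b by linear least squares.
A = np.column_stack([raw, np.ones_like(raw)])
(a, b), *_ = np.linalg.lstsq(A, ref, rcond=None)

corrected = a * raw + b
rmse_raw = float(np.sqrt(np.mean((raw - ref) ** 2)))
rmse_cal = float(np.sqrt(np.mean((corrected - ref) ** 2)))
print(f"RMSE before calibration: {rmse_raw:.2f} kJ/mol, after: {rmse_cal:.2f} kJ/mol")
```

The real procedure also validates the fitted coefficients by leave-one-out cross-validation, which guards against the correction merely memorizing the training set.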

  8. Polarizable charge equilibration model for predicting accurate electrostatic interactions in molecules and solids

    Science.gov (United States)

    Naserifar, Saber; Brooks, Daniel J.; Goddard, William A.; Cvicek, Vaclav

    2017-03-01

    Electrostatic interactions play a critical role in determining the properties, structures, and dynamics of chemical, biochemical, and material systems. These interactions are described well at the level of quantum mechanics (QM) but not so well for the various models used in force field simulations of these systems. We propose and validate a new general methodology, denoted PQEq, to predict rapidly and dynamically the atomic charges and polarization underlying the electrostatic interactions. Here the polarization is described using an atomic sized Gaussian shaped electron density that can polarize away from the core in response to internal and external electric fields, while at the same time adjusting the charge on each core (described as a Gaussian function) so as to achieve a constant chemical potential across all atoms of the system. The parameters for PQEq are derived from experimental atomic properties of all elements up to Nobelium (atomic no. = 102). We validate PQEq by comparing to QM interaction energy as probe dipoles are brought along various directions up to 30 molecules containing H, C, N, O, F, Si, P, S, and Cl atoms. We find that PQEq predicts interaction energies in excellent agreement with QM, much better than other common charge models such as obtained from QM using Mulliken or ESP charges and those from standard force fields (OPLS and AMBER). Since PQEq increases the accuracy of electrostatic interactions and the response to external electric fields, we expect that PQEq will be useful for a large range of applications including ligand docking to proteins, catalytic reactions, electrocatalysis, ferroelectrics, and growth of ceramics and films, where it could be incorporated into standard force fields as OPLS, AMBER, CHARMM, Dreiding, ReaxFF, and UFF.

  9. The human skin/chick chorioallantoic membrane model accurately predicts the potency of cosmetic allergens.

    Science.gov (United States)

    Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S

    2009-04-01

    The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin.

  10. Towards Relaxing the Spherical Solar Radiation Pressure Model for Accurate Orbit Predictions

    Science.gov (United States)

    Lachut, M.; Bennett, J.

    2016-09-01

    The well-known cannonball model has been used ubiquitously for decades to capture the effects of atmospheric drag and solar radiation pressure on satellites and space debris. While it lends itself naturally to spherical objects, its validity for non-spherical objects has been debated heavily for years throughout the space situational awareness community. One of the leading motivations for improving orbit predictions by relaxing the spherical assumption is the ongoing demand for more robust and reliable conjunction assessments. In this study, we explore the orbit propagation of a flat plate in a near-GEO orbit under the influence of solar radiation pressure, using a Lambertian BRDF model. Consequently, this approach accounts for the spin rate and orientation of the object, which is typically determined in practice using a light curve analysis. Here, simulations are performed that systematically reduce the spin rate, to identify the point at which the spherical model no longer describes the orbital elements of the spinning plate. Further understanding of this threshold would provide insight into when a higher fidelity model should be used, resulting in improved orbit propagations. The work presented here is therefore of particular interest to organizations and researchers that maintain their own catalog and/or perform conjunction analyses.
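The baseline being relaxed here is the cannonball SRP model, in which the acceleration depends only on the area-to-mass ratio and a single reflectivity coefficient, with no dependence on attitude or spin. A minimal sketch (the coefficient and object values are illustrative, not from the paper):

```python
# Cannonball solar radiation pressure: the object is treated as a sphere,
# so the acceleration depends only on area-to-mass ratio and a single
# reflectivity coefficient C_R.
SOLAR_FLUX_1AU = 1361.0            # W/m^2, solar constant
C_LIGHT = 299_792_458.0            # m/s
P_SRP = SOLAR_FLUX_1AU / C_LIGHT   # ~4.54e-6 N/m^2 radiation pressure at 1 AU

def cannonball_srp_accel(area_m2, mass_kg, c_r=1.3, sun_dist_au=1.0):
    """Magnitude of the SRP acceleration (m/s^2), directed anti-sunward."""
    return c_r * P_SRP * (area_m2 / mass_kg) / sun_dist_au ** 2

# A hypothetical 1 m^2, 100 kg object near GEO (still ~1 AU from the Sun):
a_srp = cannonball_srp_accel(1.0, 100.0)
print(f"SRP acceleration: {a_srp:.3e} m/s^2")
```

A flat plate with a Lambertian BRDF replaces the constant C_R with a force that depends on the plate's instantaneous orientation, which is why spin rate enters the propagation at all.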

  11. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California 90024 (United States)

    2015-11-15

    attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries.

  13. Industrial Compositional Streamline Simulation for Efficient and Accurate Prediction of Gas Injection and WAG Processes

    Energy Technology Data Exchange (ETDEWEB)

    Margot Gerritsen

    2008-10-31

    Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. The understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline-based solver for gas injection processes that is computationally very attractive: compared to the traditional Eulerian solvers used by industry, it computes solutions orders of magnitude faster with comparable accuracy, provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework, allowing improved streamline coverage and partial streamline tracing, amongst others; parallelization of the streamline code, which significantly improves wall clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We introduced several novel ideas in the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational efforts where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to the pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids

  14. Easy-to-use, general, and accurate multi-Kinect calibration and its application to gait monitoring for fall prediction.

    Science.gov (United States)

    Staranowicz, Aaron N; Ray, Christopher; Mariottini, Gian-Luca

    2015-01-01

    Falls are the most common causes of unintentional injury and death in older adults. Many clinics, hospitals, and health-care providers are urgently seeking accurate, low-cost, and easy-to-use technology to predict falls before they happen, e.g., by monitoring the human walking pattern (or "gait"). Despite the wide popularity of Microsoft's Kinect and the plethora of solutions for gait monitoring, no strategy has been proposed to date to allow non-expert users to calibrate the cameras, which is essential to accurately fuse the body motion observed by each camera into a single frame of reference. In this paper, we present a novel multi-Kinect calibration algorithm that has advanced features when compared to existing methods: 1) it is easy to use, 2) it can be used in any generic Kinect arrangement, and 3) it provides accurate calibration. Extensive real-world experiments have been conducted to validate our algorithm and to compare its performance against other multi-Kinect calibration approaches, in particular to show the improved estimates of gait parameters. Finally, a MATLAB Toolbox has been made publicly available for the entire research community.
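The core geometric step in any multi-camera calibration of this kind is recovering the rigid transform between two Kinects from matched 3-D points. The sketch below applies the standard Kabsch/Procrustes solution to simulated, noise-free data; it is not the authors' algorithm, which is specifically designed to be usable by non-experts and to handle generic arrangements:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst = R @ src + t
    (Kabsch/Procrustes), from matched 3-D point pairs."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

# Simulate: camera B sees the same skeleton points as camera A, but
# rotated 30 degrees about the vertical axis and shifted.
rng = np.random.default_rng(1)
pts_a = rng.uniform(-1, 1, (20, 3))
th = np.radians(30)
R_true = np.array([[np.cos(th), 0.0, np.sin(th)],
                   [0.0,        1.0, 0.0       ],
                   [-np.sin(th), 0.0, np.cos(th)]])
t_true = np.array([0.5, 0.0, 2.0])
pts_b = pts_a @ R_true.T + t_true

R, t = rigid_transform(pts_a, pts_b)
print("max rotation error:", np.abs(R - R_true).max())
```

With real skeleton data the correspondences are noisy, so a robust variant (e.g. RANSAC over frames) would wrap this estimator.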

  15. Lipid accumulation product: a simple and accurate index for predicting metabolic syndrome in Taiwanese people aged 50 and over

    Directory of Open Access Journals (Sweden)

    Chiang Jui-Kun

    2012-09-01

    Background: Lipid accumulation product (LAP) has been advocated as a simple clinical indicator of metabolic syndrome (MS). However, no studies have evaluated the accuracy of LAP in predicting MS in Taiwanese adults. The aim of our investigation was to use LAP to predict MS in Taiwanese adults. Methods: Taiwanese adults aged 50 years and over (n = 513) were recruited from a physical examination center at a regional hospital in southern Taiwan. MS was defined according to the MS criteria for Taiwanese people. LAP was calculated as (waist circumference [cm] − 65) × (triglyceride concentration [mM]) for men, and (waist circumference [cm] − 58) × (triglyceride concentration [mM]) for women. Simple logistic regression and receiver-operating characteristic (ROC) analyses were conducted. Results: The prevalence of MS was 19.5% and 21.5% for males and females, respectively. LAP showed the highest prediction accuracy among adiposity measures, with an area under the ROC curve (AUC) of 0.901. This was significantly higher than the adiposity measure of waist-to-height ratio (AUC = 0.813). Conclusions: LAP was a simple and accurate predictor of MS in Taiwanese people aged 50 years and over, with significantly higher predictability than the other adiposity measures tested.
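The LAP formula quoted in the abstract translates directly into code, with sex-specific waist offsets of 65 cm for men and 58 cm for women (the example inputs are arbitrary):

```python
def lipid_accumulation_product(waist_cm, triglycerides_mm, sex):
    """LAP as defined in the abstract: (WC - 65) x TG for men and
    (WC - 58) x TG for women, with WC in cm and TG in mmol/L."""
    offset = 65.0 if sex == "M" else 58.0
    return (waist_cm - offset) * triglycerides_mm

# Example: a man with a 90 cm waist and TG of 1.8 mmol/L -> (90 - 65) * 1.8
print(lipid_accumulation_product(90.0, 1.8, "M"))
# Example: a woman with an 85 cm waist and TG of 1.5 mmol/L -> (85 - 58) * 1.5
print(lipid_accumulation_product(85.0, 1.5, "F"))
```

Classifying a patient as MS-positive additionally requires a decision threshold on LAP, which the study derives from the ROC analysis rather than from the formula itself.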

  16. Haplotype Based Genome-Enabled Prediction of Traits Across Nordic Red Cattle Breeds

    DEFF Research Database (Denmark)

    Castro Dias Cuyabano, Beatriz; Lund, Mogens Sandø; Rosa, G J M

    SNP markers have been widely explored in genome-based prediction. This study explored the use of haplotype blocks (haploblocks) to predict five milk production traits (fertility, mastitis, protein, fat and milk yield), using a mix of Nordic Red cattle as the reference population for training. Predictions were performed under a Bayesian approach comparing a GBLUP and a mixture model. In general, predictions were more reliable when using haploblocks instead of individual SNPs as predictors. The Danish Red cattle presented the largest benefit in predictive ability from haploblocks, achieving 5.1% higher reliability than with the individual SNP approach in mastitis. This work gives evidence that predictions using haploblocks, along with a combined training population of dairy cattle, may improve prediction accuracy of important traits in the individual populations.

  17. Heparin removal by ecteola-cellulose pre-treatment enables the use of plasma samples for accurate measurement of anti-Yellow fever virus neutralizing antibodies.

    Science.gov (United States)

    Campi-Azevedo, Ana Carolina; Peruhype-Magalhães, Vanessa; Coelho-Dos-Reis, Jordana Grazziela; Costa-Pereira, Christiane; Yamamura, Anna Yoshida; Lima, Sheila Maria Barbosa de; Simões, Marisol; Campos, Fernanda Magalhães Freire; de Castro Zacche Tonini, Aline; Lemos, Elenice Moreira; Brum, Ricardo Cristiano; de Noronha, Tatiana Guimarães; Freire, Marcos Silva; Maia, Maria de Lourdes Sousa; Camacho, Luiz Antônio Bastos; Rios, Maria; Chancey, Caren; Romano, Alessandro; Domingues, Carla Magda; Teixeira-Carvalho, Andréa; Martins-Filho, Olindo Assis

    2017-09-01

    Technological innovations in vaccinology have recently contributed to bring about novel insights for the vaccine-induced immune response. While the current protocols that use peripheral blood samples may provide abundant data, a range of distinct components of whole blood samples are required and the different anticoagulant systems employed may impair some properties of the biological sample and interfere with functional assays. Although the interference of heparin in functional assays for viral neutralizing antibodies such as the functional plaque-reduction neutralization test (PRNT), considered the gold-standard method to assess and monitor the protective immunity induced by the Yellow fever virus (YFV) vaccine, has been well characterized, the development of pre-analytical treatments is still required for the establishment of optimized protocols. The present study intended to optimize and evaluate the performance of pre-analytical treatment of heparin-collected blood samples with ecteola-cellulose (ECT) to provide accurate measurement of anti-YFV neutralizing antibodies, by PRNT. The study was designed in three steps, including: I. Problem statement; II. Pre-analytical steps; III. Analytical steps. Data confirmed the interference of heparin on PRNT reactivity in a dose-responsive fashion. Distinct sets of conditions for ECT pre-treatment were tested to optimize the heparin removal. The optimized protocol was pre-validated to determine the effectiveness of heparin plasma:ECT treatment to restore the PRNT titers as compared to serum samples. The validation and comparative performance was carried out by using a large range of serum vs heparin plasma:ECT 1:2 paired samples obtained from unvaccinated and 17DD-YFV primary vaccinated subjects. Altogether, the findings support the use of heparin plasma:ECT samples for accurate measurement of anti-YFV neutralizing antibodies. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Accurate prediction of stability changes in protein mutants by combining machine learning with structure based computational mutagenesis.

    Science.gov (United States)

    Masso, Majid; Vaisman, Iosif I

    2008-09-15

    Accurate predictive models for the impact of single amino acid substitutions on protein stability provide insight into protein structure and function. Such models are also valuable for the design and engineering of new proteins. Previously described methods have utilized properties of protein sequence or structure to predict the free energy change of mutants due to thermal (ΔΔG) and denaturant (ΔΔG(H2O)) denaturations, as well as mutant thermal stability (ΔTm), through the application of either computational energy-based approaches or machine learning techniques. However, the accuracy achieved by applying these methods separately is frequently far from optimal. We detail a computational mutagenesis technique based on a four-body, knowledge-based, statistical contact potential. For any mutation due to a single amino acid replacement in a protein, the method provides an empirical normalized measure of the ensuing environmental perturbation occurring at every residue position. A feature vector is generated for the mutant by considering perturbations at the mutated position and its six nearest neighbors, ordered by proximity, in the three-dimensional (3D) protein structure. These predictors of stability change are evaluated by applying machine learning tools to large training sets of mutants derived from diverse proteins that have been experimentally studied and described. Predictive models based on our combined approach are either comparable to, or in many cases significantly outperform, previously published results. A web server with supporting documentation is available at http://proteins.gmu.edu/automute.
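The feature-vector construction described above (the perturbation score at the mutated position followed by the scores at its six nearest neighbors, ordered by distance) can be sketched as follows. The coordinates and perturbation scores are toy values; in the real method the scores come from the four-body statistical contact potential:

```python
import numpy as np

def mutant_feature_vector(ca_coords, perturbations, mutated_idx, n_neighbors=6):
    """Assemble the per-mutant feature vector sketched in the abstract:
    the environmental perturbation at the mutated residue, followed by the
    perturbations at its n nearest residues in 3-D, ordered by distance."""
    d = np.linalg.norm(ca_coords - ca_coords[mutated_idx], axis=1)
    order = np.argsort(d, kind="stable")        # stable: ties broken by index
    neighbors = order[1:n_neighbors + 1]        # skip the residue itself
    return np.concatenate([[perturbations[mutated_idx]],
                           perturbations[neighbors]])

# Toy structure: 10 residues on a line, perturbation score = residue index.
coords = np.column_stack([np.arange(10.0), np.zeros(10), np.zeros(10)])
scores = np.arange(10.0)
fv = mutant_feature_vector(coords, scores, mutated_idx=4)
print(fv)   # residue 4 first, then neighbors at distances 1, 1, 2, 2, 3, 3
```

One such vector per experimentally characterized mutant then becomes a training row for the machine learning stage.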

  19. Ratio of matrix metalloproteinase-2 to -9 is a more accurate predictive biomarker in women with suspected pre-eclampsia.

    Science.gov (United States)

    Feng, Hao; Wang, Li; Zhang, Min; Zhang, Zhiwei; Guo, Wei; Wang, Xietong

    2017-04-30

    Pre-eclampsia (PE) is a condition unique to pregnancy, and abnormal expression of matrix metalloproteinases (MMPs) has been implicated in its pathogenesis. We aimed to evaluate the reliability of plasma levels of MMP-2, MMP-9 and their relative ratio in predicting PE. A total of 318 women with suspected PE were recruited for the study, who were subsequently either cleared or diagnosed of PE and grouped accordingly. Their baseline characteristics were compared. Blood samples were also collected from all participants to determine the plasma levels of MMP-2 and MMP-9. The predictive values of the levels of MMP-2 and MMP-9, as well as their ratio, were analyzed using the receiver operating characteristic (ROC) curve. Neither MMP-2 nor MMP-9 alone exhibited any obvious difference between normal and PE pregnancies. However, the MMP-2/MMP-9 ratio was significantly higher in PE-affected pregnancies than in the normal control group. ROC curve analysis also indicated that the MMP-2/MMP-9 ratio provided a better compromise between specificity and sensitivity in distinguishing PE from normal pregnancies than either of the two MMPs alone. The MMP-2/MMP-9 ratio is thus a more accurate biomarker for predicting PE than either MMP-2 or MMP-9 alone. © 2017 The Author(s).

  20. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    Science.gov (United States)

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve (AUC) for saroglitazar. Healthy-subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models correlated 1, 2, or 3 concentration-time points with the AUC0-t of saroglitazar. Only models with regression coefficients (R²) >0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC0-t of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time-point models achieved R² > 0.90. Among the various 3-concentration-time-point models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were within the predefined acceptance limits for the prediction of saroglitazar. The same models, when applied to the AUC0-t prediction of saroglitazar sulfoxide, showed comparable mean prediction error, mean absolute prediction error, and root mean square error, suggesting the model also predicts the exposure of the metabolite.
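The shape of a 3-concentration-time-point limited sampling model is a linear regression of AUC0-t on the concentrations at the chosen sampling times (here 0.5, 2, and 8 hours, as in one of the validated models). The sketch below fits such a model by ordinary least squares; the concentrations and AUC values are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical training data: plasma concentrations at 0.5, 2, and 8 h for
# six subjects, and their observed AUC0-t (arbitrary units, illustrative).
C = np.array([[1.2, 3.4, 0.8],
              [0.9, 2.8, 0.6],
              [1.5, 4.1, 1.1],
              [0.7, 2.2, 0.5],
              [1.1, 3.0, 0.9],
              [1.3, 3.7, 1.0]])
auc_obs = np.array([21.0, 16.5, 26.2, 13.1, 19.4, 23.3])

# Fit AUC0-t ~ b0 + b1*C(0.5 h) + b2*C(2 h) + b3*C(8 h) by least squares.
X = np.column_stack([np.ones(len(C)), C])
beta, *_ = np.linalg.lstsq(X, auc_obs, rcond=None)

auc_pred = X @ beta
r2 = float(1 - np.sum((auc_obs - auc_pred) ** 2)
           / np.sum((auc_obs - auc_obs.mean()) ** 2))
print(f"R^2 = {r2:.4f}")
```

In practice the fitted equation would then be screened against the R² > 0.90 criterion and validated on held-out subjects with mean prediction error, mean absolute prediction error, and root mean square error, as described above.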

  1. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions.

    Directory of Open Access Journals (Sweden)

    Jaroslav Bendl

    2016-05-01

    Full Text Available An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors.
A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variants.

  2. Effect of computational grid on accurate prediction of a wind turbine rotor using delayed detached-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bangga, Galih; Weihing, Pascal; Lutz, Thorsten; Krämer, Ewald [University of Stuttgart, Stuttgart (Germany)

    2017-05-15

    The present study focuses on the impact of the grid on accurate prediction of the MEXICO rotor under stalled conditions. Two different blade mesh topologies, O and C-H meshes, and two different grid resolutions are tested for several time step sizes. The simulations are carried out using delayed detached-eddy simulation (DDES) with two eddy viscosity RANS turbulence models, namely Spalart-Allmaras (SA) and Menter shear stress transport (SST) k-ω. A high order spatial discretization, the WENO (weighted essentially non-oscillatory) scheme, is used in these computations. The results are validated against measurement data with regard to the sectional loads and the chordwise pressure distributions. The C-H mesh topology is observed to give the best results employing the SST k-ω turbulence model, but the computational cost is higher because the grid contains a wake block that increases the number of cells.

  3. Non-isothermal kinetics model to predict accurate phase transformation and hardness of 22MnB5 boron steel

    Energy Technology Data Exchange (ETDEWEB)

    Bok, H.-H.; Kim, S.N.; Suh, D.W. [Graduate Institute of Ferrous Technology, POSTECH, San 31, Hyoja-dong, Nam-gu, Pohang, Gyeongsangbuk-do (Korea, Republic of); Barlat, F., E-mail: f.barlat@postech.ac.kr [Graduate Institute of Ferrous Technology, POSTECH, San 31, Hyoja-dong, Nam-gu, Pohang, Gyeongsangbuk-do (Korea, Republic of); Lee, M.-G., E-mail: myounglee@korea.ac.kr [Department of Materials Science and Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul (Korea, Republic of)

    2015-02-25

    A non-isothermal phase transformation kinetics model obtained by modifying the well-known JMAK approach is proposed for application to a low carbon boron steel (22MnB5) sheet. In the modified kinetics model, the parameters are functions of both temperature and cooling rate, and can be identified by a numerical optimization method. Moreover, in this approach the transformation start and finish temperatures are variable instead of the constants that depend on chemical composition. These variable reference temperatures are determined from the measured CCT diagram using dilatation experiments. The kinetics model developed in this work captures the complex transformation behavior of the boron steel sheet sample accurately. In particular, the predicted hardness and phase fractions in the specimens subjected to a wide range of cooling rates were validated by experiments.
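
    As a rough illustration of the modeling idea, a classical JMAK transformed-fraction curve can be given parameters that depend on both temperature and cooling rate; the functional forms and constants below are assumptions made for this sketch, not the authors' fitted model.

```python
import numpy as np

def jmak_fraction(t, k, n):
    """Classical JMAK transformed fraction X(t) = 1 - exp(-(k*t)^n)."""
    return 1.0 - np.exp(-(k * t) ** n)

# Illustrative rate constant depending on temperature and cooling rate
# (Arrhenius-like form with a linear cooling-rate correction; the constants
# k0, Q, and c are hypothetical, not the paper's identified parameters).
def k_modified(T, cooling_rate, k0=5e3, Q=8e3, c=0.05):
    return k0 * np.exp(-Q / T) * (1.0 + c * cooling_rate)

t = np.linspace(0.0, 60.0, 601)                    # time, s
X_slow = jmak_fraction(t, k_modified(900.0, 5.0), n=2.5)    # 5 K/s cooling
X_fast = jmak_fraction(t, k_modified(900.0, 50.0), n=2.5)   # 50 K/s cooling
```

In the actual model the parameters would be identified by numerical optimization against the measured CCT diagram, with variable transformation start and finish temperatures.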

  4. Non-isothermal kinetics model to predict accurate phase transformation and hardness of 22MnB5 boron steel

    International Nuclear Information System (INIS)

    Bok, H.-H.; Kim, S.N.; Suh, D.W.; Barlat, F.; Lee, M.-G.

    2015-01-01

    A non-isothermal phase transformation kinetics model obtained by modifying the well-known JMAK approach is proposed for application to a low carbon boron steel (22MnB5) sheet. In the modified kinetics model, the parameters are functions of both temperature and cooling rate, and can be identified by a numerical optimization method. Moreover, in this approach the transformation start and finish temperatures are variable instead of the constants that depend on chemical composition. These variable reference temperatures are determined from the measured CCT diagram using dilatation experiments. The kinetics model developed in this work captures the complex transformation behavior of the boron steel sheet sample accurately. In particular, the predicted hardness and phase fractions in the specimens subjected to a wide range of cooling rates were validated by experiments.

  5. Accurate prediction of acute fish toxicity of fragrance chemicals with the RTgill-W1 cell assay.

    Science.gov (United States)

    Natsch, Andreas; Laue, Heike; Haupt, Tina; von Niederhäusern, Valentin; Sanders, Gordon

    2018-03-01

    Testing for acute fish toxicity is an integral part of the environmental safety assessment of chemicals. A true replacement of primary fish tissue was recently proposed using cell viability in a fish gill cell line (RTgill-W1) as a means of predicting acute toxicity, showing good predictivity on 35 chemicals. To promote regulatory acceptance, the predictivity and applicability domain of novel tests need to be carefully evaluated on chemicals with existing high-quality in vivo data. We applied the RTgill-W1 cell assay to 38 fragrance chemicals with a wide range of both physicochemical properties and median lethal concentration (LC50) values and representing a diverse range of chemistries. A strong correlation (R² = 0.90-0.94) between the logarithmic in vivo LC50 values, based on fish mortality, and the logarithmic in vitro median effect concentration (EC50) values based on cell viability was observed. A leave-one-out analysis illustrates a median under-/overprediction from in vitro EC50 values to in vivo LC50 values by a factor of 1.5. This assay offers a simple, accurate, and reliable alternative to in vivo acute fish toxicity testing for chemicals, presumably acting mainly by a narcotic mode of action. Furthermore, the present study provides validation of the predictivity of the RTgill-W1 assay on a completely independent set of chemicals that had not been previously tested and indicates that fragrance chemicals are clearly within the applicability domain. Environ Toxicol Chem 2018;37:931-941. © 2017 SETAC.
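
    The correlation analysis reported above (log in vivo LC50 regressed on log in vitro EC50, with a fold-error summary in the spirit of the leave-one-out analysis) can be sketched on simulated values; the numbers below are illustrative, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated read-across (not the study's data): in vivo log LC50 tracks
# in vitro log EC50 with modest scatter.
log_ec50 = rng.uniform(-2.0, 3.0, 38)                        # log10 mg/L, cell assay
log_lc50 = 0.95 * log_ec50 - 0.1 + rng.normal(0.0, 0.15, 38) # log10 mg/L, fish

# Ordinary least squares on the log-log scale
slope, intercept = np.polyfit(log_ec50, log_lc50, 1)
pred = slope * log_ec50 + intercept
r2 = 1.0 - np.sum((log_lc50 - pred) ** 2) / np.sum((log_lc50 - log_lc50.mean()) ** 2)

# Median fold-difference between predicted and observed LC50
# (cf. the reported ~1.5x under-/overprediction)
median_fold = np.median(10.0 ** np.abs(log_lc50 - pred))
```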

  6. BlueDetect: An iBeacon-Enabled Scheme for Accurate and Energy-Efficient Indoor-Outdoor Detection and Seamless Location-Based Service

    Directory of Open Access Journals (Sweden)

    Han Zou

    2016-02-01

    Full Text Available The location and contextual status (indoor or outdoor) is fundamental and critical information for upper-layer applications, such as activity recognition and location-based services (LBS) for individuals. In addition, optimizations of building management systems (BMS), such as the pre-cooling or heating process of the air-conditioning system according to the human traffic entering or exiting a building, can utilize the information, as well. The emerging mobile devices, which are equipped with various sensors, become a feasible and flexible platform to perform indoor-outdoor (IO) detection. However, power-hungry sensors, such as GPS and WiFi, should be used with caution due to the constrained battery storage on mobile devices. We propose BlueDetect: an accurate, fast response and energy-efficient scheme for IO detection and seamless LBS running on the mobile device based on the emerging low-power iBeacon technology. By leveraging the on-board Bluetooth module and our proposed algorithms, BlueDetect provides a precise IO detection service that can turn on/off on-board power-hungry sensors smartly and automatically, optimize their performances and reduce the power consumption of mobile devices simultaneously. Moreover, seamless positioning and navigation services can be realized by it, especially in a semi-outdoor environment, which cannot be achieved by GPS or an indoor positioning system (IPS) easily. We prototype BlueDetect on Android mobile devices and evaluate its performance comprehensively. The experimental results have validated the superiority of BlueDetect in terms of IO detection accuracy, localization accuracy and energy consumption.

  7. BlueDetect: An iBeacon-Enabled Scheme for Accurate and Energy-Efficient Indoor-Outdoor Detection and Seamless Location-Based Service.

    Science.gov (United States)

    Zou, Han; Jiang, Hao; Luo, Yiwen; Zhu, Jianjie; Lu, Xiaoxuan; Xie, Lihua

    2016-02-22

    The location and contextual status (indoor or outdoor) is fundamental and critical information for upper-layer applications, such as activity recognition and location-based services (LBS) for individuals. In addition, optimizations of building management systems (BMS), such as the pre-cooling or heating process of the air-conditioning system according to the human traffic entering or exiting a building, can utilize the information, as well. The emerging mobile devices, which are equipped with various sensors, become a feasible and flexible platform to perform indoor-outdoor (IO) detection. However, power-hungry sensors, such as GPS and WiFi, should be used with caution due to the constrained battery storage on mobile devices. We propose BlueDetect: an accurate, fast response and energy-efficient scheme for IO detection and seamless LBS running on the mobile device based on the emerging low-power iBeacon technology. By leveraging the on-board Bluetooth module and our proposed algorithms, BlueDetect provides a precise IO detection service that can turn on/off on-board power-hungry sensors smartly and automatically, optimize their performances and reduce the power consumption of mobile devices simultaneously. Moreover, seamless positioning and navigation services can be realized by it, especially in a semi-outdoor environment, which cannot be achieved by GPS or an indoor positioning system (IPS) easily. We prototype BlueDetect on Android mobile devices and evaluate its performance comprehensively. The experimental results have validated the superiority of BlueDetect in terms of IO detection accuracy, localization accuracy and energy consumption.
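
    A minimal sketch of the core idea, assuming beacons are deployed indoors so that a sustained strong RSSI implies an indoor state; the class name, window size, and thresholds below are hypothetical illustrations, not BlueDetect's actual algorithm.

```python
from collections import deque

# Hypothetical indoor/outdoor detector from smoothed iBeacon RSSI.
# Assumption for this sketch: beacons are indoors, so a strong moving-average
# RSSI suggests "indoor" and a weak/absent signal suggests "outdoor".
class IODetector:
    NO_SIGNAL = -100  # dBm substituted when no beacon advertisement is heard

    def __init__(self, window=5, indoor_dbm=-85.0):
        self.rssi = deque(maxlen=window)   # sliding window of recent samples
        self.indoor_dbm = indoor_dbm       # illustrative decision threshold

    def update(self, rssi_dbm):
        """Feed one RSSI sample (None if no beacon heard); return the state."""
        self.rssi.append(self.NO_SIGNAL if rssi_dbm is None else rssi_dbm)
        avg = sum(self.rssi) / len(self.rssi)
        return "indoor" if avg > self.indoor_dbm else "outdoor"

# Walking out of beacon range: strong readings, then silence.
det = IODetector()
states = [det.update(r) for r in [-70, -72, -68, None, None, None, None, None]]
```

The smoothing window trades response speed against robustness to dropped advertisements; the real scheme additionally uses the IO state to gate power-hungry sensors such as GPS.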

  8. Genome-enabled methods for predicting litter size in pigs: a comparison.

    Science.gov (United States)

    Tusell, L; Pérez-Rodríguez, P; Forni, S; Wu, X-L; Gianola, D

    2013-11-01

    Predictive ability of models for litter size in swine on the basis of different sources of genetic information was investigated. Data represented average litter size on 2598, 1604 and 1897 60K genotyped sows from two purebred and one crossbred line, respectively. The average correlation (r) between observed and predicted phenotypes in a 10-fold cross-validation was used to assess predictive ability. Models were: pedigree-based mixed-effects model (PED), Bayesian ridge regression (BRR), Bayesian LASSO (BL), genomic BLUP (GBLUP), reproducing kernel Hilbert spaces regression (RKHS), Bayesian regularized neural networks (BRNN) and radial basis function neural networks (RBFNN). BRR and BL used the marker matrix or its principal component scores matrix (UD) as covariates; RKHS employed a Gaussian kernel with additive codes for markers whereas neural networks employed the additive genomic relationship matrix (G) or UD as inputs. The non-parametric models (RKHS, BRNN, RBFNN) gave similar predictions to the parametric counterparts (average r ranged from 0.15 to 0.23); most of the genome-based models outperformed PED (r = 0.16). Predictive abilities of linear models and RKHS were similar over lines, but BRNN varied markedly, giving the best prediction (r = 0.31) when G was used in crossbreds, but the worst (r = 0.02) when the G matrix was used in one of the purebred lines. The r values for RBFNN ranged from 0.16 to 0.23. Predictive ability was better in crossbreds (0.26) than in purebreds (0.15 to 0.22). This may be related to family structure in the purebred lines.

  9. Predicting suitable optoelectronic properties of monoclinic VON semiconductor crystals for photovoltaics using accurate first-principles computations

    KAUST Repository

    Harb, Moussab

    2015-08-26

    Using accurate first-principles quantum calculations based on DFT (including the perturbation theory DFPT) with the range-separated hybrid HSE06 exchange-correlation functional, we predict essential fundamental properties (such as bandgap, optical absorption coefficient, dielectric constant, charge carrier effective masses and exciton binding energy) of two stable monoclinic vanadium oxynitride (VON) semiconductor crystals for solar energy conversion applications. In addition to the predicted band gaps in the optimal range for making single-junction solar cells, both polymorphs exhibit relatively high absorption efficiencies in the visible range, high dielectric constants, high charge carrier mobilities and much lower exciton binding energies than the thermal energy at room temperature. Moreover, their optical absorption, dielectric and exciton dissociation properties are found to be better than those obtained for semiconductors frequently utilized in photovoltaic devices like Si, CdTe and GaAs. These novel results offer a great opportunity for this stoichiometric VON material to be properly synthesized and considered as a promising new candidate for photovoltaic applications.

  10. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory

    International Nuclear Information System (INIS)

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S.; Shirley, Eric L.; Prendergast, David

    2017-01-01

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  11. Mathematical models for accurate prediction of atmospheric visibility with particular reference to the seasonal and environmental patterns in Hong Kong.

    Science.gov (United States)

    Mui, K W; Wong, L T; Chung, L Y

    2009-11-01

    Atmospheric visibility impairment has gained increasing concern as it is associated with the existence of a number of aerosols as well as common air pollutants and produces unfavorable conditions for observation, dispersion, and transportation. This study analyzed the atmospheric visibility data measured in urban and suburban Hong Kong (two selected stations) with respect to time-matched mass concentrations of common air pollutants including nitrogen dioxide (NO(2)), nitrogen monoxide (NO), respirable suspended particulates (PM(10)), sulfur dioxide (SO(2)), carbon monoxide (CO), and meteorological parameters including air temperature, relative humidity, and wind speed. No significant difference in atmospheric visibility was reported between the two measurement locations (p ≥ 0.6, t test), and good atmospheric visibility was observed more frequently in summer and autumn than in winter and spring. Atmospheric visibility increased with temperature but decreased with the concentrations of SO(2), CO, PM(10), NO, and NO(2). The results showed that atmospheric visibility was season dependent and had significant correlations with temperature, the mass concentrations of PM(10) and NO(2), and the air pollution index API (correlation coefficients |R| ≥ 0.7). Mathematical models for the prediction of atmospheric visibility were thus proposed. By comparison, the proposed visibility prediction models were more accurate than some existing regional models. In addition to improving visibility prediction accuracy, this study would be useful for understanding the context of low atmospheric visibility, exploring possible remedial measures, and evaluating the impact of air pollution and atmospheric visibility impairment in this region.

  12. Heteroscedastic ridge regression approaches for genome-wide prediction with a focus on computational efficiency and accurate effect estimation.

    Science.gov (United States)

    Hofheinz, Nina; Frisch, Matthias

    2014-03-20

    Ridge regression with heteroscedastic marker variances provides an alternative to Bayesian genome-wide prediction methods. Our objectives were to suggest new methods to determine marker-specific shrinkage factors for heteroscedastic ridge regression and to investigate their properties with respect to computational efficiency and accuracy of estimated effects. We analyzed published data sets of maize, wheat, and sugar beet as well as simulated data with the new methods. Ridge regression with shrinkage factors that were proportional to single-marker analysis of variance estimates of variance components (i.e., RRWA) was the fastest method. It required computation times of less than 1 sec for medium-sized data sets, which have dimensions that are common in plant breeding. A modification of the expectation-maximization algorithm that yields heteroscedastic marker variances (i.e., RMLV) resulted in the most accurate marker effect estimates. It outperformed the homoscedastic ridge regression approach for best linear unbiased prediction in particular for situations with high marker density and strong linkage disequilibrium along the chromosomes, a situation that occurs often in plant breeding populations. We conclude that the RRWA and RMLV approaches provide alternatives to the commonly used Bayesian methods, in particular for applications in which computational feasibility or accuracy of effect estimates are important, such as detection or functional analysis of genes or planning crosses.
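
    The contrast between homoscedastic ridge regression and marker-specific shrinkage can be sketched as below; the weighting scheme shown is a simplified stand-in inspired by the RRWA idea (shrinkage tied to single-marker variance estimates), not the authors' exact estimator, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy genomic data: n lines, p markers, with 10 markers carrying real effects.
n, p = 200, 500
Z = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 marker codes
Z -= Z.mean(axis=0)                                 # center columns
beta_true = np.zeros(p)
beta_true[:10] = rng.normal(0.0, 1.0, 10)
y = Z @ beta_true + rng.normal(0.0, 1.0, n)

# Homoscedastic ridge: one shrinkage factor for all markers.
lam = 50.0
beta_hom = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

# Heteroscedastic ridge: marker-specific shrinkage inversely proportional to a
# single-marker signal estimate, so apparently informative markers are shrunk
# less (a simplified stand-in for the paper's RRWA weighting).
single_marker_var = np.maximum((Z * y[:, None]).mean(axis=0) ** 2, 1e-6)
D = np.diag(lam * single_marker_var.mean() / single_marker_var)
beta_het = np.linalg.solve(Z.T @ Z + D, Z.T @ y)
```

Both solves cost one p×p factorization; the heteroscedastic variant only changes the diagonal penalty, which is why such weighted ridge schemes remain far cheaper than MCMC-based Bayesian alternatives.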

  13. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    Science.gov (United States)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general purpose high Reynolds free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non-standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows accurate prediction of the evolution of the free surface even in the presence of violent breaking wave phenomena, maintaining the interface sharp, without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets, several energetic breaking waves and plungings. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for validation.

  14. Respiratory variation in peak aortic velocity accurately predicts fluid responsiveness in children undergoing neurosurgery under general anesthesia.

    Science.gov (United States)

    Morparia, Kavita G; Reddy, Srijaya K; Olivieri, Laura J; Spaeder, Michael C; Schuette, Jennifer J

    2018-04-01

    The determination of fluid responsiveness in the critically ill child is of vital importance, more so as fluid overload becomes increasingly associated with worse outcomes. Dynamic markers of volume responsiveness have shown some promise in the pediatric population, but more research is needed before they can be adopted for widespread use. Our aim was to investigate the effectiveness of respiratory variation in peak aortic velocity and pulse pressure variation to predict fluid responsiveness, and to determine their optimal cutoff values. We performed a prospective, observational study at a single tertiary care pediatric center. Twenty-one children with normal cardiorespiratory status undergoing general anesthesia for neurosurgery were enrolled. Respiratory variation in peak aortic velocity (ΔVpeak ao) was measured both before and after volume expansion using a bedside ultrasound device. The pulse pressure variation (PPV) value was obtained from the bedside monitor. All patients received a 10 ml/kg fluid bolus as volume expansion, and were qualified as responders if stroke volume increased >15% as a result. The utility of ΔVpeak ao and PPV to predict responsiveness to volume expansion was investigated. A baseline ΔVpeak ao value of greater than or equal to 12.3% best predicted a positive response to volume expansion, with a sensitivity of 77%, specificity of 89% and area under the receiver operating characteristic curve of 0.90. PPV failed to demonstrate utility in this patient population. Respiratory variation in peak aortic velocity is a promising marker for optimization of perioperative fluid therapy in the pediatric population and can be accurately measured using bedside ultrasonography. More research is needed to evaluate the lack of effectiveness of pulse pressure variation for this purpose.
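
    The cutoff evaluation described above (sensitivity, specificity, and ROC area at a ΔVpeak ao threshold) can be sketched on simulated data; the cohort values below are illustrative, and only the 12.3% threshold is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated cohort (not the study's patients): baseline ΔVpeak ao (%) for
# fluid responders and non-responders.
responders = rng.normal(16.0, 3.0, 13)
non_responders = rng.normal(9.0, 2.0, 8)

threshold = 12.3  # cutoff reported in the abstract

sens = np.mean(responders >= threshold)     # sensitivity at the cutoff
spec = np.mean(non_responders < threshold)  # specificity at the cutoff

# ROC AUC via the Mann-Whitney statistic:
# P(responder value > non-responder value), ties counted half.
diffs = responders[:, None] - non_responders[None, :]
auc = np.mean(diffs > 0) + 0.5 * np.mean(diffs == 0)
```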

  15. Genome-enabled prediction models for yield related traits in chickpea

    Science.gov (United States)

    Genomic selection (GS) unlike marker-assisted backcrossing (MABC) predicts breeding values of lines using genome-wide marker profiling and allows selection of lines prior to field-phenotyping, thereby shortening the breeding cycle. A collection of 320 elite breeding lines was selected and phenotyped...

  16. QSAR enabled predictions in water treatment: from data to mechanisms and vice-versa

    NARCIS (Netherlands)

    Vries, D.; Wols, B.A.; de Voogt, P.

    2012-01-01

    The efficiency of water treatment systems to remove emerging (chemical) substances is often unknown. Consequently, the prediction of the removal of contaminants in the treatment and supply chain of drinking water is of great interest. By collecting and processing existing chemical properties of

  17. Removal efficiency calculated beforehand: QSAR enabled predictions for nanofiltration and advanced oxidation

    NARCIS (Netherlands)

    Vries, D; Wols, B.A.; de Voogt, P.

    2013-01-01

    The efficiency of water treatment systems in removing emerging (chemical) substances is often unknown. Consequently, the prediction of the removal of contaminants in the treatment and supply chain of drinking water is of great interest. By collecting and processing existing chemical properties of

  18. GTfold: Enabling parallel RNA secondary structure prediction on multi-core desktops

    DEFF Research Database (Denmark)

    Swenson, M Shel; Anderson, Joshua; Ash, Andrew

    2012-01-01

    achieved significant improvements in runtime, but their implementations were not portable from niche high-performance computers or easily accessible to most RNA researchers. With the increasing prevalence of multi-core desktop machines, a new parallel prediction program is needed to take full advantage...

  19. A novel prognostic nomogram accurately predicts hepatocellular carcinoma recurrence after liver transplantation: analysis of 865 consecutive liver transplant recipients.

    Science.gov (United States)

    Agopian, Vatche G; Harlander-Locke, Michael; Zarrinpar, Ali; Kaldas, Fady M; Farmer, Douglas G; Yersiz, Hasan; Finn, Richard S; Tong, Myron; Hiatt, Jonathan R; Busuttil, Ronald W

    2015-04-01

    radiographic size criteria significantly improves the ability to predict post-transplant recurrence, and should be considered in recipient selection. A novel clinicopathologic prognostic nomogram accurately predicts HCC recurrence after LT and may guide frequency of post-transplantation surveillance and adjuvant therapy. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  20. Combining information of autonomic modulation and CGM measurements enables prediction and improves detection of spontaneous hypoglycemic events

    DEFF Research Database (Denmark)

    Cichosz, Simon Lebech; Frystyk, Jan; Tarnow, Lise

    2015-01-01

    We have previously tested, in a laboratory setting, a novel algorithm that enables prediction of hypoglycemia. The algorithm integrates information of autonomic modulation, based on heart rate variability (HRV), and data based on a continuous glucose monitoring (CGM) device. Now, we investigate whether the algorithm is suitable for prediction of hypoglycemia and for improvement of hypoglycemic detection during normal daily activities. Twenty-one adults (13 men) with T1D prone to hypoglycemia were recruited and monitored with CGM and a Holter device while they performed normal daily activities. We used our developed algorithm (a pattern classification method) to predict spontaneous hypoglycemia based on CGM and HRV. We compared 3 different models: (i) a model containing raw data from the CGM device; (ii) a CGM* model containing data derived from the CGM device signal; and (iii) a CGM…

  1. Repurposing High-Throughput Image Assays Enables Biological Activity Prediction for Drug Discovery.

    Science.gov (United States)

    Simm, Jaak; Klambauer, Günter; Arany, Adam; Steijaert, Marvin; Wegner, Jörg Kurt; Gustin, Emmanuel; Chupakhin, Vladimir; Chong, Yolanda T; Vialard, Jorge; Buijnsters, Peter; Velter, Ingrid; Vapirev, Alexander; Singh, Shantanu; Carpenter, Anne E; Wuyts, Roel; Hochreiter, Sepp; Moreau, Yves; Ceulemans, Hugo

    2018-02-16

    In both academia and the pharmaceutical industry, large-scale assays for drug discovery are expensive and often impractical, particularly for the increasingly important physiologically relevant model systems that require primary cells, organoids, whole organisms, or expensive or rare reagents. We hypothesized that data from a single high-throughput imaging assay can be repurposed to predict the biological activity of compounds in other assays, even those targeting alternate pathways or biological processes. Indeed, quantitative information extracted from a three-channel microscopy-based screen for glucocorticoid receptor translocation was able to predict assay-specific biological activity in two ongoing drug discovery projects. In these projects, repurposing increased hit rates by 50- to 250-fold over that of the initial project assays while increasing the chemical structure diversity of the hits. Our results suggest that data from high-content screens are a rich source of information that can be used to predict and replace customized biological assays. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Genome-Enabled Prediction of Breeding Values for Feedlot Average Daily Weight Gain in Nelore Cattle

    Directory of Open Access Journals (Sweden)

    Adriana L. Somavilla

    2017-06-01

    Full Text Available Nelore is the most economically important cattle breed in Brazil, and the use of genetically improved animals has contributed to increased beef production efficiency. The Brazilian beef feedlot industry has grown considerably in the last decade, so the selection of animals with higher growth rates on feedlot has become quite important. Genomic selection (GS) could be used to reduce generation intervals and improve the rate of genetic gains. The aim of this study was to evaluate the prediction of genomic-estimated breeding values (GEBV) for average daily weight gain (ADG) in 718 feedlot-finished Nelore steers. Analyses of three Bayesian model specifications [Bayesian GBLUP (BGBLUP), BayesA, and BayesCπ] were performed with four genotype panels [Illumina BovineHD BeadChip, TagSNPs, and GeneSeek High- and Low-density indicus (HDi and LDi, respectively)]. Estimates of Pearson correlations, regression coefficients, and mean squared errors were used to assess accuracy and bias of predictions. Overall, the BayesCπ model resulted in less biased predictions. Accuracies ranged from 0.18 to 0.27, which are reasonable values given the heritability estimates (from 0.40 to 0.44) and sample size (568 animals) in the training population. Furthermore, results from Bos taurus indicus panels were as informative as those from Illumina BovineHD, indicating that they could be used to implement GS at lower costs.
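
    The accuracy and bias metrics named above (Pearson correlation, regression coefficient, and mean squared error) can be sketched as follows; the GEBV and phenotype values are simulated for illustration, not the Nelore data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated validation fold (illustrative only): predicted breeding values
# and the phenotypes they are evaluated against.
n = 150
gebv = rng.normal(0.0, 1.0, n)                  # genomic predictions
phen = 0.45 * gebv + rng.normal(0.0, 1.0, n)    # phenotypes, modest accuracy

# Metrics used to assess accuracy and bias of predictions
acc = np.corrcoef(gebv, phen)[0, 1]             # predictive accuracy (Pearson r)
slope = np.polyfit(gebv, phen, 1)[0]            # regression of phenotype on GEBV
mse = np.mean((phen - gebv) ** 2)               # mean squared error

# A regression coefficient near 1 indicates unbiased predictions;
# a slope below 1 indicates inflated (over-dispersed) GEBVs.
```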

  3. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    Science.gov (United States)

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  4. Homogeneous datasets of triple negative breast cancers enable the identification of novel prognostic and predictive signatures.

    Directory of Open Access Journals (Sweden)

    Thomas Karn

    Full Text Available BACKGROUND: Current prognostic gene signatures for breast cancer mainly reflect proliferation status and have limited value in triple-negative (TNBC) cancers. The identification of prognostic signatures from TNBC cohorts was limited in the past due to small sample sizes. METHODOLOGY/PRINCIPAL FINDINGS: We assembled all currently publicly available TNBC gene expression datasets generated on Affymetrix gene chips. Inter-laboratory variation was minimized by filtering methods for both samples and genes. Supervised analysis was performed to identify prognostic signatures from 394 cases, which were subsequently tested on an independent validation cohort (n = 261 cases). CONCLUSIONS/SIGNIFICANCE: Using two distinct false discovery rate thresholds, 25% and <3.5%, a larger (n = 264 probesets) and a smaller (n = 26 probesets) prognostic gene set were identified and used as prognostic predictors. Most of these genes were positively associated with poor prognosis and correlated to metagenes for inflammation and angiogenesis. No correlation to other previously published prognostic signatures (recurrence score, genomic grade index, 70-gene signature, wound response signature, 7-gene immune response module, stroma derived prognostic predictor, and a medullary like signature) was observed. In multivariate analyses in the validation cohort the two signatures showed hazard ratios of 4.03 (95% confidence interval [CI] 1.71-9.48; P = 0.001) and 4.08 (95% CI 1.79-9.28; P = 0.001), respectively. The 10-year event-free survival was 70% for the good risk and 20% for the high risk group. The 26-gene signature had modest predictive value (AUC = 0.588) to predict response to neoadjuvant chemotherapy; however, the combination of a B-cell metagene with the prognostic signatures increased its response predictive value. We identified a 264-gene prognostic signature for TNBC which is unrelated to previously known prognostic signatures.

  5. A Variational Bayes Genomic-Enabled Prediction Model with Genotype × Environment Interaction

    Directory of Open Access Journals (Sweden)

    Osval A. Montesinos-López

    2017-06-01

    Full Text Available There are Bayesian and non-Bayesian genomic models that take into account G×E interactions. However, the computational cost of implementing Bayesian models is high, and becomes almost impossible when the number of genotypes, environments, and traits is very large, while, in non-Bayesian models, there are often important and unsolved convergence problems. The variational Bayes method is popular in machine learning, and, by approximating the probability distributions through optimization, it tends to be faster than Markov Chain Monte Carlo methods. For this reason, in this paper, we propose a new genomic variational Bayes version of the Bayesian genomic model with G×E using half-t priors on each standard deviation (SD) term to guarantee highly noninformative priors and posterior inferences that are not sensitive to the choice of hyper-parameters. We show the complete theoretical derivation of the full conditional and the variational posterior distributions, and their implementations. We used eight experimental genomic maize and wheat data sets to illustrate the new proposed variational Bayes approximation, and compared its predictions and implementation time with a standard Bayesian genomic model with G×E. Results indicated that prediction accuracies are slightly higher in the standard Bayesian model with G×E than in its variational counterpart, but, in terms of computation time, the variational Bayes genomic model with G×E is, in general, 10 times faster than the conventional Bayesian genomic model with G×E. For this reason, the proposed model may be a useful tool for researchers who need to predict and select genotypes in several environments.

  6. Accurate prediction of immunogenic T-cell epitopes from epitope sequences using the genetic algorithm-based ensemble learning.

    Science.gov (United States)

    Zhang, Wen; Niu, Yanqing; Zou, Hua; Luo, Longqiang; Liu, Qianchao; Wu, Weijian

    2015-01-01

    T-cell epitopes play an important role in the T-cell immune response, and they are critical components in epitope-based vaccine design. Immunogenicity is the ability to trigger an immune response. The accurate prediction of immunogenic T-cell epitopes is significant for designing useful vaccines and understanding the immune system. In this paper, we attempt to differentiate immunogenic epitopes from non-immunogenic epitopes based on their primary structures. First of all, we explore a variety of sequence-derived features, and analyze their relationship with epitope immunogenicity. To effectively utilize various features, a genetic algorithm (GA)-based ensemble method is proposed to determine the optimal feature subset and develop a high-accuracy ensemble model. In the GA optimization, a chromosome represents a feature subset in the search space. For each feature subset, the selected features are utilized to construct the base predictors, and an ensemble model is developed by taking the average of outputs from the base predictors. The objective of the GA is to search for the optimal feature subset which leads to the ensemble model with the best cross-validation AUC (area under ROC curve) on the training set. Two datasets named 'IMMA2' and 'PAAQD' are adopted as the benchmark datasets. Compared with the state-of-the-art methods POPI, POPISK, PAAQD and our previous method, the GA-based ensemble method produces much better performance, achieving an AUC score of 0.846 on the IMMA2 dataset and an AUC score of 0.829 on the PAAQD dataset. The statistical analysis demonstrates that the performance improvements of the GA-based ensemble method are statistically significant. The proposed method is a promising tool for predicting immunogenic epitopes. The source codes and datasets are available in S1 File.
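
    The GA loop this record describes (chromosome = binary feature mask, fitness = AUC of an averaged ensemble of base predictors) can be sketched in a few dozen lines. The data and the single-feature "base predictors" below are toy stand-ins for the paper's trained base models, purely to show the mechanics of the search:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy data: 300 "epitopes" x 20 features; only the first 5 are informative.
    X = rng.normal(size=(300, 20))
    y = (X[:, :5].sum(axis=1) + rng.normal(0, 1.0, 300) > 0).astype(int)

    def auc(scores, labels):
        """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
        order = np.argsort(scores)
        ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
        n_pos, n_neg = labels.sum(), (1 - labels).sum()
        return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

    def fitness(mask):
        """Ensemble fitness: AUC of the averaged single-feature base predictors."""
        if mask.sum() == 0:
            return 0.0
        # orient each selected feature so it scores positives higher
        signs = np.sign([auc(X[:, j], y) - 0.5 for j in np.where(mask)[0]])
        return auc((X[:, mask] * signs).mean(axis=1), y)

    # A minimal GA: tournament selection, uniform crossover, bit-flip mutation.
    pop = rng.integers(0, 2, size=(30, 20)).astype(bool)
    for gen in range(25):
        fit = np.array([fitness(m) for m in pop])
        new = [pop[fit.argmax()].copy()]                      # elitism
        while len(new) < len(pop):
            a, b = rng.choice(len(pop), 2, replace=False)
            p1 = pop[a] if fit[a] >= fit[b] else pop[b]
            a, b = rng.choice(len(pop), 2, replace=False)
            p2 = pop[a] if fit[a] >= fit[b] else pop[b]
            child = np.where(rng.random(20) < 0.5, p1, p2)    # uniform crossover
            child ^= rng.random(20) < 0.05                    # bit-flip mutation
            new.append(child)
        pop = np.array(new)

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected features:", np.where(best)[0], "AUC:", round(fitness(best), 3))
    ```

    The paper evaluates fitness by cross-validated AUC rather than the training AUC used here; that substitution keeps the sketch short but would need to change in any real use.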

  7. Accurate prediction of immunogenic T-cell epitopes from epitope sequences using the genetic algorithm-based ensemble learning.

    Directory of Open Access Journals (Sweden)

    Wen Zhang

    Full Text Available T-cell epitopes play an important role in the T-cell immune response, and they are critical components in epitope-based vaccine design. Immunogenicity is the ability to trigger an immune response. The accurate prediction of immunogenic T-cell epitopes is significant for designing useful vaccines and understanding the immune system. In this paper, we attempt to differentiate immunogenic epitopes from non-immunogenic epitopes based on their primary structures. First of all, we explore a variety of sequence-derived features, and analyze their relationship with epitope immunogenicity. To effectively utilize various features, a genetic algorithm (GA)-based ensemble method is proposed to determine the optimal feature subset and develop a high-accuracy ensemble model. In the GA optimization, a chromosome represents a feature subset in the search space. For each feature subset, the selected features are utilized to construct the base predictors, and an ensemble model is developed by taking the average of outputs from the base predictors. The objective of the GA is to search for the optimal feature subset which leads to the ensemble model with the best cross-validation AUC (area under ROC curve) on the training set. Two datasets named 'IMMA2' and 'PAAQD' are adopted as the benchmark datasets. Compared with the state-of-the-art methods POPI, POPISK, PAAQD and our previous method, the GA-based ensemble method produces much better performance, achieving an AUC score of 0.846 on the IMMA2 dataset and an AUC score of 0.829 on the PAAQD dataset. The statistical analysis demonstrates that the performance improvements of the GA-based ensemble method are statistically significant. The proposed method is a promising tool for predicting immunogenic epitopes. The source codes and datasets are available in S1 File.

  8. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Directory of Open Access Journals (Sweden)

    Ickwon Choi

    2015-04-01

    Full Text Available The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody-dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.

  9. A rapid and accurate approach for prediction of interactomes from co-elution data (PrInCE).

    Science.gov (United States)

    Stacey, R Greg; Skinnider, Michael A; Scott, Nichollas E; Foster, Leonard J

    2017-10-23

    An organism's protein interactome, or complete network of protein-protein interactions, defines the protein complexes that drive cellular processes. Techniques for studying protein complexes have traditionally applied targeted strategies such as yeast two-hybrid or affinity purification-mass spectrometry to assess protein interactions. However, given the vast number of protein complexes, more scalable methods are necessary to accelerate interaction discovery and to construct whole interactomes. We recently developed a complementary technique based on the use of protein correlation profiling (PCP) and stable isotope labeling in amino acids in cell culture (SILAC) to assess chromatographic co-elution as evidence of interacting proteins. Importantly, PCP-SILAC is also capable of measuring protein interactions simultaneously under multiple biological conditions, allowing the detection of treatment-specific changes to an interactome. Given the uniqueness and high dimensionality of co-elution data, new tools are needed to compare protein elution profiles, control false discovery rates, and construct an accurate interactome. Here we describe a freely available bioinformatics pipeline, PrInCE, for the analysis of co-elution data. PrInCE is a modular, open-source library that is computationally inexpensive, able to use label and label-free data, and capable of detecting tens of thousands of protein-protein interactions. Using a machine learning approach, PrInCE offers greatly reduced run time, more predicted interactions at the same stringency, prediction of protein complexes, and greater ease of use over previous bioinformatics tools for co-elution data. PrInCE is implemented in Matlab (version R2017a). Source code and standalone executable programs for Windows and Mac OSX are available at https://github.com/fosterlab/PrInCE , where usage instructions can be found. An example dataset and output are also provided for testing purposes. PrInCE is the first fast and easy

  10. Genomic-Enabled Prediction Based on Molecular Markers and Pedigree Using the Bayesian Linear Regression Package in R

    Directory of Open Access Journals (Sweden)

    Paulino Pérez

    2010-09-01

    Full Text Available The availability of dense molecular markers has made possible the use of genomic selection in plant and animal breeding. However, models for genomic selection pose several computational and statistical challenges and require specialized computer programs, not always available to the end user and not implemented in standard statistical software yet. The R-package BLR (Bayesian Linear Regression) implements several statistical procedures (e.g., Bayesian Ridge Regression, Bayesian LASSO) in a unified framework that allows including marker genotypes and pedigree data jointly. This article describes the classes of models implemented in the BLR package and illustrates their use through examples. Some challenges faced when applying genomic-enabled selection, such as model choice, evaluation of predictive ability through cross-validation, and choice of hyper-parameters, are also addressed.

  11. Genomic-Enabled Prediction Based on Molecular Markers and Pedigree Using the Bayesian Linear Regression Package in R.

    Science.gov (United States)

    Pérez, Paulino; de Los Campos, Gustavo; Crossa, José; Gianola, Daniel

    2010-01-01

    The availability of dense molecular markers has made possible the use of genomic selection in plant and animal breeding. However, models for genomic selection pose several computational and statistical challenges and require specialized computer programs, not always available to the end user and not implemented in standard statistical software yet. The R-package BLR (Bayesian Linear Regression) implements several statistical procedures (e.g., Bayesian Ridge Regression, Bayesian LASSO) in a unified framework that allows including marker genotypes and pedigree data jointly. This article describes the classes of models implemented in the BLR package and illustrates their use through examples. Some challenges faced when applying genomic-enabled selection, such as model choice, evaluation of predictive ability through cross-validation, and choice of hyper-parameters, are also addressed.

  12. Do Skilled Elementary Teachers Hold Scientific Conceptions and Can They Accurately Predict the Type and Source of Students' Preconceptions of Electric Circuits?

    Science.gov (United States)

    Lin, Jing-Wen

    2016-01-01

    Holding scientific conceptions and having the ability to accurately predict students' preconceptions are a prerequisite for science teachers to design appropriate constructivist-oriented learning experiences. This study explored the types and sources of students' preconceptions of electric circuits. First, 438 grade 3 (9 years old) students were…

  13. Towards accurate prediction of unbalance response, oil whirl and oil whip of flexible rotors supported by hydrodynamic bearings

    NARCIS (Netherlands)

    Eling, R.P.T.; te Wierik, M.; van Ostayen, R.A.J.; Rixen, D.J.

    2016-01-01

    Journal bearings are used to support rotors in a wide range of applications. In order to ensure reliable operation, accurate analyses of these rotor-bearing systems are crucial. Coupled analysis of the rotor and the journal bearing is essential in the case that the rotor is flexible. The accuracy of

  14. Mini-Mental Status Examination: a short form of MMSE was as accurate as the original MMSE in predicting dementia

    DEFF Research Database (Denmark)

    Schultz-Larsen, Kirsten; Lomholt, Rikke Kirstine; Kreiner, Svend

    2006-01-01

    OBJECTIVES: This study assesses the properties of the Mini-Mental State Examination (MMSE) with the purpose of improving the efficiencies of the methods of screening for cognitive impairment and dementia. A specific purpose was to determine whether an abbreviated version would be as accurate...

  15. Total reference air kerma can accurately predict isodose surface volumes in cervix cancer brachytherapy. A multicenter study

    DEFF Research Database (Denmark)

    Nkiwane, Karen S; Andersen, Else; Champoudry, Jerome

    2017-01-01

    PURPOSE: To demonstrate that V60 Gy, V75 Gy, and V85 Gy isodose surface volumes can be accurately estimated from total reference air kerma (TRAK) in cervix cancer MRI-guided brachytherapy (BT). METHODS AND MATERIALS: 60 Gy, 75 Gy, and 85 Gy isodose surface volumes levels were obtained from...

  16. Cross-species mapping of bidirectional promoters enables prediction of unannotated 5' UTRs and identification of species-specific transcripts

    Directory of Open Access Journals (Sweden)

    Lewin Harris A

    2009-04-01

    Full Text Available Abstract Background Bidirectional promoters are shared regulatory regions that influence the expression of two oppositely oriented genes. This type of regulatory architecture is found more frequently than expected by chance in the human genome, yet many specifics underlying the regulatory design are unknown. Given that the function of most orthologous genes is similar across species, we hypothesized that the architecture and regulation of bidirectional promoters might also be similar across species, representing a core regulatory structure and enabling annotation of these regions in additional mammalian genomes. Results By mapping the intergenic distances of genes in human, chimpanzee, bovine, murine, and rat, we show an enrichment for pairs of genes equal to or less than 1,000 bp between their adjacent 5' ends ("head-to-head") compared to pairs of genes that fall in the same orientation ("head-to-tail") or whose 3' ends are side-by-side ("tail-to-tail"). A representative set of 1,369 human bidirectional promoters was mapped to orthologous sequences in other mammals. We confirmed predictions for 5' UTRs in nine of ten manual picks in bovine based on comparison to the orthologous human promoter set and in six of seven predictions in human based on comparison to the bovine dataset. The two predictions that did not have orthology as bidirectional promoters in the other species resulted from unique events that initiated transcription in the opposite direction in only those species. We found evidence supporting the independent emergence of bidirectional promoters from the family of five RecQ helicase genes, which gained their bidirectional promoters and partner genes independently rather than through a duplication process. Furthermore, by expanding our comparisons from pairwise to multispecies analyses we developed a map representing a core set of bidirectional promoters in mammals. Conclusion We show that the orthologous positions of bidirectional
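
    The orientation-and-distance criterion described in the Results (adjacent 5' ends of divergently transcribed genes within 1,000 bp) reduces to a simple classifier over gene annotations. A sketch using hypothetical minimal gene records of the form (start, end, strand):

    ```python
    # Classify adjacent gene pairs into the head-to-head / head-to-tail /
    # tail-to-tail categories used in the record. Gene tuples are hypothetical
    # minimal annotations: (start, end, strand).

    def five_prime(gene):
        start, end, strand = gene
        return start if strand == "+" else end

    def classify_pair(left, right, max_bp=1000):
        """left/right are adjacent genes ordered by genomic coordinate."""
        if left[2] == "-" and right[2] == "+":
            # divergent pair: 5' ends face each other
            gap = five_prime(right) - five_prime(left)
            if 0 <= gap <= max_bp:
                return "head-to-head (bidirectional promoter)"
            return "head-to-head"
        if left[2] == "+" and right[2] == "-":
            return "tail-to-tail"
        return "head-to-tail"

    # Divergent pair with 5' ends 400 bp apart -> bidirectional promoter candidate
    print(classify_pair((1000, 5000, "-"), (5400, 9000, "+")))
    # -> head-to-head (bidirectional promoter)
    ```

    Scanning a coordinate-sorted gene list pairwise with this rule is essentially how the enrichment of ≤1,000 bp head-to-head pairs would be tabulated.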

  17. Accurate and efficient band gap predictions of metal halide perovskites using the DFT-1/2 method: GW accuracy with DFT expense.

    Science.gov (United States)

    Tao, Shu Xia; Cao, Xi; Bobbert, Peter A

    2017-10-30

    The outstanding optoelectronic and photovoltaic properties of metal halide perovskites, including high carrier mobilities, low carrier recombination rates, and the tunable spectral absorption range, are attributed to the unique electronic properties of these materials. While DFT provides reliable structures and stabilities of perovskites, it performs poorly in electronic structure prediction. The relativistic GW approximation has been demonstrated to be able to capture electronic structure accurately, but at an extremely high computational cost. Here we report efficient and accurate band gap calculations of metal halide perovskites by using the approximate quasiparticle DFT-1/2 method. Using AMX3 (A = CH3NH3, CH2NHCH2, Cs; M = Pb, Sn; X = I, Br, Cl) as demonstration, the influence of the crystal structure (cubic, tetragonal or orthorhombic), variation of ions (different A, M and X) and relativistic effects on the electronic structure are systematically studied and compared with experimental results. Our results show that the DFT-1/2 method yields accurate band gaps with the precision of the GW method with no more computational cost than standard DFT. This opens the possibility of accurate electronic structure prediction of sophisticated halide perovskite structures and new materials design for lead-free materials.

  18. An evolutionary model-based algorithm for accurate phylogenetic breakpoint mapping and subtype prediction in HIV-1.

    Directory of Open Access Journals (Sweden)

    Sergei L Kosakovsky Pond

    2009-11-01

    Full Text Available Genetically diverse pathogens (such as Human Immunodeficiency virus type 1, HIV-1) are frequently stratified into phylogenetically or immunologically defined subtypes for classification purposes. Computational identification of such subtypes is helpful in surveillance, epidemiological analysis and detection of novel variants, e.g., circulating recombinant forms in HIV-1. A number of conceptually and technically different techniques have been proposed for determining the subtype of a query sequence, but there is not a universally optimal approach. We present a model-based phylogenetic method for automatically subtyping an HIV-1 (or other viral or bacterial) sequence, mapping the location of breakpoints and assigning parental sequences in recombinant strains as well as computing confidence levels for the inferred quantities. Our Subtype Classification Using Evolutionary ALgorithms (SCUEAL) procedure is shown to perform very well in a variety of simulation scenarios, runs in parallel when multiple sequences are being screened, and matches or exceeds the performance of existing approaches on typical empirical cases. We applied SCUEAL to all available polymerase (pol) sequences from two large databases, the Stanford Drug Resistance database and the UK HIV Drug Resistance Database. Comparing with subtypes which had previously been assigned revealed that a minor but substantial (approximately 5%) fraction of pure subtype sequences may in fact be within- or inter-subtype recombinants. A free implementation of SCUEAL is provided as a module for the HyPhy package and the Datamonkey web server. Our method is especially useful when an accurate automatic classification of an unknown strain is desired, and is positioned to complement and extend faster but less accurate methods.
Given the increasingly frequent use of HIV subtype information in studies focusing on the effect of subtype on treatment, clinical outcome, pathogenicity and vaccine design, the importance

  19. Improving DOE-2's RESYS routine: User defined functions to provide more accurate part load energy use and humidity predictions

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, Hugh I.; Parker, Danny; Huang, Yu J.

    2000-08-04

    In hourly energy simulations, it is important to properly predict the performance of air conditioning systems over a range of full and part load operating conditions. An important component of these calculations is to properly consider the performance of the cycling air conditioner and how it interacts with the building. This paper presents improved approaches to properly account for the part load performance of residential and light commercial air conditioning systems in DOE-2. First, more accurate correlations are given to predict the degradation of system efficiency at part load conditions. In addition, a user-defined function for RESYS is developed that provides improved predictions of air conditioner sensible and latent capacity at part load conditions. The user function also provides more accurate predictions of space humidity by adding "lumped" moisture capacitance into the calculations. The improved cooling coil model and the addition of moisture capacitance predicts humidity swings that are more representative of the performance observed in real buildings.
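
    The part-load efficiency degradation this record targets is conventionally expressed through a part-load fraction (PLF) correlation; the classic linear form PLF = 1 − Cd·(1 − PLR) is a common baseline in this literature. The sketch below uses that baseline form with a hypothetical degradation coefficient, not the improved correlations the paper itself develops:

    ```python
    def part_load_factor(plr, cd=0.25):
        """Classic linear part-load degradation: PLF = 1 - Cd * (1 - PLR).

        plr: part-load ratio (load / steady-state capacity), clamped to 0..1
        cd:  degradation coefficient (hypothetical value for illustration)
        """
        plr = max(0.0, min(1.0, plr))
        return 1.0 - cd * (1.0 - plr)

    def cycling_cop(rated_cop, plr, cd=0.25):
        """Effective COP of a cycling air conditioner at part load."""
        return rated_cop * part_load_factor(plr, cd)

    # At half load with Cd = 0.25, efficiency drops by 12.5%:
    print(part_load_factor(0.5))      # -> 0.875
    print(cycling_cop(3.5, 0.5))      # -> 3.0625
    ```

    The paper's contribution is precisely that this simple linear form under-represents real cycling losses and latent-capacity transients, motivating its improved correlations and the lumped moisture-capacitance user function.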

  20. New density functional theory approaches for enabling prediction of chemical and physical properties of plutonium and other actinides.

    Energy Technology Data Exchange (ETDEWEB)

    Mattsson, Ann Elisabet

    2012-01-01

    Density Functional Theory (DFT) based Equation of State (EOS) construction is a prominent part of Sandia's capabilities to support engineering sciences. This capability is based on amending experimental data with information gained from computational investigations, in parts of the phase space where experimental data is hard, dangerous, or expensive to obtain. A prominent materials area where such computational investigations are hard to perform today because of limited accuracy is actinide and lanthanide materials. The Science of Extreme Environment Lab Directed Research and Development project described in this Report has had the aim to cure this accuracy problem. We have focused on the two major factors which would allow for accurate computational investigations of actinide and lanthanide materials: (1) the fully relativistic treatment needed for materials containing heavy atoms, and (2) the needed improved performance of DFT exchange-correlation functionals. We have implemented a fully relativistic treatment based on the Dirac equation into the LANL code RSPt and we have shown that such a treatment is imperative when calculating properties of materials containing actinides and/or lanthanides. The present standard treatment that only includes some of the relativistic terms is not accurate enough and can even give misleading results. Compared to calculations previously considered state of the art, the Dirac treatment gives a substantial change in equilibrium volume predictions for materials with large spin-orbit coupling. For actinide and lanthanide materials, a Dirac treatment is thus a fundamental requirement in any computational investigation, including those for DFT-based EOS construction. For a full capability, a DFT functional capable of describing strongly correlated systems such as actinide materials needs to be developed. Using the previously successful subsystem functional scheme developed by Mattsson et al., we have created such a functional. In

  1. Toward an accurate prediction of inter-residue distances in proteins using 2D recursive neural networks.

    Science.gov (United States)

    Kukic, Predrag; Mirabello, Claudio; Tradigo, Giuseppe; Walsh, Ian; Veltri, Pierangelo; Pollastri, Gianluca

    2014-01-10

    Protein inter-residue contact maps provide a translation and rotation invariant topological representation of a protein. They can be used as an intermediary step in protein structure predictions. However, the prediction of contact maps represents an unbalanced problem, as far fewer examples of contacts than non-contacts exist in a protein structure. In this study we explore the possibility of completely eliminating the unbalanced nature of the contact map prediction problem by predicting real-value distances between residues. Predicting full inter-residue distance maps and applying them in protein structure predictions has been relatively unexplored in the past. We initially demonstrate that the use of native-like distance maps is able to reproduce 3D structures almost identical to the targets, giving an average RMSD of 0.5 Å. In addition, the corrupted physical maps with an introduced random error of ±6 Å are able to reconstruct the targets within an average RMSD of 2 Å. After demonstrating the reconstruction potential of distance maps, we develop two classes of predictors using two-dimensional recursive neural networks: an ab initio predictor that relies only on the protein sequence and evolutionary information, and a template-based predictor in which additional structural homology information is provided. We find that the ab initio predictor is able to reproduce distances with an RMSD of 6 Å, regardless of the evolutionary content provided. Furthermore, we show that the template-based predictor exploits both sequence and structure information even in cases of dubious homology and outperforms the best template hit with a clear margin of up to 3.7 Å. Lastly, we demonstrate the ability of the two predictors to reconstruct the CASP9 targets shorter than 200 residues, producing results similar to the state-of-the-art machine learning approach implemented in the Distill server. The methodology presented here, if complemented by more complex reconstruction protocols
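
    The claim that an exact distance map reproduces the 3D structure can be illustrated with classical multidimensional scaling, which recovers coordinates from a Euclidean distance matrix up to a rigid transform (and a mirror image, since distances cannot fix chirality). The coordinates below are synthetic, not a real protein, and the paper's own reconstruction protocol is more elaborate than this sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic "native" structure: a 60-residue random walk in 3D (illustrative)
    coords = np.cumsum(rng.normal(0, 2.0, size=(60, 3)), axis=0)

    # Full inter-residue distance map
    D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

    # Classical MDS: double-center the squared distances, take top-3 eigenpairs
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    rec = vecs[:, -3:] * np.sqrt(np.maximum(vals[-3:], 0.0))

    def rmsd_superposed(P, Q):
        """RMSD after optimal orthogonal superposition (reflection allowed,
        matching the mirror ambiguity inherent to distance maps)."""
        P = P - P.mean(axis=0); Q = Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(P.T @ Q)
        return np.sqrt(np.mean(np.sum((P @ (U @ Vt) - Q) ** 2, axis=1)))

    print(f"RMSD from exact map: {rmsd_superposed(rec, coords):.2e}")
    ```

    With an exact map the recovery is essentially perfect (RMSD at machine precision); adding noise to D before the eigendecomposition mimics the ±6 Å corruption experiment described in the abstract.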

  2. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes

    International Nuclear Information System (INIS)

    Keshavarz, Mohammad Hossein; Gharagheizi, Farhad; Shokrolahi, Arash; Zakinejad, Sajjad

    2012-01-01

    Highlights: ► A novel method is introduced for desk calculation of the toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results of 58 compounds are more reliable than those predicted by the QSTR method. ► The present method gives good predictions for a further 324 benzoic acid compounds. - Abstract: Most benzoic acid derivatives are toxic, which may cause serious public health and environmental problems. Two novel, simple, and reliable models are introduced for desk calculation of the oral LD50 toxicity of benzoic acid compounds in mice, with as much reliance on their answers as one could attach to more complex outputs. They require only elemental composition and molecular fragments, without using any computer codes. The first model is based on only the number of carbon and hydrogen atoms, and can be improved by several molecular fragments in the second model. For 57 benzoic compounds, where the computed results of quantitative structure-toxicity relationship (QSTR) models were recently reported, the predicted results of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model.

  3. Is demography destiny? Application of machine learning techniques to accurately predict population health outcomes from a minimal demographic dataset.

    Directory of Open Access Journals (Sweden)

    Wei Luo

    Full Text Available For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have the up-to-date data on health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and timely updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, in both the states included in the derivation model (median correlation 0.88) and those excluded from the development for use as a completely separated validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development, and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease.
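
    The evaluation design described here (fit on a subset of states, then correlate predictions with observed prevalence on fully held-out states) can be sketched with a plain linear model. The data below are synthetic stand-ins for the survey features and prevalence figures; every number is an assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic stand-in: 50 "states" x 8 socio-demographic features, with one
    # NCD prevalence (percent) driven partly by those features plus noise.
    n_states, n_feats = 50, 8
    X = rng.normal(size=(n_states, n_feats))
    beta = rng.normal(size=n_feats)
    prevalence = 20 + X @ beta + rng.normal(0, 1.0, n_states)

    # Hold out 10 states as an external validation sample
    train, test = np.arange(40), np.arange(40, 50)
    Xt = np.column_stack([np.ones(len(train)), X[train]])
    coef, *_ = np.linalg.lstsq(Xt, prevalence[train], rcond=None)

    Xv = np.column_stack([np.ones(len(test)), X[test]])
    pred = Xv @ coef

    # External validity check: correlation on the held-out states
    r = np.corrcoef(pred, prevalence[test])[0, 1]
    print(f"correlation on held-out states: {r:.2f}")
    ```

    The paper uses more sophisticated learners than ordinary least squares, but the held-out-states correlation is the same external-validity statistic it reports (0.85 for excluded states).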

  4. Is demography destiny? Application of machine learning techniques to accurately predict population health outcomes from a minimal demographic dataset.

    Science.gov (United States)

    Luo, Wei; Nguyen, Thin; Nichols, Melanie; Tran, Truyen; Rana, Santu; Gupta, Sunil; Phung, Dinh; Venkatesh, Svetha; Allender, Steve

    2015-01-01

    For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have the up-to-date data on health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and timely updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, in both the states included in the derivation model (median correlation 0.88) and those excluded from the development for use as a completely separated validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development, and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease.

  5. Accurate particle speed prediction by improved particle speed measurement and 3-dimensional particle size and shape characterization technique

    DEFF Research Database (Denmark)

    Cernuschi, Federico; Rothleitner, Christian; Clausen, Sønnik

    2017-01-01

    methods, e.g. laser light scattering, and velocity by the double disk (DD) method. In this article we present two novel techniques, which allow a more accurate measurement of mass, velocity and shape, and we later compare the experimentally obtained flow velocities of particles with a simulation that also...... are compared with detailed 3-dimensional CT measurements and a low angle laser light scattering (LALLS) measurement system for six different samples of particles. It is shown that the particle volume or mass is usually overestimated by 16–22% when using 2-dimensional methods or LALLS. For CT allows...... additionally the surface-equivalent diameter to be calculated by using 2-dimensional projections of each particle, these results can be used to correct particle diameters measured with the particle imaging method using a pulsed LED....

  6. Accurate pan-specific prediction of peptide-MHC class II binding affinity with improved binding core identification

    DEFF Research Database (Denmark)

    Andreatta, Massimo; Karosiene, Edita; Rasmussen, Michael

    2015-01-01

    A key event in the generation of a cellular response against malicious organisms through the endocytic pathway is binding of peptidic antigens by major histocompatibility complex class II (MHC class II) molecules. The bound peptide is then presented on the cell surface where it can be recognized...... by T helper lymphocytes. NetMHCIIpan is a state-of-the-art method for the quantitative prediction of peptide binding to any human or mouse MHC class II molecule of known sequence. In this paper, we describe an updated version of the method with improved peptide binding register identification. Binding...... register prediction is concerned with determining the minimal core region of nine residues directly in contact with the MHC binding cleft, a crucial piece of information both for the identification and design of CD4+ T cell antigens. When applied to a set of 51 crystal structures of peptide-MHC complexes...

  7. Accurate prediction of secreted substrates and identification of a conserved putative secretion signal for type III secretion systems.

    Directory of Open Access Journals (Sweden)

    Ram Samudrala

    2009-04-01

    Full Text Available The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates, effector proteins, are not. We have used a novel computational approach to confidently identify new secreted effectors by integrating protein sequence-based features, including evolutionary measures such as the pattern of homologs in a range of other organisms, G+C content, amino acid composition, and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from the plant pathogen Pseudomonas syringae and validated on a set of effectors from the animal pathogen Salmonella enterica serovar Typhimurium (S. Typhimurium) after eliminating effectors with detectable sequence similarity. We show that this approach can predict known secreted effectors with high specificity and sensitivity. Furthermore, by considering a large set of effectors from multiple organisms, we computationally identify a common putative secretion signal in the N-terminal 20 residues of secreted effectors. This signal can be used to discriminate 46 out of 68 total known effectors from both organisms, suggesting that it is a real, shared signal applicable to many type III secreted effectors. We use the method to make novel predictions of secreted effectors in S. Typhimurium, some of which have been experimentally validated. We also apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis, identifying the majority of known secreted proteins in addition to providing a number of novel predictions. This approach provides a new way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.
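The sequence-based features named above (G+C content of the gene, amino acid composition, and the composition of the N-terminal 30 residues) are straightforward to compute. The sketch below is a hypothetical illustration of such a feature vector, not the authors' actual pipeline; the example sequences are invented:

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def effector_features(gene_dna, protein):
    """Build a feature vector of the kind the method integrates:
    G+C content of the gene, amino-acid composition of the whole
    protein, and the composition of the N-terminal 30 residues
    (where the putative secretion signal resides)."""
    gc = (gene_dna.count("G") + gene_dna.count("C")) / len(gene_dna)
    comp = Counter(protein)
    n_term_counts = Counter(protein[:30])
    full = [comp[a] / len(protein) for a in AMINO_ACIDS]
    n_term = [n_term_counts[a] / min(len(protein), 30) for a in AMINO_ACIDS]
    return [gc] + full + n_term

feats = effector_features("ATGGCGCGCAAT", "MSERINELLAQSKAT" * 4)
print(len(feats))  # 1 GC feature + 20 full-length + 20 N-terminal fractions
```

A vector like this would then feed a classifier trained on known effectors versus non-effectors.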

  8. An Optimized Method for Accurate Fetal Sex Prediction and Sex Chromosome Aneuploidy Detection in Non-Invasive Prenatal Testing.

    Science.gov (United States)

    Wang, Ting; He, Quanze; Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong

    2016-01-01

    Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random reads mapping, may result in higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCAs detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively, and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing.
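The core filtering idea, removing reads that map into known trouble regions of chromosome Y before counting, can be sketched as follows. The interval coordinates and read list are invented for illustration; the paper's six actual regions are not given here:

```python
# Hypothetical blacklist of (start, end) intervals on chromosome Y where
# reads tend to map randomly; coordinates are illustrative only.
BLACKLIST = [(100_000, 150_000), (900_000, 950_000)]

def filter_y_reads(reads):
    """Keep chrY reads that fall outside the blacklisted intervals,
    mimicking the strategy of removing randomly mapped reads before
    estimating the chrY read fraction used for sex prediction."""
    kept = []
    for chrom, pos in reads:
        if chrom != "chrY":
            continue
        if any(start <= pos < end for start, end in BLACKLIST):
            continue
        kept.append((chrom, pos))
    return kept

reads = [("chrY", 120_000), ("chrY", 500_000), ("chr1", 500), ("chrY", 910_000)]
print(len(filter_y_reads(reads)))
```

The surviving chrY read count, normalized by total mapped reads, would then drive the sex call and SCA detection.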

  9. An Optimized Method for Accurate Fetal Sex Prediction and Sex Chromosome Aneuploidy Detection in Non-Invasive Prenatal Testing.

    Directory of Open Access Journals (Sweden)

    Ting Wang

    Full Text Available Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random reads mapping, may result in higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCAs detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively, and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing.

  10. Fast and accurate multivariate Gaussian modeling of protein families: predicting residue contacts and protein-interaction partners.

    Directory of Open Access Journals (Sweden)

    Carlo Baldassi

    Full Text Available In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partner in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code.

  11. Fast and accurate multivariate Gaussian modeling of protein families: predicting residue contacts and protein-interaction partners.

    Science.gov (United States)

    Baldassi, Carlo; Zamparo, Marco; Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea

    2014-01-01

    In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partner in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code.
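The Gaussian trick described above can be sketched end to end on a toy alignment: one-hot encode the sequences, treat the encoding as continuous data, invert a regularized covariance matrix, and rank position pairs by the norm of their coupling blocks. The alignment sizes, regularizer, and the artificially planted "contact" are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "alignment": 200 sequences, 6 positions, 4-letter alphabet (0-3),
# with positions 1 and 4 made identical to mimic a coevolving contact.
n_seq, n_pos, q = 200, 6, 4
msa = rng.integers(0, q, size=(n_seq, n_pos))
msa[:, 4] = msa[:, 1]  # perfect covariation

# One-hot encode, then treat the encoding as continuous Gaussian data.
X = np.zeros((n_seq, n_pos * q))
for i, seq in enumerate(msa):
    for j, a in enumerate(seq):
        X[i, j * q + a] = 1.0

# Regularized empirical covariance and its inverse (the "couplings").
C = np.cov(X, rowvar=False) + 0.1 * np.eye(n_pos * q)
J = np.linalg.inv(C)

def score(i, j):
    """Frobenius norm of the q-by-q coupling block for positions i, j."""
    return np.linalg.norm(J[i*q:(i+1)*q, j*q:(j+1)*q])

pairs = [(i, j) for i in range(n_pos) for j in range(i + 1, n_pos)]
best = max(pairs, key=lambda p: score(*p))
print(best)
```

The planted pair comes out on top because inverting the covariance isolates direct couplings from indirect correlations, which is the point of the direct-coupling family of methods.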

  12. An Optimized Method for Accurate Fetal Sex Prediction and Sex Chromosome Aneuploidy Detection in Non-Invasive Prenatal Testing

    Science.gov (United States)

    Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong

    2016-01-01

    Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random reads mapping, may result in higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCAs detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively, and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing. PMID:27441628

  13. Can the Gibbs free energy of adsorption be predicted efficiently and accurately: an M05-2X DFT study.

    Science.gov (United States)

    Michalkova, A; Gorb, L; Hill, F; Leszczynski, J

    2011-03-24

    This study presents new insight into the prediction of partitioning of organic compounds between a carbon surface (soot) and water, and it also sheds light on the sluggish desorption of interacting molecules from activated and nonactivated carbon surfaces. This paper provides details about the structure and interactions of benzene, polycyclic aromatic hydrocarbons, and aromatic nitrocompounds with a carbon surface modeled by coronene using a density functional theory approach along with the M05-2X functional. The adsorption was studied in vacuum and from water solution. The molecules studied are physisorbed on the carbon surface. While the intermolecular interactions of benzene and hydrocarbons are governed by dispersion forces, nitrocompounds are adsorbed also due to quite strong electrostatic interactions with all types of carbon surfaces. On the basis of these results, we conclude that the method of prediction presented in this study allows one to approach the experimental level of accuracy in predicting thermodynamic parameters of adsorption on a carbon surface from the gas phase. The empirical modification of the polarized continuum model also leads to quantitative agreement with the experimental data for the Gibbs free energy values of adsorption from water solution.

  14. Genome-Enabled Modeling of Biogeochemical Processes Predicts Metabolic Dependencies that Connect the Relative Fitness of Microbial Functional Guilds

    Science.gov (United States)

    Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Steefel, C. I.; Banfield, J. F.; Beller, H. R.; Anantharaman, K.; Ligocki, T. J.; Trebotich, D.

    2015-12-01

    Pore-scale processes mediated by microorganisms underlie a range of critical ecosystem services, regulating carbon stability, nutrient flux, and the purification of water. Advances in cultivation-independent approaches now provide us with the ability to reconstruct thousands of genomes from microbial populations from which functional roles may be assigned. With this capability to reveal microbial metabolic potential, the next step is to put these microbes back where they belong to interact with their natural environment, i.e. the pore scale. At this scale, microorganisms communicate, cooperate and compete across their fitness landscapes with communities emerging that feedback on the physical and chemical properties of their environment, ultimately altering the fitness landscape and selecting for new microbial communities with new properties and so on. We have developed a trait-based model of microbial activity that simulates coupled functional guilds that are parameterized with unique combinations of traits that govern fitness under dynamic conditions. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor-acceptor reactions to predict energy available for cellular maintenance, respiration, biomass development, and enzyme production. From metagenomics, we directly estimate some trait values related to growth and identify the linkage of key traits associated with respiration and fermentation, macromolecule depolymerizing enzymes, and other key functions such as nitrogen fixation. Our simulations were carried out to explore abiotic controls on community emergence such as seasonally fluctuating water table regimes across floodplain organic matter hotspots. Simulations and metagenomic/metatranscriptomic observations highlighted the many dependencies connecting the relative fitness of functional guilds and the importance of chemolithoautotrophic lifestyles. Using an X-Ray microCT-derived soil microaggregate physical model combined

  15. Accuration of Time Series and Spatial Interpolation Method for Prediction of Precipitation Distribution on the Geographical Information System

    Science.gov (United States)

    Prasetyo, S. Y. J.; Hartomo, K. D.

    2018-01-01

    The Spatial Plan of the Province of Central Java 2009-2029 identifies that most regencies or cities in Central Java Province are very vulnerable to landslide disaster. This is also supported by data from the 2013 Indonesian Disaster Risk Index (in Indonesian, Indeks Risiko Bencana Indonesia), which suggest that some areas in Central Java Province exhibit a high risk of natural disasters. This research aims to develop an application architecture and analysis methodology in GIS to predict and to map rainfall distribution. We propose our GIS architectural application of “Multiplatform Architectural Spatiotemporal” and the data analysis methods of “Triple Exponential Smoothing” (TES) and “Spatial Interpolation” as our significant scientific contribution. This research consists of 2 (two) parts, namely attribute data prediction using the TES method and spatial data prediction using the Inverse Distance Weight (IDW) method. We conducted our research in 19 subdistricts in the Boyolali Regency, Central Java Province, Indonesia. Our main research data are the biweekly rainfall data for 2000-2016 from the Climatology, Meteorology, and Geophysics Agency (in Indonesian, Badan Meteorologi, Klimatologi, dan Geofisika) of Central Java Province and the Laboratory of Plant Disease Observations Region V Surakarta, Central Java. The application architecture and analytical methodology of “Multiplatform Architectural Spatiotemporal” and the spatial data analysis methods of “Triple Exponential Smoothing” and “Spatial Interpolation” can be developed as a GIS application framework of rainfall distribution for various applied fields. The comparison between the TES and IDW methods shows that, relative to time series prediction, spatial interpolation yields values closer to the actual data. Spatial interpolation is closer to the actual data because computed values are based on the rainfall data of the nearest locations, i.e. the neighbours of the sample values. However, the IDW’s main weakness is that some
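The IDW step, averaging nearby station values with weights that fall off as inverse distance raised to a power, can be illustrated with a minimal implementation. Station coordinates and rainfall values below are invented:

```python
def idw(sample_points, values, target, power=2):
    """Inverse Distance Weighting: estimate rainfall at `target` as a
    weighted average of sampled values, with weights 1/d^power, so the
    nearest stations dominate the estimate."""
    num = den = 0.0
    for (x, y), v in zip(sample_points, values):
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return v  # exactly at a sampled station
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
rain = [10.0, 30.0, 20.0]
print(round(idw(stations, rain, (0.1, 0.1)), 1))
```

Because the weights are driven entirely by distance, the estimate near a station tracks that station's value, which is exactly the behaviour (and the weakness) the abstract notes.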

  16. A Simple PB/LIE Free Energy Function Accurately Predicts the Peptide Binding Specificity of the Tiam1 PDZ Domain

    Directory of Open Access Journals (Sweden)

    Nicolas Panel

    2017-09-01

    Full Text Available PDZ domains generally bind short amino acid sequences at the C-terminus of target proteins, and short peptides can be used as inhibitors or model ligands. Here, we used experimental binding assays and molecular dynamics simulations to characterize 51 complexes involving the Tiam1 PDZ domain and to test the performance of a semi-empirical free energy function. The free energy function combined a Poisson-Boltzmann (PB continuum electrostatic term, a van der Waals interaction energy, and a surface area term. Each term was empirically weighted, giving a Linear Interaction Energy or “PB/LIE” free energy. The model yielded a mean unsigned deviation of 0.43 kcal/mol and a Pearson correlation of 0.64 between experimental and computed free energies, which was superior to a Null model that assumes all complexes have the same affinity. Analyses of the models support several experimental observations that indicate the orientation of the α2 helix is a critical determinant for peptide specificity. The models were also used to predict binding free energies for nine new variants, corresponding to point mutants of the Syndecan1 and Caspr4 peptides. The predictions did not reveal improved binding; however, they suggest that an unnatural amino acid could be used to increase protease resistance and peptide lifetimes in vivo. The overall performance of the model should allow its use in the design of new PDZ ligands in the future.

  17. A Simple PB/LIE Free Energy Function Accurately Predicts the Peptide Binding Specificity of the Tiam1 PDZ Domain.

    Science.gov (United States)

    Panel, Nicolas; Sun, Young Joo; Fuentes, Ernesto J; Simonson, Thomas

    2017-01-01

    PDZ domains generally bind short amino acid sequences at the C-terminus of target proteins, and short peptides can be used as inhibitors or model ligands. Here, we used experimental binding assays and molecular dynamics simulations to characterize 51 complexes involving the Tiam1 PDZ domain and to test the performance of a semi-empirical free energy function. The free energy function combined a Poisson-Boltzmann (PB) continuum electrostatic term, a van der Waals interaction energy, and a surface area term. Each term was empirically weighted, giving a Linear Interaction Energy or "PB/LIE" free energy. The model yielded a mean unsigned deviation of 0.43 kcal/mol and a Pearson correlation of 0.64 between experimental and computed free energies, which was superior to a Null model that assumes all complexes have the same affinity. Analyses of the models support several experimental observations that indicate the orientation of the α 2 helix is a critical determinant for peptide specificity. The models were also used to predict binding free energies for nine new variants, corresponding to point mutants of the Syndecan1 and Caspr4 peptides. The predictions did not reveal improved binding; however, they suggest that an unnatural amino acid could be used to increase protease resistance and peptide lifetimes in vivo . The overall performance of the model should allow its use in the design of new PDZ ligands in the future.
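The PB/LIE construction above is a linear combination of three energy terms with empirically fitted weights, evaluated by mean unsigned error and Pearson correlation. That fitting step can be sketched on synthetic data; the per-complex terms and "experimental" free energies below are random stand-ins, not the paper's 51 complexes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-complex energy terms (PB electrostatics, van der
# Waals, buried surface area) and measured binding free energies.
n = 51
terms = rng.normal(size=(n, 3))
true = np.array([0.3, 0.5, 0.02])
dg_exp = terms @ true + rng.normal(scale=0.3, size=n)

# Empirically weight each term by least squares, LIE-style.
weights, *_ = np.linalg.lstsq(terms, dg_exp, rcond=None)
dg_pred = terms @ weights

mue = np.mean(np.abs(dg_pred - dg_exp))   # mean unsigned error
r = np.corrcoef(dg_pred, dg_exp)[0, 1]    # Pearson correlation
print(round(mue, 2), round(r, 2))
```

The same two statistics (mean unsigned deviation and Pearson r) are what the authors report against the Null model.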

  18. Genomic inference accurately predicts the timing and severity of a recent bottleneck in a non-model insect population

    Science.gov (United States)

    McCoy, Rajiv C.; Garud, Nandita R.; Kelley, Joanna L.; Boggs, Carol L.; Petrov, Dmitri A.

    2015-01-01

    The analysis of molecular data from natural populations has allowed researchers to answer diverse ecological questions that were previously intractable. In particular, ecologists are often interested in the demographic history of populations, information that is rarely available from historical records. Methods have been developed to infer demographic parameters from genomic data, but it is not well understood how inferred parameters compare to true population history or depend on aspects of experimental design. Here we present and evaluate a method of SNP discovery using RNA-sequencing and demographic inference using the program δaδi, which uses a diffusion approximation to the allele frequency spectrum to fit demographic models. We test these methods in a population of the checkerspot butterfly Euphydryas gillettii. This population was intentionally introduced to Gothic, Colorado in 1977 and has since experienced extreme fluctuations including bottlenecks of fewer than 25 adults, as documented by nearly annual field surveys. Using RNA-sequencing of eight individuals from Colorado and eight individuals from a native population in Wyoming, we generate the first genomic resources for this system. While demographic inference is commonly used to examine ancient demography, our study demonstrates that our inexpensive, all-in-one approach to marker discovery and genotyping provides sufficient data to accurately infer the timing of a recent bottleneck. This demographic scenario is relevant for many species of conservation concern, few of which have sequenced genomes. Our results are remarkably insensitive to sample size or number of genomic markers, which has important implications for applying this method to other non-model systems. PMID:24237665
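Demographic inference with δaδi starts from an allele frequency spectrum. A minimal sketch of building a folded spectrum from per-SNP derived-allele counts follows; the counts and sample size are toy numbers, not the butterfly data:

```python
from collections import Counter

def folded_afs(derived_counts, n_alleles):
    """Build a folded allele frequency spectrum: for each SNP, count the
    minor-allele copies among `n_alleles` sampled chromosomes, then
    tally how many SNPs fall in each minor-allele-count bin. Programs
    like dadi fit demographic models (e.g. a bottleneck) to spectra
    of this kind."""
    spectrum = Counter()
    for derived in derived_counts:
        minor = min(derived, n_alleles - derived)
        if minor > 0:  # skip monomorphic sites
            spectrum[minor] += 1
    return [spectrum[i] for i in range(1, n_alleles // 2 + 1)]

# Derived-allele counts at 6 SNPs among 8 sampled chromosomes.
counts = [1, 1, 7, 2, 4, 6]
print(folded_afs(counts, 8))
```

A bottleneck skews this spectrum away from the neutral expectation, which is the signal the diffusion-approximation fit exploits.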

  19. Accurate prediction of secreted substrates and identification of a conserved putative secretion signal for type III secretion systems

    Energy Technology Data Exchange (ETDEWEB)

    Samudrala, Ram; Heffron, Fred; McDermott, Jason E.

    2009-04-24

    The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates, effector proteins, are not. We have used a machine learning approach to identify new secreted effectors. The method integrates evolutionary measures, such as the pattern of homologs in a range of other organisms, and sequence-based features, such as G+C content, amino acid composition and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from Salmonella typhimurium and validated on a corresponding set of effectors from Pseudomonas syringae, after eliminating effectors with detectable sequence similarity. The method was able to identify all of the known effectors in P. syringae with a specificity of 84% and sensitivity of 82%. The reciprocal validation, training on P. syringae and validating on S. typhimurium, gave similar results with a specificity of 86% when the sensitivity level was 87%. These results show that type III effectors in disparate organisms share common features. We found that maximal performance is attained by including an N-terminal sequence of only 30 residues, which agrees with previous studies indicating that this region contains the secretion signal. We then used the method to define the most important residues in this putative secretion signal. Finally, we present novel predictions of secreted effectors in S. typhimurium, some of which have been experimentally validated, and apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis. This approach is a novel and effective way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.

  20. Predicting Antimicrobial Resistance Prevalence and Incidence from Indicators of Antimicrobial Use: What Is the Most Accurate Indicator for Surveillance in Intensive Care Units?

    Directory of Open Access Journals (Sweden)

    Élise Fortin

    Full Text Available The optimal way to measure antimicrobial use in hospital populations, as a complement to surveillance of resistance, is still unclear. Using respiratory isolates and antimicrobial prescriptions of nine intensive care units (ICUs), this study aimed to identify the indicator of antimicrobial use that predicted prevalence and incidence rates of resistance with the best accuracy. Retrospective cohort study including all patients admitted to three neonatal (NICU), two pediatric (PICU) and four adult ICUs between April 2006 and March 2010. Ten different resistance/antimicrobial use combinations were studied. After adjustment for ICU type, indicators of antimicrobial use were successively tested in regression models to predict resistance prevalence and incidence rates, per 4-week time period, per ICU. Binomial regression and Poisson regression were used to model prevalence and incidence rates, respectively. Multiplicative and additive models were tested, as well as no time lag and a one 4-week-period time lag. For each model, the mean absolute error (MAE) in prediction of resistance was computed. The most accurate indicator was compared to other indicators using t-tests. Results for all indicators were equivalent, except for 1/20 scenarios studied. In this scenario, where prevalence of carbapenem-resistant Pseudomonas sp. was predicted with carbapenem use, recommended daily doses per 100 admissions were less accurate than courses per 100 patient-days (p = 0.0006). A single best indicator to predict antimicrobial resistance might not exist. Feasibility considerations such as ease of computation or potential external comparisons could be decisive in the choice of an indicator for surveillance of healthcare antimicrobial use.
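The comparison logic, fitting a count-regression model of resistance on each candidate usage indicator and comparing mean absolute errors, can be sketched with synthetic data. The indicator names, the data-generating process, and the model settings are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(3)

# Hypothetical per-period data: two candidate indicators of antimicrobial
# use, where only the first actually drives the resistance counts.
n = 120
courses = rng.uniform(0, 10, n)   # e.g. courses per 100 patient-days
ddd = rng.uniform(0, 10, n)       # e.g. daily doses per 100 admissions
cases = rng.poisson(np.exp(0.1 + 0.3 * courses))

def mae(indicator):
    """Fit a Poisson model of incident cases on one usage indicator
    and return the mean absolute error, the comparison metric."""
    x = indicator.reshape(-1, 1)
    model = PoissonRegressor().fit(x, cases)
    return np.mean(np.abs(model.predict(x) - cases))

print(round(mae(courses), 2), round(mae(ddd), 2))
```

The indicator that truly drives resistance yields the lower MAE, mirroring how the study ranked indicators per scenario.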

  1. Prediction of collision cross section and retention time for broad scope screening in gradient reversed-phase liquid chromatography-ion mobility-high resolution accurate mass spectrometry.

    Science.gov (United States)

    Mollerup, Christian Brinch; Mardal, Marie; Dalsgaard, Petur Weihe; Linnet, Kristian; Barron, Leon Patrick

    2018-03-23

    Exact mass, retention time (RT), and collision cross section (CCS) are used as identification parameters in liquid chromatography coupled to ion mobility high resolution accurate mass spectrometry (LC-IM-HRMS). Targeted screening analyses are now more flexible and can be expanded for suspect and non-targeted screening. These allow for tentative identification of new compounds, and in-silico predicted reference values are used for improving confidence and filtering false-positive identifications. In this work, predictions of both RT and CCS values are performed with machine learning using artificial neural networks (ANNs). Prediction was based on molecular descriptors, 827 RTs, and 357 CCS values from pharmaceuticals, drugs of abuse, and their metabolites. ANN models for the prediction of RT or CCS separately were examined, and the potential to predict both from a single model was investigated for the first time. The optimized combined RT-CCS model was a four-layered multi-layer perceptron ANN, and the 95th prediction error percentiles were within 2 min RT error and 5% relative CCS error for the external validation set (n = 36) and the full RT-CCS dataset (n = 357). 88.6% (n = 733) of predicted RTs were within 2 min error for the full dataset. Overall, when using 2 min RT error and 5% relative CCS error, 91.9% (n = 328) of compounds were retained, while 99.4% (n = 355) were retained when using at least one of these thresholds. This combined prediction approach can therefore be useful for rapid suspect/non-targeted screening involving HRMS, and will support current workflows. Copyright © 2018 Elsevier B.V. All rights reserved.
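The filtering thresholds reported above (2 min RT error and 5% relative CCS error, applied jointly or as at-least-one) translate directly into a small decision helper. The function and all numeric examples below are illustrative, not part of the published workflow:

```python
def plausible(rt_pred, rt_obs, ccs_pred, ccs_obs,
              rt_tol=2.0, ccs_tol=0.05, require_both=True):
    """Flag a suspect-screening hit as plausible when predicted
    retention time (min) and CCS agree with the observation within
    the thresholds (2 min RT error, 5% relative CCS error)."""
    rt_ok = abs(rt_pred - rt_obs) <= rt_tol
    ccs_ok = abs(ccs_pred - ccs_obs) / ccs_obs <= ccs_tol
    return (rt_ok and ccs_ok) if require_both else (rt_ok or ccs_ok)

print(plausible(7.3, 8.9, 182.0, 176.5))                      # both within tolerance
print(plausible(7.3, 10.1, 182.0, 176.5, require_both=False))  # RT fails, CCS passes
```

Requiring both thresholds retained 91.9% of compounds in the paper's dataset, while requiring at least one retained 99.4%, so the `require_both` switch corresponds to choosing between those two filtering regimes.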

  2. Does 99mTc MAA study accurately predict the Hepatopulmonary shunt fraction of 90Y theraspheres?

    International Nuclear Information System (INIS)

    Jha, Ashish; Zade, A.; Monteiro, P.; Shah, S.; Purandare, N.C.; Rangarajan, V.; Kulkarni, S.; Kulkarni, A.; Shetty, Nitin

    2010-01-01

    Full text: Transarterial radioembolisation (TARE) is an FDA-approved therapeutic option for primary and metastatic liver malignancy when the patient is inoperable; in addition to the embolic effect (as seen with transarterial chemoembolisation, TACE), it also gives the benefit of selective irradiation of the target lesions with minimal toxicity to adjacent normal hepatocytes. However, there is a risk of shunting of radioactive spheres to the pulmonary circulation and subsequent pulmonary toxicity if the hepatopulmonary shunt fraction is high. The estimated lung dose becomes the limiting factor for the dose that can be delivered trans-arterially for radioembolisation of hepatic neoplasms. This is achieved by a pretreatment 99m Tc MAA study. Aim: The accuracy of 99m Tc-MAA scintigraphy in predicting the hepatopulmonary shunt fraction of 90 Y Theraspheres was evaluated by comparing it with that obtained by post-therapeutic Bremsstrahlung imaging. Materials and Methods: Patients: 13 patients who underwent 90 Y Theraspheres radioembolisation of hepatic malignancies (both primary and secondary) underwent pre-therapeutic 99m Tc-MAA scintigraphy and post-therapeutic 90 Y Bremsstrahlung scintigraphy. 10-12 mCi of freshly prepared 99m Tc MAA was administered by selective hepatic artery catheterization. Planar and tomographic images were acquired within 1 hr of radiopharmaceutical administration. Image acquisition: 99m Tc MAA static images were acquired in a 256 x 256 matrix (1000 KCnts) and SPECT in a 128 x 128 matrix with 64 frames (20 s/frame). The scan parameters for CT were 140 kV, 2.5 mAs, and 1-cm slices. SPECT images were corrected for attenuation and scatter. Post-therapeutic 90 Y Bremsstrahlung imaging was done with a HEGP collimator with the photo peak centered at 140 KeV with -64.29% and +56% window width. SPECT/CT images were obtained using a dual-detector gamma-camera with a mounted 1-row CT scanner (Infinia Hawkeye; GE Medical Systems) to evaluate hepatic and extra hepatic tracer
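Although the abstract is truncated, the standard planar-scintigraphy calculation it relies on, the lung shunt fraction and the resulting estimated lung dose, can be sketched. The MIRD-style dose constant, the assumed 1 kg lung mass, and all counts/activities below are illustrative assumptions, not values from this study:

```python
def lung_shunt_fraction(lung_counts, liver_counts):
    """Hepatopulmonary shunt fraction from planar MAA counts:
    lung counts divided by total (lung + liver) counts."""
    return lung_counts / (lung_counts + liver_counts)

def lung_dose_gy(activity_gbq, lsf, lung_mass_kg=1.0):
    """Estimated absorbed lung dose for 90Y, using the MIRD-style
    approximation of ~49.7 Gy per GBq absorbed per kg of tissue."""
    return 49.7 * activity_gbq * lsf / lung_mass_kg

lsf = lung_shunt_fraction(6_000, 94_000)   # toy counts -> 6% shunt
print(round(lsf, 2), round(lung_dose_gy(2.0, lsf), 1))
```

A high estimated lung dose from this pretreatment calculation is what forces a reduction (or cancellation) of the planned therapeutic activity.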

  3. Reliable and accurate point-based prediction of cumulative infiltration using soil readily available characteristics: A comparison between GMDH, ANN, and MLR

    Science.gov (United States)

    Rahmati, Mehdi

    2017-08-01

    Developing accurate and reliable pedo-transfer functions (PTFs) to predict soil non-readily available characteristics is one of the topics of greatest concern in soil science, and selecting appropriate predictors is a crucial factor in PTF development. Group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure to select the most essential PTF input variables, but also yields more accurate and reliable estimates than other commonly applied methodologies. The current research therefore applied GMDH, in comparison with multivariate linear regression (MLR) and artificial neural networks (ANN), to develop several PTFs to predict soil cumulative infiltration on a point basis at specific time intervals (0.5-45 min) using soil readily available characteristics (RACs). Soil infiltration curves, as well as several soil RACs including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field-saturated (θfs) water contents, were measured at 134 different points in the Lighvan watershed, northwest of Iran. Applying the GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, PTFs developed by the GMDH and MLR procedures using all soil RACs including Ks gave more accurate (E values of 0.673-0.963) and reliable (CV values lower than 11 percent) predictions of cumulative infiltration at the specific time steps. In contrast, the ANN procedure had lower accuracy (E values of 0.356-0.890) and reliability (CV values up to 50 percent) than GMDH and MLR. The results also revealed
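As a sketch of the simplest of the three approaches above, the snippet below fits an MLR-style PTF by ordinary least squares and scores it with the Nash-Sutcliffe efficiency E quoted in the abstract. The synthetic predictors, coefficients, and noise level are illustrative assumptions, not the study's Lighvan measurements.

```python
import numpy as np

# Synthetic stand-in for 134 sampling points with three scaled RACs
# (e.g. sand, bulk density, Ks); the "true" coefficients are invented.
rng = np.random.default_rng(0)
X = rng.uniform(size=(134, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 134)

# MLR pedo-transfer function: least-squares fit with an intercept column
A = np.hstack([X, np.ones((134, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# Nash-Sutcliffe efficiency: 1 - SSE / variance of observations
E = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(E, 3))
```

Values of E close to 1 indicate that the PTF reproduces the observations well, which is the sense in which the abstract's 0.673-0.963 range is read.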

  4. BacPP: bacterial promoter prediction--a tool for accurate sigma-factor specific assignment in enterobacteria.

    Science.gov (United States)

    de Avila E Silva, Scheila; Echeverrigaray, Sergio; Gerhardt, Günther J L

    2011-10-21

    Promoter sequences are well known to play a central role in gene expression. Their recognition and assignment in silico has not yet consolidated into a general bioinformatics method. Most previously available algorithms are limited to σ70-dependent promoter sequences. This paper presents a new tool named BacPP, designed to recognize and predict Escherichia coli promoter sequences from background with specific accuracy for each σ factor (σ24, 86.9%; σ28, 92.8%; σ32, 91.5%; σ38, 89.3%; σ54, 97.0%; and σ70, 83.6%). BacPP is hence outstanding in the recognition and assignment of sequences according to σ factor and provides circumstantial information about upstream gene sequences. The tool was developed by weighting rules extracted from neural networks trained with promoter sequences known to respond to a specific σ factor. Furthermore, when challenged with promoter sequences belonging to other enterobacteria, BacPP maintained 76% accuracy overall. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. HDL size is more accurate than HDL cholesterol to predict carotid subclinical atherosclerosis in individuals classified as low cardiovascular risk.

    Directory of Open Access Journals (Sweden)

    Eliane Soler Parra

    Full Text Available Misclassification of patients as low cardiovascular risk (LCR) remains a major concern and challenges the efficacy of traditional risk markers. Due to its strong association with cholesterol acceptor capacity, high-density lipoprotein (HDL) size has been proposed as a potential risk marker. Hence, we investigated whether HDL size improves the predictive value of HDL-cholesterol in the identification of carotid atherosclerotic burden in individuals stratified as being at LCR. 284 individuals (40-75 years) classified as LCR by the current US guidelines were selected in a three-step procedure from primary care centers of the cities of Campinas and Americana, SP, Brazil. Apolipoprotein B-containing lipoproteins were precipitated by polyethylene glycol and HDL size was measured by the dynamic light scattering (DLS) technique. Participants were classified in tertiles of HDL size (8.22 nm). Carotid intima-media thickness (cIMT) was assessed; HDL size above 8.22 nm was independently associated with low cIMT in both unadjusted and adjusted models for age, gender, Homeostasis Model Assessment 2 index for insulin sensitivity, ethnicity and body mass index (odds ratio 0.23; 95% confidence interval 0.07-0.74, p = 0.013). The mean HDL size estimated with DLS constitutes a better predictor of subclinical carotid atherosclerosis than the conventional measurement of plasma HDL-cholesterol in individuals classified as LCR.

  6. HDL size is more accurate than HDL cholesterol to predict carotid subclinical atherosclerosis in individuals classified as low cardiovascular risk.

    Science.gov (United States)

    Parra, Eliane Soler; Panzoldo, Natalia Baratella; Zago, Vanessa Helena de Souza; Scherrer, Daniel Zanetti; Alexandre, Fernanda; Bakkarat, Jamal; Nunes, Valeria Sutti; Nakandakare, Edna Regina; Quintão, Eder Carlos Rocha; Nadruz, Wilson; de Faria, Eliana Cotta; Sposito, Andrei C

    2014-01-01

    Misclassification of patients as low cardiovascular risk (LCR) remains a major concern and challenges the efficacy of traditional risk markers. Due to its strong association with cholesterol acceptor capacity, high-density lipoprotein (HDL) size has been proposed as a potential risk marker. Hence, we investigated whether HDL size improves the predictive value of HDL-cholesterol in the identification of carotid atherosclerotic burden in individuals stratified as being at LCR. 284 individuals (40-75 years) classified as LCR by the current US guidelines were selected in a three-step procedure from primary care centers of the cities of Campinas and Americana, SP, Brazil. Apolipoprotein B-containing lipoproteins were precipitated by polyethylene glycol and HDL size was measured by the dynamic light scattering (DLS) technique. Participants were classified in tertiles of HDL size (8.22 nm). Carotid intima-media thickness (cIMT) was assessed; HDL size above 8.22 nm was independently associated with low cIMT in both unadjusted and adjusted models for age, gender and Homeostasis Model Assessment 2 index for insulin sensitivity, ethnicity and body mass index (Odds ratio 0.23; 95% confidence interval 0.07-0.74, p = 0.013). The mean HDL size estimated with DLS constitutes a better predictor of subclinical carotid atherosclerosis than the conventional measurement of plasma HDL-cholesterol in individuals classified as LCR.

  7. Predicting the effectiveness of extracorporeal shock wave lithotripsy on urinary tract stones. Risk groups for accurate retreatment.

    Science.gov (United States)

    Hevia, M; García, Á; Ancizu, F J; Merino, I; Velis, J M; Tienza, A; Algarra, R; Doménech, P; Diez-Caballero, F; Rosell, D; Pascual, J I; Robles, J E

    2017-09-01

    Extracorporeal shock wave lithotripsy (ESWL) is a non-invasive, safe and effective treatment for urinary tract lithiasis. Its effectiveness varies depending on the location and size of the stones as well as other factors; several sessions are occasionally required. The objective was to predict its success or failure when the influential variables are known beforehand. We analysed 211 patients who had previous CT scans and were treated with ESWL between 2010 and 2014. The variables influencing the need for retreatment were studied using binary logistic regression models (univariate and multivariate analysis): maximum density, maximum diameter, area, location, disintegration and distance from the adipose panniculus. With the influential variables, a risk model was designed by assessing all possible combinations with logistic regression (IBM SPSS version 20.0). The independent variables influencing the need for retreatment were: maximum density >864 HU, maximum diameter >7.5 mm and pyelocaliceal location. Using these variables, the best model comprises 3 risk groups with significantly different probabilities of requiring retreatment: group 1, low risk (0 variables), 20.2%; group 2, intermediate risk (1-2 variables), 49.2%; and group 3, high risk (3 variables), 62.5%. The density, maximum diameter and pyelocaliceal location of the stones are determinant factors for the effectiveness of treatment with ESWL. Using these variables, which can be obtained before deciding on a treatment, the designed risk model provides a precise approach to choosing the most appropriate treatment for each particular case. Copyright © 2017 AEU. Publicado por Elsevier España, S.L.U. All rights reserved.
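The three-tier risk grouping described above lends itself to a direct sketch. The thresholds (>864 HU, >7.5 mm, pyelocaliceal location) and the group probabilities come from the abstract; the function itself is an illustrative reconstruction, not the authors' published model.

```python
def eswl_retreatment_risk(density_hu, diameter_mm, pyelocaliceal):
    """Return (risk group, reported probability of needing retreatment)."""
    # Count how many of the three independent risk factors are present
    n_factors = sum([density_hu > 864, diameter_mm > 7.5, bool(pyelocaliceal)])
    if n_factors == 0:
        return 1, 0.202          # group 1: low risk
    if n_factors <= 2:
        return 2, 0.492          # group 2: intermediate risk
    return 3, 0.625              # group 3: high risk

print(eswl_retreatment_risk(900, 8.0, True))   # all three factors -> (3, 0.625)
```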

  8. Pneumococcal pneumonia – Are the new severity scores more accurate in predicting adverse outcomes?

    Directory of Open Access Journals (Sweden)

    C. Ribeiro

    2013-11-01

    Full Text Available Introduction: The site-of-care decision is one of the most important factors in the management of patients with community-acquired pneumonia. Severity scores are validated prognostic tools for community-acquired pneumonia mortality and treatment site decisions. The aim of this paper was to compare the discriminatory power of four scores, the classic PSI and CURB65 and the more recent SCAP and SMART-COP, in predicting major adverse events: death, ICU admission, and need for invasive mechanical ventilation or vasopressor support in patients admitted with pneumococcal pneumonia. Methods: A five-year retrospective study of patients admitted for pneumococcal pneumonia. Patients were stratified based on admission data and assigned to low-, intermediate-, and high-risk classes for each score. Results were obtained comparing low versus non-low risk classes. Results: We studied 142 episodes of hospitalization with 2 deaths and 10 patients needing mechanical ventilation and vasopressor support. The majority of patients were classified as low risk by all scores; we found high negative predictive values for all adverse events studied, the highest corresponding to the SCAP score. The more recent scores showed better accuracy for predicting ICU admission and need for ventilation or vasopressor support (mostly the SCAP score), with higher AUC values for all adverse events. Conclusions: The rate of all adverse outcomes increased directly with increasing risk class in all scores. The newer severity scores appear to have higher discriminatory power for all adverse events in our study, particularly the SCAP score.

  9. Predicting College Students' First Year Success: Should Soft Skills Be Taken into Consideration to More Accurately Predict the Academic Achievement of College Freshmen?

    Science.gov (United States)

    Powell, Erica Dion

    2013-01-01

    This study presents a survey developed to measure the skills of entering college freshmen in the areas of responsibility, motivation, study habits, literacy, and stress management, and explores the predictive power of this survey as a measure of academic performance during the first semester of college. The survey was completed by 334 incoming…

  10. In 'big bang' major incidents do triage tools accurately predict clinical priority?: a systematic review of the literature.

    Science.gov (United States)

    Kilner, T M; Brace, S J; Cooke, M W; Stallard, N; Bleetman, A; Perkins, G D

    2011-05-01

    The term "big bang" major incidents is used to describe sudden, usually traumatic, catastrophic events, involving relatively large numbers of injured individuals, where demands on clinical services rapidly outstrip the available resources. Triage tools support the pre-hospital provider in prioritising which patients to treat and/or transport first based upon clinical need. The aim of this review is to identify existing triage tools and to determine the extent to which their reliability and validity have been assessed. A systematic review of the literature was conducted to identify and evaluate published data validating the efficacy of the triage tools. Studies using data from trauma patients that report on the derivation, validation and/or reliability of specific pre-hospital triage tools were eligible for inclusion. Purely descriptive studies, reviews, exercises or reports (without supporting data) were excluded. The search yielded 1982 papers. After initial scrutiny of title and abstract, 181 papers were deemed potentially applicable and from these 11 were identified as relevant to this review. There were two level-of-evidence-one studies, three level-of-evidence-two studies and six level-of-evidence-three studies. The two level-of-evidence-one studies were prospective validations of Clinical Decision Rules (CDRs) in children in South Africa; all the other studies were retrospective CDR derivation, validation or cohort studies. The quality of the papers was rated as good (n=3), fair (n=7), or poor (n=1). There is limited evidence for the validity of existing triage tools in big bang major incidents. Where evidence does exist, it focuses on sensitivity and specificity in relation to prediction of trauma death or severity of injury, based on data from single incidents or incidents with small numbers of patients. The Sacco system is unique in combining survivability modelling with the degree by which the system is overwhelmed in the triage decision system. The

  11. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    Directory of Open Access Journals (Sweden)

    Shiyao Wang

    2016-02-01

    Full Text Available A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations with different structural parameters to build maximum likelihood models of raw navigation data. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method significantly smooths small jumps in bias and considerably reduces accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car.

  12. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    Science.gov (United States)

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations with different structural parameters to build maximum likelihood models of raw navigation data. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method significantly smooths small jumps in bias and considerably reduces accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car.
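A heavily simplified sketch of the predict-and-gate idea above: a short autoregressive model (AR(1) here, standing in for the paper's set of ARMA models) predicts the next navigation value, and a new GPS fix is rejected as an outlier when its residual exceeds a bound tied to the grid size. The window length, model order, and 3x threshold are assumptions for illustration, not the paper's tuned parameters.

```python
import numpy as np

def ar1_predict(history):
    """One-step AR(1) prediction x_t ~ a*x_{t-1} + b, fit by least squares."""
    x_prev, x_next = history[:-1], history[1:]
    A = np.vstack([x_prev, np.ones_like(x_prev)]).T
    a, b = np.linalg.lstsq(A, x_next, rcond=None)[0]
    return a * history[-1] + b

def accept_fix(history, measurement, grid_size=0.5):
    """Consensus check: keep a fix only if it agrees with the model."""
    return bool(abs(measurement - ar1_predict(history)) <= 3.0 * grid_size)

track = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # steadily advancing 1-D position
print(accept_fix(track, 5.1))   # consistent with recent motion -> True
print(accept_fix(track, 9.0))   # multipath-like jump -> False
```

Fixes that pass the gate would then be fused with the IMU/DR estimate; rejected fixes fall back on dead reckoning alone.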

  13. Enhancement of a Turbulence Sub-Model for More Accurate Predictions of Vertical Stratifications in 3D Coastal and Estuarine Modeling

    Directory of Open Access Journals (Sweden)

    Wenrui Huang

    2010-03-01

    Full Text Available This paper presents an improvement of the Mellor and Yamada 2nd-order turbulence model in the Princeton Ocean Model (POM) for better predictions of vertical stratification of salinity in estuaries. The model was evaluated in a strongly stratified estuary, the Apalachicola River, Florida, USA. The three-dimensional hydrodynamic model was applied to study the stratified flow and salinity intrusion in the estuary in response to tide, wind, and buoyancy forces. Model tests indicate that predictions overestimate the stratification when using the default turbulent parameters. Analytic studies of density-induced and wind-induced flows indicate that accurate estimation of vertical eddy viscosity plays an important role in describing vertical profiles. Initial model revision experiments showed that the traditional approach of modifying empirical constants in the turbulence model leads to numerical instability. In order to improve the performance of the turbulence model while maintaining numerical stability, a stratification factor was introduced to allow adjustment of the vertical turbulent eddy viscosity and diffusivity. Sensitivity studies indicate that the stratification factor, ranging from 1.0 to 1.2, does not cause numerical instability in the Apalachicola River. Model simulations show that increasing the turbulent eddy viscosity by a stratification factor of 1.12 results in optimal agreement between model predictions and observations in the case study presented here. The proposed stratification factor provides a useful way for coastal modelers to improve turbulence model performance in predicting vertical turbulent mixing in stratified estuaries and coastal waters.
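The revision described above amounts to multiplying the turbulence model's vertical eddy viscosity and diffusivity by a stratification factor before they are used in the vertical mixing step. A minimal sketch, with the 1.0-1.2 stable range and the 1.12 optimum taken from the abstract and the variable names assumed for illustration:

```python
def adjusted_mixing(km, kh, factor=1.12):
    """Scale vertical eddy viscosity (km) and diffusivity (kh) by the factor.

    1.12 is the optimum reported for the Apalachicola River case study;
    1.0-1.2 is the range found not to cause numerical instability there.
    """
    if not 1.0 <= factor <= 1.2:
        raise ValueError("factor outside the tested stable range 1.0-1.2")
    return km * factor, kh * factor

km_adj, kh_adj = adjusted_mixing(1.0e-3, 1.0e-3)  # default optimal factor
```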

  14. Profile-QSAR: a novel meta-QSAR method that combines activities across the kinase family to accurately predict affinity, selectivity, and cellular activity.

    Science.gov (United States)

    Martin, Eric; Mukherjee, Prasenjit; Sullivan, David; Jansen, Johanna

    2011-08-22

    Profile-QSAR is a novel 2D predictive model building method for kinases. This "meta-QSAR" method models the activity of each compound against a new kinase target as a linear combination of its predicted activities against a large panel of 92 previously studied kinases comprising 115 assays. Profile-QSAR starts with a sparse, incomplete kinase-by-compound (KxC) activity matrix, used to generate Bayesian QSAR models for the 92 "basis-set" kinases. These Bayesian QSARs generate a complete "synthetic" KxC activity matrix of predictions. These synthetic activities are used as "chemical descriptors" to train partial-least-squares (PLS) models, from modest amounts of medium-throughput screening data, for predicting activity against new kinases. The Profile-QSAR predictions for the 92 kinases (115 assays) gave a median external R²(ext) = 0.59 on 25% held-out test sets. The method has proven accurate enough to predict pairwise kinase selectivities with a median correlation of R²(ext) = 0.61 for 958 kinase pairs with at least 600 common compounds. It has been further expanded by adding a "C(k)XC" cellular activity matrix to the KxC matrix to predict cellular activity for 42 kinase-driven cellular assays, with median R²(ext) = 0.58 for 24 target modulation assays and R²(ext) = 0.41 for 18 cell proliferation assays. The 2D Profile-QSAR, along with the 3D Surrogate AutoShim, are the foundations of an internally developed iterative medium-throughput screening (IMTS) methodology for virtual screening (VS) of compound archives as an alternative to experimental high-throughput screening (HTS). The method has been applied to 20 actual prospective kinase projects. Biological results have so far been obtained in eight of them. Q² values ranged from 0.3 to 0.7. Hit rates at 10 µM for experimentally tested compounds varied from 25% to 80%, except in K5, which was a special case aimed specifically at finding "type II" binders, where none of the compounds were predicted to be

  15. Current predictive models do not accurately differentiate between single and multi gland disease in primary hyperparathyroidism: a retrospective cohort study of two endocrine surgery units.

    Science.gov (United States)

    Edafe, O; Collins, E E; Ubhi, C S; Balasubramanian, S P

    2018-02-01

    Background Minimally invasive parathyroidectomy (MIP) for primary hyperparathyroidism is dependent upon accurate prediction of single-gland disease on the basis of preoperative imaging and biochemistry. The aims of this study were to validate currently available predictive models of single-gland disease in two UK cohorts and to determine whether these models can facilitate MIP. Methods This is a retrospective cohort study of 624 patients who underwent parathyroidectomy for primary hyperparathyroidism in two centres between July 2008 and December 2013. Two recognised models, CaPTHUS (preoperative calcium, parathyroid hormone, ultrasound, sestamibi, concordant imaging) and the Wisconsin Index (preoperative calcium, parathyroid hormone), were validated for their ability to predict single-gland disease. Results The rates of single- and multi-gland disease were 491 (79.6%) and 126 (20.2%), respectively. Cure rates in centres 1 and 2 were 93.2% and 93.8%, respectively (P = 0.789). The positive predictive value (PPV) of a CaPTHUS score ≥3 in predicting single-gland disease was 84.6%, compared with 100% in the original report. CaPTHUS scores ≥4 and ≥5 had PPVs of 85.1% and 87.1%, respectively. There was no difference in Wisconsin Index (WIN) between patients with single- and multi-gland disease (P = 0.573). A WIN greater than 1600 together with a weight of excised gland greater than 1 g had a positive predictive value of 86.7% for single-gland disease. Conclusions The use of the CaPTHUS and WIN indices without intraoperative adjuncts (such as IOPTH) had the potential to result in failure to cure in up to 15% (CaPTHUS) and 13% (WIN) of patients treated by MIP targeting a single enlarged gland.
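The PPV comparison above reduces to a one-line calculation on the confusion counts. The counts below are hypothetical, chosen only to reproduce the reported 84.6% figure for CaPTHUS ≥3:

```python
def positive_predictive_value(true_pos, false_pos):
    """PPV = TP / (TP + FP): how often a positive prediction is correct."""
    return true_pos / (true_pos + false_pos)

# e.g. if 170 of 201 patients with CaPTHUS >= 3 truly had single-gland
# disease, the PPV matches the 84.6% quoted in the abstract
print(round(positive_predictive_value(170, 31), 3))  # -> 0.846
```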

  16. Accurate predictions of population-level changes in sequence and structural properties of HIV-1 Env using a volatility-controlled diffusion model.

    Science.gov (United States)

    DeLeon, Orlando; Hodis, Hagit; O'Malley, Yunxia; Johnson, Jacklyn; Salimi, Hamid; Zhai, Yinjie; Winter, Elizabeth; Remec, Claire; Eichelberger, Noah; Van Cleave, Brandon; Puliadi, Ramya; Harrington, Robert D; Stapleton, Jack T; Haim, Hillel

    2017-04-01

    The envelope glycoproteins (Envs) of HIV-1 continuously evolve in the host by random mutations and recombination events. The resulting diversity of Env variants circulating in the population and their continuing diversification process limit the efficacy of AIDS vaccines. We examined the historic changes in Env sequence and structural features (measured by integrity of epitopes on the Env trimer) in a geographically defined population in the United States. As expected, many Env features were relatively conserved during the 1980s. From this state, some features diversified whereas others remained conserved across the years. We sought to identify "clues" to predict the observed historic diversification patterns. Comparison of viruses that cocirculate in patients at any given time revealed that each feature of Env (sequence or structural) exists at a defined level of variance. The in-host variance of each feature is highly conserved among individuals but can vary between different HIV-1 clades. We designate this property "volatility" and apply it to model evolution of features as a linear diffusion process that progresses with increasing genetic distance. Volatilities of different features are highly correlated with their divergence in longitudinally monitored patients. Volatilities of features also correlate highly with their population-level diversification. Using volatility indices measured from a small number of patient samples, we accurately predict the population diversity that developed for each feature over the course of 30 years. Amino acid variants that evolved at key antigenic sites are also predicted well. Therefore, small "fluctuations" in feature values measured in isolated patient samples accurately describe their potential for population-level diversification. 
These tools will likely contribute to the design of population-targeted AIDS vaccines by effectively capturing the diversity of currently circulating strains and addressing properties of variants
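A hedged sketch of the volatility idea above: a feature's in-host variance among cocirculating variants serves as the diffusion coefficient of a linear diffusion, so its expected population-level variance grows in proportion to genetic distance. The proportional form and all numbers here are illustrative assumptions, not the paper's fitted model.

```python
import statistics

def volatility(in_host_values):
    """Volatility of a feature: its variance among cocirculating variants."""
    return statistics.pvariance(in_host_values)

def predicted_population_variance(vol, genetic_distance):
    """Linear diffusion: expected variance accumulates with distance."""
    return vol * genetic_distance

# feature values measured in one patient's cocirculating Env variants
v = volatility([0.8, 1.0, 1.2, 1.0])
print(predicted_population_variance(v, 10.0))
```

In the paper's framing, volatilities measured from a small number of patient samples were enough to rank which Env features would diversify most over 30 years of population-level evolution.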

  17. Can Kohn-Sham density functional theory predict accurate charge distributions for both single-reference and multi-reference molecules?

    Science.gov (United States)

    Verma, Pragya; Truhlar, Donald G

    2017-05-24

    Dipole moments are the first moment of electron density and are fundamental quantities that are often available from experiments. An exchange-correlation functional that leads to an accurate representation of the charge distribution of a molecule should accurately predict the dipole moments of the molecule. It is well known that Kohn-Sham density functional theory (DFT) is more accurate for the energetics of single-reference systems than for the energetics of multi-reference ones, but there has been less study of charge distributions. In this work, we benchmark 48 density functionals chosen with various combinations of ingredients, against accurate experimental data for dipole moments of 78 molecules, in particular 55 single-reference molecules and 23 multi-reference ones. We chose both organic and inorganic molecules, and within the category of inorganic molecules there are both main-group and transition-metal-containing molecules, with some of them being multi-reference. As one would expect, the multi-reference molecules are not as well described by single-reference DFT, and the functionals tested in this work do show larger mean unsigned errors (MUEs) for the 23 multi-reference molecules than the single-reference ones. Five of the 78 molecules have relatively large experimental error bars and were therefore not included in calculating the overall MUEs. For the 73 molecules not excluded, we find that three of the hybrid functionals, B97-1, PBE0, and TPSSh (each with less than or equal to 25% Hartree-Fock (HF) exchange), the range-separated hybrid functional, HSE06 (with HF exchange decreasing from 25% to 0 as interelectronic distance increases), and the hybrid functional, PW6B95 (with 28% HF exchange) are the best performing functionals with each yielding an MUE of 0.18 D. Perhaps the most significant finding of this study is that there exists great similarity among the success rate of various functionals in predicting dipole moments. In particular, of 39
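The benchmark metric above, the mean unsigned error (MUE) of a functional's dipole moments against experiment, can be computed as follows; the example molecules and values are hypothetical:

```python
def mean_unsigned_error(predicted, reference):
    """MUE in debye: average absolute deviation from experiment."""
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

calc = [1.85, 0.00, 1.47]   # a functional's predicted dipole moments (D)
expt = [1.85, 0.10, 1.30]   # experimental values (D)
print(round(mean_unsigned_error(calc, expt), 3))  # -> 0.09
```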

  18. Discovery of a general method of solving the Schrödinger and Dirac equations that opens a way to accurately predictive quantum chemistry.

    Science.gov (United States)

    Nakatsuji, Hiroshi

    2012-09-18

    Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve SE and DE for atoms and molecules that included many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter simpler idea led to immediate and surprisingly accurate solution for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. These basis functions are called complement

  19. The Need for Accurate Risk Prediction Models for Road Mapping, Shared Decision Making and Care Planning for the Elderly with Advanced Chronic Kidney Disease.

    Science.gov (United States)

    Stryckers, Marijke; Nagler, Evi V; Van Biesen, Wim

    2016-11-01

    As people age, chronic kidney disease becomes more common, but it rarely leads to end-stage kidney disease. When it does, the choice between dialysis and conservative care can be daunting, as much depends on life expectancy and personal expectations of medical care. Shared decision making implies adequately informing patients about their options, and facilitating deliberation of the available information, such that decisions are tailored to the individual's values and preferences. Accurate estimates of one's risk of progression to end-stage kidney disease and of death with or without dialysis are essential for shared decision making to be effective. Formal risk prediction models can help, provided they are externally validated, well-calibrated and discriminative; include unambiguous and measurable variables; and come with readily applicable equations or scores. Reliable, externally validated risk prediction models for progression of chronic kidney disease to end-stage kidney disease or mortality in frail elderly people with or without chronic kidney disease are scant. Within this paper, we discuss a number of promising models, highlighting both the strengths and limitations physicians should understand in order to use them judiciously, and emphasize the need for external validation rather than new development to further advance the field.

  20. A simple, fast and accurate thermodynamic-based approach for transfer and prediction of GC retention times between columns and instruments Part III: Retention time prediction on target column.

    Science.gov (United States)

    Hou, Siyuan; Stevenson, Keisean A J M; Harynuk, James J

    2018-03-27

    This is the third part of a three-part series of papers. In Part I we presented a method for determining the actual effective geometry of a reference column as well as the thermodynamic-based parameters of a set of probe compounds in an in-house mixture. Part II introduced an approach for estimating the actual effective geometry of a target column by collecting retention data of the same mixture of probe compounds on the target column and using their thermodynamic parameters, acquired on the reference column, as a bridge between both systems. Part III, presented here, demonstrates the retention time transfer and prediction from the reference column to the target column using experimental data for a separate mixture of compounds. To predict the retention time of a new compound we first estimate its thermodynamic-based parameters on the reference column (using geometric parameters determined previously). The compound's retention time on a second column (of previously determined geometry) is then predicted. The models and the associated optimization algorithms were tested using simulated and experimental data. The accuracy of predicted retention times shows that the proposed approach is simple, fast and accurate for retention time transfer and prediction between gas chromatography columns. This article is protected by copyright. All rights reserved.
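
The prediction step described above can be sketched as a small numerical integration. The snippet below assumes an illustrative van 't Hoff-type retention model (ln k = a + b/T) and a constant hold-up time; the coefficients, temperature ramp, and hold-up time are made-up demonstration values, not the paper's fitted thermodynamic parameters or column geometry:

```python
import math

def retention_time(a, b, t_M=60.0, T0=313.15, ramp=0.1, dt=0.01):
    """Predict a GC retention time (s) under a linear temperature ramp.

    Illustrative simplification: ln k = a + b / T (van 't Hoff-type fit)
    with a constant hold-up time t_M. The solute elutes when the integral
    of dt / (t_M * (1 + k(T(t)))) reaches 1 (whole column traversed).
    """
    t, progress = 0.0, 0.0
    while progress < 1.0:
        T = T0 + ramp * t          # column temperature at time t (K)
        k = math.exp(a + b / T)    # retention factor at this temperature
        progress += dt / (t_M * (1.0 + k))
        t += dt
    return t

# A more strongly retained compound (larger b) should elute later.
t_light = retention_time(a=-10.0, b=4000.0)
t_heavy = retention_time(a=-10.0, b=4500.0)
```

The Euler step is deliberately simple; a production implementation would use an adaptive integrator and a temperature-dependent hold-up time.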

  1. Quantifying the retention of foam formulation components to sedimentary phases to enable predictions of mobility and treatment efficacy - 59369

    International Nuclear Information System (INIS)

    Ramirez, Rosa; Jansik, Danielle; Wellman, Dawn

    2012-01-01

    Document available in abstract form only. Full text of publication follows: Deep vadose zone remediation remains the most challenging remediation problem in the DOE Complex. Foam delivery technology is being developed as a method for delivering remedial amendments within vadose zone environments for in situ contaminant stabilization. Thus far, the physical propagation of foam within subsurface media has been evaluated and quantified. However, foam propagation is a product of surfactant sorption, which directly impacts foam stability. In order to predict the stability of foam during subsurface transport, it is necessary to quantify the sorption of foam components as a function of concentration, competitive sorption, sediment mineralogy, and temperature. This investigation provides the results of standard static batch tests quantifying these relationships. High Performance Liquid Chromatography (HPLC) was used to measure surfactant concentrations. The results of this investigation provide the necessary understanding to predict foam stability during subsurface transport and determination of the remedial radius of influence. This study is part of a multiple-step process for demonstrating the feasibility of foam transport to distribute amendments within the vadose zone. (authors)

  2. bSiteFinder, an improved protein-binding sites prediction server based on structural alignment: more accurate and less time-consuming.

    Science.gov (United States)

    Gao, Jun; Zhang, Qingchen; Liu, Min; Zhu, Lixin; Wu, Dingfeng; Cao, Zhiwei; Zhu, Ruixin

    2016-01-01

    Protein-binding site prediction lays a foundation for functional annotation of proteins and structure-based drug design. As the number of available protein structures increases, structural alignment-based algorithms have become the dominant approach for protein-binding site prediction. However, present algorithms underutilize the ever-increasing number of three-dimensional protein-ligand complex structures (bound proteins), and could be improved in the alignment process, the selection of templates, and the clustering of templates. Herein, we built the largest database of bound templates to date, with stringent quality control, and on this basis developed bSiteFinder, a protein-binding site prediction server. By introducing Homology Indexing, Chain Length Indexing, Stability of Complex and Optimized Multiple-Templates Clustering into our algorithm, the efficiency of our server has been significantly improved. Further, the accuracy was approximately 2-10 % higher than that of other algorithms in tests with either bound or unbound datasets. For the 210 bound dataset, bSiteFinder achieved accuracies up to 94.8 % (MCC 0.95). For another 48 bound/unbound dataset, it achieved accuracies up to 93.8 % for bound proteins (MCC 0.95) and 85.4 % for unbound proteins (MCC 0.72). Our bSiteFinder server is freely available at http://binfo.shmtu.edu.cn/bsitefinder/, and the source code is provided on the methods page. Our work lays a foundation for functional annotation of proteins and structure-based drug design. With ever-increasing numbers of three-dimensional protein-ligand complex structures, our server should be more accurate and less time-consuming. Graphical Abstract: bSiteFinder (http://binfo.shmtu.edu.cn/bsitefinder/), a protein-binding site prediction server, was developed based on the largest database to date of bound templates with stringent quality

  3. Quantitative Imaging of Turbulent Mixing Dynamics in High-Pressure Fuel Injection to Enable Predictive Simulations of Engine Combustion

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Jonathan H. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Reacting Flows Dept.; Pickett, Lyle M. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Engine Combustion Dept.; Bisson, Scott E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Remote Sensing and Energetic Materials Dept.; Patterson, Brian D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). combustion Chemistry Dept.; Ruggles, Adam J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Reacting Flows Dept.; Skeen, Scott A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Engine Combustion Dept.; Manin, Julien Luc [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Engine Combustion Dept.; Huang, Erxiong [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Reacting Flows Dept.; Cicone, Dave J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Engine Combustion Dept.; Sphicas, Panos [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Engine Combustion Dept.

    2015-09-01

    In this LDRD project, we developed a capability for quantitative high-speed imaging measurements of high-pressure fuel injection dynamics to advance understanding of turbulent mixing in transcritical flows, ignition, and flame stabilization mechanisms, and to provide essential validation data for developing predictive tools for engine combustion simulations. Advanced, fuel-efficient engine technologies rely on fuel injection into a high-pressure, high-temperature environment for mixture preparation and combustion. However, the dynamics of fuel injection are not well understood and pose significant experimental and modeling challenges. To address the need for quantitative high-speed measurements, we developed a Nd:YAG laser that provides a 5 ms burst of pulses at 100 kHz on a robust mobile platform. Using this laser, we demonstrated spatially and temporally resolved Rayleigh scattering imaging and particle image velocimetry measurements of turbulent mixing in high-pressure gas-phase flows and vaporizing sprays. Quantitative interpretation of high-pressure measurements was advanced by reducing and correcting interferences and imaging artifacts.

  4. High-order feature-based mixture models of classification learning predict individual learning curves and enable personalized teaching.

    Science.gov (United States)

    Cohen, Yarden; Schneidman, Elad

    2013-01-08

    Pattern classification learning tasks are commonly used to explore learning strategies in human subjects. The universal and individual traits of learning such tasks reflect our cognitive abilities and have been of interest both psychophysically and clinically. From a computational perspective, these tasks are hard, because the number of patterns and rules one could consider even in simple cases is exponentially large. Thus, when we learn to classify we must use simplifying assumptions and generalize. Studies of human behavior in probabilistic learning tasks have focused on rules in which pattern cues are independent, and also described individual behavior in terms of simple, single-cue, feature-based models. Here, we conducted psychophysical experiments in which people learned to classify binary sequences according to deterministic rules of different complexity, including high-order, multicue-dependent rules. We show that human performance on such tasks is very diverse, but that a class of reinforcement learning-like models that use a mixture of features captures individual learning behavior surprisingly well. These models reflect the important role of subjects' priors, and their reliance on high-order features even when learning a low-order rule. Further, we show that these models predict future individual answers to a high degree of accuracy. We then use these models to build personally optimized teaching sessions and boost learning.
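
A toy version of such a feature-mixture learner, under heavy simplifying assumptions (binary cues, single-cue features only, a plain reinforcement-style weight update; none of this is the authors' exact model), might look like:

```python
import random

def run_learner(rule, n_trials=2000, lr=0.1, n_cues=4, seed=0):
    """Toy mixture-of-features learner for binary-sequence classification.

    Each single-cue feature votes for a label, votes are mixed by learned
    weights, and weights of features that agreed with the feedback are
    reinforced (a reinforcement-learning-like update). Returns the
    fraction of correct guesses over the session.
    """
    rng = random.Random(seed)
    w = [1.0] * n_cues                                  # mixture weights over features
    correct = 0
    for _ in range(n_trials):
        pattern = [rng.choice([0, 1]) for _ in range(n_cues)]
        votes = [(1 if c else -1) for c in pattern]     # feature i votes by cue i
        score = sum(wi * v for wi, v in zip(w, votes))
        guess = 1 if score > 0 else 0
        label = rule(pattern)
        correct += guess == label
        for i, v in enumerate(votes):                   # reinforce agreeing features
            agreed = (v > 0) == (label == 1)
            w[i] += lr if agreed else -lr
            w[i] = max(w[i], 0.0)                       # keep weights non-negative
    return correct / n_trials

# A low-order rule: the label is simply cue 0. The weight on feature 0
# grows steadily, so performance climbs well above chance.
acc = run_learner(rule=lambda p: p[0])
```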

  5. A non-parametric mixture model for genome-enabled prediction of genetic value for a quantitative trait.

    Science.gov (United States)

    Gianola, Daniel; Wu, Xiao-Lin; Manfredi, Eduardo; Simianer, Henner

    2010-10-01

    A Bayesian nonparametric form of regression based on Dirichlet process priors is adapted to the analysis of quantitative traits possibly affected by cryptic forms of gene action, and to the context of SNP-assisted genomic selection, where the main objective is to predict a genomic signal on phenotype. The procedure clusters unknown genotypes into groups with distinct genetic values, but in a setting in which the number of clusters is unknown a priori, so that standard methods for finite mixture analysis do not work. The central assumption is that genetic effects follow an unknown distribution with some "baseline" family, which is a normal process in the cases considered here. A Bayesian analysis based on the Gibbs sampler produces estimates of the number of clusters, posterior means of genetic effects, a measure of credibility in the baseline distribution, as well as estimates of parameters of the latter. The procedure is illustrated with a simulation representing two populations. In the first one, there are 3 unknown QTL, with additive, dominance and epistatic effects; in the second, there are 10 QTL with additive, dominance and additive × additive epistatic effects. In the two populations, baseline parameters are inferred correctly. The Dirichlet process model infers the number of unique genetic values correctly in the first population, but it produces an underestimate in the second one; here, the true number of clusters is over 900, and the model gives a posterior mean estimate of about 140, probably because more replication of genotypes is needed for correct inference. The impact on inferences of the prior distribution of a key parameter (M), and of the extent of replication, was examined via an analysis of mean body weight in 192 paternal half-sib families of broiler chickens, where each sire was genotyped for nearly 7,000 SNPs. In this small sample, it was found that inference about the number of clusters was affected by the prior distribution of M. For a

  6. Application of neural networks with back-propagation to genome-enabled prediction of complex traits in Holstein-Friesian and German Fleckvieh cattle.

    Science.gov (United States)

    Ehret, Anita; Hochstuhl, David; Gianola, Daniel; Thaller, Georg

    2015-03-31

    Recently, artificial neural networks (ANN) have been proposed as promising machines for marker-based genomic predictions of complex traits in animal and plant breeding. ANN are universal approximators of complex functions, that can capture cryptic relationships between SNPs (single nucleotide polymorphisms) and phenotypic values without the need of explicitly defining a genetic model. This concept is attractive for high-dimensional and noisy data, especially when the genetic architecture of the trait is unknown. However, the properties of ANN for the prediction of future outcomes of genomic selection using real data are not well characterized and, due to high computational costs, using whole-genome marker sets is difficult. We examined different non-linear network architectures, as well as several genomic covariate structures as network inputs in order to assess their ability to predict milk traits in three dairy cattle data sets using large-scale SNP data. For training, a regularized back propagation algorithm was used. The average correlation between the observed and predicted phenotypes in a 20 times 5-fold cross-validation was used to assess predictive ability. A linear network model served as benchmark. Predictive abilities of different ANN models varied markedly, whereas differences between data sets were small. Dimension reduction methods enhanced prediction performance in all data sets, while at the same time computational cost decreased. For the Holstein-Friesian bull data set, an ANN with 10 neurons in the hidden layer achieved a predictive correlation of r=0.47 for milk yield when the entire marker matrix was used. Predictive ability increased when the genomic relationship matrix (r=0.64) was used as input and was best (r=0.67) when principal component scores of the marker genotypes were used. Similar results were found for the other traits in all data sets. Artificial neural networks are powerful machines for non-linear genome-enabled predictions in
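
One of the genomic covariate structures mentioned above, the genomic relationship matrix, can be sketched as follows. The VanRaden-style scaling and the 0/1/2 genotype coding are assumptions for illustration, not necessarily the study's exact preprocessing:

```python
import numpy as np

def genomic_relationship(M):
    """Genomic relationship matrix (VanRaden-style) from a SNP matrix.

    M is animals x markers with genotypes coded 0/1/2. Genotypes are
    centred by twice the allele frequency and the cross-product is
    scaled so the expected diagonal is near 1.
    """
    p = M.mean(axis=0) / 2.0               # allele frequency per marker
    Z = M - 2.0 * p                        # centred genotypes
    denom = 2.0 * np.sum(p * (1.0 - p))    # VanRaden scaling factor
    return Z @ Z.T / denom

# Simulated marker matrix: 20 animals, 500 SNPs.
rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(20, 500)).astype(float)
G = genomic_relationship(M)
```

Feeding G (or its principal component scores) to a network reduces the input dimension from the marker count to the number of animals, which is one reason the study saw lower computational cost with these inputs.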

  7. Cosmological constraints from the CFHTLenS shear measurements using a new, accurate, and flexible way of predicting non-linear mass clustering

    Science.gov (United States)

    Angulo, Raul E.; Hilbert, Stefan

    2015-03-01

    We explore the cosmological constraints from cosmic shear using a new way of modelling the non-linear matter correlation functions. The new formalism extends the method of Angulo & White, which manipulates outputs of N-body simulations to represent the 3D non-linear mass distribution in different cosmological scenarios. We show that predictions from our approach for shear two-point correlations at 1-300 arcmin separations are accurate at the ~10 per cent level, even for extreme changes in cosmology. For moderate changes, with target cosmologies similar to that preferred by analyses of recent Planck data, the accuracy is close to ~5 per cent. We combine this approach with a Monte Carlo Markov chain sampler to explore constraints on a Λ cold dark matter model from the shear correlation functions measured in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We obtain constraints on the parameter combination σ8(Ωm/0.27)^0.6 = 0.801 ± 0.028. Combined with results from cosmic microwave background data, we obtain marginalized constraints on σ8 = 0.81 ± 0.01 and Ωm = 0.29 ± 0.01. These results are statistically compatible with previous analyses, which supports the validity of our approach. We discuss the advantages of our method and the potential it offers, including a path to model in detail (i) the effects of baryons, (ii) high-order shear correlation functions, and (iii) galaxy-galaxy lensing, among others, in future high-precision cosmological analyses.

  8. Spectral analysis-based risk score enables early prediction of mortality and cerebral performance in patients undergoing therapeutic hypothermia for ventricular fibrillation and comatose status

    Science.gov (United States)

    Filgueiras-Rama, David; Calvo, Conrado J.; Salvador-Montañés, Óscar; Cádenas, Rosalía; Ruiz-Cantador, Jose; Armada, Eduardo; Rey, Juan Ramón; Merino, J.L.; Peinado, Rafael; Pérez-Castellano, Nicasio; Pérez-Villacastín, Julián; Quintanilla, Jorge G.; Jiménez, Santiago; Castells, Francisco; Chorro, Francisco J.; López-Sendón, J.L.; Berenfeld, Omer; Jalife, José; López de Sá, Esteban; Millet, José

    2017-01-01

    Background Early prognosis in comatose survivors after cardiac arrest due to ventricular fibrillation (VF) is unreliable, especially in patients undergoing mild hypothermia. We aimed at developing a reliable risk-score to enable early prediction of cerebral performance and survival. Methods Sixty-one out of 239 consecutive patients undergoing mild hypothermia after cardiac arrest, with eventual return of spontaneous circulation (ROSC), and comatose status on admission fulfilled the inclusion criteria. Background clinical variables, VF time and frequency domain fundamental variables were considered. The primary and secondary outcomes were a favorable neurological performance (FNP) during hospitalization and survival to hospital discharge, respectively. The predictive model was developed in a retrospective cohort (n=32; September 2006–September 2011, 48.5 ± 10.5 months of follow-up) and further validated in a prospective cohort (n = 29; October 2011–July 2013, 5 ± 1.8 months of follow-up). Results FNP was present in 16 (50.0%) and 21 patients (72.4%) in the retrospective and prospective cohorts, respectively. Seventeen (53.1%) and 21 patients (72.4%), respectively, survived to hospital discharge. Both outcomes were significantly associated (p < 0.001). Retrospective multivariate analysis provided a prediction model (sensitivity= 0.94, specificity = 1) that included spectral dominant frequency, derived power density and peak ratios between high and low frequency bands, and the number of shocks delivered before ROSC. Validation on the prospective cohort showed sensitivity = 0.88 and specificity = 0.91. A model-derived risk-score properly predicted 93% of FNP. Testing the model on follow-up showed a c-statistic ≥ 0.89. Conclusions A spectral analysis-based model reliably correlates time-dependent VF spectral changes with acute cerebral injury in comatose survivors undergoing mild hypothermia after cardiac arrest. PMID:25828128
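
The spectral dominant frequency used by the model can be illustrated with a minimal FFT-based sketch; the sampling rate and the synthetic VF-like signal below are assumptions for demonstration only, not the study's recordings:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the spectral dominant frequency (Hz) of a 1-D signal.

    One spectral variable of the kind used by the risk model: the
    frequency bin with the highest power in the FFT power spectrum
    (DC component removed first).
    """
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
# Synthetic VF-like trace: a 5.5 Hz oscillation plus noise.
vf_like = np.sin(2 * np.pi * 5.5 * t) \
    + 0.3 * np.random.default_rng(0).normal(size=t.size)
f_dom = dominant_frequency(vf_like, fs)
```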

  9. Prediction of collision cross section and retention time for broad scope screening in gradient reversed-phase liquid chromatography-ion mobility-high resolution accurate mass spectrometry

    DEFF Research Database (Denmark)

    Mollerup, Christian Brinch; Mardal, Marie; Dalsgaard, Petur Weihe

    2018-01-01

    Exact mass, retention time (RT), and collision cross section (CCS) are used as identification parameters in liquid chromatography coupled to ion mobility high resolution accurate mass spectrometry (LC-IM-HRMS). Targeted screening analyses are now more flexible and can be expanded for suspect...

  10. Deep nirS amplicon sequencing of San Francisco Bay sediments enables prediction of geography and environmental conditions from denitrifying community composition.

    Science.gov (United States)

    Lee, Jessica A; Francis, Christopher A

    2017-12-01

    Denitrification is a dominant nitrogen loss process in the sediments of San Francisco Bay. In this study, we sought to understand the ecology of denitrifying bacteria by using next-generation sequencing (NGS) to survey the diversity of a denitrification functional gene, nirS (encoding cytochrome cd1 nitrite reductase), along the salinity gradient of San Francisco Bay over the course of a year. We compared our dataset to a library of nirS sequences obtained previously from the same samples by standard PCR cloning and Sanger sequencing, and showed that both methods similarly demonstrated geography, salinity and, to a lesser extent, nitrogen, to be strong determinants of community composition. Furthermore, the depth afforded by NGS enabled novel techniques for measuring the association between environment and community composition. We used Random Forests modelling to demonstrate that the site and salinity of a sample could be predicted from its nirS sequences, and to identify indicator taxa associated with those environmental characteristics. This work contributes significantly to our understanding of the distribution and dynamics of denitrifying communities in San Francisco Bay, and provides valuable tools for the further study of this key N-cycling guild in all estuarine systems. © 2017 Society for Applied Microbiology and John Wiley & Sons Ltd.

  11. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11.

    Science.gov (United States)

    Lundegaard, Claus; Lamberth, Kasper; Harndahl, Mikkel; Buus, Søren; Lund, Ole; Nielsen, Morten

    2008-07-01

    NetMHC-3.0 is trained on a large number of quantitative peptide data using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC): peptide binding. The predictions are based on artificial neural networks trained on data from 55 MHC alleles (43 Human and 12 non-human), and position-specific scoring matrices (PSSMs) for an additional 67 HLA alleles. As only the MHC class I prediction server is available, predictions are possible for peptides of length 8-11 for all 122 alleles. Artificial neural network predictions are given as actual IC(50) values, whereas PSSM predictions are given as log-odds likelihood scores. The output is optionally available as a download for easy post-processing. The training method underlying the server is the best available, and has been used to predict possible MHC-binding peptides in a series of pathogen viral proteomes including SARS, Influenza and HIV, resulting in an average of 75-80% confirmed MHC binders. Here, the performance is further validated and benchmarked using a large set of newly published affinity data, non-redundant to the training set. The server is free of use and available at: http://www.cbs.dtu.dk/services/NetMHC.
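
The PSSM half of the server scores a peptide by summing per-position log-odds terms. A toy sketch with a made-up three-position matrix over a three-letter alphabet (real NetMHC matrices cover all 20 amino acids and peptide lengths 8-11, with values fitted to binding data):

```python
# Hypothetical position-specific scoring matrix for 3-mer "peptides";
# the entries are invented for illustration only.
PSSM = {
    0: {"A": 1.2, "G": -0.5, "L": 0.1},
    1: {"A": -0.3, "G": 0.8, "L": 0.4},
    2: {"A": 0.2, "G": -1.0, "L": 1.5},
}

def pssm_score(peptide):
    """Log-odds likelihood score: sum of per-position matrix entries."""
    return sum(PSSM[i][aa] for i, aa in enumerate(peptide))

good = pssm_score("AGL")   # 1.2 + 0.8 + 1.5 = 3.5
bad = pssm_score("GAG")    # -0.5 - 0.3 - 1.0 = -1.8
```

Ranking candidate peptides by such scores is how PSSM-based screens shortlist likely binders before experimental confirmation.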

  12. Accurate prediction of subcellular location of apoptosis proteins combining Chou’s PseAAC and PsePSSM based on wavelet denoising

    Science.gov (United States)

    Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Wang, Ming-Hui; Zhang, Yan

    2017-01-01

    Subcellular localization information for apoptosis proteins is very important for understanding the mechanism of programmed cell death and the development of drugs. Predicting the subcellular localization of an apoptosis protein remains a challenging task, and such predictions can help in understanding protein function and the role of metabolic processes. In this paper, we propose a novel method for protein subcellular localization prediction. Firstly, the features of the protein sequence are extracted by combining Chou's pseudo amino acid composition (PseAAC) and pseudo-position specific scoring matrix (PsePSSM); then the extracted feature information is denoised by two-dimensional (2-D) wavelet denoising. Finally, the optimal feature vectors are input to the SVM classifier to predict the subcellular location of apoptosis proteins. Quite promising predictions are obtained using the jackknife test on three widely used datasets and compared with other state-of-the-art methods. The results indicate that the method proposed in this paper can remarkably improve the prediction accuracy of apoptosis protein subcellular localization, which will be a supplementary tool for future proteomics research. PMID:29296195
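
The wavelet-denoising step can be illustrated in one dimension with a single-level Haar transform and soft thresholding (the paper applies 2-D wavelet denoising to the combined PseAAC/PsePSSM features; the signal and threshold below are illustrative):

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet denoising of a 1-D feature vector (even length).

    Split the signal into approximation and detail coefficients,
    soft-threshold the details (where most of the noise lives), then
    apply the inverse Haar transform.
    """
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)                   # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)                   # detail
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2.0)                       # inverse transform
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(2)
clean = np.repeat([1.0, 3.0, 2.0, 4.0], 8)    # piecewise-constant "features"
noisy = clean + 0.2 * rng.normal(size=clean.size)
denoised = haar_denoise(noisy, threshold=0.3)
```

On a piecewise-constant signal the true detail coefficients are zero, so shrinking them removes noise while the block structure survives.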

  13. Human glycemic response curves after intake of carbohydrate foods are accurately predicted by combining in vitro gastrointestinal digestion with in silico kinetic modeling

    Directory of Open Access Journals (Sweden)

    Susann Bellmann

    2018-02-01

    Conclusion: Based on the demonstrated accuracy and predictive quality, this in vitro–in silico technology can be used for the testing of food products on their glycemic response under standardized conditions and may stimulate the production of (slow) carbs for the prevention of metabolic diseases.

  14. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lamberth, K; Harndahl, M

    2008-01-01

    NetMHC-3.0 is trained on a large number of quantitative peptide data using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC): peptide binding...

  15. Fecal Calprotectin is an Accurate Tool and Correlated to Seo Index in Prediction of Relapse in Iranian Patients With Ulcerative Colitis.

    Science.gov (United States)

    Hosseini, Seyed Vahid; Jafari, Peyman; Taghavi, Seyed Alireza; Safarpour, Ali Reza; Rezaianzadeh, Abbas; Moini, Maryam; Mehrabi, Manoosh

    2015-02-01

    The natural clinical course of Ulcerative Colitis (UC) is characterized by episodes of relapse and remission. Fecal Calprotectin (FC) is a relatively new marker of intestinal inflammation and is an available, non-expensive tool for predicting relapse of quiescent UC. The Seo colitis activity index is a clinical index for assessment of the severity of UC. The present study aimed to evaluate the accuracy of FC and the Seo colitis activity index and their correlation in prediction of UC exacerbation. In this prospective cohort study, 157 patients with clinical and endoscopic diagnosis of UC selected randomly from 1273 registered patients in Fars province's IBD registry center in Shiraz, Iran, were followed from October 2012 to October 2013 for 12 months or shorter, if they had a relapse. Two patients left the study before completion and one patient had relapse because of discontinuation of drugs. The participants' clinical and serum factors were evaluated every three months. Furthermore, stool samples were collected at the beginning of study and every three months and FC concentration (commercially available enzyme linked immunoassay) and the Seo Index were assessed. Then univariate analysis, multiple variable logistic regression, Receiver Operating Characteristics (ROC) curve analysis, and Pearson's correlation test (r) were used for statistical analysis of data. According to the results, 74 patients (48.1%) relapsed during the follow-up (33 men and 41 women). Mean ± SD of FC was 862.82 ± 655.97 μg/g and 163.19 ± 215.85 μg/g in relapsing and non-relapsing patients, respectively (P Seo index were significant predictors of relapse. ROC curve analysis of FC level and Seo activity index for prediction of relapse demonstrated area under the curve of 0.882 (P Seo index was significant in prediction of relapse (r = 0.63, P Seo activity index in prediction of relapse in the course of quiescent UC in Iranian patients.

  16. Closed-loop spontaneous baroreflex transfer function is inappropriate for system identification of neural arc but partly accurate for peripheral arc: predictability analysis

    Science.gov (United States)

    Kamiya, Atsunori; Kawada, Toru; Shimizu, Shuji; Sugimachi, Masaru

    2011-01-01

    Although the dynamic characteristics of the baroreflex system have been described by baroreflex transfer functions obtained from open-loop analysis, the predictability of time-series output dynamics from input signals, which should confirm the accuracy of system identification, remains to be elucidated. Moreover, despite theoretical concerns over closed-loop system identification, the accuracy and the predictability of the closed-loop spontaneous baroreflex transfer function have not been evaluated compared with the open-loop transfer function. Using urethane- and α-chloralose-anaesthetized, vagotomized and aortic-denervated rabbits (n = 10), we identified open-loop baroreflex transfer functions by recording renal sympathetic nerve activity (SNA) while varying the vascularly isolated intracarotid sinus pressure (CSP) according to a binary random (white-noise) sequence (operating pressure ± 20 mmHg), and used a simplified equation to calculate the closed-loop spontaneous baroreflex transfer function while matching CSP with systemic arterial pressure (AP). Our results showed that the open-loop baroreflex transfer functions for the neural and peripheral arcs predicted the time-series SNA and AP outputs from measured CSP and SNA inputs, with r2 of 0.8 ± 0.1 and 0.8 ± 0.1, respectively. In contrast, the closed-loop spontaneous baroreflex transfer function for the neural arc was markedly different from the open-loop transfer function (an enhanced gain increase and a phase lead), and did not predict the time-series SNA dynamics (r2 = 0.1 ± 0.1). However, the closed-loop spontaneous baroreflex transfer function of the peripheral arc partially matched the open-loop transfer function in gain and phase, and had limited but reasonable predictability of the time-series AP dynamics (r2 = 0.7 ± 0.1). A numerical simulation suggested that noise predominantly in the neural arc under resting conditions might be a possible mechanism responsible for our findings.

  17. A random forest based risk model for reliable and accurate prediction of receipt of transfusion in patients undergoing percutaneous coronary intervention.

    Directory of Open Access Journals (Sweden)

    Hitinder S Gurm

    Full Text Available BACKGROUND: Transfusion is a common complication of Percutaneous Coronary Intervention (PCI) and is associated with adverse short- and long-term outcomes. There is no risk model for identifying patients most likely to receive transfusion after PCI. The objective of our study was to develop and validate a tool for predicting receipt of blood transfusion in patients undergoing contemporary PCI. METHODS: Random forest models were developed utilizing 45 pre-procedural clinical and laboratory variables to estimate the receipt of transfusion in patients undergoing PCI. The most influential variables were selected for inclusion in an abbreviated model. Model performance estimating transfusion was evaluated in an independent validation dataset using area under the ROC curve (AUC), with net reclassification improvement (NRI) used to compare full and reduced model prediction after grouping into low, intermediate, and high risk categories. The impact of procedural anticoagulation on observed versus predicted transfusion rates was assessed for the different risk categories. RESULTS: Our study cohort was comprised of 103,294 PCI procedures performed at 46 hospitals between July 2009 and December 2012 in Michigan, of which 72,328 (70%) were randomly selected for training the models, and 30,966 (30%) for validation. The models demonstrated excellent calibration and discrimination (AUC: full model = 0.888 (95% CI 0.877-0.899), reduced model AUC = 0.880 (95% CI 0.868-0.892), p for difference 0.003; NRI = 2.77%, p = 0.007). Procedural anticoagulation and radial access significantly influenced transfusion rates in the intermediate and high risk patients, but no clinically relevant impact was noted in low risk patients, who made up 70% of the total cohort. CONCLUSIONS: The risk of transfusion among patients undergoing PCI can be reliably calculated using a novel, easy-to-use computational tool (https://bmc2.org/calculators/transfusion). This risk prediction
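
The discrimination metric reported above, the area under the ROC curve, can be computed from scratch via the rank-sum (Mann-Whitney) identity; the labels and risk scores below are made-up values, not the study's data:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum identity.

    AUC equals the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case;
    tied scores count as half a concordant pair.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    concordant = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))

# Transfused patients (label 1) should receive higher predicted risk.
labels = [0, 0, 0, 1, 1, 0, 1, 0]
scores = [0.10, 0.20, 0.35, 0.80, 0.65, 0.70, 0.90, 0.15]
a = auc(labels, scores)   # 14/15 ≈ 0.933
```

The quadratic pairwise loop is fine for illustration; for cohorts the size of the study's, a rank-based O(n log n) formulation is the practical choice.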

  18. Is measurement of cervical length an accurate predictive tool in women with a history of preterm delivery who present with threatened preterm labor?

    Science.gov (United States)

    Melamed, N; Hiersch, L; Meizner, I; Bardin, R; Wiznitzer, A; Yogev, Y

    2014-12-01

    To determine whether sonographically measured cervical length is an effective predictive tool in women with threatened preterm labor and a history of past spontaneous preterm delivery. This was a retrospective cohort study of all women with singleton pregnancies who presented with preterm labor at less than 34 + 0 weeks' gestation and underwent sonographic measurement of cervical length in a tertiary medical center between 2007 and 2012. The accuracy of cervical length in predicting preterm delivery was compared between women with and those without a history of spontaneous preterm delivery. Women with risk factors for preterm delivery other than a history of preterm delivery were excluded from both groups. Overall, 1023 women who presented with preterm labor met the study criteria, of whom 136 (13.3%) had a history of preterm delivery (past-PTD group) and 887 (86.7%) had no risk factors for preterm delivery (low-risk group). The rate of preterm delivery was significantly higher for women with a history of preterm delivery (36.8% vs 22.5%). Cervical length correlated with the admission-to-delivery interval in low-risk women (r = 0.32) but not in women with a history of preterm delivery (r = 0.07, P = 0.4). On multivariable analysis, cervical length was independently associated with the risk of preterm delivery for women in the low-risk group but not for women with a history of previous preterm delivery. For women with previous preterm delivery who presented with threatened preterm labor, cervical length failed to distinguish between those who did and those who did not deliver prematurely (area under the receiver-operating characteristics curve range, 0.475-0.506). When using standardized thresholds, the sensitivity and specificity of cervical length for the prediction of preterm delivery were significantly lower in women with previous preterm delivery than in women with no risk factors for preterm delivery. Cervical length appears to be of limited value in the prediction of preterm delivery among women with threatened preterm labor

  19. Pre-procedural renal resistive index accurately predicts contrast-induced acute kidney injury in patients with preserved renal function submitted to coronary angiography.

    Science.gov (United States)

    Wybraniec, Maciej T; Bożentowicz-Wikarek, Maria; Chudek, Jerzy; Mizia-Stec, Katarzyna

    2017-05-01

    The study aimed to evaluate the clinical utility of ultrasonographic intra-renal blood flow parameters, together with a wide range of different risk factors, for the prediction of contrast-induced acute kidney injury (CI-AKI) in patients with preserved renal function referred for coronary angiography or percutaneous coronary interventions (CA/PCI). This prospective study covered 95 consecutive patients (69.5% men; median age 65 years) subject to elective or urgent CA/PCI. Data regarding 128 peri-procedural variables were collected. Ultrasonographic intra-renal blood flow parameters, including renal resistive index (RRI) and pulsatility index (RPI), were acquired directly before the procedure. CI-AKI was defined as ≥50% relative or ≥0.3 mg/dL absolute increase of serum creatinine 48 h after the procedure. CI-AKI was confirmed in nine patients (9.5%). Patients with CI-AKI had a higher SYNTAX score (p = 0.0002) and a higher rate of left main disease. An RRI > 0.69 had 78% sensitivity and 81% specificity in CI-AKI prediction. High pre-procedural RRI seems to be a useful novel risk factor for CI-AKI in patients with preserved renal function. Coronary, peripheral and renal vascular pathology contribute to the development of CI-AKI following CA/PCI.
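    The CI-AKI definition used in this study (≥50% relative or ≥0.3 mg/dL absolute serum-creatinine rise at 48 h) reduces to a simple rule; a minimal sketch with illustrative values:

```python
def is_ci_aki(creatinine_baseline_mg_dl, creatinine_48h_mg_dl):
    """Contrast-induced AKI per the study definition: >=50% relative
    or >=0.3 mg/dL absolute serum-creatinine increase 48 h post-procedure."""
    rise = creatinine_48h_mg_dl - creatinine_baseline_mg_dl
    relative = rise / creatinine_baseline_mg_dl
    return rise >= 0.3 or relative >= 0.5

print(is_ci_aki(1.0, 1.25))  # -> False (25% rise, +0.25 mg/dL)
print(is_ci_aki(1.0, 1.35))  # -> True  (+0.35 mg/dL absolute)
print(is_ci_aki(0.5, 0.78))  # -> True  (56% relative rise)
```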

  20. A New Strategy for Accurately Predicting I-V Electrical Characteristics of PV Modules Using a Nonlinear Five-Point Model

    Directory of Open Access Journals (Sweden)

    Sakaros Bogning Dongue

    2013-01-01

    Full Text Available This paper presents the modelling of the electrical I-V response of illuminated crystalline photovoltaic modules. As an alternative to the linear five-parameter model, our strategy exploits a nonlinear analytical five-point model to take into account the nonlinear variation of current with solar irradiance and of voltage with cell temperature. In this work we predicted with great accuracy the I-V characteristics of monocrystalline Shell SP75 and polycrystalline GESOLAR GE-P70 photovoltaic modules. The close agreement of our calculated results with experimental data provided by the module manufacturers demonstrates the benefit of taking into account the nonlinear effect of operating conditions on the I-V characteristics of photovoltaic modules.
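    The paper's own five-point formulation is not reproduced in this record, but both the linear five-parameter baseline and its nonlinear refinements build on the implicit single-diode relation I = Iph − I0·(exp((V + I·Rs)/a) − 1) − (V + I·Rs)/Rsh. A minimal sketch of solving that relation for module current by bisection; all parameter values are illustrative, not those of the SP75 or GE-P70 modules:

```python
import math

# Illustrative single-diode parameters for a hypothetical 36-cell module.
IPH, I0, N, RS, RSH, NS, VT = 5.0, 1e-9, 1.3, 0.2, 300.0, 36, 0.0259
A = N * NS * VT  # modified ideality factor (volts)

def module_current(v, tol=1e-9):
    """Solve I = Iph - I0*(exp((V+I*Rs)/a) - 1) - (V+I*Rs)/Rsh for I.

    The residual f is strictly decreasing in I, so bisection is robust.
    """
    def f(i):
        return IPH - I0 * (math.exp((v + i * RS) / A) - 1) - (v + i * RS) / RSH - i

    lo, hi = -2.0, IPH + 1.0  # bracket guaranteed to straddle the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Trace a few points of the I-V curve:
for v in (0.0, 10.0, 20.0, 27.0):
    print(f"V = {v:5.1f} V  ->  I = {module_current(v):6.3f} A")
```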

  1. Cervical assessment at 22 and 27 weeks for the prediction of spontaneous birth before 34 weeks in twin pregnancies: is transvaginal sonography more accurate than digital examination?

    Science.gov (United States)

    Vayssière, C; Favre, R; Audibert, F; Chauvet, M P; Gaucherand, P; Tardif, D; Grangé, G; Novoa, A; Descamps, P; Perdu, M; Andrini, E; Janse-Marec, J; Maillard, F; Nisand, I

    2005-12-01

    This study compared the accuracy of ultrasound cervical assessment (cervical length and cervical index) and digital examination (Bishop score and cervical score) in the prediction of spontaneous birth before 34 weeks in twin pregnancies. In a prospective multicenter study, digital examination and transvaginal sonography were performed consecutively in twin pregnancies attending for routine sonography at either 22 weeks (175 women) or 27 weeks (153 women). The digital examination took place first, and the Bishop score and cervical score (cervical length minus cervical dilatation) were calculated. Ultrasound measurements were then made of cervical length and funnel length to yield the cervical index (1 + funnel length/cervical length). The association between each variable and delivery before 34 weeks was tested by the Mann-Whitney U-test. The receiver-operating characteristics (ROC) curves of the ultrasound and digital indicators were determined for both gestational age periods, and the areas under the ROC curves compared. The best cut-off values for each indicator were used to determine predictive values for delivery before 34 weeks. The median gestational age at delivery among the women included in the 22-week examination period was 36.0 (range, 21-40) weeks; 10.9% (19) gave birth spontaneously before 34 weeks. The median cervical length was 40 (range, 6-65) mm. All four parameters were predictors of delivery before 34 weeks. The areas under the ROC curves for cervical index, cervical length, Bishop score and cervical score did not differ significantly. The median gestational age at delivery among the women in the 27-week examination period was 36.0 (range, 27-40) weeks; 9.2% (14) gave birth spontaneously before 34 weeks. The median cervical length was 35 (range, 1-57) mm. All parameters except the Bishop score were predictors of delivery before 34 weeks. 
The likelihood ratio of the positive and negative tests for cervical length digital examination at the 27-week
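    The two composite indicators defined in this record reduce to one-line formulas: the cervical score is cervical length minus cervical dilatation, and the cervical index is 1 + funnel length / cervical length. A minimal sketch with hypothetical measurements:

```python
def cervical_score(cervical_length, dilatation):
    """Digital-exam cervical score: cervical length minus cervical dilatation
    (both in the same length unit, as in the source formula)."""
    return cervical_length - dilatation

def cervical_index(funnel_length, cervical_length):
    """Ultrasound cervical index: 1 + funnel length / cervical length."""
    return 1 + funnel_length / cervical_length

# Hypothetical measurements in mm:
print(cervical_score(40, 10))   # -> 30
print(cervical_index(8, 40))    # -> 1.2
```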

  2. Skinfold Prediction Equations Fail to Provide an Accurate Estimate of Body Composition in Elite Rugby Union Athletes of Caucasian and Polynesian Ethnicity.

    Science.gov (United States)

    Zemski, Adam J; Broad, Elizabeth M; Slater, Gary J

    2018-01-01

    Body composition in elite rugby union athletes is routinely assessed using surface anthropometry, which can be utilized to provide estimates of absolute body composition using regression equations. This study aims to assess the ability of available skinfold equations to estimate body composition in elite rugby union athletes who have unique physique traits and divergent ethnicity. The development of sport-specific and ethnicity-sensitive equations was also pursued. Forty-three male international Australian rugby union athletes of Caucasian and Polynesian descent underwent surface anthropometry and dual-energy X-ray absorptiometry (DXA) assessment. Body fat percent (BF%) was estimated using five previously developed equations and compared to DXA measures. Novel sport and ethnicity-sensitive prediction equations were developed using forward selection multiple regression analysis. Existing skinfold equations provided unsatisfactory estimates of BF% in elite rugby union athletes, with all equations demonstrating a 95% prediction interval in excess of 5%. The equations tended to underestimate BF% at low levels of adiposity, whilst overestimating BF% at higher levels of adiposity, regardless of ethnicity. The novel equations created explained a similar amount of variance to those previously developed (Caucasians 75%, Polynesians 90%). The use of skinfold equations, including the created equations, cannot be supported to estimate absolute body composition. Until a population-specific equation is established that can be validated to precisely estimate body composition, it is advocated to use a proven method, such as DXA, when absolute measures of lean and fat mass are desired, and raw anthropometry data routinely to derive an estimate of body composition change.
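    A sport-specific prediction equation of the kind pursued here is, in essence, a multiple regression of DXA body-fat percentage on anthropometric predictors. A minimal sketch on synthetic data; the predictor set and the "true" coefficients are invented for illustration and are not the study's equation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic athletes: sum of skinfolds (mm) and body mass (kg).
n = 40
skinfold_sum = rng.uniform(40, 120, n)
body_mass = rng.uniform(85, 125, n)

# Generate DXA BF% from an invented linear relation (noise-free for clarity).
bf_dxa = 2.0 + 0.12 * skinfold_sum + 0.03 * body_mass

# Fit BF% ~ intercept + skinfold sum + body mass by least squares.
X = np.column_stack([np.ones(n), skinfold_sum, body_mass])
coef, *_ = np.linalg.lstsq(X, bf_dxa, rcond=None)
print(np.round(coef, 3))  # recovers the generating coefficients exactly
```

On real data the fit would of course carry residual error, which is what the 95% prediction intervals quoted in the abstract capture.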

  3. TU-EF-204-01: Accurate Prediction of CT Tube Current Modulation: Estimating Tube Current Modulation Schemes for Voxelized Patient Models Used in Monte Carlo Simulations

    Energy Technology Data Exchange (ETDEWEB)

    McMillan, K; Bostani, M; McNitt-Gray, M [UCLA School of Medicine, Los Angeles, CA (United States); McCollough, C [Mayo Clinic, Rochester, MN (United States)

    2015-06-15

    Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph (“topogram”) and image data were collected. Methods were devised to estimate the complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself (“actual-topo”) and (b) from a simulated topogram (“sim-topo”) derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods (“actual-topo” and “sim-topo”). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes (“actual-topo” and “sim-topo”) were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not

  4. N0/N1, PNL, or LNR? The effect of lymph node number on accurate survival prediction in pancreatic ductal adenocarcinoma.

    Science.gov (United States)

    Valsangkar, Nakul P; Bush, Devon M; Michaelson, James S; Ferrone, Cristina R; Wargo, Jennifer A; Lillemoe, Keith D; Fernández-del Castillo, Carlos; Warshaw, Andrew L; Thayer, Sarah P

    2013-02-01

    We evaluated the prognostic accuracy of LN status (N0/N1), the number of positive lymph nodes (PLN), and the lymph node ratio (LNR) in the context of the total number of examined lymph nodes (ELN). Patients from SEER and a single institution (MGH) were reviewed and survival analyses performed in subgroups based on numbers of ELN to calculate the excess risk of death (hazard ratio, HR). In SEER and MGH, higher numbers of ELN improved the overall survival for N0 patients. The prognostic significance of N0/N1 status and PLN was too variable, as the importance of a single PLN depended on the total number of LN dissected. LNR consistently correlated with survival once a certain number of lymph nodes were dissected (≥13 in SEER and ≥17 in the MGH dataset). Better survival for N0 patients with increasing ELN likely represents improved staging. PLN have some predictive value, but the ELN strongly influence their impact on survival, suggesting the need for a ratio-based classification. LNR strongly correlates with outcome provided that a certain number of lymph nodes is evaluated, suggesting that the prognostic accuracy of any LN variable depends on the total number of ELN.
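    The lymph-node ratio (LNR) discussed here is simply positive nodes over examined nodes, interpreted only once enough nodes were examined; a minimal sketch using the SEER threshold from the abstract:

```python
def lymph_node_ratio(positive_nodes, examined_nodes):
    """LNR = positive lymph nodes / examined lymph nodes."""
    if examined_nodes == 0:
        raise ValueError("no lymph nodes examined")
    return positive_nodes / examined_nodes

def lnr_is_informative(examined_nodes, min_examined=13):
    """Per the abstract, LNR correlated with survival in SEER only once
    >=13 nodes were examined (>=17 in the MGH dataset)."""
    return examined_nodes >= min_examined

print(lymph_node_ratio(3, 15))   # -> 0.2
print(lnr_is_informative(15))    # -> True
print(lnr_is_informative(10))    # -> False
```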

  5. Genome-enabled selection doubles the accuracy of predicted breeding values for bacterial cold water disease resistance compared to traditional family-based selection in rainbow trout aquaculture

    Science.gov (United States)

    We have shown previously that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative enabling exploitation...

  6. Accurate sperm morphology assessment predicts sperm function.

    Science.gov (United States)

    Abu Hassan Abu, D; Franken, D R; Hoffman, B; Henkel, R

    2012-05-01

    Sperm morphology has been associated with in vitro as well as in vivo fertilisation. The study aimed to evaluate the possible relation between the percentage of spermatozoa with normal morphology and the following sperm functional assays: (i) zona-induced acrosome reaction (ZIAR); (ii) DNA integrity; (iii) chromatin condensation; (iv) sperm apoptosis; and (v) fertilisation rates. Regression analysis was employed to calculate the association between morphology and the different functional tests. Normal sperm morphology correlated significantly with the percentage of live acrosome-reacted spermatozoa in the ZIAR (r = 0.518), sperm apoptosis (r = -0.395; P = 0.0206; n = 34) and necrosis (r = -0.545; P = 0.0009; n = 34). Negative correlations existed for the acrosome reaction and DNA integrity, while negative associations were recorded with the percentages of CMA(3)-positive, apoptotic and necrotic spermatozoa. Sperm morphology is related to sperm dysfunction such as poor chromatin condensation, impaired acrosome reaction and compromised DNA integrity. Negative and significant correlations existed between normal sperm morphology and chromatin condensation, the percentage of spermatozoa with abnormal DNA and spermatozoa with apoptotic activity. The authors do not regard sperm morphology as the only test for the diagnosis of male fertility, but sperm morphology can serve as a valuable indicator of underlying dysfunction. © 2011 Blackwell Verlag GmbH.

  7. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and A Posteriori Error Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Ginting, Victor

    2014-03-15

    It was demonstrated that a posteriori analyses in general, and in particular those using adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities included: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.

  8. Evaluation of genome-enabled selection for bacterial cold water disease resistance using progeny performance data in Rainbow Trout: Insights on genotyping methods and genomic prediction models

    Science.gov (United States)

    Bacterial cold water disease (BCWD) causes significant economic losses in salmonid aquaculture, and traditional family-based breeding programs aimed at improving BCWD resistance have been limited to exploiting only between-family variation. We used genomic selection (GS) models to predict genomic br...

  9. The Large-scale Coronal Structure of the 2017 August 21 Great American Eclipse: An Assessment of Solar Surface Flux Transport Model Enabled Predictions and Observations

    Science.gov (United States)

    Nandy, Dibyendu; Bhowmik, Prantika; Yeates, Anthony R.; Panda, Suman; Tarafder, Rajashik; Dash, Soumyaranjan

    2018-01-01

    On 2017 August 21, a total solar eclipse swept across the contiguous United States, providing excellent opportunities for diagnostics of the Sun’s corona. The Sun’s coronal structure is notoriously difficult to observe except during solar eclipses; thus, theoretical models must be relied upon for inferring the underlying magnetic structure of the Sun’s outer atmosphere. These models are necessary for understanding the role of magnetic fields in the heating of the corona to a million degrees and the generation of severe space weather. Here we present a methodology for predicting the structure of the coronal field based on model forward runs of a solar surface flux transport model, whose predicted surface field is utilized to extrapolate future coronal magnetic field structures. This prescription was applied to the 2017 August 21 solar eclipse. A post-eclipse analysis shows good agreement between model-simulated and observed coronal structures and their locations on the limb. We demonstrate that slow changes in the Sun’s surface magnetic field distribution, driven by long-term flux emergence and its evolution, govern large-scale coronal structures with a (plausibly cycle-phase dependent) dynamical memory timescale on the order of a few solar rotations, opening up the possibility of large-scale, global corona predictions at least a month in advance.

  10. Accuracy of prediction scores and novel biomarkers for predicting nonalcoholic fatty liver disease in obese children

    NARCIS (Netherlands)

    Koot, Bart G. P.; van der Baan-Slootweg, Olga H.; Bohte, Anneloes E.; Nederveen, Aart J.; van Werven, Jochem R.; Tamminga-Smeulders, Christine L. J.; Merkus, Maruschka P.; Schaap, Frank G.; Jansen, Peter L. M.; Stoker, Jaap; Benninga, Marc A.

    2013-01-01

    Accurate prediction scores for liver steatosis are needed to enable clinicians to noninvasively screen for nonalcoholic fatty liver disease (NAFLD). Several prediction scores have been developed; however, external validation is lacking. The aim was to determine the diagnostic accuracy of four

  11. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    Science.gov (United States)

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  12. Accurate thickness measurement of graphene

    International Nuclear Information System (INIS)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-01-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. (paper)

  13. Accurate vehicle classification including motorcycles using piezoelectric sensors.

    Science.gov (United States)

    2013-03-01

    State and federal departments of transportation are charged with classifying vehicles and monitoring mileage traveled. Accurate data reporting enables suitable roadway design for safety and capacity. Vehicle classifiers currently employ inductive loo...

  14. Groundwater recharge: Accurately representing evapotranspiration

    CSIR Research Space (South Africa)

    Bugan, Richard DH

    2011-09-01

    Full Text Available Groundwater recharge is the basis for accurate estimation of groundwater resources, for determining the modes of water allocation and groundwater resource susceptibility to climate change. Accurate estimations of groundwater recharge with models...

  15. Accurate Frequency Determination of Vibration-Rotation and Rotational Transitions of SiH+

    Science.gov (United States)

    Doménech, José L.; Schlemmer, Stephan; Asvany, Oskar

    2017-01-01

    The fundamental 28SiH+ ion has been characterized in a collaborative work, utilizing a hollow-cathode-discharge laser-spectrometer and a cryogenic ion trap spectrometer. Twenty-three vibration-rotation transitions around 4.75 μm have been detected with high accuracy. This has facilitated the first direct measurement of the pure rotational transition J = 1 ← 0 at 453056.3632(4) MHz in the trap spectrometer. The measured and accurately predicted transitions enable the search for this ion in space with IR and sub-mm telescopes. PMID:29142330

  16. The Nordic Housing Enabler

    DEFF Research Database (Denmark)

    Helle, Tina; Slaug, Bjørn; Brandt, Åse

    2010-01-01

    This study addresses development of a content valid cross-Nordic version of the Housing Enabler and investigation of its inter-rater reliability when used in occupational therapy rating situations, involving occupational therapists, clients and their home environments. The instrument was translated...... from the original Swedish version of the Housing Enabler, and adapted according to accessibility norms and guidelines for housing design in Sweden, Denmark, Finland and Iceland. This iterative process involved occupational therapists, architects, building engineers and professional translators......, resulting in the Nordic Housing Enabler. For reliability testing, the sampling strategy and data collection procedures used were the same in all countries. Twenty voluntary occupational therapists, pair-wise but independently from each other, collected data from 106 cases by means of the Nordic Housing...

  17. Organising to Enable Innovation

    DEFF Research Database (Denmark)

    Brink, Tove

    2016-01-01

    . The findings reveal a continuous organising process between individual/team creativity and organisational structures/control to enable innovation at firm level. Organising provides a dynamic approach and contains the integrated reconstruction of creativity, structures and boundaries for enhanced balance......The purpose of this conceptual paper is to reveal how organising can enable innovation across organisational layers and organisational units. This approach calls for a cross-disciplinary literature review. The aim is to provide an integrated understanding of innovation in an organisational approach...... of explorative and exploitative learning in uncertain environments. Shedding light on the cross-disciplinary theories to organise innovation provides a contribution at the firm level to enable innovation.

  18. Nordic Housing Enabler

    DEFF Research Database (Denmark)

    Helle, Tina; Brandt, Åse

    . For reliability testing, the sample strategy and data collection procedures were the same in all countries. In total, twenty voluntary occupational therapists collected data from 106 cases by means of the Nordic Housing Enabler. Inter-rater reliability was calculated by means of percentage agreement and kappa......Development and reliability testing of the Nordic Housing Enabler – an instrument for accessibility assessment of the physical housing. Tina Helle & Åse Brandt University of Lund, Health Sciences, Faculty of Medicine (SE) and University College Northern Jutland, Occupational Therapy department (DK......, however, the built environment shows serious deficits when it comes to accessibility. This study addresses development of a content valid cross-Nordic version of the Housing Enabler and investigation of inter-rater reliability, when used in occupational therapy practice. The instrument was translated from...

  19. The Nordic Housing Enabler

    DEFF Research Database (Denmark)

    Helle, T.; Nygren, C.; Slaug, B.

    2014-01-01

    , resulting in the Nordic Housing Enabler. For reliability testing, the sampling strategy and data collection procedures used were the same in all countries. Twenty voluntary occupational therapists, pair-wise but independently of each other, collected data from 106 cases by means of the Nordic Housing......This study addresses development of a content-valid cross-Nordic version of the Housing Enabler and investigation of its inter-rater reliability when used in occupational therapy rating situations, involving occupational therapists, clients, and their home environments. The instrument...... Enabler. Inter-rater reliability was calculated by means of percentage agreement and kappa statistics. Overall good percentage agreement for the personal and environmental components of the instrument was shown, indicating that the instrument was sufficiently reliable for application in practice...

  20. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose...... the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. ......InTraTime allows users to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...

  1. Pilot project as enabler?

    DEFF Research Database (Denmark)

    Neisig, Margit; Glimø, Helle; Holm, Catrine Granzow

    This article deals with a systemic perspective on transition. The field of study addressed is a pilot project as enabler of transition in a highly complex polycentric context. From a Luhmannian systemic approach, a framework is created to understand and address barriers of change occurred using...... pilot projects as enabler of transition. Aspects of how to create trust and deal with distrust during a transition are addressed. The transition in focus is the concept of New Public Management and how it is applied in the management of the Employment Service in Denmark. The transition regards...

  2. Enabling distributed collaborative science

    DEFF Research Database (Denmark)

    Hudson, T.; Sonnenwald, Diane H.; Maglaughlin, K.

    2000-01-01

    To enable collaboration over distance, a collaborative environment that uses a specialized scientific instrument called a nanoManipulator is evaluated. The nanoManipulator incorporates visualization and force feedback technology to allow scientists to see, feel, and modify biological samples being...

  3. Whole-genome regression and prediction methods applied to plant and animal breeding

    NARCIS (Netherlands)

    Los Campos, De G.; Hickey, J.M.; Pong-Wong, R.; Daetwyler, H.D.; Calus, M.P.L.

    2013-01-01

    Genomic-enabled prediction is becoming increasingly important in animal and plant breeding, and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of
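    Whole-genome regression of the kind described regresses phenotypes on thousands of markers simultaneously, which requires shrinkage because markers outnumber individuals. A minimal ridge-regression (GBLUP-style) sketch on synthetic genotypes; the dimensions, penalty value, and simulated genetic architecture are all arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(1)

n_ind, n_snp = 200, 1000                              # fewer individuals than markers
X = rng.integers(0, 3, (n_ind, n_snp)).astype(float)  # 0/1/2 genotype codes
X -= X.mean(axis=0)                                   # center marker codes

beta_true = np.zeros(n_snp)
beta_true[:20] = rng.normal(0, 0.5, 20)               # 20 causal SNPs
y = X @ beta_true + rng.normal(0, 1.0, n_ind)         # phenotype = genetics + noise

# Ridge solution: (X'X + lambda*I)^{-1} X'y shrinks all marker effects.
lam = 50.0
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_snp), X.T @ y)

gebv = X @ beta_hat                                   # genomic predictions
accuracy = np.corrcoef(gebv, X @ beta_true)[0, 1]     # vs. true genetic values
print(f"prediction accuracy: {accuracy:.2f}")
```

In practice the penalty would be tuned (e.g. by cross-validation) and accuracy assessed on individuals held out of training, as in the progeny-testing designs described in these records.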

  4. Pilot project as enabler?

    DEFF Research Database (Denmark)

    Neisig, Margit; Glimø, Helle; Holm, Catrine Granzow

    pilot projects as enabler of transition. Aspects of how to create trust and deal with distrust during a transition are addressed. The transition in focus is the concept of New Public Management and how it is applied in the management of the Employment Service in Denmark. The transition regards......This article deals with a systemic perspective on transition. The field of study addressed is a pilot project as enabler of transition in a highly complex polycentric context. From a Luhmannian systemic approach, a framework is created to understand and address barriers of change occurred using...... the systemic change from a very control based and detailed regulated version of New Public Management towards a system allowing more flexibility and decentralized decision making empowering municipalities as well as employees own decision making...

  5. Nordic Housing Enabler

    DEFF Research Database (Denmark)

    Helle, Tina; Brandt, Åse

    2009-01-01

    Development and reliability testing of the Nordic Housing Enabler – an instrument for accessibility assessment of the physical housing. Tina Helle & Åse Brandt, University of Lund, Health Sciences, Faculty of Medicine (SE) and University College Northern Jutland, Occupational Therapy department (DK). Danish Centre for Assistive Technology. Abstract: For decades, accessibility to the physical housing environment for people with functional limitations has been of interest politically, professionally and for the users. Guidelines and norms on accessible housing design have gradually been developed; however, the built environment shows serious deficits when it comes to accessibility. This study addresses the development of a content-valid cross-Nordic version of the Housing Enabler and investigation of inter-rater reliability when used in occupational therapy practice. The instrument was translated from...

  6. Enabling Wind Power Nationwide

    Energy Technology Data Exchange (ETDEWEB)

    Zayas, Jose; Derby, Michael; Gilman, Patrick; Ananthan, Shreyas

    2015-05-01

    The U.S. Department of Energy's (DOE's) Wind and Water Power Technologies Office has evaluated the potential for wind power to generate electricity in all 50 states. This report analyzes and quantifies the geographic expansion that could be enabled by accessing higher above-ground heights for wind turbines and considers the means by which this new potential could be responsibly developed.

  7. Assessing the most accurate formula to predict the risk of lymph node metastases from prostate cancer in contemporary patients treated with radical prostatectomy and extended pelvic lymph node dissection

    International Nuclear Information System (INIS)

    Abdollah, Firas; Cozzarini, Cesare; Sun, Maxine; Suardi, Nazareno; Gallina, Andrea; Passoni, Niccolò Maria; Bianchi, Marco; Tutolo, Manuela; Fossati, Nicola; Nini, Alessandro; Dell’Oglio, Paolo; Salonia, Andrea; Karakiewicz, Pierre; Montorsi, Francesco; Briganti, Alberto

    2013-01-01

    Background and purpose: The aim of this study was to perform a head-to-head comparison of the Roach formula vs. two other newly developed prediction tools for lymph node invasion (LNI) in prostate cancer, namely the Nguyen and the Yu formulas. Material and methods: We included 3115 patients treated with radical prostatectomy and extended pelvic lymph node dissection (ePLND), between 2000 and 2010 at a single center. The predictive accuracy of the three formulas was assessed and compared using the area-under-curve (AUC) and calibration methods. Moreover, decision curve analysis compared the net-benefit of the three formulas in a head-to-head fashion. Results: Overall, 10.8% of patients had LNI. The LNI-predicted risk was >15% in 25.5%, 3.4%, and 10.2% of patients according to the Roach, Nguyen and Yu formula, respectively. The AUC was 80.5%, 80.5% and 79%, respectively (all p > 0.05). However, the Roach formula demonstrated more favorable calibration and generated the highest net-benefit relative to the other examined formulas in decision curve analysis. Conclusions: All formulas demonstrated high and comparable discrimination accuracy in predicting LNI, when externally validated on ePLND treated patients. However, the Roach formula showed the most favorable characteristics. Therefore, its use should be preferred over the two other tools
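The AUC comparison above has a simple probabilistic reading: a formula's AUC is the probability that a randomly chosen patient with lymph node invasion receives a higher predicted risk than a randomly chosen patient without it. A minimal sketch of that rank-based (Mann–Whitney) computation, using made-up risk scores rather than the study's data:

```python
def auc(scores_pos, scores_neg):
    # AUC as the Mann-Whitney statistic: the fraction of (positive, negative)
    # pairs in which the positive case gets the higher score, ties counted half.
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted LNI risks for patients with and without nodal invasion
with_lni = [0.9, 0.8, 0.4]
without_lni = [0.5, 0.3, 0.2, 0.1]
print(auc(with_lni, without_lni))  # 11 of 12 pairs ranked correctly
```

Calibration and decision-curve net benefit, which the abstract uses to break the tie between formulas with equal AUC, ask a different question: not whether risks are ranked correctly, but whether their absolute values are usable for a treatment threshold.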

  8. Accurate Evaluation of Quantum Integrals

    Science.gov (United States)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving Schrödinger's equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
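The idea of extrapolating crude-mesh results can be illustrated with the textbook form of Richardson's extrapolation applied to a finite-difference derivative (a generic sketch of the technique, not the paper's Schrödinger solver):

```python
import math

def central_diff(f, x, h):
    # Second-order central difference approximation of f'(x); error is O(h^2).
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h, levels=4):
    # Build a Richardson tableau from step sizes h, h/2, h/4, ...
    # Each new column cancels the next even power of h in the error expansion.
    T = [[central_diff(f, x, h / 2**i)] for i in range(levels)]
    for j in range(1, levels):
        for i in range(j, levels):
            factor = 4.0**j
            T[i].append((factor * T[i][j - 1] - T[i - 1][j - 1]) / (factor - 1.0))
    return T[levels - 1][levels - 1]

# A coarse step gives ~1e-2 error; four extrapolation levels recover
# the derivative of sin at x=1 (cos(1)) to near machine precision.
print(central_diff(math.sin, 1.0, 0.5))
print(richardson(math.sin, 1.0, 0.5))
```

The same cancellation argument is why the abstract's approach of extrapolating expectation values computed on a crude mesh works: the leading error terms are known powers of the mesh spacing.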

  9. Collision detection and prediction using a mutual configuration state approach

    NARCIS (Netherlands)

    Schoute, Albert L.; Weiss, N.; Jesse, N.; Reusch, B.

    A configuration state approach is presented that simplifies the mutual collision analysis of objects with known shapes that move along known paths. Accurate and fast prediction of contact situations in games such as robot soccer enables improved anticipatory and corrective actions of the state

  10. EnableATIS strategy assessment.

    Science.gov (United States)

    2014-02-01

    Enabling Advanced Traveler Information Systems (EnableATIS) is the traveler information component of the Dynamic Mobility Application (DMA) program. The objective of the EnableATIS effort is to foster transformative traveler information application...

  11. Enabling Digital Literacy

    DEFF Research Database (Denmark)

    Ryberg, Thomas; Georgsen, Marianne

    2010-01-01

    There are some tensions between high-level policy definitions of “digital literacy” and actual teaching practice. We need to find workable definitions of digital literacy; obtain a better understanding of what digital literacy might look like in practice; and identify pedagogical approaches which support teachers in designing digital literacy learning. We suggest that frameworks such as Problem Based Learning (PBL) are approaches that enable digital literacy learning because they provide good settings for engaging with digital literacy. We illustrate this through analysis of a case. Furthermore, these operate on a meso-level, mediating between high-level concepts of digital literacy and classroom practice.

  12. Informatics enables public health surveillance

    Directory of Open Access Journals (Sweden)

    Scott J. N McNabb

    2017-01-01

    Over the past decade, the world has radically changed. New advances in information and communication technologies (ICT) connect the world in ways never imagined. Public health informatics (PHI), leveraged for public health surveillance (PHS), can enable, enhance, and empower essential PHS functions (i.e., detection, reporting, confirmation, analyses, feedback, response). However, the tail doesn't wag the dog; ICT cannot (and should not) drive public health surveillance strengthening. Rather, ICT can serve PHS to more effectively empower core functions. In this review, we explore promising ICT trends for prevention, detection, and response; laboratory reporting; push notification; analytics; predictive surveillance; and the use of new data sources, while recognizing that it is the people, politics, and policies that most challenge progress in implementing solutions.

  13. Smart Grid Enabled EVSE

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2015-01-12

    The combined team of GE Global Research, Federal Express, National Renewable Energy Laboratory, and Consolidated Edison has successfully achieved the established goals contained within the Department of Energy’s Smart Grid Capable Electric Vehicle Supply Equipment funding opportunity. The final program product, shown charging two vehicles in Figure 1, reduces by nearly 50% the total installed system cost of the electric vehicle supply equipment (EVSE) as well as enabling a host of new Smart Grid enabled features. These include bi-directional communications, load control, utility message exchange and transaction management information. Using the new charging system, Utilities or energy service providers will now be able to monitor transportation related electrical loads on their distribution networks, send load control commands or preferences to individual systems, and then see measured responses. Installation owners will be able to authorize usage of the stations, monitor operations, and optimally control their electricity consumption. These features and cost reductions have been developed through a total system design solution.

  14. Detection of Dysferlin Gene Pathogenic Variants in the Indian Population in Patients Predicted to have a Dysferlinopathy Using a Blood-based Monocyte Assay and Clinical Algorithm: A Model for Accurate and Cost-effective Diagnosis.

    Science.gov (United States)

    Dastur, Rashna Sam; Gaitonde, Pradnya Satish; Kachwala, Munira; Nallamilli, Babi R R; Ankala, Arunkanth; Khadilkar, Satish V; Atchayaram, Nalini; Gayathri, N; Meena, A K; Rufibach, Laura; Shira, Sarah; Hegde, Madhuri

    2017-01-01

    Limb-girdle muscular dystrophy (LGMD) is the most common adult-onset class of muscular dystrophies in India, but a majority of suspected LGMDs in India remain unclassified to the genetic subtype level. Next-generation sequencing (NGS)-based approaches have allowed molecular characterization and subtype diagnosis in a majority of these patients in India. The aims were: (I) to select probable dysferlinopathy (LGMD2B) cases from other LGMD subtypes using two screening methods: (i) determining the status of dysferlin protein expression in blood (peripheral blood mononuclear cells) by monocyte assay, and (ii) using a predictive algorithm called the automated LGMD diagnostic assistant (ALDA) to obtain possible LGMD subtypes based on clinical symptoms; and (II) to identify gene pathogenic variants by NGS for 34 genes associated with LGMD or LGMD-like muscular dystrophies in cases showing absence of dysferlin protein by the monocyte assay and/or a typical dysferlinopathy phenotype with medium to high predictive scores using the ALDA tool. Of the 125 patients screened by NGS, 96 were confirmed with two dysferlin variants, of which 84 were homozygous. Single dysferlin pathogenic variants were seen in 4 patients, whereas 25 showed no variants in the dysferlin gene. In this study, 98.2% of patients with absence of the dysferlin protein showed one or more variants in the dysferlin gene, so the assay has high predictive significance in diagnosing dysferlinopathies. However, collection of blood samples from all over India for protein analysis is expensive. Our analysis shows that use of the ALDA tool could be a cost-effective alternative method. Identification of dysferlin pathogenic variants by NGS is the ultimate method for diagnosing dysferlinopathies, though follow-up with the monocyte assay can be useful to understand the phenotype in relation to dysferlin protein expression and can also serve as a useful biomarker for future clinical trials.

  15. Systemic inflammatory response syndrome and model for end-stage liver disease score accurately predict the in-hospital mortality of black African patients with decompensated cirrhosis at initial hospitalization: a retrospective cohort study

    Directory of Open Access Journals (Sweden)

    Mahassadi AK

    2018-04-01

    Alassan Kouamé Mahassadi,1 Justine Laure Konang Nguieguia,1 Henriette Ya Kissi,1 Anthony Afum-Adjei Awuah,2 Aboubacar Demba Bangoura,1 Stanislas Adjeka Doffou,1 Alain Koffi Attia1 1Medicine and Hepatogastroenterology Unit, Centre Hospitalier et Universitaire de Yopougon, Abidjan, Côte d’Ivoire; 2Kumasi Centre for Collaborative Research in Tropical Medicine, Kumasi, Ghana. Background: Systemic inflammatory response syndrome (SIRS) and model for end-stage liver disease (MELD) score predict short-term mortality in patients with cirrhosis. Prediction of mortality at initial hospitalization is unknown in black African patients with decompensated cirrhosis. Aim: This study aimed to assess the role of the MELD score and SIRS as predictors of morbidity and mortality at initial hospitalization. Patients and methods: In this retrospective cohort study, we enrolled 159 patients with cirrhosis (median age: 49 years; 70.4% males). The role of the Child–Pugh–Turcotte (CPT) score, MELD score, and SIRS on mortality was determined by the Kaplan–Meier method, and prognostic factors were assessed with a Cox regression model. Results: At initial hospitalization, 74.2%, 20.1%, and 37.7% of the patients with cirrhosis showed the presence of ascites, hepatorenal syndrome, and esophageal varices, respectively. During the in-hospital follow-up, 40 (25.2%) patients died. The overall incidence of mortality was 3.1 (95% confidence interval [CI]: 2.2–4.1) per 100 person-days. Survival probabilities were higher in patients who were SIRS negative (log-rank test = 4.51, p = 0.03) and in patients with MELD score ≤16 (log-rank test = 7.26, p = 0.01) compared to patients who were SIRS positive and those with MELD score >16. Only SIRS (hazard ratio [HR] = 3.02 [95% CI: 1.4–7.4], p = 0.01) and MELD score >16 (HR = 2.2 [95% CI: 1.1–4.3], p = 0.02) were independent predictors of mortality in multivariate analysis, except CPT, which was not relevant in our study
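For reference, the MELD score used for risk stratification here is a simple logarithmic combination of three laboratory values. The sketch below implements the classic (pre-2016) UNOS formula with the usual flooring and capping conventions; the study may have used a slightly different variant, so treat this as illustrative. Thresholds such as the study's MELD > 16 cutoff apply to this integer score:

```python
import math

def meld(bilirubin_mg_dl, inr, creatinine_mg_dl):
    # Classic UNOS MELD: lab values below 1.0 are floored at 1.0 and
    # creatinine is capped at 4.0 mg/dL, per the standard convention.
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili) + 11.2 * math.log(inr)
             + 9.57 * math.log(crea) + 6.43)
    return round(score)

print(meld(1.0, 1.0, 1.0))  # 6, the minimum possible score
print(meld(3.0, 1.5, 2.0))  # a decompensated patient above the >16 cutoff
```

Because all three inputs enter through logarithms, the score is most sensitive to changes at the low end of each lab value, which is why the floor at 1.0 matters.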

  16. Accurate modeling of parallel scientific computations

    Science.gov (United States)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
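The kind of performance model described can be sketched as a per-iteration cost estimate: each processor pays for its local cells plus its boundary exchange, and the synchronized step finishes when the slowest processor does. This is an illustrative model, not the paper's; the constants `t_cell` and `t_comm` stand in for measured calibration parameters:

```python
def predict_step_time(cells_per_proc, boundary_per_proc, t_cell, t_comm):
    # Per-iteration time for a partitioned grid computation: each processor's
    # cost is its compute load plus its communication volume; the step time
    # is set by the slowest processor.
    return max(c * t_cell + b * t_comm
               for c, b in zip(cells_per_proc, boundary_per_proc))

# A balanced partition beats a skewed one with the same total work:
balanced = predict_step_time([100, 100], [10, 10], t_cell=1e-6, t_comm=5e-6)
skewed = predict_step_time([150, 50], [10, 10], t_cell=1e-6, t_comm=5e-6)
print(balanced < skewed)  # True
```

A remapping decision then reduces to comparing the predicted step time of the current partition against the predicted time of a candidate partition minus the amortized cost of the remap itself.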

  17. Spatially enabled land administration

    DEFF Research Database (Denmark)

    Enemark, Stig

    2006-01-01

    … enabling of land administration systems managing tenure, valuation, planning, and development will allow the information generated by these activities to be much more useful. Also, the services available to private and public sectors and to community organisations should commensurably improve. Knowledge … the communication between administrative systems, and also establish more reliable data due to the use of the original data instead of copies. In Denmark, such governmental guidelines for a service-oriented IT architecture in support of e-government have recently been adopted. Finally, the paper presents the role of FIG in terms of developing relevant land information policies in the Latin American and Caribbean region. The focus is on establishing an awareness of the value of integrating the land administration/cadastre/land registration function with the topographic mapping function. In other words: the value...

  18. Enabling graphene nanoelectronics.

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Wei; Ohta, Taisuke; Biedermann, Laura Butler; Gutierrez, Carlos; Nolen, C. M.; Howell, Stephen Wayne; Beechem Iii, Thomas Edwin; McCarty, Kevin F.; Ross, Anthony Joseph, III

    2011-09-01

    Recent work has shown that graphene, a 2D electronic material amenable to planar semiconductor fabrication processing, possesses tunable electronic material properties potentially far superior to metals and other standard semiconductors. Despite its phenomenal electronic properties, focused research is still required to develop techniques for depositing and synthesizing graphene over large areas, thereby enabling the reproducible mass-fabrication of graphene-based devices. To address these issues, we combined an array of growth approaches and characterization resources to investigate several innovative and synergistic approaches for the synthesis of high-quality graphene films on technologically relevant substrates (SiC and metals). Our work focused on developing the fundamental scientific understanding necessary to generate large-area graphene films that exhibit highly uniform electronic properties and record carrier mobility, as well as developing techniques to transfer graphene onto other substrates.

  19. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Sargent, T.O.

    1981-01-01

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes: conditioned behavior and knowledge-based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge-based behavior, are described in detail.

  20. Does the Spectrum model accurately predict trends in adult mortality? Evaluation of model estimates using empirical data from a rural HIV community cohort study in north-western Tanzania

    Directory of Open Access Journals (Sweden)

    Denna Michael

    2014-01-01

    Introduction: Spectrum epidemiological models are used by UNAIDS to provide global, regional and national HIV estimates and projections, which are then used for evidence-based health planning for HIV services. However, there are no validations of the Spectrum model against empirical serological and mortality data from populations in sub-Saharan Africa. Methods: Serologic, demographic and verbal autopsy data have been regularly collected among over 30,000 residents in north-western Tanzania since 1994. Five-year age-specific mortality rates (ASMRs) per 1,000 person-years and the probability of dying between 15 and 60 years of age (45Q15) were calculated and compared with the Spectrum model outputs. Mortality trends by HIV status are shown for periods before the introduction of antiretroviral therapy (1994–1999, 2000–2004) and the first 5 years afterwards (2005–2009). Results: Among 30–34 year olds of both sexes, observed ASMRs per 1,000 person-years were 13.33 (95% CI: 10.75–16.52) in the period 1994–1999, 11.03 (95% CI: 8.84–13.77) in 2000–2004, and 6.22 (95% CI: 4.75–8.15) in 2005–2009. Among the same age group, the ASMRs estimated by the Spectrum model were 10.55, 11.13 and 8.15 for the periods 1994–1999, 2000–2004 and 2005–2009, respectively. The cohort data, for both sexes combined, showed that the 45Q15 declined from 39% (95% CI: 27–55%) in 1994 to 22% (95% CI: 17–29%) in 2009, whereas the Spectrum model predicted a decline from 43% in 1994 to 37% in 2009. Conclusion: From 1994 to 2009, the observed decrease in ASMRs was steeper in younger age groups than that predicted by the Spectrum model, perhaps because the Spectrum model under-estimated the ASMRs in 30–34 year olds in 1994–1999. However, the Spectrum model predicted a greater decrease in 45Q15 mortality than observed in the cohort, although the reasons for this over-estimate are unclear.
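The 45Q15 summary used above is derived from age-specific rates in the standard demographic way: convert each 5-year mortality rate to an interval death probability, then chain survival across the nine age groups from 15–19 to 55–59. A sketch with illustrative rates (not the cohort's data), assuming deaths fall on average halfway through each interval:

```python
def q5(m):
    # Convert an age-specific mortality rate m (per person-year) to the
    # probability of dying within the 5-year interval, assuming deaths
    # occur on average halfway through the interval: 5q_x = 5m / (1 + 2.5m).
    return 5.0 * m / (1.0 + 2.5 * m)

def q45_15(asmrs):
    # asmrs: mortality rates for the nine 5-year groups 15-19 ... 55-59.
    # 45Q15 is one minus the chained probability of surviving all intervals.
    p_survive = 1.0
    for m in asmrs:
        p_survive *= 1.0 - q5(m)
    return 1.0 - p_survive

# Illustrative (not observed) rates per person-year for ages 15-19 ... 55-59
rates = [0.004, 0.006, 0.010, 0.013, 0.012, 0.010, 0.009, 0.010, 0.012]
print(round(q45_15(rates), 3))
```

Because 45Q15 chains all nine age groups, an under-estimate confined to one group (as the abstract suggests for 30–34 year olds) propagates into the summary measure.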

  1. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power...

  2. Beyond classification: gene-family phylogenies from shotgun metagenomic reads enable accurate community analysis.

    Science.gov (United States)

    Riesenfeld, Samantha J; Pollard, Katherine S

    2013-06-22

    Sequence-based phylogenetic trees are a well-established tool for characterizing diversity of both macroorganisms and microorganisms. Phylogenetic methods have recently been applied to shotgun metagenomic data from microbial communities, particularly with the aim of classifying reads. But the accuracy of gene-family phylogenies that characterize evolutionary relationships among short, non-overlapping sequencing reads has not been thoroughly evaluated. To quantify errors in metagenomic read trees, we developed MetaPASSAGE, a software pipeline to generate in silico bacterial communities, simulate a sample of shotgun reads from a gene family represented in the community, orient or translate reads, and produce a profile-based alignment of the reads from which a gene-family phylogenetic tree can be built. We applied MetaPASSAGE to a variety of RNA and protein-coding gene families, built trees using a range of different phylogenetic methods, and compared the resulting trees using topological and branch-length error metrics. We identified read length as one of the major sources of error. Because phylogenetic methods use a reference database of full-length sequences from the gene family to guide construction of alignments and trees, we found that error can also be substantially reduced through increasing the size and diversity of the reference database. Finally, UniFrac analysis, which compares metagenomic samples based on a summary statistic computed over all branches in a read tree, is very robust to the level of error we observe. Bacterial community diversity can be quantified using phylogenetic approaches applied to shotgun metagenomic data. As sequencing reads get longer and more genomes across the bacterial tree of life are sequenced, the accuracy of this approach will continue to improve, opening the door to more applications.

  3. Biometric Fingerprint System to Enable Rapid and Accurate Identification of Beneficiaries

    OpenAIRE

    Storisteanu, Daniel Matthew L; Norman, Toby L; Grigore, Alexandra; Norman, Tristram L

    2015-01-01

    Inability to uniquely identify clients impedes access to services and contributes to inefficiencies. Using a pocket-sized fingerprint scanner that wirelessly syncs with a health worker's smartphone, the SimPrints biometric system can link individuals' fingerprints to their health records. A pilot in Bangladesh will assess its potential.

  4. Neonatal tolerance induction enables accurate evaluation of gene therapy for MPS I in a canine model.

    Science.gov (United States)

    Hinderer, Christian; Bell, Peter; Louboutin, Jean-Pierre; Katz, Nathan; Zhu, Yanqing; Lin, Gloria; Choa, Ruth; Bagel, Jessica; O'Donnell, Patricia; Fitzgerald, Caitlin A; Langan, Therese; Wang, Ping; Casal, Margret L; Haskins, Mark E; Wilson, James M

    2016-09-01

    High fidelity animal models of human disease are essential for preclinical evaluation of novel gene and protein therapeutics. However, these studies can be complicated by exaggerated immune responses against the human transgene. Here we demonstrate that dogs with a genetic deficiency of the enzyme α-l-iduronidase (IDUA), a model of the lysosomal storage disease mucopolysaccharidosis type I (MPS I), can be rendered immunologically tolerant to human IDUA through neonatal exposure to the enzyme. Using MPS I dogs tolerized to human IDUA as neonates, we evaluated intrathecal delivery of an adeno-associated virus serotype 9 vector expressing human IDUA as a therapy for the central nervous system manifestations of MPS I. These studies established the efficacy of the human vector in the canine model, and allowed for estimation of the minimum effective dose, providing key information for the design of first-in-human trials. This approach can facilitate evaluation of human therapeutics in relevant animal models, and may also have clinical applications for the prevention of immune responses to gene and protein replacement therapies. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Highly accurate sequence imputation enables precise QTL mapping in Brown Swiss cattle.

    Science.gov (United States)

    Frischknecht, Mirjam; Pausch, Hubert; Bapst, Beat; Signer-Hasler, Heidi; Flury, Christine; Garrick, Dorian; Stricker, Christian; Fries, Ruedi; Gredler-Grandl, Birgit

    2017-12-29

    Within the last few years a large amount of genomic information has become available in cattle. Densities of genomic information vary from a few thousand variants up to whole genome sequence information. In order to combine genomic information from different sources and infer genotypes for a common set of variants, genotype imputation is required. In this study we evaluated the accuracy of imputation from high density chips to whole genome sequence data in Brown Swiss cattle. Using four popular imputation programs (Beagle, FImpute, Impute2, Minimac) and various compositions of reference panels, the accuracy of the imputed sequence variant genotypes was high and differences between the programs and scenarios were small. We imputed sequence variant genotypes for more than 1600 Brown Swiss bulls and performed genome-wide association studies for milk fat percentage at two stages of lactation. We found one and three quantitative trait loci for early and late lactation fat content, respectively. Known causal variants that were imputed from the sequenced reference panel were among the most significantly associated variants of the genome-wide association study. Our study demonstrates that whole-genome sequence information can be imputed at high accuracy in cattle populations. Using imputed sequence variant genotypes in genome-wide association studies may facilitate causal variant detection.
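Imputation accuracy of the kind reported here is commonly summarized per variant as the squared Pearson correlation between true and imputed allele dosages. A minimal sketch on toy genotypes (the numbers are illustrative, not the Brown Swiss data):

```python
def imputation_r2(true_dosages, imputed_dosages):
    # Squared Pearson correlation between true and imputed allele dosages,
    # a common per-variant summary of imputation accuracy.
    n = len(true_dosages)
    mt = sum(true_dosages) / n
    mi = sum(imputed_dosages) / n
    cov = sum((t - mt) * (i - mi)
              for t, i in zip(true_dosages, imputed_dosages))
    vt = sum((t - mt) ** 2 for t in true_dosages)
    vi = sum((i - mi) ** 2 for i in imputed_dosages)
    return cov * cov / (vt * vi)

true = [0, 1, 2, 1, 0, 2, 1, 0]                     # true allele counts
imputed = [0.1, 0.9, 1.8, 1.2, 0.0, 2.0, 0.7, 0.2]  # imputed dosages
print(round(imputation_r2(true, imputed), 3))
```

Dosage r² is preferred over raw genotype concordance because it is not inflated at variants where one genotype class dominates, which matters when comparing programs such as Beagle, FImpute, Impute2, and Minimac across allele-frequency spectra.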

  6. Numerical Investigation of a Novel Wiring Scheme Enabling Simple and Accurate Impedance Cytometry

    Directory of Open Access Journals (Sweden)

    Federica Caselli

    2017-09-01

    Microfluidic impedance cytometry is a label-free approach for high-throughput analysis of particles and cells. It is based on the characterization of the dielectric properties of single particles as they flow through a microchannel with integrated electrodes. However, the measured signal depends not only on the intrinsic particle properties, but also on the particle trajectory through the measuring region, thus challenging the resolution and accuracy of the technique. In this work we show via simulation that this issue can be overcome, without resorting to particle focusing, by means of a straightforward modification of the wiring scheme for the most typical and widely used microfluidic impedance chip.

  7. Enabling distributed petascale science

    International Nuclear Information System (INIS)

    Baranovski, Andrew; Bharathi, Shishir; Bresnahan, John

    2007-01-01

    Petascale science is an end-to-end endeavour, involving not only the creation of massive datasets at supercomputers or experimental facilities, but also the subsequent analysis of that data by a user community that may be distributed across many laboratories and universities. The new SciDAC Center for Enabling Distributed Petascale Science (CEDPS) is developing tools to support this end-to-end process. These tools include data placement services for the reliable, high-performance, secure, and policy-driven placement of data within a distributed science environment; tools and techniques for the construction, operation, and provisioning of scalable science services; and tools for the detection and diagnosis of failures in end-to-end data placement and distributed application hosting configurations. In each area, we build on a strong base of existing technology and have made useful progress in the first year of the project. For example, we have recently achieved order-of-magnitude improvements in transfer times (for lots of small files) and implemented asynchronous data staging capabilities; demonstrated dynamic deployment of complex application stacks for the STAR experiment; and designed and deployed end-to-end troubleshooting services. We look forward to working with SciDAC application and technology projects to realize the promise of petascale science.

  8. Enabling immersive simulation.

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Josh (University of California Santa Cruz, Santa Cruz, CA); Mateas, Michael (University of California Santa Cruz, Santa Cruz, CA); Hart, Derek H.; Whetzel, Jonathan; Basilico, Justin Derrick; Glickman, Matthew R.; Abbott, Robert G.

    2009-02-01

    The object of the 'Enabling Immersive Simulation for Complex Systems Analysis and Training' LDRD has been to research, design, and engineer a capability to develop simulations which (1) provide a rich, immersive interface for participation by real humans (exploiting existing high-performance game-engine technology wherever possible), and (2) can leverage Sandia's substantial investment in high-fidelity physical and cognitive models implemented in the Umbra simulation framework. We report here on these efforts. First, we describe the integration of Sandia's Umbra modular simulation framework with the open-source Delta3D game engine. Next, we report on Umbra's integration with Sandia's Cognitive Foundry, specifically to provide for learning behaviors for 'virtual teammates' directly from observed human behavior. Finally, we describe the integration of Delta3D with the ABL behavior engine, and report on research into establishing the theoretical framework that will be required to make use of tools like ABL to scale up to increasingly rich and realistic virtual characters.

  9. Grid-Enabled Measures

    Science.gov (United States)

    Moser, Richard P.; Hesse, Bradford W.; Shaikh, Abdul R.; Courtney, Paul; Morgan, Glen; Augustson, Erik; Kobrin, Sarah; Levin, Kerry; Helba, Cynthia; Garner, David; Dunn, Marsha; Coa, Kisha

    2011-01-01

    Scientists are taking advantage of the Internet and collaborative web technology to accelerate discovery in a massively connected, participative environment, a phenomenon referred to by some as Science 2.0. As a new way of doing science, this phenomenon has the potential to push science forward in a more efficient manner than was previously possible. The Grid-Enabled Measures (GEM) database has been conceptualized as an instantiation of Science 2.0 principles by the National Cancer Institute with two overarching goals: (1) promote the use of standardized measures, which are tied to theoretically based constructs; and (2) facilitate the ability to share harmonized data resulting from the use of standardized measures. This is done by creating an online venue connected to the Cancer Biomedical Informatics Grid (caBIG®) where a virtual community of researchers can collaborate and come to consensus on measures by rating, commenting on, and viewing meta-data about the measures and associated constructs. This paper will describe the Web 2.0 principles on which the GEM database is based, describe its functionality, and discuss some of the important issues involved with creating the GEM database, such as the role of mutually agreed-on ontologies (i.e., knowledge categories and the relationships among these categories) for data sharing. PMID:21521586

  10. When Is Network Lasso Accurate?

    Directory of Open Access Journals (Sweden)

    Alexander Jung

    2018-01-01

    The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method allows graph signals to be learned from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure the network Lasso to be accurate. By leveraging concepts from compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee the network Lasso, with a particular loss function, to deliver an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by the network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
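The network Lasso objective described above (a squared loss on the sampled nodes plus a total-variation penalty over the edges) can be sketched on the simplest graph, a chain, using plain subgradient descent. Real implementations use ADMM-based solvers; the signal and sampling set below are made up for illustration:

```python
def network_lasso_chain(y, sampled, lam, steps=6000, lr=0.01):
    # Minimize sum_{i in sampled} (x_i - y_i)^2 + lam * sum_i |x_{i+1} - x_i|
    # on a chain graph via subgradient descent (a minimal sketch only).
    sign = lambda v: (v > 0) - (v < 0)
    x = [0.0] * len(y)
    s = set(sampled)
    for _ in range(steps):
        g = [2.0 * (x[i] - y[i]) if i in s else 0.0 for i in range(len(x))]
        for i in range(len(x) - 1):
            d = sign(x[i + 1] - x[i])   # subgradient of |x_{i+1} - x_i|
            g[i + 1] += lam * d
            g[i] -= lam * d
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Piecewise-constant signal observed only at some nodes (toy data)
y = [1.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 3.0, 3.0]
sampled = [0, 2, 4, 5, 7, 9]
x_hat = network_lasso_chain(y, sampled, lam=0.5)
# unsampled nodes (1, 3, 6, 8) are filled in near their block's value
```

The example mirrors the abstract's setting: the TV penalty propagates information from the sampled nodes to their unsampled neighbors, and recovery quality depends on how well-connected the sampled set is to the rest of the graph.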

  11. Accurate x-ray spectroscopy

    International Nuclear Information System (INIS)

    Deslattes, R.D.

    1987-01-01

    Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. Ions having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy which can be realized in measurement of these spectra. This report aims to provide background about these spectroscopic limitations and to discuss how accelerator operations may be selected to permit attaining intrinsically limited data.

  12. The Accurate Particle Tracer Code

    OpenAIRE

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...

  13. FOILFEST :community enabled security.

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Judy Hennessey; Johnson, Curtis Martin; Whitley, John B.; Drayer, Darryl Donald; Cummings, John C., Jr. (.,; .)

    2005-09-01

    The Advanced Concepts Group of Sandia National Laboratories hosted a workshop, ''FOILFest: Community Enabled Security'', on July 18-21, 2005, in Albuquerque, NM. This was a far-reaching look into the future of physical protection, consisting of a series of structured brainstorming sessions focused on preventing and foiling attacks on public places and soft targets such as airports, shopping malls, hotels, and public events. These facilities are difficult to protect using traditional security devices since they could easily be pushed out of business through the addition of arduous and expensive security measures. The idea behind this Fest was to explore how the public, which is vital to the function of these institutions, can be leveraged as part of a physical protection system. The workshop considered procedures, space design, and approaches for building community through technology. The workshop explored ways to make the ''good guys'' in public places feel safe and be vigilant while making potential perpetrators of harm feel exposed and convinced that they will not succeed. Participants in the Fest included operators of public places, social scientists, technology experts, representatives of government agencies including DHS and the intelligence community, writers and media experts. Many innovative ideas were explored during the fest with most of the time spent on airports, including consideration of the local airport, the Albuquerque Sunport. Some provocative ideas included: (1) sniffers installed in passage areas such as revolving doors and escalators, (2) a ''jumbotron'' showing current camera shots in the public space, (3) transparent portal screeners allowing viewing of the screening, (4) a layered open/funnel/open/funnel design where open spaces are used to encourage a sense of ''communitas'' and take advantage of citizen ''sensing'' and funnels are technological

  14. Accurate computation of Mathieu functions

    CERN Document Server

    Bibby, Malcolm M

    2013-01-01

    This lecture presents a modern approach for the computation of Mathieu functions. These functions find application in boundary value analysis such as electromagnetic scattering from elliptic cylinders and flat strips, as well as the analogous acoustic and optical problems, and many other applications in science and engineering. The authors review the traditional approach used for these functions, show its limitations, and provide an alternative "tuned" approach enabling improved accuracy and convergence. The performance of this approach is investigated for a wide range of parameters and mach

  15. How accurate is in vitro prediction of carcinogenicity?

    Science.gov (United States)

    Walmsley, Richard Maurice; Billinton, Nicholas

    2011-01-01

    Positive genetic toxicity data suggest carcinogenic hazard, and this can stop a candidate pharmaceutical reaching the clinic. However, during the last decade, it has become clear that many non-carcinogens produce misleading positive results in one or other of the regulatory genotoxicity assays. These doubtful conclusions cost a lot of time and money, as they trigger additional testing of apparently genotoxic candidates, both in vitro and in animals, to discover whether the suggested hazard is genuine. This in turn means that clinical trials can be put on hold. This review describes the current approaches to the ‘misleading positive’ problem as well as efforts to reduce the use of animals in genotoxicity assessment. The following issues are then addressed: the application of genotoxicity testing screens earlier in development; the search for new or improved in vitro genotoxicity tests; proposed changes to the International Committee on Harmonisation guidance on genotoxicity testing [S2(R1)]. Together, developments in all these areas offer good prospects of a more rapid and cost-effective way to understand genetic toxicity concerns. PMID:21091657

  16. Ethics and epistemology of accurate prediction in clinical research.

    Science.gov (United States)

    Hey, Spencer Phillips

    2015-07-01

    All major research ethics policies assert that the ethical review of clinical trial protocols should include a systematic assessment of risks and benefits. But despite this policy, protocols do not typically contain explicit probability statements about the likely risks or benefits involved in the proposed research. In this essay, I articulate a range of ethical and epistemic advantages that explicit forecasting would offer to the health research enterprise. I then consider how some particular confidence levels may come into conflict with the principles of ethical research. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  17. Towards Accurate Prediction of Protonation Equilibrium of Nucleic Acids

    OpenAIRE

    Goh, Garrett B.; Knight, Jennifer L.; Brooks, Charles L.

    2013-01-01

    The role of protonated nucleotides in modulating the pH-dependent properties of nucleic acids is one of the emerging frontiers in the field of nucleic acid biology. The recent development of a constant pH molecular dynamics simulation (CPHMDMSλD) framework for simulating nucleic acids has provided a tool for realistic simulations of pH-dependent dynamics. We enhanced the CPHMDMSλD framework with pH-based replica exchange (pH-REX), which significantly improves the sampling of both titration an...

  18. WGS accurately predicts antimicrobial resistance in Escherichia coli

    Science.gov (United States)

    Objectives: To determine the effectiveness of whole-genome sequencing (WGS) in identifying resistance genotypes of multidrug-resistant Escherichia coli (E. coli) and whether these correlate with observed phenotypes. Methods: Seventy-six E. coli strains were isolated from farm cattle and measured f...

  19. How accurate is in vitro prediction of carcinogenicity?

    Science.gov (United States)

    Walmsley, Richard Maurice; Billinton, Nicholas

    2011-03-01

    Positive genetic toxicity data suggest carcinogenic hazard, and this can stop a candidate pharmaceutical reaching the clinic. However, during the last decade, it has become clear that many non-carcinogens produce misleading positive results in one or other of the regulatory genotoxicity assays. These doubtful conclusions cost a lot of time and money, as they trigger additional testing of apparently genotoxic candidates, both in vitro and in animals, to discover whether the suggested hazard is genuine. This in turn means that clinical trials can be put on hold. This review describes the current approaches to the 'misleading positive' problem as well as efforts to reduce the use of animals in genotoxicity assessment. The following issues are then addressed: the application of genotoxicity testing screens earlier in development; the search for new or improved in vitro genotoxicity tests; proposed changes to the International Committee on Harmonisation guidance on genotoxicity testing [S2(R1)]. Together, developments in all these areas offer good prospects of a more rapid and cost-effective way to understand genetic toxicity concerns. © 2011 The Authors. British Journal of Pharmacology © 2011 The British Pharmacological Society.

  20. Accurate renormalization group analyses in neutrino sector

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)

    2014-08-15

    We investigate accurate renormalization group analyses in the neutrino sector between the ν-oscillation and seesaw energy scales. We consider decoupling effects of the top quark and Higgs boson on the renormalization group equations of the light neutrino mass matrix. Since the decoupling effects arise at the standard model scale and are independent of high energy physics, our method can be applied to essentially any model beyond the standard model. We find that the decoupling effects of the Higgs boson are negligible, while those of the top quark are not. In particular, the top quark decoupling effects affect the neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory existing at a high energy scale.

  1. Adaptive vehicle motion estimation and prediction

    Science.gov (United States)

    Zhao, Liang; Thorpe, Chuck E.

    1999-01-01

    Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.
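
The model-switching machinery behind such a tracker can be sketched in a few lines. The following is a generic two-model IMM probability update (standard interacting-multiple-model mathematics, chosen here for illustration; it is not the authors' adaptive variant): model probabilities are first mixed through a Markov transition matrix, then reweighted by each model's measurement likelihood.

```python
def imm_model_probabilities(mu, trans, likelihoods):
    """One IMM cycle for the model probabilities (generic textbook form).

    mu          -- current model probabilities, summing to 1
    trans       -- trans[i][j]: probability of switching from model i to model j
    likelihoods -- measurement likelihood under each model
    """
    n = len(mu)
    # Mixing step: predicted probability of being in model j before the update.
    predicted = [sum(trans[i][j] * mu[i] for i in range(n)) for j in range(n)]
    # Bayes update with the measurement likelihoods, then normalize.
    posterior = [predicted[j] * likelihoods[j] for j in range(n)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Hypothetical two-model setup: constant-velocity vs. maneuver model, with the
# measurement fitting the maneuver model better.
mu_new = imm_model_probabilities(
    mu=[0.5, 0.5],
    trans=[[0.9, 0.1], [0.1, 0.9]],
    likelihoods=[0.2, 0.8],
)
print(mu_new)  # probability mass shifts toward the better-fitting model
```

The AIMM algorithm in the abstract additionally adapts the transition probabilities themselves; here `trans` is fixed for simplicity.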

  2. Accurate thermodynamic relations of the melting temperature of nanocrystals with different shapes and pure theoretical calculation

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Jinhua; Fu, Qingshan; Xue, Yongqiang, E-mail: xyqlw@126.com; Cui, Zixiang

    2017-05-01

    Based on the surface pre-melting model, accurate thermodynamic relations for the melting temperature of nanocrystals with different shapes (tetrahedron, cube, octahedron, dodecahedron, icosahedron, nanowire) were derived. The theoretically calculated melting temperatures are in relatively good agreement with experimental, molecular dynamics simulation, and other theoretical results for nanometer Au, Ag, Al, In and Pb. It is found that the particle size and shape have notable effects on the melting temperature of nanocrystals, and the smaller the particle size, the greater the effect of shape. Furthermore, at the same equivalent radius, the more the shape deviates from a sphere, the lower the melting temperature is. The melting temperature depression of a cylindrical nanowire is just half that of a spherical nanoparticle with an identical radius. The theoretical relations enable one to quantitatively describe the influence of size and shape on the melting temperature and provide an effective way to predict and interpret the melting temperatures of nanocrystals with different sizes and shapes. - Highlights: • Accurate relations of T{sub m} of nanocrystals with various shapes are derived. • Calculated T{sub m} agree with literature results for nano Au, Ag, Al, In and Pb. • ΔT{sub m} (nanowire) = 0.5ΔT{sub m} (spherical nanocrystal). • The relations apply to predict and interpret the melting behaviors of nanocrystals.
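
The stated nanowire-versus-sphere relation can be reproduced with a simple Gibbs-Thomson-type scaling. The sketch below assumes the common 1/r form with a lumped material constant `c` (my illustrative parameterization, not the paper's full shape-dependent relations):

```python
def melting_point_depression(radius_nm, shape, c=50.0):
    """Size-dependent melting temperature depression (K), toy model.

    Assumes a Gibbs-Thomson-type 1/r scaling; c lumps the material
    constants (surface energy, latent heat, density) and is hypothetical.
    """
    if shape == "sphere":
        return 2.0 * c / radius_nm
    if shape == "nanowire":
        # Cylindrical nanowire: half the spherical depression at equal radius.
        return c / radius_nm
    raise ValueError("unknown shape: " + shape)

r = 5.0  # nm, arbitrary example radius
ratio = melting_point_depression(r, "nanowire") / melting_point_depression(r, "sphere")
print(ratio)  # 0.5, matching dT_m(nanowire) = 0.5 * dT_m(spherical nanocrystal)
```

Only the factor of one half is taken from the abstract; absolute magnitudes would require the material constants the paper derives.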

  3. SPEX: a highly accurate spectropolarimeter for atmospheric aerosol characterization

    Science.gov (United States)

    Rietjens, J. H. H.; Smit, J. M.; di Noia, A.; Hasekamp, O. P.; van Harten, G.; Snik, F.; Keller, C. U.

    2017-11-01

    Global characterization of atmospheric aerosol in terms of the microphysical properties of the particles is essential for understanding the role of aerosols in Earth's climate [1]. For more accurate predictions of future climate, the uncertainties of the net radiative forcing of aerosols in the Earth's atmosphere must be reduced [2]. Essential parameters that are needed as input in climate models are not only the aerosol optical thickness (AOT), but also particle-specific properties such as the aerosol mean size, the single scattering albedo (SSA) and the complex refractive index. The latter can be used to discriminate between absorbing and non-absorbing aerosol types, and between natural and anthropogenic aerosol. Classification of aerosol types is also very important for air-quality and health-related issues [3]. Remote sensing from an orbiting satellite platform is the only way to globally characterize atmospheric aerosol at a relevant timescale of 1 day [4]. One of the few methods that can be employed for measuring the microphysical properties of aerosols is to observe both the radiance and the degree of linear polarization of sunlight scattered in the Earth's atmosphere under different viewing directions [5][6][7]. The requirement on the absolute accuracy of the degree of linear polarization PL is very stringent: the absolute error in PL must be smaller than 0.001 + 0.005·PL in order to retrieve aerosol parameters with sufficient accuracy to advance climate modelling and to enable discrimination of aerosol types based on their refractive index for air-quality studies [6][7]. In this paper we present the SPEX instrument, which is a multi-angle spectropolarimeter that can comply with the polarimetric accuracy needed for characterizing aerosols in the Earth's atmosphere. We describe the implementation of spectral polarization modulation in a prototype instrument of SPEX and show results of ground-based measurements from which aerosol microphysical properties are retrieved.

  4. Geo-Enabled, Mobile Services

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard

    2006-01-01

    We are witnessing the emergence of a global infrastructure that enables the widespread deployment of geo-enabled, mobile services in practice. At the same time, the research community has also paid increasing attention to data management aspects of mobile services. This paper offers me...

  5. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.

  6. A stiffly accurate integrator for elastodynamic problems

    KAUST Repository

    Michels, Dominik L.

    2017-07-21

    We present a new integration algorithm for the accurate and efficient solution of stiff elastodynamic problems governed by the second-order ordinary differential equations of structural mechanics. Current methods have the shortcoming that their performance is highly dependent on the numerical stiffness of the underlying system that often leads to unrealistic behavior or a significant loss of efficiency. To overcome these limitations, we present a new integration method which is based on a mathematical reformulation of the underlying differential equations, an exponential treatment of the full nonlinear forcing operator as opposed to more standard partially implicit or exponential approaches, and the utilization of the concept of stiff accuracy which ensures that the efficiency of the simulations is significantly less sensitive to increased stiffness. As a consequence, we are able to tremendously accelerate the simulation of stiff systems compared to established integrators and significantly increase the overall accuracy. The advantageous behavior of this approach is demonstrated on a broad spectrum of complex examples like deformable bodies, textiles, bristles, and human hair. Our easily parallelizable integrator enables more complex and realistic models to be explored in visual computing without compromising efficiency.

  7. Toward genome-enabled mycology.

    Science.gov (United States)

    Hibbett, David S; Stajich, Jason E; Spatafora, Joseph W

    2013-01-01

    Genome-enabled mycology is a rapidly expanding field that is characterized by the pervasive use of genome-scale data and associated computational tools in all aspects of fungal biology. Genome-enabled mycology is integrative and often requires teams of researchers with diverse skills in organismal mycology, bioinformatics and molecular biology. This issue of Mycologia presents the first complete fungal genomes in the history of the journal, reflecting the ongoing transformation of mycology into a genome-enabled science. Here, we consider the prospects for genome-enabled mycology and the technical and social challenges that will need to be overcome to grow the database of complete fungal genomes and enable all fungal biologists to make use of the new data.

  8. Effective and Accurate Colormap Selection

    Science.gov (United States)

    Thyng, K. M.; Greene, C. A.; Hetland, R. D.; Zimmerle, H.; DiMarco, S. F.

    2016-12-01

    Science is often communicated through plots, and design choices can elucidate or obscure the presented data. The colormap used can honestly and clearly display data in a visually-appealing way, or can falsely exaggerate data gradients and confuse viewers. Fortunately, there is a large resource of literature in color science on how color is perceived which we can use to inform our own choices. Following this literature, colormaps can be designed to be perceptually uniform; that is, so an equally-sized jump in the colormap at any location is perceived by the viewer as the same size. This ensures that gradients in the data are accurately perceived. The same colormap is often used to represent many different fields in the same paper or presentation. However, this can cause difficulty in quick interpretation of multiple plots. For example, in one plot the viewer may have trained their eye to recognize that red represents high salinity, and therefore higher density, while in the subsequent temperature plot they need to adjust their interpretation so that red represents high temperature and therefore lower density. In the same way that a single Greek letter is typically chosen to represent a field for a paper, we propose to choose a single colormap to represent a field in a paper, and use multiple colormaps for multiple fields. We have created a set of colormaps that are perceptually uniform, and follow several other design guidelines. There are 18 colormaps to give options to choose from for intuitive representation. For example, a colormap of greens may be used to represent chlorophyll concentration, or browns for turbidity. With careful consideration of human perception and design principles, colormaps may be chosen which faithfully represent the data while also engaging viewers.
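
A quick way to see why perceptual uniformity must be designed in, rather than assumed, is to check lightness steps numerically. The sketch below is illustrative only: it uses relative luminance from the sRGB standard (not a full perceptual lightness model such as CIELAB) to show that a gray ramp with equal sRGB steps has far-from-equal luminance steps.

```python
def srgb_channel_to_linear(c):
    # sRGB electro-optical transfer function (inverse gamma), per the standard.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def gray_luminance(g):
    # For gray, R = G = B, so relative luminance equals the linearized channel.
    return srgb_channel_to_linear(g)

ramp = [0.0, 0.25, 0.5, 0.75, 1.0]            # equal steps in sRGB value
lum = [gray_luminance(g) for g in ramp]
steps = [b - a for a, b in zip(lum, lum[1:])]
print(steps)  # luminance steps grow markedly toward the bright end
```

A perceptually uniform colormap is constructed so that the analogous steps, measured in a perceptual lightness space, come out equal.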

  9. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Science.gov (United States)

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  10. How GNSS Enables Precision Farming

    Science.gov (United States)

    2014-12-01

    Precision farming: Feeding a Growing Population Enables Those Who Feed the World. Immediate and Ongoing Needs - population growth (more to feed) - urbanization (decrease in arable land) Double food production by 2050 to meet world demand. To meet thi...

  11. How accurate are electronic structure methods for actinoid chemistry?

    NARCIS (Netherlands)

    Averkiev, Boris B.; Mantina, Manjeera; Valero, Rosendo; Infante, Ivan; Kovacs, Attila; Truhlar, Donald G.; Gagliardi, Laura

    The CASPT2, CCSD, and CCSD(T) levels of wave function theory and seven density functionals were tested against experiment for predicting the ionization potentials and bond dissociation energies of actinoid monoxides and dioxides with their cations. The goal is to guide future work by enabling the

  12. Predictive and Stochastic Approach for Software Effort Estimation

    OpenAIRE

    Srinivasa Rao T.; Hari CH.V.M.K.; Prasad Reddy P.V.G.D

    2013-01-01

    Software cost estimation is the process of predicting the amount of time (effort) required to build a software system. The primary reason for cost estimation is to enable the client or the developer to perform a cost-benefit analysis. Effort estimates are expressed in person-months, which can be translated into actual dollar cost. The accuracy of the estimate depends on the amount of accurate information available about the final product. Specification with uncertainty represents a r...
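
As a concrete baseline for the kind of parametric estimate the abstract discusses, the classic basic COCOMO formula relates size to effort as Effort = a·KLOC^b person-months. This is a standard model chosen here for illustration; the abstract does not say it is the authors' model.

```python
def cocomo_basic_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO effort in person-months.

    a and b are Boehm's published coefficients for 'organic' (small,
    familiar) projects; other project classes use different constants.
    """
    return a * kloc ** b

effort = cocomo_basic_effort(10.0)   # a hypothetical 10 KLOC system
print(round(effort, 1))              # roughly 27 person-months
```

The superlinear exponent b > 1 encodes the observation that effort grows faster than code size, which is exactly the kind of relationship the predictive and stochastic approaches in the paper aim to fit more accurately.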

  13. Accurate adiabatic correction in the hydrogen molecule

    Energy Technology Data Exchange (ETDEWEB)

    Pachucki, Krzysztof, E-mail: krp@fuw.edu.pl [Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw (Poland); Komasa, Jacek, E-mail: komasa@man.poznan.pl [Faculty of Chemistry, Adam Mickiewicz University, Umultowska 89b, 61-614 Poznań (Poland)

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10{sup −12} at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H{sub 2}, HD, HT, D{sub 2}, DT, and T{sub 2} has been determined. For the ground state of H{sub 2} the estimated precision is 3 × 10{sup −7} cm{sup −1}, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  14. Spectrally accurate initial data in numerical relativity

    Science.gov (United States)

    Battista, Nicholas A.

    Einstein's theory of general relativity has radically altered the way in which we perceive the universe. His breakthrough was to realize that the fabric of space is deformable in the presence of mass, and that space and time are linked into a continuum. Much evidence has been gathered in support of general relativity over the decades. Some of the indirect evidence for GR includes the phenomenon of gravitational lensing, the anomalous perihelion precession of Mercury, and the gravitational redshift. One of the most striking predictions of GR, that has not yet been confirmed, is the existence of gravitational waves. The primary source of gravitational waves in the universe is thought to be produced during the merger of binary black hole systems, or by binary neutron stars. The starting point for computer simulations of black hole mergers requires highly accurate initial data for the space-time metric and for the curvature. The equations describing the initial space-time around the black hole(s) are non-linear, elliptic partial differential equations (PDE). We will discuss how to use a pseudo-spectral (collocation) method to calculate the initial puncture data corresponding to single black hole and binary black hole systems.

  15. OGC® Sensor Web Enablement Standards

    Directory of Open Access Journals (Sweden)

    George Percivall

    2006-09-01

    Full Text Available This article provides a high-level overview of and architecture for the Open Geospatial Consortium (OGC) standards activities that focus on sensors, sensor networks, and a concept called the “Sensor Web”. This OGC work area is known as Sensor Web Enablement (SWE). This article has been condensed from "OGC® Sensor Web Enablement: Overview And High Level Architecture," an OGC White Paper by Mike Botts, PhD, George Percivall, Carl Reed, PhD, and John Davidson which can be downloaded from http://www.opengeospatial.org/pt/15540. Readers interested in greater technical and architecture detail can download and read the OGC SWE Architecture Discussion Paper titled “The OGC Sensor Web Enablement Architecture” (OGC document 06-021r1, http://www.opengeospatial.org/pt/14140.

  16. Core-Level Modeling and Frequency Prediction for DSP Applications on FPGAs

    Directory of Open Access Journals (Sweden)

    Gongyu Wang

    2015-01-01

    Full Text Available Field-programmable gate arrays (FPGAs) provide a promising technology that can improve performance of many high-performance computing and embedded applications. However, unlike software design tools, the relatively immature state of FPGA tools significantly limits productivity and consequently prevents widespread adoption of the technology. For example, the lengthy design-translate-execute (DTE) process often must be iterated to meet the application requirements. Previous works have enabled model-based, design-space exploration to reduce DTE iterations but are limited by a lack of accurate model-based prediction of key design parameters, the most important of which is clock frequency. In this paper, we present a core-level modeling and design (CMD) methodology that enables modeling of FPGA applications at an abstract level and yet produces accurate predictions of parameters such as clock frequency, resource utilization (i.e., area), and latency. We evaluate CMD’s prediction methods using several high-performance DSP applications on various families of FPGAs and show an average clock-frequency prediction error of 3.6%, with a worst-case error of 20.4%, compared to the best of existing high-level prediction methods, 13.9% average error with 48.2% worst-case error. We also demonstrate how such prediction enables accurate design-space exploration without coding in a hardware-description language (HDL), significantly reducing the total design time.

  17. Genetic programming and cae neural networks approach for prediction of the bending capability of ZnTiCu sheets

    OpenAIRE

    Turk, R.; Peruš, I.; Kovačič, M.; Kugler, G.; Terčelj, M.

    2008-01-01

    Genetic programming (GP) and CAE NN analysis have been applied for the prediction of the bending capability of rolled ZnTiCu alloy sheet. The investigation revealed that an analysis with CAE NN is faster than GP but less accurate for smaller amounts of data. Both methods enable good assessment of the separate influencing parameters in this complex system.

  18. FIXED-WING MICRO AERIAL VEHICLE FOR ACCURATE CORRIDOR MAPPING

    Directory of Open Access Journals (Sweden)

    M. Rehak

    2015-08-01

    Full Text Available In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that together with a pre-calibrated camera enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  19. A Highly Accurate Approach for Aeroelastic System with Hysteresis Nonlinearity

    Directory of Open Access Journals (Sweden)

    C. C. Cui

    2017-01-01

    Full Text Available We propose an accurate approach, based on the precise integration method, to solve the aeroelastic system of an airfoil with a pitch hysteresis. A major procedure for achieving high precision is to design a predictor-corrector algorithm. This algorithm enables accurate determination of switching points resulting from the hysteresis. Numerical examples show that the results obtained by the presented method are in excellent agreement with exact solutions. In addition, the high accuracy can be maintained as the time step increases within a reasonable range. It is also found that the Runge-Kutta method may sometimes provide quite different and even fallacious results, even though its step length is much less than that adopted in the presented method. With such high computational accuracy, the presented method could be applicable in dynamical systems with hysteresis nonlinearities.
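
The heart of such a predictor-corrector scheme is accurate location of the switching instant where the hysteresis branch changes. A minimal stand-in for that step (illustrative only; the paper combines it with precise integration of the aeroelastic equations) is bisection on an event function between two time samples that bracket the switch:

```python
import math

def locate_switch(event, t_lo, t_hi, tol=1e-12):
    """Bisect for the time at which event(t) changes sign in [t_lo, t_hi].

    Assumes event(t_lo) and event(t_hi) have opposite signs.
    """
    f_lo = event(t_lo)
    for _ in range(200):
        t_mid = 0.5 * (t_lo + t_hi)
        f_mid = event(t_mid)
        if f_lo * f_mid <= 0.0:
            t_hi = t_mid            # sign change in the left half
        else:
            t_lo, f_lo = t_mid, f_mid  # sign change in the right half
        if t_hi - t_lo < tol:
            break
    return 0.5 * (t_lo + t_hi)

# Toy pitch motion sin(t) crossing a hypothetical hysteresis threshold of 0.5.
t_switch = locate_switch(lambda t: math.sin(t) - 0.5, 0.0, 1.0)
print(t_switch)  # close to asin(0.5) = pi/6
```

Locating the switching time to near machine precision, rather than taking the nearest grid point, is what lets the integrator restart cleanly on the new hysteresis branch without accumulating error.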

  20. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    Science.gov (United States)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that together with a pre-calibrated camera enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  1. Pseudospectral Maxwell solvers for an accurate modeling of Doppler harmonic generation on plasma mirrors with particle-in-cell codes

    Science.gov (United States)

    Blaclard, G.; Vincenti, H.; Lehe, R.; Vay, J. L.

    2017-09-01

    With the advent of petawatt class lasers, the very large laser intensities attainable on target should enable the production of intense high-order Doppler harmonics from relativistic laser-plasma mirror interactions. At present, the modeling of these harmonics with particle-in-cell (PIC) codes is extremely challenging as it implies an accurate description of tens to hundreds of harmonic orders on a broad range of angles. In particular, we show here that due to the numerical dispersion of waves they induce in vacuum, standard finite difference time domain (FDTD) Maxwell solvers employed in most PIC codes can induce a spurious angular deviation of harmonic beams, potentially degrading simulation results. This effect was studied extensively, and a simple toy model based on the Snell-Descartes law was developed that allows us to predict precisely the angular deviation of harmonics as a function of the spatiotemporal resolution and the Maxwell solver used in the simulations. Our model demonstrates that the mitigation of this numerical artifact with FDTD solvers mandates very high spatiotemporal resolution, preventing realistic three-dimensional (3D) simulations even on the largest computers available at the time of writing. We finally show that nondispersive pseudospectral analytical time domain solvers can considerably reduce the spatiotemporal resolution required to mitigate this spurious deviation and should enable, in the near future, accurate 3D modeling on supercomputers in a realistic time to solution.
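    The mechanism at stake, numerical dispersion bending high harmonics, can be illustrated with the textbook dispersion relation of the 1D Yee (FDTD) scheme. This is a schematic stand-in for the authors' toy model, and the Snell-Descartes step here is only an analogy at an idealized vacuum/grid interface:

```python
import numpy as np

def yee_numerical_index(omega, dx, dt, c=1.0):
    """Numerical refractive index of vacuum for the 1D Yee scheme.
    The discrete dispersion relation
        sin(omega*dt/2)/(c*dt) = sin(k*dx/2)/dx
    is solved for k in closed form; n = c*k/omega > 1 signals
    spurious numerical dispersion."""
    s = np.sin(omega * dt / 2) * dx / (c * dt)
    k = (2.0 / dx) * np.arcsin(s)
    return c * k / omega

# Fundamental wavelength = 1, resolved by 64 cells; Courant number 0.5
dx, dt = 1.0 / 64, 0.5 / 64
for h in (1, 5, 20):                       # harmonic orders
    n = yee_numerical_index(2 * np.pi * h, dx, dt)
    # Snell-Descartes analogy: a 10 deg ray entering the dispersive grid
    theta_out = np.degrees(np.arcsin(np.sin(np.radians(10.0)) / n))
    print(f"harmonic {h:2d}: n = {n:.4f}, 10 deg refracts to {theta_out:.2f} deg")
```

    The numerical index, and hence the spurious deviation, grows quickly with harmonic order at fixed resolution, which is why FDTD needs such fine grids while a nondispersive pseudospectral solver (n = 1 at all resolved frequencies) does not.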

  2. Echo-Enabled Harmonic Generation

    Energy Technology Data Exchange (ETDEWEB)

    Stupakov, Gennady; /SLAC

    2012-06-28

    A recently proposed concept of the Echo-Enabled Harmonic Generation (EEHG) FEL uses two laser modulators in combination with two dispersion sections to generate a high-harmonic density modulation in a relativistic beam. This seeding technique holds promise of a one-stage soft x-ray FEL that radiates not only transversely but also longitudinally coherent pulses. Currently, an experimental verification of the concept is being conducted at the SLAC National Accelerator Laboratory, aimed at demonstrating EEHG.

  3. Prediction of Unsteady Transonic Aerodynamics, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — An accurate prediction of aero-elastic effects depends on an accurate prediction of the unsteady aerodynamic forces. Perhaps the most difficult speed regime is...

  4. 'Ethos' Enabling Organisational Knowledge Creation

    Science.gov (United States)

    Matsudaira, Yoshito

    This paper examines knowledge creation in relation to improvements on the production line in the manufacturing department of Nissan Motor Company and aims to clarify the embodied knowledge observed in the actions of organisational members who enable knowledge creation. For that purpose, this study adopts an approach that adds first-, second-, and third-person viewpoints to the theory of knowledge creation. The embodied knowledge observed in the actions of organisational members who enable knowledge creation is the continued practice of 'ethos' (in Greek), founded in the Nissan Production Way as an ethical basis. Ethos is an intangible knowledge asset for knowledge-creating companies. Substantiated analysis classifies ethos into three categories: the individual, the team, and the organisation. This indicates the precise actions of the organisational members in each category during the knowledge creation process. This research demonstrates the indispensability of ethos, a new concept of knowledge assets that enables knowledge creation, for future knowledge-based management in the knowledge society.

  5. Can Measured Synergy Excitations Accurately Construct Unmeasured Muscle Excitations?

    Science.gov (United States)

    Bianco, Nicholas A; Patten, Carolynn; Fregly, Benjamin J

    2018-01-01

    Accurate prediction of muscle and joint contact forces during human movement could improve treatment planning for disorders such as osteoarthritis, stroke, Parkinson's disease, and cerebral palsy. Recent studies suggest that muscle synergies, a low-dimensional representation of a large set of muscle electromyographic (EMG) signals (henceforth called "muscle excitations"), may reduce the redundancy of muscle excitation solutions predicted by optimization methods. This study explores the feasibility of using muscle synergy information extracted from eight muscle EMG signals (henceforth called "included" muscle excitations) to accurately construct muscle excitations from up to 16 additional EMG signals (henceforth called "excluded" muscle excitations). Using treadmill walking data collected at multiple speeds from two subjects (one healthy, one poststroke), we performed muscle synergy analysis on all possible subsets of eight included muscle excitations and evaluated how well the calculated time-varying synergy excitations could construct the remaining excluded muscle excitations (henceforth called "synergy extrapolation"). We found that some, but not all, eight-muscle subsets yielded synergy excitations that achieved >90% extrapolation variance accounted for (VAF). Using the top 10% of subsets, we developed muscle selection heuristics to identify included muscle combinations whose synergy excitations achieved high extrapolation accuracy. For 3, 4, and 5 synergies, these heuristics yielded extrapolation VAF values approximately 5% lower than corresponding reconstruction VAF values for each associated eight-muscle subset. These results suggest that synergy excitations obtained from experimentally measured muscle excitations can accurately construct unmeasured muscle excitations, which could help limit muscle excitations predicted by muscle force optimizations.
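    The synergy-extrapolation workflow described above can be sketched with a basic non-negative matrix factorization (NMF). This is a simplified illustration, not the study's actual pipeline: the data are synthetic, and plain least squares stands in for the nonnegative fitting used in practice.

```python
import numpy as np

def nmf(V, k, iters=1000, seed=0):
    """Basic multiplicative-update NMF: V (muscles x time) ~ W @ H, with
    W the synergy weights and H the time-varying synergy excitations."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 1e-6
    H = rng.random((k, V.shape[1])) + 1e-6
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

def vaf(V, V_hat):
    """Variance accounted for, the accuracy metric used in the study."""
    return 1.0 - np.sum((V - V_hat) ** 2) / np.sum(V ** 2)

def synergy_extrapolation(V_inc, V_exc, k):
    """Extract synergy excitations H from the included muscles, then fit
    weights mapping H onto the excluded muscles (plain least squares for
    brevity; a nonnegative solver such as NNLS would be used in practice)."""
    W_inc, H = nmf(V_inc, k)
    W_exc = V_exc @ np.linalg.pinv(H)
    return vaf(V_inc, W_inc @ H), vaf(V_exc, W_exc @ H)

# Synthetic check: 12 muscles sharing 3 true synergies, 8 included / 4 excluded
rng = np.random.default_rng(1)
V = rng.random((12, 3)) @ rng.random((3, 300))
rec_vaf, ext_vaf = synergy_extrapolation(V[:8], V[8:], k=3)
```

    With synthetic data that exactly shares the synergies across all muscles, both VAF values come out high; with real EMG, the gap between reconstruction and extrapolation VAF (about 5% in the study) is the quantity of interest.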

  6. Accuracy of prediction scores and novel biomarkers for predicting nonalcoholic fatty liver disease in obese children.

    Science.gov (United States)

    Koot, Bart G P; van der Baan-Slootweg, Olga H; Bohte, Anneloes E; Nederveen, Aart J; van Werven, Jochem R; Tamminga-Smeulders, Christine L J; Merkus, Maruschka P; Schaap, Frank G; Jansen, Peter L M; Stoker, Jaap; Benninga, Marc A

    2013-03-01

    Accurate prediction scores for liver steatosis are needed to enable clinicians to screen noninvasively for nonalcoholic fatty liver disease (NAFLD). Several prediction scores have been developed; however, external validation is lacking. The aim was to determine the diagnostic accuracy of four existing prediction scores in severely obese children, to develop a new prediction score using novel biomarkers, and to compare these results to the performance of ultrasonography. Liver steatosis was measured using proton magnetic resonance spectroscopy in 119 severely obese children (mean age 14.3 ± 2.1 years, BMI z-score 3.35 ± 0.35). Prevalence of steatosis was 47%. The four existing prediction scores ("NAFLD liver fat score," "fatty liver index," "hepatic steatosis index," and the pediatric prediction score) had only moderate diagnostic accuracy in this cohort (positive predictive value (PPV): 70, 61, 61, 69% and negative predictive value (NPV): 77, 69, 68, 75%, respectively). A new prediction score was built using anthropometry, routine biochemistry, and novel biomarkers (leptin, adiponectin, TNF-alpha, IL-6, CK-18, FGF-21, and adiponutrin polymorphisms). The final model included ALT, HOMA, sex, and leptin. This equation (PPV 79% and NPV 80%) did not perform substantially better than the four other equations and did not outperform ultrasonography for excluding NAFLD (NPV 82%). In conclusion, in severely obese children and adolescents, existing prediction scores and the tested novel biomarkers have insufficient diagnostic accuracy for diagnosing or excluding NAFLD. Copyright © 2012 The Obesity Society.
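    The PPV and NPV figures quoted above come from standard 2x2 screening-test arithmetic, sketched below. The cell counts here are hypothetical, chosen only to be consistent with the reported cohort size (119) and 47% prevalence; they are not the study's actual table.

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic-test metrics from true/false
    positive and negative counts."""
    total = tp + fp + tn + fn
    return {
        "prevalence":  (tp + fn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for a 119-child cohort with ~47% steatosis prevalence
m = screening_metrics(tp=44, fp=12, tn=51, fn=12)
```

    Note that PPV and NPV, unlike sensitivity and specificity, depend on prevalence, which is why a score validated in one cohort can disappoint in another.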

  7. Molecular dynamics simulations and docking enable to explore the biophysical factors controlling the yields of engineered nanobodies

    Science.gov (United States)

    Soler, Miguel A.; De Marco, Ario; Fortuna, Sara

    2016-10-01

    Nanobodies (VHHs) have proved to be valuable substitutes for conventional antibodies in molecular recognition. Their small size is a precious advantage for rational mutagenesis based on modelling. Here we address the problem of predicting how Camelidae nanobody sequences tolerate mutations by developing a simulation protocol based on all-atom molecular dynamics and whole-molecule docking. The method was tested on two sets of nanobodies characterized experimentally for their biophysical features. One set contained point mutations introduced to humanize a wild-type sequence; in the second, the CDRs were swapped between single-domain frameworks with Camelidae and human hallmarks. The method yielded accurate scoring approaches for predicting experimental yields and enabled identification of the structural modifications induced by mutations. This approach is a promising tool for the in silico development of single-domain antibodies and opens the opportunity to customize single functional domains of larger macromolecules.

  8. Precise positioning with sparse radio tracking: How LRO-LOLA and GRAIL enable future lunar exploration

    Science.gov (United States)

    Mazarico, E.; Goossens, S. J.; Barker, M. K.; Neumann, G. A.; Zuber, M. T.; Smith, D. E.

    2017-12-01

    Two recent NASA missions to the Moon, the Lunar Reconnaissance Orbiter (LRO) and the Gravity Recovery and Interior Laboratory (GRAIL), have obtained highly accurate information about the lunar shape and gravity field. These global geodetic datasets resolve long-standing issues with mission planning; the tidal lock of the Moon long prevented collection of accurate gravity measurements over the farside and degraded the precise positioning of topographic data. We describe key datasets and results from the LRO and GRAIL missions that are directly relevant to future lunar missions. SmallSat and CubeSat missions would especially benefit from these recent improvements, as they are typically more resource-constrained. Even with limited radio tracking data, accurate knowledge of topography and gravity enables precise orbit determination (OD) (e.g., limiting the scope of geolocation and co-registration tasks) and long-term predictions of altitude (e.g., dramatically reducing uncertainties in impact time). With one S-band tracking pass per day, LRO OD now routinely achieves total position knowledge better than 10 meters and radial position knowledge around 0.5 meter. Other tracking data, such as Laser Ranging from Earth-based SLR stations, can further support OD. We also show how altimetry can be used to substantially improve orbit reconstruction with the accurate topographic maps now available from Lunar Orbiter Laser Altimeter (LOLA) data. We present new results with SELENE extended mission and LRO orbits processed with direct altimetry measurements. With even a simple laser altimeter onboard, high-quality OD can be achieved for future missions because of the datasets acquired by LRO and GRAIL, without the need for regular radio contact. Onboard processing of altimetric ranges would bring high-quality real-time position knowledge to support autonomous operation. We also describe why optical ranging transponders are ideal payloads for future lunar missions, as they can

  9. Atomistically enabled nonsingular anisotropic elastic representation of near-core dislocation stress fields in α -iron

    Science.gov (United States)

    Seif, Dariush; Po, Giacomo; Mrovec, Matous; Lazar, Markus; Elsässer, Christian; Gumbsch, Peter

    2015-05-01

    The stress fields of dislocations predicted by classical elasticity are known to be unrealistically large approaching the dislocation core, due to the singular nature of the theory. While in many cases this is remedied with the approximation of an effective core radius, inside which ad hoc regularizations are implemented, such approximations compromise the accuracy of the calculations. In this work an anisotropic nonsingular elastic representation of dislocation fields is developed to accurately represent the near-core stresses of dislocations in α -iron. The regularized stress field is enabled through the use of a nonsingular Green's tensor function of Helmholtz-type gradient anisotropic elasticity, which requires only a single characteristic length parameter in addition to the material's elastic constants. Using a magnetic bond-order potential to model atomic interactions in iron, molecular statics calculations are performed, and an optimization procedure is developed to extract the required length parameter. Results show the method can accurately replicate the magnitude and decay of the near-core dislocation stresses even for atoms belonging to the core itself. Comparisons with the singular isotropic and anisotropic theories show that the nonsingular anisotropic theory leads to a substantially more accurate representation of the stresses of both screw and edge dislocations near the core, in some cases showing improvements in accuracy of up to an order of magnitude. The spatial extent of the region in which the singular and nonsingular stresses differ substantially is also discussed. The general procedure we describe may in principle be applied to accurately model the near-core dislocation stresses of any arbitrarily shaped dislocation in anisotropic cubic media.
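    To give a feel for what "nonsingular" means here, the corresponding isotropic Helmholtz-type gradient-elasticity result for a straight screw dislocation (a simpler stand-in for the anisotropic theory in the abstract) can be evaluated directly: the classical 1/r stress is multiplied by a regularizing factor involving the modified Bessel function K1 and the single length parameter ell.

```python
import numpy as np

def _trapezoid(y, x):
    """Trapezoidal rule (written out to avoid NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def bessel_k1(x):
    """Modified Bessel function K1 via its integral representation
    K1(x) = int_0^inf exp(-x cosh t) cosh t dt,  x > 0."""
    t = np.linspace(0.0, 20.0, 20001)
    return _trapezoid(np.exp(-x * np.cosh(t)) * np.cosh(t), t)

def screw_stress_hoop(r, b, mu, ell):
    """sigma_{phi z}(r) of a screw dislocation in isotropic Helmholtz-type
    gradient elasticity: the classical field mu*b/(2*pi*r) times the
    regularizing factor [1 - (r/ell)*K1(r/ell)], which drives the stress
    smoothly to zero through the core (ell is the characteristic length)."""
    x = r / ell
    return mu * b / (2.0 * np.pi * r) * (1.0 - x * bessel_k1(x))

# Normalized units: mu = b = ell = 1
classical = lambda r: 1.0 / (2.0 * np.pi * r)
print(screw_stress_hoop(0.1, 1, 1, 1), classical(0.1))    # regularized << classical
print(screw_stress_hoop(10.0, 1, 1, 1), classical(10.0))  # essentially identical
```

    Far from the core (r >> ell) the two fields coincide, so the regularization changes nothing at long range, which is the behavior the abstract reports for the anisotropic case as well.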

  10. Venture Capitalist Enabled Entrepreneurial Mentoring

    DEFF Research Database (Denmark)

    Agrawal, Anirudh

    2018-01-01

    Traditionally, the success of a venture capital model has been anchored around two dimensions, namely equity as a trade for investment, and start-up valuation and profitable exits. Scholars have focused less on the inter-organizational interaction between the venture capital (VC) firm and the start-up. ... have entrepreneurs as investors, and that engage frequently with investees over managerial and market issues. Using these cases, this study proposes an antecedent, action, and outcome model of venture-capital-enabled entrepreneurial mentoring in India. This model can be extended to the global context.

  11. Optimized microsystems-enabled photovoltaics

    Energy Technology Data Exchange (ETDEWEB)

    Cruz-Campa, Jose Luis; Nielson, Gregory N.; Young, Ralph W.; Resnick, Paul J.; Okandan, Murat; Gupta, Vipin P.

    2015-09-22

    Technologies pertaining to designing microsystems-enabled photovoltaic (MEPV) cells are described herein. A first restriction for a first parameter of an MEPV cell is received. Subsequently, a selection of a second parameter of the MEPV cell is received. Values for a plurality of parameters of the MEPV cell are computed such that the MEPV cell is optimized with respect to the second parameter, wherein the values for the plurality of parameters are computed based at least in part upon the restriction for the first parameter.

  12. Quality metric for accurate overlay control in <20nm nodes

    Science.gov (United States)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget gets tighter at these advanced nodes, the accuracy of each nanometer of OVL error becomes critical. When process owners select OVL targets and methods for their process, they must do so wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going toward the 20nm nodes and below, TMU alone will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named 'Qmerit' for its imaging-based OVL (IBO) targets, which is obtained on the fly for each OVL measurement point in X and Y. This Qmerit score will enable process owners to select compatible targets that provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  13. Efforts to enrich evidence for accurate diagnoses

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-10-01

    Full Text Available It is always difficult to reach accurate diagnoses for complicated disease conditions, especially when distinguishing disorders with similar characteristics. With advanced technology, clinicians are able to detect tiny changes during ongoing disease processes. The research papers in the current issue help readers understand features of several such disorders, and in doing so provide references and hints for their accurate diagnosis and precision therapy.

  14. Accurate interpolation of 3D fields in charged particle optics.

    Science.gov (United States)

    Horák, Michal; Badin, Viktor; Zlámal, Jakub

    2018-03-29

    Standard 3D interpolation polynomials often suffer from numerical errors of the calculated field and lack of node points in the 3D solution. We introduce a novel method for accurate and smooth interpolation of arbitrary electromagnetic fields in the vicinity of the optical axis valid up to 90% of the bore radius. Our method combines Fourier analysis and Gaussian wavelet interpolation and provides the axial multipole field functions and their derivatives analytically. The results are accurate and noiseless, usually up to the 5th derivative. This is very advantageous for further applications, such as accurate particle tracing, and evaluation of aberration coefficients and other optical properties. The proposed method also enables studying the strength and orientation of all multipole field components. To illustrate the capabilities of the proposed algorithm, we present three examples: a magnetic lens with a hole in the polepiece, a saturated magnetic lens with an elliptic polepiece, and an electrostatic 8-electrode multipole. Copyright © 2018 Elsevier B.V. All rights reserved.
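    The axial-multipole approach described above rests on a classical fact of charged-particle optics: for an axially symmetric harmonic potential, the off-axis field follows from the axial function and its even derivatives via the paraxial series. The sketch below illustrates that series with a toy polynomial axial potential; it is not the authors' Fourier/wavelet machinery.

```python
from math import factorial

def off_axis_potential(axial_derivs, r, z):
    """Paraxial series for an axially symmetric harmonic potential:
        phi(r, z) = sum_n (-1)**n * (r/2)**(2n) / (n!)**2 * phi^(2n)(z),
    where axial_derivs = [phi, phi'', phi'''', ...] on the optical axis."""
    return sum((-1) ** n * (r / 2.0) ** (2 * n) / factorial(n) ** 2 * d(z)
               for n, d in enumerate(axial_derivs))

# Toy axial potential phi(z) = z**3; its only nonzero even derivative is
# phi''(z) = 6z, so the series terminates after two terms.
derivs = [lambda z: z ** 3, lambda z: 6.0 * z]
phi = lambda r, z: off_axis_potential(derivs, r, z)

# Sanity check: the result z**3 - 1.5*r**2*z satisfies Laplace's equation
h = 1e-4
r0, z0 = 0.5, 1.0
lap = ((phi(r0 + h, z0) + phi(r0 - h, z0) - 2 * phi(r0, z0)) / h ** 2
       + (phi(r0 + h, z0) - phi(r0 - h, z0)) / (2 * h * r0)
       + (phi(r0, z0 + h) + phi(r0, z0 - h) - 2 * phi(r0, z0)) / h ** 2)
```

    The practical difficulty the paper addresses is that the series needs high-order derivatives of the axial function, which is exactly where noisy numerical field data fails and where their analytic, noiseless interpolation pays off.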

  15. Context-Enabled Business Intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Troy Hiltbrand

    2012-04-01

    To truly understand context and apply it in business intelligence, it is vital to understand what context is and how it can be applied to address organizational needs. Context describes the facets of the environment that impact the way end users interact with the system. Context includes aspects of location, chronology, access method, demographics, social influence/relationships, end-user attitude/emotional state, behavior/past behavior, and presence. To be successful in making business intelligence context-enabled, it is important to be able to capture the context of the user. With advances in technology, there are a number of ways in which this user-based information can be gathered and exposed to enhance the overall end-user experience.

  16. Simulation Enabled Safeguards Assessment Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Robert Bean; Trond Bjornard; Thomas Larson

    2007-09-01

    It is expected that nuclear energy will be a significant component of future supplies. New facilities, operating under a strengthened international nonproliferation regime will be needed. There is good reason to believe virtual engineering applied to the facility design, as well as to the safeguards system design will reduce total project cost and improve efficiency in the design cycle. Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag and drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.

  17. Spatially enabling the health sector

    Directory of Open Access Journals (Sweden)

    Tarun Stephen Weeramanthri

    2016-11-01

    Full Text Available Spatial information describes the physical location of either people or objects, and the measured relationships between them. In this article we offer the view that greater utilisation of spatial information and its related technology, as part of a broader redesign of the architecture of health information at local and national levels, could assist and speed up the process of health reform, which is taking place across the globe in richer and poorer countries alike. In making this point, we describe the impetus for health sector reform, recent developments in spatial information and analytics, and current Australasian spatial health research. We highlight examples of uptake of spatial information by the health sector, as well as missed opportunities. Our recommendations to spatially enable the health sector are applicable to high and low-resource settings.

  18. Simulation enabled safeguards assessment methodology

    International Nuclear Information System (INIS)

    Bean, Robert; Bjornard, Trond; Larson, Tom

    2007-01-01

    It is expected that nuclear energy will be a significant component of future supplies. New facilities, operating under a strengthened international nonproliferation regime will be needed. There is good reason to believe virtual engineering applied to the facility design, as well as to the safeguards system design will reduce total project cost and improve efficiency in the design cycle. Simulation Enabled Safeguards Assessment MEthodology has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag and drop wire-frame construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed. (authors)

  19. Cyber-Enabled Scientific Discovery

    International Nuclear Information System (INIS)

    Chan, Tony; Jameson, Leland

    2007-01-01

    It is often said that numerical simulation is third in the group of three ways to explore modern science: theory, experiment and simulation. Carefully executed modern numerical simulations can, however, be considered at least as relevant as experiment and theory. In comparison to physical experimentation, with numerical simulation one has the numerically simulated values of every field variable at every grid point in space and time. In comparison to theory, with numerical simulation one can explore sets of very complex non-linear equations such as the Einstein equations that are very difficult to investigate theoretically. Cyber-enabled scientific discovery is not just about numerical simulation but about every possible issue related to scientific discovery by utilizing cyberinfrastructure such as the analysis and storage of large data sets, the creation of tools that can be used by broad classes of researchers and, above all, the education and training of a cyber-literate workforce.

  20. Organizational Enablers for Project Governance

    DEFF Research Database (Denmark)

    Müller, Ralf; Shao, Jingting; Pemsel, Sofia

    While corporate culture plays a significant role in the success of any corporation, governance and "governmentality" not only determine how business should be conducted, but also define the policies and procedures organizations follow to achieve business functions and goals. In their book, Organizational Enablers for Project Governance, Ralf Müller, Jingting Shao, and Sofia Pemsel examine the interaction of governance and governmentality in various types of companies and demonstrate how these factors drive business success and influence project work, efficiency, and profitability. The data for the studies was collected through interviews with six companies in Sweden and China and a global web-based questionnaire that garnered 208 responses. Using this data the authors conducted four studies, employing various research methodologies, to investigate the different systems of governance...

  1. Rotational propulsion enabled by inertia.

    Science.gov (United States)

    Nadal, François; Pak, On Shun; Zhu, LaiLai; Brandt, Luca; Lauga, Eric

    2014-07-01

    The fluid mechanics of small-scale locomotion has recently attracted considerable attention, due to its importance in cell motility and the design of artificial micro-swimmers for biomedical applications. Most studies on the topic consider the ideal limit of zero Reynolds number. In this paper, we investigate a simple propulsion mechanism, an up-down asymmetric dumbbell rotating about its axis of symmetry, which is unable to propel in the absence of inertia in a Newtonian fluid. Inertial forces lead to continuous propulsion for all finite values of the Reynolds number. We study its propulsive characteristics computationally, as well as analytically in the small-Reynolds-number limit. We also derive the optimal dumbbell geometry. The direction of propulsion enabled by inertia is opposite to that induced by viscoelasticity.

  2. Inflammatory biomarkers improve clinical prediction of mortality in chronic obstructive pulmonary disease

    DEFF Research Database (Denmark)

    Celli, Bartolome R; Locantore, Nicholas; Yates, Julie

    2012-01-01

    Accurate prediction of mortality helps select patients for interventions aimed at improving outcome.

  3. Plasmonic Metallurgy Enabled by DNA.

    Science.gov (United States)

    Ross, Michael B; Ku, Jessie C; Lee, Byeongdu; Mirkin, Chad A; Schatz, George C

    2016-04-13

    Mixed silver and gold plasmonic nanoparticle architectures are synthesized using DNA-programmable assembly, unveiling exquisitely tunable optical properties that are predicted and explained both by effective thin-film models and explicit electrodynamic simulations. These data demonstrate that the manner and ratio with which multiple metallic components are arranged can greatly alter optical properties, including tunable color and asymmetric reflectivity behavior of relevance for thin-film applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly widespread in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  5. Accurate van der Waals coefficients from density functional theory

    Science.gov (United States)

    Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn

    2012-01-01

    The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
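    The route from dynamic polarizabilities at imaginary frequency to vdW coefficients is the Casimir-Polder integral. A one-oscillator (London) sketch, far cruder than the paper's density-functional method but showing the plumbing, with hydrogen-like parameters chosen for illustration:

```python
import numpy as np

def c6_casimir_polder(alpha0_1, w1, alpha0_2, w2, wmax=2000.0, n=200001):
    """C6 from the Casimir-Polder integral (atomic units)
        C6 = (3/pi) * int_0^inf alpha_1(i*w) * alpha_2(i*w) dw
    with one-oscillator (London) model polarizabilities
        alpha(i*w) = alpha0 / (1 + (w/w0)**2)."""
    w = np.linspace(0.0, wmax, n)
    a1 = alpha0_1 / (1.0 + (w / w1) ** 2)
    a2 = alpha0_2 / (1.0 + (w / w2) ** 2)
    y = a1 * a2
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(w))  # trapezoid rule
    return 3.0 / np.pi * integral

def c6_london(alpha0_1, w1, alpha0_2, w2):
    """Closed form of the same integral: the London dispersion formula
    C6 = (3/2) * alpha0_1 * alpha0_2 * w1 * w2 / (w1 + w2)."""
    return 1.5 * alpha0_1 * alpha0_2 * w1 * w2 / (w1 + w2)

# Hydrogen-like inputs: static polarizability alpha0 = 4.5 a.u. with an
# effective oscillator frequency ~0.428 a.u. lands near the accurate
# H-H value C6 ~ 6.5 a.u.
c6 = c6_casimir_polder(4.5, 0.428, 4.5, 0.428)
```

    The paper's contribution is precisely to replace the crude single-oscillator alpha(i*w) with accurate dynamic multipole polarizabilities, which then also yield the higher coefficients C8 and C10.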

  6. Integrated Pathology Informatics Enables High-Quality Personalized and Precision Medicine: Digital Pathology and Beyond.

    Science.gov (United States)

    Volynskaya, Zoya; Chow, Hung; Evans, Andrew; Wolff, Alan; Lagmay-Traya, Cecilia; Asa, Sylvia L

    2018-03-01

    - The critical role of pathology in diagnosis, prognosis, and prediction demands high-quality subspecialty diagnostics that integrates information from multiple laboratories. - To identify key requirements and to establish a systematic approach to providing high-quality pathology in a health care system that is responsible for services across a large geographic area. - This report focuses on the development of a multisite pathology informatics platform to support high-quality surgical pathology and hematopathology using a sophisticated laboratory information system and whole slide imaging for histology and immunohistochemistry, integrated with ancillary tools, including electron microscopy, flow cytometry, cytogenetics, and molecular diagnostics. - These tools enable patients in numerous geographic locations access to a model of subspecialty pathology that allows reporting of every specimen by the right pathologist at the right time. The use of whole slide imaging for multidisciplinary case conferences enables better communication among members of patient care teams. The system encourages data collection using a discrete data synoptic reporting module, has implemented documentation of quality assurance activities, and allows workload measurement, providing examples of additional benefits that can be gained by this electronic approach to pathology. - This approach builds the foundation for accurate big data collection and high-quality personalized and precision medicine.

  7. Fundamental enabling issues in nanotechnology :

    Energy Technology Data Exchange (ETDEWEB)

    Floro, Jerrold Anthony [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Foiles, Stephen Martin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hearne, Sean Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hoyt, Jeffrey John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Seel, Steven Craig [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Webb III, Edmund Blackburn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Morales, Alfredo Martin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zimmerman, Jonathan A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2007-10-01

To effectively integrate nanotechnology into functional devices, fundamental aspects of material behavior at the nanometer scale must be understood. Stresses generated during thin film growth strongly influence component lifetime and performance; stress has also been proposed as a mechanism for stabilizing supported nanoscale structures. Yet the intrinsic connections between the evolving morphology of supported nanostructures and stress generation are still a matter of debate. This report presents results from a combined experiment and modeling approach to study stress evolution during thin film growth. Fully atomistic simulations are presented predicting stress generation mechanisms and magnitudes during all growth stages, from island nucleation to coalescence and film thickening. Simulations are validated by electrodeposition growth experiments, which establish the dependence of microstructure and growth stresses on process conditions and deposition geometry. Sandia is one of the few facilities with the resources to combine experiments and modeling/theory in this close a fashion. Experiments revealed an ongoing coalescence process that generates significant tensile stress. Data from deposition experiments also support the existence of a kinetically limited compressive stress generation mechanism. Atomistic simulations explored island coalescence and deposition onto surfaces intersected by grain boundary structures to permit investigation of stress evolution during later growth stages, e.g. continual island coalescence and adatom incorporation into grain boundaries. The predictive capabilities of simulation permit direct determination of fundamental processes active in stress generation at the nanometer scale while connecting those processes, via new theory, to continuum models for much larger island and film structures. Our combined experiment and simulation results reveal the necessary materials science to tailor stress, and therefore performance, in

  8. Enabling Technologies for High-Throughput Screening of Nano-Porous Materials: Collaboration with the Nanoporous Materials Genome Center

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Jordan [Univ. of Wisconsin, Madison, WI (United States). Dept. of Chemistry

    2016-01-21

    The overarching goal of this research was to develop new methodologies to enable the accurate and efficient modeling of complex materials using computer simulations. Using inter-molecular interaction energies calculated via an accurate but computationally expensive approach (symmetry-adapted perturbation theory), we parameterized efficient next-generation “force fields” to utilize in subsequent simulations. Since the resulting force fields incorporate much of the relevant physics of inter-molecular interactions, they consequently exhibit high transferability from one material to another. This transferability enables the modeling of a wide range of novel materials without additional computational cost. While this approach is quite general, a particular emphasis of this research involved applications to so-called “metal-organic framework”(MOF) materials relevant to energy-intensive gas separations. We focused specifically on CO2/N2 selectivity, which is a key metric for post combustion CO2 capture efforts at coal-fired power plants. The gas adsorption capacities and selectivity of the MOFs can be tailored via careful functionalization. We have demonstrated that our force fields exhibit predictive accuracy for a wide variety of functionalized MOFs, thus opening the door for the computational design of “tailored” materials for particular separations. Finally, we have also demonstrated the importance of accounting for the presence of reactive contaminant species when evaluating the performance of MOFs in practical applications.

  9. More accurate determination of coal stockpile inventories

    Energy Technology Data Exchange (ETDEWEB)

    Wright, P.L.; Hernadi, N.

    1978-08-01

The coal density within a 12 m high stockpile can range from 836 kg/m³ to 948 kg/m³. Thus the use of a constant density to calculate the stockpile inventory could lead to a variation on a nominal 100,000 tonne stockpile of 12,500 tonnes. At present-day metallurgical coal prices, this represents a range in inventory value of over 600,000 dollars. A more accurate evaluation of the stockpile inventory can easily be achieved by the simple expedient of including the density variations in the calculations. Laboratory compression tests at different moisture contents have been used to simulate the coal compaction, and to derive practical densities for coal at various heights within the stockpile. The resulting graphs of density variation with stockpile height at different moisture contents can be combined with accurate cross sections from volumetric surveys to produce an accurate quantification of the stockpile for inventory purposes.
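The height-dependent density correction the abstract describes can be sketched as a simple layered integration. Only the end-point densities (836 and 948 kg/m³ over a 12 m pile) come from the abstract; the linear density profile and the layer bookkeeping below are illustrative assumptions, not the study's measured compaction curves:

```python
# Sketch: stockpile tonnage from a height-dependent bulk density instead of
# one constant value. The linear profile between the loose surface density
# and the compacted base density is an illustrative assumption.

def density_at_height(depth_m, rho_surface=836.0, rho_base=948.0, pile_height=12.0):
    """Bulk density (kg/m^3) at a given depth below the pile surface,
    linearly interpolated between surface and base values."""
    frac = min(max(depth_m / pile_height, 0.0), 1.0)
    return rho_surface + (rho_base - rho_surface) * frac

def stockpile_tonnes(layer_areas_m2, layer_thickness_m=1.0):
    """Sum tonnage over horizontal layers (areas from a volumetric survey,
    listed bottom to top), using the density at each layer's mid-depth."""
    height = len(layer_areas_m2) * layer_thickness_m
    total_kg = 0.0
    for i, area in enumerate(layer_areas_m2):
        depth = height - (i + 0.5) * layer_thickness_m  # depth of layer midpoint
        total_kg += area * layer_thickness_m * density_at_height(depth, pile_height=height)
    return total_kg / 1000.0

# 12 one-metre layers of 100 m^2 each: ~1070 t, vs ~1003 t at a flat 836 kg/m^3.
total_t = stockpile_tonnes([100.0] * 12)
```

The same structure lets the density profile be swapped for the study's moisture-dependent compression-test curves without touching the integration loop.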

  10. MultiNotch MS3 enables accurate, sensitive, and multiplexed detection of differential expression across cancer cell line proteomes.

    Science.gov (United States)

    McAlister, Graeme C; Nusinow, David P; Jedrychowski, Mark P; Wühr, Martin; Huttlin, Edward L; Erickson, Brian K; Rad, Ramin; Haas, Wilhelm; Gygi, Steven P

    2014-07-15

    Multiplexed quantitation via isobaric chemical tags (e.g., tandem mass tags (TMT) and isobaric tags for relative and absolute quantitation (iTRAQ)) has the potential to revolutionize quantitative proteomics. However, until recently the utility of these tags was questionable due to reporter ion ratio distortion resulting from fragmentation of coisolated interfering species. These interfering signals can be negated through additional gas-phase manipulations (e.g., MS/MS/MS (MS3) and proton-transfer reactions (PTR)). These methods, however, have a significant sensitivity penalty. Using isolation waveforms with multiple frequency notches (i.e., synchronous precursor selection, SPS), we coisolated and cofragmented multiple MS2 fragment ions, thereby increasing the number of reporter ions in the MS3 spectrum 10-fold over the standard MS3 method (i.e., MultiNotch MS3). By increasing the reporter ion signals, this method improves the dynamic range of reporter ion quantitation, reduces reporter ion signal variance, and ultimately produces more high-quality quantitative measurements. To demonstrate utility, we analyzed biological triplicates of eight colon cancer cell lines using the MultiNotch MS3 method. Across all the replicates we quantified 8,378 proteins in union and 6,168 proteins in common. Taking into account that each of these quantified proteins contains eight distinct cell-line measurements, this data set encompasses 174,704 quantitative ratios each measured in triplicate across the biological replicates. Herein, we demonstrate that the MultiNotch MS3 method uniquely combines multiplexing capacity with quantitative sensitivity and accuracy, drastically increasing the informational value obtainable from proteomic experiments.

  11. Enabling individualized therapy through nanotechnology

    Science.gov (United States)

    Sakamoto, Jason H.; van de Ven, Anne L.; Godin, Biana; Blanco, Elvin; Serda, Rita E.; Grattoni, Alessandro; Ziemys, Arturas; Bouamrani, Ali; Hu, Tony; Ranganathan, Shivakumar I.; De Rosa, Enrica; Martinez, Jonathan O.; Smid, Christine A.; Buchanan, Rachel M.; Lee, Sei-Young; Srinivasan, Srimeenakshi; Landry, Matthew; Meyn, Anne; Tasciotti, Ennio; Liu, Xuewu; Decuzzi, Paolo; Ferrari, Mauro

    2010-01-01

    Individualized medicine is the healthcare strategy that rebukes the idiomatic dogma of ‘losing sight of the forest for the trees’. We are entering a new era of healthcare where it is no longer acceptable to develop and market a drug that is effective for only 80% of the patient population. The emergence of “-omic” technologies (e.g. genomics, transcriptomics, proteomics, metabolomics) and advances in systems biology are magnifying the deficiencies of standardized therapy, which often provide little treatment latitude for accommodating patient physiologic idiosyncrasies. A personalized approach to medicine is not a novel concept. Ever since the scientific community began unraveling the mysteries of the genome, the promise of discarding generic treatment regimens in favor of patient-specific therapies became more feasible and realistic. One of the major scientific impediments of this movement towards personalized medicine has been the need for technological enablement. Nanotechnology is projected to play a critical role in patient-specific therapy; however, this transition will depend heavily upon the evolutionary development of a systems biology approach to clinical medicine based upon “-omic” technology analysis and integration. This manuscript provides a forward looking assessment of the promise of nanomedicine as it pertains to individualized medicine and establishes a technology “snapshot” of the current state of nano-based products over a vast array of clinical indications and range of patient specificity. Other issues such as market driven hurdles and regulatory compliance reform are anticipated to “self-correct” in accordance to scientific advancement and healthcare demand. These peripheral, non-scientific concerns are not addressed at length in this manuscript; however they do exist, and their impact to the paradigm shifting healthcare transformation towards individualized medicine will be critical for its success. PMID:20045055

  12. Solar Glitter -- Microsystems Enabled Photovoltaics

    Science.gov (United States)

    Nielson, Gregory N.

    2012-02-01

    Many products have significantly benefitted from, or been enabled by, the ability to manufacture structures at an ever decreasing length scale. Obvious examples of this include integrated circuits, flat panel displays, micro-scale sensors, and LED lighting. These industries have benefited from length scale effects in terms of improved performance, reduced cost, or new functionality (or a combination of these). In a similar manner, we are working to take advantage of length scale effects that exist within solar photovoltaic (PV) systems. While this is a significant step away from traditional approaches to solar power systems, the benefits in terms of new functionality, improved performance, and reduced cost for solar power are compelling. We are exploring scale effects that result from the size of the solar cells within the system. We have developed unique cells of both crystalline silicon and III-V materials that are very thin (5-20 microns thick) and have very small lateral dimensions (on the order of hundreds of microns across). These cells minimize the amount of expensive semiconductor material required for the system, allow improved cell performance, and provide an expanded design space for both module and system concepts allowing optimized power output and reduced module and balance of system costs. Furthermore, the small size of the cells allows for unique high-efficiency, high-flexibility PV panels and new building-integrated PV options that are currently unavailable. These benefits provide a pathway for PV power to become cost competitive with grid power and allow unique power solutions independent of grid power.

  13. Accurate Measurement of the Effects of All Amino-Acid Mutations on Influenza Hemagglutinin.

    Science.gov (United States)

    Doud, Michael B; Bloom, Jesse D

    2016-06-03

    Influenza genes evolve mostly via point mutations, and so knowing the effect of every amino-acid mutation provides information about evolutionary paths available to the virus. We and others have combined high-throughput mutagenesis with deep sequencing to estimate the effects of large numbers of mutations to influenza genes. However, these measurements have suffered from substantial experimental noise due to a variety of technical problems, the most prominent of which is bottlenecking during the generation of mutant viruses from plasmids. Here we describe advances that ameliorate these problems, enabling us to measure with greatly improved accuracy and reproducibility the effects of all amino-acid mutations to an H1 influenza hemagglutinin on viral replication in cell culture. The largest improvements come from using a helper virus to reduce bottlenecks when generating viruses from plasmids. Our measurements confirm at much higher resolution the results of previous studies suggesting that antigenic sites on the globular head of hemagglutinin are highly tolerant of mutations. We also show that other regions of hemagglutinin-including the stalk epitopes targeted by broadly neutralizing antibodies-have a much lower inherent capacity to tolerate point mutations. The ability to accurately measure the effects of all influenza mutations should enhance efforts to understand and predict viral evolution.

  14. Accurate Measurement of the Effects of All Amino-Acid Mutations on Influenza Hemagglutinin

    Directory of Open Access Journals (Sweden)

    Michael B. Doud

    2016-06-01

    Full Text Available Influenza genes evolve mostly via point mutations, and so knowing the effect of every amino-acid mutation provides information about evolutionary paths available to the virus. We and others have combined high-throughput mutagenesis with deep sequencing to estimate the effects of large numbers of mutations to influenza genes. However, these measurements have suffered from substantial experimental noise due to a variety of technical problems, the most prominent of which is bottlenecking during the generation of mutant viruses from plasmids. Here we describe advances that ameliorate these problems, enabling us to measure with greatly improved accuracy and reproducibility the effects of all amino-acid mutations to an H1 influenza hemagglutinin on viral replication in cell culture. The largest improvements come from using a helper virus to reduce bottlenecks when generating viruses from plasmids. Our measurements confirm at much higher resolution the results of previous studies suggesting that antigenic sites on the globular head of hemagglutinin are highly tolerant of mutations. We also show that other regions of hemagglutinin—including the stalk epitopes targeted by broadly neutralizing antibodies—have a much lower inherent capacity to tolerate point mutations. The ability to accurately measure the effects of all influenza mutations should enhance efforts to understand and predict viral evolution.

  15. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency

  16. Quantitative self-assembly prediction yields targeted nanomedicines

    Science.gov (United States)

    Shamay, Yosi; Shah, Janki; Işık, Mehtap; Mizrachi, Aviram; Leibold, Josef; Tschaharganeh, Darjus F.; Roxbury, Daniel; Budhathoki-Uprety, Januka; Nawaly, Karla; Sugarman, James L.; Baut, Emily; Neiman, Michelle R.; Dacek, Megan; Ganesh, Kripa S.; Johnson, Darren C.; Sridharan, Ramya; Chu, Karen L.; Rajasekhar, Vinagolu K.; Lowe, Scott W.; Chodera, John D.; Heller, Daniel A.

    2018-02-01

    Development of targeted nanoparticle drug carriers often requires complex synthetic schemes involving both supramolecular self-assembly and chemical modification. These processes are generally difficult to predict, execute, and control. We describe herein a targeted drug delivery system that is accurately and quantitatively predicted to self-assemble into nanoparticles based on the molecular structures of precursor molecules, which are the drugs themselves. The drugs assemble with the aid of sulfated indocyanines into particles with ultrahigh drug loadings of up to 90%. We devised quantitative structure-nanoparticle assembly prediction (QSNAP) models to identify and validate electrotopological molecular descriptors as highly predictive indicators of nano-assembly and nanoparticle size. The resulting nanoparticles selectively targeted kinase inhibitors to caveolin-1-expressing human colon cancer and autochthonous liver cancer models to yield striking therapeutic effects while avoiding pERK inhibition in healthy skin. This finding enables the computational design of nanomedicines based on quantitative models for drug payload selection.

  17. Anatomically accurate, finite model eye for optical modeling.

    Science.gov (United States)

    Liou, H L; Brennan, N A

    1997-08-01

    There is a need for a schematic eye that models vision accurately under various conditions such as refractive surgical procedures, contact lens and spectacle wear, and near vision. Here we propose a new model eye close to anatomical, biometric, and optical realities. This is a finite model with four aspheric refracting surfaces and a gradient-index lens. It has an equivalent power of 60.35 D and an axial length of 23.95 mm. The new model eye provides spherical aberration values within the limits of empirical results and predicts chromatic aberration for wavelengths between 380 and 750 nm. It provides a model for calculating optical transfer functions and predicting optical performance of the eye.

  18. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  19. A novel and accurate diagnostic test for human African trypanosomiasis.

    Science.gov (United States)

    Papadopoulos, Marios C; Abel, Paulo M; Agranoff, Dan; Stich, August; Tarelli, Edward; Bell, B Anthony; Planche, Timothy; Loosemore, Alison; Saadoun, Samira; Wilkins, Peter; Krishna, Sanjeev

    2004-04-24

    Human African trypanosomiasis (sleeping sickness) affects up to half a million people every year in sub-Saharan Africa. Because current diagnostic tests for the disease have low accuracy, we sought to develop a novel test that can diagnose human African trypanosomiasis with high sensitivity and specificity. We applied serum samples from 85 patients with African trypanosomiasis and 146 control patients who had other parasitic and non-parasitic infections to a weak cation exchange chip, and analysed with surface-enhanced laser desorption-ionisation time-of-flight mass spectrometry. Mass spectra were then assessed with three powerful data-mining tools: a tree classifier, a neural network, and a genetic algorithm. Spectra (2-100 kDa) were grouped into training (n=122) and testing (n=109) sets. The training set enabled data-mining software to identify distinct serum proteomic signatures characteristic of human African trypanosomiasis among 206 protein clusters. Sensitivity and specificity, determined with the testing set, were 100% and 98.6%, respectively, when the majority opinion of the three algorithms was considered. This novel approach is much more accurate than any other diagnostic test. Our report of the accurate diagnosis of an infection by use of proteomic signature analysis could form the basis for diagnostic tests for the disease, monitoring of response to treatment, and for improving the accuracy of patient recruitment in large-scale epidemiological studies.

  20. Accurate Sample Time Reconstruction of Inertial FIFO Data.

    Science.gov (United States)

    Stieber, Sebastian; Dorsch, Rainer; Haubelt, Christian

    2017-12-13

In the context of modern cyber-physical systems, the accuracy of underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high-quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts (introduced by fabrication inaccuracies, temperature changes and wear-out effects) on the sampling data reconstruction. Furthermore, additional timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for an accurate sample time reconstruction independent of the actual clock drift with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps using the sensor FIFO interface in a forward-only processing manner as a robust and energy-saving solution. The proposed algorithm is able to lower the overall standard deviation of reconstructed sampling periods below 40 μs, while run-time savings of up to 42% are achieved compared to single sample acquisition.
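The core idea, anchoring sample times to the sensor's own timer instead of the host clock, can be sketched as follows. The interface (a timer value latched at each batch read) and the function name are illustrative assumptions, not the paper's exact algorithm:

```python
# Illustrative sketch: reconstruct per-sample timestamps for a FIFO batch
# from the sensor's internal timer, so host-side latency and local clock
# drift do not bias the sample spacing.

def reconstruct_timestamps(n_samples, timer_now, timer_prev):
    """timer_now / timer_prev: sensor timer values latched at the current and
    previous FIFO reads; n_samples: samples drained in the current batch.
    Returns one timestamp (in sensor-timer units) per sample."""
    period = (timer_now - timer_prev) / n_samples  # effective true sample period
    return [timer_prev + (i + 1) * period for i in range(n_samples)]

# Four samples arrived between timer values 60.0 and 100.0:
stamps = reconstruct_timestamps(4, 100.0, 60.0)  # [70.0, 80.0, 90.0, 100.0]
```

Because the period is derived from the sensor timer delta rather than a nominal data-sheet rate, the reconstruction stays consistent under clock drift, which is the property the article's forward-only algorithm targets.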

  1. Accurate Sample Time Reconstruction of Inertial FIFO Data

    Directory of Open Access Journals (Sweden)

    Sebastian Stieber

    2017-12-01

Full Text Available In the context of modern cyber-physical systems, the accuracy of underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high-quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts (introduced by fabrication inaccuracies, temperature changes and wear-out effects) on the sampling data reconstruction. Furthermore, additional timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for an accurate sample time reconstruction independent of the actual clock drift with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps using the sensor FIFO interface in a forward-only processing manner as a robust and energy-saving solution. The proposed algorithm is able to lower the overall standard deviation of reconstructed sampling periods below 40 μs, while run-time savings of up to 42% are achieved compared to single sample acquisition.

  2. The economic value of accurate wind power forecasting to utilities

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S.J. [Rutherford Appleton Lab., Oxfordshire (United Kingdom); Giebel, G.; Joensen, A. [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

With increasing penetrations of wind power, the need for accurate forecasting is becoming ever more important. Wind power is by its very nature intermittent. For utility schedulers this presents its own problems, particularly when the penetration of wind power capacity in a grid reaches a significant level (>20%). However, using accurate forecasts of wind power at wind farm sites, schedulers are able to plan the operation of conventional power capacity to accommodate the fluctuating demands of consumers and wind farm output. The results of a study to assess the value of forecasting at several potential wind farm sites in the UK and in the US state of Iowa using the Reading University/Rutherford Appleton Laboratory National Grid Model (NGM) are presented. The results are assessed for different types of wind power forecasting, namely: persistence, optimised numerical weather prediction or perfect forecasting. In particular, it will be shown how the NGM has been used to assess the value of numerical weather prediction forecasts from the Danish Meteorological Institute model, HIRLAM, and the US Nested Grid Model, which have been 'site tailored' by the use of the linearized flow model WAsP and by various Model Output Statistics (MOS) and autoregressive techniques. (au)
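The persistence baseline mentioned above is simple enough to state in a few lines: the forecast for time t+h is just the observation at time t. The data values below are invented for the example; nothing here comes from the NGM study:

```python
# Toy illustration of persistence forecasting and its error, the baseline
# against which NWP-based forecasts are usually judged. Values are made up.

def persistence_forecast(series, horizon):
    """Persistence: forecast[t] = series[t - horizon], i.e. the series
    shifted forward by `horizon` steps (last observation carried forward)."""
    return series[:-horizon]

def mae(forecast, actual):
    """Mean absolute error between paired forecast and actual values."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(forecast)

power_mw = [10.0, 12.0, 11.0, 15.0, 14.0, 9.0]  # hourly wind farm output (MW)
h = 1
err = mae(persistence_forecast(power_mw, h), power_mw[h:])  # 2.6 MW
```

Persistence is hard to beat at very short horizons, which is why studies like this one compare NWP-based and perfect forecasts against it to quantify the economic value of better forecasting.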

  3. Heap: a highly sensitive and accurate SNP detection tool for low-coverage high-throughput sequencing data

    KAUST Repository

    Kobayashi, Masaaki

    2017-04-20

Recent availability of large-scale genomic resources enables us to conduct so-called genome-wide association studies (GWAS) and genomic prediction (GP) studies, particularly with next-generation sequencing (NGS) data. The effectiveness of GWAS and GP depends not only on their mathematical models, but on the quality and quantity of variants employed in the analysis. In NGS single nucleotide polymorphism (SNP) calling, conventional tools ideally require more reads for higher SNP sensitivity and accuracy. In this study, we aimed to develop a tool, Heap, that enables robustly sensitive and accurate calling of SNPs, particularly with low-coverage NGS data, which must be aligned to the reference genome sequences in advance. To reduce false positive SNPs, Heap determines genotypes and calls SNPs at each site except for sites at both ends of reads or containing a minor allele supported by only one read. Performance comparison with existing tools showed that Heap achieved the highest F-scores with low coverage (7X) restriction-site associated DNA sequencing reads of sorghum and rice individuals. This will facilitate cost-effective GWAS and GP studies in this NGS era. Code and documentation of Heap are freely available from https://github.com/meiji-bioinf/heap (29 March 2017, date last accessed) and our web site (http://bioinf.mind.meiji.ac.jp/lab/en/tools.html (29 March 2017, date last accessed)).
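The two false-positive filters the abstract describes can be sketched as a site-level predicate. The function signature and data shapes below are illustrative assumptions, not Heap's actual interface:

```python
# Hedged sketch of the filtering rules described above: drop candidate SNP
# sites that fall at either end of a read, and drop sites whose minor
# allele is supported by only a single read.

def passes_filters(site_pos, read_start, read_end, allele_counts):
    """allele_counts maps allele -> number of reads supporting it at the site."""
    if site_pos in (read_start, read_end):   # site at a read end: unreliable base
        return False
    if len(allele_counts) > 1 and min(allele_counts.values()) <= 1:
        return False                         # minor allele seen in only one read
    return True

# Kept: interior site, both alleles multiply supported.
keep = passes_filters(50, 0, 100, {"A": 5, "T": 3})   # True
# Dropped: minor allele supported by a single read (likely sequencing error).
drop = passes_filters(50, 0, 100, {"A": 5, "T": 1})   # False
```

At low coverage these cheap structural filters matter disproportionately, since a single erroneous read can otherwise masquerade as a heterozygous call.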

  4. Operator overloading as an enabling technology for automatic differentiation

    International Nuclear Information System (INIS)

    Corliss, G.F.; Griewank, A.

    1993-01-01

We present an example of the science that is enabled by object-oriented programming techniques. Scientific computation often needs derivatives for solving nonlinear systems such as those arising in many PDE algorithms, optimization, parameter identification, stiff ordinary differential equations, or sensitivity analysis. Automatic differentiation computes derivatives accurately and efficiently by applying the chain rule to each arithmetic operation or elementary function. Operator overloading enables the techniques of either the forward or the reverse mode of automatic differentiation to be applied to real-world scientific problems. We illustrate automatic differentiation with an example drawn from a model of unsaturated flow in a porous medium. The problem arises from planning for the long-term storage of radioactive waste.
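The mechanism the abstract describes, overloading arithmetic operators so the chain rule is applied at every operation, can be sketched with dual numbers. This is a generic forward-mode illustration in Python, not the authors' implementation:

```python
# Minimal forward-mode automatic differentiation via operator overloading.
# Each Dual carries a value and its derivative; overloaded operators apply
# the chain rule per arithmetic operation.
import math

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__

def sin(x):
    # Chain rule for an elementary function: d/dt sin(x(t)) = cos(x) * x'
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# Differentiate f(x) = x*sin(x) + 3x at x = 2 by seeding the derivative to 1.
x = Dual(2.0, 1.0)
f = x * sin(x) + 3 * x
# f.der now holds sin(2) + 2*cos(2) + 3, accurate to machine precision.
```

Unlike finite differences, no step-size tuning is involved; the derivative emerges exactly (up to rounding) because each overloaded operation propagates its own local derivative.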

  5. Accurate maser positions for MALT-45

    Science.gov (United States)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  6. ESG: extended similarity group method for automated protein function prediction.

    Science.gov (United States)

    Chitale, Meghana; Hawkins, Troy; Park, Changsoon; Kihara, Daisuke

    2009-07-15

    Importance of accurate automatic protein function prediction is ever increasing in the face of a large number of newly sequenced genomes and proteomics data that are awaiting biological interpretation. Conventional methods have focused on high sequence similarity-based annotation transfer which relies on the concept of homology. However, many cases have been reported that simple transfer of function from top hits of a homology search causes erroneous annotation. New methods are required to handle the sequence similarity in a more robust way to combine together signals from strongly and weakly similar proteins for effectively predicting function for unknown proteins with high reliability. We present the extended similarity group (ESG) method, which performs iterative sequence database searches and annotates a query sequence with Gene Ontology terms. Each annotation is assigned with probability based on its relative similarity score with the multiple-level neighbors in the protein similarity graph. We will depict how the statistical framework of ESG improves the prediction accuracy by iteratively taking into account the neighborhood of query protein in the sequence similarity space. ESG outperforms conventional PSI-BLAST and the protein function prediction (PFP) algorithm. It is found that the iterative search is effective in capturing multiple-domains in a query protein, enabling accurately predicting several functions which originate from different domains. ESG web server is available for automated protein function prediction at http://dragon.bio.purdue.edu/ESG/.

  7. Predicting novel metabolic pathways through subgraph mining.

    Science.gov (United States)

    Sankar, Aravind; Ranu, Sayan; Raman, Karthik

    2017-12-15

The ability to predict pathways for biosynthesis of metabolites is very important in metabolic engineering. It is possible to mine the repertoire of biochemical transformations from reaction databases, and apply the knowledge to predict reactions to synthesize new molecules. However, this usually involves a careful understanding of the mechanism and the knowledge of the exact bonds being created and broken. There is a need for a method to rapidly predict reactions for synthesizing new molecules, which relies only on the structures of the molecules, without demanding additional information such as thermodynamics or hand-curated reactant mapping, which are often hard to obtain accurately. We here describe a robust method based on subgraph mining, to predict a series of biochemical transformations, which can convert between two (even previously unseen) molecules. We first describe a reliable method based on subgraph edit distance to map reactants and products, using only their chemical structures. Having mapped reactants and products, we identify the reaction centre and its neighbourhood, the reaction signature, and store this in a reaction rule network. This novel representation enables us to rapidly predict pathways, even between previously unseen molecules. We demonstrate this ability by predicting pathways to molecules not present in the KEGG database. We also propose a heuristic that predominantly recovers natural biosynthetic pathways from amongst hundreds of possible alternatives, through a directed search of the reaction rule network, enabling us to provide a reliable ranking of the different pathways. Our approach scales well, even to databases with >100 000 reactions. A Java-based implementation of our algorithms is available at https://github.com/RamanLab/ReactionMiner. sayanranu@cse.iitd.ac.in or kraman@iitm.ac.in. Supplementary data are available at Bioinformatics online.

  8. Long Range Aircraft Trajectory Prediction

    OpenAIRE

    Magister, Tone

    2009-01-01

    The subject of the paper is the improvement of the aircraft future trajectory prediction accuracy for long-range airborne separation assurance. The strategic planning of safe aircraft flights and effective conflict avoidance tactics demand timely and accurate conflict detection based upon future four-dimensional airborne traffic situation prediction which is as accurate as each aircraft flight trajectory prediction. The improved kinematics model of aircraft relative flight considering flight ...

  9. Accurate analytical representation of Pluto modern ephemeris

    Science.gov (United States)

    Kudryavtsev, Sergey M.; Kudryavtseva, Natalia S.

    2009-12-01

    An accurate development of the latest JPL’s numerical ephemeris of Pluto, DE421, to compact analytical series is done. Rectangular barycentric ICRF coordinates of Pluto from DE421 are approximated by compact Fourier series with a maximum error of 1.3 km over 1900-2050 (the entire time interval covered by the ephemeris). To calculate Pluto positions relative to the Sun, a development of rectangular heliocentric ICRF coordinates of the Solar System barycenter to Poisson series is additionally made. As a result, DE421 Pluto heliocentric positions by the new analytical series are represented to an accuracy of better than 5 km over 1900-2050.
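
    The development described above amounts to least-squares fitting of compact trigonometric series to tabulated coordinates. A minimal sketch of that kind of fit, with an invented frequency and data rather than the actual DE421 series, might look like:

```python
# Hedged illustration: fit x(t) ~ a0 + sum_k [a_k cos + b_k sin](2*pi*f_k*t)
# by linear least squares. Frequencies and data here are placeholders.
import numpy as np

def fit_fourier(t, x, freqs):
    """Least-squares Fourier-series fit; returns coefficients and the fit."""
    cols = [np.ones_like(t)]
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coeffs, A @ coeffs

t = np.linspace(0.0, 10.0, 500)
x = 3.0 + 2.0 * np.cos(2 * np.pi * 0.4 * t) - 1.5 * np.sin(2 * np.pi * 0.4 * t)
coeffs, approx = fit_fourier(t, x, freqs=[0.4])
print(np.round(coeffs, 3))                 # ~[3.0, 2.0, -1.5]
print(float(np.max(np.abs(x - approx))))   # maximum approximation error
```

The real series uses many carefully chosen frequencies to hold the maximum error to 1.3 km over the full 1900-2050 interval.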

  10. Accurate Charge Densities from Powder Diffraction

    DEFF Research Database (Denmark)

    Bindzus, Niels; Wahlberg, Nanna; Becker, Jacob

    Synchrotron powder X-ray diffraction has in recent years advanced to a level, where it has become realistic to probe extremely subtle electronic features. Compared to single-crystal diffraction, it may be superior for simple, high-symmetry crystals owing to negligible extinction effects and minimal... peak overlap. Additionally, it offers the opportunity for collecting data on a single scale. For charge densities studies, the critical task is to recover accurate and bias-free structure factors from the diffraction pattern. This is the focal point of the present study, scrutinizing the performance...

  11. Hyper-accurate ribosomes inhibit growth.

    OpenAIRE

    Ruusala, T; Andersson, D; Ehrenberg, M; Kurland, C G

    1984-01-01

    We have compared both in vivo and in vitro translation by ribosomes from wild-type bacteria with those from streptomycin-resistant (SmR), streptomycin-dependent (SmD) and streptomycin-pseudo-dependent (SmP) mutants. The three mutant bacteria translate more accurately and more slowly in the absence of streptomycin (Sm) than do wild-type bacteria. In particular, the SmP bacteria grow at roughly half the rate of the wild-type in the absence of Sm. The antibiotic stimulates both the growth rate a...

  12. Innovation Enablers for Innovation Teams - A Review

    OpenAIRE

    Johnsson, Mikael

    2017-01-01

    This review consolidates research on innovation enablers for innovation teams, defined within this research as factors that enable a crossfunctional team within an organization to conduct innovation work, to provide a deeper understanding of what factors enable innovation teams to conduct innovation work, which means that this research involves three areas to provide a holistic picture: the organizational context, the team itself, and the individuals within the innovation team. A systematic d...

  13. An internet enabled impact limiter material database

    International Nuclear Information System (INIS)

    Wix, S.; Kanipe, F.; McMurtry, W.

    1998-01-01

    This paper presents a detailed explanation of the construction of an internet enabled database, also known as a database driven web site. The data contained in the internet enabled database are impact limiter material and seal properties. The techniques used in constructing the internet enabled database presented in this paper are applicable when information that is changing in content needs to be disseminated to a wide audience. (authors)

  14. An Internet enabled impact limiter material database

    International Nuclear Information System (INIS)

    Wix, S.; Kanipe, F.; McMurtry, W.

    1998-01-01

    This paper presents a detailed explanation of the construction of an internet enabled database, also known as a database driven web site. The data contained in the internet enabled database are impact limiter material and seal properties. The techniques used in constructing the internet enabled database presented in this paper are applicable when information that is changing in content needs to be disseminated to a wide audience.

  15. Hyper-accurate ribosomes inhibit growth.

    Science.gov (United States)

    Ruusala, T; Andersson, D; Ehrenberg, M; Kurland, C G

    1984-11-01

    We have compared both in vivo and in vitro translation by ribosomes from wild-type bacteria with those from streptomycin-resistant (SmR), streptomycin-dependent (SmD) and streptomycin-pseudo-dependent (SmP) mutants. The three mutant bacteria translate more accurately and more slowly in the absence of streptomycin (Sm) than do wild-type bacteria. In particular, the SmP bacteria grow at roughly half the rate of the wild-type in the absence of Sm. The antibiotic stimulates both the growth rate and the translation rate of SmP bacteria by approximately 2-fold, but it simultaneously increases the nonsense suppression rate quite dramatically. Kinetic experiments in vitro show that the greater accuracy and slower translation rates of mutant ribosomes compared with wild-type ribosomes are associated with much more rigorous proofreading activities of SmR, SmD and SmP ribosomes. Sm reduces the proofreading flows of the mutant ribosomes and stimulates their elongation rates. The data suggest that these excessively accurate ribosomes are kinetically less efficient than wild-type ribosomes, and that this inhibits mutant growth rates. The stimulation of the growth of the mutants by Sm results from the enhanced translational efficiency due to the loss of proofreading, which more than offsets the loss of accuracy caused by the antibiotic.

  16. Accurate basis set truncation for wavefunction embedding

    Science.gov (United States)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  17. Semantic Sensor Web Enablement for COAST Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Sensor Web Enablement (SWE) is an Open Geospatial Consortium (OGC) standard Service Oriented Architecture (SOA) that facilitates discovery and integration of...

  18. "Nanotechnology Enabled Advanced Industrial Heat Transfer Fluids"

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Ganesh Skandan; Dr. Amit Singhal; Mr. Kenneth Eberts; Mr. Damian Sobrevilla; Prof. Jerry Shan; Stephen Tse; Toby Rossmann

    2008-06-12

    Improving the efficiency of industrial heat exchangers offers a great opportunity to improve overall process efficiencies in diverse industries such as pharmaceuticals, materials manufacturing and food processing. The higher efficiencies can come in part from improved heat transfer during both cooling and heating of the material being processed. Additionally, there is great interest in enhancing the performance and reducing the weight of heat exchangers used in automobiles in order to increase fuel efficiency. The goal of the Phase I program was to develop nanoparticle-containing heat transfer fluids (e.g., antifreeze, water, silicone and hydrocarbon-based oils) that are used in transportation and in the chemical industry for heating, cooling and recovering waste heat. Much work has been done to date on investigating the potential use of nanoparticle-enhanced thermal fluids to improve heat transfer in heat exchangers. In most cases the effect in a commercial heat transfer fluid has been marginal at best. In the Phase I work, we demonstrated that the thermal conductivity, and hence heat transfer, of a fluid containing nanoparticles can be dramatically increased when subjected to an external influence. The increase in thermal conductivity was significantly larger than what is predicted by commonly used thermal models for two-phase materials. Additionally, the surface of the nanoparticles was engineered so as to have a minimal influence on the viscosity of the fluid. As a result, a nanoparticle-laden fluid was successfully developed that can lead to enhanced heat transfer in both industrial and automotive heat exchangers.

  19. DFT-based Green's function pathways model for prediction of bridge-mediated electronic coupling.

    Science.gov (United States)

    Berstis, Laura; Baldridge, Kim K

    2015-12-14

    A density functional theory-based Green's function pathway model is developed enabling further advancements towards the long-standing challenge of accurate yet inexpensive prediction of electron transfer rate. Electronic coupling predictions are demonstrated to within 0.1 eV of experiment for organic and biological systems of moderately large size, with modest computational expense. Benchmarking and comparisons are made across density functional type, basis set extent, and orbital localization scheme. The resulting framework is shown to be flexible and to offer quantitative prediction of both electronic coupling and tunneling pathways in covalently bound non-adiabatic donor-bridge-acceptor (D-B-A) systems. A new localized molecular orbital Green's function pathway method (LMO-GFM) adaptation enables intuitive understanding of electron tunneling in terms of through-bond and through-space interactions.

  20. FSR: feature set reduction for scalable and accurate multi-class cancer subtype classification based on copy number.

    Science.gov (United States)

    Wong, Gerard; Leckie, Christopher; Kowalczyk, Adam

    2012-01-15

    Feature selection is a key concept in machine learning for microarray datasets, where features represented by probesets are typically several orders of magnitude larger than the available sample size. Computational tractability is a key challenge for feature selection algorithms in handling very high-dimensional datasets beyond a hundred thousand features, such as in datasets produced on single nucleotide polymorphism microarrays. In this article, we present a novel feature set reduction approach that enables scalable feature selection on datasets with hundreds of thousands of features and beyond. Our approach enables more efficient handling of higher resolution datasets to achieve better disease subtype classification of samples for potentially more accurate diagnosis and prognosis, which allows clinicians to make more informed decisions in regards to patient treatment options. We applied our feature set reduction approach to several publicly available cancer single nucleotide polymorphism (SNP) array datasets and evaluated its performance in terms of its multiclass predictive classification accuracy over different cancer subtypes, its speedup in execution as well as its scalability with respect to sample size and array resolution. Feature Set Reduction (FSR) was able to reduce the dimensions of an SNP array dataset by more than two orders of magnitude while achieving at least equal, and in most cases superior predictive classification performance over that achieved on features selected by existing feature selection methods alone. An examination of the biological relevance of frequently selected features from FSR-reduced feature sets revealed strong enrichment in association with cancer. FSR was implemented in MATLAB R2010b and is available at http://ww2.cs.mu.oz.au/~gwong/FSR.

  1. A highly accurate method for determination of dissolved oxygen: Gravimetric Winkler method

    International Nuclear Information System (INIS)

    Helm, Irja; Jalukse, Lauri; Leito, Ivo

    2012-01-01

    Highlights: ► Probably the most accurate method available for dissolved oxygen concentration measurement was developed. ► Careful analysis of uncertainty sources was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. ► This development enables more accurate calibration of dissolved oxygen sensors for routine analysis than has been possible before. - Abstract: A high-accuracy Winkler titration method has been developed for determination of dissolved oxygen concentration. Careful analysis of uncertainty sources relevant to the Winkler method was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. The most important improvements were: gravimetric measurement of all solutions, pre-titration to minimize the effect of iodine volatilization, accurate amperometric end point detection and careful accounting for dissolved oxygen in the reagents. As a result, the developed method is possibly the most accurate method of determination of dissolved oxygen available. Depending on measurement conditions and on the dissolved oxygen concentration the combined standard uncertainties of the method are in the range of 0.012–0.018 mg dm⁻³ corresponding to the k = 2 expanded uncertainty in the range of 0.023–0.035 mg dm⁻³ (0.27–0.38%, relative). This development enables more accurate calibration of electrochemical and optical dissolved oxygen sensors for routine analysis than has been possible before.
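
    For readers unfamiliar with the figures quoted above: independent standard uncertainty components combine in quadrature, and the expanded uncertainty is the combined value scaled by the coverage factor k = 2. A sketch with a made-up budget (not the paper's actual components):

```python
# Illustrative only: root-sum-square combination of independent standard
# uncertainties and a k = 2 expanded uncertainty. Component values invented.
import math

def combined_standard_uncertainty(components):
    """Quadrature sum of independent standard uncertainty components."""
    return math.sqrt(sum(u * u for u in components))

# Hypothetical budget (mg dm^-3): weighing, end-point detection, reagent O2.
u_c = combined_standard_uncertainty([0.008, 0.010, 0.005])
U = 2 * u_c  # expanded uncertainty, coverage factor k = 2
print(round(u_c, 4), round(U, 4))
```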

  2. Evaluation of Machine Learning Methods to Predict Coronary Artery Disease Using Metabolomic Data.

    Science.gov (United States)

    Forssen, Henrietta; Patel, Riyaz; Fitzpatrick, Natalie; Hingorani, Aroon; Timmis, Adam; Hemingway, Harry; Denaxas, Spiros

    2017-01-01

    Metabolomic data can potentially enable accurate, non-invasive and low-cost prediction of coronary artery disease. Regression-based analytical approaches however might fail to fully account for interactions between metabolites, rely on a priori selected input features and thus might suffer from poorer accuracy. Supervised machine learning methods can potentially be used in order to fully exploit the dimensionality and richness of the data. In this paper, we systematically implement and evaluate a set of supervised learning methods (L1 regression, random forest classifier) and compare them to traditional regression-based approaches for disease prediction using metabolomic data.
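
    A minimal sketch, not the authors' pipeline, of the comparison described: an L1-regularised logistic regression versus a random forest classifier, here on synthetic high-dimensional data standing in for metabolite concentrations (scikit-learn assumed available):

```python
# Hedged sketch: compare a sparse linear model with a random forest on
# synthetic "metabolomic-like" data via cross-validated AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 300 samples, 50 features, binary disease outcome.
X, y = make_classification(n_samples=300, n_features=50, n_informative=8,
                           random_state=0)

models = {
    "L1 logistic regression": LogisticRegression(penalty="l1",
                                                 solver="liblinear", C=1.0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
```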

  3. Accurate modeling of defects in graphene transport calculations

    Science.gov (United States)

    Linhart, Lukas; Burgdörfer, Joachim; Libisch, Florian

    2018-01-01

    We present an approach for embedding defect structures modeled by density functional theory into large-scale tight-binding simulations. We extract local tight-binding parameters for the vicinity of the defect site using Wannier functions. In the transition region between the bulk lattice and the defect the tight-binding parameters are continuously adjusted to approach the bulk limit far away from the defect. This embedding approach allows for an accurate high-level treatment of the defect orbitals using as many as ten nearest neighbors while keeping a small number of nearest neighbors in the bulk to render the overall computational cost reasonable. As an example of our approach, we consider an extended graphene lattice decorated with Stone-Wales defects, flower defects, double vacancies, or silicon substitutes. We predict distinct scattering patterns mirroring the defect symmetries and magnitude that should be experimentally accessible.

  4. Accurate bond dissociation energies (D0) for FHF- isotopologues

    Science.gov (United States)

    Stein, Christopher; Oswald, Rainer; Sebald, Peter; Botschwina, Peter; Stoll, Hermann; Peterson, Kirk A.

    2013-09-01

    Accurate bond dissociation energies (D0) are determined for three isotopologues of the bifluoride ion (FHF−). While the zero-point vibrational contributions are taken from our previous work (P. Sebald, A. Bargholz, R. Oswald, C. Stein, P. Botschwina, J. Phys. Chem. A, DOI: 10.1021/jp3123677), the equilibrium dissociation energy (De) of the reaction FHF− → HF + F− was obtained by a composite method including frozen-core (fc) CCSD(T) calculations with basis sets up to cardinal number n = 7 followed by extrapolation to the complete basis set limit. Smaller terms beyond fc-CCSD(T) cancel each other almost completely. The D0 values of FHF−, FDF−, and FTF− are predicted to be 15,176, 15,191, and 15,198 cm⁻¹, respectively, with an uncertainty of ca. 15 cm⁻¹.
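
    The basis-set extrapolation step mentioned above is commonly done with a two-point E(n) = E_CBS + A·n⁻³ form; the sketch below assumes that form and uses invented placeholder energies, not values from the paper:

```python
# Hedged sketch of a two-point complete-basis-set (CBS) extrapolation,
# assuming the common E(n) = E_CBS + A / n**3 model for correlation energies.

def cbs_two_point(e_a, e_b, a, b):
    """Eliminate A between E(a) and E(b) and solve for E_CBS (a < b)."""
    return (b**3 * e_b - a**3 * e_a) / (b**3 - a**3)

# Invented placeholder energies (hartree) constructed from E_CBS = -0.713,
# A = 0.2, at cardinal numbers n = 6 and n = 7.
e6 = -0.713 + 0.2 / 6**3
e7 = -0.713 + 0.2 / 7**3
print(cbs_two_point(e6, e7, 6, 7))  # recovers -0.713 to numerical precision
```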

  5. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    Science.gov (United States)

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.

  6. How accurately can 21cm tomography constrain cosmology?

    Science.gov (United States)

    Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver

    2008-07-01

    There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions, focusing on the 21 cm signal from 6 < z < 20; precision is sensitive to detector noise, to uncertainties in the reionization history, and to the level of contamination from astrophysical foregrounds. We derive simple analytic estimates for how various assumptions affect an experiment’s sensitivity, and we find that the modeling of reionization is the most important, followed by the array layout. We present an accurate yet robust method for measuring cosmological parameters that exploits the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. We find that for future experiments, marginalizing over these nuisance parameters may provide constraints almost as tight on the cosmology as if 21 cm tomography measured the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩk≈0.0002 and Δmν≈0.007 eV, and give a 4σ detection of the spectral index running predicted by the simplest inflation models.

  7. Apparatus for accurately measuring high temperatures

    Science.gov (United States)

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of airborne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  8. Accurate metacognition for visual sensory memory representations.

    Science.gov (United States)

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

  9. Enabling Routes as Context in Mobile Services

    DEFF Research Database (Denmark)

    Brilingaite, Agne; Jensen, Christian Søndergaard; Zokaite, Nora

    2004-01-01

    With the continuing advances in wireless communications, geo-positioning, and portable electronics, an infrastructure is emerging that enables the delivery of on-line, location-enabled services to very large numbers of mobile users. A typical usage situation for mobile services is one characterized...

  10. Establishing Accurate and Sustainable Geospatial Reference Layers in Developing Countries

    Science.gov (United States)

    Seaman, V. Y.

    2017-12-01

    Accurate geospatial reference layers (settlement names & locations, administrative boundaries, and population) are not readily available for most developing countries. This critical information gap makes it challenging for governments to efficiently plan, allocate resources, and provide basic services. It also hampers international agencies' response to natural disasters, humanitarian crises, and other emergencies. The current work involves a recent successful effort, led by the Bill & Melinda Gates Foundation and the Government of Nigeria, to obtain such data. The data collection began in 2013, with local teams collecting names, coordinates, and administrative attributes for over 100,000 settlements using ODK-enabled smartphones. A settlement feature layer extracted from satellite imagery was used to ensure all settlements were included. Administrative boundaries (Ward, LGA) were created using the settlement attributes. These "new" boundary layers were much more accurate than existing shapefiles used by the government and international organizations. The resulting data sets helped Nigeria eradicate polio from all areas except in the extreme northeast, where security issues limited access and vaccination activities. In addition to the settlement and boundary layers, a GIS-based population model was developed, in partnership with Oak Ridge National Laboratories and Flowminder), that used the extracted settlement areas and characteristics, along with targeted microcensus data. This model provides population and demographics estimates independent of census or other administrative data, at a resolution of 90 meters. These robust geospatial data layers found many other uses, including establishing catchment area settlements and populations for health facilities, validating denominators for population-based surveys, and applications across a variety of government sectors. 
Based on the success of the Nigeria effort, a partnership between DfID and the Bill & Melinda Gates

  11. Accurate, fully-automated NMR spectral profiling for metabolomics.

    Directory of Open Access Journals (Sweden)

    Siamak Ravanbakhsh

    Full Text Available Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile"-i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluids Nuclear Magnetic Resonance (NMR spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid, BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures including real biological samples (serum and CSF, defined mixtures and realistic computer generated spectra; involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~ 90% correct identification and ~ 10% quantification error, in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively-with an accuracy on these biofluids that meets or exceeds the performance of trained experts. 
We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of

  12. A New Multiscale Technique for Time-Accurate Geophysics Simulations

    Science.gov (United States)

    Omelchenko, Y. A.; Karimabadi, H.

    2006-12-01

    Large-scale geophysics systems are frequently described by multiscale reactive flow models (e.g., wildfire and climate models, multiphase flows in porous rocks, etc.). Accurate and robust simulations of such systems by traditional time-stepping techniques face a formidable computational challenge. Explicit time integration suffers from global (CFL and accuracy) timestep restrictions due to inhomogeneous convective and diffusion processes, as well as closely coupled physical and chemical reactions. Application of adaptive mesh refinement (AMR) to such systems may not be always sufficient since its success critically depends on a careful choice of domain refinement strategy. On the other hand, implicit and timestep-splitting integrations may result in a considerable loss of accuracy when fast transients in the solution become important. To address this issue, we developed an alternative explicit approach to time-accurate integration of such systems: Discrete-Event Simulation (DES). DES enables asynchronous computation by automatically adjusting the CPU resources in accordance with local timescales. This is done by encapsulating flux-conservative updates of numerical variables in the form of events, whose execution and synchronization is explicitly controlled by imposing accuracy and causality constraints. As a result, at each time step DES self-adaptively updates only a fraction of the global system state, which eliminates unnecessary computation of inactive elements. DES can be naturally combined with various mesh generation techniques. The event-driven paradigm results in robust and fast simulation codes, which can be efficiently parallelized via a new preemptive event processing (PEP) technique. We discuss applications of this novel technology to time-dependent diffusion-advection-reaction and CFD models representative of various geophysics applications.
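
    The event-driven idea can be sketched with a priority queue: each cell schedules its own next update at its local timescale, so fast cells are updated often without forcing a global small timestep on slow ones. A toy version (the real DES/PEP machinery additionally enforces accuracy and causality constraints and runs in parallel):

```python
# Toy discrete-event loop: a heap holds (next_update_time, cell) events and
# always executes the earliest one, so each cell advances asynchronously.
import heapq

def run_des(local_dt, t_end):
    """Step cells with per-cell timesteps until t_end; return update counts."""
    counts = {cell: 0 for cell in local_dt}
    events = [(dt, cell) for cell, dt in local_dt.items()]
    heapq.heapify(events)
    while events:
        t, cell = heapq.heappop(events)
        if t > t_end:
            break
        counts[cell] += 1                     # the cell's state update goes here
        heapq.heappush(events, (t + local_dt[cell], cell))
    return counts

# Cell "a" evolves 10x faster than cell "b", so it receives 10x the updates.
print(run_des({"a": 1, "b": 10}, t_end=100))  # → {'a': 100, 'b': 10}
```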

  13. MRI can accurately detect meniscal ramp lesions of the knee.

    Science.gov (United States)

    Arner, Justin W; Herbst, Elmar; Burnham, Jeremy M; Soni, Ashish; Naendrup, Jan-Hendrik; Popchak, Adam; Fu, Freddie H; Musahl, Volker

    2017-12-01

    Posterior horn meniscal tears are commonly found in conjunction with anterior cruciate ligament (ACL) injury. Some believe tears in the posterior meniscocapsular zone, coined ramp lesions, are important to knee stability. The purpose of this study was to determine whether pre-operative MRI evaluation was able to accurately and reproducibly identify ramp lesions. Three blinded reviewers assessed MRIs twice for the presence of ramp lesions in patients undergoing ACL reconstruction. Sensitivity, specificity, negative predictive value, and positive predictive value for MRI were calculated based on arthroscopic diagnosis of a ramp lesion. Intra-class correlation coefficient was calculated to assess intra- and interobserver reliability of the MRI assessment between the three examiners. Significance was set at p < 0.05. A subset of patients had arthroscopically confirmed ramp lesions, while the other 77 had other meniscal pathology. Sensitivity of detecting a ramp lesion on MRI ranged from 53.9 to 84.6%, while specificity was 92.3-98.7%. Negative predictive value was 91.1-97.4%, while positive predictive value was 50.0-90.0%. Inter-rater reliability between three reviewers was moderate at 0.56. The observers had excellent intra-rater reliability ranging from 0.75 to 0.81. This study demonstrates high sensitivity and excellent specificity in detecting meniscal ramp lesions on MRI. Ramp lesions are likely more common and may have greater clinical implications than previously appreciated; the outcomes of untreated lesions must be investigated. Pre-operative identification of ramp lesions may aid clinicians in surgical planning and patient education to improve outcomes by addressing pathology which may have otherwise been missed. Level of evidence: III.
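
    The sensitivity, specificity, PPV and NPV ranges quoted above follow from standard confusion-matrix arithmetic; the sketch below uses an invented reader-versus-arthroscopy tally, not the study's data:

```python
# Illustrative diagnostic-accuracy calculation from a 2x2 confusion matrix.
# The counts below are hypothetical, chosen only to show the arithmetic.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical tally: one reader detects 11 of 13 lesions with 2 false alarms.
m = diagnostic_metrics(tp=11, fp=2, fn=2, tn=75)
print({k: round(v, 3) for k, v in m.items()})
```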

  14. Toward integration of genomic selection with crop modelling: the development of an integrated approach to predicting rice heading dates.

    Science.gov (United States)

    Onogi, Akio; Watanabe, Maya; Mochizuki, Toshihiro; Hayashi, Takeshi; Nakagawa, Hiroshi; Hasegawa, Toshihiro; Iwata, Hiroyoshi

    2016-04-01

    It is suggested that accuracy in predicting plant phenotypes can be improved by integrating genomic prediction with crop modelling in a single hierarchical model. Accurate prediction of phenotypes is important for plant breeding and management. Although genomic prediction/selection aims to predict phenotypes on the basis of whole-genome marker information, it is often difficult to predict phenotypes of complex traits in diverse environments, because plant phenotypes are often influenced by genotype-environment interaction. A possible remedy is to integrate genomic prediction with crop/ecophysiological modelling, which enables us to predict plant phenotypes using environmental and management information. To this end, in the present study, we developed a novel method for integrating genomic prediction with phenological modelling of Asian rice (Oryza sativa, L.), allowing the heading date of untested genotypes in untested environments to be predicted. The method simultaneously infers the phenological model parameters and whole-genome marker effects on the parameters in a Bayesian framework. By cultivating backcross inbred lines of Koshihikari × Kasalath in nine environments, we evaluated the potential of the proposed method in comparison with conventional genomic prediction, phenological modelling, and two-step methods that applied genomic prediction to phenological model parameters inferred from Nelder-Mead or Markov chain Monte Carlo algorithms. In predicting heading dates of untested lines in untested environments, the proposed and two-step methods tended to provide more accurate predictions than the conventional genomic prediction methods, particularly in environments where phenotypes from environments similar to the target environment were unavailable for training genomic prediction. The proposed method showed greater accuracy in prediction than the two-step methods in all cross-validation schemes tested, suggesting the potential of the integrated approach in
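
The integrated idea (whole-genome markers determine a crop-model parameter, which in turn determines phenology in a given environment) can be caricatured in a few lines. Everything below is illustrative: the marker counts, the thermal-time model, and a ridge fit standing in for the paper's joint Bayesian inference, which estimates the parameters and marker effects from observed heading dates rather than from the latent parameter directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def heading_day(temps, thermal_req, base=8.0):
    """Day on which accumulated thermal time above `base` reaches the requirement."""
    gdd = np.cumsum(np.maximum(temps - base, 0.0))
    return int(np.searchsorted(gdd, thermal_req)) + 1

# Hypothetical marker matrix and marker effects on the phenological parameter
n_lines, n_markers = 50, 100
X = rng.integers(0, 2, size=(n_lines, n_markers)).astype(float)
beta_true = rng.normal(0.0, 2.0, n_markers)
thermal_req = 900.0 + X @ beta_true            # line-specific thermal-time requirement

# One environment: a season of daily mean temperatures
temps = 20.0 + 5.0 * np.sin(np.linspace(0.0, np.pi, 150))
y = np.array([heading_day(temps, r) for r in thermal_req])  # simulated heading dates

# Ridge regression of marker effects on the parameter (a stand-in for the
# hierarchical Bayesian machinery of the paper)
beta_hat = np.linalg.solve(X.T @ X + 10.0 * np.eye(n_markers),
                           X.T @ (thermal_req - 900.0))
y_hat = np.array([heading_day(temps, 900.0 + r) for r in X @ beta_hat])
print("mean absolute error (days):", np.abs(y - y_hat).mean())
```

Because the markers act on a model parameter rather than on the phenotype directly, the same fitted effects can be reused to predict heading dates under a different temperature series, which is the point of the integration.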

  15. Programming Useful Life Prediction (PULP) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Accurately predicting Remaining Useful Life (RUL) provides significant benefits—it increases safety and reduces financial and labor resource requirements....

  16. THE HYPERFINE STRUCTURE OF THE ROTATIONAL SPECTRUM OF HDO AND ITS EXTENSION TO THE THz REGION: ACCURATE REST FREQUENCIES AND SPECTROSCOPIC PARAMETERS FOR ASTROPHYSICAL OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Cazzoli, Gabriele; Lattanzi, Valerio; Puzzarini, Cristina [Dipartimento di Chimica “Giacomo Ciamician”, Università di Bologna, Via Selmi 2, I-40126 Bologna (Italy); Alonso, José Luis [Grupo de Espectroscopía Molecular (GEM), Unidad Asociada CSIC, Edificio Quifima, Laboratorios de Espectroscopia y Bioespectroscopia, Parque Científico UVa, Universidad de Valladolid, E-47005 Valladolid (Spain); Gauss, Jürgen, E-mail: cristina.puzzarini@unibo.it [Institut für Physikalische Chemie, Universität Mainz, D-55099 Mainz (Germany)

    2015-06-10

    The rotational spectrum of the mono-deuterated isotopologue of water, HD{sup 16}O, has been investigated in the millimeter- and submillimeter-wave frequency regions, up to 1.6 THz. The Lamb-dip technique has been exploited to obtain sub-Doppler resolution and to resolve the hyperfine (hf) structure due to the deuterium and hydrogen nuclei, thus enabling the accurate determination of the corresponding hf parameters. Their experimental determination has been supported by high-level quantum-chemical calculations. The Lamb-dip measurements have been supplemented by Doppler-limited measurements (weak high-J and high-frequency transitions) in order to extend the predictive capability of the available spectroscopic constants. The possibility of resolving hf splittings in astronomical spectra has been discussed.

  17. MyoScreen, a High-Throughput Phenotypic Screening Platform Enabling Muscle Drug Discovery.

    Science.gov (United States)

    Young, Joanne; Margaron, Yoran; Fernandes, Mathieu; Duchemin-Pelletier, Eve; Michaud, Joris; Flaender, Mélanie; Lorintiu, Oana; Degot, Sébastien; Poydenot, Pauline

    2018-03-01

    Despite the need for more effective drug treatments to address muscle atrophy and disease, physiologically accurate in vitro screening models and higher information content preclinical assays that aid in the discovery and development of novel therapies are lacking. To this end, MyoScreen was developed: a robust and versatile high-throughput high-content screening (HT/HCS) platform that integrates a physiologically and pharmacologically relevant micropatterned human primary skeletal muscle model with a panel of pertinent phenotypic and functional assays. MyoScreen myotubes form aligned, striated myofibers, and they show nerve-independent accumulation of acetylcholine receptors (AChRs), excitation-contraction coupling (ECC) properties characteristic of adult skeletal muscle and contraction in response to chemical stimulation. Reproducibility and sensitivity of the fully automated MyoScreen platform are highlighted in assays that quantitatively measure myogenesis, hypertrophy and atrophy, AChR clusterization, and intracellular calcium release dynamics, as well as integrating contractility data. A primary screen of 2560 compounds to identify stimulators of myofiber regeneration and repair, followed by further biological characterization of two hits, validates MyoScreen for the discovery and testing of novel therapeutics. MyoScreen is an improvement of current in vitro muscle models, enabling a more predictive screening strategy for preclinical selection of the most efficacious new chemical entities earlier in the discovery pipeline process.

  18. Electronic Health Record Application Support Service Enablers.

    Science.gov (United States)

    Neofytou, M S; Neokleous, K; Aristodemou, A; Constantinou, I; Antoniou, Z; Schiza, E C; Pattichis, C S; Schizas, C N

    2015-08-01

    There is a huge need for open source software solutions in the healthcare domain, given the flexibility, interoperability and resource savings characteristics they offer. In this context, this paper presents the development of three open source libraries - Specific Enablers (SEs) for eHealth applications that were developed under the European project titled "Future Internet Social and Technological Alignment Research" (FI-STAR) funded under the "Future Internet Public Private Partnership" (FI-PPP) program. The three SEs developed under the Electronic Health Record Application Support Service Enablers (EHR-EN) correspond to: a) an Electronic Health Record enabler (EHR SE), b) a patient summary enabler based on the EU project "European patient Summary Open Source services" (epSOS SE) supporting patient mobility and the offering of interoperable services, and c) a Picture Archiving and Communications System (PACS) enabler (PACS SE) based on the dcm4che open source system for the support of medical imaging functionality. The EHR SE follows the HL7 Clinical Document Architecture (CDA) V2.0 and supports the Integrating the Healthcare Enterprise (IHE) profiles (recently awarded in Connectathon 2015). These three FI-STAR platform enablers are designed to facilitate the deployment of innovative applications and value added services in the health care sector. They can be downloaded from the FI-STAR catalogue website. Work in progress focuses on validation and evaluation scenarios to prove and demonstrate the usability, applicability and adaptability of the proposed enablers.

  19. A computational methodology for formulating gasoline surrogate fuels with accurate physical and chemical kinetic properties

    KAUST Repository

    Ahmed, Ahfaz

    2015-03-01

    Gasoline is the most widely used fuel for light duty automobile transportation, but its molecular complexity makes it intractable to study the fundamental combustion properties experimentally and computationally. Therefore, surrogate fuels with a simpler molecular composition that represent real fuel behavior in one or more aspects are needed to enable repeatable experimental and computational combustion investigations. This study presents a novel computational methodology for formulating surrogates for FACE (fuels for advanced combustion engines) gasolines A and C by combining regression modeling with physical and chemical kinetics simulations. The computational methodology integrates simulation tools executed across different software platforms. Initially, the palette of surrogate species and carbon types for the target fuels were determined from a detailed hydrocarbon analysis (DHA). A regression algorithm implemented in MATLAB was linked to REFPROP for simulation of distillation curves and calculation of physical properties of surrogate compositions. The MATLAB code generates a surrogate composition at each iteration, which is then used to automatically generate CHEMKIN input files that are submitted to homogeneous batch reactor simulations for prediction of research octane number (RON). The regression algorithm determines the optimal surrogate composition to match the fuel properties of FACE A and C gasoline, specifically hydrogen/carbon (H/C) ratio, density, distillation characteristics, carbon types, and RON. The optimal surrogate fuel compositions obtained using the present computational approach were compared to the real fuel properties, as well as with surrogate compositions available in the literature. Experiments were conducted within a Cooperative Fuels Research (CFR) engine operating under controlled autoignition (CAI) mode to compare the formulated surrogates against the real fuels. Carbon monoxide measurements indicated that the proposed surrogates
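
At the core of such a methodology is a regression step that picks surrogate mole fractions to match target fuel properties. A heavily simplified sketch, assuming linear property blending and hypothetical pure-component values (the paper instead couples REFPROP and CHEMKIN simulations in the loop):

```python
import numpy as np

# Hypothetical pure-component properties (rows: H/C ratio, density kg/L, RON;
# columns: n-heptane, iso-octane, toluene). Values are approximate and the
# linear-blending assumption is a simplification of the paper's simulations.
P = np.array([
    [2.29,  2.25,  1.14],
    [0.684, 0.692, 0.867],
    [0.0,   100.0, 120.0],
])
target = np.array([2.10, 0.72, 91.0])

# Weighted least squares with a heavily weighted sum-to-one row on the fractions
w = np.array([10.0, 10.0, 0.1])               # bring the properties to a common scale
A = np.vstack([P * w[:, None], 1e3 * np.ones((1, 3))])
b = np.concatenate([target * w, [1e3]])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("mole fractions:", np.round(x, 3))
print("achieved properties:", np.round(P @ x, 3))
```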

  20. Time-Accurate Computational Fluid Dynamics Simulation of a Pair of Moving Solid Rocket Boosters

    Science.gov (United States)

    Strutzenberg, Louise L.; Williams, Brandon R.

    2011-01-01

    Since the Columbia accident, the threat to the Shuttle launch vehicle from debris during the liftoff timeframe has been assessed by the Liftoff Debris Team at NASA/MSFC. In addition to engineering methods of analysis, CFD-generated flow fields during the liftoff timeframe have been used in conjunction with 3-DOF debris transport methods to predict the motion of liftoff debris. Early models made use of a quasi-steady flow field approximation with the vehicle positioned at a fixed location relative to the ground; however, a moving overset mesh capability has recently been developed for the Loci/CHEM CFD software which enables higher-fidelity simulation of the Shuttle transient plume startup and liftoff environment. The present work details the simulation of the launch pad and mobile launch platform (MLP) with truncated solid rocket boosters (SRBs) moving in a prescribed liftoff trajectory derived from Shuttle flight measurements. Using Loci/CHEM, time-accurate RANS and hybrid RANS/LES simulations were performed for the timeframe T0+0 to T0+3.5 seconds, which consists of SRB startup to a vehicle altitude of approximately 90 feet above the MLP. Analysis of the transient flowfield focuses on the evolution of the SRB plumes in the MLP plume holes and the flame trench, impingement on the flame deflector, and especially impingement on the MLP deck resulting in upward flow which is a transport mechanism for debris. The results show excellent qualitative agreement with the visual record from past Shuttle flights, and comparisons to pressure measurements in the flame trench and on the MLP provide confidence in these simulation capabilities.

  1. Sparsity enabled cluster reduced-order models for control

    Science.gov (United States)

    Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.

    2018-01-01

    Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.
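
The two core steps of CROM, clustering the snapshots and then estimating a cluster transition probability matrix (the data-driven discretization of the Perron-Frobenius operator mentioned above), fit in a short numpy sketch on toy data; the paper's sparsity-enabled sensing layer is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy trajectory: a noisy limit cycle in 2-D stands in for high-dimensional snapshots
t = np.linspace(0.0, 20.0 * np.pi, 4000)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(4000, 2))

# Step 1: cluster the snapshots (plain Lloyd's k-means)
k = 8
C = X[rng.choice(len(X), k, replace=False)]
for _ in range(50):
    labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
    C = np.array([X[labels == j].mean(0) if np.any(labels == j) else C[j]
                  for j in range(k)])

# Step 2: estimate the cluster transition probability matrix (row-stochastic)
T = np.zeros((k, k))
for a, b in zip(labels[:-1], labels[1:]):
    T[a, b] += 1.0
counts = T.sum(axis=1, keepdims=True)
T = np.divide(T, counts, out=np.zeros_like(T), where=counts > 0)
print("row sums:", np.round(T.sum(axis=1), 6))
```

Powers of T then propagate a probability distribution over clusters forward in time, which is the probabilistic prediction the abstract refers to.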

  2. Accurate determination of the Boltzmann constant by Doppler spectroscopy: Towards a new definition of the kelvin

    OpenAIRE

    Darquié Benoît; Mejri Sinda; Sow Papa Lat Tabara; Lemarchand Cyril; Triki Meriam; Tokunaga Sean K.; Bordé Christian J.; Chardonnet Christian; Daussy Christophe

    2013-01-01

    Proceedings of the ICAP 2012 conference (23rd International Conference on Atomic Physics). Accurate molecular spectroscopy in the mid-infrared region allows precision measurements of fundamental constants. For instance, measuring the linewidth of an isolated Doppler-broadened absorption line of ammonia around 10 µm enables a determination of the Boltzmann constant kB. We report on our latest measurements. By fitting this lineshape to several models which include Dicke...
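
The underlying relation is the Doppler FWHM of a thermal line, delta_nu = nu0 * sqrt(8 kB T ln2 / (m c^2)), which can be inverted for kB once the width, temperature, and molecular mass are known. A round-trip sketch with illustrative numbers (not the experiment's data):

```python
import math

c = 299_792_458.0          # speed of light, m/s
ln2 = math.log(2.0)

def kB_from_doppler_width(delta_nu, nu0, T, m):
    """Invert the Doppler FWHM relation delta_nu = nu0*sqrt(8*kB*T*ln2/(m*c**2))."""
    return (delta_nu / nu0) ** 2 * m * c ** 2 / (8.0 * T * ln2)

# Round-trip consistency check against the accepted value (numbers illustrative)
kB = 1.380649e-23                        # J/K (exact in the 2019 SI)
m_NH3 = 17.03 * 1.66053906660e-27        # kg
nu0 = 2.8e13                             # Hz, a ~10 um ammonia line (illustrative)
T = 296.0                                # K
delta_nu = nu0 * math.sqrt(8.0 * kB * T * ln2 / (m_NH3 * c ** 2))
print(kB_from_doppler_width(delta_nu, nu0, T, m_NH3))  # recovers kB
```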

  3. Fast and accurate read alignment for resequencing.

    Science.gov (United States)

    Mu, John C; Jiang, Hui; Kiani, Amirhossein; Mohiyuddin, Marghoob; Bani Asadi, Narges; Wong, Wing H

    2012-09-15

    Next-generation sequence analysis has become an important task both in laboratory and clinical settings. A key stage in the majority of sequence analysis workflows, such as resequencing, is the alignment of genomic reads to a reference genome. The accurate alignment of reads with large indels is a computationally challenging task for researchers. We introduce SeqAlto as a new algorithm for read alignment. For reads longer than or equal to 100 bp, SeqAlto is up to 10 × faster than existing algorithms, while retaining high accuracy and the ability to align reads with large (up to 50 bp) indels. This improvement in efficiency is particularly important in the analysis of future sequencing data where the number of reads approaches many billions. Furthermore, SeqAlto uses less than 8 GB of memory to align against the human genome. SeqAlto is benchmarked against several existing tools with both real and simulated data. Linux and Mac OS X binaries, free for academic use, are available at http://www.stanford.edu/group/wonglab/seqalto (contact: whwong@stanford.edu).
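
For readers unfamiliar with the underlying problem, a toy global aligner shows how gap penalties let a read containing an indel still align against a reference. This is textbook Needleman-Wunsch dynamic programming, not SeqAlto's (much faster, index-based) algorithm:

```python
def align_score(a, b, match=2, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score with linear gap penalties (toy)."""
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap
    for j in range(1, m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            S[i][j] = max(S[i - 1][j - 1] + sub,   # substitution / match
                          S[i - 1][j] + gap,       # deletion in b
                          S[i][j - 1] + gap)       # insertion in b
    return S[n][m]

# A read aligned against a reference carrying a 2-base insertion:
print(align_score("ACGTACGT", "ACGTTTACGT"))  # 8 matches, one 2-base gap -> 12
```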

  4. Accurate equilibrium structures for piperidine and cyclohexane.

    Science.gov (United States)

    Demaison, Jean; Craig, Norman C; Groner, Peter; Écija, Patricia; Cocinero, Emilio J; Lesarri, Alberto; Rudolph, Heinz Dieter

    2015-03-05

    Extended and improved microwave (MW) measurements are reported for the isotopologues of piperidine. New ground state (GS) rotational constants are fitted to MW transitions with quartic centrifugal distortion constants taken from ab initio calculations. Predicate values for the geometric parameters of piperidine and cyclohexane are found from a high level of ab initio theory including adjustments for basis set dependence and for correlation of the core electrons. Equilibrium rotational constants are obtained from GS rotational constants corrected for vibration-rotation interactions and electronic contributions. Equilibrium structures for piperidine and cyclohexane are fitted by the mixed estimation method. In this method, structural parameters are fitted concurrently to predicate parameters (with appropriate uncertainties) and moments of inertia (with uncertainties). The new structures are regarded as being accurate to 0.001 Å and 0.2°. Comparisons are made between bond parameters in equatorial piperidine and cyclohexane. Another interesting result of this study is that a structure determination is an effective way to check the accuracy of the ground state experimental rotational constants.
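
The mixed estimation method described here amounts to a weighted least-squares fit in which the predicate (prior) values enter as extra observations with their own uncertainties, stacked alongside the experimental equations. A two-parameter toy with illustrative numbers, not the molecular data:

```python
import numpy as np

# One precise observation of a combination of two parameters...
A_obs = np.array([[1.0, 1.0]])         # observation: p1 + p2
y_obs = np.array([3.02])
sig_obs = np.array([0.01])

# ...stabilized by predicate values with their own (larger) uncertainties
p_pred = np.array([1.00, 2.00])
sig_pred = np.array([0.05, 0.05])

# Stack both sets of equations, weighting each row by 1/sigma, and solve
A = np.vstack([A_obs / sig_obs[:, None], np.eye(2) / sig_pred[:, None]])
y = np.concatenate([y_obs / sig_obs, p_pred / sig_pred])
p_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(p_hat, 4))
```

The precise observation pins down the sum of the parameters while the predicates split the remaining freedom, which is exactly how ab initio predicates and moments of inertia cooperate in the structure fit.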

  5. Machine learning of accurate energy-conserving molecular force fields

    Science.gov (United States)

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E.; Poltavsky, Igor; Schütt, Kristof T.; Müller, Klaus-Robert

    2017-01-01

    Using conservation of energy—a fundamental property of closed classical and quantum mechanical systems—we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol−1 for energies and 1 kcal mol−1 Å−1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods. PMID:28508076
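
The gradient-domain idea can be shown in one dimension: model the energy as a kernel expansion and train on forces only, so the predicted force is exactly minus the gradient of a scalar energy and is therefore conservative by construction. This is a toy sketch of the idea with an RBF kernel and harmonic data, not the GDML code or its vector-valued kernel.

```python
import numpy as np

sigma, lam = 0.5, 1e-6

def k_x(x, xp):      # dk/dx for an RBF kernel k(x, x') = exp(-(x-x')^2 / 2 sigma^2)
    return -(x - xp) / sigma**2 * np.exp(-((x - xp) ** 2) / (2 * sigma**2))

def k_xx(x, xp):     # d2k/dx dx', the force-force kernel
    return (1 / sigma**2 - (x - xp) ** 2 / sigma**4) * np.exp(-((x - xp) ** 2) / (2 * sigma**2))

x_train = np.linspace(-1.0, 1.0, 15)
f_train = -2.0 * x_train                     # forces of the toy energy E(x) = x**2

# Fit coefficients so the model reproduces the training forces
K = k_xx(x_train[:, None], x_train[None, :])
a = np.linalg.solve(K + lam * np.eye(len(x_train)), f_train)

def force(x):
    return float(np.sum(a * k_xx(x, x_train)))

def energy(x):                                # recovered up to an additive constant
    return float(np.sum(a * k_x(x, x_train)))

print(force(0.3))   # close to the true force -0.6
```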

  6. Machine learning of accurate energy-conserving molecular force fields.

    Science.gov (United States)

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E; Poltavsky, Igor; Schütt, Kristof T; Müller, Klaus-Robert

    2017-05-01

    Using conservation of energy-a fundamental property of closed classical and quantum mechanical systems-we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol -1 for energies and 1 kcal mol -1 Å -1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods.

  7. Accurate and self-consistent procedure for determining pH in seawater desalination brines and its manifestation in reverse osmosis modeling.

    Science.gov (United States)

    Nir, Oded; Marvin, Esra; Lahav, Ori

    2014-11-01

    Measuring and modeling pH in concentrated aqueous solutions in an accurate and consistent manner is of paramount importance to many R&D and industrial applications, including RO desalination. Nevertheless, unified definitions and standard procedures have yet to be developed for solutions with ionic strength higher than ∼0.7 M, while implementation of conventional pH determination approaches may lead to significant errors. In this work a systematic yet simple methodology for measuring pH in concentrated solutions (dominated by Na(+)/Cl(-)) was developed and evaluated, with the aim of achieving consistency with the Pitzer ion-interaction approach. Results indicate that the addition of 0.75 M of NaCl to NIST buffers, followed by assigning a new standard pH (calculated based on the Pitzer approach), enabled reducing measured errors to below 0.03 pH units in seawater RO brines (ionic strength up to 2 M). To facilitate its use, the method was developed to be both conceptually and practically analogous to the conventional pH measurement procedure. The method was used to measure the pH of seawater RO retentates obtained at varying recovery ratios. The results better matched the pH values predicted by an accurate RO transport model. Calibrating the model by the measured pH values enabled better boron transport prediction. A Donnan-induced phenomenon, affecting pH in both retentate and permeate streams, was identified and quantified. Copyright © 2014 Elsevier Ltd. All rights reserved.
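
On the instrument side, the procedure described (reassign standard pH values to salt-amended buffers, then calibrate as usual) reduces to an ordinary two-point calibration. The sketch below uses hypothetical emf readings and reassigned standard values; in practice the reassigned values would come from a Pitzer calculation, which is not reproduced here.

```python
import numpy as np

# Hypothetical two-point electrode calibration against salt-amended buffers
# whose standard pH values have been reassigned for high ionic strength
buffers_emf = np.array([177.0, -59.0])      # measured emf, mV (hypothetical)
buffers_pH = np.array([4.02, 8.02])         # reassigned standard pH (hypothetical)

slope, intercept = np.polyfit(buffers_emf, buffers_pH, 1)

def pH_from_emf(emf_mV):
    """Linear electrode response: pH from a measured emf."""
    return slope * emf_mV + intercept

print(round(pH_from_emf(59.0), 3))  # a sample reading mapped onto the new scale
```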

  8. Blood urea nitrogen to serum creatinine ratio is an accurate predictor of outcome in diarrhea-associated hemolytic uremic syndrome, a preliminary study.

    Science.gov (United States)

    Keenswijk, Werner; Vanmassenhove, Jill; Raes, Ann; Dhont, Evelyn; Vande Walle, Johan

    2017-03-01

    Diarrhea-associated hemolytic uremic syndrome (D+HUS) is a common thrombotic microangiopathy during childhood and early identification of parameters predicting poor outcome could enable timely intervention. This study aims to establish the accuracy of BUN-to-serum creatinine ratio at admission, in addition to other parameters, in predicting the clinical course and outcome. Records were searched for children between 1 January 2008 and 1 January 2015 admitted with D+HUS. A complicated course was defined as developing one or more of the following: neurological dysfunction, pancreatitis, cardiac or pulmonary involvement, hemodynamic instability, and hematologic complications, while poor outcome was defined by death or development of chronic kidney disease. Thirty-four children were included, of whom 11 had a complicated disease course/poor outcome. Risk of a complicated course/poor outcome was strongly associated with oliguria (p = 0.000006) and hypertension (p = 0.00003) at presentation. In addition, higher serum creatinine (p = 0.000006) and sLDH (p = 0.02) with lower BUN-to-serum creatinine ratio (p = 0.000007) were significantly associated with development of complications. A BUN-to-serum creatinine ratio ≤40 at admission was a sensitive and highly specific predictor of a complicated disease course/poor outcome. A BUN-to-serum creatinine ratio can accurately identify children with D+HUS at risk for a complicated course and poor outcome. What is Known: • Oliguria is a predictor of poor long-term outcome in D+HUS What is New: • BUN-to-serum creatinine ratio at admission is an entirely novel and accurate predictor of poor outcome and complicated clinical outcome in D+HUS • Early detection of the high-risk group in D+HUS enables early treatment and adequate monitoring.
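
The proposed screen is a simple ratio-and-cutoff rule; a minimal sketch, using the abstract's cutoff of 40 (illustrative code only, not clinical guidance, and the input values below are hypothetical):

```python
def bun_to_creatinine_flag(bun_mg_dl, creatinine_mg_dl, cutoff=40.0):
    """Compute the BUN-to-serum-creatinine ratio and flag values at or below
    the cutoff as high risk, per the abstract's preliminary finding."""
    ratio = bun_mg_dl / creatinine_mg_dl
    return ratio, ratio <= cutoff

print(bun_to_creatinine_flag(45.0, 1.5))   # ratio 30.0 -> flagged
print(bun_to_creatinine_flag(45.0, 0.5))   # ratio 90.0 -> not flagged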

  9. Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output

    Energy Technology Data Exchange (ETDEWEB)

    Melin, Alexander M. [ORNL; Olama, Mohammed M. [ORNL; Dong, Jin [ORNL; Djouadi, Seddik M. [ORNL; Zhang, Yichen [University of Tennessee, Knoxville (UTK), Department of Electrical Engineering and Computer Science

    2017-09-01

    The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite dimensional filter which only uses the first and second order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions of the signal’s model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
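
The predict/update structure described above can be illustrated with a scalar Kalman filter on a toy state-space model; irradiance is taken as a random walk observed in noise, and the noise variances are assumed known (the paper instead estimates them with a filter-based EM scheme).

```python
import numpy as np

rng = np.random.default_rng(2)

q, r = 4.0, 25.0                   # process / measurement noise variances (assumed)
x_hat, P = 500.0, 100.0            # initial state estimate and variance

true_x, preds, meas = 500.0, [], []
for _ in range(200):
    true_x += rng.normal(0.0, np.sqrt(q))         # latent irradiance evolves
    z = true_x + rng.normal(0.0, np.sqrt(r))      # noisy sensor reading
    # predict (random walk: state mean unchanged, variance grows)
    P += q
    preds.append(x_hat)            # one-step-ahead prediction of the measurement
    # update with the new measurement
    K = P / (P + r)
    x_hat += K * (z - x_hat)
    P *= (1 - K)
    meas.append(z)

rmse = float(np.sqrt(np.mean((np.array(preds) - np.array(meas)) ** 2)))
print("one-step prediction RMSE:", round(rmse, 2))
```

The recursion uses only the previous estimate and the new measurement, which is what makes online, high-resolution prediction feasible.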

  10. Highly Accurate Calculations of the Phase Diagram of Cold Lithium

    Science.gov (United States)

    Shulenburger, Luke; Baczewski, Andrew

    The phase diagram of lithium is particularly complicated, exhibiting many different solid phases under the modest application of pressure. Experimental efforts to identify these phases using diamond anvil cells have been complemented by ab initio theory, primarily using density functional theory (DFT). Due to the multiplicity of crystal structures whose enthalpy is nearly degenerate and the uncertainty introduced by density functional approximations, we apply the highly accurate many-body diffusion Monte Carlo (DMC) method to the study of the solid phases at low temperature. These calculations span many different phases, including several with low symmetry, demonstrating the viability of DMC as a method for calculating phase diagrams for complex solids. Our results can be used as a benchmark to test the accuracy of various density functionals. This can strengthen confidence in DFT based predictions of more complex phenomena such as the anomalous melting behavior predicted for lithium at high pressures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  11. An accurate method for measuring triploidy of larval fish spawns

    Science.gov (United States)

    Jenkins, Jill A.; Draugelis-Dale, Rassa O.; Glennon, Robert; Kelly, Anita; Brown, Bonnie L.; Morrison, John

    2017-01-01

    A standard flow cytometric protocol was developed for estimating triploid induction in batches of larval fish. Polyploid induction treatments are not guaranteed to be 100% efficient, thus the ability to quantify the proportion of triploid larvae generated by a particular treatment helps managers to stock high-percentage spawns and researchers to select treatments for efficient triploid induction. At 3 d posthatch, individual Grass Carp Ctenopharyngodon idella were mechanically dissociated into single-cell suspensions; nuclear DNA was stained with propidium iodide then analyzed by flow cytometry. Following ploidy identification of individuals, aliquots of diploid and triploid cell suspensions were mixed to generate 15 levels (0–100%) of known triploidy (n = 10). Using either 20 or 50 larvae per level, the observed triploid percentages were lower than the known, actual values. Using nonlinear regression analyses, quadratic equations relating observed to known triploid proportions in mixed samples were derived, and the corresponding estimation reference plots allowed triploidy to be predicted. Thus, an accurate prediction of the proportion of triploids in a spawn can be made by following a standard larval processing and analysis protocol with either 20 or 50 larvae from a single spawn, coupled with applying the quadratic equations or reference plots to observed flow cytometry results. Due to the universality of triploid DNA content being 1.5 times the diploid level and because triploid fish consist of fewer cells than diploids, this method should be applicable to other produced triploid fish species, and it may be adapted for use with bivalves or other species where batch analysis is appropriate.
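
The correction step amounts to fitting a quadratic calibration curve of observed versus known triploid percentage and then inverting it for new samples. A sketch with a synthetic bias curve (the coefficients are illustrative, not the study's):

```python
import numpy as np

# Hypothetical calibration: flow cytometry reads triploid fractions low relative
# to the known mixture level (synthetic bias; not the study's coefficients)
known = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
observed = known - 0.0015 * known * (100.0 - known)

c2, c1, c0 = np.polyfit(known, observed, 2)       # observed = f(known)

def estimate_known(obs):
    """Invert the calibration quadratic for a newly observed percentage."""
    roots = np.roots([c2, c1, c0 - obs])
    real = roots[np.isreal(roots)].real
    return float(real[(real >= 0.0) & (real <= 100.0)][0])

print(estimate_known(46.25))   # a spawn observed at 46.25% is estimated near 50%
```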

  12. Accurate line intensities of methane from first-principles calculations

    Science.gov (United States)

    Nikitin, Andrei V.; Rey, Michael; Tyuterev, Vladimir G.

    2017-10-01

    In this work, we report first-principle theoretical predictions of methane spectral line intensities that are competitive with (and complementary to) the best laboratory measurements. A detailed comparison with the most accurate data shows that discrepancies in integrated polyad intensities are in the range of 0.4%-2.3%. This corresponds to estimations of the best available accuracy in laboratory Fourier Transform spectra measurements for this quantity. For relatively isolated strong lines the individual intensity deviations are in the same range. A comparison with the most precise laser measurements of the multiplet intensities in the 2ν3 band gives an agreement within the experimental error margins (about 1%). This is achieved for the first time for five-atomic molecules. In the Supplementary Material we provide the lists of theoretical intensities at 269 K for over 5000 strongest transitions in the range below 6166 cm-1. The advantage of the described method is that this offers a possibility to generate fully assigned exhaustive line lists at various temperature conditions. Extensive calculations up to 12,000 cm-1 including high-T predictions will be made freely available through the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru) that contains ab initio born line lists and provides a user-friendly graphical interface for a fast simulation of the absorption cross-sections and radiance.

  13. Predicting polymeric crystal structures by evolutionary algorithms

    Science.gov (United States)

    Zhu, Qiang; Sharma, Vinit; Oganov, Artem R.; Ramprasad, Ramamurthy

    2014-10-01

    The recently developed evolutionary algorithm USPEX proved to be a tool that enables accurate and reliable prediction of structures. Here we extend this method to predict the crystal structure of polymers by constrained evolutionary search, where each monomeric unit is treated as a building block with fixed connectivity. This greatly reduces the search space and allows the initial structure generation with different sequences and packings of these blocks. The new constrained evolutionary algorithm is successfully tested and validated on a diverse range of experimentally known polymers, namely, polyethylene, polyacetylene, poly(glycolic acid), poly(vinyl chloride), poly(oxymethylene), poly(phenylene oxide), and poly(p-phenylene sulfide). By fixing the orientation of polymeric chains, this method can be further extended to predict the structures of complex linear polymers, such as all polymorphs of poly(vinylidene fluoride), nylon-6 and cellulose. The excellent agreement between predicted crystal structures and experimentally known structures assures a major role of this approach in the efficient design of the future polymeric materials.
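
USPEX couples sophisticated variation operators with first-principles relaxations; the mutation-and-selection loop at its core can nonetheless be caricatured with a toy (mu + lambda) evolution strategy over block positions and a toy pair energy. Everything below (energy function, parameters, block count) is illustrative, not USPEX.

```python
import numpy as np

rng = np.random.default_rng(3)

def energy(x):
    """Toy 1-D 'lattice energy': pairwise Lennard-Jones between block positions."""
    d = np.abs(x[:, None] - x[None, :])[np.triu_indices(len(x), 1)]
    return float(np.sum(4.0 * (d ** -12 - d ** -6)))

# (mu + lambda) evolution strategy over the positions of 5 'monomer blocks'
mu, lam, steps = 8, 32, 400
pop = [np.sort(rng.uniform(0.0, 8.0, 5)) for _ in range(mu)]
for _ in range(steps):
    parents = [pop[rng.integers(mu)] for _ in range(lam)]
    children = [np.sort(p + rng.normal(0.0, 0.1, p.size)) for p in parents]
    pop = sorted(pop + children, key=energy)[:mu]   # elitist selection

best = pop[0]
print("best energy:", round(energy(best), 3))
print("block spacings:", np.round(np.diff(best), 2))
```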

  14. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Eric R. Edelman

    2017-06-01

    Full Text Available For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.

  15. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Science.gov (United States)

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
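
The comparison described above — the fixed ratio model (TPT = 1.33 × eSCT) versus a fitted regression — can be sketched on synthetic data. The generative model and coefficients below are invented for illustration; the real study fit 79,983 OR records and also used operation type, ASA class and anesthesia type as predictors:

```python
from statistics import mean
import random

# Synthetic illustration of the two TPT models compared in the abstract.
# The generative model (TPT = eSCT + constant ACT + noise) is invented.
random.seed(42)
n = 500
esct = [random.uniform(30, 240) for _ in range(n)]       # minutes of SCT
tpt = [s + 15 + random.gauss(0, 10) for s in esct]       # synthetic "true" TPT

pred_fixed = [1.33 * s for s in esct]                    # fixed ratio model

# Ordinary least squares for TPT = a + b * eSCT (closed form, one predictor).
mx, my = mean(esct), mean(tpt)
b = (sum((x - mx) * (y - my) for x, y in zip(esct, tpt))
     / sum((x - mx) ** 2 for x in esct))
a = my - b * mx
pred_reg = [a + b * x for x in esct]

def rmse(pred):
    return (sum((y - p) ** 2 for y, p in zip(tpt, pred)) / n) ** 0.5

print(f"fixed ratio RMSE: {rmse(pred_fixed):.1f} min")
print(f"regression RMSE:  {rmse(pred_reg):.1f} min")
```

Whenever ACT is not actually proportional to SCT (here it is a fixed overhead), a fitted intercept and slope beat the single fixed ratio, which is the abstract's core finding.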

  16. Utility Energy Services Contracts: Enabling Documents

    Energy Technology Data Exchange (ETDEWEB)

    None

    2009-05-01

    Utility Energy Services Contracts: Enabling Documents provides materials that clarify the authority for Federal agencies to enter into utility energy services contracts (UESCs), as well as sample documents and resources to ease utility partnership contracting.

  17. Enabling Technology for Small Satellite Launch Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Access to space for Small Satellites is enabled by the use of excess launch capacity on existing launch vehicles. A range of sizes, form factors and masses of small...

  18. Intrusion Detection in Bluetooth Enabled Mobile Phones

    CSIR Research Space (South Africa)

    Nair, Kishor Krishnan

    2015-11-23

    Full Text Available Bluetooth plays a major role in expanding global spread of wireless technology. This predominantly happens through Bluetooth enabled mobile phones, which cover almost 60% of the Bluetooth market. Although Bluetooth mobile phones are equipped...

  19. Web Enabled DROLS Verity TopicSets

    National Research Council Canada - National Science Library

    Tong, Richard

    1999-01-01

    The focus of this effort has been the design and development of automatically generated TopicSets and HTML pages that provide the basis of the required search and browsing capability for DTIC's Web Enabled DROLS System...

  20. Utility Energy Services Contracts: Enabling Documents

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Karen; Vasquez, Deb

    2017-01-01

    The Federal Energy Management Program's 'Utility Energy Service Contracts: Enabling Documents' provide legislative information and materials that clarify the authority for federal agencies to enter into utility energy service contracts, or UESCs.

  1. Taxonomy Enabled Discovery (TED), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposal addresses the NASA's need to enable scientific discovery and the topic's requirements for: processing large volumes of data, commonly available on the...

  2. Optical Coherent Receiver Enables THz Wireless Bridge

    DEFF Research Database (Denmark)

    Yu, Xianbin; Liu, Kexin; Zhang, Hangkai

    2016-01-01

    We experimentally demonstrated a 45 Gbit/s 400 GHz photonic wireless communication system enabled by an optical coherent receiver, which has a high potential in fast recovery of high data rate connections, for example, in disaster....

  3. Paradoxical Leadership to Enable Strategic Agility

    OpenAIRE

    Lewis, M. W.; Andriopoulos, C.; Smith, W. K.

    2014-01-01

    Strategic agility evokes contradictions, such as stability-flexibility, commitment-change, and established routines-novel approaches. These competing demands pose challenges that require paradoxical leadership—practices seeking creative, both/and solutions that can enable fast-paced, adaptable decision making. Why is managing paradox critical to strategic agility? And which practices enable leaders to effectively manage tensions? This article describes the paradoxical nature of strategic agil...

  4. Integrated Photonics Enabled by Slow Light

    DEFF Research Database (Denmark)

    Mørk, Jesper; Chen, Yuntian; Ek, Sara

    2012-01-01

    In this talk we will discuss the physics of slow light in semiconductor materials and in particular the possibilities offered for integrated photonics. This includes ultra-compact slow light enabled optical amplifiers, lasers and pulse sources.

  5. Corollary Discharge and Oculomotor Proprioception: Cortical Mechanisms for Spatially Accurate Vision.

    Science.gov (United States)

    Sun, Linus D; Goldberg, Michael E

    2016-10-14

    A classic problem in psychology is understanding how the brain creates a stable and accurate representation of space for perception and action despite a constantly moving eye. Two mechanisms have been proposed to solve this problem: Herman von Helmholtz's idea that the brain uses a corollary discharge of the motor command that moves the eye to adjust the visual representation, and Sir Charles Sherrington's idea that the brain measures eye position to calculate a spatial representation. Here, we discuss the cognitive, neuropsychological, and physiological mechanisms that support each of these ideas. We propose that both are correct: A rapid corollary discharge signal remaps the visual representation before an impending saccade, computing accurate movement vectors; and an oculomotor proprioceptive signal enables the brain to construct a more accurate craniotopic representation of space that develops slowly after the saccade.

  6. Important Nearby Galaxies without Accurate Distances

    Science.gov (United States)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis from which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous conflicting distance estimates, resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 mag in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
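
As a back-of-the-envelope check on what the quoted accuracy means: the standard distance modulus relation mu = 5 log10(d_pc) - 5 implies that a 0.1 mag uncertainty corresponds to roughly a 5% distance error. A small sketch (the example modulus is arbitrary, not one of the target galaxies):

```python
# Distance modulus relation: mu = m - M = 5 * log10(d_pc) - 5.
# A 0.1 mag modulus error translates to a factor 10**0.02 ~ 4.7% in distance.
def modulus_to_mpc(mu):
    return 10 ** (mu / 5 + 1) / 1e6          # distance in Mpc

mu = 29.5                                     # arbitrary example near 8 Mpc
d = modulus_to_mpc(mu)
d_hi = modulus_to_mpc(mu + 0.1)               # effect of a +0.1 mag error
print(f"d = {d:.2f} Mpc; +0.1 mag -> {d_hi:.2f} Mpc "
      f"({100 * (d_hi / d - 1):.1f}% high)")
```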

  7. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  8. Accurate hydrocarbon estimates attained with radioactive isotope

    International Nuclear Information System (INIS)

    Hubbard, G.

    1983-01-01

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample.

  9. How flatbed scanners upset accurate film dosimetry

    International Nuclear Information System (INIS)

    Van Battum, L J; Verdaasdonk, R M; Heukelom, S; Huizenga, H

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2–2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 and 9 Gy, i.e. an optical density range between 0.25 and 1.1. Measurements were performed in the scanner’s transmission mode, with red–green–blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes deviations of 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and of the dose delivered to the film. (paper)

  10. Achieving Target Voriconazole Concentrations More Accurately in Children and Adolescents

    Science.gov (United States)

    Neely, Michael; Margol, Ashley; Fu, Xiaowei; van Guilder, Michael; Bayard, David; Schumitzky, Alan; Orbach, Regina; Liu, Siyu; Louie, Stan; Hope, William

    2015-01-01

    Despite the documented benefit of voriconazole therapeutic drug monitoring, nonlinear pharmacokinetics make the timing of steady-state trough sampling and appropriate dose adjustments unpredictable by conventional methods. We developed a nonparametric population model with data from 141 previously richly sampled children and adults. We then used it in our multiple-model Bayesian adaptive control algorithm to predict measured concentrations and doses in a separate cohort of 33 pediatric patients aged 8 months to 17 years who were receiving voriconazole and enrolled in a pharmacokinetic study. Using all available samples to estimate the individual Bayesian posterior parameter values, the median percent prediction bias relative to a measured target trough concentration in the patients was 1.1% (interquartile range, −17.1 to 10%). Compared to the actual dose that resulted in the target concentration, the percent bias of the predicted dose was −0.7% (interquartile range, −7 to 20%). Using only trough concentrations to generate the Bayesian posterior parameter values, the target bias was 6.4% (interquartile range, −1.4 to 14.7%; P = 0.16 versus the full posterior parameter value) and the dose bias was −6.7% (interquartile range, −18.7 to 2.4%; P = 0.15). Use of a sample collected at an optimal time of 4 h after a dose, in addition to the trough concentration, resulted in a nonsignificantly improved target bias of 3.8% (interquartile range, −13.1 to 18%; P = 0.32) and a dose bias of −3.5% (interquartile range, −18 to 14%; P = 0.33). With the nonparametric population model and trough concentrations, our control algorithm can accurately manage voriconazole therapy in children independently of steady-state conditions, and it is generalizable to any drug with a nonparametric pharmacokinetic model. (This study has been registered at ClinicalTrials.gov under registration no. NCT01976078.) PMID:25779580

  11. Achieving target voriconazole concentrations more accurately in children and adolescents.

    Science.gov (United States)

    Neely, Michael; Margol, Ashley; Fu, Xiaowei; van Guilder, Michael; Bayard, David; Schumitzky, Alan; Orbach, Regina; Liu, Siyu; Louie, Stan; Hope, William

    2015-01-01

    Despite the documented benefit of voriconazole therapeutic drug monitoring, nonlinear pharmacokinetics make the timing of steady-state trough sampling and appropriate dose adjustments unpredictable by conventional methods. We developed a nonparametric population model with data from 141 previously richly sampled children and adults. We then used it in our multiple-model Bayesian adaptive control algorithm to predict measured concentrations and doses in a separate cohort of 33 pediatric patients aged 8 months to 17 years who were receiving voriconazole and enrolled in a pharmacokinetic study. Using all available samples to estimate the individual Bayesian posterior parameter values, the median percent prediction bias relative to a measured target trough concentration in the patients was 1.1% (interquartile range, -17.1 to 10%). Compared to the actual dose that resulted in the target concentration, the percent bias of the predicted dose was -0.7% (interquartile range, -7 to 20%). Using only trough concentrations to generate the Bayesian posterior parameter values, the target bias was 6.4% (interquartile range, -1.4 to 14.7%; P = 0.16 versus the full posterior parameter value) and the dose bias was -6.7% (interquartile range, -18.7 to 2.4%; P = 0.15). Use of a sample collected at an optimal time of 4 h after a dose, in addition to the trough concentration, resulted in a nonsignificantly improved target bias of 3.8% (interquartile range, -13.1 to 18%; P = 0.32) and a dose bias of -3.5% (interquartile range, -18 to 14%; P = 0.33). With the nonparametric population model and trough concentrations, our control algorithm can accurately manage voriconazole therapy in children independently of steady-state conditions, and it is generalizable to any drug with a nonparametric pharmacokinetic model. (This study has been registered at ClinicalTrials.gov under registration no. NCT01976078.). Copyright © 2015, American Society for Microbiology. All Rights Reserved.
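
The multiple-model Bayesian control idea described above can be sketched with a deliberately simplified PK model: a discrete grid of candidate clearances stands in for the nonparametric population model, a single measured trough updates the weights, and the controller picks the dose whose weighted predicted trough is closest to target. All numbers, the linear steady-state model, and the Gaussian error model are invented for illustration; real voriconazole kinetics are nonlinear:

```python
from math import exp

# Deliberately simplified multiple-model sketch: a grid of candidate
# clearances stands in for the nonparametric population model. The grid
# values, error model and toy PK relation are all invented.

clearances = [2.0, 4.0, 6.0, 8.0]        # L/h, candidate support points
prior = [0.25, 0.25, 0.25, 0.25]         # uniform nonparametric prior

def predicted_trough(dose_mg, cl, tau_h=12.0):
    return dose_mg / (cl * tau_h)        # toy steady-state trough, mg/L

def update(weights, dose_mg, measured, sigma=0.5):
    # Bayesian update of the support-point weights under Gaussian error.
    like = [exp(-0.5 * ((measured - predicted_trough(dose_mg, cl)) / sigma) ** 2)
            for cl in clearances]
    post = [w * l for w, l in zip(weights, like)]
    total = sum(post)
    return [p / total for p in post]

def best_dose(weights, target=2.0, candidates=range(50, 401, 25)):
    # Pick the dose whose weighted predicted trough is closest to target.
    def error(d):
        return abs(sum(w * predicted_trough(d, cl)
                       for w, cl in zip(weights, clearances)) - target)
    return min(candidates, key=error)

posterior = update(prior, dose_mg=200, measured=4.0)   # measured trough too high
print(best_dose(posterior))
```

A single informative sample concentrates the posterior on the compatible support points, so the dose recommendation no longer requires steady-state trough timing — the property the abstract emphasizes.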

  12. Spectral prediction of sediment chemistry in Lake Okeechobee, Florida.

    Science.gov (United States)

    Vogel, W Justin; Osborne, Todd Z; James, R Thomas; Cohen, Matthew J

    2016-10-01

    High-resolution diffuse reflectance spectra in the visible and near-infrared wavelengths were used to predict chemical properties of sediment samples obtained from Lake Okeechobee (FL, USA). Chemometric models yielded highly effective prediction (relative percent difference (RPD) = SD/RMSE >2) for some sediment properties including total magnesium (Mg), total calcium (Ca), total nitrogen (TN), total carbon (TC), and organic matter content (loss on ignition (LOI)). Predictions for iron (Fe), aluminum (Al), and various forms of phosphorus (total P (TP), HCl-extractable P (HCl-P), and KCl-extractable P (KCl-P)) were also sufficiently accurate (RPD > 1.5) to be considered useful; predictions for other P fractions as well as all pore water properties were poor. Notably, scanning wet sediments resulted in only a 7 % decline in RPD scores. Moreover, interpolation maps based on values predicted from wet sediment spectra captured the same spatial patterns for Ca, Mg, TC, TN, and TP as maps derived directly from wet chemistry, suggesting that field scanning of perpetually saturated sediments may be a viable option for expediting sample analysis and greatly reducing mapping costs. Indeed, the accuracy of spectral model predictions compared favorably with the accuracy of kriging model predictions derived from wet chemistry observations suggesting that, for some analytes, higher density spatial sampling enabled by use of field spectroscopy could increase the geographic accuracy of monitoring for changes in lake sediment chemical properties.
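
The quality metric used in this record, RPD = SD/RMSE (with >2 rated "highly effective" and >1.5 "useful"), is straightforward to compute; the observed/predicted values below are invented calibration results, not the lake data:

```python
from statistics import pstdev

# RPD as defined in the record: RPD = SD(observed) / RMSE(pred vs observed).
def rpd(observed, predicted):
    rmse = (sum((o - p) ** 2 for o, p in zip(observed, predicted))
            / len(observed)) ** 0.5
    return pstdev(observed) / rmse

obs  = [10.0, 14.0, 9.0, 20.0, 16.0, 12.0]   # e.g. total Ca, arbitrary units
pred = [10.5, 13.2, 9.8, 19.1, 16.9, 11.5]   # spectral model predictions
score = rpd(obs, pred)
quality = "highly effective" if score > 2 else "useful" if score > 1.5 else "poor"
print(f"RPD = {score:.2f} ({quality})")
```

Intuitively, RPD asks whether the model's errors are small compared with the natural spread of the analyte, so an RPD near 1 means the model predicts little better than the mean.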

  13. On enabling secure applications through off-line biometric identification

    International Nuclear Information System (INIS)

    Davida, G.I.; Frankel, Y.; Matt, B.J.

    1998-04-01

    In developing secure applications and systems, the designers often must incorporate secure user identification in the design specification. In this paper, the authors study secure off line authenticated user identification schemes based on a biometric system that can measure a user's biometric accurately (up to some Hamming distance). The schemes presented here enhance identification and authorization in secure applications by binding a biometric template with authorization information on a token such as a magnetic strip. Also developed here are schemes specifically designed to minimize the compromise of a user's private biometrics data, encapsulated in the authorization information, without requiring secure hardware tokens. In this paper the authors furthermore study the feasibility of biometrics performing as an enabling technology for secure system and application design. The authors investigate a new technology which allows a user's biometrics to facilitate cryptographic mechanisms

  14. On enabling secure applications through off-line biometric identification

    Energy Technology Data Exchange (ETDEWEB)

    Davida, G.I. [Univ. of Wisconsin, Milwaukee, WI (United States); Frankel, Y. [CertCo LLC, New York, NY (United States); Matt, B.J. [Sandia National Labs., Albuquerque, NM (United States)

    1998-04-01

    In developing secure applications and systems, the designers often must incorporate secure user identification in the design specification. In this paper, the authors study secure off line authenticated user identification schemes based on a biometric system that can measure a user's biometric accurately (up to some Hamming distance). The schemes presented here enhance identification and authorization in secure applications by binding a biometric template with authorization information on a token such as a magnetic strip. Also developed here are schemes specifically designed to minimize the compromise of a user's private biometrics data, encapsulated in the authorization information, without requiring secure hardware tokens. In this paper the authors furthermore study the feasibility of biometrics performing as an enabling technology for secure system and application design. The authors investigate a new technology which allows a user's biometrics to facilitate cryptographic mechanisms.

  15. Accurate anisotropic material modelling using only tensile tests for hot and cold forming

    Science.gov (United States)

    Abspoel, M.; Scholting, M. E.; Lansbergen, M.; Neelis, B. M.

    2017-09-01

    Accurate material data for simulations require a lot of effort. Advanced yield loci require many different kinds of tests, and a Forming Limit Curve (FLC) needs a large number of samples. Many people use simple material models to reduce the testing effort; however, some models are either not accurate enough (e.g. Hill’48) or do not describe new types of materials (e.g. Keeler). Advanced yield loci describe anisotropic material behaviour accurately, but are not widely adopted because of the specialized tests, and data post-processing is a hurdle for many. To overcome these issues, correlations between the advanced yield locus points (biaxial, plane strain and shear) and mechanical properties have been investigated. This resulted in accurate prediction of the advanced stress points using only Rm, Ag and r-values in three directions, from which a Vegter yield locus can be constructed with low effort. FLCs can be predicted with the equations of Abspoel & Scholting depending on total elongation A80, r-value and thickness. Both predictive methods were initially developed for steel, aluminium and stainless steel (BCC and FCC materials). The validity of the predicted Vegter yield locus is investigated with simulations and measurements on both hot and cold formed parts and compared with Hill’48. An adapted specimen geometry, to ensure a homogeneous temperature distribution in the Gleeble hot tensile test, was used to measure the mechanical properties needed to predict a hot Vegter yield locus. Since testing stress states other than uniaxial is very challenging for hot material, the prediction for the yield locus adds a lot of value. For the hot FLC, an A80 sample with a homogeneous temperature distribution is needed, which, due to size limitations, is not possible in the Gleeble tensile tester. Heating the sample in an industrial type furnace and tensile testing it in a dedicated device is a good alternative to determine the necessary parameters for the FLC.

  16. Improved Bevirimat resistance prediction by combination of structural and sequence-based classifiers

    Directory of Open Access Journals (Sweden)

    Dybowski J Nikolaj

    2011-11-01

    Full Text Available Abstract Background Maturation inhibitors such as Bevirimat are a new class of antiretroviral drugs that hamper the cleavage of HIV-1 proteins into their functional active forms. They bind to these preproteins and inhibit their cleavage by the HIV-1 protease, resulting in non-functional virus particles. Nevertheless, there exist mutations in this region leading to resistance against Bevirimat. Highly specific and accurate tools to predict resistance to maturation inhibitors can help to identify patients who might benefit from the usage of these new drugs. Results We tested several methods to improve Bevirimat resistance prediction in HIV-1. It turned out that combining structural and sequence-based information in classifier ensembles led to accurate and reliable predictions. Moreover, we were able to identify the most crucial regions for Bevirimat resistance computationally, which are in line with experimental results from other studies. Conclusions Our analysis demonstrated the use of machine learning techniques to predict HIV-1 resistance against maturation inhibitors such as Bevirimat. New maturation inhibitors are already under development and might enlarge the arsenal of antiretroviral drugs in the future. Thus, accurate prediction tools are very useful to enable a personalized therapy.
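
The ensemble idea — combining a sequence-based and a structure-based classifier — can be sketched by averaging two resistance probabilities. Both scoring functions and the sample inputs below are invented placeholders, not the paper's trained models:

```python
# Minimal classifier-ensemble sketch: average a sequence-derived and a
# structure-derived resistance probability. The motif, the energy mapping
# and the example inputs are all hypothetical stand-ins.

def sequence_score(seq):
    # Placeholder: fraction of positions matching a made-up resistance motif.
    motif = "QVT"
    return sum(a == b for a, b in zip(seq, motif)) / len(motif)

def structure_score(contact_energy):
    # Placeholder: map a hypothetical binding-energy change onto [0, 1].
    return min(max((contact_energy + 1.0) / 2.0, 0.0), 1.0)

def ensemble_predict(seq, contact_energy, threshold=0.5):
    p = 0.5 * sequence_score(seq) + 0.5 * structure_score(contact_energy)
    return ("resistant" if p >= threshold else "susceptible", round(p, 2))

print(ensemble_predict("QVT", contact_energy=0.4))    # both scores high
print(ensemble_predict("AAA", contact_energy=-0.8))   # both scores low
```

Averaging (or weighted voting over) heterogeneous feature views is the standard way such ensembles gain reliability over either single classifier.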

  17. An automated method for accurate vessel segmentation

    Science.gov (United States)

    Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting (Tim)

    2017-05-01

    Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm growth, and guiding surgery in neurosurgical procedures. Despite technology advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise ratio (SNR), and (2) vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance the contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008

  18. Accurate thermodynamic characterization of a synthetic coal mine methane mixture

    International Nuclear Information System (INIS)

    Hernández-Gómez, R.; Tuma, D.; Villamañán, M.A.; Mondéjar, M.E.; Chamorro, C.R.

    2014-01-01

    Highlights: • Accurate density data of a 10 components synthetic coal mine methane mixture are presented. • Experimental data are compared with the densities calculated from the GERG-2008 equation of state. • Relative deviations in density were within a 0.2% band at temperatures above 275 K. • Densities at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations. -- Abstract: In the last few years, coal mine methane (CMM) has gained significance as a potential non-conventional gas fuel. The progressive depletion of common fossil fuels reserves and, on the other hand, the positive estimates of CMM resources as a by-product of mining promote this fuel gas as a promising alternative fuel. The increasing importance of its exploitation makes it necessary to check the capability of the present-day models and equations of state for natural gas to predict the thermophysical properties of gases with a considerably different composition, like CMM. In this work, accurate density measurements of a synthetic CMM mixture are reported in the temperature range from (250 to 400) K and pressures up to 15 MPa, as part of the research project EMRP ENG01 of the European Metrology Research Program for the characterization of non-conventional energy gases. Experimental data were compared with the densities calculated with the GERG-2008 equation of state. Relative deviations between experimental and estimated densities were within a 0.2% band at temperatures above 275 K, while data at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations
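
The comparison metric in this record — relative deviation of experimental density from the GERG-2008 equation-of-state value, checked against a 0.2% band — is a one-line computation; the density pairs below are illustrative numbers, not the published measurements:

```python
# Relative deviation of experimental density from the equation-of-state
# value, checked against the 0.2% band quoted in the record. The density
# pairs are invented, not the paper's data.
def relative_deviation_pct(rho_exp, rho_eos):
    return 100.0 * (rho_exp - rho_eos) / rho_eos

pairs = [(61.32, 61.25), (84.10, 84.01)]     # (experimental, GERG-2008), kg/m3
for rho_exp, rho_eos in pairs:
    dev = relative_deviation_pct(rho_exp, rho_eos)
    print(f"{dev:+.3f}%  within 0.2% band: {abs(dev) <= 0.2}")
```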

  19. Accurate molecular classification of cancer using simple rules

    Directory of Open Access Journals (Sweden)

    Gotoh Osamu

    2009-10-01

    Full Text Available Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensionality gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees, a measure proposed in rough set theory. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.
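    A minimal illustration of the idea of single-gene threshold rules evaluated by LOOCV, not the authors' rough-set dependency-degree method; the data and function names below are invented:

```python
import numpy as np

def single_gene_rule(X, y):
    """Pick the gene whose midpoint-threshold rule best fits the training data.
    Returns (gene index, threshold, sign); sign flips the rule's direction."""
    best, best_acc = (0, 0.0, 1), -1.0
    for g in range(X.shape[1]):
        thr = (X[y == 0, g].mean() + X[y == 1, g].mean()) / 2
        for sign in (1, -1):
            pred = (sign * (X[:, g] - thr) > 0).astype(int)
            acc = (pred == y).mean()
            if acc > best_acc:
                best, best_acc = (g, thr, sign), acc
    return best

def loocv_accuracy(X, y):
    """Leave-one-out cross-validation of the single-gene rule classifier."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        g, thr, sign = single_gene_rule(X[mask], y[mask])
        hits += int((sign * (X[i, g] - thr) > 0) == y[i])
    return hits / len(y)
```

    On a toy dataset where one gene cleanly separates the two classes, such a one-gene rule already reaches perfect LOOCV accuracy, mirroring the abstract's point that one or two well-chosen genes can suffice.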

  20. Predicted tyre-soil interface area and vertical stress distribution based on loading characteristics

    DEFF Research Database (Denmark)

    Schjønning, Per; Stettler, M.; Keller, Thomas

    2015-01-01

    The upper boundary condition for all models simulating stress patterns throughout the soil profile is the stress distribution at the tyre–soil interface. The so-called FRIDA model (Schjønning et al., 2008. Biosyst. Eng. 99, 119–133) treats the contact area as a superellipse and has been shown...... to accurately describe a range of observed vertical stress distributions. Previous research has indicated that such distributions may be predicted from tyre and loading characteristics. The objective of this study was to establish a stepwise calculation procedure enabling accurate predictions from readily...... the tyre’s ability to distribute the stress in the driving direction and in the transversal direction, respectively, increased with increases in the relevant contact area dimension (length or width). The α-parameter was further affected by FW, while Kr and L added to model performance for the β...

  1. SimPhospho: a software tool enabling confident phosphosite assignment.

    Science.gov (United States)

    Suni, Veronika; Suomi, Tomi; Tsubosaka, Tomoya; Imanishi, Susumu Y; Elo, Laura L; Corthals, Garry L

    2018-03-27

    Mass spectrometry combined with enrichment strategies for phosphorylated peptides has been successfully employed for two decades to identify sites of phosphorylation. However, unambiguous phosphosite assignment is considered challenging. Given that site-specific phosphorylation events function as different molecular switches, validation of phosphorylation sites is of utmost importance. In our earlier study we developed a method based on simulated phosphopeptide spectral libraries, which enables highly sensitive and accurate phosphosite assignments. To promote more widespread use of this method, we here introduce a software implementation with improved usability and performance. We present SimPhospho, a fast and user-friendly tool for accurate simulation of phosphopeptide tandem mass spectra. Simulated phosphopeptide spectral libraries are used to validate and supplement database search results, with the goal of improving reliable phosphoproteome identification and reporting. The presented program can be easily used together with the Trans-Proteomic Pipeline and integrated in a phosphoproteomics data analysis workflow. SimPhospho is available for Windows, Linux and Mac operating systems at https://sourceforge.net/projects/simphospho/. It is open source and implemented in C++. A user's manual with detailed description of data analysis using SimPhospho as well as test data can be found as supplementary material of this article. Supplementary data are available at https://www.btk.fi/research/computational-biomedicine/software/.

  2. Enabling Radiative Transfer on AMR grids in CRASH

    Science.gov (United States)

    Hariharan, N.; Graziani, L.; Ciardi, B.; Miniati, F.; Bungartz, H.-J.

    2017-05-01

    We introduce crash-amr, a new version of the cosmological radiative transfer (RT) code crash, enabled to use refined grids. This new feature allows us to attain higher resolution in our RT simulations and thus to describe more accurately ionization and temperature patterns in high-density regions. We have tested crash-amr by simulating the evolution of an ionized region produced by a single source embedded in gas at constant density, as well as by a more realistic configuration of multiple sources in an inhomogeneous density field. While we find an excellent agreement with the previous version of crash when the adaptive mesh refinement (AMR) feature is disabled, showing that no numerical artefact has been introduced in crash-amr, when additional refinement levels are used the code can simulate more accurately the physics of ionized gas in high-density regions. This result has been attained at no computational loss, as RT simulations on AMR grids with maximum resolution equivalent to that of a uniform Cartesian grid can be run with a gain of up to 60 per cent in computational time.

  3. Nanomaterial-Enabled Wearable Sensors for Healthcare.

    Science.gov (United States)

    Yao, Shanshan; Swetha, Puchakayala; Zhu, Yong

    2018-01-01

    Highly sensitive wearable sensors that can be conformably attached to human skin or integrated with textiles to monitor the physiological parameters of human body or the surrounding environment have garnered tremendous interest. Owing to the large surface area and outstanding material properties, nanomaterials are promising building blocks for wearable sensors. Recent advances in the nanomaterial-enabled wearable sensors including temperature, electrophysiological, strain, tactile, electrochemical, and environmental sensors are presented in this review. Integration of multiple sensors for multimodal sensing and integration with other components into wearable systems are summarized. Representative applications of nanomaterial-enabled wearable sensors for healthcare, including continuous health monitoring, daily and sports activity tracking, and multifunctional electronic skin are highlighted. Finally, challenges, opportunities, and future perspectives in the field of nanomaterial-enabled wearable sensors are discussed. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Whole-Genome Regression and Prediction Methods Applied to Plant and Animal Breeding

    Science.gov (United States)

    de los Campos, Gustavo; Hickey, John M.; Pong-Wong, Ricardo; Daetwyler, Hans D.; Calus, Mario P. L.

    2013-01-01

    Genomic-enabled prediction is becoming increasingly important in animal and plant breeding and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of markers concurrently. Methods exist that allow implementing these large-p with small-n regressions, and genome-enabled selection (GS) is being implemented in several plant and animal breeding programs. The list of available methods is long, and the relationships between them have not been fully addressed. In this article we provide an overview of available methods for implementing parametric WGR models, discuss selected topics that emerge in applications, and present a general discussion of lessons learned from simulation and empirical data analysis in the last decade. PMID:22745228
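    One of the parametric WGR models the review covers is ridge regression (the RR-BLUP family), where the large-p, small-n problem is made tractable by solving the n×n dual system instead of the p×p one. A self-contained sketch on simulated genotypes (all data here are simulated, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 1000                                # far more markers than lines
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))   # SNP genotypes coded 0/1/2
beta = np.zeros(p)
beta[:10] = rng.normal(0.0, 0.5, 10)           # 10 causal markers
y = X @ beta + rng.normal(0.0, 1.0, n)         # simulated phenotypes

lam = 10.0
# Ridge solution via the n x n dual system, cheap when p >> n:
#   alpha = (XX' + lam*I)^-1 y,  beta_hat = X' alpha
K = X @ X.T
alpha = np.linalg.solve(K + lam * np.eye(n), y)
beta_hat = X.T @ alpha
gebv = X @ beta_hat        # genomic estimated breeding values, training lines
```

    The dual trick is what makes regressing phenotypes on thousands of markers concurrently feasible: the linear solve is n×n regardless of how many markers are fitted.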

  5. Enabling Routes as Context in Mobile Services

    DEFF Research Database (Denmark)

    Brilingaite, Agne; Jensen, Christian Søndergaard; Zokaite, Nora

    With the continuing advances in wireless communications, geo-positioning, and portable electronics, an infrastructure is emerging that enables the delivery of on-line, location-enabled services to very large numbers of mobile users. A typical usage situation for mobile services is one characterized...... aware. Mobile users frequently follow the same route to a destination as they did during previous trips to the destination, and the route and destination constitute important aspects of the context for a range of services. This paper presents key concepts underlying a software component that identifies...

  6. Enabling Routes as Context in Mobile Services

    DEFF Research Database (Denmark)

    Brilingaite, Agne; Jensen, Christian Søndergaard; Zokaite, Nora

    2004-01-01

    With the continuing advances in wireless communications, geo-positioning, and portable electronics, an infrastructure is emerging that enables the delivery of on-line, location-enabled services to very large numbers of mobile users. A typical usage situation for mobile services is one characterized...... by a small screen and no keyboard, and by the service being only a secondary focus of the user. It is therefore particularly important to deliver the "right" information and service at the right time, with as little user interaction as possible. This may be achieved by making services context aware.Mobile...

  7. Enablers & Barriers for Realizing Modularity Benefits

    DEFF Research Database (Denmark)

    Storbjerg, Simon Haahr; Brunø, Thomas Ditlev; Thyssen, Jesper

    2012-01-01

    far less attention compared to the theories and methods concerning modularization of technical systems. Harvesting the full potential of modularization, particularly in relation to product development agility, depends on more than an optimal architecture. Key enablers in this context...... are the organizational and systems related aspects. Recognizing the need for guidance to realize the benefits of modularity, the purpose of this study is through a literature study and a case study to improve the insight into the organizational and systems related enablers and barriers with regard to obtaining the full...

  8. Origami-enabled deformable silicon solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Rui; Huang, Hai; Liang, Hanshuang; Liang, Mengbing [School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, Arizona 85287 (United States); Tu, Hongen; Xu, Yong [Electrical and Computer Engineering, Wayne State University, 5050 Anthony Wayne Dr., Detroit, Michigan 48202 (United States); Song, Zeming; Jiang, Hanqing, E-mail: hanqing.jiang@asu.edu [School for Engineering of Matter, Transport and Energy, Arizona State University, Tempe, Arizona 85287 (United States); Yu, Hongyu, E-mail: hongyu.yu@asu.edu [School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, Arizona 85287 (United States); School of Earth and Space Exploration, Arizona State University, Tempe, Arizona 85287 (United States)

    2014-02-24

    Deformable electronics have found various applications, and elastomeric materials have been widely used to achieve flexibility and stretchability. In this Letter, we report an alternative approach to enable deformability through origami. In this approach, deformability is achieved through folding and unfolding at the creases while the functional devices do not experience strain. We have demonstrated an example of origami-enabled silicon solar cells and showed that this solar cell can reach up to 644% areal compactness while maintaining reasonably good performance upon cyclic folding/unfolding. This approach opens an alternative direction for producing flexible, stretchable, and deformable electronics.

  9. Origami-enabled deformable silicon solar cells

    International Nuclear Information System (INIS)

    Tang, Rui; Huang, Hai; Liang, Hanshuang; Liang, Mengbing; Tu, Hongen; Xu, Yong; Song, Zeming; Jiang, Hanqing; Yu, Hongyu

    2014-01-01

    Deformable electronics have found various applications, and elastomeric materials have been widely used to achieve flexibility and stretchability. In this Letter, we report an alternative approach to enable deformability through origami. In this approach, deformability is achieved through folding and unfolding at the creases while the functional devices do not experience strain. We have demonstrated an example of origami-enabled silicon solar cells and showed that this solar cell can reach up to 644% areal compactness while maintaining reasonably good performance upon cyclic folding/unfolding. This approach opens an alternative direction for producing flexible, stretchable, and deformable electronics.

  10. Genomic prediction of starch content and chipping quality in tetraploid potato using genotyping-by-sequencing

    DEFF Research Database (Denmark)

    Sverrisdóttir, Elsa; Byrne, Stephen; Nielsen, Ea Høegh Riis

    2017-01-01

    continue to fall. In this study, we have generated genomic prediction models for starch content and chipping quality in tetraploid potato to facilitate varietal development. Chipping quality was evaluated as the colour of a potato chip after frying following cold induced sweetening. We used genotyping...... genomic estimated breeding values. Cross-validated prediction correlations of 0.56 and 0.73 were obtained within the training population for starch content and chipping quality, respectively, while correlations were lower when predicting performance in the test panel, at 0.30–0.31 and 0...... potato necessitates large training populations to efficiently capture the genetic diversity of elite potato germplasm and enable accurate prediction across the entire spectrum of elite potatoes. Nonetheless, our results demonstrate that GS is a promising breeding strategy for tetraploid potato....

  11. Robust, accurate and fast automatic segmentation of the spinal cord.

    Science.gov (United States)

    De Leener, Benjamin; Kadoury, Samuel; Cohen-Adad, Julien

    2014-09-01

    Spinal cord segmentation provides measures of atrophy and facilitates group analysis via inter-subject correspondence. Automating this procedure enables studies with large throughput and minimizes user bias. Although several automatic segmentation methods exist, they are often restricted in terms of image contrast and field-of-view. This paper presents a new automatic segmentation method (PropSeg) optimized for robustness, accuracy and speed. The algorithm is based on the propagation of a deformable model and is divided into three parts: firstly, an initialization step detects the spinal cord position and orientation using a circular Hough transform on multiple axial slices rostral and caudal to the starting plane and builds an initial elliptical tubular mesh. Secondly, a low-resolution deformable model is propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a local contrast-to-noise adaptation at each iteration. Thirdly, a refinement process and a global deformation are applied on the propagated mesh to provide an accurate segmentation of the spinal cord. Validation was performed in 15 healthy subjects and two patients with spinal cord injury, using T1- and T2-weighted images of the entire spinal cord and on multiecho T2*-weighted images. Our method was compared against manual segmentation and against an active surface method. Results show high precision for all the MR sequences. Dice coefficients were 0.9 for the T1- and T2-weighted cohorts and 0.86 for the T2*-weighted images. The proposed method runs in less than 1 min on a normal computer and can be used to quantify morphological features such as cross-sectional area along the whole spinal cord. Copyright © 2014 Elsevier Inc. All rights reserved.
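    The Dice coefficient reported above measures the overlap between the automatic and manual masks; a small self-contained computation on synthetic masks (not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((10, 10), dtype=bool); auto[2:8, 2:8] = True      # 36 pixels
manual = np.zeros((10, 10), dtype=bool); manual[3:8, 2:8] = True  # 30 pixels
# overlap = 30 pixels -> dice = 2*30 / (36 + 30) = 60/66 ~ 0.91
```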

  12. Toward accurate and precise estimates of lion density.

    Science.gov (United States)

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km^2, and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and

  13. Fast and Accurate Detection of Multiple Quantitative Trait Loci

    Science.gov (United States)

    Nettelblad, Carl; Holmgren, Sverker

    2013-01-01

    Abstract We present a new computational scheme that enables efficient and reliable quantitative trait loci (QTL) scans for experimental populations. Using a standard brute-force exhaustive search effectively prohibits accurate QTL scans involving more than two loci to be performed in practice, at least if permutation testing is used to determine significance. Some more elaborate global optimization approaches, for example DIRECT, have been adopted earlier for QTL search problems. Dramatic speedups have been reported for high-dimensional scans. However, since a heuristic termination criterion must be used in these types of algorithms, the accuracy of the optimization process cannot be guaranteed. Indeed, earlier results show that a small bias in the significance thresholds is sometimes introduced. Our new optimization scheme, PruneDIRECT, is based on an analysis leading to a computable (Lipschitz) bound on the slope of a transformed objective function. The bound is derived for both infinite- and finite-size populations. Introducing a Lipschitz bound in DIRECT leads to an algorithm related to classical Lipschitz optimization. Regions in the search space can be permanently excluded (pruned) during the optimization process. Heuristic termination criteria can thus be avoided. Hence, PruneDIRECT has a well-defined error bound and can in practice be guaranteed to be equivalent to a corresponding exhaustive search. We present simulation results that show that for simultaneous mapping of three QTLs using permutation testing, PruneDIRECT is typically more than 50 times faster than exhaustive search. The speedup is higher for stronger QTLs. This could be used to quickly detect strong candidate eQTL networks. PMID:23919387
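    The core idea, using a Lipschitz bound to permanently exclude regions that cannot contain a better optimum, can be sketched in one dimension. This is an illustration of the pruning principle only, not the PruneDIRECT algorithm, and the function and bound below are invented:

```python
def lipschitz_prune(f, lo, hi, L, tol=1e-3):
    """1-D illustration of Lipschitz-bound pruning when minimising f: an
    interval with centre value f(c) and width w cannot contain a point better
    than f(c) - L*w/2, so it is discarded once that bound cannot beat the
    best value found so far."""
    best = min(f(lo), f(hi))
    stack, evals = [(lo, hi)], 2
    while stack:
        a, b = stack.pop()
        c = 0.5 * (a + b)
        fc = f(c)
        evals += 1
        best = min(best, fc)
        if b - a < tol:
            continue                       # interval resolved to tolerance
        if fc - L * (b - a) / 2 >= best:   # prune: cannot improve on best
            continue
        stack += [(a, c), (c, b)]
    return best, evals
```

    Because the bound is rigorous, pruned regions provably contain no better point, so no heuristic termination criterion is needed, which is the property the abstract emphasises.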

  14. Accurate mobile malware detection and classification in the cloud.

    Science.gov (United States)

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

    As the dominant player in the smartphone operating system market, Android has attracted the attention of malware authors and researchers alike. The number of types of Android malware is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine performing abnormal-app detection through dynamic analysis, and a signature detection engine performing known-malware detection and classification with a combination of static and dynamic analysis. We evaluate our system using 5560 malware samples and 6000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false negative rate (1.16%) and an acceptable false positive rate (1.30%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average positive rate of 98.94%. Considering the intensive computing resources required by static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the cloud. App store markets and ordinary users can access our detection system for malware detection through a cloud service.
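    The two-engine decision logic described above (signature lookup first, anomaly score as a fallback for unknowns) can be sketched as follows; the hashes, family names, and threshold are all hypothetical, not CuckooDroid's actual interface:

```python
# Hypothetical signature database -- invented hashes and family names.
KNOWN_SIGNATURES = {"deadbeef": "Trojan.A", "c0ffee00": "Adware.B"}

def classify(app_hash, anomaly_score, threshold=0.8):
    """Two-engine decision sketch: signature lookup first (misuse detection,
    low false positives), anomaly score as a fallback for zero-day samples."""
    if app_hash in KNOWN_SIGNATURES:
        return ("known-malware", KNOWN_SIGNATURES[app_hash])
    if anomaly_score > threshold:
        return ("suspected-zero-day", None)
    return ("benign", None)
```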

  15. Development of an accurate 3D Monte Carlo broadband atmospheric radiative transfer model

    Science.gov (United States)

    Jones, Alexandra L.

    Radiation is the ultimate source of energy that drives our weather and climate. It is also the fundamental quantity detected by satellite sensors from which earth's properties are inferred. Radiative energy from the sun and emitted from the earth and atmosphere is redistributed by clouds in one of their most important roles in the atmosphere. Without accurately representing these interactions we greatly decrease our ability to successfully predict climate change, weather patterns, and to observe our environment from space. The remote sensing algorithms and dynamic models used to study and observe earth's atmosphere all parameterize radiative transfer with approximations that reduce or neglect horizontal variation of the radiation field, even in the presence of clouds. Despite having complete knowledge of the underlying physics at work, these approximations persist due to perceived computational expense. In the current context of high resolution modeling and remote sensing observations of clouds, from shallow cumulus to deep convective clouds, and given our ever advancing technological capabilities, these approximations have been exposed as inappropriate in many situations. This presents a need for accurate 3D spectral and broadband radiative transfer models to provide bounds on the interactions between clouds and radiation to judge the accuracy of similar but less expensive models and to aid in new parameterizations that take into account 3D effects when coupled to dynamic models of the atmosphere. Developing such a state of the art model based on the open source, object-oriented framework of the I3RC Monte Carlo Community Radiative Transfer ("IMC-original") Model is the task at hand. It has involved incorporating (1) thermal emission sources of radiation ("IMC+emission model"), allowing it to address remote sensing problems involving scattering of light emitted at earthly temperatures as well as spectral cooling rates, (2) spectral integration across an arbitrary

  16. Oxytonergic circuitry sustains and enables creative cognition in humans

    Science.gov (United States)

    Baas, Matthijs; Roskes, Marieke; Sligte, Daniel J.; Ebstein, Richard P.; Chew, Soo Hong; Tong, Terry; Jiang, Yushi; Mayseless, Naama; Shamay-Tsoory, Simone G.

    2014-01-01

    Creativity enables humans to adapt flexibly to changing circumstances, to manage complex social relations and to survive and prosper through social, technological and medical innovations. In humans, chronic, trait-based as well as temporary, state-based approach orientation has been linked to increased capacity for divergent rather than convergent thinking, to more global and holistic processing styles and to more original ideation and creative problem solving. Here, we link creative cognition to oxytocin, a hypothalamic neuropeptide known to up-regulate approach orientation in both animals and humans. Study 1 (N = 492) showed that plasma oxytocin predicts novelty-seeking temperament. Study 2 (N = 110) revealed that genotype differences in a polymorphism in the oxytocin receptor gene rs1042778 predicted creative ideation, with GG/GT-carriers being more original than TT-carriers. Using double-blind placebo-controlled between-subjects designs, Studies 3–6 (N = 191) finally showed that intranasal oxytocin (vs matching placebo) reduced analytical reasoning, and increased holistic processing, divergent thinking and creative performance. We conclude that the oxytonergic circuitry sustains and enables the day-to-day creativity humans need for survival and prosperity and discuss implications. PMID:23863476

  17. Improving prospects for digitally enabled livelihoods among ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Improving prospects for digitally enabled livelihoods among marginalized communities in Egypt. Egypt's limited economic growth and high ... Specifically, the project will develop the first high quality, localized, and partially Arabized curriculum focused on creating digital and business skills. This curriculum could eventually ...

  18. Enabling DRM-preserving Digital content Redistribution

    NARCIS (Netherlands)

    Krishnan Nair, S.; Popescu, B.C.; Gamage, C.D.; Crispo, B.; Tanenbaum, A.S.

    2005-01-01

    Traditionally, the process of online digital content distribution has involved a limited number of centralised distributors selling protected contents and licenses authorising the use of these contents, to consumers. In this paper, we extend this model by introducing a security scheme that enables

  19. Creating an Economically Enabling and Competitive Business ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Creating an Economically Enabling and Competitive Business Environment in the West Bank and Gaza Strip. The prospect of indefinite Israeli occupation of the ... Impact of implementing the Palestinian banking law on the performance of the private sector [Arabic language]. Documents. Impact of the commercial agents law ...

  20. Creating an Economically Enabling and Competitive Business ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Creating an Economically Enabling and Competitive Business Environment in the West Bank and Gaza Strip. The prospect of indefinite Israeli occupation of the Palestinian territories, and their extreme dependence on foreign assistance and Israeli-controlled customs revenues, had led to the conclusion that the Palestinian ...

  1. Action Learning: Avoiding Conflict or Enabling Action

    Science.gov (United States)

    Corley, Aileen; Thorne, Ann

    2006-01-01

    Action learning is based on the premise that action and learning are inextricably entwined and it is this potential, to enable action, which has contributed to the growth of action learning within education and management development programmes. However, has this growth in action learning led to an evolution or a dilution of Revans' classical…

  2. Extreme Networks' 10-Gigabit Ethernet enables

    CERN Multimedia

    2002-01-01

    " Extreme Networks, Inc.'s 10-Gigabit switching platform enabled researchers to transfer one Terabyte of information from Vancouver to Geneva across a single network hop, the world's first large-scale, end-to-end transfer of its kind" (1/2 page).

  3. 75 FR 13235 - IP-Enabled Services

    Science.gov (United States)

    2010-03-19

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 63 [WC Docket No. 04-36; FCC 09-40] IP-Enabled Services AGENCY: Federal Communications Commission ACTION: Final rule; announcement of effective date... Internet Protocol (VoIP) service the discontinuance obligations that apply to domestic non-dominant...

  4. Nanotechnology Enabled Biological and Chemical Sensors

    Science.gov (United States)

    Koehne, Jessica; Meyyappan, M.

    2011-01-01

    Nanotechnology is an enabling technology that will impact almost all economic sectors; one of the most important, and with great potential, is the health/medical sector: nanomaterials for drug delivery, early warning sensors, implantable devices, and artificial parts with improved characteristics. Carbon nanotubes and nanofibers show promise for use in sensor development, electrodes and other biomedical applications.

  5. Accurate calculation of high harmonics generated by relativistic Thomson scattering

    International Nuclear Information System (INIS)

    Popa, Alexandru

    2008-01-01

    The recent emergence of the field of ultraintense laser pulses, corresponding to beam intensities higher than 10^18 W cm^-2, brings about the problem of the high harmonic generation (HHG) by the relativistic Thomson scattering of the electromagnetic radiation by free electrons. Starting from the equations of the relativistic motion of the electron in the electromagnetic field, we give an exact solution of this problem. Taking into account the Lienard-Wiechert equations, we obtain a periodic scattered electromagnetic field. Without loss of generality, the solution is strongly simplified by observing that the electromagnetic field is always normal to the direction electron-detector. The Fourier series expansion of this field leads to accurate expressions of the high harmonics generated by the Thomson scattering. Our calculations lead to a discrete HHG spectrum, whose shape and angular distribution are in agreement with the experimental data from the literature. Since no approximations were made, our approach is also valid in the ultrarelativistic regime, corresponding to intensities higher than 10^23 W cm^-2, where it predicts a strong increase of the HHG intensities and of the order of harmonics. In this domain, the nonlinear Thomson scattering could be an efficient source of hard x-rays
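    The abstract's harmonic analysis rests on the Fourier series expansion of a periodic field; as a generic numerical illustration (the waveform below is invented, not the scattered field of the paper), harmonic amplitudes can be read off the FFT of one full period:

```python
import numpy as np

# A periodic signal containing a fundamental plus a weaker third harmonic,
# standing in for a periodic scattered field (illustrative only).
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
field = np.sin(2 * np.pi * t) + 0.3 * np.sin(2 * np.pi * 3 * t)

# Fourier-series amplitudes from the FFT of one full period:
# spec[k] is the amplitude of the k-th harmonic.
spec = 2.0 * np.abs(np.fft.rfft(field)) / len(t)
```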

  6. Accurate determination of segmented X-ray detector geometry

    Science.gov (United States)

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A.; Chapman, Henry N.; Barty, Anton

    2015-01-01

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments. PMID:26561117
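    The refinement step compares observed Bragg peak positions with those predicted from indexing; a toy sketch of the simplest case, recovering a single module's in-plane translation by least squares (all numbers simulated, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
true_shift = np.array([3.2, -1.7])      # unknown module offset in pixels

# Spot positions predicted from crystal indexing vs. observed Bragg peaks,
# which differ by the module offset plus measurement noise.
predicted = rng.uniform(0.0, 100.0, size=(200, 2))
observed = predicted + true_shift + rng.normal(0.0, 0.1, size=(200, 2))

# For a pure translation the least-squares estimate is the mean residual.
est_shift = (observed - predicted).mean(axis=0)
```

    The full procedure in the paper also refines rotation and detector distance per module, but the translation case shows why many indexed peaks average down the per-spot noise.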

  7. Fast and accurate quantum Monte Carlo for molecular crystals.

    Science.gov (United States)

    Zen, Andrea; Brandenburg, Jan Gerit; Klimeš, Jiří; Tkatchenko, Alexandre; Alfè, Dario; Michaelides, Angelos

    2018-02-20

    Computer simulation plays a central role in modern-day materials science. The utility of a given computational approach depends largely on the balance it provides between accuracy and computational cost. Molecular crystals are a class of materials of great technological importance which are challenging for even the most sophisticated ab initio electronic structure theories to accurately describe. This is partly because they are held together by a balance of weak intermolecular forces but also because the primitive cells of molecular crystals are often substantially larger than those of atomic solids. Here, we demonstrate that diffusion quantum Monte Carlo (DMC) delivers subchemical accuracy for a diverse set of molecular crystals at a surprisingly moderate computational cost. As such, we anticipate that DMC can play an important role in understanding and predicting the properties of a large number of molecular crystals, including those built from relatively large molecules which are far beyond reach of other high-accuracy methods. Copyright © 2018 the Author(s). Published by PNAS.

  8. HIPPI: highly accurate protein family classification with ensembles of HMMs

    Directory of Open Access Journals (Sweden)

    Nam-phuong Nguyen

    2016-11-01

    Background: Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases, or that are fragmentary, remains one of the more difficult analytical problems in bioinformatics. Results: We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. Conclusion: HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile hidden Markov models can better represent multiple sequence alignments than a single profile hidden Markov model, and thus can improve downstream analyses for various bioinformatics tasks. Further research is needed to determine the best practices for building the ensemble of profile hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp.
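The ensemble decision rule described above reduces to "score the query against every profile in every family's ensemble, then assign the family of the best-scoring profile." The sketch below illustrates that rule with toy position-specific scoring profiles standing in for real HMMER profile HMMs (no insert/delete states); all names and probabilities are made up:

```python
import math

# Uniform background model for a 4-letter alphabet (illustrative only)
BACKGROUND = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}

def profile_score(profile, seq):
    """Log-odds score of seq under one profile vs. the background.
    profile: list of per-position residue-probability dicts."""
    return sum(math.log(pos.get(ch, 1e-6) / BACKGROUND[ch])
               for pos, ch in zip(profile, seq))

def classify(ensembles, seq):
    """HIPPI-style decision rule: take the family whose ensemble
    contains the best-scoring profile for the query."""
    best_family, best_score = None, float("-inf")
    for family, profiles in ensembles.items():
        for profile in profiles:
            s = profile_score(profile, seq)
            if s > best_score:
                best_family, best_score = family, s
    return best_family

ensembles = {
    "famA": [[{"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1}] * 4],
    "famB": [[{"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7}] * 4],
}
print(classify(ensembles, "AAAA"))  # famA
print(classify(ensembles, "TTTT"))  # famB
```

In HIPPI itself, each ensemble member is a full profile HMM built from a subset of the family's alignment (via hierarchical decomposition) and scored with HMMER bit scores rather than this toy log-odds sum.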

  9. PREDICT: Satellite tracking and orbital prediction

    Science.gov (United States)

    Magliacane, John A.

    2011-12-01

    PREDICT is an open-source, multi-user satellite tracking and orbital prediction program written under the Linux operating system. PREDICT provides real-time satellite tracking and orbital prediction information to users and client applications through the system console, the command line, a network socket, and the generation of audio speech. Data such as a spacecraft's sub-satellite point, azimuth and elevation headings, Doppler shift, path loss, slant range, orbital altitude, orbital velocity, footprint diameter, orbital phase (mean anomaly), squint angle, eclipse depth, the time and date of the next AOS (or LOS of the current pass), orbit number, and sunlight and visibility information are provided on a real-time basis. PREDICT can also track (or predict the position of) the Sun and Moon. PREDICT has the ability to control AZ/EL antenna rotators to maintain accurate orientation in the direction of communication satellites. As an aid in locating and tracking satellites through optical means, PREDICT can articulate tracking coordinates and visibility information as plain speech.
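Of the quantities PREDICT reports, the Doppler shift is the simplest to derive: it depends only on the downlink frequency and the slant-range rate. A minimal sketch (the frequency and range rate are example values, not PREDICT output):

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(f0_hz, range_rate_ms):
    """Doppler-shifted received frequency for a given range rate.
    range_rate > 0 means the satellite is receding (slant range
    increasing), which shifts the received frequency down."""
    return f0_hz * (1.0 - range_rate_ms / C)

# A 145.8 MHz downlink with the satellite approaching at 6 km/s
# is received roughly 2.9 kHz above the nominal frequency.
f = doppler_shift(145.8e6, -6000.0)
print(round(f - 145.8e6, 1))
```

PREDICT computes the range rate itself by propagating the satellite's two-line element set and differencing the slant range to the observer.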

  10. Enabling phenotypic big data with PheNorm.

    Science.gov (United States)

    Yu, Sheng; Ma, Yumeng; Gronsbell, Jessica; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Liao, Katherine P; Cai, Tianxi

    2018-01-01

    Electronic health record (EHR)-based phenotyping infers whether a patient has a disease based on the information in his or her EHR. A human-annotated training set with gold-standard disease status labels is usually required to build an algorithm for phenotyping based on a set of predictive features. The time intensiveness of annotation and feature curation severely limits the ability to achieve high-throughput phenotyping. While previous studies have successfully automated feature curation, annotation remains a major bottleneck. In this paper, we present PheNorm, a phenotyping algorithm that does not require expert-labeled samples for training. The most predictive features, such as the number of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes or mentions of the target phenotype, are normalized to resemble a normal mixture distribution with high area under the receiver operating curve (AUC) for prediction. The transformed features are then denoised and combined into a score for
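One plausible reading of the normalization step above is that a raw count feature (e.g. ICD-9-CM codes for the phenotype) is log-transformed and offset by a log-scale measure of healthcare utilization, so heavy EHR users are not scored as cases merely for having more records. The transform below is an illustrative guess at that step, not the published PheNorm pipeline, which also denoises and combines several such features:

```python
import numpy as np

def phenorm_feature(counts, n_notes):
    """Utilization-normalized surrogate feature: log1p the raw phenotype
    count and subtract log1p of the patient's total note count.
    (Illustrative transform only; see the PheNorm paper for the real one.)"""
    return np.log1p(counts) - np.log1p(n_notes)

counts  = np.array([ 0,  2, 30,  3, 40])   # mentions of the target phenotype
n_notes = np.array([10, 50, 60, 12, 55])   # proxy for healthcare utilization
score = phenorm_feature(counts, n_notes)
# Patients 2 and 4 mention the phenotype often relative to their
# utilization, so they receive the largest scores.
print(score.argsort()[-2:])
```

The key design point carried over from the abstract is that the transformed feature should separate into a two-component (case/non-case) normal mixture, which is what lets the algorithm assign probabilities without any expert-labeled training set.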