WorldWideScience

Sample records for ChIP-seq accurately predicts

  1. How accurate can genetic predictions be?

    Directory of Open Access Journals (Sweden)

    Dreyfuss Jonathan M

    2012-07-01

    Background: Pre-symptomatic prediction of disease and drug response based on genetic testing is a critical component of personalized medicine. Previous work has demonstrated that the predictive capacity of genetic testing is constrained by the heritability and prevalence of the tested trait, although these constraints have only been approximated under the assumption of a normally distributed genetic risk. Results: Here, we mathematically derive the absolute limits that these factors impose on test accuracy in the absence of any distributional assumptions on risk. We present these limits in terms of the best-case receiver-operating characteristic (ROC) curve, consisting of the best-case test sensitivities and specificities, and the AUC (area under the curve) measure of accuracy. We apply our method to genetic prediction of type 2 diabetes and breast cancer, and we additionally show the best possible accuracy that can be obtained from integrated predictors, which can incorporate non-genetic features. Conclusion: Knowledge of such limits is valuable in understanding the implications of genetic testing even before additional associations are identified.

  2. Customised birthweight standards accurately predict perinatal morbidity

    Science.gov (United States)

    Figueras, Francesc; Figueras, Josep; Meler, Eva; Eixarch, Elisenda; Coll, Oriol; Gratacos, Eduard; Gardosi, Jason; Carbonell, Xavier

    2007-01-01

    Objective Fetal growth restriction is associated with adverse perinatal outcome but is often not recognised antenatally, and low birthweight centiles based on population norms are used as a proxy instead. This study compared the association between neonatal morbidity and fetal growth status at birth as determined by customised birthweight centiles and currently used centiles based on population standards. Design Retrospective cohort study. Setting Referral hospital, Barcelona, Spain. Patients A cohort of 13 661 non‐malformed singleton deliveries. Interventions Both population‐based and customised standards for birth weight were applied to the study cohort. Customised weight centiles were calculated by adjusting for maternal height, booking weight, parity, ethnic origin, gestational age at delivery and fetal sex. Main outcome measures Newborn morbidity and perinatal death. Results The association between smallness for gestational age (SGA) and perinatal morbidity was stronger when birthweight limits were customised, and resulted in an additional 4.1% (n = 565) of neonates being classified as SGA. Compared with non‐SGA neonates, this newly identified group had an increased risk of perinatal mortality (OR 3.2; 95% CI 1.6 to 6.2), neurological morbidity (OR 3.2; 95% CI 1.7 to 6.1) and non‐neurological morbidity (OR 8; 95% CI 4.8 to 13.6). Conclusion Customised standards improve the prediction of adverse neonatal outcome. The association between SGA and adverse outcome is independent of the gestational age at delivery. PMID:17251224
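
    The customised-centile idea above can be sketched in a few lines: fit a regression of birthweight on the adjustment covariates, then express each observed weight as a centile of the residual spread. This is only a hedged illustration of the concept; the variable set, the normality assumption, and the 10th-centile SGA cut-off are stand-ins rather than the actual customised-growth methodology used in the paper.

    ```python
    import numpy as np
    from scipy import stats
    from sklearn.linear_model import LinearRegression

    def customised_centiles(X, birthweight):
        """X: covariates (maternal height, booking weight, parity, gestational age, fetal sex, ...);
        birthweight: observed weights in grams; both as numeric arrays."""
        X, birthweight = np.asarray(X, float), np.asarray(birthweight, float)
        expected = LinearRegression().fit(X, birthweight).predict(X)   # customised "expected" weight
        resid = birthweight - expected
        z = resid / resid.std(ddof=1)
        return stats.norm.cdf(z) * 100.0                               # centile against the customised standard

    # SGA under the customised standard would then be, e.g., centile below 10:
    # sga_custom = customised_centiles(X, bw) < 10
    ```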

  3. Accurate torque-speed performance prediction for brushless dc motors

    Science.gov (United States)

    Gipper, Patrick D.

    Desirable characteristics of the brushless dc motor (BLDCM) have resulted in their application for electrohydrostatic (EH) and electromechanical (EM) actuation systems. But to effectively apply the BLDCM requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional to integral HP motor sizes, and is presented. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.
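
    As a point of contrast with the nonlinear simulation described above, a first-order idealised torque-speed estimate for a brushless DC motor can be written down directly from the back-EMF and torque constants; the numerical values below are illustrative assumptions, not data from the paper.

    ```python
    import numpy as np

    V, R = 28.0, 0.4        # supply voltage [V] and phase resistance [ohm] (assumed values)
    Kt, Ke = 0.05, 0.05     # torque constant [N*m/A] and back-EMF constant [V*s/rad]

    speed = np.linspace(0.0, V / Ke, 200)     # rad/s, from stall up to the ideal no-load speed
    current = (V - Ke * speed) / R            # steady-state current draw
    torque = Kt * current                     # ideal linear torque-speed line
    input_power = V * current
    efficiency = np.divide(torque * speed, input_power,
                           out=np.zeros_like(speed), where=input_power > 0)
    ```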

  4. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

    Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and an objective requirement of intelligent traffic management. Because of the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are receiving increasing attention in this field. Compared with traditional single-step prediction, multistep prediction can forecast traffic state trends over a period in the future, which, from the perspective of dynamic decision making, matters more than the current traffic condition alone. This paper therefore proposes an accurate multistep traffic flow prediction model based on SVM, in which the input vectors are built from actual traffic volume; four different types of input vectors were compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model predicts traffic flow well and that the SVM-HPT model outperformed the other three models.
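
    A minimal sketch of SVM-based multistep prediction, assuming lagged traffic volumes as the input vector and a recursive roll-forward for the future steps; the window length and SVR hyperparameters are illustrative, not the paper's SVM-HPT configuration.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    def make_lagged(series, window):
        """Turn a 1-D volume series into (lagged inputs, next-step targets)."""
        series = np.asarray(series, float)
        X = np.array([series[i:i + window] for i in range(len(series) - window)])
        return X, series[window:]

    def predict_multistep(model, history, steps, window):
        """Recursive multistep prediction: feed each prediction back in as input."""
        hist = list(np.asarray(history, float)[-window:])
        preds = []
        for _ in range(steps):
            nxt = float(model.predict(np.array(hist[-window:]).reshape(1, -1))[0])
            preds.append(nxt)
            hist.append(nxt)
        return preds

    # volume = np.loadtxt("traffic_volume.txt")            # hypothetical 5-minute volume counts
    # X, y = make_lagged(volume, window=8)
    # svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
    # next_hour = predict_multistep(svr, volume, steps=12, window=8)
    ```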

  5. Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations

    Science.gov (United States)

    Bowman, J.; Jensen, S.; McDonald, Mark

    2010-10-01

    High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.
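
    A toy time-sequence energy calculation in the spirit of the abstract: DC power scaled from direct normal irradiance, a soiling derate, a load-dependent inverter efficiency curve, and clipping at the AC rating. Every constant and the efficiency curve are assumptions for illustration, not the authors' model.

    ```python
    import numpy as np

    def annual_energy_wh(dni_w_m2, p_dc_rated_w, p_ac_rated_w, soiling=0.98):
        """dni_w_m2: hourly direct normal irradiance series; returns AC energy in Wh."""
        dc = p_dc_rated_w * (dni_w_m2 / 850.0) * soiling                     # 850 W/m2 rating condition (assumed)
        load = dc / p_ac_rated_w
        eff = np.clip(0.96 - 0.02 / np.maximum(load, 0.05), 0.0, 0.965)     # toy part-load efficiency curve
        ac = np.minimum(dc * eff, p_ac_rated_w)                             # inverter clipping at the AC rating
        return float(np.sum(ac))

    # dni = np.loadtxt("site_dni_hourly.txt")                               # hypothetical irradiance file
    # print(annual_energy_wh(dni, p_dc_rated_w=110e3, p_ac_rated_w=100e3))
    ```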

  6. Accurate prediction of secondary metabolite gene clusters in filamentous fungi.

    Science.gov (United States)

    Andersen, Mikael R; Nielsen, Jakob B; Klitgaard, Andreas; Petersen, Lene M; Zachariasen, Mia; Hansen, Tilde J; Blicher, Lene H; Gotfredsen, Charlotte H; Larsen, Thomas O; Nielsen, Kristian F; Mortensen, Uffe H

    2013-01-02

    Biosynthetic pathways of secondary metabolites from fungi are currently subject to an intense effort to elucidate the genetic basis for these compounds due to their large potential within pharmaceutics and synthetic biochemistry. The preferred method is methodical gene deletions to identify supporting enzymes for key synthases one cluster at a time. In this study, we design and apply a DNA expression array for Aspergillus nidulans in combination with legacy data to form a comprehensive gene expression compendium. We apply a guilt-by-association-based analysis to predict the extent of the biosynthetic clusters for the 58 synthases active in our set of experimental conditions. A comparison with legacy data shows the method to be accurate in 13 of 16 known clusters and nearly accurate for the remaining 3 clusters. Furthermore, we apply a data clustering approach, which identifies cross-chemistry between physically separate gene clusters (superclusters), and validate this both with legacy data and experimentally by prediction and verification of a supercluster consisting of the synthase AN1242 and the prenyltransferase AN11080, as well as identification of the product compound nidulanin A. We have used A. nidulans for our method development and validation due to the wealth of available biochemical data, but the method can be applied to any fungus with a sequenced and assembled genome, thus supporting further secondary metabolite pathway elucidation in the fungal kingdom.
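
    A hedged sketch of the guilt-by-association idea: starting from a synthase gene, extend the predicted cluster outward along the chromosome while neighbouring genes remain co-expressed with it across conditions. The correlation cutoff and data structures are placeholders, not the paper's actual scoring.

    ```python
    import numpy as np

    def predict_cluster(expr, genes_in_order, synthase, cutoff=0.5):
        """expr: dict gene -> expression vector across conditions;
        genes_in_order: gene IDs in chromosomal order; synthase: anchor gene ID."""
        idx = genes_in_order.index(synthase)
        members = [synthase]
        for step in (-1, 1):                              # extend upstream, then downstream
            i = idx + step
            while 0 <= i < len(genes_in_order):
                r = np.corrcoef(expr[genes_in_order[i]], expr[synthase])[0, 1]
                if r < cutoff:                            # stop at the first poorly co-expressed gene
                    break
                members.append(genes_in_order[i])
                i += step
        return members
    ```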

  7. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral doses.
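
    A minimal sketch of the modified standard model described above, with an eclipse/transition class of infected cells between infection and virus production; parameter values are illustrative, not the fitted values from the study.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    def siv_model(y, t, lam, d, beta, k, delta, p, c):
        T, E, I, V = y                       # target cells, eclipse-phase cells, producers, free virus
        dT = lam - d * T - beta * T * V
        dE = beta * T * V - k * E            # newly infected cells transitioning into production
        dI = k * E - delta * I               # actively virus-producing cells
        dV = p * I - c * V
        return [dT, dE, dI, dV]

    t = np.linspace(0.0, 28.0, 2000)         # days post infection
    y0 = [1e6, 0.0, 0.0, 1e-3]               # large target-cell pool, tiny viral inoculum
    sol = odeint(siv_model, y0, t, args=(1e4, 0.01, 2e-7, 1.0, 1.0, 2e3, 23.0))
    ```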

  8. Artificial neural network accurately predicts hepatitis B surface antigen seroclearance.

    Directory of Open Access Journals (Sweden)

    Ming-Hua Zheng

    BACKGROUND & AIMS: Hepatitis B surface antigen (HBsAg) seroclearance and seroconversion are regarded as favorable outcomes of chronic hepatitis B (CHB). This study aimed to develop artificial neural networks (ANNs) that could accurately predict HBsAg seroclearance or seroconversion on the basis of available serum variables. METHODS: Data from 203 untreated, HBeAg-negative CHB patients with spontaneous HBsAg seroclearance (63 with HBsAg seroconversion) and 203 age- and sex-matched HBeAg-negative controls were analyzed. ANNs and logistic regression models (LRMs) were built and tested according to HBsAg seroclearance and seroconversion. Predictive accuracy was assessed with the area under the receiver operating characteristic curve (AUROC). RESULTS: Serum quantitative HBsAg (qHBsAg) and HBV DNA levels, and qHBsAg and HBV DNA reduction, were related to HBsAg seroclearance (P<0.001) and were used for ANN/LRM-HBsAg seroclearance building, whereas qHBsAg reduction was not associated with ANN-HBsAg seroconversion (P = 0.197) and LRM-HBsAg seroconversion was solely based on qHBsAg (P = 0.01). For HBsAg seroclearance, AUROCs of the ANN were 0.96, 0.93 and 0.95 for the training, testing and genotype B subgroups, respectively. They were significantly higher than those of the LRM, qHBsAg and HBV DNA (all P<0.05). Although the performance of ANN-HBsAg seroconversion (AUROC 0.757) was inferior to that for HBsAg seroclearance, it tended to be better than those of the LRM, qHBsAg and HBV DNA. CONCLUSIONS: The ANN identifies spontaneous HBsAg seroclearance in HBeAg-negative CHB patients with better accuracy, on the basis of easily available serum data. More useful predictors for HBsAg seroconversion still need to be explored.
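
    A hedged sketch of the ANN-versus-logistic-regression comparison by AUROC on generic serum features; the network size, data split, and preprocessing are placeholders rather than the study's configuration.

    ```python
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def compare_models(X, y):
        """X: serum features (e.g. qHBsAg, HBV DNA, their reductions); y: 1 = seroclearance."""
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
        models = {
            "ANN": make_pipeline(StandardScaler(),
                                 MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)),
            "LRM": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
        }
        # AUROC on the held-out split for each model
        return {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
                for name, m in models.items()}
    ```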

  9. Fast and accurate predictions of covalent bonds in chemical space

    Science.gov (United States)

    Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole

    2016-05-01

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (˜1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2+. Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
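
    In the notation of the abstract, the first-order (vertical, fixed-geometry) alchemical estimate that reaches chemical accuracy is simply the reference bonding potential corrected by the alchemical Hellmann-Feynman derivative: $E_{\mathrm{target}}(d) \approx E_{\mathrm{ref}}(d) + \left.\partial E/\partial\lambda\right|_{\lambda=0}\,\Delta\lambda$, where $\lambda$ interpolates the external potential from the reference to the target molecule and $d$ is the (fixed) bond length of the reference geometry. The symbols here are generic shorthand for the quantities described above, not the authors' exact notation.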

  10. An Overview of Practical Applications of Protein Disorder Prediction and Drive for Faster, More Accurate Predictions

    Directory of Open Access Journals (Sweden)

    Xin Deng

    2015-07-01

    Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically disordered proteins and some human diseases has given disorder prediction a significant role in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery, with an emphasis on the disordered-to-ordered transition in disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale.

  11. Standardized EEG interpretation accurately predicts prognosis after cardiac arrest

    Science.gov (United States)

    Rossetti, Andrea O.; van Rootselaar, Anne-Fleur; Wesenberg Kjaer, Troels; Horn, Janneke; Ullén, Susann; Friberg, Hans; Nielsen, Niklas; Rosén, Ingmar; Åneman, Anders; Erlinge, David; Gasche, Yvan; Hassager, Christian; Hovdenes, Jan; Kjaergaard, Jesper; Kuiper, Michael; Pellis, Tommaso; Stammet, Pascal; Wanscher, Michael; Wetterslev, Jørn; Wise, Matt P.; Cronberg, Tobias

    2016-01-01

    Objective: To identify reliable predictors of outcome in comatose patients after cardiac arrest using a single routine EEG and standardized interpretation according to the terminology proposed by the American Clinical Neurophysiology Society. Methods: In this cohort study, 4 EEG specialists, blinded to outcome, evaluated prospectively recorded EEGs in the Target Temperature Management trial (TTM trial), which randomized patients to 33°C vs 36°C. Routine EEG was performed in patients still comatose after rewarming. EEGs were classified into highly malignant (suppression, suppression with periodic discharges, burst-suppression), malignant (periodic or rhythmic patterns, pathological or nonreactive background), and benign EEG (absence of malignant features). Poor outcome was defined as best Cerebral Performance Category score 3–5 until 180 days. Results: Eight TTM sites randomized 202 patients. EEGs were recorded in 103 patients at a median 77 hours after cardiac arrest; 37% had a highly malignant EEG and all had a poor outcome (specificity 100%, sensitivity 50%). Any malignant EEG feature had a low specificity to predict poor prognosis (48%), but if 2 malignant EEG features were present, specificity increased to 96%. A benign EEG was found in 1% of the patients with a poor outcome. Conclusions: Highly malignant EEG after rewarming reliably predicted poor outcome in half of patients without false predictions. An isolated finding of a single malignant feature did not predict poor outcome, whereas a benign EEG was highly predictive of a good outcome. PMID:26865516
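
    For reference, the specificity and sensitivity figures quoted above come from the usual 2×2 calculation of EEG pattern against dichotomized outcome; the sketch below is the generic formula, not study data.

    ```python
    def sens_spec(tp, fn, tn, fp):
        sensitivity = tp / (tp + fn)   # fraction of poor-outcome patients showing the EEG pattern
        specificity = tn / (tn + fp)   # fraction of good-outcome patients correctly without it
        return sensitivity, specificity

    # "specificity 100%, sensitivity 50%" means every patient with a highly malignant EEG
    # had a poor outcome, while half of the poor-outcome patients did not show that pattern.
    ```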

  12. Is Three-Dimensional Soft Tissue Prediction by Software Accurate?

    Science.gov (United States)

    Nam, Ki-Uk; Hong, Jongrak

    2015-11-01

    The authors assessed whether virtual surgery, performed with a soft tissue prediction program, could correctly simulate the actual surgical outcome, focusing on soft tissue movement. Preoperative and postoperative computed tomography (CT) data for 29 patients who had undergone orthognathic surgery were obtained and analyzed using the Simplant Pro software. The program produced a predicted soft tissue image (A) based on presurgical CT data. After the operation, we obtained actual postoperative CT data, from which an actual soft tissue image (B) was generated. Finally, the two images (A and B) were superimposed and the differences between them were analyzed. Results were grouped into two classes: absolute values and vector values. Among the absolute values, the left mouth corner was the most significant error point (2.36 mm). The right mouth corner (2.28 mm), labrale inferius (2.08 mm), and the pogonion (2.03 mm) also had significant errors. In the vector values, prediction in the right-left direction had a left-sided tendency, the superior-inferior direction a superior tendency, and the anterior-posterior direction an anterior tendency. As a result, with this program, predicted point positions tended to be located further left, more anterior, and more superior than the actual outcome. There is a need to improve the prediction accuracy for soft tissue images. Such software is particularly valuable in predicting craniofacial soft tissue landmarks, such as the pronasale. With this software, landmark positions were most inaccurate in terms of anterior-posterior predictions.
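
    A minimal sketch of the error metric implied above, assuming the predicted (A) and actual postoperative (B) landmark coordinates have already been superimposed in a common frame: the per-landmark Euclidean distance gives the absolute error and the signed per-axis differences give the vector error.

    ```python
    import numpy as np

    def landmark_errors(pred_xyz, actual_xyz):
        """pred_xyz, actual_xyz: (n_landmarks, 3) arrays in mm, already superimposed."""
        diff = np.asarray(actual_xyz, float) - np.asarray(pred_xyz, float)   # signed vector errors per axis
        absolute = np.linalg.norm(diff, axis=1)                              # per-landmark distance [mm]
        return absolute, diff
    ```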

  13. Fast and accurate automatic structure prediction with HHpred.

    Science.gov (United States)

    Hildebrand, Andrea; Remmert, Michael; Biegert, Andreas; Söding, Johannes

    2009-01-01

    Automated protein structure prediction is becoming a mainstream tool for biological research. This has been fueled by steady improvements of publicly available automated servers over the last decade, in particular their ability to build good homology models for an increasing number of targets by reliably detecting and aligning more and more remotely homologous templates. Here, we describe the three fully automated versions of the HHpred server that participated in the community-wide blind protein structure prediction competition CASP8. What makes HHpred unique is the combination of usability, short response times (typically under 15 min) and a model accuracy that is competitive with those of the best servers in CASP8.

  14. Predicting accurate absolute binding energies in aqueous solution

    DEFF Research Database (Denmark)

    Jensen, Jan Halborg

    2015-01-01

    Recent predictions of absolute binding free energies of host-guest complexes in aqueous solution using electronic structure theory have been encouraging for some systems, while other systems remain problematic. In this paper I summarize some of the many factors that could easily contribute 1-3 kcal......-represented by continuum models. While I focus on binding free energies in aqueous solution the approach also applies (with minor adjustments) to any free energy difference such as conformational or reaction free energy differences or activation free energies in any solvent....

  15. Standardized EEG interpretation accurately predicts prognosis after cardiac arrest

    DEFF Research Database (Denmark)

    Westhall, Erik; Rossetti, Andrea O; van Rootselaar, Anne-Fleur;

    2016-01-01

    OBJECTIVE: To identify reliable predictors of outcome in comatose patients after cardiac arrest using a single routine EEG and standardized interpretation according to the terminology proposed by the American Clinical Neurophysiology Society. METHODS: In this cohort study, 4 EEG specialists...... patients. EEGs were recorded in 103 patients at a median 77 hours after cardiac arrest; 37% had a highly malignant EEG and all had a poor outcome (specificity 100%, sensitivity 50%). Any malignant EEG feature had a low specificity to predict poor prognosis (48%) but if 2 malignant EEG features were present...

  16. Objective criteria accurately predict amputation following lower extremity trauma.

    Science.gov (United States)

    Johansen, K; Daines, M; Howey, T; Helfet, D; Hansen, S T

    1990-05-01

    MESS (Mangled Extremity Severity Score) is a simple rating scale for lower extremity trauma, based on skeletal/soft-tissue damage, limb ischemia, shock, and age. Retrospective analysis of severe lower extremity injuries in 25 trauma victims demonstrated a significant difference between MESS values for 17 limbs ultimately salvaged (mean, 4.88 +/- 0.27) and nine requiring amputation (mean, 9.11 +/- 0.51) (p less than 0.01). A prospective trial of MESS in lower extremity injuries managed at two trauma centers again demonstrated a significant difference between MESS values of 14 salvaged (mean, 4.00 +/- 0.28) and 12 doomed (mean, 8.83 +/- 0.53) limbs (p less than 0.01). In both the retrospective survey and the prospective trial, a MESS value greater than or equal to 7 predicted amputation with 100% accuracy. MESS may be useful in selecting trauma victims whose irretrievably injured lower extremities warrant primary amputation.
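
    A hedged sketch of how a MESS-style decision rule is applied; the four component point values must be taken from the published scale, and only the total plus the >= 7 threshold reported above is shown here.

    ```python
    def mess_total(skeletal_soft_tissue, ischemia, shock, age):
        """Each argument is the point value assigned from the published MESS scale."""
        return skeletal_soft_tissue + ischemia + shock + age

    def predicts_amputation(total, threshold=7):
        return total >= threshold      # the decision rule that was 100% predictive in both cohorts
    ```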

  17. Predicting accurate absolute binding energies in aqueous solution

    DEFF Research Database (Denmark)

    Jensen, Jan Halborg

    2015-01-01

    Recent predictions of absolute binding free energies of host-guest complexes in aqueous solution using electronic structure theory have been encouraging for some systems, while other systems remain problematic. In this paper I summarize some of the many factors that could easily contribute 1-3 kcal mol(-1) errors at 298 K: three-body dispersion effects, molecular symmetry, anharmonicity, spurious imaginary frequencies, insufficient conformational sampling, wrong or changing ionization states, errors in the solvation free energy of ions, and explicit solvent (and ion) effects that are not well-represented by continuum models. While I focus on binding free energies in aqueous solution the approach also applies (with minor adjustments) to any free energy difference such as conformational or reaction free energy differences or activation free energies in any solvent.

  18. An accurate empirical correlation for predicting natural gas viscosity

    Institute of Scientific and Technical Information of China (English)

    Ehsan Sanjari; Ebrahim Nemati Lay; Mohammad Peymani

    2011-01-01

    Natural gas viscosity is an important parameter in many gas and petroleum engineering calculations. This study presents a new empirical model for quickly calculating natural gas viscosity. The model was derived from 4089 experimental viscosity data points, with pseudo-reduced pressure and temperature ranging from 0.01 to 21 and from 1 to 3, respectively. The accuracy of this new empirical correlation has been compared with commonly used empirical models, including the Lee et al., Heidaryan et al., Carr et al., and Adel Elsharkawy correlations. The comparison indicates that the new empirical model can predict the viscosity of natural gas with an average absolute relative deviation (AARD) of 2.173%.
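
    The accuracy measure quoted above, the average absolute relative deviation (AARD, %), is straightforward to compute for any set of predicted versus experimental viscosities:

    ```python
    import numpy as np

    def aard_percent(predicted, experimental):
        predicted, experimental = np.asarray(predicted, float), np.asarray(experimental, float)
        return 100.0 * np.mean(np.abs((predicted - experimental) / experimental))
    ```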

  19. Accurate theoretical prediction on positron lifetime of bulk materials

    CERN Document Server

    Zhang, Wenshuai; Liu, Jiandang; Ye, Bangjiao

    2015-01-01

    Based on first-principles calculations, we perform an initial statistical assessment of the reliability of theoretical positron lifetimes of bulk materials. We found that the original generalized gradient approximation (GGA) form of the enhancement factor and correlation potentials overestimates the effect of the gradient factor. Furthermore, an excellent agreement between model and data, with differences at the noise level of the data, is found in this work. In addition, we suggest a new GGA form of the correlation scheme which gives the best performance. This work demonstrates that a new level of reliability has been achieved for the theoretical prediction of positron lifetimes of bulk materials, and that the accuracy of the best theoretical scheme is independent of the type of material.

  20. Change in BMI accurately predicted by social exposure to acquaintances.

    Directory of Open Access Journals (Sweden)

    Rahman O Oloritun

    Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes the change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, using social interaction data automatically captured via mobile phones together with survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, together with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R². This study found a model that explains 68% (p<0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, and personal health-related information to explain change in BMI. This is the first study taking into account both interactions with different levels of social interaction and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also the significance of social interactions with people we are exposed to, even people we may not consider as
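
    A minimal sketch of the pipeline described above, assuming a numeric feature matrix: LASSO selects the predictive variables, an ordinary linear model is then fit on the selected columns, and candidate models are compared by AIC and R². Column layout and helper names are placeholders.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from sklearn.linear_model import LassoCV

    def select_and_fit(X, y):
        """X: numeric matrix of social exposure and individual variables; y: change in BMI."""
        X, y = np.asarray(X, float), np.asarray(y, float)
        lasso = LassoCV(cv=5).fit(X, y)
        keep = np.flatnonzero(lasso.coef_)                       # columns that LASSO retains
        ols = sm.OLS(y, sm.add_constant(X[:, keep])).fit()       # refit a plain linear model on them
        return keep, ols.aic, ols.rsquared                       # compare candidate models on AIC / R^2
    ```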

  1. Accurate Prediction of One-Dimensional Protein Structure Features Using SPINE-X.

    Science.gov (United States)

    Faraggi, Eshel; Kloczkowski, Andrzej

    2017-01-01

    Accurate prediction of protein secondary structure and other one-dimensional structure features is essential for accurate sequence alignment, three-dimensional structure modeling, and function prediction. SPINE-X is a software package that predicts secondary structure as well as accessible surface area and the dihedral angles ϕ and ψ. For secondary structure, SPINE-X achieves an accuracy of between 81 and 84%, depending on the dataset and choice of tests. The Pearson correlation coefficient for accessible surface area prediction is 0.75, and the mean absolute errors for the ϕ and ψ dihedral angles are 20° and 33°, respectively. The source code and Linux executables for SPINE-X are available from Research and Information Systems at http://mamiris.com .

  2. Mind-set and close relationships: when bias leads to (In)accurate predictions.

    Science.gov (United States)

    Gagné, F M; Lydon, J E

    2001-07-01

    The authors investigated whether mind-set influences the accuracy of relationship predictions. Because people are more biased in their information processing when thinking about implementing an important goal, relationship predictions made in an implemental mind-set were expected to be less accurate than those made in a more impartial deliberative mind-set. In Study 1, open-ended thoughts of students about to leave for university were coded for mind-set. In Study 2, mind-set about a major life goal was assessed using a self-report measure. In Study 3, mind-set was experimentally manipulated. Overall, mind-set interacted with forecasts to predict relationship survival. Forecasts were more accurate in a deliberative mind-set than in an implemental mind-set. This effect was more pronounced for long-term than for short-term relationship survival. Finally, deliberatives were not pessimistic; implementals were unduly optimistic.

  3. SCPRED: Accurate prediction of protein structural class for sequences of twilight-zone similarity with predicting sequences

    Directory of Open Access Journals (Sweden)

    Chen Ke

    2008-05-01

    Background Protein structure prediction methods provide accurate results when a homologous template is available, while poorer predictions are obtained in the absence of homologous templates. However, some protein chains that share only twilight-zone pairwise identity can form similar folds, so determining structural similarity without sequence similarity would be desirable for structure prediction. The folding type of a protein or its domain is defined as its structural class. Current structural class prediction methods that predict the four structural classes defined in SCOP provide up to 63% accuracy for datasets in which the sequence identity of any pair of sequences is in the twilight zone. We propose the SCPRED method, which improves prediction accuracy for sequences that share twilight-zone pairwise similarity with the sequences used for the prediction. Results SCPRED uses a support vector machine classifier that takes several custom-designed features as its input to predict the structural classes. Based on an extensive design that considers over 2300 index-, composition- and physicochemical-properties-based features, along with features based on the predicted secondary structure and content, the classifier's input includes 8 features based on information extracted from the secondary structure predicted with PSI-PRED and one feature computed from the sequence. Tests performed with datasets of 1673 protein chains, in which any pair of sequences shares twilight-zone similarity, show that SCPRED obtains 80.3% accuracy when predicting the four SCOP-defined structural classes, which is superior when compared with over a dozen recent competing methods based on support vector machine, logistic regression, and ensemble-of-classifiers predictors. Conclusion SCPRED can accurately find similar structures for sequences that share low identity with the sequences used for the prediction. The high predictive accuracy achieved by SCPRED is

  4. Rapid yet accurate first principle based predictions of alkali halide crystal phases using alchemical perturbation

    CERN Document Server

    Solovyeva, Alisa

    2016-01-01

    We assess the predictive power of alchemical perturbations for estimating fundamental properties in ionic crystals. Using density functional theory we have calculated formation energies, lattice constants, and bulk moduli for all sixteen iso-valence-electronic combinations of pure pristine alkali halides involving elements $A \\in \\{$Na, K, Rb, Cs$\\}$ and $X \\in \\{$F, Cl, Br, I$\\}$. For rock salt, zincblende and cesium chloride symmetry, alchemical Hellmann-Feynman derivatives, evaluated along lattice scans of sixteen reference crystals, have been obtained for all respective 16$\\times$15 combinations of reference and predicted target crystals. Mean absolute errors (MAE) are on par with density functional theory level of accuracy for energies and bulk modulus. Predicted lattice constants are less accurate. NaCl is the best reference salt for alchemical estimates of relative energies (MAE $<$ 40 meV/atom) while alkali fluorides are the worst. By contrast, lattice constants are predicted best using NaF as a re...

  5. A New Method for Accurate Prediction of Ship’s Inertial Stopping Distance

    Directory of Open Access Journals (Sweden)

    Langxiong Gan

    2013-10-01

    This study investigates the prediction of a ship's inertial stopping distance. Accurate prediction of a ship's inertial stopping distance helps duty officers make collision avoidance decisions effectively. In this study, the ship's inertial stopping distance is calculated using the ALE (Arbitrary Lagrangian–Eulerian) algorithm implemented in the FLUENT code. Firstly, a method for predicting the inertial stopping distance of a floating body based on the FLUENT code is established. Then, the results calculated by the method are compared with those obtained from empirical formulae and physical model tests. The comparison indicates that the proposed method is robust and can be used effectively to predict a ship's inertial stopping distance.

  6. PlantLoc: an accurate web server for predicting plant protein subcellular localization by substantiality motif

    OpenAIRE

    Tang, Shengnan; Li, Tonghua; Cong, Peisheng; Xiong, Wenwei; Wang, Zhiheng; Sun, Jiangming

    2013-01-01

    Knowledge of the subcellular localizations (SCLs) of plant proteins relates to their functions and aids in understanding the regulation of biological processes at the cellular level. We present PlantLoc, a highly accurate and fast web server for predicting the multi-label SCLs of plant proteins. The PlantLoc server has two innovative characteristics: building localization motif libraries by a recursive method without alignment or Gene Ontology information; and establishing a simple architecture for rapi...

  7. Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle

    Science.gov (United States)

    Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.

    2016-12-01

    Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on caster set points such as casting speed and cooling rates. Changes to the caster set points are typically made based on temperature measurements at the tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages, and can therefore be used to make accurate decisions about the caster set points in real time. However, this requires thermal prediction models that are both fast and accurate. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations determined in advance using a design-of-experiments technique. A regression method is used to train the predictor. The model predicts the stratified temperature profile instantaneously for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 pct of the predicted values are within an error range of ±5 K (±5 °C) when compared against the corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as the tundish and caster.
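
    A hedged sketch of the surrogate idea: fit a regression to the stratified temperatures extracted from the design-of-experiments set of CFD runs, then evaluate it instantly for new process parameters. The polynomial form and the query values are assumptions for illustration, not the authors' regression.

    ```python
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    # X_doe: (n_runs, 4) process parameters per CFD run:
    #        [initial steel temperature, refractory heat content, slag thickness, holding time]
    # T_profile: (n_runs, n_heights) stratified temperatures extracted from each CFD run
    def build_surrogate(X_doe, T_profile, degree=2):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        return model.fit(X_doe, T_profile)

    # surrogate = build_surrogate(X_doe, T_profile)
    # T_new = surrogate.predict([[1873.0, 5.2e9, 0.08, 45.0]])   # hypothetical query point
    ```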

  8. A Single Linear Prediction Filter that Accurately Predicts the AL Index

    Science.gov (United States)

    McPherron, R. L.; Chu, X.

    2015-12-01

    The AL index is a measure of the strength of the westward electrojet flowing along the auroral oval. It has two components: one from the global DP-2 current system and a second from the DP-1 current that is more localized near midnight. It is generally believed that the index is a poor measure of these currents because of its dependence on the distance of stations from the sources of the two currents. In fact, over season and solar cycle, the coupling strength, defined as the steady-state ratio of the output AL to the input coupling function, varies by a factor of four. Four factors lead to this variation. First is the equinoctial effect, which modulates coupling strength with peaks (strongest coupling) at the equinoxes. Second is the saturation of the polar cap potential, which decreases coupling strength as the strength of the driver increases; since saturation occurs more frequently at solar maximum, maximum coupling strength occurs at equinox at solar minimum. A third factor is ionospheric conductivity, with stronger coupling at summer solstice than at winter solstice. The fourth factor is the definition of a solar wind coupling function appropriate to a given index. We have developed an optimum coupling function, depending on solar wind speed, density, transverse magnetic field, and IMF clock angle, which performs better than previous functions. Using this, we have determined the seasonal variation of coupling strength and developed an inverse function that modulates the optimum coupling function so that all seasonal variation is removed. In a similar manner, we have determined the dependence of coupling strength on solar wind driver strength; the inverse of this function is used to scale a linear prediction filter, thus eliminating the dependence on driver strength. The result is a single linear filter that is adjusted in a nonlinear manner by driver strength and an optimum coupling function that is seasonally modulated. Together this

  9. Accurately Estimating the State of a Geophysical System with Sparse Observations: Predicting the Weather

    CERN Document Server

    An, Zhe; Abarbanel, Henry D I

    2014-01-01

    Utilizing the information in observations of a complex system to make accurate predictions through a quantitative model when observations are completed at time $T$, requires an accurate estimate of the full state of the model at time $T$. When the number of measurements $L$ at each observation time within the observation window is larger than a sufficient minimum value $L_s$, the impediments in the estimation procedure are removed. As the number of available observations is typically such that $L \\ll L_s$, additional information from the observations must be presented to the model. We show how, using the time delays of the measurements at each observation time, one can augment the information transferred from the data to the model, removing the impediments to accurate estimation and permitting dependable prediction. We do this in a core geophysical fluid dynamics model, the shallow water equations, at the heart of numerical weather prediction. The method is quite general, however, and can be utilized in the a...

  10. More accurate recombination prediction in HIV-1 using a robust decoding algorithm for HMMs

    Directory of Open Access Journals (Sweden)

    Brown Daniel G

    2011-05-01

    Background Identifying recombinations in HIV is important for studying the epidemiology of the virus and aids in the design of potential vaccines and treatments. The previous widely used tool for this task uses the Viterbi algorithm in a hidden Markov model to model recombinant sequences. Results We apply a new decoding algorithm for this HMM that improves prediction accuracy. Exactly locating breakpoints is usually impossible, since different subtypes are highly conserved in some sequence regions; our algorithm identifies these sites up to a given error tolerance. Our new algorithm is more accurate in predicting the location of recombination breakpoints. Our implementation of the algorithm is available at http://www.cs.uwaterloo.ca/~jmtruszk/jphmm_balls.tar.gz. Conclusions By explicitly accounting for uncertainty in breakpoint positions, our algorithm offers more reliable predictions of recombination breakpoints in HIV-1. We also document a new domain of use for our new decoding approach in HMMs.

  11. Accurate approximation method for prediction of class I MHC affinities for peptides of length 8, 10 and 11 using prediction tools trained on 9mers

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2008-01-01

    Several accurate prediction systems have been developed for prediction of class I major histocompatibility complex (MHC):peptide binding. Most of these are trained on binding affinity data of primarily 9mer peptides. Here, we show how prediction methods trained on 9mer data can be used for accurate...

  12. Planar Near-Field Phase Retrieval Using GPUs for Accurate THz Far-Field Prediction

    Science.gov (United States)

    Junkin, Gary

    2013-04-01

    With a view to using Phase Retrieval to accurately predict Terahertz antenna far-field from near-field intensity measurements, this paper reports on three fundamental advances that achieve very low algorithmic error penalties. The first is a new Gaussian beam analysis that provides accurate initial complex aperture estimates including defocus and astigmatic phase errors, based only on first and second moment calculations. The second is a powerful noise tolerant near-field Phase Retrieval algorithm that combines Anderson's Plane-to-Plane (PTP) with Fienup's Hybrid-Input-Output (HIO) and Successive Over-Relaxation (SOR) to achieve increased accuracy at reduced scan separations. The third advance employs teraflop Graphical Processing Units (GPUs) to achieve practically real time near-field phase retrieval and to obtain the optimum aperture constraint without any a priori information.

  13. A Novel Method for Accurate Operon Predictions in All SequencedProkaryotes

    Energy Technology Data Exchange (ETDEWEB)

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.

    2004-12-01

    We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
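
    A hedged sketch of the distance component of such a predictor: score each adjacent same-strand gene pair by a log-likelihood ratio of its intergenic distance under "same operon" versus "operon boundary" distance distributions, which the method estimates per genome. The histograms and threshold below are placeholders, not the paper's trained values.

    ```python
    import numpy as np

    def distance_llr(gap_bp, same_operon_hist, boundary_hist, bin_edges):
        """Log-likelihood ratio of an intergenic gap under the two distance distributions.
        same_operon_hist / boundary_hist: normalised histograms over bin_edges."""
        i = np.clip(np.digitize(gap_bp, bin_edges) - 1, 0, len(same_operon_hist) - 1)
        return np.log(same_operon_hist[i] + 1e-9) - np.log(boundary_hist[i] + 1e-9)

    # Pairs with distance_llr(...) > 0 (and comparative-genomics support) would be
    # called as belonging to the same operon.
    ```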

  14. LogGPO: An accurate communication model for performance prediction of MPI programs

    Institute of Scientific and Technical Information of China (English)

    CHEN WenGuang; ZHAI JiDong; ZHANG Jin; ZHENG WeiMin

    2009-01-01

    Message passing interface (MPI) is the de facto standard for writing parallel scientific applications on distributed memory systems. Performance prediction of MPI programs on current or future parallel systems can help to find system bottlenecks or optimize programs. To effectively analyze and predict the performance of a large and complex MPI program, an efficient and accurate communication model is needed. A series of communication models have been proposed, such as the LogP model family, which assume that the sending overhead, message transmission, and receiving overhead of a communication do not overlap and that there is a maximum overlap degree between computation and communication. However, this assumption does not always hold for MPI programs, because either sending or receiving overhead introduced by MPI implementations can decrease the potential overlap for large messages. In this paper, we present a new communication model, named LogGPO, which captures the potential overlap of computation with communication in MPI programs. We design and implement a trace-driven simulator to verify the LogGPO model by predicting the performance of point-to-point communication and two real applications, CG and Sweep3D. The average prediction errors of the LogGPO model are 2.4% and 2.0% for these two applications respectively, while the average prediction errors of the LogGP model are 38.3% and 9.1% respectively.
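
    For orientation, the LogGP-family cost that LogGPO refines can be sketched as follows; exposing the overlappable part of the overhead as a single user-supplied fraction is a simplifying assumption for illustration, not the LogGPO formulation itself.

    ```python
    def loggp_time(k_bytes, L, o_send, o_recv, G):
        """Classic LogGP cost of a k-byte point-to-point message:
        send overhead + per-byte gap + latency + receive overhead."""
        return o_send + (k_bytes - 1) * G + L + o_recv

    def visible_cost(k_bytes, L, o_send, o_recv, G, overlap_fraction):
        """Cost seen by the application if a fraction of the overheads hides behind computation."""
        total = loggp_time(k_bytes, L, o_send, o_recv, G)
        return total - overlap_fraction * (o_send + o_recv)
    ```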

  15. Special purpose hybrid transfinite elements and unified computational methodology for accurately predicting thermoelastic stress waves

    Science.gov (United States)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper represents an attempt to apply extensions of a hybrid transfinite element computational approach for accurately predicting thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating for the well known Danilovskaya problems is demonstrated. A unique feature of the proposed formulations for applicability to the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special purpose transfinite elements in conjunction with the classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and superior capability to capture the thermal stress waves induced due to boundary heating.

  16. The MIDAS touch for Accurately Predicting the Stress-Strain Behavior of Tantalum

    Energy Technology Data Exchange (ETDEWEB)

    Jorgensen, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-03-02

    Testing the behavior of metals in extreme environments is not always feasible, so material scientists use models to try and predict the behavior. To achieve accurate results it is necessary to use the appropriate model and material-specific parameters. This research evaluated the performance of six material models available in the MIDAS database [1] to determine at which temperatures and strain-rates they perform best, and to determine to which experimental data their parameters were optimized. Additionally, parameters were optimized for the Johnson-Cook model using experimental data from Lassila et al [2].
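
    For context, the Johnson-Cook flow-stress form whose parameters were optimised against the Lassila et al. data has the standard multiplicative structure below; the reference conditions (room temperature, tantalum melting point) are assumed typical values, and the material parameters themselves are not reproduced here.

    ```python
    import numpy as np

    def johnson_cook_stress(eps, eps_rate, T, A, B, n, C, m,
                            eps_rate_ref=1.0, T_room=293.0, T_melt=3290.0):
        """Flow stress sigma(eps, eps_rate, T); T_melt defaults to roughly tantalum's melting point."""
        T_star = (T - T_room) / (T_melt - T_room)          # homologous temperature
        return (A + B * eps**n) * (1.0 + C * np.log(eps_rate / eps_rate_ref)) * (1.0 - T_star**m)
    ```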

  17. An accurate and efficient numerical framework for adaptive numerical weather prediction

    CERN Document Server

    Tumolo, G

    2014-01-01

    We present an accurate and efficient discretization approach for the adaptive discretization of typical model equations employed in numerical weather prediction. A semi-Lagrangian approach is combined with the TR-BDF2 semi-implicit time discretization method and with a spatial discretization based on adaptive discontinuous finite elements. The resulting method has full second order accuracy in time and can employ polynomial bases of arbitrarily high degree in space, is unconditionally stable and can effectively adapt the number of degrees of freedom employed in each element, in order to balance accuracy and computational cost. The p-adaptivity approach employed does not require remeshing, therefore it is especially suitable for applications, such as numerical weather prediction, in which a large number of physical quantities are associated with a given mesh. Furthermore, although the proposed method can be implemented on arbitrary unstructured and nonconforming meshes, even its application on simple Cartesian...

  18. Physical modeling of real-world slingshots for accurate speed predictions

    CERN Document Server

    Yeats, Bob

    2016-01-01

    We discuss the physics and modeling of latex-rubber slingshots. The goal is to get accurate speed predictions in spite of the significant real-world difficulties of force drift, force hysteresis, rubber ageing, and the very non-linear, non-ideal force vs. pull distance curves of slingshot rubber bands. Slingshots are known to shoot faster under some circumstances when the bands are tapered rather than having constant width and stiffness. We give both qualitative understanding and numerical predictions of this effect. We consider two models. The first is based on conservation of energy and is easier to implement, but cannot determine the speeds along the rubber bands without making assumptions. The second treats the bands as a series of mass points, each pulled by the immediately adjacent mass points according to how much the rubber has been stretched on its two adjacent sides. This is a classic many-body F=ma problem, but convergence requires a particular numerical technique. It gives accurate p...

  19. Combining transcription factor binding affinities with open-chromatin data for accurate gene expression prediction

    Science.gov (United States)

    Schmidt, Florian; Gasparoni, Nina; Gasparoni, Gilles; Gianmoena, Kathrin; Cadenas, Cristina; Polansky, Julia K.; Ebert, Peter; Nordström, Karl; Barann, Matthias; Sinha, Anupam; Fröhler, Sebastian; Xiong, Jieyi; Dehghani Amirabad, Azim; Behjati Ardakani, Fatemeh; Hutter, Barbara; Zipprich, Gideon; Felder, Bärbel; Eils, Jürgen; Brors, Benedikt; Chen, Wei; Hengstler, Jan G.; Hamann, Alf; Lengauer, Thomas; Rosenstiel, Philip; Walter, Jörn; Schulz, Marcel H.

    2017-01-01

    The binding and contribution of transcription factors (TF) to cell specific gene expression is often deduced from open-chromatin measurements to avoid costly TF ChIP-seq assays. Thus, it is important to develop computational methods for accurate TF binding prediction in open-chromatin regions (OCRs). Here, we report a novel segmentation-based method, TEPIC, to predict TF binding by combining sets of OCRs with position weight matrices. TEPIC can be applied to various open-chromatin data, e.g. DNaseI-seq and NOMe-seq. Additionally, Histone-Marks (HMs) can be used to identify candidate TF binding sites. TEPIC computes TF affinities and uses open-chromatin/HM signal intensity as quantitative measures of TF binding strength. Using machine learning, we find low affinity binding sites to improve our ability to explain gene expression variability compared to the standard presence/absence classification of binding sites. Further, we show that both footprints and peaks capture essential TF binding events and lead to a good prediction performance. In our application, gene-based scores computed by TEPIC with one open-chromatin assay nearly reach the quality of several TF ChIP-seq data sets. Finally, these scores correctly predict known transcriptional regulators as illustrated by the application to novel DNaseI-seq and NOMe-seq data for primary human hepatocytes and CD4+ T-cells, respectively. PMID:27899623
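
    A hedged sketch of the affinity idea TEPIC builds on (a TRAP-style quantitative score): rather than a hard hit/no-hit PWM cutoff, sum a Boltzmann-weighted mismatch score over every position of an open-chromatin region. The scoring constants and matrix convention are placeholders, not TEPIC's implementation.

    ```python
    import numpy as np

    def region_affinity(seq, pwm, lam=0.7):
        """seq: DNA string (A/C/G/T only); pwm: (motif_len, 4) log-odds matrix over A,C,G,T."""
        index = {"A": 0, "C": 1, "G": 2, "T": 3}
        motif_len = pwm.shape[0]
        best = pwm.max(axis=1).sum()                              # score of the consensus site
        affinity = 0.0
        for i in range(len(seq) - motif_len + 1):
            s = sum(pwm[j, index[seq[i + j]]] for j in range(motif_len))
            affinity += np.exp((s - best) / lam)                  # Boltzmann weight of the mismatch energy
        return affinity
    ```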

  20. Combining transcription factor binding affinities with open-chromatin data for accurate gene expression prediction.

    Science.gov (United States)

    Schmidt, Florian; Gasparoni, Nina; Gasparoni, Gilles; Gianmoena, Kathrin; Cadenas, Cristina; Polansky, Julia K; Ebert, Peter; Nordström, Karl; Barann, Matthias; Sinha, Anupam; Fröhler, Sebastian; Xiong, Jieyi; Dehghani Amirabad, Azim; Behjati Ardakani, Fatemeh; Hutter, Barbara; Zipprich, Gideon; Felder, Bärbel; Eils, Jürgen; Brors, Benedikt; Chen, Wei; Hengstler, Jan G; Hamann, Alf; Lengauer, Thomas; Rosenstiel, Philip; Walter, Jörn; Schulz, Marcel H

    2017-01-09

    The binding and contribution of transcription factors (TF) to cell specific gene expression is often deduced from open-chromatin measurements to avoid costly TF ChIP-seq assays. Thus, it is important to develop computational methods for accurate TF binding prediction in open-chromatin regions (OCRs). Here, we report a novel segmentation-based method, TEPIC, to predict TF binding by combining sets of OCRs with position weight matrices. TEPIC can be applied to various open-chromatin data, e.g. DNaseI-seq and NOMe-seq. Additionally, Histone-Marks (HMs) can be used to identify candidate TF binding sites. TEPIC computes TF affinities and uses open-chromatin/HM signal intensity as quantitative measures of TF binding strength. Using machine learning, we find low affinity binding sites to improve our ability to explain gene expression variability compared to the standard presence/absence classification of binding sites. Further, we show that both footprints and peaks capture essential TF binding events and lead to a good prediction performance. In our application, gene-based scores computed by TEPIC with one open-chromatin assay nearly reach the quality of several TF ChIP-seq data sets. Finally, these scores correctly predict known transcriptional regulators as illustrated by the application to novel DNaseI-seq and NOMe-seq data for primary human hepatocytes and CD4+ T-cells, respectively.

  1. ILT based defect simulation of inspection images accurately predicts mask defect printability on wafer

    Science.gov (United States)

    Deep, Prakash; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2016-05-01

    At advanced technology nodes, mask complexity has increased because of the large-scale use of resolution enhancement technologies (RET), which include Optical Proximity Correction (OPC), Inverse Lithography Technology (ILT), and Source Mask Optimization (SMO). The number of defects detected during inspection of such masks has increased drastically, and differentiating critical from non-critical defects has become more challenging, complex, and time consuming. Because of the significant defectivity of EUVL masks and the non-availability of actinic inspection, it is important, and also challenging, to predict the criticality of defects for printability on wafer. This is one of the significant barriers to the adoption of EUVL for semiconductor manufacturing. Techniques to determine defect criticality from images captured with non-actinic inspection are needed until actinic inspection becomes available. High-resolution inspection of photomask images detects many defects, which are used for process and mask qualification. Repairing all defects is not practical and probably not required; however, it is imperative to know which defects are severe enough to impact the wafer before repair. Additionally, a wafer printability check is always desired after repairing a defect. AIMS™ review is the industry standard for this, but performing AIMS™ review for all defects is expensive and very time consuming. A fast, accurate, and economical mechanism is desired that can predict defect printability on wafer quickly from images captured with a high-resolution inspection machine. Predicting defect printability from such images is challenging because the high-resolution images do not correlate with actual mask contours. The challenge is increased by the use of optical conditions during inspection that differ from the actual scanner conditions, so defects found in such images do not correlate directly with their actual impact on wafer. Our automated defect simulation tool predicts

  2. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model

    Science.gov (United States)

    Li, Zhen; Zhang, Renyu

    2017-01-01

    Motivation Protein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs is still of low quality and not very useful for de novo structure prediction. Method This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformation of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformation of pairwise information including output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and complex sequence-structure relationship and thus, obtain higher-quality contact prediction regardless of how many sequence homologs are available for proteins in question. Results Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact
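
    The core building block described above can be sketched with a toy, single-channel residual block. The NumPy code below is only an illustration of the "identity shortcut plus two convolutions" idea behind residual networks; the real model is deep, multi-channel and trained, and all sizes and kernels here are made up.

```python
import numpy as np

def conv2d_same(feature_map, kernel):
    """Naive single-channel 2D convolution (cross-correlation) with 'same' padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(feature_map, ((ph, ph), (pw, pw)))
    out = np.zeros_like(feature_map, dtype=float)
    for i in range(feature_map.shape[0]):
        for j in range(feature_map.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def residual_block(x, k1, k2):
    """y = relu(x + f(x)): two convolutions with a ReLU, plus the identity shortcut."""
    hidden = np.maximum(conv2d_same(x, k1), 0.0)   # first convolution -> ReLU
    hidden = conv2d_same(hidden, k2)               # second convolution
    return np.maximum(x + hidden, 0.0)             # shortcut connection -> ReLU

pairwise_features = np.random.rand(8, 8)           # toy L x L pairwise feature map
k1 = np.random.randn(3, 3) * 0.1
k2 = np.random.randn(3, 3) * 0.1
print(residual_block(pairwise_features, k1, k2).shape)
```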

  3. Genetic crossovers are predicted accurately by the computed human recombination map.

    Directory of Open Access Journals (Sweden)

    Pavel P Khil

    2010-01-01

    Full Text Available Hotspots of meiotic recombination can change rapidly over time. This instability and the reported high level of inter-individual variation in meiotic recombination put into question the accuracy of the calculated hotspot map, which is based on the summation of past genetic crossovers. To estimate the accuracy of the computed recombination rate map, we have mapped genetic crossovers to a median resolution of 70 kb in 10 CEPH pedigrees. We then compared the positions of crossovers with the hotspots computed from HapMap data and performed extensive computer simulations to compare the observed distributions of crossovers with the distributions expected from the calculated recombination rate maps. Here we show that a population-averaged hotspot map computed from linkage disequilibrium data predicts present-day genetic crossovers well. We find that computed hotspot maps accurately estimate both the strength and the position of meiotic hotspots. An in-depth examination of unpredicted crossovers shows that they are preferentially located in regions where hotspots are found in other populations. In summary, we find that by combining several computed population-specific maps we can capture the variation in individual hotspots to generate a hotspot map that can predict almost all present-day genetic crossovers.
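
    The comparison step described above amounts to interval overlap counting. The sketch below, with invented coordinates, shows the simplest version: how many mapped crossover intervals fall within a computed hotspot map.

```python
def overlaps_any(interval, hotspots):
    """True if the (start, end) interval overlaps any computed hotspot."""
    start, end = interval
    return any(hs_start < end and start < hs_end for hs_start, hs_end in hotspots)

# Toy crossover intervals (bp) and toy computed hotspots, not real data.
crossovers = [(1_200_000, 1_270_000), (3_050_000, 3_120_000)]
hotspots = [(1_250_000, 1_252_000), (9_000_000, 9_002_000)]
hits = sum(overlaps_any(c, hotspots) for c in crossovers)
print(f"{hits} of {len(crossovers)} crossovers overlap a computed hotspot")
```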

  4. Intermolecular potentials and the accurate prediction of the thermodynamic properties of water

    Energy Technology Data Exchange (ETDEWEB)

    Shvab, I.; Sadus, Richard J., E-mail: rsadus@swin.edu.au [Centre for Molecular Simulation, Swinburne University of Technology, PO Box 218, Hawthorn, Victoria 3122 (Australia)

    2013-11-21

    The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm³ for a wide range of temperatures (298–650 K) and pressures (0.1–700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.

  5. Intermolecular potentials and the accurate prediction of the thermodynamic properties of water

    Science.gov (United States)

    Shvab, I.; Sadus, Richard J.

    2013-11-01

    The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm³ for a wide range of temperatures (298-650 K) and pressures (0.1-700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.
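
    Several of the response functions listed above follow from equilibrium fluctuations sampled along the trajectory. As a hedged illustration (not the authors' code), the isochoric heat capacity can be estimated from the variance of the total energy in the canonical ensemble:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def isochoric_heat_capacity(total_energies_joule, temperature_kelvin):
    """Cv from canonical (NVT) energy fluctuations: Cv = <dE^2> / (k_B T^2)."""
    return np.var(total_energies_joule) / (K_B * temperature_kelvin ** 2)

# Toy per-frame total energies (J); a real MD trajectory would supply these.
rng = np.random.default_rng(1)
energies = rng.normal(loc=-3.0e-17, scale=2.0e-19, size=10_000)
print(isochoric_heat_capacity(energies, 298.15))
```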

  6. Differential contribution of visual and auditory information to accurately predict the direction and rotational motion of a visual stimulus.

    Science.gov (United States)

    Park, Seoung Hoon; Kim, Seonjin; Kwon, MinHyuk; Christou, Evangelos A

    2016-03-01

    Visual and auditory information are critical for perception and for enhancing the ability of an individual to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identifying the direction and rotational motion of the stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of the stimulus based on visual and auditory information. In this study, we recruited 9 expert table-tennis players and used the table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information. The goal was to anticipate the direction of the service (left or right) and the rotational motion of the service (topspin, sidespin, or cut). We recorded their responses and quantified the following outcomes: (i) directional accuracy and (ii) rotational motion accuracy. The response accuracy was the number of accurate predictions relative to the total number of trials. The ability of the participants to predict the direction of the service accurately increased with additional visual information but not with auditory information. In contrast, the ability of the participants to predict the rotational motion of the service accurately increased with the addition of auditory information to visual information but not with additional visual information alone. In conclusion, this finding demonstrates that visual information enhances the ability of an individual to accurately predict the direction of the stimulus, whereas additional auditory information enhances the ability of an individual to accurately predict the rotational motion of the stimulus.

  7. Accurate Mobility Modeling and Location Prediction Based on Pattern Analysis of Handover Series in Mobile Networks

    Directory of Open Access Journals (Sweden)

    Péter Fülöp

    2009-01-01

    Full Text Available The efficient dimensioning of cellular wireless access networks depends highly on the accuracy of the underlying mathematical models of user distribution and traffic estimation. Mobility prediction is also considered an effective method contributing to the accuracy of IP multicast-based multimedia transmissions and ad hoc routing algorithms. In this paper we focus on the tradeoff between the accuracy and the complexity of the mathematical models used to describe user movements in the network. We propose mobility model extensions that utilize the user's movement history, thus providing more accurate results than other widely used models in the literature. The new models are applicable in real-life scenarios because they rely on additional information that is readily available in cellular networks (e.g., the handover history). The complexity of the proposed models is analyzed, and the accuracy is justified by means of simulation.
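
    A minimal sketch of how a handover history can drive a movement-history-based predictor: an order-k Markov model counts which cell tends to follow each recent sequence of cells. This is only an illustrative stand-in for the paper's model extensions, with made-up cell identifiers.

```python
from collections import Counter, defaultdict

def train_transition_model(handover_history, order=2):
    """Count next-cell frequencies conditioned on the last `order` visited cells."""
    counts = defaultdict(Counter)
    for i in range(order, len(handover_history)):
        context = tuple(handover_history[i - order:i])
        counts[context][handover_history[i]] += 1
    return counts

def predict_next_cell(counts, recent_cells):
    """Most frequent successor of the given context, or None if it is unseen."""
    context = tuple(recent_cells)
    if context not in counts:
        return None  # a real system would fall back to a lower-order model
    return counts[context].most_common(1)[0][0]

history = ["A", "B", "C", "A", "B", "C", "A", "B"]
model = train_transition_model(history, order=2)
print(predict_next_cell(model, ["A", "B"]))  # -> "C"
```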

  8. Fast and accurate prediction of numerical relativity waveforms from binary black hole mergers using surrogate models

    CERN Document Server

    Blackman, Jonathan; Galley, Chad R; Szilagyi, Bela; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-01-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. In this paper, we construct an accurate and fast-to-evaluate surrogate model for numerical relativity (NR) waveforms from non-spinning binary black hole coalescences with mass ratios from 1 to 10 and durations corresponding to about 15 orbits before merger. Our surrogate, which is built using reduced order modeling techniques, is distinct from traditional modeling efforts. We find that the full multi-mode surrogate model agrees with waveforms generated by NR to within the numerical error of the NR code. In particular, we show that our modeling strategy produces surrogates which can correctly predict NR waveforms that were not used for the surrogate's training. For all practical purposes, then, the surrogate waveform model is equivalent to the high-accuracy, large-scale simulation waveform but can be evaluated in a millisecond to a second dependin...
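
    The reduced-order-modeling idea can be sketched with plain linear algebra: build a small basis from training waveforms via an SVD, then represent new waveforms by a handful of basis coefficients that are cheap to evaluate. The toy "waveforms" below are simple sine curves, not numerical relativity output, and the basis size is arbitrary.

```python
import numpy as np

# Toy training set: rows are time samples, columns are training parameter values.
times = np.linspace(0.0, 1.0, 200)
mass_ratios = np.linspace(1.0, 10.0, 20)
training = np.array([np.sin(2 * np.pi * q * times) / q for q in mass_ratios]).T

# Reduced basis from an SVD; keep the few modes that capture most of the variance.
U, singular_values, _ = np.linalg.svd(training, full_matrices=False)
basis = U[:, :5]

# Project a "new" waveform onto the basis and reconstruct it from 5 coefficients.
new_wave = np.sin(2 * np.pi * 4.3 * times) / 4.3
coefficients = basis.T @ new_wave
reconstruction = basis @ coefficients
print("max reconstruction error:", np.max(np.abs(reconstruction - new_wave)))
```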

  9. Distance scaling method for accurate prediction of slowly varying magnetic fields in satellite missions

    Science.gov (United States)

    Zacharias, Panagiotis P.; Chatzineofytou, Elpida G.; Spantideas, Sotirios T.; Capsalis, Christos N.

    2016-07-01

    In the present work, the determination of the magnetic behavior of localized magnetic sources from near-field measurements is examined. The distance power law of the magnetic field fall-off is used in various cases to accurately predict the magnetic signature of an equipment under test (EUT) consisting of multiple alternating current (AC) magnetic sources. Parameters concerning the location of the observation points (magnetometers) are therefore studied to this end. The results clearly show that these parameters are independent of the EUT's size and layout. Additionally, the techniques developed in the present study enable the placing of the magnetometers close to the EUT, thus achieving a high signal-to-noise ratio (SNR). Finally, the proposed method is verified by real measurements, using a mobile phone as the EUT.
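
    The distance power law at the heart of the method can be demonstrated with a short fit: near-field magnitudes measured at a few distances are regressed in log-log space to estimate the fall-off exponent and extrapolate the field. The readings below are invented for illustration.

```python
import numpy as np

# Toy magnetometer readings: distance from the source (m) and field magnitude (nT).
r = np.array([0.3, 0.4, 0.5, 0.7, 1.0])
B = np.array([120.0, 51.0, 26.5, 9.7, 3.3])

# Fit B(r) = A * r**slope by linear least squares on log-log data (slope = -n).
slope, log_A = np.polyfit(np.log(r), np.log(B), 1)
A = np.exp(log_A)
print(f"fall-off exponent n ~ {-slope:.2f}")            # roughly 3 for a dipole
print(f"extrapolated field at 2 m: {A * 2.0**slope:.2f} nT")
```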

  10. Mass transport and direction dependent battery modeling for accurate on-line power capability prediction

    Energy Technology Data Exchange (ETDEWEB)

    Wiegman, H.L.N. [General Electric Corporate Research and Development, Schenectady, NY (United States)

    2000-07-01

    Some recent advances in battery modeling were discussed with reference to on-line impedance estimates and power performance predictions for aqueous solution, porous electrode cell structures. The objective was to determine which methods accurately estimate a battery's internal state and power capability while operating a charge-sustaining hybrid electric vehicle (HEV) over a wide range of driving conditions. The enhancements to the Randles-Ershler equivalent electrical model of common cells with lead-acid, nickel-cadmium and nickel-metal hydride chemistries were described. This study also investigated which impedances are sensitive to boundary-layer charge concentrations and mass transport limitations. Non-linear impedances were shown to significantly affect the battery's ability to process power. The main advantage of estimating a battery's impedance state and power capability on-line is that the battery can be optimally sized for any application.

  11. In vitro transcription accurately predicts lac repressor phenotype in vivo in Escherichia coli

    Directory of Open Access Journals (Sweden)

    Matthew Almond Sochor

    2014-07-01

    Full Text Available A multitude of studies have looked at the in vivo and in vitro behavior of the lac repressor binding to DNA and effector molecules in order to study transcriptional repression; however, these studies are not always reconcilable. Here we use in vitro transcription to directly mimic the in vivo system in order to build a self-consistent set of experiments to directly compare in vivo and in vitro genetic repression. A thermodynamic model of the lac repressor binding to operator DNA and effector is used to link DNA occupancy to either normalized in vitro mRNA product or normalized in vivo fluorescence of a regulated gene, YFP. Accurate measurements of repressor, DNA and effector concentrations were made both in vivo and in vitro, allowing for direct modeling of the entire thermodynamic equilibrium. In vivo repression profiles are accurately predicted from the given in vitro parameters when molecular crowding is considered. Interestingly, our measured repressor–operator DNA affinity differs significantly from previous in vitro measurements. The literature values are unable to replicate the in vivo binding data. We therefore conclude that the repressor–DNA affinity is much weaker than previously thought. This finding suggests that in vitro techniques specifically designed to mimic the in vivo process may be necessary to replicate the native system.
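
    The thermodynamic linkage described above can be written down in a few lines: operator occupancy follows from the concentration of active repressor (reduced by effector binding), and normalized expression is taken as the unbound fraction. This is a deliberately simplified two-state toy model with hypothetical constants, not the paper's full equilibrium treatment.

```python
def operator_occupancy(repressor_m, kd_operator_m, effector_m=0.0, kd_effector_m=1e-6):
    """Fraction of operator DNA bound by repressor in a simple two-state model.

    Effector (e.g. IPTG) is assumed to inactivate repressor by single-site binding,
    a strong simplification of the real tetrameric lac repressor.
    """
    active_repressor = repressor_m / (1.0 + effector_m / kd_effector_m)
    return active_repressor / (kd_operator_m + active_repressor)

def relative_expression(occupancy):
    """Normalized reporter output, assuming transcription only from free operator."""
    return 1.0 - occupancy

occupancy = operator_occupancy(repressor_m=50e-9, kd_operator_m=1e-10, effector_m=1e-3)
print(relative_expression(occupancy))
```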

  12. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction.

    Science.gov (United States)

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-02-14

    Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.

  13. Measuring solar reflectance Part I: Defining a metric that accurately predicts solar heat gain

    Energy Technology Data Exchange (ETDEWEB)

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul

    2010-05-14

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland U.S. latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W/m², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool-roof net energy savings by as much as 23%. We define clear-sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W/m², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.
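
    The practical consequence of choosing one reflectance metric over another can be sketched directly: for an opaque surface, solar heat gain is simply the unreflected fraction of the incident irradiance, so a modest difference in the rated reflectance translates into tens of W/m² of absorbed flux. The numbers below are illustrative, not taken from the report.

```python
def solar_heat_gain(solar_reflectance, irradiance_w_per_m2=1000.0):
    """Absorbed solar flux (W/m^2) for an opaque surface of given solar reflectance."""
    return (1.0 - solar_reflectance) * irradiance_w_per_m2

# A hypothetical "cool colored" roof rated 0.33 by one metric but 0.25 by another:
print(solar_heat_gain(0.25) - solar_heat_gain(0.33))   # 80 W/m^2 extra heat gain
```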

  14. A machine learned classifier that uses gene expression data to accurately predict estrogen receptor status.

    Directory of Open Access Journals (Sweden)

    Meysam Bastani

    Full Text Available BACKGROUND: Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER) status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin-embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER status based on RNA expression can provide more objective, quantitative and reproducible test results. METHODS: To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin-fixed tumor samples. RESULTS: This produced a three-gene classifier that can predict the ER status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER status. CONCLUSIONS: Our efficient and parsimonious classifier lends itself to high-throughput, highly accurate and low-cost RNA-based assessments of ER status, suitable for routine high-throughput clinical use. This analytic method provides a proof of principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions.
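
    The workflow above (few-gene input, k-fold cross-validated accuracy) can be mimicked with a tiny synthetic example, assuming scikit-learn is available. The arrays below are random numbers standing in for expression values; nothing here reproduces the actual three-gene classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 176
# Hypothetical ER labels and expression of three informative genes.
er_status = rng.integers(0, 2, n_samples)
expression = rng.normal(size=(n_samples, 3)) + 2.0 * er_status[:, None]

classifier = LogisticRegression()
scores = cross_val_score(classifier, expression, er_status, cv=10)
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```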

  15. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics

    Directory of Open Access Journals (Sweden)

    Zheng-Wei Li

    2016-08-01

    Full Text Available Protein-protein interactions (PPIs) occur at almost all levels of cell function and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering the molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs identified by experimental approaches only cover a small fraction of the whole PPI network; furthermore, those approaches hold inherent disadvantages, such as being time-consuming, expensive, and having high false positive rates. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixture of physicochemical and evolutionary-based feature extraction for predicting PPIs using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method mainly consist of introducing an effective feature extraction method that can capture discriminative features from evolutionary-based information and physicochemical characteristics, after which a powerful and robust DVM classifier is employed. To the best of our knowledge, this is the first time that the DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can be taken as a useful supplementary tool to traditional experimental methods for future proteomics research.

  16. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics.

    Science.gov (United States)

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell function and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering the molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs identified by experimental approaches only cover a small fraction of the whole PPI network; furthermore, those approaches hold inherent disadvantages, such as being time-consuming, expensive, and having high false positive rates. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixture of physicochemical and evolutionary-based feature extraction for predicting PPIs using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method mainly consist of introducing an effective feature extraction method that can capture discriminative features from evolutionary-based information and physicochemical characteristics, after which a powerful and robust DVM classifier is employed. To the best of our knowledge, this is the first time that the DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can be taken as a useful supplementary tool to traditional experimental methods for future proteomics research.

  17. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    Science.gov (United States)

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  18. Accurate load prediction by BEM with airfoil data from 3D RANS simulations

    Science.gov (United States)

    Schneider, Marc S.; Nitzsche, Jens; Hennings, Holger

    2016-09-01

    In this paper, two methods for the extraction of airfoil coefficients from 3D CFD simulations of a wind turbine rotor are investigated, and these coefficients are used to improve the load prediction of a BEM code. The coefficients are extracted from a number of steady RANS simulations, using either averaging of velocities in annular sections, or an inverse BEM approach for determination of the induction factors in the rotor plane. It is shown that these 3D rotor polars are able to capture the rotational augmentation at the inner part of the blade as well as the load reduction by 3D effects close to the blade tip. They are used as input to a simple BEM code and the results of this BEM with 3D rotor polars are compared to the predictions of BEM with 2D airfoil coefficients plus common empirical corrections for stall delay and tip loss. While BEM with 2D airfoil coefficients produces a very different radial distribution of loads than the RANS simulation, the BEM with 3D rotor polars manages to reproduce the loads from RANS very accurately for a variety of load cases, as long as the blade pitch angle is not too different from the cases from which the polars were extracted.
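
    For readers unfamiliar with the BEM machinery referred to above, the sketch below shows the standard fixed-point iteration for the axial and tangential induction factors in a single annulus, driven by a lift/drag polar. It is a bare-bones textbook formulation (no tip-loss or high-induction corrections), with an invented polar and inputs, not the authors' code.

```python
import math

def bem_annulus(local_tsr, solidity, polar, pitch_plus_twist_deg, tol=1e-6, max_iter=200):
    """Fixed-point BEM iteration for one annulus; returns (axial, tangential) induction.

    `polar(alpha_deg)` returns (Cl, Cd), e.g. interpolated from 2D or 3D rotor polars.
    """
    a, a_tan = 0.3, 0.0
    for _ in range(max_iter):
        phi = math.atan2(1.0 - a, (1.0 + a_tan) * local_tsr)      # inflow angle
        alpha = math.degrees(phi) - pitch_plus_twist_deg
        cl, cd = polar(alpha)
        cn = cl * math.cos(phi) + cd * math.sin(phi)              # normal force coefficient
        ct = cl * math.sin(phi) - cd * math.cos(phi)              # tangential force coefficient
        a_new = 1.0 / (4.0 * math.sin(phi) ** 2 / (solidity * cn) + 1.0)
        a_tan_new = 1.0 / (4.0 * math.sin(phi) * math.cos(phi) / (solidity * ct) - 1.0)
        if abs(a_new - a) < tol and abs(a_tan_new - a_tan) < tol:
            break
        a, a_tan = a_new, a_tan_new
    return a, a_tan

# Toy polar: thin-airfoil lift slope with constant drag.
toy_polar = lambda alpha_deg: (2.0 * math.pi * math.radians(alpha_deg), 0.01)
print(bem_annulus(local_tsr=6.0, solidity=0.05, polar=toy_polar, pitch_plus_twist_deg=2.0))
```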

  19. ChIP-seq Accurately Predicts Tissue-Specific Activity of Enhancers

    Energy Technology Data Exchange (ETDEWEB)

    Visel, Axel; Blow, Matthew J.; Li, Zirong; Zhang, Tao; Akiyama, Jennifer A.; Holt, Amy; Plajzer-Frick, Ingrid; Shoukry, Malak; Wright, Crystal; Chen, Feng; Afzal, Veena; Ren, Bing; Rubin, Edward M.; Pennacchio, Len A.

    2009-02-01

    A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover since they are scattered amongst the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here, we performed chromatin immunoprecipitation with the enhancer-associated protein p300, followed by massively-parallel sequencing, to map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain, and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases revealed reproducible enhancer activity in those tissues predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities and suggest that such datasets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale.

  20. Can radiation therapy treatment planning system accurately predict surface doses in postmastectomy radiation therapy patients?

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Sharon [National University of Singapore, Yong Loo Lin School of Medicine (Singapore); Back, Michael [Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales (Australia); Tan, Poh Wee; Lee, Khai Mun; Baggarley, Shaun [National University, Cancer Institute, Department of Radiation Oncology, National University, Hospital, Tower Block (Singapore); Lu, Jaide Jay, E-mail: mdcljj@nus.edu.sg [National University of Singapore, Yong Loo Lin School of Medicine (Singapore); National University, Cancer Institute, Department of Radiation Oncology, National University, Hospital, Tower Block (Singapore)

    2012-07-01

    Skin doses have been an important factor in the dose prescription for breast radiotherapy. Recent advances in radiotherapy treatment techniques, such as intensity-modulated radiation therapy (IMRT), and new treatment schemes such as hypofractionated breast therapy have made the precise determination of the surface dose necessary. Detailed information on the dose at various depths of the skin is also critical in designing new treatment strategies. The purpose of this work was to assess the accuracy of surface dose calculation by a clinically used treatment planning system against doses measured by thermoluminescence dosimeters (TLDs) in a customized chest wall phantom. This study involved the construction of a chest wall phantom for skin dose assessment. Seven TLDs were distributed throughout each right chest wall phantom to give adequate representation of measured radiation doses. Point doses from the CMS Xio® treatment planning system (TPS) were calculated for each relevant TLD position and the results correlated. There was no significant difference between the absorbed doses measured by TLD and the doses calculated by the TPS (p > 0.05, one-tailed). Dose accuracy of up to 2.21% was found. The deviations from the calculated absorbed doses were overall larger (3.4%) when wedges and bolus were used. A 3D radiotherapy TPS is a useful and accurate tool for assessing surface dose. Our studies have shown that, for tangential treatment of the chest wall after mastectomy, doses calculated by the TPS agree well with doses measured by TLD dosimetry, so surface doses can be accurately predicted.

  1. Predicting accurate fluorescent spectra for high molecular weight polycyclic aromatic hydrocarbons using density functional theory

    Science.gov (United States)

    Powell, Jacob; Heider, Emily C.; Campiglia, Andres; Harper, James K.

    2016-10-01

    The ability of density functional theory (DFT) methods to predict accurate fluorescence spectra for polycyclic aromatic hydrocarbons (PAHs) is explored. Two methods, PBE0 and CAM-B3LYP, are evaluated both in the gas phase and in solution. Spectra for several of the most toxic PAHs are predicted and compared to experiment, including three isomers of C24H14 and a PAH containing heteroatoms. Unusually high-resolution experimental spectra are obtained for comparison by analyzing each PAH at 4.2 K in an n-alkane matrix. All theoretical spectra visually conform to the profiles of the experimental data but are systematically offset by a small amount. Specifically, when solvent is included the PBE0 functional overestimates peaks by 16.1 ± 6.6 nm while CAM-B3LYP underestimates the same transitions by 14.5 ± 7.6 nm. These calculated spectra can be empirically corrected to decrease the uncertainties to 6.5 ± 5.1 and 5.7 ± 5.1 nm for the PBE0 and CAM-B3LYP methods, respectively. A comparison of computed spectra in the gas phase indicates that the inclusion of n-octane shifts peaks by +11 nm on average and this change is roughly equivalent for PBE0 and CAM-B3LYP. An automated approach for comparing spectra is also described that minimizes residuals between a given theoretical spectrum and all available experimental spectra. This approach identifies the correct spectrum in all cases and excludes approximately 80% of the incorrect spectra, demonstrating that an automated search of theoretical libraries of spectra may eventually become feasible.
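
    The automated matching step can be sketched as a least-squares search over a spectral library, with an allowance for the small systematic wavelength offset noted above. The Gaussian "spectra" below are synthetic placeholders, and the rigid-shift search is a simplification of whatever residual minimization the authors actually used.

```python
import numpy as np

def best_match(theoretical, library, max_shift_points=40):
    """Return (name, residual, shift) of the library spectrum with the smallest
    sum-of-squares residual, allowing a rigid shift of the computed spectrum."""
    best = (None, np.inf, 0)
    for name, experimental in library.items():
        for shift in range(-max_shift_points, max_shift_points + 1):
            residual = np.sum((np.roll(theoretical, shift) - experimental) ** 2)
            if residual < best[1]:
                best = (name, residual, shift)
    return best

wavelengths = np.linspace(300.0, 500.0, 401)                  # 0.5 nm grid
library = {"PAH_A": np.exp(-((wavelengths - 400.0) / 5.0) ** 2),
           "PAH_B": np.exp(-((wavelengths - 430.0) / 5.0) ** 2)}
computed = np.exp(-((wavelengths - 408.0) / 5.0) ** 2)        # ~8 nm systematic offset
print(best_match(computed, library))                          # identifies "PAH_A"
```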

  2. Towards accurate cosmological predictions for rapidly oscillating scalar fields as dark matter

    CERN Document Server

    Ureña-López, L Arturo

    2015-01-01

    As we are entering the era of precision cosmology, it is necessary to count on accurate cosmological predictions from any proposed model of dark matter. In this paper we present a novel approach to the cosmological evolution of scalar fields that eases their analytic and numerical analysis at the background level and at linear order in perturbations. We apply the method to a scalar field endowed with a quadratic potential and revisit its properties as dark matter. Some of the results known in the literature are recovered, and a better understanding of the physical properties of the model is provided. It is shown that the Jeans wavenumber, defined as $k_J = a \sqrt{mH}$, is directly related to the suppression of linear perturbations at wavenumbers $k>k_J$. We also discuss some semi-analytical results that are well satisfied by the full numerical solutions obtained from an amended version of the CMB code CLASS. Finally we draw some of the implications that this new treatment of the equations of motion may have in t...

  3. Accurate prediction of DnaK-peptide binding via homology modelling and experimental data.

    Directory of Open Access Journals (Sweden)

    Joost Van Durme

    2009-08-01

    Full Text Available Molecular chaperones are essential elements of the protein quality control machinery that governs translocation and folding of nascent polypeptides, refolding and degradation of misfolded proteins, and activation of a wide range of client proteins. The prokaryotic heat-shock protein DnaK is the E. coli representative of the ubiquitous Hsp70 family, which specializes in the binding of exposed hydrophobic regions in unfolded polypeptides. Accurate prediction of DnaK binding sites in E. coli proteins is an essential prerequisite to understand the precise function of this chaperone and the properties of its substrate proteins. In order to map DnaK binding sites in protein sequences, we have developed an algorithm that combines sequence information from peptide binding experiments and structural parameters from homology modelling. We show that this combination significantly outperforms either single approach. The final predictor had a Matthews correlation coefficient (MCC of 0.819 when assessed over the 144 tested peptide sequences to detect true positives and true negatives. To test the robustness of the learning set, we have conducted a simulated cross-validation, where we omit sequences from the learning sets and calculate the rate of repredicting them. This resulted in a surprisingly good MCC of 0.703. The algorithm was also able to perform equally well on a blind test set of binders and non-binders, of which there was no prior knowledge in the learning sets. The algorithm is freely available at http://limbo.vib.be.
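
    The quoted MCC values summarize a binary confusion matrix; the snippet below simply restates the formula so the 0.819 and 0.703 figures can be interpreted. The confusion counts shown are hypothetical.

```python
import math

def matthews_cc(tp, tn, fp, fn):
    """Matthews correlation coefficient of a binary predictor."""
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denominator if denominator else 0.0

# Hypothetical confusion counts over 144 tested peptides (binders vs. non-binders).
print(matthews_cc(tp=60, tn=71, fp=6, fn=7))   # ~0.82
```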

  4. Cluster abundance in chameleon f(R) gravity I: toward an accurate halo mass function prediction

    Science.gov (United States)

    Cataneo, Matteo; Rapetti, David; Lombriser, Lucas; Li, Baojiu

    2016-12-01

    We refine the mass and environment dependent spherical collapse model of chameleon f(R) gravity by calibrating a phenomenological correction inspired by the parameterized post-Friedmann framework against high-resolution N-body simulations. We employ our method to predict the corresponding modified halo mass function, and provide fitting formulas to calculate the enhancement of the f(R) halo abundance with respect to that of General Relativity (GR) to within a precision of ≲5% relative to the results obtained in the simulations. Similar accuracy can be achieved for the full f(R) mass function on the condition that the modeling of the reference GR abundance of halos is accurate at the percent level. We use our fits to forecast constraints on the additional scalar degree of freedom of the theory, finding that upper bounds competitive with current Solar System tests are within reach of cluster number count analyses from ongoing and upcoming surveys at much larger scales. Importantly, the flexibility of our method also allows it to be applied to other scalar-tensor theories characterized by a mass and environment dependent spherical collapse.

  5. Accurate prediction of band gaps and optical properties of HfO2

    Science.gov (United States)

    Ondračka, Pavel; Holec, David; Nečas, David; Zajíčková, Lenka

    2016-10-01

    We report on the optical properties of various polymorphs of hafnia predicted within the framework of density functional theory. The full-potential linearised augmented plane wave method was employed together with the Tran-Blaha modified Becke-Johnson potential (TB-mBJ) for exchange and the local density approximation for correlation. Unit cells of monoclinic, cubic and tetragonal crystalline hafnia, and a simulated-annealing-based model of amorphous hafnia, were fully relaxed with respect to internal positions and lattice parameters. Electronic structures and band gaps for monoclinic, cubic, tetragonal and amorphous hafnia were calculated using three different TB-mBJ parametrisations and the results were critically compared with the available experimental and theoretical reports. Conceptual differences between a straightforward comparison of experimental measurements to a calculated band gap on the one hand, and to a whole electronic structure (density of electronic states) on the other hand, were pointed out, suggesting the latter should be used whenever possible. Finally, dielectric functions were calculated at two levels, using the random phase approximation without local field effects and with the more accurate Bethe-Salpeter equation (BSE) to account for excitonic effects. We conclude that a satisfactory agreement with experimental data for HfO2 was obtained only in the latter case.

  6. Cluster abundance in chameleon $f(R)$ gravity I: toward an accurate halo mass function prediction

    CERN Document Server

    Cataneo, Matteo; Lombriser, Lucas; Li, Baojiu

    2016-01-01

    We refine the mass and environment dependent spherical collapse model of chameleon $f(R)$ gravity by calibrating a phenomenological correction inspired by the parameterized post-Friedmann framework against high-resolution $N$-body simulations. We employ our method to predict the corresponding modified halo mass function, and provide fitting formulas to calculate the fractional enhancement of the $f(R)$ halo abundance with respect to that of General Relativity (GR) within a precision of $\\lesssim 5\\%$ from the results obtained in the simulations. Similar accuracy can be achieved for the full $f(R)$ mass function on the condition that the modeling of the reference GR abundance of halos is accurate at the percent level. We use our fits to forecast constraints on the additional scalar degree of freedom of the theory, finding that upper bounds competitive with current Solar System tests are within reach of cluster number count analyses from ongoing and upcoming surveys at much larger scales. Importantly, the flexi...

  7. Accurate and Simplified Prediction of AVF for Delay and Energy Efficient Cache Design

    Institute of Scientific and Technical Information of China (English)

    An-Guo Ma; Yu Cheng; Zuo-Cheng Xing

    2011-01-01

    With continuous technology scaling, on-chip structures are becoming more and more susceptible to soft errors. The architectural vulnerability factor (AVF) has been introduced to quantify the architectural vulnerability of on-chip structures to soft errors. Recent studies have found that designing soft error protection techniques with awareness of AVF is greatly helpful to achieve a tradeoff between performance and reliability for several structures (i.e., issue queue, reorder buffer). The cache is one of the components most susceptible to soft errors and is commonly protected with error correcting codes (ECC). However, protecting caches closer to the processor (i.e., the L1 data cache (L1D)) using ECC can result in high overhead. Protecting caches without accurate knowledge of their vulnerability characteristics may lead to over-protection. Therefore, designing AVF-aware ECC is attractive for designers to balance performance, power and reliability for the cache, especially at an early design stage. In this paper, we improve the methodology of cache AVF computation and develop a new AVF estimation framework, a soft error reliability analysis based on SimpleScalar. We then characterize the dynamic vulnerability behavior of the L1D and detect the correlations between L1D AVF and various performance metrics. We propose to employ Bayesian additive regression trees to accurately model the variation of L1D AVF and to quantitatively explain the important effects of several key performance metrics on L1D AVF. Then, we employ a bump hunting technique to reduce the complexity of L1D AVF prediction and extract some simple selection rules based on several key performance metrics, thus enabling a simplified and fast estimation of L1D AVF. Based on this simplified and fast estimation of L1D AVF, intervals of high L1D AVF can be identified online, enabling us to develop an AVF-aware ECC technique to reduce the overhead of ECC. Experimental results show that compared with the traditional ECC technique

  8. Towards more accurate wind and solar power prediction by improving NWP model physics

    Science.gov (United States)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of the remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two projects concerning research in the field of renewable energy, namely ORKA*) and EWeLiNE**). Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post-processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m above ground are used for the estimation of the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  9. Accurate wavelength prediction of photonic crystal resonant reflection and applications in refractive index measurement

    DEFF Research Database (Denmark)

    Hermannsson, Pétur Gordon; Vannahme, Christoph; Smith, Cameron L. C.

    2014-01-01

    and superstrate materials. The importance of accounting for material dispersion in order to obtain accurate simulation results is highlighted, and a method for doing so using an iterative approach is demonstrated. Furthermore, an application for the model is demonstrated, in which the material dispersion... of a liquid is extracted from measured resonance wavelengths...

  10. Improved Ecosystem Predictions of the California Current System via Accurate Light Calculations

    Science.gov (United States)

    2011-09-30

    absorption, scatter, and backscatter coefficients) effects. However, once an accurate value of the scalar irradiance Eo(z,λ) has been computed to... photosynthesis. It is possible to compute PAR to the bottom of the euphotic zone in a fraction of a second of computer time, with errors of no more than a few

  11. Combining Evolutionary Information and an Iterative Sampling Strategy for Accurate Protein Structure Prediction.

    Directory of Open Access Journals (Sweden)

    Tatjana Braun

    2015-12-01

    Full Text Available Recent work has shown that the accuracy of ab initio structure prediction can be significantly improved by integrating evolutionary information in form of intra-protein residue-residue contacts. Following this seminal result, much effort is put into the improvement of contact predictions. However, there is also a substantial need to develop structure prediction protocols tailored to the type of restraints gained by contact predictions. Here, we present a structure prediction protocol that combines evolutionary information with the resolution-adapted structural recombination approach of Rosetta, called RASREC. Compared to the classic Rosetta ab initio protocol, RASREC achieves improved sampling, better convergence and higher robustness against incorrect distance restraints, making it the ideal sampling strategy for the stated problem. To demonstrate the accuracy of our protocol, we tested the approach on a diverse set of 28 globular proteins. Our method is able to converge for 26 out of the 28 targets and improves the average TM-score of the entire benchmark set from 0.55 to 0.72 when compared to the top ranked models obtained by the EVFold web server using identical contact predictions. Using a smaller benchmark, we furthermore show that the prediction accuracy of our method is only slightly reduced when the contact prediction accuracy is comparatively low. This observation is of special interest for protein sequences that only have a limited number of homologs.

  12. A machine learning approach to the accurate prediction of multi-leaf collimator positional errors

    Science.gov (United States)

    Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; In Park, Jong; Choi, Yunseok; Ye, Sung-Joon

    2016-03-01

    Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD  =  1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose volumetric histograms (DVH) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose volumetric parameters were in closer agreement to the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be

  13. Accurate Prediction of the Ammonia Probes of a Variable Proton-to-Electron Mass Ratio

    CERN Document Server

    Owens, Alec; Thiel, Walter; Špirko, Vladimir

    2015-01-01

    A comprehensive study of the mass sensitivity of the vibration-rotation-inversion transitions of $^{14}$NH$_3$, $^{15}$NH$_3$, $^{14}$ND$_3$, and $^{15}$ND$_3$ is carried out variationally using the TROVE approach. Variational calculations are robust and accurate, offering a new way to compute sensitivity coefficients. Particular attention is paid to the $\\Delta k=\\pm 3$ transitions between the accidentally coinciding rotation-inversion energy levels of the $\

  14. Enabling Computational Technologies for the Accurate Prediction/Description of Molecular Interactions in Condensed Phases

    Science.gov (United States)

    2014-10-08

    models to compute accurately the molecular interactions between a mobile or stationary phase and a target substrate or analyte, which are fundamental to diverse technologies, e.g., sensor or separation design.

  15. Accurate microRNA target prediction correlates with protein repression levels

    Directory of Open Access Journals (Sweden)

    Simossis Victor A

    2009-09-01

    Full Text Available Abstract Background MicroRNAs are small endogenously expressed non-coding RNA molecules that regulate target gene expression through translation repression or messenger RNA degradation. MicroRNA regulation is performed through pairing of the microRNA to sites in the messenger RNA of protein coding genes. Since experimental identification of miRNA target genes poses difficulties, computational microRNA target prediction is one of the key means in deciphering the role of microRNAs in development and disease. Results DIANA-microT 3.0 is an algorithm for microRNA target prediction which is based on several parameters calculated individually for each microRNA and combines conserved and non-conserved microRNA recognition elements into a final prediction score, which correlates with protein production fold change. Specifically, for each predicted interaction the program reports a signal to noise ratio and a precision score which can be used as an indication of the false positive rate of the prediction. Conclusion Recently, several computational target prediction programs were benchmarked based on a set of microRNA target genes identified by the pSILAC method. In this assessment DIANA-microT 3.0 was found to achieve the highest precision among the most widely used microRNA target prediction programs reaching approximately 66%. The DIANA-microT 3.0 prediction results are available online in a user friendly web server at http://www.microrna.gr/microT

  16. Sensor Data Fusion for Accurate Cloud Presence Prediction Using Dempster-Shafer Evidence Theory

    Directory of Open Access Journals (Sweden)

    Jesse S. Jin

    2010-10-01

    Full Text Available Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach in a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. The potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data that were recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent.
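
    Dempster's rule of combination, the core of the approach above, can be written compactly. The two mass functions below are invented stand-ins for the evidence produced by two radiation sensors; they are not the paper's actual basic probability assignments.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions keyed by frozenset hypotheses."""
    combined, conflict = {}, 0.0
    for hyp_a, mass_a in m1.items():
        for hyp_b, mass_b in m2.items():
            intersection = hyp_a & hyp_b
            if intersection:
                combined[intersection] = combined.get(intersection, 0.0) + mass_a * mass_b
            else:
                conflict += mass_a * mass_b
    return {hyp: mass / (1.0 - conflict) for hyp, mass in combined.items()}

CLOUD, CLEAR = frozenset({"cloud"}), frozenset({"clear"})
EITHER = CLOUD | CLEAR
sensor_1 = {CLOUD: 0.6, CLEAR: 0.1, EITHER: 0.3}   # e.g. evidence from a global radiation sensor
sensor_2 = {CLOUD: 0.7, CLEAR: 0.2, EITHER: 0.1}   # e.g. evidence from a diffuse radiation sensor
print(dempster_combine(sensor_1, sensor_2))
```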

  17. Sensor data fusion for accurate cloud presence prediction using Dempster-Shafer evidence theory.

    Science.gov (United States)

    Li, Jiaming; Luo, Suhuai; Jin, Jesse S

    2010-01-01

    Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach in a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. The potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data that were recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent.

  18. Toward accurate prediction of pKa values for internal protein residues: the importance of conformational relaxation and desolvation energy.

    Science.gov (United States)

    Wallace, Jason A; Wang, Yuhang; Shi, Chuanyin; Pastoor, Kevin J; Nguyen, Bao-Linh; Xia, Kai; Shen, Jana K

    2011-12-01

    Proton uptake or release controls many important biological processes, such as energy transduction, virus replication, and catalysis. Accurate pKa prediction informs about proton pathways, thereby revealing detailed acid-base mechanisms. Physics-based methods in the framework of molecular dynamics simulations not only offer pKa predictions but also inform about the physical origins of pKa shifts and provide details of ionization-induced conformational relaxation and large-scale transitions. One such method is the recently developed continuous constant pH molecular dynamics (CPHMD) method, which has been shown to be an accurate and robust pKa prediction tool for naturally occurring titratable residues. To further examine the accuracy and limitations of CPHMD, we blindly predicted the pKa values for 87 titratable residues introduced in various hydrophobic regions of staphylococcal nuclease and variants. The predictions gave a root-mean-square deviation of 1.69 pK units from experiment, and there were only two pKa values with errors greater than 3.5 pK units. Analysis of the conformational fluctuations of titrating side-chains in the context of the errors of the calculated pKa values indicates that explicit treatment of conformational flexibility and the associated dielectric relaxation gives CPHMD a distinct advantage. Analysis of the sources of error suggests that more accurate pKa predictions can be obtained for the most deeply buried residues by improving the accuracy of calculating desolvation energies. Furthermore, it is found that the generalized Born implicit-solvent model underlying the current CPHMD implementation slightly distorts the local conformational environment, such that the inclusion of an explicit-solvent representation may offer improvement of accuracy.
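
    Two small relations help interpret the numbers above: the pKa shift implied by a change in the free energy of the protonation equilibrium, and the Henderson-Hasselbalch protonated fraction at a given pH. This is a generic sketch, not part of CPHMD.

```python
import math

R_KCAL = 1.987e-3  # gas constant, kcal/(mol*K)

def pka_shift(delta_g_kcal_per_mol, temperature_kelvin=298.15):
    """pKa shift from a free-energy change: d(pKa) = dG / (ln(10) * R * T)."""
    return delta_g_kcal_per_mol / (math.log(10) * R_KCAL * temperature_kelvin)

def protonated_fraction(ph, pka):
    """Henderson-Hasselbalch fraction of the protonated (acid) form."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

print(pka_shift(2.0))                  # ~1.5 pK units per 2 kcal/mol
print(protonated_fraction(7.0, 6.0))   # ~0.09
```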

  19. Hybrid exchange-correlation functional for accurate prediction of the electronic and structural properties of ferroelectric oxides

    OpenAIRE

    D., I. Bilc; R., Orlando; R., Shaltaf; G., M. Rignanese; J., Íñiguez; Ph., Ghosez

    2008-01-01

    Using a linear combination of atomic orbitals approach, we report a systematic comparison of various Density Functional Theory (DFT) and hybrid exchange-correlation functionals for the prediction of the electronic and structural properties of prototypical ferroelectric oxides. It is found that none of the available functionals is able to provide, at the same time, accurate electronic and structural properties of the cubic and tetragonal phases of BaTiO$_3$ and PbTiO$_3$. Some, although not al...

  20. An endometrial gene expression signature accurately predicts recurrent implantation failure after IVF

    Science.gov (United States)

    Koot, Yvonne E. M.; van Hooff, Sander R.; Boomsma, Carolien M.; van Leenen, Dik; Groot Koerkamp, Marian J. A.; Goddijn, Mariëtte; Eijkemans, Marinus J. C.; Fauser, Bart C. J. M.; Holstege, Frank C. P.; Macklon, Nick S.

    2016-01-01

    The primary limiting factor for effective IVF treatment is successful embryo implantation. Recurrent implantation failure (RIF) is a condition whereby couples fail to achieve pregnancy despite consecutive embryo transfers. Here we describe the collection of gene expression profiles from mid-luteal phase endometrial biopsies (n = 115) from women experiencing RIF and healthy controls. Using a signature discovery set (n = 81) we identify a signature containing 303 genes predictive of RIF. Independent validation in 34 samples shows that the gene signature predicts RIF with 100% positive predictive value (PPV). The strength of the RIF associated expression signature also stratifies RIF patients into distinct groups with different subsequent implantation success rates. Exploration of the expression changes suggests that RIF is primarily associated with reduced cellular proliferation. The gene signature will be of value in counselling and guiding further treatment of women who fail to conceive upon IVF and suggests new avenues for developing intervention. PMID:26797113

  1. Empirical approaches to more accurately predict benthic-pelagic coupling in biogeochemical ocean models

    Science.gov (United States)

    Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus

    2016-04-01

    The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?

  2. Achieving accurate and efficient prediction of HVAC diaphragm noise at realistic Reynolds and Mach numbers

    NARCIS (Netherlands)

    Guilloud, G.; Schram, C.; Golliard, J.

    2009-01-01

    Despite the aeroacoustic expertise reached nowadays in air and ground transportation, energy sector or domestic appliances, reaching a decibel accuracy of an acoustic prediction for industrial cases is still challenging. Strong investments are made nowadays by oil and gas companies to determine and

  3. Safe surgery: how accurate are we at predicting intra-operative blood loss?

    LENUS (Irish Health Repository)

    2012-02-01

    Introduction Preoperative estimation of intra-operative blood loss by both anaesthetist and operating surgeon is a criterion of the World Health Organization's surgical safety checklist. The checklist requires specific preoperative planning when anticipated blood loss is greater than 500 mL. The aim of this study was to assess the accuracy of surgeons and anaesthetists at predicting intra-operative blood loss. Methods A 6-week prospective study of intermediate and major operations in an academic medical centre was performed. An independent observer interviewed surgical and anaesthetic consultants and registrars, preoperatively asking each to predict expected blood loss in millilitre. Intra-operative blood loss was measured and compared with these predictions. Parameters including the use of anticoagulation and anti-platelet therapy as well as intra-operative hypothermia and hypotension were recorded. Results One hundred sixty-eight operations were included in the study, including 142 elective and 26 emergency operations. Blood loss was predicted to within 500 mL of measured blood loss in 89% of cases. Consultant surgeons tended to underestimate blood loss, doing so in 43% of all cases, while consultant anaesthetists were more likely to overestimate (60% of all operations). Twelve patients (7%) had underestimation of blood loss of more than 500 mL by both surgeon and anaesthetist. Thirty per cent (n = 6/20) of patients requiring transfusion of a blood product within 24 hours of surgery had blood loss underestimated by more than 500 mL by both surgeon and anaesthetist. There was no significant difference in prediction between patients on anti-platelet or anticoagulation therapy preoperatively and those not on the said therapies. Conclusion Predicted intra-operative blood loss was within 500 mL of measured blood loss in 89% of operations. In 30% of patients who ultimately receive a blood transfusion, both the surgeon and anaesthetist significantly underestimate

  4. Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising

    CERN Document Server

    Donoho, David; Montanari, Andrea

    2011-01-01

    Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula precisely delineating the allowable degree of undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar shrinkers (soft thresholding, positive soft thresholding and capping). This paper gives several examples including scalar shrinkers not derivable from convex optimization -- the firm shrinkage nonlinearity and the minimax nonlinearity -- and also nonscalar denoisers -- block thresholding (both block soft and block James-Stein), monotone regression, and total variation minimization. Let the variables \epsilon = k/N and \delta = n/N denote the generalized sparsity and undersampling fractions for sampling the k-gene...

  5. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.

    Directory of Open Access Journals (Sweden)

    Yasser El-Manzalawy

    Full Text Available A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational efforts needed for generating PSSMs severely limit the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile and the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data as well as the accuracy of the machine learning classifier trained using these profiles. Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein

  6. Accurate single-sequence prediction of solvent accessible surface area using local and global features.

    Science.gov (United States)

    Faraggi, Eshel; Zhou, Yaoqi; Kloczkowski, Andrzej

    2014-11-01

    We present a new approach for predicting the Accessible Surface Area (ASA) using a General Neural Network (GENN). The novelty of the new approach lies in not using residue mutation profiles generated by multiple sequence alignments as descriptive inputs. Instead we use solely sequential window information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both highly more efficient than sequence alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is tested on predicting the ASA of globular proteins and found to perform similarly well for so-called easy and hard cases indicating generalizability and possible usability for de-novo protein structure prediction. The source code and Linux executables for GENN and ASAquick are available from Research and Information Systems at http://mamiris.com, from the SPARKS Lab at http://sparks-lab.org, and from the Battelle Center for Mathematical Medicine at http://mathmed.org.
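
    ASAquick's key departure from profile-based predictors is that its inputs are single-sequence window information plus global features such as single-residue and two-residue compositions. The sketch below is a plausible but hypothetical reconstruction of that global-feature step for an arbitrary chain; it is not the authors' code.

```python
from collections import Counter
from itertools import product

AA = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard amino acids

def global_features(seq):
    """Single-residue and two-residue (dipeptide) composition of a protein chain."""
    seq = seq.upper()
    n = len(seq)
    mono = Counter(seq)
    di = Counter(seq[i:i + 2] for i in range(n - 1))
    mono_comp = [mono.get(a, 0) / n for a in AA]                                     # 20 features
    di_comp = [di.get(a + b, 0) / max(n - 1, 1) for a, b in product(AA, repeat=2)]   # 400 features
    return mono_comp + di_comp

# Example on an arbitrary (hypothetical) sequence fragment.
features = global_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(len(features))  # 420 global features
```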

  7. Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens

    Energy Technology Data Exchange (ETDEWEB)

    Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn

    2013-11-01

    Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor to control the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol and we went a step further with a blind test on a wild-type peptide and two highly immunogenic mutants, for which the protocol predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were configured to be accessible to solvent, forming a prominent surface, while the corresponding residue of the wild-type peptide pointed laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set, using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observation strongly supports a positive association of the surface morphology of a peptide–MHC complex to its immunogenicity. Our study offers the prospect of enhancing immunogenicity of vaccines by identifying MHC binding immunogens.

  8. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data

    Science.gov (United States)

    Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.

    2015-01-01

    Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction and cannot be used to advance the intake of drugs early enough to be effective and neutralize the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities and robustness against noise and sensor failures of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103

  9. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data

    Directory of Open Access Journals (Sweden)

    Josué Pagán

    2015-06-01

    Full Text Available Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction and cannot be used to advance the intake of drugs early enough to be effective and neutralize the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities and robustness against noise and sensor failures of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives.

  10. Revisiting the blind tests in crystal structure prediction: accurate energy ranking of molecular crystals.

    Science.gov (United States)

    Asmadi, Aldi; Neumann, Marcus A; Kendrick, John; Girard, Pascale; Perrin, Marc-Antoine; Leusen, Frank J J

    2009-12-24

    In the 2007 blind test of crystal structure prediction hosted by the Cambridge Crystallographic Data Centre (CCDC), a hybrid DFT/MM method correctly ranked each of the four experimental structures as having the lowest lattice energy of all the crystal structures predicted for each molecule. The work presented here further validates this hybrid method by optimizing the crystal structures (experimental and submitted) of the first three CCDC blind tests held in 1999, 2001, and 2004. Except for the crystal structures of compound IX, all structures were reminimized and ranked according to their lattice energies. The hybrid method computes the lattice energy of a crystal structure as the sum of the DFT total energy and a van der Waals (dispersion) energy correction. Considering all four blind tests, the crystal structure with the lowest lattice energy corresponds to the experimentally observed structure for 12 out of 14 molecules. Moreover, good geometrical agreement is observed between the structures determined by the hybrid method and those measured experimentally. In comparison with the correct submissions made by the blind test participants, all hybrid optimized crystal structures (apart from compound II) have the smallest calculated root mean squared deviations from the experimentally observed structures. It is predicted that a new polymorph of compound V exists under pressure.

  11. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    Science.gov (United States)

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communications across the domain interfaces, and the knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins which contain domains in at least more than one protein architectural context. Using predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions.

  12. Does preoperative cross-sectional imaging accurately predict main duct involvement in intraductal papillary mucinous neoplasm?

    Science.gov (United States)

    Barron, M R; Roch, A M; Waters, J A; Parikh, J A; DeWitt, J M; Al-Haddad, M A; Ceppa, E P; House, M G; Zyromski, N J; Nakeeb, A; Pitt, H A; Schmidt, C Max

    2014-03-01

    Main pancreatic duct (MPD) involvement is a well-demonstrated risk factor for malignancy in intraductal papillary mucinous neoplasm (IPMN). Preoperative radiographic determination of IPMN type is heavily relied upon in oncologic risk stratification. We hypothesized that radiographic assessment of MPD involvement in IPMN is an accurate predictor of pathological MPD involvement. Data regarding all patients undergoing resection for IPMN at a single academic institution between 1992 and 2012 were gathered prospectively. Retrospective analysis of imaging and pathologic data was undertaken. Preoperative classification of IPMN type was based on cross-sectional imaging (MRI/magnetic resonance cholangiopancreatography (MRCP) and/or CT). Three hundred sixty-two patients underwent resection for IPMN. Of these, 334 had complete data for analysis. Of 164 suspected branch duct (BD) IPMN, 34 (20.7%) demonstrated MPD involvement on final pathology. Of 170 patients with suspicion of MPD involvement, 50 (29.4%) demonstrated no MPD involvement. Of 34 patients with suspected BD-IPMN who were found to have MPD involvement on pathology, 10 (29.4%) had invasive carcinoma. Alternatively, 2/50 (4%) of the patients with suspected MPD involvement who ultimately had isolated BD-IPMN demonstrated invasive carcinoma. Preoperative radiographic IPMN type did not correlate with final pathology in 25% of the patients. In addition, risk of invasive carcinoma correlates with pathologic presence of MPD involvement.

  13. DisoMCS: Accurately Predicting Protein Intrinsically Disordered Regions Using a Multi-Class Conservative Score Approach.

    Directory of Open Access Journals (Sweden)

    Zhiheng Wang

    Full Text Available The precise prediction of protein intrinsically disordered regions, which play a crucial role in biological processes, is a necessary prerequisite to further the understanding of the principles and mechanisms of protein function. Here, we propose a novel predictor, DisoMCS, which is a more accurate predictor of protein intrinsically disordered regions. DisoMCS is based on an original multi-class conservative score (MCS) obtained by sequence-order/disorder alignment. Initially, near-disorder regions are defined as fragments located at either terminus of an ordered region that connects to a disordered region. Then the multi-class conservative score is generated by sequence alignment against a known structure database and represented as order, near-disorder and disorder conservative scores. The MCS of each amino acid has three elements: order, near-disorder and disorder profiles. Finally, the MCS is exploited as features to identify disordered regions in sequences. DisoMCS utilizes a non-redundant data set as the training set, MCS and predicted secondary structure as features, and a conditional random field as the classification algorithm. In predicted near-disorder regions, a residue is assigned as ordered or disordered according to the optimized decision threshold. DisoMCS was evaluated by cross-validation, large-scale prediction, independent tests and CASP (Critical Assessment of Techniques for Protein Structure Prediction) tests. All results confirmed that DisoMCS was very competitive in terms of accuracy of prediction when compared with well-established publicly available disordered region predictors. It also indicated our approach was more accurate when a query has higher homology with the knowledge database. DisoMCS is available at http://cal.tongji.edu.cn/disorder/.

  14. Computational methods toward accurate RNA structure prediction using coarse-grained and all-atom models.

    Science.gov (United States)

    Krokhotin, Andrey; Dokholyan, Nikolay V

    2015-01-01

    Computational methods can provide significant insights into RNA structure and dynamics, bridging the gap in our understanding of the relationship between structure and biological function. Simulations enrich and enhance our understanding of data derived on the bench, as well as provide feasible alternatives to costly or technically challenging experiments. Coarse-grained computational models of RNA are especially important in this regard, as they allow analysis of events occurring in timescales relevant to RNA biological function, which are inaccessible through experimental methods alone. We have developed a three-bead coarse-grained model of RNA for discrete molecular dynamics simulations. This model is efficient in de novo prediction of short RNA tertiary structure, starting from RNA primary sequences of less than 50 nucleotides. To complement this model, we have incorporated additional base-pairing constraints and have developed a bias potential reliant on data obtained from hydroxyl probing experiments that guide RNA folding to its correct state. By introducing experimentally derived constraints to our computer simulations, we are able to make reliable predictions of RNA tertiary structures up to a few hundred nucleotides. Our refined model exemplifies a valuable benefit achieved through integration of computation and experimental methods.

  15. Accurate prediction of hot spot residues through physicochemical characteristics of amino acid sequences

    KAUST Repository

    Chen, Peng

    2013-07-23

    Hot spot residues of proteins are fundamental interface residues that help proteins perform their functions. Detecting hot spots by experimental methods is costly and time-consuming. Sequential and structural information has been widely used in the computational prediction of hot spots. However, structural information is not always available. In this article, we investigated the problem of identifying hot spots using only physicochemical characteristics extracted from amino acid sequences. We first extracted 132 relatively independent physicochemical features from a set of the 544 properties in AAindex1, an amino acid index database. Each feature was utilized to train a classification model with a novel encoding schema for hot spot prediction by the IBk algorithm, an extension of the K-nearest neighbor algorithm. The combinations of the individual classifiers were explored and the classifiers that appeared frequently in the top performing combinations were selected. The hot spot predictor was built based on an ensemble of these classifiers and to work in a voting manner. Experimental results demonstrated that our method effectively exploited the feature space and allowed flexible weights of features for different queries. On the commonly used hot spot benchmark sets, our method significantly outperformed other machine learning algorithms and state-of-the-art hot spot predictors. The program is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013 Wiley Periodicals, Inc.
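
    To illustrate the general idea of training one nearest-neighbor classifier per physicochemical feature and combining them by voting, the following sketch uses scikit-learn's KNeighborsClassifier on synthetic data. The data, feature count and k value are placeholders; the authors used the IBk algorithm with a novel encoding of AAindex1-derived features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in: rows are interface residues, columns are per-residue
# physicochemical features; labels mark hot spots.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# One k-NN classifier per feature, combined by majority vote (an IBk-style ensemble).
classifiers = []
for j in range(X.shape[1]):
    clf = KNeighborsClassifier(n_neighbors=5).fit(X[:, [j]], y)
    classifiers.append(clf)

def vote(x_row):
    votes = [clf.predict(x_row[[j]].reshape(1, -1))[0] for j, clf in enumerate(classifiers)]
    return int(np.mean(votes) >= 0.5)

preds = np.array([vote(row) for row in X])
print("training accuracy of the voting ensemble:", (preds == y).mean())
```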

  16. How Five Student Characteristics Accurately Predict For-Profit University Graduation Odds

    Directory of Open Access Journals (Sweden)

    Tim Gramling

    2013-07-01

    Full Text Available President Obama’s goal is for America to lead the world in college graduates by 2020. Although for-profit institutions have increased their output of graduates at ten times the rate of nonprofits over the past decade, Congress and the U.S. Department of Education have argued that these institutions exploit the ambitions of lower-performing students. In response, this study examined how student characteristics predicted graduation odds at a large, regionally accredited for-profit institution campus. A logistic regression predicted graduation for the full population of 2,548 undergraduate students enrolled from 2005 to 2009 with scheduled graduation by June 30, 2011. Sixteen independent predictors were identified from school records and organized in the Bean and Metzner framework. The regression model was more robust than any in the literature, with a Nagelkerke R2 of .663. Only five factors had a significant impact on log odds: (a) grade point average (GPA), where higher values increased odds; (b) half-time enrollment, which had lower odds than full-time; (c) Blacks, who had higher odds than Whites; (d) credits required, where fewer credits increased odds; and (e) primary expected family contribution, where higher values increased odds. These findings imply that public policy will not increase college graduates by focusing on institution characteristics.
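
    A Nagelkerke R2 like the one reported can be computed from the log-likelihoods of the fitted and intercept-only logistic models. The sketch below uses synthetic stand-in data for the five predictors; the coefficients and outcome are illustrative, not the study's records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(1)

# Synthetic stand-in for the five predictors (GPA, enrollment status, race,
# credits required, expected family contribution); the outcome is graduation.
n = 2548
X = rng.normal(size=(n, 5))
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.4 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)

# Log-likelihoods of the fitted model and of the intercept-only (null) model.
ll_model = -log_loss(y, model.predict_proba(X)[:, 1], normalize=False)
p0 = y.mean()
ll_null = -log_loss(y, np.full(n, p0), normalize=False)

# Nagelkerke R^2: Cox-Snell R^2 rescaled so its maximum is 1.
r2_cs = 1 - np.exp((2 / n) * (ll_null - ll_model))
r2_nagelkerke = r2_cs / (1 - np.exp((2 / n) * ll_null))
print(round(r2_nagelkerke, 3))
```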

  17. Neural network and SVM classifiers accurately predict lipid binding proteins, irrespective of sequence homology.

    Science.gov (United States)

    Bakhtiarizadeh, Mohammad Reza; Moradi-Shahrbabak, Mohammad; Ebrahimi, Mansour; Ebrahimie, Esmaeil

    2014-09-07

    Due to the central roles of lipid binding proteins (LBPs) in many biological processes, sequence based identification of LBPs is of great interest. The major challenge is that LBPs are diverse in sequence, structure, and function which results in low accuracy of sequence homology based methods. Therefore, there is a need for developing alternative functional prediction methods irrespective of sequence similarity. To identify LBPs from non-LBPs, the performances of support vector machine (SVM) and neural network were compared in this study. Comprehensive protein features and various techniques were employed to create datasets. Five-fold cross-validation (CV) and independent evaluation (IE) tests were used to assess the validity of the two methods. The results indicated that SVM outperforms neural network. SVM achieved 89.28% (CV) and 89.55% (IE) overall accuracy in identification of LBPs from non-LBPs and 92.06% (CV) and 92.90% (IE) (in average) for classification of different LBPs classes. Increasing the number and the range of extracted protein features as well as optimization of the SVM parameters significantly increased the efficiency of LBPs class prediction in comparison to the only previous report in this field. Altogether, the results showed that the SVM algorithm can be run on broad, computationally calculated protein features and offers a promising tool in detection of LBPs classes. The proposed approach has the potential to integrate and improve the common sequence alignment based methods.
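
    The comparison described rests on a standard protocol: fit an SVM on sequence-derived features and score it with five-fold cross-validation. A minimal scikit-learn sketch of that protocol on synthetic data follows; the feature set and kernel settings are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic stand-in for sequence-derived protein features; label 1 marks an LBP.
X = rng.normal(size=(300, 40))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=300) > 0).astype(int)

# RBF-kernel SVM with feature scaling, assessed by five-fold cross-validation.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(svm, X, y, cv=5, scoring="accuracy")
print("mean 5-fold CV accuracy: %.3f" % scores.mean())
```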

  18. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER− patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic
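
    Survival analyses of this kind (Kaplan-Meier curves per signature group plus a Cox model for hazard ratios) can be reproduced in outline with the lifelines package. The sketch below runs on synthetic follow-up data; the column names, group sizes and event rates are invented for illustration and are not the study's data.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(3)

# Synthetic stand-in: 'signature' is 1 for a poor-prognosis call, 'time' is
# follow-up in years, 'event' is 1 if an adverse outcome was observed.
n = 295
signature = rng.integers(0, 2, size=n)
time = rng.exponential(scale=np.where(signature == 1, 6.0, 14.0))
event = (rng.random(n) < 0.7).astype(int)
df = pd.DataFrame({"signature": signature, "time": time, "event": event})

# Kaplan-Meier curves per signature group.
km = KaplanMeierFitter()
for label, grp in df.groupby("signature"):
    km.fit(grp["time"], grp["event"], label=f"signature={label}")
    print(label, "10-year survival ~", round(km.predict(10.0), 2))

# Cox proportional-hazards model: hazard ratio for the poor-prognosis group.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)
```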

  19. Fast and Accurate Accessible Surface Area Prediction Without a Sequence Profile.

    Science.gov (United States)

    Faraggi, Eshel; Kouza, Maksim; Zhou, Yaoqi; Kloczkowski, Andrzej

    2017-01-01

    A fast accessible surface area (ASA) predictor is presented. In this new approach no residue mutation profiles generated by multiple sequence alignments are used as inputs. Instead, we use only single sequence information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both highly more efficient than sequence alignment based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is found to perform similarly well for so-called easy and hard cases indicating generalizability and possible usability for de-novo protein structure prediction. The source code and Linux executables for ASAquick are available from Research and Information Systems at http://mamiris.com and from the Battelle Center for Mathematical Medicine at http://mathmed.org.

  20. A Foundation for the Accurate Prediction of the Soft Error Vulnerability of Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; de Supinski, B; Schulz, M

    2009-02-13

    Understanding the soft error vulnerability of supercomputer applications is critical as these systems are using ever larger numbers of devices that have decreasing feature sizes and, thus, increasing frequency of soft errors. As many large scale parallel scientific applications use BLAS and LAPACK linear algebra routines, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. This paper analyzes the vulnerability of these routines to soft errors by characterizing how their outputs are affected by injected errors and by evaluating several techniques for predicting how errors propagate from the input to the output of each routine. The resulting error profiles can be used to understand the fault vulnerability of full applications that use these routines.

  1. nuMap: A Web Platform for Accurate Prediction of Nucleosome Positioning

    Institute of Scientific and Technical Information of China (English)

    Bader A Alharbi; Thamir H Alshammari; Nathan L Felton; Victor B Zhurkin; Feng Cui

    2014-01-01

    Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options such as schemes and parameters for threading calculation and provides multiple layout formats. The nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site.

  2. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    Science.gov (United States)

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances, however at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3kPa (SD 13.4), 12.52kPa (SD 11.9) and 9.6kPa (SD 9.3) for barefoot, shod, and insole conditions respectively. The simplified model design could be produced in 3h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility, however further validity testing around a range of therapeutic footwear types is required.

  3. Fast and accurate prediction for aerodynamic forces and moments acting on satellites flying in Low-Earth Orbit

    Science.gov (United States)

    Jin, Xuhon; Huang, Fei; Hu, Pengju; Cheng, Xiaoli

    2016-11-01

    A fundamental prerequisite for satellites operating in a Low Earth Orbit (LEO) is the availability of fast and accurate prediction of non-gravitational aerodynamic forces in the free molecular flow regime. However, conventional computational methods like the analytical integral method and the direct simulation Monte Carlo (DSMC) technique are found either to fail in handling flow shadowing and multiple reflections or to be computationally expensive. This work develops a general computer program for the accurate calculation of aerodynamic forces in the free molecular flow regime using the test particle Monte Carlo (TPMC) method, and the non-gravitational aerodynamic forces acting on the Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite are calculated for different freestream conditions and gas-surface interaction models by the computer program.
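
    In the free molecular regime, a test-particle estimate of the incident momentum flux already captures the leading contribution to drag on a plate facing the flow. The sketch below samples molecular velocities from a drifting Maxwellian and neglects the momentum of re-emitted molecules; the freestream values are illustrative placeholders, not GOCE conditions, and the full TPMC treatment of shadowing and reflections is not modeled.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical LEO-like freestream: atomic oxygen at ~1000 K hitting a 1 m^2
# plate face-on at 7.5 km/s. The momentum carried away by re-emitted molecules
# is neglected (full accommodation, slow diffuse re-emission).
k_B, m = 1.380649e-23, 2.66e-26      # J/K; kg (atomic oxygen)
T, n_inf = 1000.0, 1.0e15            # K; molecules per m^3 (illustrative)
U, A = 7500.0, 1.0                   # m/s relative speed; m^2 plate area

# Test particles: thermal velocities plus the bulk drift along -x (toward the plate).
N = 200_000
sigma = np.sqrt(k_B * T / m)
vx = -U + sigma * rng.standard_normal(N)
hits = vx < 0.0                      # molecules moving toward the front face

# Incident normal momentum flux (pressure) and the resulting drag force.
pressure = n_inf * m * np.mean(np.where(hits, vx**2, 0.0))   # Pa
print("estimated drag force: %.2e N" % (pressure * A))
```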

  4. Non-Empirically Tuned Range-Separated DFT Accurately Predicts Both Fundamental and Excitation Gaps in DNA and RNA Nucleobases

    CERN Document Server

    Foster, Michael E; 10.1021/ct300420f

    2012-01-01

    Using a non-empirically tuned range-separated DFT approach, we study both the quasiparticle properties (HOMO-LUMO fundamental gaps) and excitation energies of DNA and RNA nucleobases (adenine, thymine, cytosine, guanine, and uracil). Our calculations demonstrate that a physically-motivated, first-principles tuned DFT approach accurately reproduces results from both experimental benchmarks and more computationally intensive techniques such as many-body GW theory. Furthermore, in the same set of nucleobases, we show that the non-empirical range-separated procedure also leads to significantly improved results for excitation energies compared to conventional DFT methods. The present results emphasize the importance of a non-empirically tuned range-separation approach for accurately predicting both fundamental and excitation gaps in DNA and RNA nucleobases.

  5. Towards Relaxing the Spherical Solar Radiation Pressure Model for Accurate Orbit Predictions

    Science.gov (United States)

    Lachut, M.; Bennett, J.

    2016-09-01

    The well-known cannonball model has been used ubiquitously to capture the effects of atmospheric drag and solar radiation pressure on satellites and/or space debris for decades. While it lends itself naturally to spherical objects, its validity in the case of non-spherical objects has been debated heavily for years throughout the space situational awareness community. One of the leading motivations to improve orbit predictions by relaxing the spherical assumption, is the ongoing demand for more robust and reliable conjunction assessments. In this study, we explore the orbit propagation of a flat plate in a near-GEO orbit under the influence of solar radiation pressure, using a Lambertian BRDF model. Consequently, this approach will account for the spin rate and orientation of the object, which is typically determined in practice using a light curve analysis. Here, simulations will be performed which systematically reduces the spin rate to demonstrate the point at which the spherical model no longer describes the orbital elements of the spinning plate. Further understanding of this threshold would provide insight into when a higher fidelity model should be used, thus resulting in improved orbit propagations. Therefore, the work presented here is of particular interest to organizations and researchers that maintain their own catalog, and/or perform conjunction analyses.
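
    The cannonball model being relaxed here reduces SRP to a single acceleration along the Sun-to-object direction, scaled by a reflectivity coefficient and the area-to-mass ratio. A minimal sketch of that baseline follows; the object parameters are illustrative, and the spinning flat-plate/Lambertian BRDF extension discussed in the abstract is not modeled.

```python
import numpy as np

# Cannonball SRP acceleration: a = P_sun * C_R * (A / m) * (AU / r)^2 * s_hat,
# where s_hat points from the Sun to the object. Values below are illustrative.
P_SUN = 4.56e-6          # N/m^2, solar radiation pressure at 1 AU
AU = 1.495978707e11      # m

def srp_cannonball(r_sun_to_obj, area, mass, c_r=1.3):
    """Acceleration (m/s^2) on a sphere-equivalent object, given the Sun-to-object vector (m)."""
    r = np.linalg.norm(r_sun_to_obj)
    s_hat = r_sun_to_obj / r
    return P_SUN * c_r * (area / mass) * (AU / r) ** 2 * s_hat

# A ~1 m^2, 100 kg object with the Sun roughly 1 AU away along +x.
a = srp_cannonball(np.array([AU, 0.0, 0.0]), area=1.0, mass=100.0)
print(a)   # ~6e-8 m/s^2 pushing the object away from the Sun
```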

  6. The human skin/chick chorioallantoic membrane model accurately predicts the potency of cosmetic allergens.

    Science.gov (United States)

    Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S

    2009-04-01

    The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin.

  7. Industrial Compositional Streamline Simulation for Efficient and Accurate Prediction of Gas Injection and WAG Processes

    Energy Technology Data Exchange (ETDEWEB)

    Margot Gerritsen

    2008-10-31

    Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. The understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline based solver for gas injection processes that is computationally very attractive: as compared to traditional Eulerian solvers in use by industry it computes solutions with a computational speed orders of magnitude higher and a comparable accuracy provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework allowing improved streamline coverage and partial streamline tracing, amongst others; parallelization of the streamline code, which significantly improves wall clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We designed several novel ideas in the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational efforts where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids

  8. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California 90024 (United States)

    2015-11-15

    attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries.

  9. Accurate prediction of a minimal region around a genetic association signal that contains the causal variant.

    Science.gov (United States)

    Bochdanovits, Zoltán; Simón-Sánchez, Javier; Jonker, Marianne; Hoogendijk, Witte J; van der Vaart, Aad; Heutink, Peter

    2014-02-01

    In recent years, genome-wide association studies have been very successful in identifying loci for complex traits. However, typically these findings involve noncoding and/or intergenic SNPs without a clear functional effect that do not directly point to a gene. Hence, the challenge is to identify the causal variant responsible for the association signal. Typically, the first step is to identify all genetic variation in the locus region, usually by resequencing a large number of case chromosomes. Among all variants, the causal one needs to be identified in further functional studies. Because the experimental follow up can be very laborious, restricting the number of variants to be scrutinized can yield a great advantage. An objective method for choosing the size of the region to be followed up would be highly valuable. Here, we propose a simple method to call the minimal region around a significant association peak that is very likely to contain the causal variant. We model linkage disequilibrium (LD) in cases from the observed single SNP association signals, and predict the location of the causal variant by quantifying how well this relationship fits the data. Simulations showed that our approach identifies genomic regions of on average ∼50 kb with up to 90% probability to contain the causal variant. We apply our method to two genome-wide association data sets and localize both the functional variant REP1 in the α-synuclein gene that conveys susceptibility to Parkinson's disease and the APOE gene responsible for the association signal in the Alzheimer's disease data set.

  10. How Accurate Are the Anthropometry Equations in Iranian Military Men in Predicting Body Composition?

    Science.gov (United States)

    Shakibaee, Abolfazl; Faghihzadeh, Soghrat; Alishiri, Gholam Hossein; Ebrahimpour, Zeynab; Faradjzadeh, Shahram; Sobhani, Vahid; Asgari, Alireza

    2015-01-01

    Background: Body composition varies with life style (i.e., calorie intake and caloric expenditure). Therefore, it is wise to record military personnel’s body composition periodically and encourage those who abide by the regulations. Different methods have been introduced for body composition assessment: invasive and non-invasive. Amongst them, the Jackson and Pollock equation is most popular. Objectives: The recommended anthropometric prediction equations for assessing men’s body composition were compared with the dual-energy X-ray absorptiometry (DEXA) gold standard to develop a modified equation to assess body composition and obesity quantitatively among Iranian military men. Patients and Methods: A total of 101 military men aged 23-52 years with a mean age of 35.5 years were recruited and evaluated in the present study (average height, 173.9 cm and weight, 81.5 kg). The body-fat percentages of subjects were assessed both by anthropometric assessment and by DEXA scan. The data obtained from these two methods were then compared using multiple regression analysis. Results: The mean and standard deviation of body fat percentage from the DEXA assessment was 21.2 ± 4.3, and the body fat percentages obtained from the three Jackson and Pollock 3-, 4- and 7-site equations were 21.1 ± 5.8, 22.2 ± 6.0 and 20.9 ± 5.7, respectively. There was a strong correlation between these three equations and DEXA (R² = 0.98). Conclusions: The mean percentage of body fat obtained from the three equations of Jackson and Pollock was very close to that of body fat obtained from DEXA; however, we suggest using a modified Jackson-Pollock 3-site equation for volunteer military men because the 3-site equation analysis method is simpler and faster than other methods. PMID:26715964
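
    The Jackson-Pollock 3-site estimate referred to above chains a body-density regression on three skinfolds and age with the Siri conversion to percent body fat. The sketch below uses the commonly published male coefficients as an assumption; it is a generic illustration, not the modified equation proposed by the study.

```python
# Jackson-Pollock 3-site equation for men (chest, abdomen, thigh skinfolds in mm)
# followed by the Siri equation to convert body density to percent body fat.
# Coefficients are the commonly published ones; verify against the local protocol.
def body_fat_jp3_male(chest_mm, abdomen_mm, thigh_mm, age_years):
    s = chest_mm + abdomen_mm + thigh_mm
    density = 1.10938 - 0.0008267 * s + 0.0000016 * s ** 2 - 0.0002574 * age_years
    return 495.0 / density - 450.0          # Siri equation, percent body fat

# Example: a 35-year-old with 12, 20 and 15 mm skinfolds.
print(f"{body_fat_jp3_male(12, 20, 15, 35):.1f} % body fat")
```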

  11. Industrial Compositional Streamline Simulation for Efficient and Accurate Prediction of Gas Injection and WAG Processes

    Energy Technology Data Exchange (ETDEWEB)

    Margot Gerritsen

    2008-10-31

    Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. The understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline based solver for gas injection processes that is computationally very attractive: as compared to traditional Eulerian solvers in use by industry it computes solutions with a computational speed orders of magnitude higher and a comparable accuracy provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework allowing improved streamline coverage and partial streamline tracing, amongst others; parallelization of the streamline code, which significantly improves wall clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We designed several novel ideas in the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational efforts where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids

  12. Easy-to-use, general, and accurate multi-Kinect calibration and its application to gait monitoring for fall prediction.

    Science.gov (United States)

    Staranowicz, Aaron N; Ray, Christopher; Mariottini, Gian-Luca

    2015-01-01

    Falls are the most common causes of unintentional injury and death in older adults. Many clinics, hospitals, and health-care providers are urgently seeking accurate, low-cost, and easy-to-use technology to predict falls before they happen, e.g., by monitoring the human walking pattern (or "gait"). Despite the wide popularity of Microsoft's Kinect and the plethora of solutions for gait monitoring, no strategy has been proposed to date to allow non-expert users to calibrate the cameras, which is essential to accurately fuse the body motion observed by each camera in a single frame of reference. In this paper, we present a novel multi-Kinect calibration algorithm that has advanced features when compared to existing methods: 1) it is easy to use, 2) it can be used in any generic Kinect arrangement, and 3) it provides accurate calibration. Extensive real-world experiments have been conducted to validate our algorithm and to compare its performance against other multi-Kinect calibration approaches, especially to show the improved estimate of gait parameters. Finally, a MATLAB Toolbox has been made publicly available for the entire research community.
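
    At the heart of any multi-camera calibration is the estimation of a rigid transform that maps points seen by one Kinect into the frame of another. The sketch below shows the standard Kabsch/SVD solution on synthetic corresponding points; it illustrates the underlying geometry only and is not the authors' algorithm.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ P_i + t ~= Q_i (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                         # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: corresponding marker positions seen by two cameras.
rng = np.random.default_rng(5)
P = rng.normal(size=(30, 3))
angle = np.deg2rad(25.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.4, -0.2, 1.5])
Q = P @ R_true.T + t_true + rng.normal(scale=0.005, size=P.shape)

R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true, atol=0.01), np.round(t, 2))
```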

  13. A cross-race effect in metamemory: Predictions of face recognition are more accurate for members of our own race.

    Science.gov (United States)

    Hourihan, Kathleen L; Benjamin, Aaron S; Liu, Xiping

    2012-09-01

    The Cross-Race Effect (CRE) in face recognition is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification. The CRE is a problem because jurors value eyewitness identification highly in verdict decisions. In the present paper, we explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces, relative to other-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness's claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness.

  14. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging

    Science.gov (United States)

    Hughes, Timothy J.; Kandathil, Shaun M.; Popelier, Paul L. A.

    2015-02-01

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecapole moment mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G**, B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol-1, decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at B3LYP and M06-2X level generally outperformed those at HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol-1.
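
    Kriging is equivalent to Gaussian-process regression, so the mapping from nuclear-geometry descriptors to a multipole component can be sketched with scikit-learn's GaussianProcessRegressor. The inputs and target below are synthetic placeholders, not atomic multipole data from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(6)

# Synthetic stand-in: descriptors of the nuclear geometry (e.g., internal
# coordinates) mapped to one atomic multipole component.
X_train = rng.uniform(-1.0, 1.0, size=(120, 6))
y_train = np.sin(3.0 * X_train[:, 0]) + 0.3 * X_train[:, 1] ** 2 + 0.01 * rng.standard_normal(120)

kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(6))   # anisotropic RBF kernel
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_test = rng.uniform(-1.0, 1.0, size=(5, 6))
mean, std = gp.predict(X_test, return_std=True)
print(np.round(mean, 3), np.round(std, 3))
```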

  15. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory

    Science.gov (United States)

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S.; Shirley, Eric L.; Prendergast, David

    2017-03-01

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  16. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    Science.gov (United States)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  17. Predicting suitable optoelectronic properties of monoclinic VON semiconductor crystals for photovoltaics using accurate first-principles computations

    KAUST Repository

    Harb, Moussab

    2015-08-26

    Using accurate first-principles quantum calculations based on DFT (including the perturbation theory DFPT) with the range-separated hybrid HSE06 exchange-correlation functional, we predict essential fundamental properties (such as bandgap, optical absorption coefficient, dielectric constant, charge carrier effective masses and exciton binding energy) of two stable monoclinic vanadium oxynitride (VON) semiconductor crystals for solar energy conversion applications. In addition to the predicted band gaps in the optimal range for making single-junction solar cells, both polymorphs exhibit relatively high absorption efficiencies in the visible range, high dielectric constants, high charge carrier mobilities and much lower exciton binding energies than the thermal energy at room temperature. Moreover, their optical absorption, dielectric and exciton dissociation properties are found to be better than those obtained for semiconductors frequently utilized in photovoltaic devices like Si, CdTe and GaAs. These novel results offer a great opportunity for this stoichiometric VON material to be properly synthesized and considered as a new good candidate for photovoltaic applications.

  18. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    Science.gov (United States)

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.
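
    A "learned weighted sum of firing rates" is, in practice, a linear readout. The hedged sketch below trains such a readout on synthetic trial-by-neuron firing rates to decode a binary object label; the simulated rates and the logistic-regression readout are illustrative assumptions, not the recordings or linking model used in the study.

```python
# Sketch of a linear readout on firing rates: synthetic trials x neurons -> object label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_neurons = 600, 150
labels = rng.integers(0, 2, size=n_trials)                 # two object categories
tuning = rng.normal(size=n_neurons)                        # each neuron weakly prefers one category
rates = rng.poisson(lam=5 + 2 * np.outer(labels, tuning).clip(min=0))  # firing rates per trial

readout = LogisticRegression(max_iter=1000)                # a learned weighted sum plus threshold
acc = cross_val_score(readout, rates, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```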

  19. A composite score combining waist circumference and body mass index more accurately predicts body fat percentage in 6-to 13-year-old children

    NARCIS (Netherlands)

    Aeberli, I.; Gut-Knabenhans, M.; Kusche-Ammann, R.S.; Molinari, L.; Zimmermann, M.B.

    2013-01-01

    Body mass index (BMI) and waist circumference (WC) are widely used to predict % body fat (BF) and classify degrees of pediatric adiposity. However, both measures have limitations. The aim of this study was to evaluate whether a combination of WC and BMI would more accurately predict %BF than either

  20. High IFIT1 expression predicts improved clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma.

    Science.gov (United States)

    Zhang, Jin-Feng; Chen, Yao; Lin, Guo-Shi; Zhang, Jian-Dong; Tang, Wen-Long; Huang, Jian-Huang; Chen, Jin-Shou; Wang, Xing-Fu; Lin, Zhi-Xiong

    2016-06-01

    Interferon-induced protein with tetratricopeptide repeat 1 (IFIT1) plays a key role in growth suppression and apoptosis promotion in cancer cells. Interferon was reported to induce the expression of IFIT1 and inhibit the expression of O-6-methylguanine-DNA methyltransferase (MGMT). This study aimed to investigate the expression of IFIT1, the correlation between IFIT1 and MGMT, and their impact on the clinical outcome in newly diagnosed glioblastoma. The expression of IFIT1 and MGMT and their correlation were investigated in the tumor tissues from 70 patients with newly diagnosed glioblastoma. The effects on progression-free survival and overall survival were evaluated. Of 70 cases, 57 (81.4%) tissue samples showed high expression of IFIT1 by immunostaining. The χ(2) test indicated that the expression of IFIT1 and MGMT was negatively correlated (r = -0.288, P = .016). Univariate and multivariate analyses confirmed high IFIT1 expression as a favorable prognostic indicator for progression-free survival (P = .005 and .017) and overall survival (P = .001 and .001), respectively. Patients with 2 favorable factors (high IFIT1 and low MGMT) had an improved prognosis as compared with others. The results demonstrated significantly increased expression of IFIT1 in newly diagnosed glioblastoma tissue. The negative correlation between IFIT1 and MGMT expression may be triggered by interferon. High IFIT1 can be a predictive biomarker of favorable clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma.

  1. Do Skilled Elementary Teachers Hold Scientific Conceptions and Can They Accurately Predict the Type and Source of Students' Preconceptions of Electric Circuits?

    Science.gov (United States)

    Lin, Jing-Wen

    2016-01-01

    Holding scientific conceptions and having the ability to accurately predict students' preconceptions are a prerequisite for science teachers to design appropriate constructivist-oriented learning experiences. This study explored the types and sources of students' preconceptions of electric circuits. First, 438 grade 3 (9 years old) students were…

  2. Formation of NO from N2/O2 mixtures in a flow reactor: Toward an accurate prediction of thermal NO

    DEFF Research Database (Denmark)

    Abian, Maria; Alzueta, Maria U.; Glarborg, Peter

    2015-01-01

    We have conducted flow reactor experiments for NO formation from N2/O2 mixtures at high temperatures and atmospheric pressure, controlling accurately temperature and reaction time. Under these conditions, atomic oxygen equilibrates rapidly with O2. The experimental results were interpreted......, is recommended for use in kinetic modeling....

  3. Accurate spike time prediction from LFP in monkey visual cortex: A non-linear system identification approach

    NARCIS (Netherlands)

    Kostoglou, K.; Hadjipapas, A.; Lowet, E.; Roberts, M.; de Weerd, P.; Mitsis, G.D.

    2014-01-01

    Aims: The relationship between collective population activity (LFP) and spikes underpins network computation, yet it remains poorly understood. Previous studies utilized pre-defined LFP features to predict spiking from simultaneously recorded LFP, and have reported good prediction of spike bursts bu

  4. Random forest algorithm yields accurate quantitative prediction models of benthic light at intertidal sites affected by toxic Lyngbya majuscula blooms

    NARCIS (Netherlands)

    Kehoe, M.J.; O’ Brien, K.; Grinham, A.; Rissik, D.; Ahern, K.S.; Maxwell, P.

    2012-01-01

    It is shown that targeted high frequency monitoring and modern machine learning methods lead to highly predictive models of benthic light flux. A state-of-the-art machine learning technique was used in conjunction with a high frequency data set to calibrate and test predictive benthic light models
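
    A minimal sketch of the random-forest idea in this record: regress benthic light on a few high-frequency covariates and report held-out R². The feature names (surface irradiance, turbidity, depth) and the toy attenuation model generating the data are assumptions for illustration only, not the study's monitoring dataset.

```python
# Random-forest regression of benthic light flux on synthetic high-frequency covariates.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "surface_irradiance": rng.uniform(0, 2000, n),
    "turbidity": rng.gamma(2.0, 2.0, n),
    "water_depth": rng.uniform(0.2, 3.0, n),
})
# toy Beer-Lambert-style attenuation standing in for the "true" benthic light
df["benthic_light"] = df.surface_irradiance * np.exp(-0.3 * df.turbidity * df.water_depth)

X, y = df.drop(columns="benthic_light"), df["benthic_light"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(r2_score(y_te, rf.predict(X_te)), 3))
```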

  5. Accurately Predicting the Density and Hydrostatic Compression of Hexahydro-1,3,5-Trinitro-1,3,5-Triazine from First Principles

    Institute of Scientific and Technical Information of China (English)

    SONG HuarJie; HUANG Feng-Lei

    2011-01-01

    We predict the densities of crystalline hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) by introducing a factor of (1 + 1.5×10⁻⁴ T) into the wavefunction-based potential of RDX constructed from first principles using the symmetry-adapted perturbation theory and the Williams-Stone-Misquitta method. The predicted values are within an accuracy of 1% of the density from 0 to 430 K and closely reproduced the RDX densities under hydrostatic compression. This work heralds a promising approach to predicting accurately the densities of high explosives at the temperatures and pressures to which they are often subjected, which is a long-standing issue in the field of energetic materials.

  6. PSSP-RFE: accurate prediction of protein structural class by recursive feature extraction from PSI-BLAST profile, physical-chemical property and functional annotations.

    Directory of Open Access Journals (Sweden)

    Liqi Li

    Full Text Available Protein structure prediction is critical to functional annotation of the massively accumulated biological sequences, which prompts an imperative need for the development of high-throughput technologies. As a first and key step in protein structure prediction, protein structural class prediction becomes an increasingly challenging task. Amongst most homological-based approaches, the accuracies of protein structural class prediction are sufficiently high for high similarity datasets, but still far from being satisfactory for low similarity datasets, i.e., below 40% in pairwise sequence similarity. Therefore, we present a novel method for accurate and reliable protein structural class prediction for both high and low similarity datasets. This method is based on Support Vector Machine (SVM) in conjunction with integrated features from position-specific score matrix (PSSM), PROFEAT and Gene Ontology (GO). A feature selection approach, SVM-RFE, is also used to rank the integrated feature vectors through recursively removing the feature with the lowest ranking score. The definitive top features selected by SVM-RFE are input into the SVM engines to predict the structural class of a query protein. To validate our method, jackknife tests were applied to seven widely used benchmark datasets, reaching overall accuracies between 84.61% and 99.79%, which are significantly higher than those achieved by state-of-the-art tools. These results suggest that our method could serve as an accurate and cost-effective alternative to existing methods in protein structural classification, especially for low similarity datasets.

  7. PSSP-RFE: accurate prediction of protein structural class by recursive feature extraction from PSI-BLAST profile, physical-chemical property and functional annotations.

    Science.gov (United States)

    Li, Liqi; Cui, Xiang; Yu, Sanjiu; Zhang, Yuan; Luo, Zhong; Yang, Hua; Zhou, Yue; Zheng, Xiaoqi

    2014-01-01

    Protein structure prediction is critical to functional annotation of the massively accumulated biological sequences, which prompts an imperative need for the development of high-throughput technologies. As a first and key step in protein structure prediction, protein structural class prediction becomes an increasingly challenging task. Amongst most homological-based approaches, the accuracies of protein structural class prediction are sufficiently high for high similarity datasets, but still far from being satisfactory for low similarity datasets, i.e., below 40% in pairwise sequence similarity. Therefore, we present a novel method for accurate and reliable protein structural class prediction for both high and low similarity datasets. This method is based on Support Vector Machine (SVM) in conjunction with integrated features from position-specific score matrix (PSSM), PROFEAT and Gene Ontology (GO). A feature selection approach, SVM-RFE, is also used to rank the integrated feature vectors through recursively removing the feature with the lowest ranking score. The definitive top features selected by SVM-RFE are input into the SVM engines to predict the structural class of a query protein. To validate our method, jackknife tests were applied to seven widely used benchmark datasets, reaching overall accuracies between 84.61% and 99.79%, which are significantly higher than those achieved by state-of-the-art tools. These results suggest that our method could serve as an accurate and cost-effective alternative to existing methods in protein structural classification, especially for low similarity datasets.
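
    The SVM-RFE plus SVM pipeline described above can be sketched as follows; a random matrix stands in for the integrated PSSM/PROFEAT/GO features, and the feature counts, kernel and regularization settings are illustrative assumptions rather than the published configuration.

```python
# Sketch of SVM-RFE feature ranking followed by SVM classification of structural classes.
import numpy as np
from sklearn.svm import SVC, LinearSVC
from sklearn.feature_selection import RFE
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 120))                 # 120 integrated sequence-derived features
y = rng.integers(0, 4, size=300)                # four structural classes (all-α, all-β, α/β, α+β)

# recursively drop the lowest-weighted features, then classify with the retained set
selector = RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=30, step=5)
clf = make_pipeline(selector, SVC(kernel="rbf", C=10, gamma="scale"))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```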

  8. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    Science.gov (United States)

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

    In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules, 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The calculated heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using more expensive W1-F12 and W2-F12 methods on amino acids and G3 results on barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b) in concurrence with previous CBH studies, proves that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.

  9. A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion

    Science.gov (United States)

    Shavalikul, Akamol

    In this current study, the flow field in the Pennsylvania State University Axial Flow Turbine Research Facility (AFTRF) was simulated. This study examined four sets of simulations. The first two sets are for an individual NGV and for an individual rotor. The last two sets use a multiple reference frames approach for a complete turbine stage with two different interface models: a steady circumferential average approach called a mixing plane model, and a time accurate flow simulation approach called a sliding mesh model. The NGV passage flow field was simulated using a three-dimensional Reynolds Averaged Navier-Stokes finite volume solver (RANS) with a standard k–ε turbulence model. The mean flow distributions on the NGV surfaces and endwall surfaces were computed. The numerical solutions indicate that two passage vortices begin to be observed approximately at the mid axial chord of the NGV suction surface. The first vortex is a casing passage vortex which occurs at the corner formed by the NGV suction surface and the casing. This vortex is created by the interaction of the passage flow and the radially inward flow, while the second vortex, the hub passage vortex, is observed near the hub. These two vortices become stronger towards the NGV trailing edge. By comparing the results from the X/Cx = 1.025 plane and the X/Cx = 1.09 plane, it can be concluded that the NGV wake decays rapidly within a short axial distance downstream of the NGV. For the rotor, a set of simulations was carried out to examine the flow fields associated with different pressure side tip extension configurations, which are designed to reduce the tip leakage flow. The simulation results show that significant reductions in tip leakage mass flow rate and aerodynamic loss are possible by using suitable tip platform extensions located near the pressure side corner of the blade tip. The computations used realistic turbine rotor inlet flow conditions in a linear cascade arrangement

  10. Microdosing of a Carbon-14 Labeled Protein in Healthy Volunteers Accurately Predicts Its Pharmacokinetics at Therapeutic Dosages

    NARCIS (Netherlands)

    Vlaming, M.L.; Duijn, E. van; Dillingh, M.R.; Brands, R.; Windhorst, A.D.; Hendrikse, N.H.; Bosgra, S.; Burggraaf, J.; Koning, M.C. de; Fidder, A.; Mocking, J.A.; Sandman, H.; Ligt, R.A. de; Fabriek, B.O.; Pasman, W.J.; Seinen, W.; Alves, T.; Carrondo, M.; Peixoto, C.; Peeters, P.A.; Vaes, W.H.

    2015-01-01

    Preclinical development of new biological entities (NBEs), such as human protein therapeutics, requires considerable expenditure of time and costs. Poor prediction of pharmacokinetics in humans further reduces net efficiency. In this study, we show for the first time that pharmacokinetic data of NBE

  11. Accurate prediction of secreted substrates and identification of a conserved putative secretion signal for type III secretion systems.

    Directory of Open Access Journals (Sweden)

    Ram Samudrala

    2009-04-01

    Full Text Available The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates--effector proteins--are not. We have used a novel computational approach to confidently identify new secreted effectors by integrating protein sequence-based features, including evolutionary measures such as the pattern of homologs in a range of other organisms, G+C content, amino acid composition, and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from the plant pathogen Pseudomonas syringae and validated on a set of effectors from the animal pathogen Salmonella enterica serovar Typhimurium (S. Typhimurium) after eliminating effectors with detectable sequence similarity. We show that this approach can predict known secreted effectors with high specificity and sensitivity. Furthermore, by considering a large set of effectors from multiple organisms, we computationally identify a common putative secretion signal in the N-terminal 20 residues of secreted effectors. This signal can be used to discriminate 46 out of 68 total known effectors from both organisms, suggesting that it is a real, shared signal applicable to many type III secreted effectors. We use the method to make novel predictions of secreted effectors in S. Typhimurium, some of which have been experimentally validated. We also apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis, identifying the majority of known secreted proteins in addition to providing a number of novel predictions. This approach provides a new way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.

  12. Ability of Functional Independence Measure to accurately predict functional outcome of stroke-specific population: Systematic review

    OpenAIRE

    Madeleine Spencer, DPT, PT; Karen Skop, DPT, PT; Kristina Shesko, DPT, PT; Kristen Nollinger, DPT, PT; Douglas Chumney, DPT, PT; Roberta A. Newton, PT, PhD

    2010-01-01

    Stroke is a leading cause of functional impairments. The ability to quantify the functional ability of poststroke patients engaged in a rehabilitation program may assist in prediction of their functional outcome. The Functional Independence Measure (FIM) is widely used and accepted as a functional-level assessment tool that evaluates the functional status of patients throughout the rehabilitation process. From February to March 2009, we searched MEDLINE, Ovid, CINAHL, and EBSCO for full-text ...

  13. Accurate microRNA target prediction using detailed binding site accessibility and machine learning on proteomics data

    Directory of Open Access Journals (Sweden)

    Martin eReczko

    2012-01-01

    Full Text Available MicroRNAs (miRNAs) are a class of small regulatory genes regulating gene expression by targeting messenger RNA. Though computational methods for miRNA target prediction are the prevailing means to analyze their function, they still miss a large fraction of the targeted genes and additionally predict a large number of false positives. Here we introduce a novel algorithm called DIANA-microT-ANN which combines multiple novel target site features through an artificial neural network (ANN) and is trained using recently published high-throughput data measuring the change of protein levels after miRNA overexpression, providing positive and negative targeting examples. The features characterizing each miRNA recognition element include binding structure, conservation level and a specific profile of structural accessibility. The ANN is trained to integrate the features of each recognition element along the 3’ untranslated region into a targeting score, reproducing the relative repression fold change of the protein. Tested on two different sets the algorithm outperforms other widely used algorithms and also predicts a significant number of unique and reliable targets not predicted by the other methods. For 542 human miRNAs DIANA-microT-ANN predicts 120,000 targets not provided by TargetScan 5.0. The algorithm is freely available at http://microrna.gr/microT-ANN.

  14. An Optimized Method for Accurate Fetal Sex Prediction and Sex Chromosome Aneuploidy Detection in Non-Invasive Prenatal Testing.

    Science.gov (United States)

    Wang, Ting; He, Quanze; Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong

    2016-01-01

    Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random reads mapping, may result in higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCAs detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing.
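
    The core filtering idea, discarding reads that map into excluded regions of chromosome Y before computing the Y-read fraction used for fetal sex calling, can be sketched in a few lines. The region coordinates below are hypothetical placeholders, not the six regions identified by the authors.

```python
# Sketch: filter randomly mapped chrY reads by region, then compute the chrY read fraction.
# EXCLUDED_Y_REGIONS is a hypothetical placeholder list of (start, end) coordinates.
EXCLUDED_Y_REGIONS = [(0, 2_650_000), (10_000_000, 10_200_000), (26_600_000, 28_800_000)]

def in_excluded_region(pos: int) -> bool:
    return any(start <= pos < end for start, end in EXCLUDED_Y_REGIONS)

def y_fraction(reads):
    """reads: iterable of (chromosome, mapped_position) tuples."""
    total = y_kept = 0
    for chrom, pos in reads:
        total += 1
        if chrom == "chrY" and not in_excluded_region(pos):
            y_kept += 1
    return y_kept / total if total else 0.0

# toy usage: one chrY read falls in an excluded region and is not counted
reads = [("chr1", 5_000), ("chrY", 1_000_000), ("chrY", 15_000_000), ("chr2", 42)]
print(y_fraction(reads))
```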

  15. A highly accurate protein structural class prediction approach using auto cross covariance transformation and recursive feature elimination.

    Science.gov (United States)

    Li, Xiaowei; Liu, Taigang; Tao, Peiying; Wang, Chunhua; Chen, Lanming

    2015-12-01

    Structural class characterizes the overall folding type of a protein or its domain. Many methods have been proposed to improve the prediction accuracy of protein structural class in recent years, but it is still a challenge for the low-similarity sequences. In this study, we introduce a feature extraction technique based on auto cross covariance (ACC) transformation of position-specific score matrix (PSSM) to represent a protein sequence. Then support vector machine-recursive feature elimination (SVM-RFE) is adopted to select top K features according to their importance and these features are input to a support vector machine (SVM) to conduct the prediction. Performance evaluation of the proposed method is performed using the jackknife test on three low-similarity datasets, i.e., D640, 1189 and 25PDB. By means of this method, the overall accuracies of 97.2%, 96.2%, and 93.3% are achieved on these three datasets, which are higher than those of most existing methods. This suggests that the proposed method could serve as a very cost-effective tool for predicting protein structural class especially for low-similarity datasets.
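
    One common formulation of the auto cross covariance (ACC) transformation of a PSSM is sketched below: for each lag, the cross covariances between mean-centred score columns are averaged over sequence positions, producing a fixed-length vector regardless of protein length. The lag range and toy PSSM are assumptions for illustration, and this fixed-length vector is what would then feed SVM-RFE and the SVM.

```python
# Sketch of an ACC transformation of a position-specific score matrix (PSSM).
import numpy as np

def acc_features(pssm: np.ndarray, max_lag: int = 4) -> np.ndarray:
    """pssm: (L, 20) position-specific score matrix for a protein of length L."""
    L, K = pssm.shape
    centred = pssm - pssm.mean(axis=0)
    feats = []
    for lag in range(1, max_lag + 1):
        # (K, K) matrix of cross covariances between amino-acid columns at this lag
        cov = centred[:-lag].T @ centred[lag:] / (L - lag)
        feats.append(cov.ravel())
    return np.concatenate(feats)          # length = max_lag * 20 * 20

toy_pssm = np.random.default_rng(4).normal(size=(120, 20))
print(acc_features(toy_pssm).shape)       # (1600,)
```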

  16. An Optimized Method for Accurate Fetal Sex Prediction and Sex Chromosome Aneuploidy Detection in Non-Invasive Prenatal Testing.

    Directory of Open Access Journals (Sweden)

    Ting Wang

    Full Text Available Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random reads mapping, may result in higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCAs detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing.

  17. An Optimized Method for Accurate Fetal Sex Prediction and Sex Chromosome Aneuploidy Detection in Non-Invasive Prenatal Testing

    Science.gov (United States)

    Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong

    2016-01-01

    Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random reads mapping, may result in higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCAs detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing. PMID:27441628

  18. Fast and accurate multivariate Gaussian modeling of protein families: predicting residue contacts and protein-interaction partners.

    Directory of Open Access Journals (Sweden)

    Carlo Baldassi

    Full Text Available In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partners in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code.
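
    A minimal sketch of the multivariate Gaussian (direct-coupling-style) approach follows: one-hot encode an alignment, regularize and invert the covariance matrix, and score residue pairs by the norm of their coupling blocks. The random alignment, pseudocount strength and scoring choice are simplifying assumptions, not the authors' exact procedure.

```python
# Gaussian-DCA-style sketch: coupling strength between alignment columns from a precision matrix.
import numpy as np

rng = np.random.default_rng(5)
n_seqs, L, q = 500, 30, 21                     # sequences, alignment length, alphabet size
msa = rng.integers(0, q, size=(n_seqs, L))     # toy multiple sequence alignment

# one-hot encoding, dropping one state per column to keep the covariance well conditioned
X = np.zeros((n_seqs, L * (q - 1)))
for i in range(L):
    for a in range(q - 1):
        X[:, i * (q - 1) + a] = (msa[:, i] == a)

cov = np.cov(X, rowvar=False) + 0.5 * np.eye(X.shape[1])   # pseudocount-style regularization
J = -np.linalg.inv(cov)                                     # coupling (precision) matrix

def pair_score(i, j):
    block = J[i*(q-1):(i+1)*(q-1), j*(q-1):(j+1)*(q-1)]
    return np.linalg.norm(block)                             # Frobenius norm of the coupling block

print(pair_score(0, 1))
```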

  19. From dimer to condensed phases at extreme conditions: accurate predictions of the properties of water by a Gaussian charge polarizable model.

    Science.gov (United States)

    Paricaud, Patrice; Predota, Milan; Chialvo, Ariel A; Cummings, Peter T

    2005-06-22

    Water exhibits many unusual properties that are essential for the existence of life. Water completely changes its character from ambient to supercritical conditions in a way that makes it possible to sustain life at extreme conditions, leading to conjectures that life may have originated in deep-sea vents. Molecular simulation can be very useful in exploring biological and chemical systems, particularly at extreme conditions for which experiments are either difficult or impossible; however this scenario entails an accurate molecular model for water applicable over a wide range of state conditions. Here, we present a Gaussian charge polarizable model (GCPM) based on the model developed earlier by Chialvo and Cummings [Fluid Phase Equilib. 150, 73 (1998)] which is, to our knowledge, the first that satisfies the water monomer and dimer properties, and simultaneously yields very accurate predictions of dielectric, structural, vapor-liquid equilibria, and transport properties, over the entire fluid range. This model would be appropriate for simulating biological and chemical systems at both ambient and extreme conditions. The particularity of the GCPM model is the use of Gaussian distributions instead of points to represent the partial charges on the water molecules. These charge distributions combined with a dipole polarizability and a Buckingham exp-6 potential are found to play a crucial role for the successful and simultaneous predictions of a variety of water properties. This work not only aims at presenting an accurate model for water, but also at proposing strategies to develop classical accurate models for the predictions of structural, dynamic, and thermodynamic properties.
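
    The practical difference between point charges and Gaussian charge distributions is the short-range damping of the Coulomb interaction. The sketch below shows this standard result, assuming normalized spherical Gaussians rho_i(r) = q_i (alpha_i/pi)^(3/2) exp(-alpha_i r^2) in atomic units; the widths and charges are arbitrary illustrative values, not GCPM parameters.

```python
# Coulomb interaction between two Gaussian charge distributions vs. two point charges.
from math import erf, sqrt

def gaussian_coulomb(q1, q2, alpha1, alpha2, r):
    """Electrostatic energy between two spherical Gaussian charges a distance r apart."""
    beta = sqrt(alpha1 * alpha2 / (alpha1 + alpha2))
    return q1 * q2 * erf(beta * r) / r

def point_coulomb(q1, q2, r):
    return q1 * q2 / r

for r in (0.5, 1.0, 3.0):
    print(r, gaussian_coulomb(-0.6, 0.3, 0.8, 0.8, r), point_coulomb(-0.6, 0.3, r))
# At large r the two expressions agree; at short range the Gaussian form stays finite
# (tending to q1*q2*2*beta/sqrt(pi) as r -> 0) instead of diverging.
```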

  20. Dual X-ray absorptiometry accurately predicts carcass composition from live sheep and chemical composition of live and dead sheep.

    Science.gov (United States)

    Pearce, K L; Ferguson, M; Gardner, G; Smith, N; Greef, J; Pethick, D W

    2009-01-01

    Fifty merino wethers (liveweight range from 44 to 81 kg, average of 58.6 kg) were lot fed for 42 d and scanned through a dual X-ray absorptiometry (DXA) as both a live animal and whole carcass (carcass weight range from 15 to 32 kg, average of 22.9 kg), producing measures of total tissue, lean, fat and bone content. The carcasses were subsequently boned out into saleable cuts and the weights and yield of boned out muscle, fat and bone recorded. Chemical lean (protein + water) was highly correlated with DXA carcass lean (r(2)=0.90, RSD=0.674 kg) and moderately with DXA live lean (r(2)=0.72, RSD=1.05 kg). Chemical fat was moderately correlated with DXA carcass fat (r(2)=0.86, RSD=0.42 kg) and DXA live fat (r(2)=0.70, RSD=0.71 kg). DXA carcass and live animal bone were not well correlated with chemical ash (both r(2)=0.38, RSD=0.3). DXA carcass lean was moderately well predicted from DXA live lean with the inclusion of bodyweight in the regression (r(2)=0.82, RSD=0.87 kg). DXA carcass fat was well predicted from DXA live fat (r(2)=0.86, RSD=0.54 kg). DXA carcass lean and DXA carcass fat, with the inclusion of carcass weight in the regression, significantly predicted boned out muscle (r(2)=0.97, RSD=0.32 kg) and fat weight, respectively (r(2)=0.92, RSD=0.34 kg). The use of DXA live lean and DXA live fat with the inclusion of bodyweight to predict boned out muscle (r(2)=0.83, RSD=0.75 kg) and fat (r(2)=0.86, RSD=0.46 kg) weight, respectively, was moderate. The prediction of boned out muscle and fat yield from DXA carcass and live lean and fat was not as well correlated as for weight. The future for the DXA will lie in the determination of body composition in live animals and carcasses in research experiments, but there is potential for the DXA to be used as an online carcass grading system.

  1. Accurate prediction of protein structural classes by incorporating predicted secondary structure information into the general form of Chou's pseudo amino acid composition.

    Science.gov (United States)

    Kong, Liang; Zhang, Lichao; Lv, Jinfeng

    2014-03-07

    Extracting good representation from protein sequence is fundamental for protein structural classes prediction tasks. In this paper, we propose a novel and powerful method to predict protein structural classes based on the predicted secondary structure information. At the feature extraction stage, a 13-dimensional feature vector is extracted to characterize general contents and spatial arrangements of the secondary structural elements of a given protein sequence. Specially, four segment-level features are designed to elevate discriminative ability for proteins from the α/β and α+β classes. After the features are extracted, a multi-class non-linear support vector machine classifier is used to implement protein structural classes prediction. We report extensive experiments comparing the proposed method to the state-of-the-art in protein structural classes prediction on three widely used low-similarity benchmark datasets: FC699, 1189 and 640. Our method achieves competitive performance on prediction accuracies, especially for the overall prediction accuracies which have exceeded the best reported results on all of the three datasets.

  2. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    Science.gov (United States)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  3. Women's age and embryo developmental speed accurately predict clinical pregnancy after single vitrified-warmed blastocyst transfer.

    Science.gov (United States)

    Kato, Keiichi; Ueno, Satoshi; Yabuuchi, Akiko; Uchiyama, Kazuo; Okuno, Takashi; Kobayashi, Tamotsu; Segawa, Tomoya; Teramoto, Shokichi

    2014-10-01

    The aim of this study was to establish a simple, objective blastocyst grading system using women's age and embryo developmental speed to predict clinical pregnancy after single vitrified-warmed blastocyst transfer. A 6-year retrospective cohort study was conducted in a private infertility centre. A total of 7341 single vitrified-warmed blastocyst transfer cycles were included, divided into those carried out between 2006 and 2011 (6046 cycles) and 2012 (1295 cycles). Clinical pregnancy rate, ongoing pregnancy rate and delivery rates were stratified by women's age (149 h) as embryo developmental speed. In all the age groups, clinical pregnancy rate, ongoing pregnancy rate and delivery rates decreased as the embryo developmental speed decreased (P pregnancy rates observed in the 2006-2011 cohort. Subsequently, the novel grading score was validated in the 2012 cohort (1295 cycles), finding an excellent association. In conclusion, we established a novel blastocyst grading system using women's age and embryo developmental speed as objective parameters.

  4. Predictive value of multi-detector computed tomography for accurate diagnosis of serous cystadenoma: Radiologic-pathologic correlation

    Institute of Scientific and Technical Information of China (English)

    Anjuli A Shah; Nisha I Sainani; Avinash Kambadakone Ramesh; Zarine K Shah; Vikram Deshpande; Peter F Hahn; Dushyant V Sahani

    2009-01-01

    AIM: To identify multi-detector computed tomography (MDCT) features most predictive of serous cystadenomas (SCAs), correlating with histopathology, and to study the impact of cyst size and MDCT technique on reader performance. METHODS: The MDCT scans of 164 patients with surgically verified pancreatic cystic lesions were reviewed by two readers to study the predictive value of various morphological features for establishing a diagnosis of SCAs. Accuracy in lesion characterization and reader confidence were correlated with lesion size (≤3 cm or ≥3 cm) and scanning protocols (dedicated vs routine). RESULTS: 28/164 cysts (mean size, 39 mm; range, 8-92 mm) were diagnosed as SCA on pathology. The MDCT features predictive of diagnosis of SCA were microcystic appearance (22/28, 78.6%), surface lobulations (25/28, 89.3%) and central scar (9/28, 32.4%). Stepwise logistic regression analysis showed that only microcystic appearance was significant for CT diagnosis of SCA (P=0.0001). The sensitivity, specificity and PPV of central scar and of combined microcystic appearance and lobulations were 32.4%/100%/100% and 68%/100%/100%, respectively. Reader confidence was higher for lesions >3 cm (P=0.02) and for MDCT scans performed using thin collimation (1.25-2.5 mm) compared to routine 5 mm collimation exams (P>0.05). CONCLUSION: Central scar on MDCT is diagnostic of SCA but is seen in only one third of SCAs. Microcystic morphology is the most significant CT feature in diagnosis of SCA. A combination of microcystic appearance and surface lobulations offers accuracy comparable to central scar with higher sensitivity.

  5. Accurate prediction of secreted substrates and identification of a conserved putative secretion signal for type III secretion systems

    Energy Technology Data Exchange (ETDEWEB)

    Samudrala, Ram; Heffron, Fred; McDermott, Jason E.

    2009-04-24

    The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates, effector proteins, are not. We have used a machine learning approach to identify new secreted effectors. The method integrates evolutionary measures, such as the pattern of homologs in a range of other organisms, and sequence-based features, such as G+C content, amino acid composition and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from Salmonella typhimurium and validated on a corresponding set of effectors from Pseudomonas syringae, after eliminating effectors with detectable sequence similarity. The method was able to identify all of the known effectors in P. syringae with a specificity of 84% and sensitivity of 82%. The reciprocal validation, training on P. syringae and validating on S. typhimurium, gave similar results with a specificity of 86% when the sensitivity level was 87%. These results show that type III effectors in disparate organisms share common features. We found that maximal performance is attained by including an N-terminal sequence of only 30 residues, which agrees with previous studies indicating that this region contains the secretion signal. We then used the method to define the most important residues in this putative secretion signal. Finally, we present novel predictions of secreted effectors in S. typhimurium, some of which have been experimentally validated, and apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis. This approach is a novel and effective way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.
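
    The sequence-based features named above (amino-acid composition of the N-terminal 30 residues and G+C content) are straightforward to compute; the sketch below shows hypothetical helper functions whose output would then feed a classifier. The example sequences are made up and the helpers are assumptions for illustration, not the authors' feature pipeline.

```python
# Toy computation of two of the sequence features used for effector prediction.
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def n_terminal_composition(protein_seq: str, n: int = 30):
    """Fractional amino-acid composition of the first n residues."""
    head = protein_seq[:n]
    counts = Counter(head)
    return [counts.get(aa, 0) / len(head) for aa in AMINO_ACIDS]

def gc_content(dna_seq: str) -> float:
    dna_seq = dna_seq.upper()
    return (dna_seq.count("G") + dna_seq.count("C")) / len(dna_seq)

protein = "MSISSSHNMQVASNDTRPLAAKLVTEGSELQ"          # toy effector-like sequence
gene = "ATGAGCATTAGCTCAAGCCACAACATGCAGGTC"          # toy coding sequence
features = n_terminal_composition(protein) + [gc_content(gene)]
print(len(features), features[-1])
```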

  6. Accurate Predictions of Mean Geomagnetic Dipole Excursion and Reversal Frequencies, Mean Paleomagnetic Field Intensity, and the Radius of Earth's Core Using McLeod's Rule

    Science.gov (United States)

    Voorhies, Coerte V.; Conrad, Joy

    1996-01-01

    The geomagnetic spatial power spectrum R_n(r) is the mean square magnetic induction represented by degree n spherical harmonic coefficients of the internal scalar potential averaged over the geocentric sphere of radius r. McLeod's Rule for the magnetic field generated by Earth's core geodynamo says that the expected core surface power spectrum ⟨R_nc(c)⟩ is inversely proportional to (2n + 1) for 1 < n ≤ N_E. McLeod's Rule is verified by locating Earth's core with main field models of Magsat data; the estimated core radius of 3485 km is close to the seismologic value for c of 3480 km. McLeod's Rule and similar forms are then calibrated with the model values of R_n for 3 ≤ n ≤ 12. Extrapolation to the degree 1 dipole predicts the expectation value of Earth's dipole moment to be about 5.89 x 10^22 A m^2 rms (74.5% of the 1980 value) and the expected geomagnetic intensity to be about 35.6 μT rms at Earth's surface. Archeo- and paleomagnetic field intensity data show these and related predictions to be reasonably accurate. The probability distribution χ² with 2n+1 degrees of freedom is assigned to (2n + 1)R_nc/⟨R_nc⟩. Extending this to the dipole implies that an exceptionally weak absolute dipole moment (≤20% of the 1980 value) will exist during 2.5% of geologic time. The mean duration for such major geomagnetic dipole power excursions, one quarter of which feature durable axial dipole reversal, is estimated from the modern dipole power time-scale and the statistical model of excursions. The resulting mean excursion duration of 2767 years forces us to predict an average of 9.04 excursions per million years, 2.26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. Paleomagnetic data show these predictions to be quite accurate. McLeod's Rule led to accurate predictions of Earth's core radius, mean paleomagnetic field

  7. Integrating metabolic performance, thermal tolerance, and plasticity enables for more accurate predictions on species vulnerability to acute and chronic effects of global warming.

    Science.gov (United States)

    Magozzi, Sarah; Calosi, Piero

    2015-01-01

    Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach including the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions on species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to more accurately elucidate relative species vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days, their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of Reaction Norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as a high metabolic control. In general, integrating multiple proxies for species physiological acute and chronic responses to increasing
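
    The Q10 metabolic coefficient mentioned above has a standard closed form; the helper below computes it from rates measured at two temperatures, with made-up example values for illustration.

```python
# Q10 temperature coefficient from two metabolic-rate measurements.
def q10(rate1: float, temp1: float, rate2: float, temp2: float) -> float:
    """Q10 = (R2/R1) ** (10 / (T2 - T1)), temperatures in degrees Celsius."""
    return (rate2 / rate1) ** (10.0 / (temp2 - temp1))

# e.g. oxygen consumption rising from 2.0 to 4.4 (arbitrary units) between 10 and 20 degrees C
print(q10(2.0, 10.0, 4.4, 20.0))   # approximately 2.2
```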

  8. Microdosing of a Carbon-14 Labeled Protein in Healthy Volunteers Accurately Predicts Its Pharmacokinetics at Therapeutic Dosages.

    Science.gov (United States)

    Vlaming, M L H; van Duijn, E; Dillingh, M R; Brands, R; Windhorst, A D; Hendrikse, N H; Bosgra, S; Burggraaf, J; de Koning, M C; Fidder, A; Mocking, J A J; Sandman, H; de Ligt, R A F; Fabriek, B O; Pasman, W J; Seinen, W; Alves, T; Carrondo, M; Peixoto, C; Peeters, P A M; Vaes, W H J

    2015-08-01

    Preclinical development of new biological entities (NBEs), such as human protein therapeutics, requires considerable expenditure of time and costs. Poor prediction of pharmacokinetics in humans further reduces net efficiency. In this study, we show for the first time that pharmacokinetic data of NBEs in humans can be successfully obtained early in the drug development process by the use of microdosing in a small group of healthy subjects combined with ultrasensitive accelerator mass spectrometry (AMS). After only minimal preclinical testing, we performed a first-in-human phase 0/phase 1 trial with a human recombinant therapeutic protein (RESCuing Alkaline Phosphatase, human recombinant placental alkaline phosphatase [hRESCAP]) to assess its safety and kinetics. Pharmacokinetic analysis showed dose linearity from microdose (53 μg) [(14) C]-hRESCAP to therapeutic doses (up to 5.3 mg) of the protein in healthy volunteers. This study demonstrates the value of a microdosing approach in a very small cohort for accelerating the clinical development of NBEs.

  9. A new accurate ground-state potential energy surface of ethylene and predictions for rotational and vibrational energy levels

    Energy Technology Data Exchange (ETDEWEB)

    Delahaye, Thibault, E-mail: thibault.delahaye@univ-reims.fr; Rey, Michaël, E-mail: michael.rey@univ-reims.fr; Tyuterev, Vladimir G. [Groupe de Spectrométrie Moléculaire et Atmosphérique, UMR CNRS 7331, BP 1039, F-51687, Reims Cedex 2 (France); Nikitin, Andrei [Laboratory of Theoretical Spectroscopy, Institute of Atmospheric Optics, Russian Academy of Sciences, 634055 Tomsk, Russia and Quamer, State University of Tomsk (Russian Federation); Szalay, Péter G. [Institute of Chemistry, Eötvös Loránd University, P.O. Box 32, H-1518 Budapest (Hungary)

    2014-09-14

    In this paper we report a new ground state potential energy surface for ethylene (ethene) C₂H₄ obtained from extended ab initio calculations. The coupled-cluster approach with the perturbative inclusion of the connected triple excitations CCSD(T) and correlation consistent polarized valence basis set cc-pVQZ was employed for computations of electronic ground state energies. The fit of the surface included 82 542 nuclear configurations using a sixth order expansion in curvilinear symmetry-adapted coordinates involving 2236 parameters. A good convergence for variationally computed vibrational levels of the C₂H₄ molecule was obtained with a RMS(Obs.–Calc.) deviation of 2.7 cm⁻¹ for fundamental band centers and 5.9 cm⁻¹ for vibrational bands up to 7800 cm⁻¹. Large scale vibrational and rotational calculations for ¹²C₂H₄, ¹³C₂H₄, and ¹²C₂D₄ isotopologues were performed using this new surface. Energy levels for J = 20 up to 6000 cm⁻¹ are in good agreement with observations. This represents a considerable improvement with respect to available global predictions of vibrational levels of ¹³C₂H₄ and ¹²C₂D₄ and rovibrational levels of ¹²C₂H₄.

  10. Infectious titres of sheep scrapie and bovine spongiform encephalopathy agents cannot be accurately predicted from quantitative laboratory test results.

    Science.gov (United States)

    González, Lorenzo; Thorne, Leigh; Jeffrey, Martin; Martin, Stuart; Spiropoulos, John; Beck, Katy E; Lockey, Richard W; Vickery, Christopher M; Holder, Thomas; Terry, Linda

    2012-11-01

    It is widely accepted that abnormal forms of the prion protein (PrP) are the best surrogate marker for the infectious agent of prion diseases and, in practice, the detection of such disease-associated (PrP(d)) and/or protease-resistant (PrP(res)) forms of PrP is the cornerstone of diagnosis and surveillance of the transmissible spongiform encephalopathies (TSEs). Nevertheless, some studies question the consistent association between infectivity and abnormal PrP detection. To address this discrepancy, 11 brain samples of sheep affected with natural scrapie or experimental bovine spongiform encephalopathy were selected on the basis of the magnitude and predominant types of PrP(d) accumulation, as shown by immunohistochemical (IHC) examination; contra-lateral hemi-brain samples were inoculated at three different dilutions into transgenic mice overexpressing ovine PrP and were also subjected to quantitative analysis by three biochemical tests (BCTs). Six samples gave 'low' infectious titres (10⁶·⁵ to 10⁶·⁷ LD₅₀ g⁻¹) and five gave 'high titres' (10⁸·¹ to ≥ 10⁸·⁷ LD₅₀ g⁻¹) and, with the exception of the Western blot analysis, those two groups tended to correspond with samples with lower PrP(d)/PrP(res) results by IHC/BCTs. However, no statistical association could be confirmed due to high individual sample variability. It is concluded that although detection of abnormal forms of PrP by laboratory methods remains useful to confirm TSE infection, infectivity titres cannot be predicted from quantitative test results, at least for the TSE sources and host PRNP genotypes used in this study. Furthermore, the near inverse correlation between infectious titres and Western blot results (high protease pre-treatment) argues for a dissociation between infectivity and PrP(res).

  11. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    Science.gov (United States)

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS)  receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to have removal of outliers. Navigation data that satisfy stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform a lot of field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-arts on the same dataset and the new data fusion method is practically applied in our driverless car. PMID:26927108

  12. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    Science.gov (United States)

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS)  receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to have removal of outliers. Navigation data that satisfy stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform a lot of field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-arts on the same dataset and the new data fusion method is practically applied in our driverless car.
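
    A hedged sketch of the fusion logic described above: several simple autoregressive predictors of different orders forecast the next position, and a new GPS fix is rejected when it falls outside a grid-sized consensus band around those forecasts (dead reckoning would then bridge the gap). The AR orders, grid size and synthetic one-dimensional track are illustrative assumptions, not the paper's ARMA models or parameters.

```python
# Consensus check of a GPS fix against a set of simple autoregressive predictors.
import numpy as np

def ar_predict(series: np.ndarray, order: int) -> float:
    """Least-squares AR(order) one-step-ahead prediction."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(series[-order:] @ coeffs)

rng = np.random.default_rng(6)
track = np.cumsum(rng.normal(1.0, 0.05, size=200))        # smooth 1-D position track

predictions = [ar_predict(track, p) for p in (2, 3, 5)]   # a small set of predictive models
grid_size = 0.5                                            # consensus tolerance (metres)

new_gps_fix = track[-1] + 3.0                              # simulated multipath jump
consistent = all(abs(new_gps_fix - p) < grid_size for p in predictions)
print("accept GPS fix" if consistent else "reject fix, fall back to DR/IMU")
```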

  13. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    Directory of Open Access Journals (Sweden)

    Shiyao Wang

    2016-02-01

    Full Text Available A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car.

  14. Profile-QSAR: a novel meta-QSAR method that combines activities across the kinase family to accurately predict affinity, selectivity, and cellular activity.

    Science.gov (United States)

    Martin, Eric; Mukherjee, Prasenjit; Sullivan, David; Jansen, Johanna

    2011-08-22

    Profile-QSAR is a novel 2D predictive model building method for kinases. This "meta-QSAR" method models the activity of each compound against a new kinase target as a linear combination of its predicted activities against a large panel of 92 previously studied kinases comprising 115 assays. Profile-QSAR starts with a sparse incomplete kinase by compound (KxC) activity matrix, used to generate Bayesian QSAR models for the 92 "basis-set" kinases. These Bayesian QSARs generate a complete "synthetic" KxC activity matrix of predictions. These synthetic activities are used as "chemical descriptors" to train partial least-squares (PLS) models, from modest amounts of medium-throughput screening data, for predicting activity against new kinases. The Profile-QSAR predictions for the 92 kinases (115 assays) gave a median external R²(ext) = 0.59 on 25% held-out test sets. The method has proven accurate enough to predict pairwise kinase selectivities with a median correlation of R²(ext) = 0.61 for 958 kinase pairs with at least 600 common compounds. It has been further expanded by adding a "C(k)XC" cellular activity matrix to the KxC matrix to predict cellular activity for 42 kinase-driven cellular assays with median R²(ext) = 0.58 for 24 target modulation assays and R²(ext) = 0.41 for 18 cell proliferation assays. The 2D Profile-QSAR, along with the 3D Surrogate AutoShim, are the foundations of an internally developed iterative medium-throughput screening (IMTS) methodology for virtual screening (VS) of compound archives as an alternative to experimental high-throughput screening (HTS). The method has been applied to 20 actual prospective kinase projects. Biological results have so far been obtained in eight of them. Q² values ranged from 0.3 to 0.7. Hit-rates at 10 uM for experimentally tested compounds varied from 25% to 80%, except in K5, which was a special case aimed specifically at finding "type II" binders, where none of the compounds were predicted to be
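
    A minimal sketch of the Profile-QSAR idea follows: per-kinase base models fill in a complete "synthetic" activity matrix, which is then used as the descriptor set for a PLS model of a new kinase. Ridge regression stands in for the Bayesian QSAR base learners, and all data are randomly generated placeholders rather than the kinase panel described above.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_compounds, n_fps, n_basis_kinases = 500, 256, 92

fps = rng.integers(0, 2, size=(n_compounds, n_fps)).astype(float)   # toy fingerprints
activities = rng.normal(size=(n_compounds, n_basis_kinases))        # toy pIC50 panel

# Step 1: one base QSAR per basis kinase (Ridge stands in for the Bayesian QSAR).
base_models = [Ridge(alpha=1.0).fit(fps, activities[:, k]) for k in range(n_basis_kinases)]

# Step 2: the complete "synthetic" kinase-by-compound matrix of predictions.
synthetic = np.column_stack([m.predict(fps) for m in base_models])

# Step 3: PLS from synthetic activities to a modest screen against a new kinase.
new_kinase = rng.normal(size=n_compounds)                 # toy measured activities
train = rng.choice(n_compounds, size=100, replace=False)  # "medium-throughput" subset
pls = PLSRegression(n_components=5).fit(synthetic[train], new_kinase[train])
predicted = pls.predict(synthetic).ravel()                # profile-based predictions
```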

  15. Multireference correlation consistent composite approach [MR-ccCA]: toward accurate prediction of the energetics of excited and transition state chemistry.

    Science.gov (United States)

    Oyedepo, Gbenga A; Wilson, Angela K

    2010-08-26

    The correlation consistent Composite Approach, ccCA [Deyonker, N. J.; Cundari, T. R.; Wilson, A. K. J. Chem. Phys. 2006, 124, 114104], has been demonstrated to predict accurate thermochemical properties of chemical species that can be described by a single configurational reference state, and at reduced computational cost, as compared with ab initio methods such as CCSD(T) used in combination with large basis sets. We have developed three variants of a multireference equivalent of this successful theoretical model. The method, called the multireference correlation consistent composite approach (MR-ccCA), is designed to predict the thermochemical properties of reactive intermediates, excited state species, and transition states to within chemical accuracy (e.g., 1 kcal/mol for enthalpies of formation) of reliable experimental values. In this study, we have demonstrated the utility of MR-ccCA: (1) in the determination of the adiabatic singlet-triplet energy separations and enthalpies of formation for the ground states for a set of diradicals and unsaturated compounds, and (2) in the prediction of energetic barriers to internal rotation in ethylene and its heavier congener, disilene. Additionally, we have utilized MR-ccCA to predict the enthalpies of formation of the low-lying excited states of all the species considered. MR-ccCA is shown to give quantitative results without reliance upon empirically derived parameters, making it suitable for application to study novel chemical systems with significant nondynamical correlation effects.

  16. Learning a weighted sequence model of the nucleosome core and linker yields more accurate predictions in Saccharomyces cerevisiae and Homo sapiens.

    Science.gov (United States)

    Reynolds, Sheila M; Bilmes, Jeff A; Noble, William Stafford

    2010-07-08

    DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono-, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has previously been used to predict nucleosome positioning from sequence (301 base pairs, centered at the position to be scored) with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the bulk of the
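
    The feature construction can be sketched as follows: a 301-bp window is summarized by its mono-, di- and tri-nucleotide content and scored with a weighted linear model. The aggregate (position-independent) frequencies and random weights below are simplifications; the published model uses position-specific features and discriminatively learned weights.

```python
import numpy as np
from itertools import product

BASES = "ACGT"
KMERS = {k: ["".join(p) for p in product(BASES, repeat=k)] for k in (1, 2, 3)}

def window_features(seq):
    """Aggregate mono-, di- and tri-nucleotide frequencies of a 301-bp window."""
    assert len(seq) == 301
    feats = []
    for k in (1, 2, 3):
        counts = {kmer: 0 for kmer in KMERS[k]}
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if kmer in counts:
                counts[kmer] += 1
        total = max(sum(counts.values()), 1)
        feats.extend(counts[kmer] / total for kmer in KMERS[k])
    return np.array(feats)

rng = np.random.default_rng(1)
weights = rng.normal(size=4 + 16 + 64)                  # stand-in for learned weights
window = "".join(rng.choice(list(BASES), size=301))     # toy 301-bp window
dyad_score = float(window_features(window) @ weights)   # higher = more dyad-like
```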

  17. Learning a weighted sequence model of the nucleosome core and linker yields more accurate predictions in Saccharomyces cerevisiae and Homo sapiens.

    Directory of Open Access Journals (Sweden)

    Sheila M Reynolds

    Full Text Available DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono-, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has previously been used to predict nucleosome positioning from sequence (301 base pairs, centered at the position to be scored) with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the

  18. Metabolite signal identification in accurate mass metabolomics data with MZedDB, an interactive m/z annotation tool utilising predicted ionisation behaviour 'rules'

    Directory of Open Access Journals (Sweden)

    Snowdon Stuart

    2009-07-01

    Full Text Available Abstract Background: Metabolomics experiments using Mass Spectrometry (MS) technology measure the mass to charge ratio (m/z) and intensity of ionised molecules in crude extracts of complex biological samples to generate high dimensional metabolite 'fingerprint' or metabolite 'profile' data. High resolution MS instruments perform routinely with a mass accuracy of [...]. Results: Metabolite 'structures' harvested from publicly accessible databases were converted into a common format to generate a comprehensive archive in MZedDB. 'Rules' were derived from chemical information that allowed MZedDB to generate a list of adducts and neutral loss fragments putatively able to form for each structure and calculate, on the fly, the exact molecular weight of every potential ionisation product to provide targets for annotation searches based on accurate mass. We demonstrate that data matrices representing populations of ionisation products generated from different biological matrices contain a large proportion (sometimes > 50%) of molecular isotopes, salt adducts and neutral loss fragments. Correlation analysis of ESI-MS data features confirmed the predicted relationships of m/z signals. An integrated isotope enumerator in MZedDB allowed verification of exact isotopic pattern distributions to corroborate experimental data. Conclusion: We conclude that although ultra-high accurate mass instruments provide major insight into the chemical diversity of biological extracts, the facile annotation of a large proportion of signals is not possible by simple, automated query of current databases using computed molecular formulae. Parameterising MZedDB to take into account predicted ionisation behaviour and the biological source of any sample improves greatly both the frequency and accuracy of potential annotation 'hits' in ESI-MS data.
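
    The 'rule'-based generation of annotation targets can be sketched as below: each ionisation rule adds a mass shift (and charge) to a neutral monoisotopic mass to yield an exact m/z target. The rule list here is a small illustrative subset using standard mass constants, not MZedDB's actual rule set.

```python
PROTON = 1.007276  # mass of a proton, Da

# (name, mass shift added to the neutral molecule M, charge) -- illustrative ESI rules
ADDUCT_RULES = [
    ("[M+H]+",     PROTON,              1),
    ("[M+Na]+",    22.989218,           1),
    ("[M+K]+",     38.963158,           1),
    ("[M+2H]2+",   2 * PROTON,          2),
    ("[M-H]-",    -PROTON,              1),
    ("[M+H-H2O]+", PROTON - 18.010565,  1),  # neutral loss of water
]

def annotation_targets(neutral_mass):
    """Exact m/z targets for each ionisation 'rule' applied to a neutral mass."""
    return {name: (neutral_mass + shift) / z for name, shift, z in ADDUCT_RULES}

# Example: glucose, monoisotopic mass 180.06339 Da
for name, mz in annotation_targets(180.06339).items():
    print(f"{name:12s} {mz:10.4f}")
```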

  19. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions.

    Science.gov (United States)

    Bendl, Jaroslav; Musil, Miloš; Štourač, Jan; Zendulka, Jaroslav; Damborský, Jiří; Brezovský, Jan

    2016-05-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations. To
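
    A schematic sketch of the category-aware consensus idea: each tool's raw score is binarized at a category-specific threshold and the binary calls are combined by majority vote. The thresholds, tool list and score orientations below are hypothetical placeholders, not the published category-optimal values or the exact consensus scheme.

```python
import numpy as np

# Hypothetical per-category decision thresholds for each tool (not the published values)
THRESHOLDS = {
    "missense":   {"CADD": 20.0, "DANN": 0.95, "FATHMM": -1.5, "FunSeq2": 1.5, "GWAVA": 0.4},
    "regulatory": {"CADD": 10.0, "DANN": 0.80, "FATHMM": -1.0, "FunSeq2": 1.0, "GWAVA": 0.5},
}

def consensus(scores, category):
    """Binarize each tool at its category-specific threshold and majority-vote.

    `scores` maps tool name -> raw score; FATHMM is treated as 'lower = more
    deleterious' here purely for illustration.
    """
    votes = []
    for tool, cutoff in THRESHOLDS[category].items():
        value = scores[tool]
        deleterious = value <= cutoff if tool == "FATHMM" else value >= cutoff
        votes.append(deleterious)
    fraction = float(np.mean(votes))
    return ("deleterious" if fraction >= 0.5 else "neutral"), fraction

print(consensus({"CADD": 24.1, "DANN": 0.97, "FATHMM": -2.2, "FunSeq2": 0.8, "GWAVA": 0.3},
                "missense"))
```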

  20. Discovery of a general method of solving the Schrödinger and dirac equations that opens a way to accurately predictive quantum chemistry.

    Science.gov (United States)

    Nakatsuji, Hiroshi

    2012-09-18

    Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve SE and DE for atoms and molecules that included many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter simpler idea led to immediate and surprisingly accurate solution for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. These basis functions are called complement

  1. Calibrating transition-metal energy levels and oxygen bands in first-principles calculations: Accurate prediction of redox potentials and charge transfer in lithium transition-metal oxides

    Science.gov (United States)

    Seo, Dong-Hwa; Urban, Alexander; Ceder, Gerbrand

    2015-09-01

    Transition-metal (TM) oxides play an increasingly important role in technology today, including applications such as catalysis, solar energy harvesting, and energy storage. In many of these applications, the details of their electronic structure near the Fermi level are critically important for their properties. We propose a first-principles-based computational methodology for the accurate prediction of oxygen charge transfer in TM oxides and lithium TM (Li-TM) oxides. To obtain accurate electronic structures, the Heyd-Scuseria-Ernzerhof (HSE06) hybrid functional is adopted, and the amount of exact Hartree-Fock exchange (mixing parameter) is adjusted to reproduce reference band gaps. We show that the HSE06 functional with optimal mixing parameter yields not only improved electronic densities of states, but also better energetics (Li-intercalation voltages) for LiCoO2 and LiNiO2 as compared to the generalized gradient approximation (GGA), Hubbard-U-corrected GGA (GGA+U), and standard HSE06. We find that the optimal mixing parameters for TM oxides are system specific and correlate with the covalency (ionicity) of the TM species. The strong covalent (ionic) nature of TM-O bonding leads to lower (higher) optimal mixing parameters. We find that optimized HSE06 functionals predict stronger hybridization of the Co 3d and O 2p orbitals as compared to GGA, resulting in a greater contribution from oxygen states to charge compensation upon delithiation in LiCoO2. We also find that the band gaps of Li-TM oxides increase linearly with the mixing parameter, enabling the straightforward determination of optimal mixing parameters based on GGA (α = 0.0) and HSE06 (α = 0.25) calculations. Our results also show that G0W0@GGA+U band gaps of TM oxides (MO, M = Mn, Co, Ni) and LiCoO2 agree well with experimental references, suggesting that G0W0 calculations can be used as a reference for the calibration of the mixing parameter in cases when no experimental band gap has been
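
    Because the band gap varies essentially linearly with the mixing parameter, the optimal exact-exchange fraction can be interpolated from just the GGA (α = 0.0) and standard HSE06 (α = 0.25) gaps. A small sketch of that interpolation, with hypothetical gap values, is given below.

```python
def optimal_mixing(gap_gga, gap_hse06, gap_reference, alpha_hse06=0.25):
    """Linear interpolation/extrapolation of the exact-exchange fraction.

    Assumes the band gap varies linearly with the mixing parameter alpha,
    passing through (0, gap_gga) and (alpha_hse06, gap_hse06).
    """
    slope = (gap_hse06 - gap_gga) / alpha_hse06
    return (gap_reference - gap_gga) / slope

# Hypothetical gaps (eV): GGA 1.1, HSE06 2.3, experimental/G0W0 reference 2.7
print(optimal_mixing(1.1, 2.3, 2.7))   # -> alpha ~= 0.33
```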

  2. IMPre: an accurate and efficient software for prediction of T- and B-cell receptor germline genes and alleles from rearranged repertoire data

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2016-11-01

    Full Text Available Large-scale study of the properties of T-cell receptor (TCR) and B-cell receptor (BCR) repertoires through next-generation sequencing is providing excellent insights into the understanding of adaptive immune responses. Variable (Diversity) Joining (V(D)J) germline genes and alleles must be characterized in detail to facilitate repertoire analyses. However, most species do not have well-characterized TCR/BCR germline genes because of their high homology. Also, more germline alleles are required for humans and other species, which limits the capacity for studying immune repertoires. Herein, we developed Immune Germline Prediction (IMPre), a tool for predicting germline V/J genes and alleles using deep-sequencing data derived from TCR/BCR repertoires. We developed a new algorithm, Seed_Clust, for clustering, produced a multiway tree for assembly and optimized the sequence according to the characteristics of rearrangement. We trained IMPre on human samples of T-cell receptor beta (TRB) and immunoglobulin heavy chain (IGH), and then tested it on additional human samples. Accuracy of 97.7%, 100%, 92.9% and 100% was obtained for TRBV, TRBJ, IGHV and IGHJ, respectively. Analyses of subsampling performance for these samples showed IMPre to be robust using different data quantities. Subsequently, IMPre was tested on samples from rhesus monkeys and human long sequences: the highly accurate results demonstrated IMPre to be stable with animal and multiple data types. With rapid accumulation of high-throughput sequence data for TCR and BCR repertoires, IMPre can be applied broadly for obtaining novel genes and a large number of novel alleles. IMPre is available at https://github.com/zhangwei2015/IMPre.

  3. Highly accurate chemical formula prediction tool utilizing high-resolution mass spectra, MS/MS fragmentation, heuristic rules, and isotope pattern matching.

    Science.gov (United States)

    Pluskal, Tomáš; Uehara, Taisuke; Yanagida, Mitsuhiro

    2012-05-15

    Mass spectrometry is commonly applied to qualitatively and quantitatively profile small molecules, such as peptides, metabolites, or lipids. Modern mass spectrometers provide accurate measurements of mass-to-charge ratios of ions, with errors as low as 1 ppm. Even such high mass accuracy, however, is not sufficient to determine the unique chemical formula of each ion, and additional algorithms are necessary. Here we present a universal software tool for predicting chemical formulas from high-resolution mass spectrometry data, developed within the MZmine 2 framework. The tool is based on the use of a combination of heuristic techniques, including MS/MS fragmentation analysis and isotope pattern matching. The performance of the tool was evaluated using a real metabolomic data set obtained with the Orbitrap MS detector. The true formula was correctly determined as the highest-ranking candidate for 79% of the tested compounds. The novel isotope pattern-scoring algorithm outperformed a previously published method in 64% of the tested Orbitrap spectra. The software described in this manuscript is freely available and its source code can be accessed within the MZmine 2 source code repository.
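
    The first stage of such a formula-prediction tool can be sketched as a brute-force enumeration of elemental compositions whose exact mass falls within a ppm tolerance of the measured neutral mass; the heuristic rules, MS/MS and isotope-pattern scoring described above would then rank the candidates. The element limits and CHNO-only alphabet below are illustrative choices, not the tool's actual settings.

```python
from itertools import product

MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def candidate_formulas(neutral_mass, ppm=3.0, max_atoms=(30, 60, 10, 15)):
    """Enumerate CHNO formulas whose monoisotopic mass lies within `ppm` of target."""
    tol = neutral_mass * ppm * 1e-6
    hits = []
    for c, h, n, o in product(*(range(m + 1) for m in max_atoms)):
        mass = c * MONO["C"] + h * MONO["H"] + n * MONO["N"] + o * MONO["O"]
        if abs(mass - neutral_mass) <= tol:
            hits.append((f"C{c}H{h}N{n}O{o}", mass))
    return sorted(hits, key=lambda t: abs(t[1] - neutral_mass))

# Example: caffeine, C8H10N4O2, monoisotopic mass 194.08038 Da
for formula, mass in candidate_formulas(194.08038)[:5]:
    print(formula, round(mass, 5))
```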

  4. Defining the Most Accurate Measurable Dimension(s) of the Liver in Predicting Liver Volume Based on CT Volumetry and Reconstruction

    Directory of Open Access Journals (Sweden)

    Reza Saadat Mostafavi

    2010-05-01

    Full Text Available Background/Objective: Liver volume has a great effect on the diagnosis and management of different diseases such as lymphoproliferative conditions. Patients and Methods: Abdominal CT scans of 100 patients without any findings for liver disease (in history and imaging) were subjected to volumetry and reconstruction. Along with the liver volume, in the axial series, the AP diameter of the left lobe (in the midline) and right lobe (mid-clavicular), the lateral maximum diameter of the liver in the mid-axillary line, and the maximum diameter to the IVC were calculated. In the coronal mid-axillary and sagittal mid-clavicular planes, maximum superior-inferior dimensions were calculated with their various combinations (multiplying). Regression analyses between dimensions and volume were performed. Results: The most accurate combination was the superior-inferior sagittal dimension multiplied by the AP diameter of the right lobe (R squared 0.78, P-value < 0.001), and the most accurate single dimension was the lateral dimension to the IVC in the axial plane (R squared 0.57, P-value < 0.001), with an interval of 9-11 cm for 68% of normal. Conclusion: We recommend the lateral maximum diameter of the liver from surface to IVC in the axial plane in ultrasound for liver volume prediction, with an interval of 9-11 cm for 68% of normal. Values outside this range are regarded as abnormal.
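
    The regression analysis between measured dimensions and CT-derived volume can be sketched as below, using the best-performing combination (superior-inferior sagittal dimension multiplied by the AP diameter of the right lobe) as the single predictor. The data are synthetic placeholders, so the fitted coefficients and R² are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 100
si_sagittal = rng.normal(15.0, 2.0, n)     # superior-inferior sagittal dimension (cm)
ap_right    = rng.normal(12.0, 1.5, n)     # AP diameter of the right lobe (cm)
predictor   = (si_sagittal * ap_right).reshape(-1, 1)

# Synthetic "CT volumetry" volumes, loosely proportional to the dimension product
volume = 8.0 * predictor.ravel() + rng.normal(0.0, 120.0, n)

model = LinearRegression().fit(predictor, volume)
print("R^2 =", round(model.score(predictor, volume), 2))
```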

  5. Accurate Prediction of Hyperfine Coupling Constants in Muoniated and Hydrogenated Ethyl Radicals: Ab Initio Path Integral Simulation Study with Density Functional Theory Method.

    Science.gov (United States)

    Yamada, Kenta; Kawashima, Yukio; Tachikawa, Masanori

    2014-05-13

    We performed ab initio path integral molecular dynamics (PIMD) simulations with a density functional theory (DFT) method to accurately predict hyperfine coupling constants (HFCCs) in the ethyl radical (CβH3-CαH2) and its Mu-substituted (muoniated) compound (CβH2Mu-CαH2). The substitution of a Mu atom, an ultralight isotope of the H atom, with larger nuclear quantum effect is expected to strongly affect the nature of the ethyl radical. The static conventional DFT calculations of CβH3-CαH2 find that the elongation of one Cβ-H bond causes a change in the shape of potential energy curve along the rotational angle via the imbalance of attractive and repulsive interactions between the methyl and methylene groups. Investigation of the methyl-group behavior including the nuclear quantum and thermal effects shows that an unbalanced CβH2Mu group with the elongated Cβ-Mu bond rotates around the Cβ-Cα bond in a muoniated ethyl radical, quite differently from the CβH3 group with the three equivalent Cβ-H bonds in the ethyl radical. These rotations couple with other molecular motions such as the methylene-group rocking motion (inversion), leading to difficulties in reproducing the corresponding barrier heights. Our PIMD simulations successfully predict the barrier heights to be close to the experimental values and provide a significant improvement in muon and proton HFCCs given by the static conventional DFT method. Further investigation reveals that the Cβ-Mu/H stretching motion, methyl-group rotation, methylene-group rocking motion, and HFCC values deeply intertwine with each other. Because these motions are different between the radicals, a proper description of the structural fluctuations reflecting the nuclear quantum and thermal effects is vital to evaluate HFCC values in theory to be comparable to the experimental ones. Accordingly, a fundamental difference in HFCC between the radicals arises from their intrinsic molecular motions at a finite temperature, in

  6. Formation of NO from N2/O2 mixtures in a flow reactor: Toward an accurate prediction of thermal NO

    DEFF Research Database (Denmark)

    Abian, Maria; Alzueta, Maria U.; Glarborg, Peter

    2015-01-01

    We have conducted flow reactor experiments for NO formation from N2/O2 mixtures at high temperatures and atmospheric pressure, accurately controlling temperature and reaction time. Under these conditions, atomic oxygen equilibrates rapidly with O2. The experimental results were interpreted by a d...

  7. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11.

    Science.gov (United States)

    Lundegaard, Claus; Lamberth, Kasper; Harndahl, Mikkel; Buus, Søren; Lund, Ole; Nielsen, Morten

    2008-07-01

    NetMHC-3.0 is trained on a large number of quantitative peptide data using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC): peptide binding. The predictions are based on artificial neural networks trained on data from 55 MHC alleles (43 human and 12 non-human), and position-specific scoring matrices (PSSMs) for an additional 67 HLA alleles. As only the MHC class I prediction server is available, predictions are possible for peptides of length 8-11 for all 122 alleles. Artificial neural network predictions are given as actual IC(50) values, whereas PSSM predictions are given as log-odds likelihood scores. The output is optionally available as a download for easy post-processing. The training method underlying the server is the best available, and has been used to predict possible MHC-binding peptides in a series of pathogen viral proteomes including SARS, Influenza and HIV, resulting in an average of 75-80% confirmed MHC binders. Here, the performance is further validated and benchmarked using a large set of newly published affinity data, non-redundant to the training set. The server is free to use and available at: http://www.cbs.dtu.dk/services/NetMHC.
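
    How a PSSM yields a log-odds score for a peptide can be sketched in a few lines: the per-position scores of the peptide's residues are summed. The matrix below is random rather than a NetMHC matrix, and the artificial neural network component of the server is not reproduced.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(3)
pssm = rng.normal(size=(9, 20))          # stand-in position-specific scoring matrix

def pssm_score(peptide):
    """Sum of per-position log-odds scores for a 9-mer peptide."""
    assert len(peptide) == 9
    return float(sum(pssm[i, AA.index(aa)] for i, aa in enumerate(peptide)))

print(pssm_score("SIINFEKLV"))           # higher score = more binder-like
```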

  8. Can a scoring system of computed tomography-based metric parameters accurately predict shock wave lithotripsy stone-free rates and aid in the development of treatment strategies?

    Directory of Open Access Journals (Sweden)

    Yasser ALI Badran

    2016-01-01

    Conclusion: Stone size, stone density (HU), and SSD are simple to calculate and can be reported by radiologists; applying a combined score helps to augment the predictive power of SWL, reduce cost, and improve treatment strategies.

  9. The molecular genetics and morphometry-based Endometrial Intraepithelial Neoplasia classification system predicts disease progression in Endometrial hyperplasia more accurately than the 1994 World Health Organization classification system

    NARCIS (Netherlands)

    Baak, JP; Mutter, GL; Robboy, S; van Diest, PJ; Uyterlinde, AM; Orbo, A; Palazzo, J; Fiane, B; Lovslett, K; Burger, C; Voorhorst, F; Verheijen, RH

    2005-01-01

    BACKGROUND. The objective of this study was to compare the accuracy of disease progression prediction of the molecular genetics and morphometry-based Endometrial Intraepithelial Neoplasia (EIN) and World Health Organization 1994 (WHO94) classification systems in patients with endometrial hyperplasia

  10. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lamberth, K; Harndahl, M

    2008-01-01

    been used to predict possible MHC-binding peptides in a series of pathogen viral proteomes including SARS, Influenza and HIV, resulting in an average of 75–80% confirmed MHC binders. Here, the performance is further validated and benchmarked using a large set of newly published affinity data, non...

  11. Closed-loop spontaneous baroreflex transfer function is inappropriate for system identification of neural arc but partly accurate for peripheral arc: predictability analysis.

    Science.gov (United States)

    Kamiya, Atsunori; Kawada, Toru; Shimizu, Shuji; Sugimachi, Masaru

    2011-04-01

    Although the dynamic characteristics of the baroreflex system have been described by baroreflex transfer functions obtained from open-loop analysis, the predictability of time-series output dynamics from input signals, which should confirm the accuracy of system identification, remains to be elucidated. Moreover, despite theoretical concerns over closed-loop system identification, the accuracy and the predictability of the closed-loop spontaneous baroreflex transfer function have not been evaluated compared with the open-loop transfer function. Using urethane and α-chloralose anaesthetized, vagotomized and aortic-denervated rabbits (n = 10), we identified open-loop baroreflex transfer functions by recording renal sympathetic nerve activity (SNA) while varying the vascularly isolated intracarotid sinus pressure (CSP) according to a binary random (white-noise) sequence (operating pressure ± 20 mmHg), and using a simplified equation to calculate the closed-loop spontaneous baroreflex transfer function while matching CSP with systemic arterial pressure (AP). Our results showed that the open-loop baroreflex transfer functions for the neural and peripheral arcs predicted the time-series SNA and AP outputs from measured CSP and SNA inputs, with r² values of 0.8 ± 0.1 and 0.8 ± 0.1, respectively. In contrast, the closed-loop spontaneous baroreflex transfer function for the neural arc was markedly different from the open-loop transfer function (enhanced gain increase and a phase lead), and did not predict the time-series SNA dynamics (r² = 0.1 ± 0.1). However, the closed-loop spontaneous baroreflex transfer function of the peripheral arc partially matched the open-loop transfer function in gain and phase functions, and had limited but reasonable predictability of the time-series AP dynamics (r² = 0.7 ± 0.1). A numerical simulation suggested that noise predominantly in the neural arc under resting conditions might be a possible mechanism responsible for our findings. Furthermore
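
    The open-loop identification step, estimating a transfer function from a white-noise pressure input to a measured output as H(f) = S_xy(f)/S_xx(f), can be sketched with standard spectral estimators. The signals below are synthetic toys; the sampling rate, filter and delay are hypothetical stand-ins for the physiological arcs.

```python
import numpy as np
from scipy import signal

fs = 10.0                       # sampling rate (Hz), hypothetical
t = np.arange(0, 600, 1 / fs)
csp = np.random.default_rng(4).standard_normal(t.size)   # white-noise input (proxy for binary CSP)

# Toy "neural arc": low-pass filtered, inverted and delayed response plus noise
b, a = signal.butter(2, 0.5 / (fs / 2))
sna = -signal.lfilter(b, a, csp)
sna = np.roll(sna, 5) + 0.1 * np.random.default_rng(5).standard_normal(t.size)

f, s_xx = signal.welch(csp, fs=fs, nperseg=1024)
_, s_xy = signal.csd(csp, sna, fs=fs, nperseg=1024)
transfer = s_xy / s_xx                                    # gain and phase vs frequency
gain, phase = np.abs(transfer), np.unwrap(np.angle(transfer))
```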

  12. PredPPCrys: accurate prediction of sequence cloning, protein production, purification and crystallization propensity from protein sequences using multi-step heterogeneous feature fusion and selection.

    Directory of Open Access Journals (Sweden)

    Huilin Wang

    Full Text Available X-ray crystallography is the primary approach to solve the three-dimensional structure of a protein. However, a major bottleneck of this method is the failure of multi-step experimental procedures to yield diffraction-quality crystals, including sequence cloning, protein material production, purification, crystallization and, ultimately, structural determination. Accordingly, prediction of the propensity of a protein to successfully undergo these experimental procedures based on the protein sequence may help narrow down laborious experimental efforts and facilitate target selection. A number of bioinformatics methods based on protein sequence information have been developed for this purpose. However, our knowledge on the important determinants of propensity for a protein sequence to produce high diffraction-quality crystals remains largely incomplete. In practice, most of the existing methods display poorer performance when evaluated on larger and updated datasets. To address this problem, we constructed an up-to-date dataset as the benchmark, and subsequently developed a new approach termed 'PredPPCrys' using the support vector machine (SVM). Using a comprehensive set of multifaceted sequence-derived features in combination with a novel multi-step feature selection strategy, we identified and characterized the relative importance and contribution of each feature type to the prediction performance of five individual experimental steps required for successful crystallization. The resulting optimal candidate features were used as inputs to build the first-level SVM predictor (PredPPCrys I). Next, prediction outputs of PredPPCrys I were used as the input to build second-level SVM classifiers (PredPPCrys II), which led to significantly enhanced prediction performance. Benchmarking experiments indicated that our PredPPCrys method outperforms most existing procedures on both up-to-date and previous datasets. In addition, the predicted crystallization
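
    The two-level architecture can be sketched as follows: one probabilistic SVM per experimental step at level I, with the level-I probability outputs used as inputs to a level-II SVM that predicts overall crystallization propensity. The features and labels below are random placeholders rather than the sequence-derived feature set described above.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n, d, n_steps = 400, 50, 5
X = rng.normal(size=(n, d))                     # sequence-derived features (toy)
y_steps = rng.integers(0, 2, size=(n, n_steps)) # success/failure of the 5 steps (toy)
y_final = y_steps.min(axis=1)                   # crystallizes only if every step succeeds

# Level I: one probabilistic SVM per experimental step
level1 = [SVC(probability=True).fit(X, y_steps[:, s]) for s in range(n_steps)]
meta = np.column_stack([m.predict_proba(X)[:, 1] for m in level1])

# Level II: SVM trained on the level-I outputs
level2 = SVC(probability=True).fit(meta, y_final)
crystallization_propensity = level2.predict_proba(meta)[:, 1]
```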

  13. A random forest based risk model for reliable and accurate prediction of receipt of transfusion in patients undergoing percutaneous coronary intervention.

    Directory of Open Access Journals (Sweden)

    Hitinder S Gurm

    Full Text Available BACKGROUND: Transfusion is a common complication of Percutaneous Coronary Intervention (PCI) and is associated with adverse short- and long-term outcomes. There is no risk model for identifying patients most likely to receive transfusion after PCI. The objective of our study was to develop and validate a tool for predicting receipt of blood transfusion in patients undergoing contemporary PCI. METHODS: Random forest models were developed utilizing 45 pre-procedural clinical and laboratory variables to estimate the receipt of transfusion in patients undergoing PCI. The most influential variables were selected for inclusion in an abbreviated model. Model performance estimating transfusion was evaluated in an independent validation dataset using area under the ROC curve (AUC), with net reclassification improvement (NRI) used to compare full and reduced model prediction after grouping into low, intermediate, and high risk categories. The impact of procedural anticoagulation on observed versus predicted transfusion rates was assessed for the different risk categories. RESULTS: Our study cohort comprised 103,294 PCI procedures performed at 46 hospitals from July 2009 through December 2012 in Michigan, of which 72,328 (70%) were randomly selected for training the models and 30,966 (30%) for validation. The models demonstrated excellent calibration and discrimination (full model AUC = 0.888 (95% CI 0.877-0.899), reduced model AUC = 0.880 (95% CI 0.868-0.892), p for difference = 0.003; NRI = 2.77%, p = 0.007). Procedural anticoagulation and radial access significantly influenced transfusion rates in the intermediate and high risk patients, but no clinically relevant impact was noted in low risk patients, who made up 70% of the total cohort. CONCLUSIONS: The risk of transfusion among patients undergoing PCI can be reliably calculated using a novel, easy-to-use computational tool (https://bmc2.org/calculators/transfusion). This risk prediction
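
    A minimal sketch of the modelling approach (a random forest over pre-procedural variables, evaluated by AUC on a held-out split) is given below. The data are synthetic and the feature set is a placeholder for the 45 clinical and laboratory variables used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n, n_vars = 5000, 45                                 # 45 pre-procedural variables (toy)
X = rng.normal(size=(n, n_vars))
risk = 1 / (1 + np.exp(-(X[:, 0] - 1.5 * X[:, 1])))  # synthetic underlying transfusion risk
y = rng.binomial(1, 0.1 * risk)                      # rare outcome, roughly 5% event rate

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("validation AUC =", round(roc_auc_score(y_va, rf.predict_proba(X_va)[:, 1]), 3))
```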

  14. Stable, high-order SBP-SAT finite difference operators to enable accurate simulation of compressible turbulent flows on curvilinear grids, with application to predicting turbulent jet noise

    Science.gov (United States)

    Byun, Jaeseung; Bodony, Daniel; Pantano, Carlos

    2014-11-01

    Improved order-of-accuracy discretizations often require careful consideration of their numerical stability. We report on new high-order finite difference schemes using Summation-By-Parts (SBP) operators along with the Simultaneous-Approximation-Terms (SAT) boundary condition treatment for first and second-order spatial derivatives with variable coefficients. In particular, we present a highly accurate operator for SBP-SAT-based approximations of second-order derivatives with variable coefficients for Dirichlet and Neumann boundary conditions. These terms are responsible for approximating the physical dissipation of kinetic and thermal energy in a simulation, and contain grid metrics when the grid is curvilinear. Analysis using the Laplace transform method shows that strong stability is ensured with Dirichlet boundary conditions while weaker stability is obtained for Neumann boundary conditions. Furthermore, the benefits of the scheme are shown in the direct numerical simulation (DNS) of a Mach 1.5 compressible turbulent supersonic jet using curvilinear grids and skew-symmetric discretization. In particular, we show that the improved methods allow minimization of the numerical filter often employed in these simulations, and we discuss the qualities of the simulation.

  15. A New Strategy for Accurately Predicting I-V Electrical Characteristics of PV Modules Using a Nonlinear Five-Point Model

    Directory of Open Access Journals (Sweden)

    Sakaros Bogning Dongue

    2013-01-01

    Full Text Available This paper presents the modelling of the electrical I-V response of illuminated photovoltaic crystalline modules. As an alternative method to the linear five-parameter model, our strategy uses the advantages of a nonlinear analytical five-point model to take into account the effects of nonlinear variations of current with respect to solar irradiance and of voltage with respect to cell temperature. In this work we succeeded in predicting with great accuracy the I-V characteristics of monocrystalline Shell SP75 and polycrystalline GESOLAR GE-P70 photovoltaic modules. The good agreement of our calculated results with the experimental data provided by the module manufacturers demonstrates the value of taking the nonlinear effect of operating conditions on the I-V characteristics of photovoltaic modules into account.

  16. Anthropometric variables accurately predict dual energy x-ray absorptiometric-derived body composition and can be used to screen for diabetes.

    Directory of Open Access Journals (Sweden)

    Reza Yavari

    Full Text Available The current world-wide epidemic of obesity has stimulated interest in developing simple screening methods to identify individuals with undiagnosed diabetes mellitus type 2 (DM2) or metabolic syndrome (MS). Prior work utilizing body composition obtained by sophisticated technology has shown that the ratio of abdominal fat to total fat is a good predictor for DM2 or MS. The goals of this study were to determine how well simple anthropometric variables predict the fat mass distribution as determined by dual energy x-ray absorptiometry (DXA), and whether these are useful to screen for DM2 or MS within a population. To accomplish this, the body composition of 341 females spanning a wide range of body mass indices and with a 23% prevalence of DM2 and MS was determined using DXA. Stepwise linear regression models incorporating age, weight, height, waistline, and hipline predicted DXA body composition (i.e., fat mass, trunk fat, fat-free mass, and total mass) with good accuracy. Using body composition as independent variables, nominal logistic regression was then performed to estimate the probability of DM2. The results show good discrimination, with the receiver operating characteristic (ROC) having an area under the curve (AUC) of 0.78. The anthropometrically-derived body composition equations derived from the full DXA study group were then applied to a group of 1153 female patients selected from a general endocrinology practice. Similar to the smaller study group, the ROC from logistic regression using body composition had an AUC of 0.81 for the detection of DM2. These results are superior to screening based on questionnaires and compare favorably with published data derived from invasive testing, e.g., hemoglobin A1c. This anthropometric approach offers promise for the development of simple, inexpensive, non-invasive screening to identify individuals with metabolic dysfunction within large populations.
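
    The two-stage idea (anthropometrics regressed onto DXA body composition, then logistic regression on the predicted composition to screen for DM2) can be sketched as below. All values are synthetic placeholders, so the coefficients and the AUC are illustrative rather than the study's results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
n = 341
anthro = np.column_stack([
    rng.normal(45, 15, n),     # age (years)
    rng.normal(75, 15, n),     # weight (kg)
    rng.normal(1.62, 0.07, n), # height (m)
    rng.normal(90, 14, n),     # waistline (cm)
    rng.normal(104, 10, n),    # hipline (cm)
])

# Synthetic DXA targets driven mainly by weight and waistline
trunk_fat = 0.15 * anthro[:, 1] + 0.10 * anthro[:, 3] + rng.normal(0, 2, n)
fat_mass  = 0.35 * anthro[:, 1] + 0.05 * anthro[:, 3] + rng.normal(0, 3, n)

comp_model = LinearRegression().fit(anthro, np.column_stack([trunk_fat, fat_mass]))
composition = comp_model.predict(anthro)              # predicted body composition

# Synthetic DM2 labels whose risk rises with the trunk-to-total fat ratio
ratio = composition[:, 0] / composition[:, 1]
dm2 = rng.binomial(1, 1 / (1 + np.exp(-8 * (ratio - ratio.mean()))))

clf = LogisticRegression().fit(composition, dm2)
print("AUC =", round(roc_auc_score(dm2, clf.predict_proba(composition)[:, 1]), 2))
```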

  17. Full-Dimensional Potential Energy and Dipole Moment Surfaces of GeH4 Molecule and Accurate First-Principle Rotationally Resolved Intensity Predictions in the Infrared.

    Science.gov (United States)

    Nikitin, A V; Rey, M; Rodina, A; Krishna, B M; Tyuterev, Vl G

    2016-11-17

    Nine-dimensional potential energy surface (PES) and dipole moment surface (DMS) of the germane molecule are constructed using extended ab initio CCSD(T) calculations at 19 882 points. PES analytical representation is determined as an expansion in nonlinear symmetry adapted products of orthogonal and internal coordinates involving 340 parameters up to eighth order. Minor empirical refinement of the equilibrium geometry and of four quadratic parameters of the PES computed at the CCSD(T)/aug-cc-pVQZ-DK level of the theory yielded the accuracy below 1 cm(-1) for all experimentally known vibrational band centers of five stable isotopologues of (70)GeH4, (72)GeH4, (73)GeH4, (74)GeH4, and (76)GeH4 up to 8300 cm(-1). The optimized equilibrium bond re = 1.517 594 Å is very close to best ab initio values. Rotational energies up to J = 15 are calculated using potential expansion in normal coordinate tensors with maximum errors of 0.004 and 0.0006 cm(-1) for (74)GeH4 and (76)GeH4. The DMS analytical representation is determined through an expansion in symmetry-adapted products of internal nonlinear coordinates involving 967 parameters up to the sixth order. Vibration-rotation line intensities of five stable germane isotopologues were calculated from purely ab initio DMS using nuclear motion variational calculations with a full account of the tetrahedral symmetry of the molecules. For the first time a good overall agreement of main absorption features with experimental rotationally resolved Pacific Northwest National Laboratory spectra was achieved in the entire range of 700-5300 cm(-1). It was found that very accurate description of state-dependent isotopic shifts is mandatory to correctly describe complex patterns of observed spectra at natural isotopic abundance resulting from the superposition of five stable isotopologues. The data obtained in this work will be made available through the TheoReTS information system.

  18. Accurate prediction of hard-sphere virial coefficients B6 to B12 from a compressibility-based equation of state

    Science.gov (United States)

    Hansen-Goos, Hendrik

    2016-04-01

    We derive an analytical equation of state for the hard-sphere fluid that is within 0.01% of computer simulations for the whole range of the stable fluid phase. In contrast, the commonly used Carnahan-Starling equation of state deviates by up to 0.3% from simulations. The derivation uses the functional form of the isothermal compressibility from the Percus-Yevick closure of the Ornstein-Zernike relation as a starting point. Two additional degrees of freedom are introduced, which are constrained by requiring the equation of state to (i) recover the exact fourth virial coefficient B4 and (ii) involve only integer coefficients on the level of the ideal gas, while providing best possible agreement with the numerical result for B5. Virial coefficients B6 to B10 obtained from the equation of state are within 0.5% of numerical computations, and coefficients B11 and B12 are within the error of numerical results. We conjecture that even higher virial coefficients are reliably predicted.
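
    How virial coefficients follow from a closed-form equation of state can be illustrated by Taylor-expanding the compressibility factor in the packing fraction. The sketch below uses the Carnahan-Starling expression (the comparison baseline mentioned above) because the new equation of state's closed form is not reproduced here; the coefficient of η^n is the reduced virial coefficient B_{n+1}/v0^n, with v0 the particle volume.

```python
import sympy as sp

eta = sp.symbols("eta")
Z_CS = (1 + eta + eta**2 - eta**3) / (1 - eta)**3   # Carnahan-Starling compressibility factor

# Z = 1 + sum_n b_n * eta**n ; the b_n are the reduced virial coefficients B_{n+1}/v0**n
series = sp.series(Z_CS, eta, 0, 7).removeO()
reduced_virials = [series.coeff(eta, n) for n in range(1, 7)]
print(reduced_virials)   # Carnahan-Starling gives b_n = n**2 + 3*n: 4, 10, 18, 28, 40, 54
```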

  19. Statistical analysis of accurate prediction of local atmospheric optical attenuation with a new model according to weather together with beam wandering compensation system: a season-wise experimental investigation

    Science.gov (United States)

    Arockia Bazil Raj, A.; Padmavathi, S.

    2016-07-01

    Atmospheric parameters strongly affect the performance of a Free Space Optical Communication (FSOC) system when the optical wave is propagating through the inhomogeneous turbulent medium. Developing a model that gives an accurate prediction of optical attenuation according to meteorological parameters is therefore significant for understanding the behaviour of the FSOC channel during different seasons. A dedicated free space optical link experimental set-up is developed for a range of 0.5 km at an altitude of 15.25 m. The diurnal profile of received power and the corresponding meteorological parameters are continuously measured using the developed optoelectronic assembly and weather station, respectively, and stored in a data logging computer. Measured meteorological parameters (as input factors) and optical attenuation (as response factor) of size [177147 × 4] are used for linear regression analysis and to design a mathematical model suitable for predicting the atmospheric optical attenuation at our test field. A model that exhibits an R² value of 98.76% and an average percentage deviation of 1.59% is considered for practical implementation. The prediction accuracy of the proposed model is investigated along with comparative results obtained from some of the existing models in terms of Root Mean Square Error (RMSE) during different local seasons over a one-year period. An average RMSE value of 0.043 dB/km is obtained over the wider dynamic range of meteorological parameter variations.

  20. Speaking Fluently And Accurately

    Institute of Scientific and Technical Information of China (English)

    JosephDeVeto

    2004-01-01

    Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students not only speak more accurately, but also more fluently. That technique is dictations.

  1. Prediction

    CERN Document Server

    Sornette, Didier

    2010-01-01

    This chapter first presents a rather personal view of some different aspects of predictability, going in crescendo from simple linear systems to high-dimensional nonlinear systems with stochastic forcing, which exhibit emergent properties such as phase transitions and regime shifts. Then, a detailed correspondence between the phenomenology of earthquakes, financial crashes and epileptic seizures is offered. The presented statistical evidence provides the substance of a general phase diagram for understanding the many facets of the spatio-temporal organization of these systems. A key insight is to organize the evidence and mechanisms in terms of two summarizing measures: (i) amplitude of disorder or heterogeneity in the system and (ii) level of coupling or interaction strength among the system's components. On the basis of the recently identified remarkable correspondence between earthquakes and seizures, we present detailed information on a class of stochastic point processes that has been found to be particu...

  2. Efficient and accurate fragmentation methods.

    Science.gov (United States)

    Pruitt, Spencer R; Bertoni, Colleen; Brorsen, Kurt R; Gordon, Mark S

    2014-09-16

    mechanical calculations on fragment-fragment (dimer) interactions. The EFMO method is both more accurate and more computationally efficient than the most commonly used FMO implementation (FMO2), in which all dimers are explicitly included in the calculation. While the FMO2 method itself does not incorporate three-body interactions, such interactions are included in the EFMO method via the EFP self-consistent induction term. Several applications (ranging from clusters to proteins) of the three methods are discussed to demonstrate their efficacy. The EFMO method will be especially exciting once the analytic gradients have been completed, because this will allow geometry optimizations, the prediction of vibrational spectra, reaction path following, and molecular dynamics simulations using the method.

  3. Accurate colorimetric feedback for RGB LED clusters

    Science.gov (United States)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
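
    The observation that chromaticity drift with junction temperature is well captured by first- or second-order equations suggests a simple calibration-and-correction step: fit a low-order polynomial of chromaticity versus temperature and use it to predict the drift to compensate. The calibration numbers below are hypothetical, not measurements from the cited work.

```python
import numpy as np

# Hypothetical calibration: junction temperature (deg C) vs measured u' chromaticity of one LED
temps = np.array([25.0, 35.0, 45.0, 55.0, 65.0, 70.0])
u_prime = np.array([0.5300, 0.5312, 0.5327, 0.5345, 0.5366, 0.5378])

# Second-order model u'(T), in the spirit of the first-/second-order fits described above
coeffs = np.polyfit(temps, u_prime, deg=2)
model = np.poly1d(coeffs)

T_now = 58.0
predicted_drift = model(T_now) - model(25.0)   # chromaticity drift to compensate for
print(f"predicted u' drift at {T_now} C: {predicted_drift:.4f}")
```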

  4. Accurate backgrounds to Higgs production at the LHC

    CERN Document Server

    Kauer, N

    2007-01-01

    Corrections of 10-30% for backgrounds to the H --> WW --> l^+ l^- + missing p_T search in vector boson and gluon fusion at the LHC are reviewed to make the case for precise and accurate theoretical background predictions.

  5. Accurate Predictions of the NMR Parameters in Organic and Biological Crystallines

    Institute of Scientific and Technical Information of China (English)

    何睿; 焦艳华; 梁媛嫒; 陈灿玉

    2011-01-01

    Theoretical predictions are helpful for the spectroscopic identification of complicated organic and biological systems. For nuclear magnetic resonance (NMR) parameters, however, the chemical shift and quadrupole coupling constant (QCC) of solid crystals are considerably affected by hydrogen bonding and van der Waals interactions from neighboring molecules and the crystal lattice, leading to significant spectroscopic differences compared to isolated monomer molecules. Therefore, it is necessary to take these two factors into account for the precise prediction of chemical shifts and QCCs of solid crystals. L-alanylglycine dipeptide and nitrobenzene were selected as model crystals to demonstrate these effects. Here, the chemical shielding (CS) and QCC data were calculated based on the periodic structure model. The incorporation of intermolecular hydrogen bonding and crystal lattice effects by periodic models was found to be crucial in obtaining reliable predictions of CS and QCC values, clearly superior to those from conventional single-molecule and supramolecular models, and in rendering more explicit spectroscopic assignments for solid organic and biological systems; the calculated results reproduce the NMR experiments.

  6. Nonsurgical giant cell tumour of the tendon sheath or of the diffuse type: Are MRI or ¹⁸F-FDG PET/CT able to provide an accurate prediction of long-term outcome?

    Energy Technology Data Exchange (ETDEWEB)

    Dercle, Laurent [IUCT-Oncopole/Institut Claudius Regaud, Department of Nuclear Medicine, Toulouse (France); Institut Gustave Roussy, Department of Radiology, Villejuif (France); Institut Gustave Roussy, Department of Nuclear Medicine, Villejuif (France); Chisin, Roland [Hebrew University Hadassah Medical Center, Department of Medical Biophysics and Nuclear Medicine, Jerusalem (Israel); Ammari, Samy [Institut Gustave Roussy, Department of Radiology, Villejuif (France); Gillebert, Quentin [Hopital tenon, Hopitaux Universitaires Est Parisien, Department of Nuclear Medicine, Paris (France); Ouali, Monia [Institut Claudius Regaud, Department of Biostatistics, Toulouse (France); Jaudet, Cyril; Dierickx, Lawrence; Zerdoud, Slimane; Courbon, Frederic [IUCT-Oncopole/Institut Claudius Regaud, Department of Nuclear Medicine, Toulouse (France); Delord, Jean-Pierre [Institut Claudius Regaud, Department of Clinical Research, Toulouse (France); Schlumberger, Martin [Institut Gustave Roussy, Department of Nuclear Medicine, Villejuif (France)

    2014-11-01

    To investigate whether MRI (RECIST 1.1, WHO criteria and the volumetric approach) or ¹⁸F-FDG PET/CT (PERCIST 1.0) are able to predict long-term outcome in nonsurgical patients with giant cell tumour of the tendon sheath or of the diffuse type (GCT-TS/DT). Fifteen "nonsurgical" patients with a histological diagnosis of GCT-TS/DT were divided into two groups: symptomatic patients receiving targeted therapy and asymptomatic untreated patients. All 15 patients were evaluated by MRI of whom 10 were treated, and a subgroup of 7 patients were evaluated by PET/CT of whom 4 were treated. Early evolution was assessed according to MRI and PET/CT scans at baseline and during follow-up. Cohen's kappa coefficient was used to evaluate the degree of agreement between PERCIST 1.0, RECIST 1.1, WHO criteria, volumetric approaches and the reference standard (long-term outcome, delay 505 ± 457 days). The response rate in symptomatic patients with GCT-TS/DT receiving targeted therapy was also assessed in a larger population that included additional patients obtained from a review of the literature. The kappa coefficients for agreement between RECIST/WHO/volumetric criteria and outcome (15 patients) were respectively: 0.35 (p = 0.06), 0.26 (p = 0.17) and 0.26 (p = 0.17). In the PET/CT subgroup (7 patients), PERCIST was in perfect agreement with the late symptomatic evolution (kappa = 1, p < 0.05). In the treated symptomatic group including the additional patients from the literature the response rates to targeted therapies according to late symptomatic assessment, and PERCIST and RECIST criteria were: 65% (22/34), 77% (10/13) and 26% (10/39). ¹⁸F-FDG PET/CT with PERCIST is a promising approach to the prediction of the long-term outcome in GCT-TS/DT and may avoid unnecessary treatments, toxicity and costs. On MRI, WHO and volumetric approaches are not more effective than RECIST using the current thresholds. (orig.)

  7. Standardized EEG interpretation accurately predicts prognosis after cardiac arrest

    DEFF Research Database (Denmark)

    Westhall, Erik; Rossetti, Andrea O; van Rootselaar, Anne-Fleur

    2016-01-01

    OBJECTIVE: To identify reliable predictors of outcome in comatose patients after cardiac arrest using a single routine EEG and standardized interpretation according to the terminology proposed by the American Clinical Neurophysiology Society. METHODS: In this cohort study, 4 EEG specialists, blinded to outcome, evaluated prospectively recorded EEGs in the Target Temperature Management trial (TTM trial) that randomized patients to 33°C vs 36°C. Routine EEG was performed in patients still comatose after rewarming. EEGs were classified into highly malignant (suppression, suppression with periodic discharges, burst-suppression), malignant (periodic or rhythmic patterns, pathological or nonreactive background), and benign EEG (absence of malignant features). Poor outcome was defined as best Cerebral Performance Category score 3-5 until 180 days. RESULTS: Eight TTM sites randomized 202

  8. Copeptin does not accurately predict disease severity in imported malaria

    NARCIS (Netherlands)

    M.E. van Wolfswinkel (Marlies); D.A. Hesselink (Dennis); E.J. Hoorn (Ewout); Y.B. de Rijke (Yolanda); R. Koelewijn (Rob); J.J. van Hellemond (Jaap); P.J.J. van Genderen (Perry)

    2012-01-01

    Background: Copeptin has recently been identified to be a stable surrogate marker for the unstable hormone arginine vasopressin (AVP). Copeptin has been shown to correlate with disease severity in leptospirosis and bacterial sepsis. Hyponatraemia is common in severe imported malaria and

  9. An accurate empirical correlation for predicting natural gas compressibility factors

    Institute of Scientific and Technical Information of China (English)

    Ehsan Sanjari; Ebrahim Nemati Lay

    2012-01-01

    The compressibility factor of natural gas is an important parameter in many gas and petroleum engineering calculations. This study presents a new empirical model for quick calculation of natural gas compressibility factors. The model was derived from 5844 experimental data of compressibility factors for a range of pseudo reduced pressures from 0.01 to 15 and pseudo reduced temperatures from 1 to 3. The accuracy of the new empirical correlation has been compared with commonly used existing methods. The comparison indicates the superiority of the new empirical model over the other methods used to calculate the compressibility factor of natural gas, with an average absolute relative deviation percent (AARD%) of 0.6535.
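
    The AARD% quoted above is a simple aggregate error measure. A minimal sketch of how it can be computed is given below; the experimental and predicted compressibility factors are illustrative placeholders, not the 5844-point data set used in the study, and the correlation itself is not reproduced.

```python
import numpy as np

def aard_percent(z_experimental, z_predicted):
    """Average absolute relative deviation (%), the error metric quoted for the correlation."""
    z_exp = np.asarray(z_experimental, dtype=float)
    z_pred = np.asarray(z_predicted, dtype=float)
    return 100.0 * np.mean(np.abs((z_pred - z_exp) / z_exp))

# Hypothetical values for a few (Ppr, Tpr) points; not the study's data.
z_exp  = [0.91, 0.87, 0.95, 1.02]
z_pred = [0.90, 0.88, 0.95, 1.01]
print(aard_percent(z_exp, z_pred))
```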

  10. Ethics and epistemology of accurate prediction in clinical research.

    Science.gov (United States)

    Hey, Spencer Phillips

    2015-07-01

    All major research ethics policies assert that the ethical review of clinical trial protocols should include a systematic assessment of risks and benefits. But despite this policy, protocols do not typically contain explicit probability statements about the likely risks or benefits involved in the proposed research. In this essay, I articulate a range of ethical and epistemic advantages that explicit forecasting would offer to the health research enterprise. I then consider how some particular confidence levels may come into conflict with the principles of ethical research.

  11. Accurate prediction of secondary metabolite gene clusters in filamentous fungi

    DEFF Research Database (Denmark)

    Andersen, Mikael Rørdam; Nielsen, Jakob Blæsbjerg; Klitgaard, Andreas

    2013-01-01

    Biosynthetic pathways of secondary metabolites from fungi are currently subject to an intense effort to elucidate the genetic basis for these compounds due to their large potential within pharmaceutics and synthetic biochemistry. The preferred method is methodical gene deletions to identify suppo...... used A. nidulans for our method development and validation due to the wealth of available biochemical data, but the method can be applied to any fungus with a sequenced and assembled genome, thus supporting further secondary metabolite pathway elucidation in the fungal kingdom....

  12. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power......-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...

  13. Toward Accurate and Quantitative Comparative Metagenomics

    Science.gov (United States)

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  14. Accurate renormalization group analyses in neutrino sector

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)

    2014-08-15

    We investigate accurate renormalization group analyses in neutrino sector between ν-oscillation and seesaw energy scales. We consider decoupling effects of top quark and Higgs boson on the renormalization group equations of light neutrino mass matrix. Since the decoupling effects are given in the standard model scale and independent of high energy physics, our method can basically apply to any models beyond the standard model. We find that the decoupling effects of Higgs boson are negligible, while those of top quark are not. Particularly, the decoupling effects of top quark affect neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory existing at high energy scale.

  15. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    Analysis and optimization methods for the design of advanced printed reflectarrays have been investigated, and the study is focused on developing an accurate and efficient simulation tool. For the analysis, a good compromise between accuracy and efficiency can be obtained using the spectral domain...... to the POT. The GDOT can optimize for the size as well as the orientation and position of arbitrarily shaped array elements. Both co- and cross-polar radiation can be optimized for multiple frequencies, dual polarization, and several feed illuminations. Several contoured beam reflectarrays have been designed...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  16. The Accurate Particle Tracer Code

    CERN Document Server

    Wang, Yulei; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...

  17. Accurate ab initio spin densities

    CERN Document Server

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  18. Accurate thickness measurement of graphene.

    Science.gov (United States)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  19. Accurate thickness measurement of graphene

    Science.gov (United States)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  20. The accurate definition of metabolic volumes on {sup 18}F-FDG-PET before treatment allows the response to chemoradiotherapy to be predicted in the case of oesophagus cancers; La definition precise des volumes metaboliques sur TEP au 18F-FDG avant traitement permet la prediction de la reponse a la chimioradiotherapie dans les cancers de l'oesophage

    Energy Technology Data Exchange (ETDEWEB)

    Hatt, M.; Cheze-Le Rest, C.; Visvikis, D. [Inserm U650, Brest (France); Pradier, O. [Radiotherapie, CHRU Morvan, Brest (France)

    2011-10-15

    This study aims at assessing the possibility of predicting the response of locally advanced oesophagus cancers, even before the beginning of treatment, by using metabolic volume measurements performed on pre-treatment {sup 18}F-FDG PET images. Medical files of 50 patients were analyzed. According to the observed responses, and to metabolic volume and Total Lesion Glycolysis (TLG) values, it appears that the images allow the extraction of parameters, such as the TLG, which are criteria for the prediction of the therapeutic response. Short communication
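
    The two PET-derived quantities named above can be illustrated with a toy calculation: the metabolic volume is the volume of voxels above a segmentation threshold, and the total lesion glycolysis is the mean SUV within that volume multiplied by the volume. The array, voxel size and fixed-threshold segmentation below are assumptions for illustration; the study's actual segmentation method may differ.

```python
import numpy as np

# Toy SUV map (one slice) and voxel size; both are illustrative, not patient data.
suv = np.array([[0.8, 2.5, 3.1],
                [1.0, 4.2, 2.9],
                [0.5, 0.9, 2.6]])
voxel_volume_ml = 0.064        # assumed voxel size in millilitres
threshold = 2.5                # assumed fixed SUV threshold for segmentation

mask = suv >= threshold                                # voxels inside the metabolic volume
metabolic_volume = mask.sum() * voxel_volume_ml        # metabolic tumour volume (ml)
tlg = suv[mask].mean() * metabolic_volume              # total lesion glycolysis = SUVmean x volume
print(metabolic_volume, tlg)
```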

  1. A More Accurate Fourier Transform

    CERN Document Server

    Courtney, Elya

    2015-01-01

    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
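
    The comparison described above can be reproduced in miniature: locate a peak frequency with an FFT (restricted to the discrete grid k·fs/N) and with an explicit evaluation of the Fourier integral by the rectangle rule on an arbitrarily fine frequency grid. The signal, sampling rate and frequencies below are illustrative, and this sketch does not use the FFTW3 library mentioned in the abstract.

```python
import numpy as np

fs, n = 100.0, 512                      # sample rate (Hz) and number of samples (assumed)
t = np.arange(n) / fs
f_true = 3.30                           # true frequency, deliberately off the FFT grid
x = np.cos(2 * np.pi * f_true * t)

# FFT estimate: limited to the discrete frequency grid k * fs / n.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, d=1 / fs)
f_fft = freqs[np.argmax(spectrum)]

# Explicit-integral estimate: |integral of x(t) exp(-2*pi*i*f*t) dt| by the rectangle rule
# on an arbitrarily fine frequency grid around the peak.
f_grid = np.linspace(3.0, 3.6, 6001)
amps = [abs(np.sum(x * np.exp(-2j * np.pi * f * t)) / fs) for f in f_grid]
f_ei = f_grid[int(np.argmax(amps))]

print(f_true, f_fft, f_ei)              # the explicit integral can resolve off-grid peaks
```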

  2. Optimal strategies for throwing accurately

    CERN Document Server

    Venkadesan, Madhusudhan

    2010-01-01

    Accuracy of throwing in games and sports is governed by how errors at projectile release are propagated by flight dynamics. To address the question of what governs the choice of throwing strategy, we use a simple model of throwing with an arm modelled as a hinged bar of fixed length that can release a projectile at any angle and angular velocity. We show that the amplification of deviations in launch parameters from a one parameter family of solution curves is quantified by the largest singular value of an appropriate Jacobian. This allows us to predict a preferred throwing style in terms of this singular value, which itself depends on target location and the target shape. Our analysis also allows us to characterize the trade-off between speed and accuracy despite not including any effects of signal-dependent noise. Using nonlinear calculations for propagating finite input-noise, we find that an underarm throw to a target leads to an undershoot, but an overarm throw does not. Finally, we consider the limit of...

  3. 38 CFR 4.46 - Accurate measurement.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  4. Prediction of Unsteady Transonic Aerodynamics Project

    Data.gov (United States)

    National Aeronautics and Space Administration — An accurate prediction of aero-elastic effects depends on an accurate prediction of the unsteady aerodynamic forces. Perhaps the most difficult speed regime is...

  5. Accurate lineshape spectroscopy and the Boltzmann constant.

    Science.gov (United States)

    Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N

    2015-10-14

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveals a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m.
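
    For context, the Voigt profile whose breakdown is reported above is the convolution of a Gaussian (Doppler) core with a Lorentzian wing, conveniently evaluated through the Faddeeva function. The sketch below shows that standard evaluation only; the parameter values are illustrative and the authors' refined lineshape model is not reproduced.

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt profile: Gaussian of std sigma convolved with Lorentzian of HWHM gamma."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

detuning = np.linspace(-2e9, 2e9, 5)     # Hz around line centre (illustrative)
print(voigt(detuning, sigma=220e6, gamma=50e6))
```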

  6. Simple and accurate analytical calculation of shortest path lengths

    CERN Document Server

    Melnik, Sergey

    2016-01-01

    We present an analytical approach to calculating the distribution of shortest path lengths (also called intervertex distances, or geodesic paths) between nodes in unweighted undirected networks. We obtain very accurate results for synthetic random networks with specified degree distribution (the so-called configuration model networks). Our method allows us to accurately predict the distribution of shortest path lengths on real-world networks using their degree distribution, or joint degree-degree distribution. Compared to some other methods, our approach is simpler and yields more accurate results. In order to obtain the analytical results, we use the analogy between an infection reaching a node in $n$ discrete time steps (i.e., as in the susceptible-infected epidemic model) and that node being at a distance $n$ from the source of the infection.
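
    The quantity being predicted analytically is the empirical shortest-path-length distribution, which can always be obtained by brute force with breadth-first search; the infection picture in the abstract corresponds to the BFS frontier at step n. The sketch below computes that reference distribution on a toy graph; it is the baseline the analytical method is compared against, not the analytical method itself.

```python
from collections import deque, Counter

def shortest_path_length_distribution(adjacency):
    """Empirical distribution of shortest path lengths in an unweighted, undirected graph,
    computed by breadth-first search from every node (nodes reached at step n are at
    distance n from the source, as in the susceptible-infected picture)."""
    counts = Counter()
    for source in adjacency:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        counts.update(d for node, d in dist.items() if node != source)
    total = sum(counts.values())
    return {d: c / total for d, c in sorted(counts.items())}

# Toy graph given as adjacency lists; purely illustrative.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
print(shortest_path_length_distribution(graph))
```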

  7. Accurate multireference study of Si3 electronic manifold

    CERN Document Server

    Goncalves, Cayo Emilio Monteiro; Braga, Joao Pedro

    2016-01-01

    Since it has been shown that the silicon trimer has a highly multi-reference character, accurate multi-reference configuration interaction calculations are performed to elucidate its electronic manifold. Emphasis is given to the long-range part of the potential, aiming to understand the dynamical aspects of atom-diatom collisions and to describe conical intersections and important saddle points along the reactive path. An analysis of the main features of the potential energy surface is performed for benchmarking, and highly accurate values for structures, vibrational constants and energy gaps are reported, as well as the unpublished spin-orbit coupling magnitude. The results predict that inter-system crossings will play an important role in dynamical simulations, especially in triplet-state quenching, making the problem of constructing a precise potential energy surface more complicated and multi-layer dependent. The ground state is predicted to be the singlet one, but since the singlet-triplet gap is rather small (2.448 kJ/mol) bo...

  8. Predicting toxicity of nanoparticles

    OpenAIRE

    BURELLO ENRICO; Worth, Andrew

    2011-01-01

    A statistical model based on a quantitative structure–activity relationship accurately predicts the cytotoxicity of various metal oxide nanoparticles, thus offering a way to rapidly screen nanomaterials and prioritize testing.

  9. More-Accurate Model of Flows in Rocket Injectors

    Science.gov (United States)

    Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford

    2011-01-01

    An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.

  10. Laboratory Building for Accurate Determination of Plutonium

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also key to chemical measurement transfer and the basis of the nuclear material balance. An

  11. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  12. Understanding the Code: keeping accurate records.

    Science.gov (United States)

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met.

  13. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  14. Accurate Interatomic Force Fields via Machine Learning with Covariant Kernels

    CERN Document Server

    Glielmo, Aldo; De Vita, Alessandro

    2016-01-01

    We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian Process (GP) Regression. This is based on matrix-valued kernel functions, to which we impose that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such "covariant" GP kernels can be obtained by integration over the elements of the rotation group SO(d) for the relevant dimensionality d. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni and Fe crystalline...

  15. Accurate studies on dissociation energies of diatomic molecules

    Institute of Scientific and Technical Information of China (English)

    SUN; WeiGuo; FAN; QunChao

    2007-01-01

    The molecular dissociation energies of some electronic states of hydride and N{sub 2} molecules were studied using a parameter-free analytical formula suggested in this study and the algebraic method (AM) proposed recently. The results show that the accurate AM dissociation energies D{sub e}{sup AM} agree excellently with the experimental dissociation energies D{sub e}{sup expt}, and that the dissociation energy of an electronic state such as the 2{sup 3}△{sub g} state of {sup 7}Li{sub 2}, whose experimental value is not available, can be predicted using the new formula.

  16. Accurate tracking control in LOM application

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The fabrication of an accurate prototype directly from a CAD model in a short time depends on accurate tracking control and reference trajectory planning in LOM (Laminated Object Manufacture) applications. An improvement in contour accuracy is achieved by the introduction of a tracking controller and a trajectory generation policy. A model of the X-Y positioning system of the LOM machine is developed as the design basis of the tracking controller. The ZPETC (Zero Phase Error Tracking Controller) is used to eliminate single-axis following error, thus reducing the contour error. The simulation is developed on a Matlab model based on a retrofitted LOM machine, and satisfactory results are obtained.

  17. Rapid and accurate evaluation of the quality of commercial organic fertilizers using near infrared spectroscopy.

    Science.gov (United States)

    Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing commercial organic fertilizer quality. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions of the total organic matter, water soluble organic nitrogen, pH, and germination index; less accurate results for the moisture, total nitrogen, and electrical conductivity; and the least accurate results for water soluble organic carbon. Our results suggest that the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers.
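
    A minimal sketch of the NIR-PLS idea is given below: regress a quality parameter (here standing in for total organic matter) on spectra using partial least squares. It uses scikit-learn's PLSRegression; the random "spectra", the number of wavelengths and the number of latent variables are assumptions for illustration, not the calibration actually built in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 104, 700          # 104 fertilizers; 700 NIR wavelengths is assumed
spectra = rng.normal(size=(n_samples, n_wavelengths))
# Synthetic target: depends on a few spectral channels plus noise (illustration only).
organic_matter = spectra[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(spectra, organic_matter, random_state=0)
pls = PLSRegression(n_components=8)          # latent variables; chosen by cross-validation in practice
pls.fit(X_train, y_train)
print("R^2 on held-out samples:", pls.score(X_test, y_test))
```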

  18. Accurate Switched-Voltage voltage averaging circuit

    OpenAIRE

    金光, 一幸; 松本, 寛樹

    2006-01-01

    Abstract: This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit. It is presented to compensate for NMOS mismatch error in a MOS differential-type voltage averaging circuit. The proposed circuit consists of a voltage averaging and an SV sample/hold (S/H) circuit. It can operate using nonoverlapping three-phase clocks. The performance of this circuit is verified by PSpice simulations.

  19. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

    Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision. inertial tracking and low-latency renderi

  20. The economic value of accurate wind power forecasting to utilities

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S.J. [Rutherford Appleton Lab., Oxfordshire (United Kingdom); Giebel, G.; Joensen, A. [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

    With increasing penetrations of wind power, the need for accurate forecasting is becoming ever more important. Wind power is by its very nature intermittent. For utility schedulers this presents its own problems, particularly when the penetration of wind power capacity in a grid reaches a significant level (>20%). However, using accurate forecasts of wind power at wind farm sites, schedulers are able to plan the operation of conventional power capacity to accommodate the fluctuating demands of consumers and wind farm output. The results of a study to assess the value of forecasting at several potential wind farm sites in the UK and in the US state of Iowa using the Reading University/Rutherford Appleton Laboratory National Grid Model (NGM) are presented. The results are assessed for different types of wind power forecasting, namely: persistence, optimised numerical weather prediction or perfect forecasting. In particular, it will be shown how the NGM has been used to assess the value of numerical weather prediction forecasts from the Danish Meteorological Institute model, HIRLAM, and the US Nested Grid Model, which have been 'site tailored' by the use of the linearized flow model WA{sup s}P and by various Model Output Statistics (MOS) and autoregressive techniques. (au)

  1. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Full Text Available Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
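
    The beat cue described above can be illustrated numerically: two tones a fraction of a hertz apart produce a slow amplitude envelope whose rate equals their frequency difference, so tuning becomes a temporal task (drive the beat rate to zero). The frequencies, duration and envelope extraction below are illustrative assumptions, not the study's stimuli or analysis.

```python
import numpy as np
from scipy.signal import hilbert

fs, duration = 4000.0, 10.0
t = np.arange(0, duration, 1 / fs)
f_ref, f_string = 110.0, 110.4                     # reference tone vs a slightly sharp string (assumed)
mix = np.sin(2 * np.pi * f_ref * t) + np.sin(2 * np.pi * f_string * t)

# The analytic-signal envelope tracks the slow amplitude modulation of the mixture;
# its dominant frequency is the beat rate |f_string - f_ref|.
envelope = np.abs(hilbert(mix))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), d=1 / fs)
print("dominant beat frequency:", freqs[np.argmax(spectrum)])   # close to 0.4 Hz
```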

  2. Efficient Accurate Context-Sensitive Anomaly Detection

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, which is based on static binary executable analysis. The CPDA model incorporates the optimized call stack walk and code instrumentation technique to gain complete context information. Thereby the proposed method can detect more attacks, while retaining good performance.

  3. On accurate determination of contact angle

    Science.gov (United States)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  4. Accurate Control of Josephson Phase Qubits

    Science.gov (United States)

    2016-04-14

    Accurate control of Josephson phase qubits. Matthias Steffen, John M. Martinis, and Isaac L. Chuang, Physical Review B 68, 224518 (2003). Center for Bits and Atoms and Department of Physics, MIT, Cambridge, Massachusetts 02139, USA; Solid State and Photonics Laboratory, Stanford University.

  5. Accurate guitar tuning by cochlear implant musicians.

    Science.gov (United States)

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  6. Synthesizing Accurate Floating-Point Formulas

    OpenAIRE

    Ioualalen, Arnault; Martel, Matthieu

    2013-01-01

    International audience; Many critical embedded systems perform floating-point computations yet their accuracy is difficult to assert and strongly depends on how formulas are written in programs. In this article, we focus on the synthesis of accurate formulas mathematically equal to the original formulas occurring in source codes. In general, an expression may be rewritten in many ways. To avoid any combinatorial explosion, we use an intermediate representation, called APEG, enabling us to rep...

  7. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
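
    The core computational pattern described above, i.e. principal components analysis of a correlation matrix estimated from an ensemble of superposed structures, can be sketched as follows. The random "ensemble" stands in for real, already-superposed coordinates, and the maximum-likelihood superposition and covariance estimator that are the paper's actual contribution are not reproduced here.

```python
import numpy as np

n_models, n_atoms = 20, 50
rng = np.random.default_rng(1)
ensemble = rng.normal(size=(n_models, n_atoms, 3))   # (models, atoms, xyz); assumed pre-superposed

coords = ensemble.reshape(n_models, -1)              # flatten xyz coordinates per model
deviations = coords - coords.mean(axis=0)
covariance = deviations.T @ deviations / (n_models - 1)
std = np.sqrt(np.diag(covariance))
correlation = covariance / np.outer(std, std)

# PCA: eigenvectors of the correlation matrix are the modes of structural correlation.
eigvals, eigvecs = np.linalg.eigh(correlation)
principal_mode = eigvecs[:, -1]                      # largest mode, could be colour-coded onto the structure
print("fraction of variance in the first mode:", eigvals[-1] / eigvals.sum())
```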

  8. Programming Useful Life Prediction (PULP) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Accurately predicting Remaining Useful Life (RUL) provides significant benefits—it increases safety and reduces financial and labor resource requirements....

  9. GTfold: Enabling parallel RNA secondary structure prediction on multi-core desktops

    DEFF Research Database (Denmark)

    Swenson, M Shel; Anderson, Joshua; Ash, Andrew

    2012-01-01

    Accurate and efficient RNA secondary structure prediction remains an important open problem in computational molecular biology. Historically, advances in computing technology have enabled faster and more accurate RNA secondary structure predictions. Previous parallelized prediction programs achie...

  10. Accurate measurement of unsteady state fluid temperature

    Science.gov (United States)

    Jaremkiewicz, Magdalena

    2017-03-01

    In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water since its temperature is known. At the beginning the thermometers are at the ambient temperature and next they are immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected considering the thermometers as the first or second order inertia devices. The new design of a thermometer was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through the wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results compared with measurements using industrial thermometers in conjunction with simple temperature correction using the inertial thermometer model of the first or second order. By comparing the results, it was demonstrated that the new thermometer allows obtaining the fluid temperature much faster and with higher accuracy in comparison to the industrial thermometer. Accurate measurements of the fast changing fluid temperature are possible due to the low inertia thermometer and fast space marching method applied for solving the inverse heat conduction problem.
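
    The simplest of the corrections mentioned above treats the thermometer as a first-order inertia element, tau*dT/dt + T = T_fluid, so the fluid temperature can be recovered from the indicated temperature and its time derivative. The sketch below applies that correction to a synthetic step response; the time constant and temperatures are assumed values for illustration, not the paper's experimental data.

```python
import numpy as np

tau = 8.0                                # assumed thermometer time constant, s
t = np.linspace(0.0, 60.0, 601)
T_fluid_true = 100.0                     # boiling water
# Ideal first-order response of a thermometer starting at 20 degC.
T_indicated = 20.0 + (T_fluid_true - 20.0) * (1 - np.exp(-t / tau))

# First-order inertia correction: T_fluid = T_indicated + tau * dT_indicated/dt.
dTdt = np.gradient(T_indicated, t)
T_fluid_reconstructed = T_indicated + tau * dTdt
print(T_indicated[50], T_fluid_reconstructed[50])   # the corrected value recovers ~100 degC early on
```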

  11. How accurate are the weather forecasts for Bierun (southern Poland)?

    Science.gov (United States)

    Gawor, J.

    2012-04-01

    Weather forecast accuracy has increased in recent times mainly thanks to significant development of numerical weather prediction models. Despite the improvements, the forecasts should be verified to control their quality. The evaluation of forecast accuracy can also be an interesting learning activity for students. It joins natural curiosity about everyday weather and scientific process skills: problem solving, database technologies, graph construction and graphical analysis. The examination of the weather forecasts has been taken by a group of 14-year-old students from Bierun (southern Poland). They participate in the GLOBE program to develop inquiry-based investigations of the local environment. For the atmospheric research the automatic weather station is used. The observed data were compared with corresponding forecasts produced by two numerical weather prediction models, i.e. COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System) developed by Naval Research Laboratory Monterey, USA; it runs operationally at the Interdisciplinary Centre for Mathematical and Computational Modelling in Warsaw, Poland and COSMO (The Consortium for Small-scale Modelling) used by the Polish Institute of Meteorology and Water Management. The analysed data included air temperature, precipitation, wind speed, wind chill and sea level pressure. The prediction periods from 0 to 24 hours (Day 1) and from 24 to 48 hours (Day 2) were considered. The verification statistics that are commonly used in meteorology have been applied: mean error, also known as bias, for continuous data and a 2x2 contingency table to get the hit rate and false alarm ratio for a few precipitation thresholds. The results of the aforementioned activity became an interesting basis for discussion. The most important topics are: 1) to what extent can we rely on the weather forecasts? 2) How accurate are the forecasts for two considered time ranges? 3) Which precipitation threshold is the most predictable? 4) Why
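
    The verification statistics named above are easy to compute once forecasts and observations are paired: the mean error (bias) for a continuous variable, and the hit rate and false alarm ratio from a 2x2 contingency table for precipitation above a threshold. The arrays and threshold below are invented for illustration, not the Bierun data.

```python
import numpy as np

obs_temp = np.array([1.2, 3.4, 0.5, -2.1, 4.0])
fc_temp  = np.array([1.0, 4.1, 1.2, -1.5, 3.2])
bias = np.mean(fc_temp - obs_temp)                        # mean error of the temperature forecast

threshold = 0.1                                           # mm, assumed precipitation event threshold
obs_precip = np.array([0.0, 2.3, 0.0, 5.1, 0.4])
fc_precip  = np.array([0.2, 1.8, 0.0, 0.0, 1.0])
obs_event = obs_precip >= threshold
fc_event  = fc_precip >= threshold

hits         = np.sum(fc_event & obs_event)
false_alarms = np.sum(fc_event & ~obs_event)
misses       = np.sum(~fc_event & obs_event)

hit_rate          = hits / (hits + misses)                # fraction of observed events that were forecast
false_alarm_ratio = false_alarms / (hits + false_alarms)  # fraction of forecast events that did not occur
print(bias, hit_rate, false_alarm_ratio)
```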

  12. Niche Genetic Algorithm with Accurate Optimization Performance

    Institute of Scientific and Technical Information of China (English)

    LIU Jian-hua; YAN De-kun

    2005-01-01

    Based on a crowding mechanism, a novel niche genetic algorithm was proposed which can record the evolutionary direction dynamically during evolution. After evolution, the solutions' precision can be greatly improved by means of local searching along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although this method takes more time than the standard GA, it is worth applying in cases that demand high solution precision.

  13. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. In...... are collected within the building complex. Results indicate that InTraTime is superior with respect to metrics such as deployment cost, maintenance cost and estimation accuracy, yielding an average deviation from actual travel times of 11.7 %. This accuracy was achieved despite using a minimal-effort setup...

  14. Accurate diagnosis is essential for amebiasis

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is among the most widely distributed parasites in the world. In particular, Entamoeba histolytica infection in developing countries is a significant health problem in amebiasis-endemic areas, with a significant impact on infant mortality [1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improving the quality of parasitological methods and the widespread use of accurate techniques have improved our knowledge about the disease.

  15. The first accurate description of an aurora

    Science.gov (United States)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  16. New law requires 'medically accurate' lesson plans.

    Science.gov (United States)

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  17. Universality: Accurate Checks in Dyson's Hierarchical Model

    Science.gov (United States)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near β{sub c} for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10{sup -8} and Δ = 0.4259469 ± 10{sup -7}, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
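
    The "linear fitting" step for the leading exponent can be illustrated as follows: near the critical point the susceptibility scales as chi ~ (beta_c - beta)^(-gamma), so a least-squares line through log(chi) versus log(beta_c - beta) has slope -gamma. The synthetic data below assume a gamma value purely for illustration and do not reproduce the hierarchical-model calculation.

```python
import numpy as np

beta_c, gamma_true = 1.0, 1.2991
reduced = np.logspace(-6, -3, 40)                                 # beta_c - beta
chi = reduced ** (-gamma_true) * (1 + 0.05 * reduced ** 0.426)    # leading term plus a small subleading correction

slope, intercept = np.polyfit(np.log(reduced), np.log(chi), 1)
print("estimated gamma:", -slope)                                 # close to gamma_true; the correction causes a small shift
```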

  18. Technological Basis and Scientific Returns for Absolutely Accurate Measurements

    Science.gov (United States)

    Dykema, J. A.; Anderson, J.

    2011-12-01

    The 2006 NRC Decadal Survey fostered a new appreciation for societal objectives as a driving motivation for Earth science. Many high-priority societal objectives are dependent on predictions of weather and climate. These predictions are based on numerical models, which derive from approximate representations of well-founded physics and chemistry on space and timescales appropriate to global and regional prediction. These laws of chemistry and physics in turn have a well-defined quantitative relationship with physical measurement units, provided these measurement units are linked to international measurement standards that are the foundation of contemporary measurement science and standards for engineering and commerce. Without this linkage, measurements have an ambiguous relationship to scientific principles that introduces avoidable uncertainty in analyses, predictions, and improved understanding of the Earth system. Since the improvement of climate and weather prediction is fundamentally dependent on the improvement of the representation of physical processes, measurement systems that reduce the ambiguity between physical truth and observations represent an essential component of a national strategy for understanding and living with the Earth system. This paper examines the technological basis and potential science returns of sensors that make measurements that are quantitatively tied on-orbit to international measurement standards, and thus testable to systematic errors. This measurement strategy provides several distinct benefits. First, because of the quantitative relationship between these international measurement standards and fundamental physical constants, measurements of this type accurately capture the true physical and chemical behavior of the climate system and are not subject to adjustment due to excluded measurement physics or instrumental artifacts. In addition, such measurements can be reproduced by scientists anywhere in the world, at any time

  19. Accurate location estimation of moving object with energy constraint & adaptive update algorithms to save data

    CERN Document Server

    Semwal, Vijay Bhaskar; Bhaskar, Vinay S; Sati, Meenakshi

    2011-01-01

    In the research paper "Accurate estimation of the target location of object with energy constraint & Adaptive Update Algorithms to Save Data", one of the central issues in sensor networks is tracking the location of a moving object, which carries the overhead of saving data, while estimating the target location accurately under an energy constraint. There is no mechanism to control and maintain the data, and the wireless communication bandwidth is very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information is fed back to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on prediction and an adaptive update algorithm that uses fewer sensor nodes thanks to an accurate estimation of the target location. A minimum of three sensor nodes is used to obtain an accurate position; this can be extended to four or five to find a more accurate location ...

  20. An accurate potential energy curve for helium based on ab initio calculations

    Science.gov (United States)

    Janzen, A. R.; Aziz, R. A.

    1997-07-01

    Korona, Williams, Bukowski, Jeziorski, and Szalewicz [J. Chem. Phys. 106, 1 (1997)] constructed a completely ab initio potential for He2 by fitting their calculations using infinite order symmetry adapted perturbation theory at intermediate range, existing Green's function Monte Carlo calculations at short range and accurate dispersion coefficients at long range to a modified Tang-Toennies potential form. The potential with retardation added to the dipole-dipole dispersion is found to predict accurately a large set of microscopic and macroscopic experimental data. The potential with a significantly larger well depth than other recent potentials is judged to be the most accurate characterization of the helium interaction yet proposed.

  1. How Accurately can we Calculate Thermal Systems?

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K{sub eff}, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  2. Accurate pattern registration for integrated circuit tomography

    Energy Technology Data Exchange (ETDEWEB)

    Levine, Zachary H.; Grantham, Steven; Neogi, Suneeta; Frigo, Sean P.; McNulty, Ian; Retsch, Cornelia C.; Wang, Yuxin; Lucatorto, Thomas B.

    2001-07-15

    As part of an effort to develop high resolution microtomography for engineered structures, a two-level copper integrated circuit interconnect was imaged using 1.83 keV x rays at 14 angles employing a full-field Fresnel zone plate microscope. A major requirement for high resolution microtomography is the accurate registration of the reference axes in each of the many views needed for a reconstruction. A reconstruction with 100 nm resolution would require registration accuracy of 30 nm or better. This work demonstrates that even images that have strong interference fringes can be used to obtain accurate fiducials through the use of Radon transforms. We show that we are able to locate the coordinates of the rectilinear circuit patterns to 28 nm. The procedure is validated by agreement between an x-ray parallax measurement of 1.41{+-}0.17 {mu}m and a measurement of 1.58{+-}0.08 {mu}m from a scanning electron microscope image of a cross section.

  3. Accurate determination of characteristic relative permeability curves

    Science.gov (United States)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
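
    For reference, the "standard interpretation methods" mentioned above reduce, per phase and per fractional-flow step of a steady-state test, to Darcy's law. A minimal sketch, with illustrative SI values that are not taken from the paper:

        # Steady-state interpretation: relative permeability of a phase from
        # Darcy's law, k_r = q * mu * L / (k_abs * A * dP). Values are
        # illustrative only.
        def relative_permeability(q, mu, L, k_abs, A, dP):
            return q * mu * L / (k_abs * A * dP)

        q_w   = 1.0e-8     # m^3/s  injected water rate
        mu_w  = 1.0e-3     # Pa.s   water viscosity
        L     = 0.1        # m      core length
        k_abs = 1.0e-13    # m^2    absolute permeability
        A     = 1.0e-3     # m^2    cross-sectional area
        dP_w  = 2.0e5      # Pa     pressure drop across the core

        print(relative_permeability(q_w, mu_w, L, k_abs, A, dP_w))   # 0.05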

  4. Accurate pose estimation for forensic identification

    Science.gov (United States)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  5. Accurate taxonomic assignment of short pyrosequencing reads.

    Science.gov (United States)

    Clemente, José C; Jansson, Jesper; Valiente, Gabriel

    2010-01-01

    Ambiguities in the taxonomy dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
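
    To make the assignment criterion concrete, the toy sketch below scores every candidate node of a small hand-written taxonomy by the precision and recall of its leaf set against the set of leaves matching a read, and returns the node with the best combined score (F1 is used here as the combination, which is an assumption). The published method restricts the search to the subtree rooted at the lowest common ancestor and achieves linear time with a suffix array; none of that machinery is reproduced here.

        # Minimal sketch: assign a read to the taxonomy node with the best
        # combined precision/recall of its leaf set versus the matching leaves.
        # The toy taxonomy and the F1 combination are illustrative assumptions.
        children = {                      # node -> list of child nodes
            "root": ["A", "B"],
            "A": ["a1", "a2"], "B": ["b1", "b2"],
            "a1": [], "a2": [], "b1": [], "b2": [],
        }

        def leaves(node):
            if not children[node]:
                return {node}
            return set().union(*(leaves(c) for c in children[node]))

        def best_node(matching_leaves):
            matching = set(matching_leaves)
            best, best_f1 = None, -1.0
            for node in children:            # in practice: only the LCA subtree
                clade = leaves(node)
                tp = len(clade & matching)
                if tp == 0:
                    continue
                precision = tp / len(clade)  # fraction of the clade that matches
                recall = tp / len(matching)  # fraction of matches covered
                f1 = 2 * precision * recall / (precision + recall)
                if f1 > best_f1:
                    best, best_f1 = node, f1
            return best, best_f1

        print(best_node({"a1", "a2"}))       # -> ('A', 1.0)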

  6. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    Science.gov (United States)

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features.

  7. Fast and accurate determination of modularity and its effect size

    CERN Document Server

    Treviño, Santiago; Del Genio, Charo I; Bassler, Kevin E

    2014-01-01

    We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well as or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős-Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links.
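
    The sketch below illustrates the z-score idea in the last sentence: compare a network's maximized modularity with the mean and standard deviation of the same quantity over an Erdős-Rényi ensemble with equal numbers of nodes and links. It uses networkx's greedy modularity heuristic as a stand-in for the authors' spectral algorithm and estimates the ensemble statistics by brute-force simulation rather than from their analytic formulas; everything here is illustrative.

        # Modularity z-score of a network against an Erdos-Renyi ensemble with
        # the same numbers of nodes and links. Greedy modularity maximization
        # stands in for the spectral algorithm; ensemble statistics are simulated.
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities, modularity

        def best_modularity(G):
            parts = greedy_modularity_communities(G)
            return modularity(G, parts)

        def modularity_zscore(G, n_random=50):
            n, m = G.number_of_nodes(), G.number_of_edges()
            q_obs = best_modularity(G)
            q_rand = [best_modularity(nx.gnm_random_graph(n, m)) for _ in range(n_random)]
            mean = sum(q_rand) / len(q_rand)
            var = sum((q - mean) ** 2 for q in q_rand) / (len(q_rand) - 1)
            return (q_obs - mean) / var ** 0.5

        if __name__ == "__main__":
            G = nx.karate_club_graph()
            print(modularity_zscore(G, n_random=20))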

  8. Accurate bond dissociation energies (D0) for FHF- isotopologues

    Science.gov (United States)

    Stein, Christopher; Oswald, Rainer; Sebald, Peter; Botschwina, Peter; Stoll, Hermann; Peterson, Kirk A.

    2013-09-01

    Accurate bond dissociation energies (D0) are determined for three isotopologues of the bifluoride ion (FHF-). While the zero-point vibrational contributions are taken from our previous work (P. Sebald, A. Bargholz, R. Oswald, C. Stein, P. Botschwina, J. Phys. Chem. A, DOI: 10.1021/jp3123677), the equilibrium dissociation energy (De) of the reaction FHF- → HF + F- was obtained by a composite method including frozen-core (fc) CCSD(T) calculations with basis sets up to cardinal number n = 7 followed by extrapolation to the complete basis set limit. Smaller terms beyond fc-CCSD(T) cancel each other almost completely. The D0 values of FHF-, FDF-, and FTF- are predicted to be 15,176, 15,191, and 15,198 cm-1, respectively, with an uncertainty of ca. 15 cm-1.
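
    For context, composite schemes of this kind typically extrapolate the frozen-core correlation energy with an inverse-cubic formula in the cardinal number; whether this exact two-point form was used here is an assumption, and the expression below (in LaTeX notation) is only the generic textbook version, together with the definition of the equilibrium dissociation energy for this reaction.

        % Generic two-point inverse-cubic extrapolation of the correlation energy
        % to the complete-basis-set (CBS) limit, and the definition of D_e for the
        % FHF- -> HF + F- dissociation; illustrative only.
        \[
          E_{\mathrm{corr}}^{\mathrm{CBS}} \approx
          \frac{n^{3} E_{\mathrm{corr}}(n) - (n-1)^{3} E_{\mathrm{corr}}(n-1)}
               {n^{3} - (n-1)^{3}},
          \qquad
          D_e = E(\mathrm{HF}) + E(\mathrm{F}^{-}) - E(\mathrm{FHF}^{-}).
        \]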

  9. Accurate location estimation of moving object In Wireless Sensor network

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    Full Text Available One of the central issues in wireless sensor networks is tracking the location of a moving object, which entails the overhead of storing data and accurately estimating the target location under energy constraints. There is no mechanism that controls and maintains these data, and the wireless communication bandwidth is also very limited. Fields that use this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring; once this information is available, it can be fed back to central air conditioning and ventilation systems. In this research paper, we propose a protocol based on a prediction and adaptive algorithm that reduces the number of sensor nodes needed by accurately estimating the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.

  10. Apparatus for accurately measuring high temperatures

    Science.gov (United States)

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  11. Accurate Weather Forecasting for Radio Astronomy

    Science.gov (United States)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Services (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/ rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc, and, most importantly, temperature, pressure, humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 Nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
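
    The layer-by-layer radiative transfer step described above can be summarized by a short sketch: given per-layer absorption coefficients (which in the real system come from an atmospheric absorption model applied to the forecasted temperature, pressure and humidity profiles), the zenith opacity is the sum of the layer contributions and the sky brightness temperature is the attenuated sum of the layer emissions. The absorption model itself is not implemented here and all numbers are illustrative.

        # Minimal sketch of the radiative-transfer bookkeeping: total zenith
        # opacity and atmospheric brightness temperature from per-layer
        # absorption. The absorption model that produces alpha is not
        # implemented; values below are illustrative.
        import numpy as np

        def opacity_and_brightness(alpha, temperature, dz_km):
            """alpha [Np/km], temperature [K], dz_km [km]: arrays ordered from the ground upward."""
            dtau = alpha * dz_km                          # opacity of each layer
            tau_total = np.sum(dtau)
            tau_below = np.concatenate(([0.0], np.cumsum(dtau)[:-1]))
            # Each layer emits (1 - exp(-dtau)) * T and is attenuated by the
            # layers between it and the telescope on the ground.
            t_sky = np.sum(temperature * (1.0 - np.exp(-dtau)) * np.exp(-tau_below))
            return tau_total, t_sky

        alpha = np.array([0.020, 0.015, 0.008, 0.003])    # Np/km
        temp  = np.array([285.0, 270.0, 250.0, 230.0])    # K
        dz    = np.array([1.0, 2.0, 4.0, 8.0])            # km
        print(opacity_and_brightness(alpha, temp, dz))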

  12. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case. With accurate information travelers prefer the route in the best condition, while delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, which decreases the capacity, increases oscillations, and drives the system away from the equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes have an equal probability of being chosen. Bounded rationality helps to improve efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.

  13. Fast and accurate exhaled breath ammonia measurement.

    Science.gov (United States)

    Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H

    2014-06-11

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations.

  14. Noninvasive hemoglobin monitoring: how accurate is enough?

    Science.gov (United States)

    Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E

    2013-10-01

    Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision. In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 is not good enough to make the transfusion decision.

  15. Accurate free energy calculation along optimized paths.

    Science.gov (United States)

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
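
    As a reminder of how the constructed path is then used, the free energy difference along it follows from thermodynamic integration: the integral of the ensemble-averaged derivative of the Hamiltonian with respect to the coupling (or path) parameter. The sketch below only performs that final numerical integration; the window averages are assumed to come from simulations and the numbers are invented for illustration.

        # Thermodynamic-integration step: Delta F = integral of <dH/dlambda> over
        # the path parameter. Window averages are assumed to come from simulation;
        # the values below are illustrative only.
        def free_energy_difference(lambdas, dh_dlambda_means):
            # Trapezoidal rule over the coupling parameter.
            total = 0.0
            for i in range(len(lambdas) - 1):
                width = lambdas[i + 1] - lambdas[i]
                total += 0.5 * width * (dh_dlambda_means[i] + dh_dlambda_means[i + 1])
            return total

        lambdas = [i / 10.0 for i in range(11)]
        dh_dl   = [10.0 * (lam - 0.4) for lam in lambdas]   # fake averages, kcal/mol
        print(free_energy_difference(lambdas, dh_dl))       # about 1.0 kcal/mol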

  16. Accurate fission data for nuclear safety

    CERN Document Server

    Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S

    2013-01-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power in the order of a few hundred thousands. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired while for reactor applications neutron spectra that resembles those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...

  17. Towards Accurate Modeling of Moving Contact Lines

    CERN Document Server

    Holmgren, Hanna

    2015-01-01

    A main challenge in numerical simulations of moving contact line problems is that the adherence, or no-slip boundary condition leads to a non-integrable stress singularity at the contact line. In this report we perform the first steps in developing the macroscopic part of an accurate multiscale model for a moving contact line problem in two space dimensions. We assume that a micro model has been used to determine a relation between the contact angle and the contact line velocity. An intermediate region is introduced where an analytical expression for the velocity exists. This expression is used to implement boundary conditions for the moving contact line at a macroscopic scale, along a fictitious boundary located a small distance away from the physical boundary. Model problems where the shape of the interface is constant throughout the simulation are introduced. For these problems, experiments show that the errors in the resulting contact line velocities converge with the grid size $h$ at a rate of convergence $...

  18. Does a pneumotach accurately characterize voice function?

    Science.gov (United States)

    Walters, Gage; Krane, Michael

    2016-11-01

    A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask, worn over the mouth, in order to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were performed, with and without the pneumotach in place, and differences noted. Acknowledge support of NIH Grant 2R01DC005642-10A1.

  19. Accurate upper body rehabilitation system using kinect.

    Science.gov (United States)

    Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit

    2016-08-01

    The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards of accuracy when compared to marker-based motion capture systems such as Vicon. The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect Depth and RGB information to search for the joint center location that satisfies constraints on body segment length as well as orientation. An experimental study has been carried out on ten healthy participants performing upper body range of motion exercises. The results report a 72% reduction in body segment length variance and a 2° improvement in Range of Motion (ROM) angle, enabling more accurate measurements for upper limb exercises.

  20. Accurate thermoplasmonic simulation of metallic nanoparticles

    Science.gov (United States)

    Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing

    2017-01-01

    Thermoplasmonics leads to enhanced heat generation due to the localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(NsNv), where Ns and Nv, respectively, denote the number of surface and volumetric unknowns. Our approach reduces the cost to O(Nv) by using VIE. The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the approach based on SIEs under comparable accuracy, especially for the case where many incidents are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometry configuration of array, beam direction, and light wavelength.

  1. Fast and Provably Accurate Bilateral Filtering.

    Science.gov (United States)

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S . The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy.
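
    To make the baseline concrete, the sketch below is a direct, brute-force Gaussian bilateral filter costing O(S) operations per pixel; it is the operation that the paper approximates in O(1) per pixel, not the authors' fast algorithm, and the parameter values are illustrative.

        # Direct (brute-force) Gaussian bilateral filter, O(S) per pixel, included
        # only to make the baseline concrete; the paper's O(1)-per-pixel
        # approximation is not reproduced here.
        import numpy as np

        def bilateral_filter(img, sigma_s, sigma_r, radius):
            h, w = img.shape
            pad = np.pad(img, radius, mode="edge")
            out = np.zeros_like(img, dtype=float)
            # Precompute the spatial Gaussian over the (2*radius+1)^2 support.
            ax = np.arange(-radius, radius + 1)
            xx, yy = np.meshgrid(ax, ax)
            spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))
            for i in range(h):
                for j in range(w):
                    patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
                    rng = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
                    weights = spatial * rng
                    out[i, j] = np.sum(weights * patch) / np.sum(weights)
            return out

        if __name__ == "__main__":
            noisy = np.random.rand(32, 32)
            print(bilateral_filter(noisy, sigma_s=3.0, sigma_r=0.1, radius=5).shape)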

  2. ACCURATE FORECAST AS AN EFFECTIVE WAY TO REDUCE THE ECONOMIC RISK OF AGRO-INDUSTRIAL COMPLEX

    Directory of Open Access Journals (Sweden)

    Kymratova A. M.

    2014-11-01

    Full Text Available This article discusses ways of reducing financial, economic and social risks on the basis of accurate prediction. We study the importance of natural time series of winter wheat yield and of minimum winter and winter-spring daily temperatures. A distinctive feature of time series of this class is that they do not follow a normal distribution and show no visible trend.

  3. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources, including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  4. Optimizing cell arrays for accurate functional genomics

    Directory of Open Access Journals (Sweden)

    Fengler Sven

    2012-07-01

    Full Text Available Abstract Background Cellular responses emerge from a complex network of dynamic biochemical reactions. In order to investigate them it is necessary to develop methods that allow perturbing a high number of gene products in a flexible and fast way. Cell arrays (CA) enable such experiments on microscope slides via reverse transfection of cellular colonies growing on spotted genetic material. In contrast to multi-well plates, CA are susceptible to contamination among neighboring spots, hindering accurate quantification in cell-based screening projects. Here we have developed a quality control protocol for quantifying and minimizing contamination in CA. Results We imaged checkered CA that express two distinct fluorescent proteins and segmented images into single cells to quantify the transfection efficiency and interspot contamination. Compared with standard procedures, we measured a 3-fold reduction of contaminants when arrays containing HeLa cells were washed shortly after cell seeding. We proved that nucleic acid uptake during cell seeding rather than migration among neighboring spots was the major source of contamination. Arrays of MCF7 cells developed without the washing step showed a 7-fold lower percentage of contaminant cells, demonstrating that contamination is dependent on specific cell properties. Conclusions Previously published methodological works have focused on achieving a high transfection rate in densely packed CA. Here, we focused on an equally important parameter: the interspot contamination. The presented quality control is essential for estimating the rate of contamination, a major source of false positives and negatives in current microscopy-based functional genomics screenings. We have demonstrated that a washing step after seeding enhances CA quality for HeLa but is not necessary for MCF7. The described method provides a way to find optimal seeding protocols for cell lines intended to be used for the first time in CA.

  5. Accurate paleointensities - the multi-method approach

    Science.gov (United States)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  6. How flatbed scanners upset accurate film dosimetry.

    Science.gov (United States)

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE, and therefore determination of the LSE per color channel and per dose delivered to the film.

  7. Accurate molecular classification of cancer using simple rules

    Directory of Open Access Journals (Sweden)

    Gotoh Osamu

    2009-10-01

    Full Text Available Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.
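
    The point that a single well-chosen gene can classify is easy to illustrate: the sketch below evaluates a one-gene threshold rule with leave-one-out cross-validation on synthetic expression data. The midpoint-threshold rule and the data are assumptions for illustration; the paper selects genes via rough-set "depended degrees" rather than this simple heuristic.

        # Single-gene threshold rule evaluated with leave-one-out cross-validation
        # (LOOCV). Synthetic data and the midpoint rule are illustrative only.
        import numpy as np

        def loocv_single_gene(expr, labels):
            """expr: 1-D expression of one gene; labels: 0/1 class per sample."""
            correct = 0
            n = len(expr)
            for i in range(n):
                train = np.delete(expr, i)
                train_y = np.delete(labels, i)
                # Threshold at the midpoint of the two training class means.
                thr = 0.5 * (train[train_y == 0].mean() + train[train_y == 1].mean())
                high_is_1 = train[train_y == 1].mean() > train[train_y == 0].mean()
                pred = int(expr[i] > thr) if high_is_1 else int(expr[i] <= thr)
                correct += int(pred == labels[i])
            return correct / n

        rng = np.random.default_rng(0)
        expr = np.concatenate([rng.normal(2.0, 1.0, 20), rng.normal(5.0, 1.0, 20)])
        labels = np.array([0] * 20 + [1] * 20)
        print(loocv_single_gene(expr, labels))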

  8. Accurate Segmentation for Infrared Flying Bird Tracking

    Institute of Scientific and Technical Information of China (English)

    ZHENG Hong; HUANG Ying; LING Haibin; ZOU Qi; YANG Hao

    2016-01-01

    Bird strikes present a huge risk for air vehicles, especially since traditional airport bird surveillance is mainly dependent on inefficient human observation. For improving the effectiveness and efficiency of bird monitoring, computer vision techniques have been proposed to detect birds, determine bird flying trajectories, and predict aircraft takeoff delays. A flying bird with large deformation poses a great challenge to current tracking algorithms. We propose a segmentation-based approach that enables tracking to adapt to the varying shape of the bird. The approach works by segmenting the object within a region of interest, which is determined by the object localization method and heuristic edge information. The segmentation is performed by a Markov random field, which is trained with foreground and background Gaussian mixture models. Experiments demonstrate that the proposed approach can handle large deformations and outperforms most state-of-the-art trackers in the infrared flying bird tracking problem.

  9. Accurate simulation of optical properties in dyes.

    Science.gov (United States)

    Jacquemin, Denis; Perpète, Eric A; Ciofini, Ilaria; Adamo, Carlo

    2009-02-17

    Since Antiquity, humans have produced and commercialized dyes. To this day, extraction of natural dyes often requires lengthy and costly procedures. In the 19th century, global markets and new industrial products drove a significant effort to synthesize artificial dyes, characterized by low production costs, huge quantities, and new optical properties (colors). Dyes that encompass classes of molecules absorbing in the UV-visible part of the electromagnetic spectrum now have a wider range of applications, including coloring (textiles, food, paintings), energy production (photovoltaic cells, OLEDs), or pharmaceuticals (diagnostics, drugs). Parallel to the growth in dye applications, researchers have increased their efforts to design and synthesize new dyes to customize absorption and emission properties. In particular, dyes containing one or more metallic centers allow for the construction of fairly sophisticated systems capable of selectively reacting to light of a given wavelength and behaving as molecular devices (photochemical molecular devices, PMDs).Theoretical tools able to predict and interpret the excited-state properties of organic and inorganic dyes allow for an efficient screening of photochemical centers. In this Account, we report recent developments defining a quantitative ab initio protocol (based on time-dependent density functional theory) for modeling dye spectral properties. In particular, we discuss the importance of several parameters, such as the methods used for electronic structure calculations, solvent effects, and statistical treatments. In addition, we illustrate the performance of such simulation tools through case studies. We also comment on current weak points of these methods and ways to improve them.

  10. Solar Cycle Predictions

    Science.gov (United States)

    Pesnell, William Dean

    2012-01-01

    Solar cycle predictions are needed to plan long-term space missions; just like weather predictions are needed to plan the launch. Fleets of satellites circle the Earth collecting many types of science data, protecting astronauts, and relaying information. All of these satellites are sensitive at some level to solar cycle effects. Predictions of drag on LEO spacecraft are one of the most important. Launching a satellite with less propellant can mean a higher orbit, but unanticipated solar activity and increased drag can make that a Pyrrhic victory as you consume the reduced propellant load more rapidly. Energetic events at the Sun can produce crippling radiation storms that endanger all assets in space. Solar cycle predictions also anticipate the shortwave emissions that cause degradation of solar panels. Testing solar dynamo theories by quantitative predictions of what will happen in 5-20 years is the next arena for solar cycle predictions. A summary and analysis of 75 predictions of the amplitude of the upcoming Solar Cycle 24 is presented. The current state of solar cycle predictions and some anticipations how those predictions could be made more accurate in the future will be discussed.

  11. Accurate and Timely Forecasting of CME-Driven Geomagnetic Storms

    Science.gov (United States)

    Chen, J.; Kunkel, V.; Skov, T. M.

    2015-12-01

    Wide-spread and severe geomagnetic storms are primarily caused by the ejecta of coronal mass ejections (CMEs) that impose long durations of strong southward interplanetary magnetic field (IMF) on the magnetosphere, the duration and magnitude of the southward IMF (Bs) being the main determinants of geoeffectiveness. Another important quantity to forecast is the arrival time of the expected geoeffective CME ejecta. In order to accurately forecast these quantities in a timely manner (say, 24-48 hours of advance warning time), it is necessary to calculate the evolving CME ejecta (its structure and magnetic field vector in three dimensions) using remote sensing solar data alone. We discuss a method based on the validated erupting flux rope (EFR) model of CME dynamics. It has been shown using STEREO data that the model can calculate the correct size, magnetic field, and the plasma parameters of a CME ejecta detected at 1 AU, using the observed CME position-time data alone as input (Kunkel and Chen 2010). One disparity is in the arrival time, which is attributed to the simplified geometry of circular toroidal axis of the CME flux rope. Accordingly, the model has been extended to self-consistently include the transverse expansion of the flux rope (Kunkel 2012; Kunkel and Chen 2015). We show that the extended formulation provides a better prediction of arrival time even if the CME apex does not propagate directly toward the earth. We apply the new method to a number of CME events and compare predicted flux ropes at 1 AU to the observed ejecta structures inferred from in situ magnetic and plasma data. The EFR model also predicts the asymptotic ambient solar wind speed (Vsw) for each event, which has not been validated yet. The predicted Vsw values are tested using the ENLIL model. We discuss the minimum and sufficient required input data for an operational forecasting system for predicting the drivers of large geomagnetic storms. Kunkel, V., and Chen, J., ApJ Lett, 715, L80, 2010. Kunkel, V., Ph

  12. Guidelines for accurate EC50/IC50 estimation.

    Science.gov (United States)

    Sebaugh, J L

    2011-01-01

    This article provides minimum requirements for having confidence in the accuracy of EC50/IC50 estimates. Two definitions of EC50/IC50s are considered: relative and absolute. The relative EC50/IC50 is the parameter c in the 4-parameter logistic model and is the concentration corresponding to a response midway between the estimates of the lower and upper plateaus. The absolute EC50/IC50 is the response corresponding to the 50% control (the mean of the 0% and 100% assay controls). The guidelines first describe how to decide whether to use the relative EC50/IC50 or the absolute EC50/IC50. Assays for which there is no stable 100% control must use the relative EC50/IC50. Assays having a stable 100% control but for which there may be more than 5% error in the estimate of the 50% control mean should use the relative EC50/IC50. Assays that can be demonstrated to produce an accurate and stable 100% control and less than 5% error in the estimate of the 50% control mean may gain efficiency as well as accuracy by using the absolute EC50/IC50. Next, the guidelines provide rules for deciding when the EC50/IC50 estimates are reportable. The relative EC50/IC50 should only be used if there are at least two assay concentrations beyond the lower and upper bend points. The absolute EC50/IC50 should only be used if there are at least two assay concentrations whose predicted response is less than 50% and two whose predicted response is greater than 50%. A wide range of typical assay conditions are considered in the development of the guidelines.
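
    The distinction between the two estimates can be made concrete with a short sketch: fit a 4-parameter logistic (4PL) curve, read the relative EC50/IC50 directly from the fitted midpoint parameter c, and solve for the absolute EC50/IC50 as the concentration whose fitted response equals the 50% control. The particular 4PL parameterization, the synthetic data, and the control values used below are assumptions for illustration only.

        # Relative vs. absolute EC50/IC50 from a 4-parameter logistic (4PL) fit.
        # The parameterization, data and control values are illustrative.
        import numpy as np
        from scipy.optimize import curve_fit, brentq

        def four_pl(x, lower, upper, c, hill):
            # Decreasing (inhibition-style) curve; c is the relative EC50/IC50.
            return lower + (upper - lower) / (1.0 + (x / c) ** hill)

        conc = np.logspace(-3, 2, 12)
        resp = four_pl(conc, 5.0, 100.0, 0.8, 1.2)
        resp = resp + np.random.default_rng(1).normal(0.0, 2.0, conc.size)

        popt, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 100.0, 1.0, 1.0],
                            bounds=([-50.0, 0.0, 1e-6, 0.1], [50.0, 200.0, 1e3, 10.0]))
        relative_ec50 = popt[2]           # midpoint between the fitted plateaus

        # Absolute EC50/IC50: concentration whose fitted response equals the 50%
        # control, taken here as the mean of assumed 0% and 100% controls (0, 100).
        half_control = 0.5 * (0.0 + 100.0)
        absolute_ec50 = brentq(lambda x: four_pl(x, *popt) - half_control, 1e-6, 1e3)
        print(relative_ec50, absolute_ec50)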

  13. Accurate Classification of RNA Structures Using Topological Fingerprints

    Science.gov (United States)

    Li, Kejie; Gribskov, Michael

    2016-01-01

    While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. While the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity–an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach both covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571
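
    A stripped-down way to see the comparison step is below: each structure is reduced to a set of canonical subgraph labels (its fingerprint) and two structures are compared by the overlap of those sets, here with a Jaccard similarity. The toy label sets are hand-written stand-ins for the actual enumeration of stem-graph subgraphs, and Jaccard is only one plausible overlap measure, not necessarily the one used by the authors.

        # Compare two RNA structures by the overlap of their topological
        # fingerprints (sets of canonical subgraph labels). The labels below are
        # hand-written stand-ins for real subgraph enumeration.
        def jaccard(fingerprint_a, fingerprint_b):
            a, b = set(fingerprint_a), set(fingerprint_b)
            return len(a & b) / len(a | b) if a | b else 1.0

        fp_trna  = {"O", "I", "II", "OI", "IOI"}   # illustrative subgraph labels
        fp_trna2 = {"O", "I", "II", "OI"}          # same family, one stem missing
        fp_other = {"O", "P", "PP", "OP"}          # unrelated topology

        print(jaccard(fp_trna, fp_trna2))   # high similarity despite a missing stem
        print(jaccard(fp_trna, fp_other))   # low similarity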

  14. How accurate are our assumptions about our students' background knowledge?

    Science.gov (United States)

    Rovick, A A; Michael, J A; Modell, H I; Bruce, D S; Horwitz, B; Adamson, T; Richardson, D R; Silverthorn, D U; Whitescarver, S A

    1999-06-01

    Teachers establish prerequisites that students must meet before they are permitted to enter their courses. It is expected that having these prerequisites will provide students with the knowledge and skills they will need to successfully learn the course content. Also, the material that the students are expected to have previously learned need not be included in a course. We wanted to determine how accurate instructors' understanding of their students' background knowledge actually was. To do this, we wrote a set of multiple-choice questions that could be used to test students' knowledge of concepts deemed to be essential for learning respiratory physiology. Instructors then selected 10 of these questions to be used as a prerequisite knowledge test. The instructors also predicted the performance they expected from the students on each of the questions they had selected. The resulting tests were administered in the first week of each of seven courses. The results of this study demonstrate that instructors are poor judges of what beginning students know. Instructors tended to both underestimate and overestimate students' knowledge by large margins on individual questions. Although on the average they tended to underestimate students' factual knowledge, they overestimated the students' abilities to apply this knowledge. Hence, the validity of decisions that instructors make, predicated on their students having the prerequisite knowledge that they expect, is open to question.

  15. HIPPI: highly accurate protein family classification with ensembles of HMMs

    Directory of Open Access Journals (Sweden)

    Nam-phuong Nguyen

    2016-11-01

    Full Text Available Abstract Background Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. Results We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. Conclusion HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile Hidden Markov models can better represent multiple sequence alignments than a single profile Hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile Hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp .

  16. Accurate measurement of liquid transport through nanoscale conduits

    Science.gov (United States)

    Alibakhshi, Mohammad Amin; Xie, Quan; Li, Yinxiao; Duan, Chuanhua

    2016-01-01

    Nanoscale liquid transport governs the behaviour of a wide range of nanofluidic systems, yet remains poorly characterized and understood due to the enormous hydraulic resistance associated with the nanoconfinement and the resulting minuscule flow rates in such systems. To overcome this problem, here we present a new measurement technique based on capillary flow and a novel hybrid nanochannel design and use it to measure water transport through single 2-D hydrophilic silica nanochannels with heights down to 7 nm. Our results show that silica nanochannels exhibit increased mass flow resistance compared to the classical hydrodynamics prediction. This difference increases with decreasing channel height and reaches 45% in the case of 7 nm nanochannels. This resistance increase is attributed to the formation of a 7-angstrom-thick stagnant hydration layer on the hydrophilic surfaces. By avoiding use of any pressure and flow sensors or any theoretical estimations the hybrid nanochannel scheme enables facile and precise flow measurement through single nanochannels, nanotubes, or nanoporous media and opens the prospect for accurate characterization of both hydrophilic and hydrophobic nanofluidic systems. PMID:27112404
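
    For orientation, the "classical hydrodynamics prediction" for a wide, slit-like channel of height h, width w and length L is the plane-Poiseuille result below (LaTeX notation). The abstract does not state the exact reference expression used by the authors, so this is only the standard no-slip formula against which a roughly 45% resistance increase at h = 7 nm would be read.

        % Plane-Poiseuille (no-slip) flow through a wide slit; the classical
        % reference against which the measured resistance increase is quoted.
        % Illustrative, not necessarily the authors' exact expression.
        \[
          Q = \frac{w\,h^{3}\,\Delta P}{12\,\mu\,L},
          \qquad
          R_{\mathrm{hyd}} = \frac{\Delta P}{Q} = \frac{12\,\mu\,L}{w\,h^{3}}.
        \]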

  17. Accurate determination of segmented X-ray detector geometry.

    Science.gov (United States)

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A; Chapman, Henry N; Barty, Anton

    2015-11-02

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments.
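
    The refinement idea reduces, in its simplest form, to a least-squares fit of module geometry parameters to the residuals between observed peak positions and the spot positions predicted by indexing. The sketch below fits only an in-plane (dx, dy) offset for a single module on synthetic data; real refinement also treats rotation and detector distance per module, and nothing here comes from the authors' software.

        # Refine one module's in-plane offset by minimizing residuals between
        # observed Bragg peaks and spots predicted from indexing. Synthetic data,
        # illustrative only.
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(2)
        predicted = rng.uniform(0, 100, size=(50, 2))   # spots from indexing (pixels)
        true_offset = np.array([1.7, -0.9])
        observed = predicted + true_offset + rng.normal(0, 0.1, size=(50, 2))

        def residuals(offset):
            return ((predicted + offset) - observed).ravel()

        fit = least_squares(residuals, x0=[0.0, 0.0])
        print(fit.x)   # should recover approximately [1.7, -0.9]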

  18. Accurate Sorption Measurements on Coal and Activated Carbon using Accurate Equations of State

    NARCIS (Netherlands)

    Battistutta, E.

    2011-01-01

    Since the industrial revolution, due to the increasing demand for energy, anthropogenic emissions into the atmosphere have been growing constantly. The International Energy Agency (IEA) predicted a 57% increase in energy demand from 2004 to 2030 (IEA, 2004), of which 85% consists of fossil fuels. Actions need to

  19. Few crop traits accurately predict variables important to productivity of processing sweet corn

    Science.gov (United States)

    Recovery, case production, and gross profit margin, hereafter called ‘processor variables’, are as important metrics to processing sweet corn as grain yield is to field corn production. However, crop traits such as ear number or ear mass alone are reported in sweet corn production research rather t...

  20. Accurate Prediction of the Statistics of Repetitions in Random Sequences: A Case Study in Archaea Genomes.

    Science.gov (United States)

    Régnier, Mireille; Chassignet, Philippe

    2016-01-01

    Repetitive patterns in genomic sequences have great biological significance and also algorithmic implications. Analytic combinatorics allows one to derive formulas for the expected length of repetitions in a random sequence. Asymptotic results, which generalize previous work on a binary alphabet, are easily computable. Simulations on random sequences show their accuracy. As an application, the sample case of Archaea genomes illustrates how biological sequences may differ from random sequences.
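
    The quantity those formulas describe can be checked empirically with a small simulation: draw random i.i.d. sequences and measure the length of the longest exact repeated substring in each. The naive sorted-suffix scan below is only practical for short sequences and is purely illustrative; it is not the analytic-combinatorics machinery of the paper.

        # Estimate the expected length of the longest exact repeated substring in
        # random i.i.d. sequences by simulation. Naive O(n^2) approach, small n only.
        import random

        def longest_repeat(seq):
            n = len(seq)
            suffixes = sorted(range(n), key=lambda i: seq[i:])
            best = 0
            for a, b in zip(suffixes, suffixes[1:]):     # adjacent sorted suffixes
                k = 0
                while a + k < n and b + k < n and seq[a + k] == seq[b + k]:
                    k += 1
                best = max(best, k)
            return best

        def expected_longest_repeat(n, alphabet="ACGT", trials=100, seed=0):
            rng = random.Random(seed)
            total = 0
            for _ in range(trials):
                seq = "".join(rng.choice(alphabet) for _ in range(n))
                total += longest_repeat(seq)
            return total / trials

        print(expected_longest_repeat(1000))   # grows roughly logarithmically with n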

  1. Accurate prediction of a minimal region around a genetic association signal that contains the causal variant

    NARCIS (Netherlands)

    Z. Bochdanovits (Zoltan); J. Simón-Sánchez (Javier); M.A. Jonker (Marianne); W.J.G. Hoogendijk (Witte); A. van der Vaart (Aad); P. Heutink (Peter)

    2014-01-01

    textabstractIn recent years, genome-wide association studies have been very successful in identifying loci for complex traits. However, typically these findings involve noncoding and/or intergenic SNPs without a clear functional effect that do not directly point to a gene. Hence, the challenge is to

  2. Fast and Accurate Pressure-Drop Prediction in Straightened Atherosclerotic Coronary Arteries

    NARCIS (Netherlands)

    J.T.C. Schrauwen (Jelle); D. Koeze (Dion); J.J. Wentzel (Jolanda); F.N. van de Vosse (Frans); A.F.W. van der Steen (Ton); F.J.H. Gijsen (Frank)

    2014-01-01

    textabstractAtherosclerotic disease progression in coronary arteries is influenced by wall shear stress. To compute patient-specific wall shear stress, computational fluid dynamics (CFD) is required. In this study we propose a method for computing the pressure-drop in regions proximal and distal to

  3. Dynamics of Flexible MLI-type Debris for Accurate Orbit Prediction

    Science.gov (United States)

    2014-09-01

    that this type of debris is made of multi-layer insulation, but there is no evidence to confirm so. Further experiments were conducted by Murakami et...mass distribution. Some evidence supports that the mass distribution concerning material in the experiments of the study of Murakami [17] and support...degradation of TEFLON FEP returned from the Hubble Space Telescope, in 36th Aerospace Sciences Meeting & Exhibit. 1998. 17. Murakami, J., et al

  4. Accurate Dynamic Response Predictions of Plug-and-Play Sat I

    Science.gov (United States)

    2010-03-01

    electrodynamic shaker, set up to perform as an automatic ping hammer. These impulsive single-strike impacts are set to occur every ten seconds to...modes allows for a visualization of each particular node and erroneous data can easily be discovered and eliminated. Typically, these bad data...function, single value approaching 300, tuned mode divided by small erroneous data (b) Removed bad data points, all points with +/- 3

  5. Accurate Stabilities of Laccase Mutants Predicted with a Modified FoldX Protocol

    DEFF Research Database (Denmark)

    Christensen, Niels Johan; Kepp, Kasper Planeta

    2012-01-01

    ) with up to 11 simultaneously mutated sites with good correlation against experimental stability trends. Molecular dynamics simulations of the two laccases show that FoldX is very structure-sensitive, since all mutants and the wild-type must share structural configuration to avoid artifacts of local...... function: Thus, we show in this work that a substantial “hysteresis” of ~1 kcal/mol applies to FoldX, and that an improved protocol that reverses calculations and uses the average obtained ∆∆G enhances correlation with the experimental data. As glycosylation is ignored in FoldX, its effect on ∆∆G must...... be additive to the amino acid mutations. Quantitative structure-property relationships of the FoldX energy components produced a substantially improved laccase stability predictor with errors of ~1 °C for T50, vs. 3-5 °C for a standard FoldX protocol. The developed model provides insight into the physical...

  6. Accurate Prediction of Transimpedances and Equivalent Input Noise Current Densities of Tuned Optical Receiver Front Ends

    DEFF Research Database (Denmark)

    Liu, Qing Zhong

    1991-01-01

    Novel analytical expressions have been derived for calculating transimpedances and equivalent input noise current densities of five tuned optical receiver front ends based on PIN diode and MESFETs or HEMTs. Miller's capacitance, which has been omitted in previous studies, has been taken into acco...... into account. The accuracy of the expressions has been verified by using Touchstone simulator. The agreement between the calculated and simulated front end performances is very good....

  7. Protein corona composition does not accurately predict hematocompatibility of colloidal gold nanoparticles.

    Science.gov (United States)

    Dobrovolskaia, Marina A; Neun, Barry W; Man, Sonny; Ye, Xiaoying; Hansen, Matthew; Patri, Anil K; Crist, Rachael M; McNeil, Scott E

    2014-10-01

    Proteins bound to nanoparticle surfaces are known to affect particle clearance by influencing immune cell uptake and distribution to the organs of the mononuclear phagocytic system. The composition of the protein corona has been described for several types of nanomaterials, but the role of the corona in nanoparticle biocompatibility is not well established. In this study we investigate the role of nanoparticle surface properties (PEGylation) and incubation times on the protein coronas of colloidal gold nanoparticles. While neither incubation time nor PEG molecular weight affected the specific proteins in the protein corona, the total amount of protein binding was governed by the molecular weight of PEG coating. Furthermore, the composition of the protein corona did not correlate with nanoparticle hematocompatibility. Specialized hematological tests should be used to deduce nanoparticle hematotoxicity. From the clinical editor: It is overall unclear how the protein corona associated with colloidal gold nanoparticles may influence hematotoxicity. This study warns that PEGylation itself may be insufficient, because composition of the protein corona does not directly correlate with nanoparticle hematocompatibility. The authors suggest that specialized hematological tests must be used to deduce nanoparticle hematotoxicity.

  8. Can Medical Students Accurately Predict Their Learning? A Study Comparing Perceived and Actual Performance in Neuroanatomy

    Science.gov (United States)

    Hall, Samuel R.; Stephens, Jonny R.; Seaby, Eleanor G.; Andrade, Matheus Gesteira; Lowry, Andrew F.; Parton, Will J. C.; Smith, Claire F.; Border, Scott

    2016-01-01

    It is important that clinicians are able to adequately assess their level of knowledge and competence in order to be safe practitioners of medicine. The medical literature contains numerous examples of poor self-assessment accuracy amongst medical students over a range of subjects; however, this ability in neuroanatomy has yet to be observed. Second…

  9. Raoult's law revisited: accurately predicting equilibrium relative humidity points for humidity control experiments

    CERN Document Server

    Bowler, Michael G; Bowler, Matthew W

    2016-01-01

    The equilibrium relative humidity values for a number of the most commonly used precipitants in biological macromolecule crystallisation have been measured using a new humidity control device. A simple argument in statistical mechanics demonstrates that the saturated vapour pressure of a solvent is proportional to its mole fraction in an ideal solution (Raoult's Law). The same argument can be extended to the case where solvent and solute molecules are of different size.
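
    A minimal sketch of the ideal-solution relation invoked above: the equilibrium relative humidity over the solution equals the mole fraction of water. The molar masses, concentrations and the assumption of one litre of water per litre of solution are illustrative simplifications, not values taken from the paper.

```python
# Ideal-solution (Raoult's law) estimate of the equilibrium relative humidity
# over an aqueous precipitant solution: RH ~ mole fraction of water.
M_WATER = 18.015  # g/mol

def equilibrium_rh(solute_grams_per_litre, solute_molar_mass, species_per_formula=1):
    """Rough RH estimate assuming ~1 L of water per litre of solution (ideal limit)."""
    n_solute = species_per_formula * solute_grams_per_litre / solute_molar_mass
    n_water = 1000.0 / M_WATER          # roughly 55.5 mol of water
    return n_water / (n_water + n_solute)

# Example: ~4 M NaCl (two dissolved species per formula unit) gives roughly 87% RH
print(f"{equilibrium_rh(4 * 58.44, 58.44, species_per_formula=2):.0%}")
```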

  10. How accurate are the nonlinear chemical Fokker-Planck and chemical Langevin equations?

    Science.gov (United States)

    Grima, Ramon; Thomas, Philipp; Straube, Arthur V

    2011-08-28

    The chemical Fokker-Planck equation and the corresponding chemical Langevin equation are commonly used approximations of the chemical master equation. These equations are derived from an uncontrolled, second-order truncation of the Kramers-Moyal expansion of the chemical master equation and hence their accuracy remains to be clarified. We use the system-size expansion to show that chemical Fokker-Planck estimates of the mean concentrations and of the variance of the concentration fluctuations about the mean are accurate to order Ω(-3∕2) for reaction systems which do not obey detailed balance and at least accurate to order Ω(-2) for systems obeying detailed balance, where Ω is the characteristic size of the system. Hence, the chemical Fokker-Planck equation turns out to be more accurate than the linear-noise approximation of the chemical master equation (the linear Fokker-Planck equation) which leads to mean concentration estimates accurate to order Ω(-1∕2) and variance estimates accurate to order Ω(-3∕2). This higher accuracy is particularly conspicuous for chemical systems realized in small volumes such as biochemical reactions inside cells. A formula is also obtained for the approximate size of the relative errors in the concentration and variance predictions of the chemical Fokker-Planck equation, where the relative error is defined as the difference between the predictions of the chemical Fokker-Planck equation and the master equation divided by the prediction of the master equation. For dimerization and enzyme-catalyzed reactions, the errors are typically less than few percent even when the steady-state is characterized by merely few tens of molecules.

  11. Accurate Jones Matrix of the Practical Faraday Rotator

    Institute of Scientific and Technical Information of China (English)

    王林斗; 祝昇翔; 李玉峰; 邢文烈; 魏景芝

    2003-01-01

    The Jones matrix of practical Faraday rotators is often used in engineering calculations of non-reciprocal optical fields. Until now, however, only an approximate Jones matrix for practical Faraday rotators has been available. Based on the theory of polarized light, this paper presents an accurate Jones matrix for practical Faraday rotators, and an experiment has been carried out to verify its validity. The matrix accurately describes the optical characteristics of practical Faraday rotators, including rotation, loss and depolarization of the polarized light, and can be used to obtain accurate results for how a practical Faraday rotator transforms polarized light, paving the way for accurate analysis and calculation of practical Faraday rotators in relevant engineering applications.

  12. JCZS: An Intermolecular Potential Database for Performing Accurate Detonation and Expansion Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Baer, M.R.; Hobbs, M.L.; McGee, B.C.

    1998-11-03

    Exponential-13,6 (EXP-13,6) potential parameters for 750 gases composed of 48 elements were determined and assembled in a database, referred to as the JCZS database, for use with the Jacobs-Cowperthwaite-Zwisler equation of state (JCZ3-EOS). The EXP-13,6 force constants were obtained by using literature values of Lennard-Jones (LJ) potential functions, by using corresponding states (CS) theory, by matching pure liquid shock Hugoniot data, and by using molecular volume to determine the approach radii with the well depth estimated from high-pressure isentropes. The JCZS database was used to accurately predict detonation velocity, pressure, and temperature for 50 different explosives with initial densities ranging from 0.25 g/cm3 to 1.97 g/cm3. Accurate predictions were also obtained for pure liquid shock Hugoniots, static properties of nitrogen, and gas detonations at high initial pressures.

  13. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depends critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  14. An effective method to accurately calculate the phase space factors for $\\beta^- \\beta^-$ decay

    CERN Document Server

    Neacsu, Andrei

    2015-01-01

    Accurate calculations of the electron phase space factors are necessary for reliable predictions of double-beta decay rates, and for the analysis of the associated electron angular and energy distributions. We present an effective method to calculate these phase space factors that takes into account the distorted Coulomb field of the daughter nucleus, yet allows one to easily calculate the phase space factors with good accuracy relative to the most exact methods available in the recent literature.

  15. Accurate source location from waves scattered by surface topography

    Science.gov (United States)

    Wang, Nian; Shen, Yang; Flinders, Ashton; Zhang, Wei

    2016-06-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example to provide realistic topography. A grid search algorithm is combined with the 3-D strain Green's tensor database to improve search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated using a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are obtained based on the least squares misfit between the "observed" and predicted P and P coda waves. The 95% confidence interval of the solution is provided as an a posteriori error estimation. For shallow events tested in the study, scattering is mainly due to topography in comparison with stochastic lateral velocity heterogeneity. The incorporation of P coda significantly improves solution accuracy and reduces solution uncertainty. The solution remains robust with wide ranges of random noises in data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors. The method can be extended to locate pairs of sources in close proximity by differential waveforms using source-receiver reciprocity, further reducing errors caused by unmodeled velocity structures.
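
    The heart of the location procedure described above is a grid search that ranks candidate hypocentres by waveform misfit. The schematic sketch below assumes that synthetic P and P-coda waveforms have already been generated for every candidate point (for example from a strain Green's tensor database); the array names and shapes are placeholders rather than the authors' code.

```python
import numpy as np

def locate_by_grid_search(observed, synthetics, grid_points):
    """Pick the candidate hypocentre whose predicted P + P-coda waveforms
    best match the data in a least-squares sense.

    observed:    (n_stations, n_samples) recorded waveforms
    synthetics:  (n_candidates, n_stations, n_samples) precomputed waveforms,
                 one set per candidate grid point (placeholder layout)
    grid_points: (n_candidates, 3) candidate (x, y, z) locations
    """
    residuals = synthetics - observed[None, :, :]
    misfit = np.sum(residuals ** 2, axis=(1, 2))   # L2 misfit per candidate
    best = int(np.argmin(misfit))
    return grid_points[best], misfit
```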

  16. Accurate source location from P waves scattered by surface topography

    Science.gov (United States)

    Wang, N.; Shen, Y.

    2015-12-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (> 100 m). In this study, we explore the use of P-coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example. The grid search method is combined with the 3D strain Green's tensor database method to improve the search efficiency as well as the quality of the hypocenter solution. The strain Green's tensor is calculated by the 3D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are then obtained based on the least-squares misfit between the 'observed' and predicted P and P-coda waves. A 95% confidence interval of the solution is also provided as an a posteriori error estimation. We find that the scattered waves are mainly due to topography in comparison with random velocity heterogeneity characterized by the von Kármán-type power spectral density function. When only P wave data are used, the 'best' solution is offset from the real source location mostly in the vertical direction. The incorporation of P coda significantly improves solution accuracy and reduces its uncertainty. The solution remains robust with a range of random noises in data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors that we tested.

  17. Prediction of tides using back-propagation neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    Prediction of tides is very much essential for human activities and to reduce the construction cost in marine environment. This paper presents an application of the artificial neural network with back-propagation procedures for accurate prediction...

  18. Speed-of-sound compensated photoacoustic tomography for accurate imaging

    CERN Document Server

    Jose, Jithin; Steenbergen, Wiendelt; Slump, Cornelis H; van Leeuwen, Ton G; Manohar, Srirang

    2012-01-01

    In most photoacoustic (PA) measurements, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. We present experimental and image reconstruction methods with which 2-D SOS distributions can be accurately acquired and reconstructed, and with which the SOS map can be used subsequently to reconstruct highly accurate PA tomograms. We begin with a 2-D iterative reconstruction approach in an ultrasound transmission tomography (UTT) setting, which uses ray refracted paths instead of straight ray paths to recover accurate SOS images of the subject. Subsequently, we use the SOS distribution in a new 2-D iterative approach, where refraction of rays originating from PA sources are accounted for in accurately retrieving the distribution of these sources. Both the SOS reconstruction and SOS-compensated PA reconstruction methods utilize the Eikonal equation to m...

  19. Predicting future of predictive analysis

    OpenAIRE

    Piyush, Duggal

    2014-01-01

    With the enormous growth in analytical data and insight into the advantages of anticipating the future, predictive analysis comes into the picture. It has the potential to be one of the most efficient and competitive technologies for giving business operations an edge. The possibility of predicting future market conditions and knowing customers' needs and behaviour in advance is an area of interest for every organization. Other areas of interest include maintenance prediction, where we try to predict when and where ...

  20. Realization of Quadrature Signal Generator Using Accurate Magnitude Integrator

    DEFF Research Database (Denmark)

    Xin, Zhen; Yoon, Changwoo; Zhao, Rende

    2016-01-01

    -signal parameters, especially when a fast response is required for usages such as grid synchronization. As a result, the parameter design of the SOGI-QSG becomes complicated. Theoretical analysis shows that this is caused by the inaccurate magnitude-integration characteristic of the SOGI-QSG. To solve this problem......, an Accurate-Magnitude-Integrator based QSG (AMI-QSG) is proposed. The AMI has an accurate magnitude-integration characteristic for the sinusoidal signal, which gives the AMI-QSG a more accurate First-Order-System (FOS) magnitude characteristic than the SOGI-QSG. The parameter design process...

  1. Fabricating an Accurate Implant Master Cast: A Technique Report.

    Science.gov (United States)

    Balshi, Thomas J; Wolfinger, Glenn J; Alfano, Stephen G; Cacovean, Jeannine N; Balshi, Stephen F

    2015-12-01

    The technique for fabricating an accurate implant master cast following the 12-week healing period after Teeth in a Day® dental implant surgery is detailed. The clinical, functional, and esthetic details captured during the final master impression are vital to creating an accurate master cast. This technique uses the properties of the all-acrylic resin interim prosthesis to capture these details. This impression captures the relationship between the remodeled soft tissue and the interim prosthesis. This provides the laboratory technician with an accurate orientation of the implant replicas in the master cast with which a passive fitting restoration can be fabricated.

  2. Deterministic prediction of surface wind speed variations

    Science.gov (United States)

    Drisya, G. V.; Kiplangat, D. C.; Asokan, K.; Satheesh Kumar, K.

    2014-11-01

    Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within practically tolerable margin of errors.
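
    To make the quoted error figures concrete, the sketch below shows one common convention for the normalised RMSE together with a toy analogue (nearest-neighbour) predictor. The analogue scheme only stands in for a deterministic, dynamics-based forecaster; it is not the method used in the paper, and the embedding length is an arbitrary choice.

```python
import numpy as np

def normalised_rmse(predicted, actual):
    """RMSE divided by the range of the observed series (one common convention)."""
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    rmse = np.sqrt(np.mean((predicted - actual) ** 2))
    return rmse / (actual.max() - actual.min())

def analogue_forecast(history, embed=6, horizon=1):
    """Tiny stand-in for a deterministic predictor: find the past window most
    similar to the latest one and return the value that followed it."""
    history = np.asarray(history, float)
    latest = history[-embed:]
    best_value, best_dist = None, np.inf
    for start in range(len(history) - embed - horizon):
        window = history[start:start + embed]
        dist = np.linalg.norm(window - latest)
        if dist < best_dist:
            best_value, best_dist = history[start + embed + horizon - 1], dist
    return best_value
```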

  3. Highly Accurate Sensor for High-Purity Oxygen Determination Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this STTR effort, Los Gatos Research (LGR) and the University of Wisconsin (UW) propose to develop a highly-accurate sensor for high-purity oxygen determination....

  4. Multi-objective optimization of inverse planning for accurate radiotherapy

    Institute of Scientific and Technical Information of China (English)

    曹瑞芬; 吴宜灿; 裴曦; 景佳; 李国丽; 程梦云; 李贵; 胡丽琴

    2011-01-01

    Motivated by the multi-objective character of inverse planning in accurate radiotherapy, this paper studies the multi-objective optimization of inverse planning based on the Pareto solution set. Firstly, the clinical requirements of a treatment pl

  5. Controlling Hay Fever Symptoms with Accurate Pollen Counts

    Science.gov (United States)

    Controlling Hay Fever Symptoms with Accurate Pollen Counts. This article has been reviewed by Thanai ... rhinitis known as hay fever is caused by pollen carried in the air during different times of ...

  6. Digital system accurately controls velocity of electromechanical drive

    Science.gov (United States)

    Nichols, G. B.

    1965-01-01

    Digital circuit accurately regulates electromechanical drive mechanism velocity. The gain and phase characteristics of digital circuits are relatively unimportant. Control accuracy depends only on the stability of the input signal frequency.

  7. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate estimate method of characteristic exponent for it is presented. Finally, we give some examples to verify the feasibility of our result.

  8. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    Science.gov (United States)

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (ie, ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) along reaction paths, using simple test cases with relevance to enzymes examples.

  9. Mass spectrometry based protein identification with accurate statistical significance assignment

    OpenAIRE

    Alves, Gelio; Yu, Yi-Kuo

    2014-01-01

    Motivation: Assigning statistical significance accurately has become increasingly important as meta data of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of meta data at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry based proteomics, even though accurate statistics for peptide identification can now be ach...

  10. High-Accurate, Physics-Based Wake Simulation Techniques

    Science.gov (United States)

    2015-01-27

    physically accurate problem as well as to show that the sensor can account for artificial viscosity where needed but not overload the problem and "wash out...code was developed that utilizes the discontinuous Galerkin method to solve the Euler equations while utilizing a modal artificial viscosity sensor

  11. Predictive medicine

    NARCIS (Netherlands)

    Boenink, Marianne; Have, ten Henk

    2015-01-01

    In the last part of the twentieth century, predictive medicine has gained currency as an important ideal in biomedical research and health care. Research in the genetic and molecular basis of disease suggested that the insights gained might be used to develop tests that predict the future health sta

  12. Essays on Earnings Predictability

    DEFF Research Database (Denmark)

    Bruun, Mark

    affect the accuracy of analysts' earnings forecasts. Finally, the objective of the dissertation is to investigate how the stock market is affected by the accuracy of corporate earnings projections. The dissertation contributes to a deeper understanding of these issues. First, it is shown how earnings...... of analysts’ earnings forecasts. Furthermore, the dissertation shows how the stock market’s reaction to the disclosure of information about corporate earnings depends on how well corporate earnings can be predicted. The dissertation indicates that the stock market’s reaction to the disclosure of earnings...... forecasts are not more accurate than the simpler forecasts based on a historical time series of earnings. Secondly, the dissertation shows how accounting standards affect analysts’ earnings predictions. Accounting conservatism contributes to a more volatile earnings process, which lowers the accuracy

  13. Improved query difficulty prediction for the web

    NARCIS (Netherlands)

    Hauff, C.; Murdock, V.; Baeza-Yates, R.

    2008-01-01

    Query performance prediction aims to predict whether a query will have a high average precision given retrieval from a particular collection, or low average precision. An accurate estimator of the quality of search engine results can allow the search engine to decide to which queries to apply query

  14. Protein docking prediction using predicted protein-protein interface

    Directory of Open Access Journals (Sweden)

    Li Bin

    2012-01-01

    Full Text Available Abstract Background Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within top ranks among alternative conformations. Results We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies depending on cases, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction which may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we have developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, which is followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves the docking prediction accuracy as compared with docking without using binding site prediction or using the binding site prediction as post-filtering. Conclusion We have developed PI-LZerD, a pairwise docking algorithm, which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy over alternative methods in the series of benchmark experiments including docking using actual docking interface site predictions as well as unbound docking cases.

  15. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    Directory of Open Access Journals (Sweden)

    Jinying Jia

    2014-01-01

    Full Text Available This paper tackles location privacy protection in current location-based services (LBS where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user’s accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR, nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user’s accurate location to any party is urgently needed. In this paper, we present such two nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASR.

  16. Nonexposure accurate location K-anonymity algorithm in LBS.

    Science.gov (United States)

    Jia, Jinying; Zhang, Fengli

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present such two nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASR.
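
    A toy sketch of the grid-ID idea behind the two records above: the anonymiser sees only reported cell identifiers, never exact coordinates, and grows a square block of cells around the requester until at least K users are covered. The square-expansion strategy and all names are illustrative assumptions; the paper's actual cloaking algorithms differ in detail.

```python
from collections import Counter

def cloak(user_cell, reported_cells, k):
    """Grow a square block of grid cells centred on the requester's cell until
    it covers at least k reported users, using only cell IDs.
    Assumes at least k users have reported a cell (otherwise this never stops).
    Returns the cloaked region as ((row_min, row_max), (col_min, col_max)).
    """
    counts = Counter(reported_cells)
    radius = 0
    while True:
        rows = range(user_cell[0] - radius, user_cell[0] + radius + 1)
        cols = range(user_cell[1] - radius, user_cell[1] + radius + 1)
        covered = sum(counts.get((r, c), 0) for r in rows for c in cols)
        if covered >= k:
            return (rows[0], rows[-1]), (cols[0], cols[-1])
        radius += 1

# Example: five users reporting grid IDs, one request with K = 3
users = [(2, 2), (2, 3), (3, 2), (7, 7), (9, 1)]
print(cloak((2, 2), users, k=3))   # -> ((1, 3), (1, 3))
```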

  17. Are patient-specific joint and inertial parameters necessary for accurate inverse dynamics analyses of gait?

    Science.gov (United States)

    Reinbolt, Jeffrey A; Haftka, Raphael T; Chmielewski, Terese L; Fregly, Benjamin J

    2007-05-01

    Variations in joint parameter (JP) values (axis positions and orientations in body segments) and inertial parameter (IP) values (segment masses, mass centers, and moments of inertia) as well as kinematic noise alter the results of inverse dynamics analyses of gait. Three-dimensional linkage models with joint constraints have been proposed as one way to minimize the effects of noisy kinematic data. Such models can also be used to perform gait optimizations to predict post-treatment function given pre-treatment gait data. This study evaluates whether accurate patient-specific JP and IP values are needed in three-dimensional linkage models to produce accurate inverse dynamics results for gait. The study was performed in two stages. First, we used optimization analyses to evaluate whether patient-specific JP and IP values can be calibrated accurately from noisy kinematic data, and second, we used Monte Carlo analyses to evaluate how errors in JP and IP values affect inverse dynamics calculations. Both stages were performed using a dynamic, 27 degrees-of-freedom, full-body linkage model and synthetic (i.e., computer generated) gait data corresponding to a nominal experimental gait motion. In general, JP but not IP values could be found accurately from noisy kinematic data. Root-mean-square (RMS) errors were 3 degrees and 4 mm for JP values and 1 kg, 22 mm, and 74 500 kg * mm2 for IP values. Furthermore, errors in JP but not IP values had a significant effect on calculated lower-extremity inverse dynamics joint torques. The worst RMS torque error averaged 4% bodyweight * height (BW * H) due to JP variations but less than 0.25% (BW * H) due to IP variations. These results suggest that inverse dynamics analyses of gait utilizing linkage models with joint constraints should calibrate the model's JP values to obtain accurate joint torques.

  18. Accurate level set method for simulations of liquid atomization☆

    Institute of Scientific and Technical Information of China (English)

    Changxiao Shao; Kun Luo; Jianshan Yang; Song Chen; Jianren Fan

    2015-01-01

    Computational fluid dynamics is an efficient numerical approach for spray atomization study, but it is challenging to accurately capture the gas–liquid interface. In this work, an accurate conservative level set method is introduced to accurately track the gas–liquid interfaces in liquid atomization. To validate the capability of this method, binary drop collision and drop impacting on a liquid film are investigated. The results are in good agreement with experimental observations. In addition, primary atomization (swirling sheet atomization) is studied using this method. For the swirling sheet atomization, it is found that Rayleigh–Taylor instability in the azimuthal direction causes the primary breakup of the liquid sheet and complex vortex structures are clustered around the rim of the liquid sheet. The effects of central gas velocity and liquid–gas density ratio on atomization are also investigated. This work lays a solid foundation for further studying the mechanism of spray atomization.

  19. Memory conformity affects inaccurate memories more than accurate memories.

    Science.gov (United States)

    Wright, Daniel B; Villalba, Daniella K

    2012-01-01

    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.

  20. Accurate nuclear radii and binding energies from a chiral interaction

    CERN Document Server

    Ekstrom, A; Wendt, K A; Hagen, G; Papenbrock, T; Carlsson, B D; Forssen, C; Hjorth-Jensen, M; Navratil, P; Nazarewicz, W

    2015-01-01

    The accurate reproduction of nuclear radii and binding energies is a long-standing challenge in nuclear theory. To address this problem two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective 3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  1. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    Science.gov (United States)

    Terra, Osama; Hussein, Hatem

    2016-06-01

    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, fibers of different lengths are measured accurately using the time-of-flight technique. A setup is proposed to accurately measure lengths from 1 to 40 km at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to a Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to relatively correct the fiber refractive index to allow accurate fiber length measurement.
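
    The time-of-flight relation behind the measurement is simple: the length follows from the measured propagation delay and the group refractive index of the fibre. The sketch below is a back-of-the-envelope illustration only; the delay value and group index are made-up numbers, not the paper's calibration data.

```python
# Time-of-flight length estimate: L = c * t / n_g for a single pass through
# the fibre (a round-trip measurement would divide the result by 2).
C = 299_792_458.0   # speed of light in vacuum, m/s

def fibre_length(delay_s, group_index=1.468, round_trip=False):
    length = C * delay_s / group_index
    return length / 2 if round_trip else length

# An illustrative one-way delay of ~49 microseconds corresponds to ~10 km:
print(f"{fibre_length(48.96e-6):.1f} m")
```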

  2. Highly Accurate Measurement of the Electron Orbital Magnetic Moment

    CERN Document Server

    Awobode, A M

    2015-01-01

    We propose to accurately determine the orbital magnetic moment of the electron by measuring, in a magneto-optical or ion trap, the ratio of the Landé g-factors in two atomic states. From the measurement of (gJ1/gJ2), the quantity A, which depends on the corrections to the electron g-factors, can be extracted if the states are LS coupled. Given that highly accurate values of the correction to the spin g-factor are currently available, accurate values of the correction to the orbital g-factor may also be determined. At present, (−1.8 ± 0.4) × 10⁻⁴ has been determined as a correction to the electron orbital g-factor, by using earlier measurements of the ratio gJ1/gJ2 made on the indium 2P1/2 and 2P3/2 states.
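
    For reference, the textbook Landé expression below shows how each state's g-factor combines the orbital and spin g-factors; a measured ratio of two such gJ values in LS-coupled states therefore isolates a combination of the corrections to gL and gS. This is the standard relation, not the paper's own derivation.

```latex
g_J = g_L\,\frac{J(J+1)-S(S+1)+L(L+1)}{2J(J+1)}
    + g_S\,\frac{J(J+1)+S(S+1)-L(L+1)}{2J(J+1)},
\qquad g_L = 1+\delta_L,\quad g_S \approx 2(1+a_e).
```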

  3. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The benefits of accurate weather prediction are well known. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models, which occupy large computational resources, are time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate precipitation predictions than with traditional artificial neural networks, which have low predictive accuracy.

  4. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    Science.gov (United States)

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to

  5. Accurate adapted Gaussian basis sets for the atoms from H through Xe

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, F.E.; Muniz, E.P. [Univ. Federal do Espirito Santo, Vitoria, Espirito Santo (Brazil). Dept. de Fisica-CCE

    1999-02-05

    The authors have applied the generator coordinate Hartree-Fock method to generate adapted Gaussian basis sets for the atoms from H through Xe. The Griffin-Hill-Wheeler-Hartree-Fock equations are integrated numerically generating accurate basis sets for these atoms. The atomic wave functions are an improvement over those of Clementi et al. using larger atom-optimized geometrical Gaussian basis sets and Jorge et al. using a universal Gaussian basis set. In all cases, the current wave functions predict total energy results within 6.13 × 10⁻⁴ hartree of the numerical Hartree-Fock limit.

  6. A Verilog-A Based Fractional Frequency Synthesizer Model for Fast and Accurate Noise Assessment

    Directory of Open Access Journals (Sweden)

    V. R. Gonzalez-Diaz

    2016-04-01

    Full Text Available This paper presents a new strategy to simulate fractional frequency synthesizer behavioral models with better performance and reduced simulation time. The models are described in Verilog-A with accurate phase noise predictions, and they are based on a time-jitter to power-spectral-density transformation of the principal noise sources in a synthesizer. The results of a fractional frequency synthesizer simulation are compared with state-of-the-art Verilog-A descriptions, showing a reduction in simulation time of nearly 20 times. In addition, experimental results of a fractional frequency synthesizer are compared to the simulation results to validate the proposed model.

  7. Further result in the fast and accurate estimation of single frequency

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new fast and accurate method for estimating the frequency of a complex sinusoid in complex white Gaussian noise is proposed. The new estimator comprises low-pass filtering, decimation, and frequency estimation by linear prediction. It is computationally efficient yet attains the Cramer-Rao bound at moderate signal-to-noise ratios, and it is well suited for real-time applications requiring precise frequency estimation. Simulation results are included to demonstrate the performance of the proposed method.
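
    A hedged sketch of the three-stage idea (low-pass filter, decimate, then estimate the residual frequency by linear prediction of the phase). The coarse FFT estimate, the block-averaging filter and all parameter values below are placeholder choices, not the paper's exact design.

```python
import numpy as np

def estimate_frequency(x, fs, decim=8):
    """Estimate the frequency of a single complex sinusoid in noise:
    1) coarse FFT estimate and mix-down to baseband,
    2) crude low-pass (block averaging) and decimation by `decim`,
    3) refine via the mean phase increment between decimated samples."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    f_coarse = freqs[np.argmax(np.abs(np.fft.fft(x)))]
    baseband = x * np.exp(-2j * np.pi * f_coarse * np.arange(n) / fs)
    m = (n // decim) * decim
    y = baseband[:m].reshape(-1, decim).mean(axis=1)
    dphi = np.angle(np.sum(y[1:] * np.conj(y[:-1])))   # linear-prediction step
    return f_coarse + dphi * fs / (2 * np.pi * decim)

# Quick check with a noisy 1234.5 Hz complex tone sampled at 50 kHz
fs, f0 = 50_000.0, 1234.5
t = np.arange(4096) / fs
sig = np.exp(2j * np.pi * f0 * t) + 0.1 * (np.random.randn(4096) + 1j * np.random.randn(4096))
print(estimate_frequency(sig, fs))   # ~1234.5
```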

  8. Toward an accurate description of solid-state properties of superheavy elements

    Directory of Open Access Journals (Sweden)

    Schwerdtfeger Peter

    2016-01-01

    Full Text Available In the last two decades, cold and hot fusion experiments have led to the production of new elements for the Periodic Table up to nuclear charge 118. Recent developments in relativistic quantum theory have made it possible to obtain accurate electronic properties for the trans-actinide elements with the aim of predicting their potential chemical and physical behaviour. Here we report on first results of solid-state calculations for Og (element 118) to support future atom-at-a-time gas-phase adsorption experiments on surfaces such as gold or quartz.

  9. Techniques for determining propulsion system forces for accurate high speed vehicle drag measurements in flight

    Science.gov (United States)

    Arnaiz, H. H.

    1975-01-01

    As part of a NASA program to evaluate current methods of predicting the performance of large, supersonic airplanes, the drag of the XB-70 airplane was measured accurately in flight at Mach numbers from 0.75 to 2.5. This paper describes the techniques used to determine engine net thrust and the drag forces charged to the propulsion system that were required for the in-flight drag measurements. The accuracy of the measurements and the application of the measurement techniques to aircraft with different propulsion systems are discussed. Examples of results obtained for the XB-70 airplane are presented.

  10. Method of accurate grinding for single enveloping TI worm

    Institute of Scientific and Technical Information of China (English)

    SUN; Yuehai; ZHENG; Huijiang; BI; Qingzhen; WANG; Shuren

    2005-01-01

    The TI worm drive consists of an involute helical gear and its enveloping hourglass worm. Accurate grinding of the TI worm is the key manufacturing technology for TI worm gearing to be popularized and applied. According to the theory of gear meshing, the equations of the tooth surface of the worm drive are derived, and the equation of the axial section profile of a grinding wheel that can accurately grind the TI worm is extracted. Simultaneously, the relation of position and motion between the TI worm and the grinding wheel is expounded. The method for precisely grinding a single enveloping TI worm is thus obtained.

  11. Accurate analysis of planar metamaterials using the RLC theory

    DEFF Research Database (Denmark)

    Malureanu, Radu; Lavrinenko, Andrei

    2008-01-01

    In this work we present an accurate description of the response of metallic pads using RLC theory. In order to calculate such a response we take into account several factors, including the mutual inductances, a precise formula for determining the capacitance and also the pads’ resistance, considering...... the variation of permittivity due to small thicknesses. Even if complex, such a strategy gives accurate results and we believe that, after more refinement, it can be used to completely calculate a complex metallic structure placed on a substrate far faster than full simulation programs do....

  12. Deterministic prediction of surface wind speed variations

    OpenAIRE

    Drisya, G. V.; Kiplangat, D. C.; Asokan, K; K. Satheesh Kumar

    2014-01-01

    Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that determin...

  13. Shape Prediction Linear Algorithm Using Fuzzy

    Directory of Open Access Journals (Sweden)

    Navjot Kaur

    2012-10-01

    Full Text Available The goal of the proposed method is to develop a fuzzy shape prediction algorithm that is computationally fast and invariant. To predict overlapping and joined shapes accurately, a shape prediction method based on erosion and over-segmentation is used to estimate values for dependent variables from previously unseen predictor values, based on the variation in an underlying learning data set.

  14. Interaction potentials for water from accurate cluster calculations

    Energy Technology Data Exchange (ETDEWEB)

    Xantheas, Sotiris S.

    2006-03-15

    The abundance of water in nature, its function as a universal solvent and its role in many chemical and biological processes that are responsible for sustaining life on earth is the driving force behind the need for understanding its behavior under different conditions, and in various environments. The availability of models that describe the properties of either pure water/ice or its mixtures with a variety of solutes ranging from simple chemical species to complex biological molecules and environmental interfaces is therefore crucial in order to be able to develop predictive paradigms that attempt to model solvation and reaction and transport in aqueous environments. In attempting to develop these models the question naturally arises 'is water different/more complex than other hydrogen bonded liquids'. This proposition has been suggested based on the 'anomalous' behavior of its macroscopic properties such as the density maximum at 4 C, the non-monotonic behavior of its compressibility with temperature, the anomalous behavior of its relaxation time below typical temperatures of the human body, the large value and non-monotonic dependence below 35 C of the specific heat of constant pressure, the smaller than expected value of the coefficient of thermal expansion. This suggestion infers that simple models used to describe the relevant inter- and intra-molecular interactions will not suffice in order to reproduce the behavior of these properties under a wide temperature range. To this end, explicit microscopic level detailed information needs to be incorporated into the models in order to capture the appropriate physics at the molecular level. From the simple model of Bernal and Fowler, which was the first attempt to develop an empirical model for water back in 1933, this process has yielded ca. 50 different models to date. A recent review provides a nearly complete account of this effort coupled to the milestones in the area of molecular

  15. Solar Cycle Predictions (Invited Review)

    Science.gov (United States)

    Pesnell, W. Dean

    2012-11-01

    Solar cycle predictions are needed to plan long-term space missions, just as weather predictions are needed to plan the launch. Fleets of satellites circle the Earth collecting many types of science data, protecting astronauts, and relaying information. All of these satellites are sensitive at some level to solar cycle effects. Predictions of drag on low-Earth orbit spacecraft are one of the most important. Launching a satellite with less propellant can mean a higher orbit, but unanticipated solar activity and increased drag can make that a Pyrrhic victory as the reduced propellant load is consumed more rapidly. Energetic events at the Sun can produce crippling radiation storms that endanger all assets in space. Solar cycle predictions also anticipate the shortwave emissions that cause degradation of solar panels. Testing solar dynamo theories by quantitative predictions of what will happen in 5 - 20 years is the next arena for solar cycle predictions. A summary and analysis of 75 predictions of the amplitude of the upcoming Solar Cycle 24 is presented. The current state of solar cycle predictions and some anticipations of how those predictions could be made more accurate in the future are discussed.

  16. Link prediction via generalized coupled tensor factorisation

    DEFF Research Database (Denmark)

    Ermiş, Beyza; Evrim, Acar Ataman; Taylan Cemgil, A.

    2012-01-01

    This study deals with the missing link prediction problem: the problem of predicting the existence of missing connections between entities of interest. We address link prediction using coupled analysis of relational datasets represented as heterogeneous data, i.e., datasets in the form of matrices...... different loss functions. Numerical experiments demonstrate that joint analysis of data from multiple sources via coupled factorisation improves the link prediction performance and the selection of right loss function and tensor model is crucial for accurately predicting missing links....

  17. Accurate halo-model matter power spectra with dark energy, massive neutrinos and modified gravitational forces

    Science.gov (United States)

    Mead, A. J.; Heymans, C.; Lombriser, L.; Peacock, J. A.; Steele, O. I.; Winther, H. A.

    2016-06-01

    We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead et al. We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases, we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo-model method can predict the non-linear matter power spectrum measured from simulations of parametrized w(a) dark energy models at the few per cent level for k 0.5 h Mpc-1. An updated version of our publicly available HMCODE can be found at https://github.com/alexander-mead/hmcode.

  18. Accurate halo-model matter power spectra with dark energy, massive neutrinos and modified gravitational forces

    CERN Document Server

    Mead, Alexander; Lombriser, Lucas; Peacock, John; Steele, Olivia; Winther, Hans

    2016-01-01

    We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead (2015b). We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo model method can predict the non-linear matter power spectrum measured from simulations of parameterised $w(a)$ dark energy models at the few per cent level for $k0.5\\,h\\mathrm{Mpc}^{-1}$. An updated version of our publicly available HMcode can be found at https://github.com/alexander-mead/HMcode

  19. Modeling Battery Behavior for Accurate State-of-Charge Indication

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Veld, op het J.H.G.; Regtien, P.P.L.; Danilov, D.; Notten, P.H.L.

    2006-01-01

    Li-ion is the most commonly used battery chemistry in portable applications nowadays. Accurate state-of-charge (SOC) and remaining run-time indication for portable devices is important for the user's convenience and to prolong the lifetime of batteries. A new SOC indication system, combining the ele

  20. Accurate and Simple Calibration of DLP Projector Systems

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods...

  1. Speed-of-sound compensated photoacoustic tomography for accurate imaging

    NARCIS (Netherlands)

    Jose, J.; Willemink, G.H.; Steenbergen, W.; Leeuwen, van A.G.J.M.; Manohar, S.

    2012-01-01

    Purpose: In most photoacoustic (PA) tomographic reconstructions, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. The authors pres

  2. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  3. Accurate segmentation of dense nanoparticles by partially discrete electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Roelandts, T., E-mail: tom.roelandts@ua.ac.be [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Batenburg, K.J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, 1098 XG Amsterdam (Netherlands); Biermans, E. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Kuebel, C. [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Sijbers, J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)

    2012-03-15

    Accurate segmentation of nanoparticles within various matrix materials is a difficult problem in electron tomography. Due to artifacts related to image series acquisition and reconstruction, global thresholding of reconstructions computed by established algorithms, such as weighted backprojection or SIRT, may result in unreliable and subjective segmentations. In this paper, we introduce the Partially Discrete Algebraic Reconstruction Technique (PDART) for computing accurate segmentations of dense nanoparticles of constant composition. The particles are segmented directly by the reconstruction algorithm, while the surrounding regions are reconstructed using continuously varying gray levels. As no properties are assumed for the other compositions of the sample, the technique can be applied to any sample where dense nanoparticles must be segmented, regardless of the surrounding compositions. For both experimental and simulated data, it is shown that PDART yields significantly more accurate segmentations than those obtained by optimal global thresholding of the SIRT reconstruction. -- Highlights: ► We present a novel reconstruction method for partially discrete electron tomography. ► It accurately segments dense nanoparticles directly during reconstruction. ► The gray level to use for the nanoparticles is determined objectively. ► The method expands the set of samples for which discrete tomography can be applied.

  4. A Simple and Accurate Method for Measuring Enzyme Activity.

    Science.gov (United States)

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  5. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  6. $H_{2}^{+}$ ion in strong magnetic field an accurate calculation

    CERN Document Server

    López, J C; Turbiner, A V

    1997-01-01

    Using a unique trial function we perform an accurate calculation of the ground state $1\\sigma_g$ of the hydrogenic molecular ion $H^+_2$ in a constant uniform magnetic field ranging $0-10^{13}$ G. We show that this trial function also makes it possible to study the negative parity ground state $1\\sigma_u$.

  7. Fast, Accurate and Detailed NoC Simulations

    NARCIS (Netherlands)

    Wolkotte, P.T.; Hölzenspies, P.K.F.; Smit, G.J.M.; Kellenberger, P.

    2007-01-01

    Network-on-Chip (NoC) architectures have a wide variety of parameters that can be adapted to the designer's requirements. Fast exploration of this parameter space is only possible at a high-level and several methods have been proposed. Cycle and bit accurate simulation is necessary when the actual r

  8. Accurate Period Approximation for Any Simple Pendulum Amplitude

    Institute of Scientific and Technical Information of China (English)

    XUE De-Sheng; ZHOU Zhao; GAO Mei-Zhen

    2012-01-01

    Accurate approximate analytical formulae of the pendulum period composed of a few elementary functions for any amplitude are constructed. Based on an approximation of the elliptic integral, two new logarithmic formulae for large amplitude close to 180° are obtained. Considering the trigonometric function modulation results from the dependence of relative error on the amplitude, we realize accurate approximation period expressions for any amplitude between 0 and 180°. A relative error less than 0.02% is achieved for any amplitude. This kind of modulation is also effective for other large-amplitude logarithmic approximation expressions.
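
    A hedged illustration (not the formulae derived in this record): the exact pendulum period follows from the complete elliptic integral of the first kind, and the classic logarithmic asymptote K(k) ≈ ln(4/k'), with k' = cos(θ0/2), captures the large-amplitude behaviour that the record's expressions refine. A minimal Python sketch comparing the two:

      # Exact simple-pendulum period vs. a classic logarithmic large-amplitude
      # approximation; illustrative only, not the expressions of the record.
      import numpy as np
      from scipy.special import ellipk  # complete elliptic integral K(m), m = k**2

      def exact_period(theta0, T0=1.0):
          # Exact period for amplitude theta0 (rad), in units of the small-angle period T0
          m = np.sin(theta0 / 2.0) ** 2
          return T0 * 2.0 / np.pi * ellipk(m)

      def log_approx_period(theta0, T0=1.0):
          # Large-amplitude asymptote K(k) ~ ln(4/k'), with k' = cos(theta0/2)
          return T0 * 2.0 / np.pi * np.log(4.0 / np.cos(theta0 / 2.0))

      for deg in (90, 150, 170, 179):
          th = np.radians(deg)
          exact, approx = exact_period(th), log_approx_period(th)
          print(f"{deg:3d} deg  exact={exact:.4f}  log approx={approx:.4f}  "
                f"rel. err={(approx - exact) / exact:.2%}")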

  9. On accurate boundary conditions for a shape sensitivity equation method

    Science.gov (United States)

    Duvigneau, R.; Pelletier, D.

    2006-01-01

    This paper studies the application of the continuous sensitivity equation method (CSEM) for the Navier-Stokes equations in the particular case of shape parameters. Boundary conditions for shape parameters involve flow derivatives at the boundary. Thus, accurate flow gradients are critical to the success of the CSEM. A new approach is presented to extract accurate flow derivatives at the boundary. High order Taylor series expansions are used on layered patches in conjunction with a constrained least-squares procedure to evaluate accurate first and second derivatives of the flow variables at the boundary, required for Dirichlet and Neumann sensitivity boundary conditions. The flow and sensitivity fields are solved using an adaptive finite-element method. The proposed methodology is first verified on a problem with a closed form solution obtained by the Method of Manufactured Solutions. The ability of the proposed method to provide accurate sensitivity fields for realistic problems is then demonstrated. The flow and sensitivity fields for a NACA 0012 airfoil are used for fast evaluation of the nearby flow over an airfoil of different thickness (NACA 0015).

  10. A Self-Instructional Device for Conditioning Accurate Prosody.

    Science.gov (United States)

    Buiten, Roger; Lane, Harlan

    1965-01-01

    A self-instructional device for conditioning accurate prosody in second-language learning is described in this article. The Speech Auto-Instructional Device (SAID) is electro-mechanical and performs three functions: SAID (1) presents to the student tape-recorded pattern sentences that are considered standards in prosodic performance; (2) processes…

  11. Practical schemes for accurate forces in quantum Monte Carlo

    NARCIS (Netherlands)

    Moroni, S.; Saccani, S.; Filippi, C.

    2014-01-01

    While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of

  12. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  13. On the importance of having accurate data for astrophysical modelling

    Science.gov (United States)

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from far infrared to sub-millimeter with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundance in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present the recent work on the ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para-H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  14. Accurate wind farm development and operation. Advanced wake modelling

    Energy Technology Data Exchange (ETDEWEB)

    Brand, A.; Bot, E.; Ozdemir, H. [ECN Unit Wind Energy, P.O. Box 1, NL 1755 ZG Petten (Netherlands)]; Steinfeld, G.; Drueke, S.; Schmidt, M. [ForWind, Center for Wind Energy Research, Carl von Ossietzky Universitaet Oldenburg, D-26129 Oldenburg (Germany)]; Mittelmeier, N. [REpower Systems SE, D-22297 Hamburg (Germany)]

    2013-11-15

    The ability is demonstrated to calculate wind farm wakes on the basis of ambient conditions that were calculated with an atmospheric model. Specifically, comparisons are described between predicted and observed ambient conditions, and between power predictions from three wind farm wake models and power measurements, for a single and a double wake situation. The comparisons are based on performance indicators and test criteria, with the objective to determine the percentage of predictions that fall within a given range about the observed value. The Alpha Ventus site is considered, which consists of a wind farm with the same name and the met mast FINO1. Data from the 6 REpower wind turbines and the FINO1 met mast were employed. The atmospheric model WRF predicted the ambient conditions at the location and the measurement heights of the FINO1 mast. While the wind speed and wind direction can be predicted reasonably well if sufficiently large tolerances are employed, it is practically impossible to predict the ambient turbulence intensity and vertical shear. Three wind farm wake models predicted the individual turbine powers: FLaP-Jensen and FLaP-Ainslie from ForWind Oldenburg, and FarmFlow from ECN. The reliabilities of the FLaP-Ainslie and the FarmFlow wind farm wake models are of equal order, and higher than that of FLaP-Jensen. Any difference between the predictions from these models is most clear in the double wake situation. Here FarmFlow slightly outperforms FLaP-Ainslie.
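
    The wake models themselves are not reproduced in this record; as a hedged sketch of the simplest of the three, the classic Jensen (Park) top-hat single-wake model on which FLaP-Jensen builds can be written as below. The thrust coefficient and wake-decay constant are illustrative assumptions, not values from the study.

      # Minimal Jensen (Park) single-wake sketch: a top-hat velocity deficit that
      # decays as the wake expands linearly downstream. Illustrative values only.
      import numpy as np

      def jensen_wake_speed(u_inf, x, rotor_diameter, ct=0.8, k=0.05):
          # Wind speed on the wake centreline at downstream distance x (m)
          r0 = rotor_diameter / 2.0              # rotor radius
          a = 1.0 - np.sqrt(1.0 - ct)            # initial velocity deficit factor
          deficit = a / (1.0 + k * x / r0) ** 2  # deficit decays with distance
          return u_inf * (1.0 - deficit)

      print(f"waked speed 500 m downstream: "
            f"{jensen_wake_speed(u_inf=10.0, x=500.0, rotor_diameter=126.0):.2f} m/s")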

  15. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    CERN Document Server

    Mead, Alexander; Heymans, Catherine; Joudaki, Shahab; Heavens, Alan

    2015-01-01

    We present an optimised variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically-motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of $\\Lambda$CDM and $w$CDM models the halo-model power is accurate to $\\simeq 5$ per cent for $k\\leq 10h\\,\\mathrm{Mpc}^{-1}$ and $z\\leq 2$. We compare our results with recent revisions of the popular HALOFIT model and show that our predictions are more accurate. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limi...

  16. Predicting reaerosolization

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, William Brent [Los Alamos National Laboratory; Omberg, Kristin M [Los Alamos National Laboratory

    2010-11-29

    Outdoor studies of the environmental persistence of bacteria have led to many interesting results. It turns out that the initial deposition of bacteria is not the end of the story. We examined both the ongoing daily deposition and aerosolization of bacteria for two weeks following an initial deposition event. Differences between samples collected in a clearing and those collected beneath a forest canopy were also examined. There were two important results: first, bacteria were still moving about in significant quantities after two weeks, though the local environment where they were most prevalent appeared to shift over time; second, we were able to develop a simple mathematical model that could fairly accurately estimate the average daily airborne concentration of bacteria over the duration of the experiment using readily available environmental information. The implication is that deposition patterns are very likely to shift over an extended period of time following a release, possibly quite significantly, but there is hope that we may be able to estimate these changes fairly accurately.

  17. Validation of a method to predict hammer speed from cable force

    Directory of Open Access Journals (Sweden)

    Sara M. Brice

    2015-09-01

    Conclusion: This study successfully derived and validated a method that allows prediction of linear hammer speed from directly measured cable force data. Two linear regression models were developed and it was found that either model would be capable of predicting accurate speeds. However, data predicted using the shifted regression model were more accurate.
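
    The regression equations themselves are not given in this record; a hedged sketch of the general approach is an ordinary least-squares fit of hammer speed on cable force, shown below with synthetic stand-in data rather than the study's measurements.

      # Hedged sketch: least-squares fit of linear hammer speed on cable force.
      # The force/speed pairs below are synthetic stand-ins, not measured data.
      import numpy as np

      rng = np.random.default_rng(0)
      cable_force = rng.uniform(500.0, 3000.0, size=50)                    # N
      hammer_speed = 5.0 + 0.008 * cable_force + rng.normal(0.0, 0.5, 50)  # m/s

      slope, intercept = np.polyfit(cable_force, hammer_speed, deg=1)
      predicted = intercept + slope * cable_force
      rmse = np.sqrt(np.mean((predicted - hammer_speed) ** 2))
      print(f"speed ~ {intercept:.2f} + {slope:.4f} * force,  RMSE = {rmse:.2f} m/s")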

  18. Thermochemistry and Charge Delocalization in Cyclization Reactions Using Accurate Quantum Monte Carlo Calculations

    Science.gov (United States)

    Saritas, Kayahan; Grossman, Jeffrey C.

    2015-03-01

    Molecules that undergo pericyclic isomerization reactions find interesting optical and energy storage applications, because of their usually high quantum yields, large spectral shifts and small structural changes upon light absorption. These reactions induce a drastic change in the conjugated structure such that substituents that become a part of the conjugated system upon isomerization can play an important role in determining properties such as enthalpy of isomerization and HOMO-LUMO gap. Therefore, theoretical investigations dealing with such systems should be capable of accurately capturing the interplay between electron correlation and exchange effects. In this work, we examine the dihydroazulene isomerization as an example conjugated system. We employ the highly accurate quantum Monte Carlo (QMC) method to predict thermochemical properties and to benchmark results from density functional theory (DFT) methods. Although DFT provides sufficient accuracy for similar systems, in this particular system, DFT predictions of ground state and reaction paths are inconsistent and non-systematic errors arise. We present a comparison between QMC and DFT results for enthalpy of isomerization, HOMO-LUMO gap and charge densities with a range of DFT functionals.

  19. Computer-based personality judgments are more accurate than those made by humans.

    Science.gov (United States)

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.

  20. Accurate Cure Modeling for Isothermal Processing of Fast Curing Epoxy Resins

    Directory of Open Access Journals (Sweden)

    Alexander Bernath

    2016-11-01

    Full Text Available In this work a holistic approach for the characterization and mathematical modeling of the reaction kinetics of a fast epoxy resin is shown. Major composite manufacturing processes like resin transfer molding involve isothermal curing at temperatures far below the ultimate glass transition temperature. Hence, premature vitrification occurs during curing and consequently has to be taken into account by the kinetic model. In order to show the benefit of using a complex kinetic model, the Kamal-Malkin kinetic model is compared to the Grindling kinetic model in terms of prediction quality for isothermal processing. From the selected models, only the Grindling kinetic model is capable of taking into account vitrification. Non-isothermal, isothermal and combined differential scanning calorimetry (DSC) measurements are conducted and processed for subsequent use for model parametrization. In order to demonstrate which DSC measurements are vital for proper cure modeling, both models are fitted to varying sets of measurements. Special attention is given to the evaluation of isothermal DSC measurements, which are subject to deviations arising from unrecorded cross-linking prior to the beginning of the measurement as well as from physical aging effects. It is found that isothermal measurements are vital for accurate modeling of isothermal cure and cannot be neglected. Accurate cure predictions are achieved using the Grindling kinetic model.
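
    As a hedged illustration of the kind of cure kinetics being compared (the widely used Kamal autocatalytic form, without the vitrification/diffusion correction that distinguishes the Grindling model), an isothermal cure can be integrated as below; all rate parameters are assumptions for illustration, not the fitted values of this work.

      # Hedged sketch: isothermal integration of a Kamal-type autocatalytic cure model
      #   dalpha/dt = (k1 + k2*alpha**m) * (1 - alpha)**n,  with Arrhenius k1, k2.
      # Parameter values are illustrative, not the fitted values from the study,
      # and no vitrification (diffusion) correction is included.
      import numpy as np
      from scipy.integrate import solve_ivp

      R = 8.314  # J/(mol K)

      def kamal_rhs(t, alpha, T, A1=1e5, E1=6.0e4, A2=5e5, E2=5.0e4, m=0.5, n=1.5):
          k1 = A1 * np.exp(-E1 / (R * T))
          k2 = A2 * np.exp(-E2 / (R * T))
          a = np.clip(alpha, 0.0, 1.0)
          return (k1 + k2 * a**m) * (1.0 - a)**n

      T_cure = 393.15  # 120 degC isothermal cure
      sol = solve_ivp(kamal_rhs, (0.0, 1800.0), y0=[1e-6], args=(T_cure,),
                      dense_output=True, rtol=1e-8)
      for t in (60, 300, 900, 1800):
          print(f"t = {t:5d} s   degree of cure = {sol.sol(t)[0]:.3f}")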

  1. Prediction Markets

    DEFF Research Database (Denmark)

    Horn, Christian Franz; Ivens, Bjørn Sven; Ohneberg, Michael

    2014-01-01

    In recent years, Prediction Markets gained growing interest as a forecasting tool among researchers as well as practitioners, which resulted in an increasing number of publications. In order to track the latest development of research, comprising the extent and focus of research, this article...

  2. DNA barcode data accurately assign higher spider taxa

    Directory of Open Access Journals (Sweden)

    Jonathan A. Coddington

    2016-07-01

    Full Text Available The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However

  3. Optimization of extracranial stereotactic radiation therapy of small lung lesions using accurate dose calculation algorithms

    Directory of Open Access Journals (Sweden)

    Polednik Martin

    2006-11-01

    Full Text Available Abstract Background The aim of this study was to compare and to validate different dose calculation algorithms for the use in radiation therapy of small lung lesions and to optimize the treatment planning using accurate dose calculation algorithms. Methods A 9-field conformal treatment plan was generated on an inhomogeneous phantom with lung mimics and a soft tissue equivalent insert, mimicking a lung tumor. The dose distribution was calculated with the Pencil Beam and Collapsed Cone algorithms implemented in Masterplan (Nucletron) and the Monte Carlo system XVMC, and validated using Gafchromic EBT films. Differences in dose distribution were evaluated. The plans were then optimized by adding segments to the outer shell of the target in order to increase the dose near the interface to the lung. Results The Pencil Beam algorithm overestimated the dose by up to 15% compared to the measurements. Collapsed Cone and Monte Carlo predicted the dose more accurately, with maximum differences of -8% and -3% respectively compared to the film. Plan optimization by adding small segments to the peripheral parts of the target, creating a 2-step fluence modulation, allowed target coverage and homogeneity to be increased as compared to the uncorrected 9 field plan. Conclusion The use of forward 2-step fluence modulation in radiotherapy of small lung lesions allows the improvement of tumor coverage and dose homogeneity as compared to non-modulated treatment plans and may thus help to increase the local tumor control probability. While the Collapsed Cone algorithm is closer to measurements than the Pencil Beam algorithm, both algorithms are limited at tissue/lung interfaces, leaving Monte-Carlo the most accurate algorithm for dose prediction.

  4. Accurate refinement of docked protein complexes using evolutionary information and deep learning.

    Science.gov (United States)

    Akbal-Delibas, Bahar; Farhoodi, Roshanak; Pomplun, Marc; Haspel, Nurit

    2016-06-01

    One of the major challenges for protein docking methods is to accurately discriminate native-like structures from false positives. Docking methods are often inaccurate and the results have to be refined and re-ranked to obtain native-like complexes and remove outliers. In a previous work, we introduced AccuRefiner, a machine learning based tool for refining protein-protein complexes. Given a docked complex, the refinement tool produces a small set of refined versions of the input complex, with lower root-mean-square-deviation (RMSD) of atomic positions with respect to the native structure. The method employs a unique ranking tool that accurately predicts the RMSD of docked complexes with respect to the native structure. In this work, we use a deep learning network with a similar set of features and five layers. We show that a properly trained deep learning network can accurately predict the RMSD of a docked complex with 1.40 Å error margin on average, by approximating the complex relationship between a wide set of scoring function terms and the RMSD of a docked structure. The network was trained on 35000 unbound docking complexes generated by RosettaDock. We tested our method on 25 different putative docked complexes produced also by RosettaDock for five proteins that were not included in the training data. The results demonstrate that the high accuracy of the ranking tool enables AccuRefiner to consistently choose the refinement candidates with lower RMSD values compared to the coarsely docked input structures.
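
    A hedged, generic sketch of the regression idea (a small multi-layer network mapping scoring-function terms to RMSD); scikit-learn's MLPRegressor stands in for the authors' five-layer network, and the features below are synthetic rather than RosettaDock score terms.

      # Hedged sketch: regress docked-complex RMSD from scoring-function terms with
      # a small multi-layer perceptron. Features and targets below are synthetic.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(1)
      X = rng.normal(size=(5000, 20))                     # 20 synthetic score terms
      w = rng.normal(size=20)
      rmsd = np.abs(X @ w) + rng.normal(0.0, 0.5, 5000)   # synthetic RMSD targets

      X_tr, X_te, y_tr, y_te = train_test_split(X, rmsd, test_size=0.2, random_state=0)
      net = MLPRegressor(hidden_layer_sizes=(64, 64, 32, 16, 8),  # five hidden layers
                         max_iter=500, random_state=0)
      net.fit(X_tr, y_tr)
      print(f"mean absolute RMSD error: {mean_absolute_error(y_te, net.predict(X_te)):.2f}")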

  5. Accurate stochastic reconstruction of heterogeneous microstructures by limited x-ray tomographic projections.

    Science.gov (United States)

    Li, Hechao; Kaira, Shashank; Mertens, James; Chawla, Nikhilesh; Jiao, Yang

    2016-12-01

    An accurate knowledge of the complex microstructure of a heterogeneous material is crucial for its performance prediction, prognosis and optimization. X-ray tomography has provided a nondestructive means for microstructure characterization in 3D and 4D (i.e. structural evolution over time), in which a material is typically reconstructed from a large number of tomographic projections using filtered-back-projection (FBP) method or algebraic reconstruction techniques (ART). Here, we present in detail a stochastic optimization procedure that enables one to accurately reconstruct material microstructure from a small number of absorption contrast x-ray tomographic projections. This discrete tomography reconstruction procedure is in contrast to the commonly used FBP and ART, which usually requires thousands of projections for accurate microstructure rendition. The utility of our stochastic procedure is first demonstrated by reconstructing a wide class of two-phase heterogeneous materials including sandstone and hard-particle packing from simulated limited-angle projections in both cone-beam and parallel beam projection geometry. It is then applied to reconstruct tailored Sn-sphere-clay-matrix systems from limited-angle cone-beam data obtained via a lab-scale tomography facility at Arizona State University and parallel-beam synchrotron data obtained at Advanced Photon Source, Argonne National Laboratory. In addition, we examine the information content of tomography data by successively incorporating larger number of projections and quantifying the accuracy of the reconstructions. We show that only a small number of projections (e.g. 20-40, depending on the complexity of the microstructure of interest and desired resolution) are necessary for accurate material reconstructions via our stochastic procedure, which indicates its high efficiency in using limited structural information. The ramifications of the stochastic reconstruction procedure in 4D materials science are also

  6. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    Directory of Open Access Journals (Sweden)

    Jianhua Zhang

    2014-01-01

    Full Text Available This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple views’ calibration process is implemented to obtain the transformations of multiple views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on them, accurate and robust calibration results can be achieved. We evaluate the proposed method by corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method can achieve good performance, which can be further applied to EEG source localization applications on human brain.

  7. A fast and accurate method for echocardiography strain rate imaging

    Science.gov (United States)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Recently, strain and strain rate imaging have proved their superiority with respect to classical motion estimation methods in myocardial evaluation as a novel technique for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique which is more rapid and accurate than the previous correlation-based methods. The new method presumes a spatiotemporal constancy of intensity and magnitude of the image, and makes use of the spline moment in a multiresolution approach. In addition, the cardiac central point is obtained using a combination of the center of mass and endocardial tracking. It is shown that the proposed method helps overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique for different motions and orientations. Evaluation is performed on simulated, phantom (a contractile rubber balloon) and real sequences and proves that this technique is more accurate and faster than the previous methods.

  8. Accurate speed and slip measurement of induction motors

    Energy Technology Data Exchange (ETDEWEB)

    Ho, S.Y.S.; Langman, R. [Tasmania Univ., Hobart, TAS (Australia)

    1996-03-01

    Two alternative hardware circuits, for the accurate measurement of low slip in cage induction motors, are discussed. Both circuits compare the periods of the fundamental of the supply frequency and pulses from a shaft-connected toothed-wheel. The better of the two achieves accuracy to 0.5 percent of slip over the range 0.1 to 0.005, or better than 0.001 percent of speed over the range. This method is considered useful for slip measurement of motors supplied by either constant-frequency mains or variable-speed controllers with PWM waveforms. It is demonstrated that accurate slip measurement supports the conclusions of work previously done on the detection of broken rotor bars. (author). 1 tab., 6 figs., 13 refs.

  9. Accurate analysis of arbitrarily-shaped helical groove waveguide

    Institute of Scientific and Technical Information of China (English)

    Liu Hong-Tao; Wei Yan-Yu; Gong Yu-Bin; Yue Ling-Na; Wang Wen-Xiang

    2006-01-01

    This paper presents a theory on accurately analysing the dispersion relation and the interaction impedance of electromagnetic waves propagating through a helical groove waveguide with arbitrary groove shape, in which the complex groove profile is synthesized by a series of rectangular steps. By introducing the influence of high-order evanescent modes on the connection of any two neighbouring steps through an equivalent susceptance under a modified admittance matching condition, the assumption of neglecting the discontinuity capacitance made in previously published analyses is avoided, and the accurate dispersion equation is obtained by means of a combination of the field-matching method and the admittance-matching technique. The validity of this theory is proved by comparison between the measurements and the numerical calculations for two kinds of helical groove waveguides with different groove shapes.

  10. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Directory of Open Access Journals (Sweden)

    Zhiwei Zhao

    2015-02-01

    Full Text Available Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.

  11. Accurate measurement of the helical twisting power of chiral dopants

    Science.gov (United States)

    Kosa, Tamas; Bodnar, Volodymyr; Taheri, Bahman; Palffy-Muhoray, Peter

    2002-03-01

    We propose a method for the accurate determination of the helical twisting power (HTP) of chiral dopants. In the usual Cano-wedge method, the wedge angle is determined from the far-field separation of laser beams reflected from the windows of the test cell. Here we propose to use an optical fiber based spectrometer to accurately measure the cell thickness. Knowing the cell thickness at the positions of the disclination lines allows determination of the HTP. We show that this extension of the Cano-wedge method greatly increases the accuracy with which the HTP is determined. We show the usefulness of this method by determining the HTP of ZLI811 in a variety of hosts with negative dielectric anisotropy.

  12. Efficient and Accurate Robustness Estimation for Large Complex Networks

    CERN Document Server

    Wandelt, Sebastian

    2016-01-01

    Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks in a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate, metrics require considerable computation efforts, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
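
    The record does not give the new algorithm's details; as a hedged baseline sketch of what "metric-induced robustness estimation" means in practice, the following removes nodes in order of degree and averages the surviving largest-component fraction (the simple degree-attack baseline, not the proposed sub-quadratic method).

      # Hedged baseline sketch: degree-ordered attack with the Schneider-style
      # robustness measure R (average relative size of the largest component).
      import networkx as nx

      def degree_attack_robustness(graph):
          g = graph.copy()
          n = g.number_of_nodes()
          order = sorted(g.degree, key=lambda kv: kv[1], reverse=True)  # initial degrees
          sizes = []
          for node, _ in order:
              g.remove_node(node)
              if g.number_of_nodes() == 0:
                  sizes.append(0.0)
              else:
                  sizes.append(len(max(nx.connected_components(g), key=len)) / n)
          return sum(sizes) / n

      g = nx.erdos_renyi_graph(200, 0.05, seed=1)
      print(f"robustness R under degree attack: {degree_attack_robustness(g):.3f}")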

  13. Accurate parameter estimation for unbalanced three-phase system.

    Science.gov (United States)

    Chen, Yuan; So, Hing Cheung

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS.
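
    A hedged sketch of the overall pipeline described (Clarke αβ-transformation of the three-phase samples followed by nonlinear least squares); it uses scipy.optimize.least_squares rather than the Newton-Raphson scheme analysed in the record, and all signal values are synthetic.

      # Hedged sketch: Clarke (alpha-beta) transform of unbalanced three-phase samples,
      # then nonlinear least squares for frequency, amplitudes and phases.
      import numpy as np
      from scipy.optimize import least_squares

      fs, f0 = 5000.0, 50.2                  # sampling rate and true frequency (synthetic)
      t = np.arange(0, 0.2, 1.0 / fs)
      amps = (1.0, 0.9, 1.1)                 # unbalanced amplitudes
      phases = (0.0, -2 * np.pi / 3 + 0.05, 2 * np.pi / 3)
      va, vb, vc = (a * np.cos(2 * np.pi * f0 * t + p) for a, p in zip(amps, phases))

      v_alpha = (2 * va - vb - vc) / 3.0     # Clarke (alpha-beta) transformation
      v_beta = (vb - vc) / np.sqrt(3.0)

      def residuals(x):
          f, a_al, p_al, a_be, p_be = x
          r_al = a_al * np.cos(2 * np.pi * f * t + p_al) - v_alpha
          r_be = a_be * np.cos(2 * np.pi * f * t + p_be) - v_beta
          return np.concatenate([r_al, r_be])

      fit = least_squares(residuals, x0=[50.0, 1.0, 0.0, 1.0, -np.pi / 2])
      print(f"estimated frequency: {fit.x[0]:.4f} Hz (true {f0} Hz)")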

  14. Symmetric Uniformly Accurate Gauss-Runge-Kutta Method

    Directory of Open Access Journals (Sweden)

    Dauda G. YAKUBU

    2007-08-01

    Full Text Available Symmetric methods are particularly attractive for solving stiff ordinary differential equations. In this paper, by the selection of Gauss points for both interpolation and collocation, we derive a high-order symmetric single-step Gauss-Runge-Kutta collocation method for the accurate solution of ordinary differential equations. The resulting symmetric method with continuous coefficients is evaluated to give the proposed block method for the accurate solution of ordinary differential equations. More interestingly, the block method is self-starting, with an adequate absolute stability interval, and is capable of producing simultaneously dense approximations to the solution of ordinary differential equations at a block of points. The use of this method leads to a maximal gain in efficiency as well as minimal function evaluations per step.

  15. Library preparation for highly accurate population sequencing of RNA viruses

    Science.gov (United States)

    Acevedo, Ashley; Andino, Raul

    2015-01-01

    Circular resequencing (CirSeq) is a novel technique for efficient and highly accurate next-generation sequencing (NGS) of RNA virus populations. The foundation of this approach is the circularization of fragmented viral RNAs, which are then redundantly encoded into tandem repeats by ‘rolling-circle’ reverse transcription. When sequenced, the redundant copies within each read are aligned to derive a consensus sequence of their initial RNA template. This process yields sequencing data with error rates far below the variant frequencies observed for RNA viruses, facilitating ultra-rare variant detection and accurate measurement of low-frequency variants. Although library preparation takes ~5 d, the high-quality data generated by CirSeq simplifies downstream data analysis, making this approach substantially more tractable for experimentalists. PMID:24967624
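
    A hedged toy sketch of the consensus step described (splitting a read made of tandem repeats into its copies and majority-voting each position); the published pipeline is considerably more involved, with quality-aware alignment of the repeat copies.

      # Hedged toy sketch of the CirSeq consensus idea: majority vote across the
      # tandem repeat copies contained in a single read.
      from collections import Counter

      def consensus_from_tandem_read(read, repeat_len):
          copies = [read[i:i + repeat_len] for i in range(0, len(read), repeat_len)]
          copies = [c for c in copies if len(c) == repeat_len]  # drop any partial copy
          consensus = []
          for pos in range(repeat_len):
              counts = Counter(c[pos] for c in copies)
              consensus.append(counts.most_common(1)[0][0])
          return "".join(consensus)

      template = "ACGTTGCA"
      read = "ACGTTGCA" + "ACGTAGCA" + "ACGTTGCA"   # one error in the middle copy
      print(consensus_from_tandem_read(read, len(template)))  # -> ACGTTGCA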

  16. FIXED-WING MICRO AERIAL VEHICLE FOR ACCURATE CORRIDOR MAPPING

    Directory of Open Access Journals (Sweden)

    M. Rehak

    2015-08-01

    Full Text Available In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  17. Novel multi-beam radiometers for accurate ocean surveillance

    DEFF Research Database (Denmark)

    Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.

    2014-01-01

    Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions, co......, conical scanners and push-broom antennas are compared. The comparison will cover reflector optics and focal plane array configuration....

  18. Accurate Insertion Loss Measurements of the Juno Patch Array Antennas

    Science.gov (United States)

    Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John

    2010-01-01

    This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements while the other method is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1 dB) with separate radiometric measurements.

  19. The highly accurate anteriolateral portal for injecting the knee

    Directory of Open Access Journals (Sweden)

    Chavez-Chiang Colbert E

    2011-03-01

    Full Text Available Abstract Background The extended knee lateral midpatellar portal for intraarticular injection of the knee is accurate but is not practical for all patients. We hypothesized that a modified anteriolateral portal where the synovial membrane of the medial femoral condyle is the target would be highly accurate and effective for intraarticular injection of the knee. Methods 83 subjects with non-effusive osteoarthritis of the knee were randomized to intraarticular injection using the modified anteriolateral bent knee versus the standard lateral midpatellar portal. After hydrodissection of the synovial membrane with lidocaine using a mechanical syringe (reciprocating procedure device, 80 mg of triamcinolone acetonide were injected into the knee with a 2.0-in (5.1-cm 21-gauge needle. Baseline pain, procedural pain, and pain at outcome (2 weeks and 6 months were determined with the 10 cm Visual Analogue Pain Score (VAS. The accuracy of needle placement was determined by sonographic imaging. Results The lateral midpatellar and anteriolateral portals resulted in equivalent clinical outcomes including procedural pain (VAS midpatellar: 4.6 ± 3.1 cm; anteriolateral: 4.8 ± 3.2 cm; p = 0.77, pain at outcome (VAS midpatellar: 2.6 ± 2.8 cm; anteriolateral: 1.7 ± 2.3 cm; p = 0.11, responders (midpatellar: 45%; anteriolateral: 56%; p = 0.33, duration of therapeutic effect (midpatellar: 3.9 ± 2.4 months; anteriolateral: 4.1 ± 2.2 months; p = 0.69, and time to next procedure (midpatellar: 7.3 ± 3.3 months; anteriolateral: 7.7 ± 3.7 months; p = 0.71. The anteriolateral portal was 97% accurate by real-time ultrasound imaging. Conclusion The modified anteriolateral bent knee portal is an effective, accurate, and equivalent alternative to the standard lateral midpatellar portal for intraarticular injection of the knee. Trial Registration ClinicalTrials.gov: NCT00651625

  20. Ultra accurate collaborative information filtering via directed user similarity

    OpenAIRE

    Guo, Qiang; Song, Wen-Jun; Liu, Jian-Guo

    2014-01-01

    A key challenge of collaborative filtering (CF) based information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users would be larger than those in the opposite direction, the large-degree users' selections are recommended extensively by traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influen...

  1. Accurate quantum state estimation via "Keeping the experimentalist honest"

    CERN Document Server

    Blume-Kohout, R; Blume-Kohout, Robin; Hayden, Patrick

    2006-01-01

    In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.

  2. Accurate Method for Determining Adhesion of Cantilever Beams

    Energy Technology Data Exchange (ETDEWEB)

    Michalske, T.A.; de Boer, M.P.

    1999-01-08

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.

  3. Accurate and Simple Calibration of DLP Projector Systems

    OpenAIRE

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP Projector systems based on phase shifting profilometry pr...

  4. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry

    OpenAIRE

    Fuchs, Franz G.; Hjelmervik, Jon M.

    2014-01-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire...

  5. A highly accurate method to solve Fisher’s equation

    Indian Academy of Sciences (India)

    Mehdi Bastani; Davod Khojasteh Salkuyeh

    2012-03-01

    In this study, we present a new and very accurate numerical method to approximate the Fisher’s-type equations. Firstly, the spatial derivative in the proposed equation is approximated by a sixth-order compact finite difference (CFD6) scheme. Secondly, we solve the obtained system of differential equations using a third-order total variation diminishing Runge–Kutta (TVD-RK3) scheme. Numerical examples are given to illustrate the efficiency of the proposed method.

  6. Accurate genome relative abundance estimation based on shotgun metagenomic reads.

    Directory of Open Access Journals (Sweden)

    Li C Xia

    Full Text Available Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or their variants, and often result in biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) by explicitly modeling read assignment ambiguities, genome size biases and read distributions along the genomes. A maximum likelihood method is employed to compute Genome Relative Abundance of microbial communities using the Mixture Model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the data-sets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance for Bacteroides species for human gut samples, by providing a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition-based), even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
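
    A hedged toy sketch of the mixture-model idea behind GRAMMy: ambiguous read-to-genome assignments are resolved by expectation-maximization, with a genome-length correction folded into the likelihood. The assignment matrix and genome lengths below are illustrative, not from the published implementation.

      # Hedged toy sketch: EM estimation of genome relative abundances from
      # ambiguous read assignments, with a simple genome-size correction.
      import numpy as np

      # match[i, j] = 1 if read i maps to genome j (toy 6-read, 3-genome example)
      match = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 0],
                        [0, 1, 1], [0, 0, 1], [1, 0, 1]], dtype=float)
      genome_len = np.array([2e6, 4e6, 3e6])

      pi = np.full(3, 1.0 / 3.0)                      # initial mixture weights
      for _ in range(200):
          like = match * (pi / genome_len)            # E-step: per-read likelihoods
          z = like / like.sum(axis=1, keepdims=True)  # posterior assignment probabilities
          pi = z.sum(axis=0) / z.shape[0]             # M-step: update mixture weights

      print("estimated relative abundances:", np.round(pi / pi.sum(), 3))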

  7. BASIC: A Simple and Accurate Modular DNA Assembly Method.

    Science.gov (United States)

    Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S

    2017-01-01

    Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2].

  8. Accurate and simple calibration of DLP projector systems

    Science.gov (United States)

    Wilm, Jakob; Olesen, Oline V.; Larsen, Rasmus

    2014-03-01

    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP Projector systems based on phase shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert features coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination of parameters including lens distortion. Our implementation acquires printed planar calibration scenes in less than 1s. This makes our method both fast and convenient. We evaluate our method in terms of reprojection errors and structured light image reconstruction quality.

  9. An accurate metric for the spacetime around neutron stars

    CERN Document Server

    Pappas, George

    2016-01-01

    The problem of having an accurate description of the spacetime around neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a neutron star. Furthermore, an accurate appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work we present such an approximate stationary and axisymmetric metric for the exterior of neutron stars, which is constructed using the Ernst formalism and is parameterised by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical propert...

  10. [Spectroscopy technique and ruminant methane emissions accurate inspecting].

    Science.gov (United States)

    Shang, Zhan-Huan; Guo, Xu-Sheng; Long, Rui-Jun

    2009-03-01

    The increase in atmospheric CH4 concentration will, on the one hand, directly cause climate change through radiative forcing and, on the other hand, alter many atmospheric chemical processes, indirectly causing climate change. The rapid growth of atmospheric methane has gained the attention of governments and scientists. All countries now treat global climate change as an important reason to reduce greenhouse gas emissions, but monitoring of methane concentration, in particular precision monitoring, is needed to provide a scientific basis for emission reduction measures. So far, CH4 emissions from different animal production systems have received extensive research. The methane emissions by ruminants reported in the literature are only estimates. This is because various factors affect methane production in ruminants, many variables are associated with the techniques for measuring methane production, and the techniques currently developed are unable to accurately determine the dynamics of methane emission by ruminants; there is therefore an urgent need to develop an accurate method for this purpose. Currently, spectroscopy techniques are relatively more accurate and reliable. Various spectroscopy techniques, such as modified infrared methane measuring systems and laser and near-infrared sensing systems, are able to determine the dynamic methane emission of both housed and grazing ruminants. Spectroscopy is therefore an important methane measuring technique and contributes to proposing methods for methane emission reduction.

  11. Accurate Sliding-Mode Control System Modeling for Buck Converters

    DEFF Research Database (Denmark)

    Høyerby, Mikkel Christian Wendelboe; Andersen, Michael Andreas E.

    2007-01-01

    This paper shows that classical sliding mode theory fails to correctly predict the output impedance of the highly useful sliding mode PID compensated buck converter. The reason for this is identified as the assumption of the sliding variable being held at zero during sliding mode, effectively...... approach also predicts the self-oscillating switching action of the sliding-mode control system correctly. Analytical findings are verified by simulation as well as experimentally in a 10-30V/3A buck converter....

  12. Prediction using patient comparison vs. modeling: a case study for mortality prediction.

    Science.gov (United States)

    Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter

    2016-08-01

    Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
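
    The contrast between the two approaches (an explicit predictive model vs. prediction from the outcomes of similar patients) can be illustrated with a small scikit-learn sketch; the synthetic features, labels and parameters below are placeholders and not the MIMIC-II pipeline used in the study:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Hypothetical patient-level feature matrix X (e.g. aggregated vitals/labs)
        # and binary mortality labels y; stands in for real EMR-derived features.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 20))
        y = (X[:, :3].sum(axis=1) + rng.normal(size=2000) > 0).astype(int)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # (1) modeling approach: fit an explicit model on extracted features
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        auc_model = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

        # (2) patient-similarity approach: outcome of the k most similar patients
        knn = KNeighborsClassifier(n_neighbors=25).fit(X_tr, y_tr)
        auc_knn = roc_auc_score(y_te, knn.predict_proba(X_te)[:, 1])

        print(f"model AUC={auc_model:.2f}  similarity AUC={auc_knn:.2f}")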

  13. Final Technical Report: Increasing Prediction Accuracy.

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce Hardison [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  14. Can crop-climate models be accurate and precise? A case study for wheat production in Denmark

    DEFF Research Database (Denmark)

    Montesino San Martin, Manuel; Olesen, Jørgen E.; Porter, John Roy

    2015-01-01

    and mechanistic wheat models to assess how differences in the extent of process understanding in models affects uncertainties in projected impact. Predictive power of the models was tested via both accuracy (bias) and precision (or tightness of grouping) of yield projections for extrapolated weather conditions....... Yields predicted by the mechanistic model were generally more accurate than the empirical models for extrapolated conditions. This trend does not hold for all extrapolations; mechanistic and empirical models responded differently due to their sensitivities to distinct weather features. However, higher...

  15. Accurate LAI retrieval method based on PROBA/CHRIS data

    Directory of Open Access Journals (Sweden)

    W. Fan

    2009-11-01

    Full Text Available Leaf area index (LAI) is one of the key structural variables in terrestrial vegetation ecosystems. Remote sensing offers a chance to derive LAI accurately at regional scales. Variations of background, atmospheric conditions and the anisotropy of canopy reflectance are three factors that can strongly restrain the accuracy of retrieved LAI. Based on the hybrid canopy reflectance model, a new hyperspectral directional second derivative method (DSD) is proposed in this paper. This method can estimate LAI accurately by analyzing the canopy anisotropy. The effect of the background can also be effectively removed, so the inversion precision and the dynamic range can be improved remarkably, as has been demonstrated by numerical simulations. As the derivative method is very sensitive to random noise, we put forward an innovative filtering approach by which the data can be de-noised in the spectral and spatial dimensions simultaneously. It is shown that the filtering method removes random noise effectively; therefore, the method can be applied to remotely sensed hyperspectral images. The study region is situated in Zhangye, Gansu Province, China; the hyperspectral and multi-angular images of the study region were acquired from the Compact High-Resolution Imaging Spectrometer/Project for On-Board Autonomy (CHRIS/PROBA) on 4 and 14 June 2008. After the pre-processing procedures, the DSD method was applied, and the retrieved LAI was validated against ground truth at 11 sites. The results show that, with the innovative filtering method applied, the new LAI inversion method is accurate and effective.

  16. Towards an accurate understanding of UHMWPE visco-dynamic behaviour for numerical modelling of implants.

    Science.gov (United States)

    Quinci, Federico; Dressler, Matthew; Strickland, Anthony M; Limbert, Georges

    2014-04-01

    Considerable progress has been made in understanding implant wear and developing numerical models to predict wear for new orthopaedic devices. However, any model of wear could be improved through a more accurate representation of the biomaterial mechanics, including time-varying dynamic and inelastic behaviour such as viscosity and plastic deformation. In particular, most computational models of wear of UHMWPE implement a time-invariant version of Archard's law that links the volume of worn material to the contact pressure between the metal implant and the polymeric tibial insert. During in-vivo conditions, however, the contact area is a time-varying quantity and is therefore dependent upon the dynamic deformation response of the material. From this observation one can conclude that creep deformations of UHMWPE may be very important to consider when conducting computational wear analyses, in stark contrast to what can be found in the literature. In this study, different numerical modelling techniques are compared with experimental creep testing on a unicondylar knee replacement system in a physiologically representative context. Linear elastic, plastic and time-varying visco-dynamic models are benchmarked using literature data to predict contact deformations, pressures and areas. The aim of this study is to elucidate the contributions of viscoelastic and plastic effects on these surface quantities. It is concluded that creep deformations have a significant effect on the contact pressure measured (experiment) and calculated (computational models) at the surface of the UHMWPE unicondylar insert. The use of a purely elastoplastic constitutive model for UHMWPE leads to compressive deformations of the insert which are much smaller than those predicted by a creep-capturing viscoelastic model (and those measured experimentally). This shows again the importance of including creep behaviour in a constitutive model in order to predict the right level of surface deformation.

  17. Accurate Gas Phase Formation Enthalpies of Alloys and Refractories Decomposition Products

    KAUST Repository

    Minenkov, Yury

    2017-01-17

    Accurate gas phase formation enthalpies, ΔHf, of metal oxides and halides are critical for the prediction of the stability of high temperature materials used in the aerospace and nuclear industries. Unfortunately, the experimental ΔHf values of these compounds in the most used databases, such as the NIST-JANAF database, are often reported with large inaccuracy, while some other ΔHf values clearly differ from the value predicted by CCSD(T) methods. To address this point, in this work we systematically predicted the ΔHf values of a series of these compounds having a group 4, 6, or 14 metal. The ΔHf values in question were derived within a composite Feller-Dixon-Peterson (FDP) scheme based protocol that combines the DLPNO-CCSD(T) enthalpy of ad hoc designed reactions and the experimental ΔHf values of few reference complexes. In agreement with other theoretical studies, we predict the ΔHf values for TiOCl2, TiOF2, GeF2, and SnF4 to be significantly different from the values tabulated in NIST-JANAF and other sources, which suggests that the tabulated experimental values are inaccurate. Similarly, the predicted ΔHf values for HfCl2, HfBr2, HfI2, MoOF4, MoCl6, WOF4, WOCl4, GeO2, SnO2, PbBr4, PbI4, and PbO2 also clearly differ from the tabulated experimental values, again suggesting large inaccuracy in the experimental values. In the case when largely different experimental values are available, we point to the value that is in better agreement with our results. We expect the ΔHf values reported in this work to be quite accurate, and thus, they might be used in thermodynamic calculations, because the effects from core correlation, relativistic effects, and basis set incompleteness were included in the DLPNO-CCSD(T) calculations. T1 and T2 values were thoroughly monitored as indicators of the quality of the reference Hartree-Fock orbitals (T1) and potential multireference character of the systems (T2).
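
    The final step of such composite protocols, deriving an unknown gas-phase ΔHf from a computed reaction enthalpy and tabulated ΔHf values of the reference species, is plain Hess's-law arithmetic; a short Python sketch with placeholder numbers (the example reaction and all values are invented for illustration):

        def formation_enthalpy(dh_rxn, reactants, products):
            """Hess's law for a gas-phase reaction in which exactly one product has an
            unknown formation enthalpy (marked with dHf=None).
            reactants/products: lists of (stoichiometric coefficient, dHf in kJ/mol)."""
            known_r = sum(c * h for c, h in reactants)
            known_p = sum(c * h for c, h in products if h is not None)
            c_unknown = next(c for c, h in products if h is None)
            return (known_r + dh_rxn - known_p) / c_unknown

        # hypothetical example reaction: MCl4(g) + 2 H2O(g) -> MO2(g) + 4 HCl(g)
        dhf_unknown = formation_enthalpy(
            dh_rxn=-120.0,                          # placeholder computed reaction enthalpy
            reactants=[(1, -760.0), (2, -241.8)],   # MCl4 (placeholder), H2O(g)
            products=[(1, None), (4, -92.3)])       # MO2 (unknown), HCl(g)
        print(f"estimated dHf = {dhf_unknown:.1f} kJ/mol")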

  18. Accurate Programming: Thinking about programs in terms of properties

    Directory of Open Access Journals (Sweden)

    Walid Taha

    2011-09-01

    Full Text Available Accurate programming is a practical approach to producing high quality programs. It combines ideas from test-automation, test-driven development, agile programming, and other state of the art software development methods. In addition to building on approaches that have proven effective in practice, it emphasizes concepts that help programmers sharpen their understanding of both the problems they are solving and the solutions they come up with. This is achieved by encouraging programmers to think about programs in terms of properties.

  19. Accurate emulators for large-scale computer experiments

    CERN Document Server

    Haaland, Ben; 10.1214/11-AOS929

    2012-01-01

    Large-scale computer experiments are becoming increasingly important in science. A multi-step procedure is introduced to statisticians for modeling such experiments, which builds an accurate interpolator in multiple steps. In practice, the procedure shows substantial improvements in overall accuracy, but its theoretical properties are not well established. We introduce the terms nominal and numeric error and decompose the overall error of an interpolator into nominal and numeric portions. Bounds on the numeric and nominal error are developed to show theoretically that substantial gains in overall accuracy can be attained with the multi-step approach.

  20. An Accurate Technique for Calculation of Radiation From Printed Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min; Sorensen, Stig B.; Jorgensen, Erik

    2011-01-01

    The accuracy of various techniques for calculating the radiation from printed reflectarrays is examined, and an improved technique based on the equivalent currents approach is proposed. The equivalent currents are found from a continuous plane wave spectrum calculated by use of the spectral dyadic...... Green's function. This ensures a correct relation between the equivalent electric and magnetic currents and thus allows an accurate calculation of the radiation over the entire far-field sphere. A comparison to DTU-ESA Facility measurements of a reference offset reflectarray designed and manufactured...

  1. Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA.

    Science.gov (United States)

    Robinson, David

    2014-12-09

    A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all valence occupied orbitals and half of the virtual orbitals included but for some molecules even fewer orbitals. Efficiency gains of between 15 and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited state geometry optimization calculation.

  2. Accurate strand-specific quantification of viral RNA.

    Directory of Open Access Journals (Sweden)

    Nicole E Plaskon

    Full Text Available The presence of full-length complements of viral genomic RNA is a hallmark of RNA virus replication within an infected cell. As such, methods for detecting and measuring specific strands of viral RNA in infected cells and tissues are important in the study of RNA viruses. Strand-specific quantitative real-time PCR (ssqPCR) assays are increasingly being used for this purpose, but the accuracy of these assays depends on the assumption that the amount of cDNA measured during the quantitative PCR (qPCR) step accurately reflects amounts of a specific viral RNA strand present in the RT reaction. To specifically test this assumption, we developed multiple ssqPCR assays for the positive-strand RNA virus o'nyong-nyong (ONNV) that were based upon the most prevalent ssqPCR assay design types in the literature. We then compared various parameters of the ONNV-specific assays. We found that an assay employing standard unmodified virus-specific primers failed to discern the difference between cDNAs generated from virus-specific primers and those generated through false priming. Further, we were unable to accurately measure levels of ONNV (-)-strand RNA with this assay when higher levels of cDNA generated from the (+) strand were present. Taken together, these results suggest that assays of this type do not accurately quantify levels of the anti-genomic strand present during RNA virus infectious cycles. However, an assay permitting the use of a tag-specific primer was able to distinguish cDNAs transcribed from ONNV (-)-strand RNA from other cDNAs present, thus allowing accurate quantification of the anti-genomic strand. We also report the sensitivities of two different detection strategies and chemistries, SYBR(R) Green and DNA hydrolysis probes, used with our tagged ONNV-specific ssqPCR assays. Finally, we describe development, design and validation of ssqPCR assays for chikungunya virus (CHIKV), the recent cause of large outbreaks of disease in the Indian Ocean

  3. Accurate measurement of ultrasonic velocity by eliminating the diffraction effect

    Institute of Scientific and Technical Information of China (English)

    WEI Tingcun

    2003-01-01

    The accurate measurement method of ultrasonic velocity by the pulse interference method with eliminating the diffraction effect has been investigated in VHF range experimentally. Two silicate glasses were taken as the specimens, their frequency dependences of longitudinal velocities were measured in the frequency range 50-350 MHz, and the phase advances of ultrasonic signals caused by diffraction effect were calculated using A. O. Williams' theoretical expression. For the frequency dependences of longitudinal velocities, the measurement results were in good agreement with the simulation ones in which the phase advances were included. It has been shown that the velocity error due to diffraction effect can be corrected very well by this method.

  4. A novel simple and accurate flatness measurement method

    CERN Document Server

    Thang, H L

    2011-01-01

    Flatness measurement of a surface plate is a long-standing and intensively studied research topic. However, the ISO-definition-based and other measurement methods tend to be inconvenient to carry out and/or complicated in their data analysis. In particular, in practice these methods do not deal clearly and straightforwardly with the inclination angle that is always present in any flatness measurement. In this report a novel, simple and accurate flatness measurement method is introduced to overcome this shortcoming of the available methods. The mathematical modelling of the method is also presented, making its underlying nature transparent. Worked examples of its application show consistent results.

  5. Fourth order accurate compact scheme with group velocity control (GVC)

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    For solving complex flow fields with multi-scale structure, higher-order accurate schemes are preferred. Among high-order schemes, compact schemes have higher resolving efficiency. When compact and upwind compact schemes are used to solve aerodynamic problems, numerical oscillations appear near the shocks. These oscillations are produced by the non-uniform group velocity of wave packets in the numerical solution. To improve the resolution of the shock, a parameter function is introduced into the compact scheme to control the group velocity. The newly developed method is simple. It has higher accuracy and a smaller stencil of grid points.
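
    For context, the classical fourth-order compact (Padé) approximation to the first derivative, which group-velocity-controlled schemes modify, can be sketched as follows (periodic grid, dense solve, no GVC term; purely illustrative):

        import numpy as np

        def compact_first_derivative(f, h):
            """Fourth-order compact (Pade) scheme on a periodic grid:
            (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3/(4h) * (f_{i+1} - f_{i-1})."""
            n = len(f)
            A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
            A[0, -1] = A[-1, 0] = 0.25                      # periodic wrap-around
            rhs = 3.0 / (4.0 * h) * (np.roll(f, -1) - np.roll(f, 1))
            return np.linalg.solve(A, rhs)                  # tridiagonal solver in practice

        # quick check against an exact derivative
        x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
        err = np.max(np.abs(compact_first_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)))
        print(f"max error: {err:.2e}")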

  6. Rapid and Accurate Idea Transfer: Presenting Ideas with Concept Maps

    Science.gov (United States)

    2008-07-30

    Questions from the pre-test were included in the post-test, for example: 1. At the end of the day, people in the camps take home about __ taka to feed their families. 2. ... Teachers are not paid regularly. Tuition fees for different classes are 40 to 80 taka (Bangladesh currency) per month, which is very high for the...

  7. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Directory of Open Access Journals (Sweden)

    Mark Shortis

    2015-12-01

    Full Text Available Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  8. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  9. Assessing smoking status in disadvantaged populations: is computer administered self report an accurate and acceptable measure?

    Directory of Open Access Journals (Sweden)

    Bryant Jamie

    2011-11-01

    Full Text Available Abstract Background Self report of smoking status is potentially unreliable in certain situations and in high-risk populations. This study aimed to determine the accuracy and acceptability of computer administered self-report of smoking status among a low socioeconomic (SES) population. Methods Clients attending a community service organisation for welfare support were invited to complete a cross-sectional touch screen computer health survey. Following survey completion, participants were invited to provide a breath sample to measure exposure to tobacco smoke in expired air. Sensitivity, specificity, positive predictive value and negative predictive value were calculated. Results Three hundred and eighty-three participants completed the health survey, and 330 (86%) provided a breath sample. Of participants included in the validation analysis, 59% reported being a daily or occasional smoker. Sensitivity was 94.4% and specificity 92.8%. The positive and negative predictive values were 94.9% and 92.0%, respectively. The majority of participants reported that the touch screen survey was both enjoyable (79%) and easy (88%) to complete. Conclusions Computer administered self report is both acceptable and accurate as a method of assessing smoking status among low SES smokers in a community setting. Routine collection of health information using a touch-screen computer has the potential to identify smokers and increase provision of support and referral in the community setting.
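
    The validation statistics reported above are simple functions of a 2x2 table comparing self-report against the biochemical reference; a short Python sketch (the counts are invented for illustration and are not the study's raw data):

        def diagnostic_metrics(tp, fp, fn, tn):
            """Sensitivity, specificity, PPV and NPV from a 2x2 table comparing
            self-reported smoking status against the biochemical reference."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # hypothetical counts (self-report vs. expired-air CO), for illustration only
        print(diagnostic_metrics(tp=170, fp=9, fn=10, tn=116))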

  10. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull

    Science.gov (United States)

    Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2013-01-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944

  11. Accurate Runout Measurement for HDD Spinning Motors and Disks

    Science.gov (United States)

    Jiang, Quan; Bi, Chao; Lin, Song

    As hard disk drive (HDD) areal density increases, its track width becomes smaller and smaller, and so does the non-repeatable runout. The HDD industry needs more accurate, better-resolution runout measurements of spinning spindle motors and media platters in both the axial and radial directions. This paper introduces a new system for precisely measuring the runout of HDD spinning disks and motors by synchronously acquiring the rotor position signal and the displacements in the axial or radial directions. In order to minimize the synchronization error between the rotor position and the displacement signal, a high-resolution counter is adopted instead of the conventional phase-locked loop method. With a Laser Doppler Vibrometer and proper signal processing, the proposed runout system can precisely measure the runout of HDD spinning disks and motors with 1 nm resolution and 0.2% accuracy at a proper sampling rate. It can provide an effective and accurate means to measure the runout of high areal density HDDs, in particular next-generation HDDs such as patterned-media HDDs and HAMR HDDs.

  12. Simple and accurate optical height sensor for wafer inspection systems

    Science.gov (United States)

    Shimura, Kei; Nakai, Naoya; Taniguchi, Koichi; Itoh, Masahide

    2016-02-01

    An accurate method for measuring the wafer surface height is required for wafer inspection systems to adjust the focus of inspection optics quickly and precisely. A method for projecting a laser spot onto the wafer surface obliquely and for detecting its image displacement using a one-dimensional position-sensitive detector is known, and a variety of methods have been proposed for improving the accuracy by compensating the measurement error due to the surface patterns. We have developed a simple and accurate method in which an image of a reticle with eight slits is projected on the wafer surface and its reflected image is detected using an image sensor. The surface height is calculated by averaging the coordinates of the images of the slits in both the two directions in the captured image. Pattern-related measurement error was reduced by applying the coordinates averaging to the multiple-slit-projection method. Accuracy of better than 0.35 μm was achieved for a patterned wafer at the reference height and ±0.1 mm from the reference height in a simple configuration.

  13. Quality metric for accurate overlay control in <20nm nodes

    Science.gov (United States)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget is getting tighter at these advanced nodes, the importance in the accuracy in each nanometer of OVL error is critical. When process owners select OVL targets and methods for their process, they must do it wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going towards the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named `Qmerit' for its imaging based OVL (IBO) targets, which is obtained on the-fly for each OVL measurement point in X & Y. This Qmerit score will enable the process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  14. Cerebral fat embolism: Use of MR spectroscopy for accurate diagnosis

    Directory of Open Access Journals (Sweden)

    Laxmi Kokatnur

    2015-01-01

    Full Text Available Cerebral fat embolism (CFE) is an uncommon but serious complication following orthopedic procedures. It usually presents with altered mental status, and can be a part of fat embolism syndrome (FES) if associated with cutaneous and respiratory manifestations. Because of the presence of other common factors affecting the mental status, particularly in the postoperative period, the diagnosis of CFE can be challenging. Magnetic resonance imaging (MRI) of the brain typically shows multiple lesions distributed predominantly in the subcortical region, which appear as hyperintense lesions on T2 and diffusion weighted images. Although the location offers a clue, the MRI findings are not specific for CFE. Watershed infarcts, hypoxic encephalopathy, disseminated infections, demyelinating disorders, and diffuse axonal injury can also show similar changes on MRI of the brain. The presence of fat in these hyperintense lesions, identified by MR spectroscopy as raised lipid peaks, will help in the accurate diagnosis of CFE. Normal brain tissue or conditions producing similar MRI changes will not show any lipid peak on MR spectroscopy. We present a case of CFE initially misdiagnosed as brain stem stroke based on clinical presentation and cranial computed tomography (CT) scan, and later, MR spectroscopy elucidated the accurate diagnosis.

  15. Ultra-accurate collaborative information filtering via directed user similarity

    Science.gov (United States)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users would be larger than the ones in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm specifically to address the challenge of accuracy and diversity of the CF algorithm. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor for improving the personalized recommendation performance.
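
    The core idea of a degree-dependent, directed user similarity in memory-based CF can be sketched as follows (an illustrative simplification; the exact HDCF weighting and second-order correction are not reproduced here):

        import numpy as np

        def directed_similarity(A):
            """A: binary user-item matrix of shape (n_users, n_items).
            Returns an asymmetric similarity matrix S where S[u, v] is the overlap of
            the two users' selections normalised by the degree of the *target* user v,
            so similarity 'towards' large-degree users is damped."""
            overlap = A @ A.T                          # common selections
            degrees = A.sum(axis=1)
            S = overlap / np.maximum(degrees[np.newaxis, :], 1)
            np.fill_diagonal(S, 0.0)
            return S

        def recommend_scores(A, S):
            """Score unseen items for every user by aggregating similar users' choices."""
            scores = S @ A
            return np.where(A > 0, -np.inf, scores)    # mask items already collected

        rng = np.random.default_rng(1)
        A = (rng.random((50, 200)) < 0.05).astype(float)
        print(recommend_scores(A, directed_similarity(A)).shape)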

  16. Towards an accurate determination of the age of the Universe

    CERN Document Server

    Jiménez, R

    1998-01-01

    In the past 40 years a considerable effort has been focused in determining the age of the Universe at zero redshift using several stellar clocks. In this review I will describe the best theoretical methods to determine the age of the oldest Galactic Globular Clusters (GC). I will also argue that a more accurate age determination may come from passively evolving high-redshift ellipticals. In particular, I will review two new methods to determine the age of GC. These two methods are more accurate than the classical isochrone fitting technique. The first method is based on the morphology of the horizontal branch and is independent of the distance modulus of the globular cluster. The second method uses a careful binning of the stellar luminosity function which determines simultaneously the distance and age of the GC. It is found that the oldest GCs have an age of $13.5 \\pm 2$ Gyr. The absolute minimum age for the oldest GCs is 10.5 Gyr and the maximum is 16.0 Gyr (with 99% confidence). Therefore, an Einstein-De S...

  17. Learning fast accurate movements requires intact frontostriatal circuits

    Directory of Open Access Journals (Sweden)

    Britne eShabbott

    2013-11-01

    Full Text Available The basal ganglia are known to play a crucial role in movement execution, but their importance for motor skill learning remains unclear. Obstacles to our understanding include the lack of a universally accepted definition of motor skill learning (definition confound), and difficulties in distinguishing learning deficits from execution impairments (performance confound). We studied how healthy subjects and subjects with a basal ganglia disorder learn fast accurate reaching movements, and we addressed the definition and performance confounds by: (1) focusing on an operationally defined core element of motor skill learning (speed-accuracy learning), and (2) using normal variation in initial performance to separate movement execution impairment from motor learning abnormalities. We measured motor skill learning as performance improvement in a reaching task with a speed-accuracy trade-off. We compared the performance of subjects with Huntington’s disease (HD), a neurodegenerative basal ganglia disorder, to that of premanifest carriers of the HD mutation and of control subjects. The initial movements of HD subjects were less skilled (slower and/or less accurate) than those of control subjects. To factor out these differences in initial execution, we modeled the relationship between learning and baseline performance in control subjects. Subjects with HD exhibited a clear learning impairment that was not explained by differences in initial performance. These results support a role for the basal ganglia in both movement execution and motor skill learning.

  18. Accurate measurement of streamwise vortices using dual-plane PIV

    Science.gov (United States)

    Waldman, Rye M.; Breuer, Kenneth S.

    2012-11-01

    Low Reynolds number aerodynamic experiments with flapping animals (such as bats and small birds) are of particular interest due to their application to micro air vehicles which operate in a similar parameter space. Previous PIV wake measurements described the structures left by bats and birds and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions based on said measurements. The highly three-dimensional and unsteady nature of the flows associated with flapping flight are major challenges for accurate measurements. The challenge of animal flight measurements is finding small flow features in a large field of view at high speed with limited laser energy and camera resolution. Cross-stream measurement is further complicated by the predominately out-of-plane flow that requires thick laser sheets and short inter-frame times, which increase noise and measurement uncertainty. Choosing appropriate experimental parameters requires compromise between the spatial and temporal resolution and the dynamic range of the measurement. To explore these challenges, we do a case study on the wake of a fixed wing. The fixed model simplifies the experiment and allows direct measurements of the aerodynamic forces via load cell. We present a detailed analysis of the wake measurements, discuss the criteria for making accurate measurements, and present a solution for making quantitative aerodynamic load measurements behind free-flyers.

  19. Accurate and precise zinc isotope ratio measurements in urban aerosols.

    Science.gov (United States)

    Gioia, Simone; Weiss, Dominik; Coles, Barry; Arnold, Tim; Babinski, Marly

    2008-12-15

    We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng and significantly lower than published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to Zn sample content of approximately 50 ng due to the combined effect of blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is corrected for by applying the common analyte internal standardization method approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of delta(66)Zn determinations in aerosols is around 0.05 per thousand per atomic mass unit. The method was tested on aerosols collected in Sao Paulo City, Brazil. The measurements reveal significant variations in delta(66)Zn(Imperial) ranging between -0.96 and -0.37 per thousand in coarse and between -1.04 and 0.02 per thousand in fine particular matter. This variability suggests that Zn isotopic compositions distinguish atmospheric sources. The isotopic light signature suggests traffic as the main source. We present further delta(66)Zn(Imperial) data for the standard reference material NIST SRM 2783 (delta(66)Zn(Imperial) = 0.26 +/- 0.10 per thousand).

  20. Accurate phylogenetic classification of DNA fragments based onsequence composition

    Energy Technology Data Exchange (ETDEWEB)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
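
    A toy version of composition-based classification, here a nearest-centroid classifier on tetranucleotide frequency vectors, conveys the general idea (this is not the PhyloPythia implementation):

        from itertools import product
        import numpy as np

        BASES = "ACGT"
        KMERS = ["".join(p) for p in product(BASES, repeat=4)]   # tetranucleotides
        INDEX = {k: i for i, k in enumerate(KMERS)}

        def composition_vector(seq, k=4):
            """Normalised k-mer frequency vector of a DNA sequence."""
            v = np.zeros(len(KMERS))
            for i in range(len(seq) - k + 1):
                idx = INDEX.get(seq[i:i + k])
                if idx is not None:                               # skip ambiguous bases
                    v[idx] += 1
            return v / max(v.sum(), 1)

        def train_centroids(labelled_fragments):
            """labelled_fragments: iterable of (clade_label, sequence) pairs."""
            groups = {}
            for label, seq in labelled_fragments:
                groups.setdefault(label, []).append(composition_vector(seq))
            return {label: np.mean(vs, axis=0) for label, vs in groups.items()}

        def classify(seq, centroids):
            v = composition_vector(seq)
            return min(centroids, key=lambda label: np.linalg.norm(v - centroids[label]))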

  1. Novel dispersion tolerant interferometry method for accurate measurements of displacement

    Science.gov (United States)

    Bradu, Adrian; Maria, Michael; Leick, Lasse; Podoleanu, Adrian G.

    2015-05-01

    We demonstrate that the recently proposed master-slave interferometry method is able to provide true dispersion-free depth profiles in a spectrometer-based set-up that can be used for accurate displacement measurements in sensing and optical coherence tomography. The proposed technique is based on correlating the channelled spectra produced by the linear camera in the spectrometer with previously recorded masks. As the technique is not based on Fourier transformations (FT), it does not require any resampling of data and is immune to any amount of dispersion left unbalanced in the system. In order to prove the tolerance of the technique to dispersion, different lengths of optical fiber are used in the interferometer to introduce dispersion, and it is demonstrated that neither the sensitivity profile versus optical path difference (OPD) nor the depth resolution is affected. In contrast, it is shown that the classical FT-based methods using calibrated data provide less accurate optical path length measurements and exhibit a quicker decay of sensitivity with OPD.
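
    A highly simplified, real-valued sketch of the mask-correlation idea (compare the measured channelled spectrum against masks pre-recorded at known optical path differences and use the correlation amplitude as the reflectivity at that depth; the actual master-slave method is more elaborate):

        import numpy as np

        def record_masks(reference_spectra):
            """reference_spectra: array (n_depths, n_pixels) of channelled spectra
            recorded from a mirror placed at known optical path differences."""
            masks = reference_spectra - reference_spectra.mean(axis=1, keepdims=True)
            return masks / np.linalg.norm(masks, axis=1, keepdims=True)

        def depth_profile(measured_spectrum, masks):
            """Amplitude of the correlation between the measured spectrum and each
            stored mask; no FFT or wavenumber resampling is required."""
            s = measured_spectrum - measured_spectrum.mean()
            return np.abs(masks @ s)

        # usage sketch: a_scan = depth_profile(camera_line, record_masks(calibration_lines))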

  2. An Analytic Method for Measuring Accurate Fundamental Frequency Components

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Soon Ryul; Park Jong Keun [Seoul National University, Seoul(Korea); Kang, Sang Hee [Myongji University, Seoul (Korea)

    2002-04-01

    This paper proposes an analytic method for accurately measuring the fundamental frequency component of a fault current signal distorted with a DC-offset, a characteristic frequency component, and harmonics. The proposed algorithm is composed of four stages: sine filter, linear filter, Prony's method, and measurement. The sine filter and the linear filter eliminate harmonics and the fundamental frequency component, respectively. Then Prony's method is used to estimate the parameters of the DC-offset and the characteristic frequency component. Finally, the fundamental frequency component is measured by compensating the sine-filtered signal with the estimated parameters. The performance evaluation of the proposed method is presented for a-phase to ground faults on a 345 kV 200 km overhead transmission line. The EMTP is used to generate fault current signals under different fault locations and fault inception angles. It is shown that the analytic method accurately measures the fundamental frequency component regardless of the characteristic frequency component as well as the DC-offset. (author). 19 refs., 4 figs., 4 tabs.

  3. Genetic fuzzy system predicting contractile reactivity patterns of small arteries

    DEFF Research Database (Denmark)

    Tang, J; Sheykhzade, Majid; Clausen, B F;

    2014-01-01

    strategies. Results show that optimized fuzzy systems (OFSs) predict contractile reactivity of arteries accurately. In addition, OFSs identified significant differences that were undetectable using conventional analysis in the responses of arteries between groups. We concluded that OFSs may be used...

  4. Accurate simulation of Raman amplified lightwave synthesized frequency sweeper

    DEFF Research Database (Denmark)

    Pedersen, Anders Tegtmeier; Olesen, Anders Sig; Rottwitt, Karsten

    2011-01-01

    A lightwave synthesized frequency sweeper using a Raman amplifier for loss compensation is presented together with a numerical model capable of predicting the shape of individual pulses as well as the overall envelope of more than 100 pulses. The generated pulse envelope consists of 116 pulses...

  5. Internal Medicine Residents Do Not Accurately Assess Their Medical Knowledge

    Science.gov (United States)

    Jones, Roger; Panda, Mukta; Desbiens, Norman

    2008-01-01

    Background: Medical knowledge is essential for appropriate patient care; however, the accuracy of internal medicine (IM) residents' assessment of their medical knowledge is unknown. Methods: IM residents predicted their overall percentile performance 1 week (on average) before and after taking the in-training exam (ITE), an objective and well…

  6. Predicting and Supplying Human Resource Requirements for the Future.

    Science.gov (United States)

    Blake, Larry J.

    After asserting that public institutions should not provide training for nonexistent jobs, this paper reviews problems associated with the accurate prediction of future manpower needs. The paper reviews the processes currently used to project labor force needs and notes the difficulty of accurately forecasting labor market "surprises,"…

  7. Review of prediction for thermal contact resistance

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Theoretical prediction research on thermal contact resistance is reviewed in this paper. In general, modeling or simulating the thermal contact resistance involves several aspects, including the descriptions of surface topography, the analysis of micro mechanical deformation, and the thermal models. Some key problems are proposed for accurately predicting the thermal resistance of two solid contact surfaces. We provide a perspective on further promising research, which would be beneficial to understanding mechanisms and engineering applications of the thermal contact resistance in heat transport phenomena.

  8. Lumbar spine: pretest predictability of CT findings

    Energy Technology Data Exchange (ETDEWEB)

    Giles, D.J.; Thomas, R.J.; Osborn, A.G.; Clayton, P.D.; Miller, M.H.; Bahr, A.L.; Frederick, P.R.; O' Connor, G.D.; Ostler, D.

    1984-03-01

    Demographic and symptomatic data gathered from 460 patients referred for lumbosacral CT examinations were analyzed to determine if the prescan probability of normal or abnormal findings could be predicted accurately. The authors were unable to predict the presence of herniated disk on the basis of patient-supplied data alone. Age was the single most significant predictor of an abnormality and was sharply related to degenerative disease and spinal stenosis.

  9. Wakeful rest promotes the integration of spatial memories into accurate cognitive maps.

    Science.gov (United States)

    Craig, Michael; Dewar, Michaela; Harris, Mathew A; Della Sala, Sergio; Wolbers, Thomas

    2016-02-01

    Flexible spatial navigation, e.g. the ability to take novel shortcuts, is contingent upon accurate mental representations of environments-cognitive maps. These cognitive maps critically depend on hippocampal place cells. In rodents, place cells replay recently travelled routes, especially during periods of behavioural inactivity (sleep/wakeful rest). This neural replay is hypothesised to promote not only the consolidation of specific experiences, but also their wider integration, e.g. into accurate cognitive maps. In humans, rest promotes the consolidation of specific experiences, but the effect of rest on the wider integration of memories remained unknown. In the present study, we examined the hypothesis that cognitive map formation is supported by rest-related integration of new spatial memories. We predicted that if wakeful rest supports cognitive map formation, then rest should enhance knowledge of overarching spatial relations that were never experienced directly during recent navigation. Forty young participants learned a route through a virtual environment before either resting wakefully or engaging in an unrelated perceptual task for 10 min. Participants in the wakeful rest condition performed more accurately in a delayed cognitive map test, requiring the pointing to landmarks from a range of locations. Importantly, the benefit of rest could not be explained by active rehearsal, but can be attributed to the promotion of consolidation-related activity. These findings (i) resonate with the demonstration of hippocampal replay in rodents, and (ii) provide the first evidence that wakeful rest can improve the integration of new spatial memories in humans, a function that has, hitherto, been associated with sleep.

  10. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    Directory of Open Access Journals (Sweden)

    Usman Khan

    2014-04-01

    Full Text Available Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication.
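
    As background on why modified Bessel functions appear in such membrane models, the textbook radial fin/membrane equation for a single annular region with fixed temperatures at its inner and outer radii can be solved as below (an idealised illustration with placeholder numbers, not the matrix/segmentation model of this record):

        import numpy as np
        from scipy.special import i0, k0

        def annular_membrane_temperature(r, r_in, r_out, theta_in, theta_out, m):
            """Excess temperature theta(r) over ambient in an annular membrane region,
            where m**2 lumps heat loss per unit area over (conductivity * thickness).
            General solution of theta'' + theta'/r - m**2*theta = 0 is
            theta = A*I0(m*r) + B*K0(m*r), with A, B fixed by the boundary
            temperatures at r_in (heater edge) and r_out (supported rim)."""
            M = np.array([[i0(m * r_in),  k0(m * r_in)],
                          [i0(m * r_out), k0(m * r_out)]])
            A, B = np.linalg.solve(M, [theta_in, theta_out])
            return A * i0(m * r) + B * k0(m * r)

        # hypothetical numbers: 100 um heater radius, 500 um membrane radius
        r = np.linspace(100e-6, 500e-6, 5)
        print(annular_membrane_temperature(r, 100e-6, 500e-6, 800.0, 0.0, m=5e3))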

  11. Accurate spectroscopic characterization of protonated oxirane: a potential prebiotic species in Titan's atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Puzzarini, Cristina [Dipartimento di Chimica " Giacomo Ciamician," Università di Bologna, Via Selmi 2, I-40126 Bologna (Italy); Ali, Ashraf [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Biczysko, Malgorzata; Barone, Vincenzo, E-mail: cristina.puzzarini@unibo.it [Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)

    2014-09-10

    An accurate spectroscopic characterization of protonated oxirane has been carried out by means of state-of-the-art computational methods and approaches. The calculated spectroscopic parameters from our recent computational investigation of oxirane together with the corresponding experimental data available were used to assess the accuracy of our predicted rotational and IR spectra of protonated oxirane. We found an accuracy of about 10 cm⁻¹ for vibrational transitions (fundamentals as well as overtones and combination bands) and, in relative terms, of 0.1% for rotational transitions. We are therefore confident that the spectroscopic data provided herein are a valuable support for the detection of protonated oxirane not only in Titan's atmosphere but also in the interstellar medium.

  12. Accurate continuous geographic assignment from low- to high-density SNP data

    DEFF Research Database (Denmark)

    Guillot, Gilles; Jónsson, Hákon; Hinge, Antoine;

    2016-01-01

    Motivation: Large-scale genotype datasets can help tracking the dispersal patterns of epidemiological outbreaks and predicting the geographic origins of individuals. This shows direct applications in forensics for profiling both victims and criminals, and in wildlife management, where poaching hotspot areas can be located. Such approaches, however, require fast and accurate geographical assignment methods. Results: We introduce a novel statistical method for geopositioning individuals of unknown origin from genotypes. Our method is based on a geostatistical model trained with a dataset of georeferenced genotypes. Statistical inference under this model can be implemented within the theoretical framework of Integrated Nested Laplace Approximation (INLA), which represents one of the major recent breakthroughs in statistics, devoid of Monte Carlo simulations. We compare the performance of our method ... ,146 SNPs. Our method appears to be best tailored for the analysis of medium-size datasets (a few tens of thousands of loci), such as reduced-representation sequencing data that become increasingly available in ecology.

  13. Accurate Modeling of Organic Molecular Crystals by Dispersion-Corrected Density Functional Tight Binding (DFTB).

    Science.gov (United States)

    Brandenburg, Jan Gerit; Grimme, Stefan

    2014-06-05

    The ambitious goal of organic crystal structure prediction challenges theoretical methods regarding their accuracy and efficiency. Dispersion-corrected density functional theory (DFT-D) in principle is applicable, but the computational demands, for example, to compute a huge number of polymorphs, are too high. Here, we demonstrate that this task can be carried out by a dispersion-corrected density functional tight binding (DFTB) method. The semiempirical Hamiltonian with the D3 correction can accurately and efficiently model both solid- and gas-phase inter- and intramolecular interactions at a speed up of 2 orders of magnitude compared to DFT-D. The mean absolute deviations for interaction (lattice) energies for various databases are typically 2-3 kcal/mol (10-20%), that is, only about two times larger than those for DFT-D. For zero-point phonon energies, small deviations of <0.5 kcal/mol compared to DFT-D are obtained.

  14. Accurate evaluation of lowest band gaps in ternary locally resonant phononic crystals

    Institute of Scientific and Technical Information of China (English)

    Wang Gang; Shao Li-Hui; Liu Yao-Zong; Wen Ji-Hong

    2006-01-01

    Based on a better understanding of the lattice vibration modes, two simple spring-mass models are constructed in order to evaluate the frequencies on both the lower and upper edges of the lowest locally resonant band gaps of the ternary locally resonant phononic crystals. The parameters of the models are given in a reasonable way based on the physical insight into the band gap mechanism. Both the lumped-mass methods and our models are used in the study of the influences of structural and the material parameters on frequencies on both edges of the lowest gaps in the ternary locally resonant phononic crystals. The analytical evaluations with our models and the theoretical predictions with the lumped-mass method are in good agreement with each other. The newly proposed heuristic models are helpful for a better understanding of the locally resonant band gap mechanism, as well as more accurate evaluation of the band edge frequencies.
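
    A common heuristic of this kind estimates the two edge frequencies from simple spring-mass resonances; the sketch below uses the generic textbook estimates with placeholder parameters, not this record's specific models:

        import numpy as np

        def band_edges(k_coating, m_core, m_matrix_cell):
            """Heuristic edge frequencies (Hz) of the lowest locally resonant gap.
            Lower edge: the coated core resonating against an effectively rigid matrix.
            Upper edge: core and the matrix mass of one unit cell oscillating in
            anti-phase through the same coating stiffness k_coating."""
            f_low = np.sqrt(k_coating / m_core) / (2.0 * np.pi)
            f_high = np.sqrt(k_coating * (1.0 / m_core + 1.0 / m_matrix_cell)) / (2.0 * np.pi)
            return f_low, f_high

        # placeholder values: soft coating stiffness, heavy-core and matrix-cell masses
        print(band_edges(k_coating=5.0e5, m_core=3.0e-3, m_matrix_cell=6.0e-3))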

  15. Accurate Calculations of Rotationally Inelastic Scattering Cross Sections Using Mixed Quantum/Classical Theory.

    Science.gov (United States)

    Semenov, Alexander; Babikov, Dmitri

    2014-01-16

    For computational treatment of rotationally inelastic scattering of molecules, we propose to use the mixed quantum/classical theory, MQCT. The old idea of treating translational motion classically, while quantum mechanics is used for rotational degrees of freedom, is developed to the new level and is applied to Na + N2 collisions in a broad range of energies. Comparison with full-quantum calculations shows that MQCT accurately reproduces all, even minor, features of energy dependence of cross sections, except scattering resonances at very low energies. The remarkable success of MQCT opens up wide opportunities for computational predictions of inelastic scattering cross sections at higher temperatures and/or for polyatomic molecules and heavier quenchers, which is computationally close to impossible within the full-quantum framework.

  16. Intramolecular hydrogen migration in alkylperoxy and hydroperoxyalkylperoxy radicals: accurate treatment of hindered rotors.

    Science.gov (United States)

    Sharma, Sandeep; Raman, Sumathy; Green, William H

    2010-05-13

    We have calculated the thermochemistry and rate coefficients for stable molecules and reactions in the title reaction families using CBS-QB3 and B3LYP/CBSB7 methods. The accurate treatment of hindered rotors for molecules having multiple internal rotors with potentials that are not independent of each other can be problematic, and a simplified scheme is suggested to treat them. This is particularly important for hydroperoxyalkylperoxy radicals (HOOQOO). Two new thermochemical group values are suggested in this paper, and with these values, the group additivity method for calculation of enthalpy as implemented in reaction mechanism generator (RMG) gives good agreement with CBS-QB3 predictions. The barrier heights follow the Evans-Polanyi relationship for each type of intramolecular hydrogen migration reaction studied.

  17. A simplified but accurate prevision method for along the sun PV pumping systems

    Energy Technology Data Exchange (ETDEWEB)

    Martire, Thierry [Laboratoire d' Electrotechnique de Montpellier (LEM), Universite de Montpellier II, CC079, Place Eugene Bataillon, F 34095 Montpellier Cedex 5 (France); Apex BP-SOLAR - 1, Rue du Grand Chene, F 34270 Saint Mathieu de Treviers (France); Glaize, Christian; Joubert, Charles [Laboratoire d' Electrotechnique de Montpellier (LEM), Universite de Montpellier II, CC079, Place Eugene Bataillon, F 34095 Montpellier Cedex 5 (France); Rouviere, Benoit [Apex BP-SOLAR - 1, Rue du Grand Chene, F 34270 Saint Mathieu de Treviers (France)

    2008-11-15

    Supplying water in desert areas or isolated sites is the perfect job for along sun pumping systems. However, due to their relatively high cost, the sizing of these systems implies the use of an accurate tool. This paper presents a method that could be used for the sizing of an along the sun pumping system based on energetic considerations, simple modeling and fitting methods. The model described allows to predict the flow rate hour per hour for a given daily solar irradiance profile. It uses a three-phase induction machine, a voltage inverter and a centrifugal pump. Some experimental results are also confronted with the calculated ones in order to show the accuracy of the model. The accuracy of the model is shown to be good and essentially depends on the precision used to model each element in the overall system. (author)
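
    The hour-by-hour energetic bookkeeping can be illustrated with a deliberately simplified sketch (a constant system efficiency and a fitted quadratic flow-versus-power curve are assumptions made here for illustration; they are not the paper's model of the induction machine, inverter and centrifugal pump):

        def daily_pumped_volume(irradiance_wm2, array_area_m2=10.0, eta_system=0.10,
                                pump_curve=(0.0, 0.004, -1.5e-7)):
            """Predict hourly flow rate and daily volume from an hourly irradiance profile.
            pump_curve: coefficients (a0, a1, a2) of a fitted flow-vs-power relation
            Q[m3/h] = a0 + a1*P + a2*P**2, with P the electrical power in watts."""
            a0, a1, a2 = pump_curve
            hourly_flow = []
            for g in irradiance_wm2:                       # one value per hour, W/m2
                power = g * array_area_m2 * eta_system     # electrical power to the motor
                flow = max(0.0, a0 + a1 * power + a2 * power ** 2)
                hourly_flow.append(flow)
            return hourly_flow, sum(hourly_flow)           # m3/h profile, m3/day

        profile = [0, 0, 0, 0, 0, 50, 200, 400, 600, 750, 850, 900,
                   880, 800, 650, 450, 250, 80, 0, 0, 0, 0, 0, 0]
        flows, volume = daily_pumped_volume(profile)
        print(f"predicted daily volume: {volume:.1f} m3")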

  18. Accurate continuous geographic assignment from low- to high-density SNP data

    DEFF Research Database (Denmark)

    Guillot, Gilles; Jónsson, Hákon; Hinge, Antoine;

    2016-01-01

    Motivation: Large-scale genotype datasets can help tracking the dispersal patterns of epidemiological outbreaks and predicting the geographic origins of individuals. This shows direct applications in forensics for profiling both victims and criminals, and in wildlife management, where poaching ... hotspot areas can be located. Such approaches, however, require fast and accurate geographical assignment methods. Results: We introduce a novel statistical method for geopositioning individuals of unknown origin from genotypes. Our method is based on a geostatistical model trained with a dataset ... and SPA in a simulation framework. We highlight the accuracy and limits of continuous spatial assignment methods at various scales by analyzing genotype datasets from a diversity of species, including Florida scrub jay birds Aphelocoma coerulescens, Arabidopsis thaliana and humans, representing 41 to 197...

  19. A stochastic model of kinetochore-microtubule attachment accurately describes fission yeast chromosome segregation.

    Science.gov (United States)

    Gay, Guillaume; Courtheoux, Thibault; Reyes, Céline; Tournier, Sylvie; Gachet, Yannick

    2012-03-19

    In fission yeast, erroneous attachments of spindle microtubules to kinetochores are frequent in early mitosis. Most are corrected before anaphase onset by a mechanism involving the protein kinase Aurora B, which destabilizes kinetochore microtubules (ktMTs) in the absence of tension between sister chromatids. In this paper, we describe a minimal mathematical model of fission yeast chromosome segregation based on the stochastic attachment and detachment of ktMTs. The model accurately reproduces the timing of correct chromosome biorientation and segregation seen in fission yeast. Prevention of attachment defects requires both appropriate kinetochore orientation and an Aurora B-like activity. The model also reproduces abnormal chromosome segregation behavior (caused by, for example, inhibition of Aurora B). It predicts that, in metaphase, merotelic attachment is prevented by a kinetochore orientation effect and corrected by an Aurora B-like activity, whereas in anaphase, it is corrected through unbalanced forces applied to the kinetochore. These unbalanced forces are sufficient to prevent aneuploidy.

  20. The goal of forming accurate impressions during social interactions: attenuating the impact of negative expectancies.

    Science.gov (United States)

    Neuberg, S L

    1989-03-01

    Investigated the idea that impression formation goals may regulate the impact that perceiver expectancies have on social interactions. In simulated interviews, interviewers Ss were given a negative expectancy about one applicant S and no expectancy about another. Half the interviewers were encouraged to form accurate impressions; the others were not. As predicted, no-goal interviewers exhibited a postinteraction impression bias against the negative-expectancy applicants, whereas the accuracy-goal interviewers did not. Moreover, the ability of the accuracy goal to reduce this bias was apparently mediated by more extensive and less biased interviewer information-gathering, which in turn elicited an improvement in negative-expectancy applicants' performances. These findings stress the theoretical and practical importance of considering the motivational context within which expectancy-tinged social interactions occur.

  1. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    Energy Technology Data Exchange (ETDEWEB)

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these
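
    The rate-meter idea described above amounts to a table lookup: an observed decrease in leaf separation over a timed exposure is converted to a dose rate. The sketch below shows that lookup with linear interpolation; the calibration pairs are invented placeholders, not the real KFM table, which depends on the instrument's geometry.

      # Illustrative only: convert an observed decrease in KFM leaf separation over a
      # timed exposure into a dose rate by linear interpolation in a calibration
      # table. The calibration pairs below are invented placeholders.

      import bisect

      # (separation decrease in mm, accumulated dose in R) -- hypothetical points
      CALIBRATION = [(1.0, 0.03), (3.0, 0.1), (6.0, 0.4), (10.0, 1.2), (14.0, 3.0)]

      def dose_rate_r_per_h(separation_drop_mm, exposure_minutes):
          drops = [d for d, _ in CALIBRATION]
          doses = [r for _, r in CALIBRATION]
          if separation_drop_mm <= drops[0]:
              dose = doses[0]
          elif separation_drop_mm >= drops[-1]:
              dose = doses[-1]
          else:
              i = bisect.bisect_left(drops, separation_drop_mm)
              x0, x1 = drops[i - 1], drops[i]
              y0, y1 = doses[i - 1], doses[i]
              dose = y0 + (y1 - y0) * (separation_drop_mm - x0) / (x1 - x0)
          return dose * 60.0 / exposure_minutes   # dose over the interval -> R/h

      print(dose_rate_r_per_h(5.0, exposure_minutes=15))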

  2. Innovative Flow Cytometry Allows Accurate Identification of Rare Circulating Cells Involved in Endothelial Dysfunction

    Science.gov (United States)

    Boraldi, Federica; Bartolomeo, Angelica; De Biasi, Sara; Orlando, Stefania; Costa, Sonia; Cossarizza, Andrea; Quaglino, Daniela

    2016-01-01

    Introduction Although rare, circulating endothelial and progenitor cells could be considered as markers of endothelial damage and repair potential, possibly predicting the severity of cardiovascular manifestations. A number of studies highlighted the role of these cells in age-related diseases, including those characterized by ectopic calcification. Nevertheless, their use in clinical practice is still controversial, mainly due to difficulties in finding reproducible and accurate methods for their determination. Methods Circulating mature cells (CMC, CD45-, CD34+, CD133-) and circulating progenitor cells (CPC, CD45dim, CD34bright, CD133+) were investigated by polychromatic high-speed flow cytometry to detect the expression of endothelial (CD309+) or osteogenic (BAP+) differentiation markers in healthy subjects and in patients affected by peripheral vascular manifestations associated with ectopic calcification. Results This study shows that: 1) polychromatic flow cytometry represents a valuable tool to accurately identify rare cells; 2) the balance of CD309+ on CMC/CD309+ on CPC is altered in patients affected by peripheral vascular manifestations, suggesting the occurrence of vascular damage and low repair potential; 3) the increase of circulating cells exhibiting a shift towards an osteoblast-like phenotype (BAP+) is observed in the presence of ectopic calcification. Conclusion Differences between healthy subjects and patients with ectopic calcification indicate that this approach may be useful to better evaluate endothelial dysfunction in a clinical context. PMID:27560136

  3. Accurate determination of the free-free Gaunt factor; I - non-relativistic Gaunt factors

    CERN Document Server

    van Hoof, P A M; Volk, K; Chatzikos, M; Ferland, G J; Lykins, M; Porter, R L; Wang, Y

    2014-01-01

    Modern spectral synthesis codes need the thermally averaged free-free Gaunt factor defined over a very wide range of parameter space in order to produce an accurate prediction for the spectrum emitted by an ionized plasma. Until now no set of data exists that would meet this need in a fully satisfactory way. We have therefore undertaken to produce a table of very accurate non-relativistic Gaunt factors over a much wider range of parameters than has ever been produced before. We first produced a table of non-averaged Gaunt factors, covering the parameter space log10(epsilon_i) = -20 to +10 and log10(w) = -30 to +25. We then continued to produce a table of thermally averaged Gaunt factors covering the parameter space log10(gamma^2) = -6 to +10 and log10(u) = -16 to +13. Finally we produced a table of the frequency integrated Gaunt factor covering the parameter space log10(gamma^2) = -6 to +10. All the data presented in this paper are available online.

  4. Rapid identification of sequences for orphan enzymes to power accurate protein annotation.

    Directory of Open Access Journals (Sweden)

    Kevin R Ramkissoon

    Full Text Available The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the "back catalog" of enzymology--"orphan enzymes," those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme "back catalog" is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology's "back catalog" another powerful tool to drive accurate genome annotation.

  5. Rapid identification of sequences for orphan enzymes to power accurate protein annotation.

    Science.gov (United States)

    Ramkissoon, Kevin R; Miller, Jennifer K; Ojha, Sunil; Watson, Douglas S; Bomar, Martha G; Galande, Amit K; Shearer, Alexander G

    2013-01-01

    The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the "back catalog" of enzymology--"orphan enzymes," those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme "back catalog" is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology's "back catalog" another powerful tool to drive accurate genome annotation.

  6. Fast and Accurate Electronic Excitations in Cyanines with the Many-Body Bethe-Salpeter Approach.

    Science.gov (United States)

    Boulanger, Paul; Jacquemin, Denis; Duchemin, Ivan; Blase, Xavier

    2014-03-11

    The accurate prediction of the optical signatures of cyanine derivatives remains an important challenge in theoretical chemistry. Indeed, up to now, only the most expensive quantum chemical methods (CAS-PT2, CC, DMC, etc.) yield consistent and accurate data, impeding applications to real-life molecules. Here, we investigate the lowest lying singlet excitation energies of increasingly long cyanine dyes within the GW and Bethe-Salpeter Green's function many-body perturbation theory. Our results are in remarkable agreement with available coupled-cluster (exCC3) data, bringing these two single-reference perturbation techniques within a 0.05 eV maximum discrepancy. By comparison, available TD-DFT calculations with various semilocal, global, or range-separated hybrid functionals overshoot the transition energies by a typical error of 0.3-0.6 eV. The obtained accuracy is achieved with a parameter-free formalism that offers similar accuracy for metallic or insulating, finite size or extended systems.

  7. Prometheus: Scalable and Accurate Emulation of Task-Based Applications on Many-Core Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Kestor, Gokcen; Gioiosa, Roberto; Chavarría-Miranda, Daniel

    2015-03-01

    Modeling the performance of non-deterministic parallel applications on future many-core systems requires the development of novel simulation and emulation techniques and tools. We present “Prometheus”, a fast, accurate and modular emulation framework for task-based applications. By raising the level of abstraction and focusing on runtime synchronization, Prometheus can accurately predict applications’ performance on very large many-core systems. We validate our emulation framework against two real platforms (AMD Interlagos and Intel MIC) and report error rates generally below 4%. We then evaluate Prometheus’ performance and scalability: our results show that Prometheus can emulate a task-based application on a system with 512K cores in 11.5 hours. We present two test cases that show how Prometheus can be used to study the performance and behavior of systems that present some of the characteristics expected from exascale supercomputer nodes, such as active power management and processors with a high number of cores but reduced cache per core.

  8. Accurate potential energy, dipole moment curves, and lifetimes of vibrational states of heteronuclear alkali dimers

    CERN Document Server

    Fedorov, Dmitry A; Varganov, Sergey A

    2014-01-01

    We calculate the potential energy curves, the permanent dipole moment curves, and the lifetimes of the ground and excited vibrational states of the heteronuclear alkali dimers XY (X, Y = Li, Na, K, Rb, Cs) in the X 1Σ+ electronic state using the coupled cluster with singles, doubles and triples (CCSDT) method. All-electron quadruple-ζ basis sets with additional core functions are used for Li and Na, and small-core relativistic effective core potentials with quadruple-ζ quality basis sets are used for K, Rb and Cs. The inclusion of the coupled cluster non-perturbative triple excitations is shown to be crucial for obtaining accurate potential energy curves. A large one-electron basis set with additional core functions is needed for the accurate prediction of permanent dipole moments. The dissociation energies are overestimated by only 14 cm-1 for LiNa and by no more than 114 cm-1 for the other molecules. The discrepancies between the experimental and calculated harmonic vibrational frequencie...

  9. Effective and accurate approach for modeling of commensurate-incommensurate transition in krypton monolayer on graphite.

    Science.gov (United States)

    Ustinov, E A

    2014-10-01

    The commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there remains room to generalize the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed the determination of phase diagrams of 2D argon layers on a uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. The applicability of the methodology is demonstrated on the krypton-graphite system. The phase diagram of the krypton molecular layer, the thermodynamic functions of the coexisting phases, and a method for predicting adsorption isotherms are analyzed, accounting for compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid systems.

  10. Accurate calculation of the K photoionization around the minimum near threshold

    Science.gov (United States)

    Theodosiou, Constantine

    2003-05-01

    The accurate prediction of the location of Cooper minima in the photoionization cross sections of alkali metal atoms has been used in the past as a refined test for theoretical calculations. The older measurements of Hudson and Carter (JOSA 57, 1471 (1967)) for potassium were drastically improved near the minimum by Sandner et al. (Phys. Rev. A 23, 2732 (1981)). The latter work found good overall agreement with the most accurate calculations, but the observed minimum had an overall shift and was clearly narrower than the calculated one. We have revisited the theoretical treatment within the Coulomb approximation with a central potential core approach (CACP) (Phys. Rev. A 30, 2881 (1984)), treating the relativistic effects carefully. We find excellent agreement with the measurements of Sandner et al. Our study indicates that the improvement stems from the separate treatment of the ɛ p_3/2 and ɛ p_1/2 partial photoionization cross sections, in addition to the inclusion of a realistic central potential to describe the ion core.

  11. Accurate fit and thermochemical analysis of the adsorption isotherms of methane in zeolite 13X

    Energy Technology Data Exchange (ETDEWEB)

    Llano-Restrepo, M. [University of Valle, Valle (Colombia). School of Chemical Engineering

    2010-07-01

    The recovery of methane from landfill gases and coal mines has become of the utmost importance because methane has a global-climate warming potential much higher than that of carbon dioxide. Zeolite 13X is widely used for the separation of methane and carbon dioxide from such sources. Accurate fits of the adsorption isotherms of methane in zeolite 13X are required for the rigorous simulation of that separation. In this work, a generalized statistical thermodynamic adsorption (GSTA) model has been used to obtain a very accurate correlation of a published set of adsorption isotherms of methane in zeolite 13X that were measured from 120 K to 273 K over the pressure range 0.07 Pa to 12.2 MPa. In contrast, none of five traditional adsorption models was capable of fitting these isotherms. A thermochemical interpretation of the correlation is provided and predictions for the isosteric heat of adsorption are made, which turn out to be in excellent agreement with the available experimental data.

  12. Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional

    Science.gov (United States)

    Sun, Jianwei; Remsing, Richard C.; Zhang, Yubo; Sun, Zhaoru; Ruzsinszky, Adrienn; Peng, Haowei; Yang, Zenghui; Paul, Arpita; Waghmare, Umesh; Wu, Xifan; Klein, Michael L.; Perdew, John P.

    2016-09-01

    One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.

  13. Robust and accurate anomaly detection in ECG artifacts using time series motif discovery.

    Science.gov (United States)

    Sivaraks, Haemwaan; Ratanamahatana, Chotirat Ann

    2015-01-01

    Electrocardiogram (ECG) anomaly detection is an important technique for detecting dissimilar heartbeats, which helps identify abnormal ECGs before the diagnosis process. Currently available ECG anomaly detection methods, ranging from academic research to commercial ECG machines, still suffer from a high false alarm rate because these methods are not able to differentiate ECG artifacts from the real ECG signal, especially when ECG artifacts are similar to ECG signals in terms of shape and/or frequency. The problem leads to high vigilance for physicians and misinterpretation risk for nonspecialists. Therefore, this work proposes a novel anomaly detection technique that is highly robust and accurate in the presence of ECG artifacts and can effectively reduce the false alarm rate. Expert knowledge from cardiologists and a motif discovery technique are utilized in our design. In addition, every step of the algorithm conforms to the interpretation of cardiologists. Our method can be applied to both single-lead and multilead ECGs. Our experimental results on real ECG datasets are interpreted and evaluated by cardiologists. Our proposed algorithm can mostly achieve 100% accuracy of detection (AoD), sensitivity, specificity, and positive predictive value with a 0% false alarm rate. The results demonstrate that our proposed method is highly accurate and robust to artifacts, compared with competitive anomaly detection methods.
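
    For readers unfamiliar with motif/discord discovery, the sketch below shows the simplest, brute-force form of the idea: the subsequence whose nearest non-overlapping neighbor is farthest away is reported as the anomaly. It is a generic illustration, not the authors' artifact-robust algorithm; the function name, window length and injected anomaly are made up.

      # Generic sketch of time-series discord discovery (the basic idea behind
      # motif/discord-based anomaly detection), not the authors' algorithm.

      import numpy as np

      def find_discord(signal, window):
          signal = np.asarray(signal, dtype=float)
          n = len(signal) - window + 1
          subs = np.array([signal[i:i + window] for i in range(n)])
          # z-normalize each subsequence so shape, not amplitude, is compared
          subs = (subs - subs.mean(axis=1, keepdims=True)) / (subs.std(axis=1, keepdims=True) + 1e-12)
          best_idx, best_dist = -1, -np.inf
          for i in range(n):
              dists = np.linalg.norm(subs - subs[i], axis=1)
              lo, hi = max(0, i - window + 1), min(n, i + window)
              dists[lo:hi] = np.inf                 # ignore trivially overlapping neighbors
              nn = dists.min()
              if np.isfinite(nn) and nn > best_dist:
                  best_idx, best_dist = i, nn
          return best_idx, best_dist

      beats = np.sin(np.linspace(0, 20 * np.pi, 1000))
      beats[500:520] += 2.0                         # inject an artificial anomaly
      print(find_discord(beats, window=50))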

  14. Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model

    Science.gov (United States)

    Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.

    2007-05-01

    Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem

  15. Urinary isoxanthohumol is a specific and accurate biomarker of beer consumption.

    Science.gov (United States)

    Quifer-Rada, Paola; Martínez-Huélamo, Miriam; Chiva-Blanch, Gemma; Jáuregui, Olga; Estruch, Ramon; Lamuela-Raventós, Rosa M

    2014-04-01

    Biomarkers of food consumption are a powerful tool to obtain more objective measurements of dietary exposure and to monitor compliance in clinical trials. In this study, we evaluated the effectiveness of urinary isoxanthohumol (IX) excretion as an accurate biomarker of beer consumption. A dose-response clinical trial, a randomized crossover clinical trial, and a cohort study were performed. In the dose-response trial, 41 young volunteers (males and females, aged 28 ± 3 y) consumed different doses of beer at night and a spot urine sample was collected the following morning. In the clinical trial, 33 males with high cardiovascular risk (aged 61 ± 7 y) were randomly administered 30 g of ethanol/d as gin or beer, or an equivalent amount of polyphenols as nonalcoholic beer, for 4 wk. Additionally, a subsample of 46 volunteers from the PREDIMED (Prevención con Dieta Mediterránea) study (males and females, aged 63 ± 5 y) was also evaluated. Prenylflavonoids were quantified in urine samples by liquid chromatography coupled to mass spectrometry. IX urinary recovery increased linearly with the size of the beer dose in male volunteers. A significant increase in IX excretion (4.0 ± 1.6 μg/g creatinine) was found after consumption of beer and nonalcoholic beer for 4 wk. Urinary IX distinguished beer consumers from abstainers with a sensitivity of 67% and specificity of 100% (positive predictive value = 70%, negative predictive value = 100% in real-life conditions). IX in urine samples was found to be a specific and accurate biomarker of beer consumption and may be a powerful tool in epidemiologic studies.
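
    The diagnostic figures quoted above (sensitivity, specificity and the two predictive values) all derive from a 2x2 table of test outcomes. The sketch below shows the arithmetic; the counts are hypothetical, chosen only to roughly match the reported sensitivity and specificity, not the study's data (the predictive values additionally depend on the prevalence assumed).

      # How the quoted diagnostic metrics relate to a 2x2 table of test outcomes.
      # The counts below are hypothetical, not the study's data.

      def diagnostic_metrics(tp, fp, fn, tn):
          sensitivity = tp / (tp + fn)     # fraction of true consumers detected
          specificity = tn / (tn + fp)     # fraction of abstainers correctly excluded
          ppv = tp / (tp + fp) if tp + fp else float("nan")
          npv = tn / (tn + fn) if tn + fn else float("nan")
          return sensitivity, specificity, ppv, npv

      print(diagnostic_metrics(tp=20, fp=0, fn=10, tn=16))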

  16. Parallel kinetic Monte Carlo simulation framework incorporating accurate models of adsorbate lateral interactions

    Science.gov (United States)

    Nielsen, Jens; d'Avezac, Mayeul; Hetherington, James; Stamatakis, Michail

    2013-12-01

    Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
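
    To make the role of the lateral-interaction model concrete, the toy sketch below evaluates the adlayer energy of a random occupancy pattern on a periodic square lattice with a first-nearest-neighbor-only pairwise Hamiltonian and then with an added second-nearest-neighbor pair term. The interaction energies and lattice are invented and unrelated to the NO oxidation system, and a full cluster expansion would also carry many-body terms; the point is only that truncating the interactions changes the energetics and hence the predicted rates.

      # Toy pairwise lateral-interaction energetics on a periodic square lattice:
      # compare a 1NN-only Hamiltonian with one that also carries a 2NN term.
      # Interaction energies (in eV) are invented numbers.

      import numpy as np

      def lattice_energy(occ, e_1nn, e_2nn=0.0):
          """occ: 2D array of 0/1 occupancies with periodic boundaries."""
          occ = np.asarray(occ)
          first = occ * (np.roll(occ, 1, axis=0) + np.roll(occ, 1, axis=1))
          second = occ * (np.roll(np.roll(occ, 1, axis=0), 1, axis=1) +
                          np.roll(np.roll(occ, 1, axis=0), -1, axis=1))
          return e_1nn * first.sum() + e_2nn * second.sum()

      rng = np.random.default_rng(0)
      occ = (rng.random((20, 20)) < 0.5).astype(int)
      print("1NN only  :", lattice_energy(occ, e_1nn=0.05))
      print("1NN + 2NN :", lattice_energy(occ, e_1nn=0.05, e_2nn=0.02))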

  17. Accurate complex scaling of three dimensional numerical potentials.

    Science.gov (United States)

    Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan; Deutsch, Thierry

    2013-05-28

    The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can be performed efficiently and accurately. By carrying out an illustrative resonant state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
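
    The similarity-transformation idea is easy to see in one dimension: scaling x -> x*exp(i*theta) multiplies the kinetic term by exp(-2i*theta) and evaluates the potential at complex coordinates, so resonances appear as isolated complex eigenvalues. The sketch below is a finite-difference toy with an invented barrier potential, not the wavelet-based 3D scheme described in the paper.

      # Minimal 1D finite-difference illustration of complex scaling (toy model).

      import numpy as np

      def complex_scaled_spectrum(potential, x, theta):
          h = x[1] - x[0]
          n = len(x)
          # second-derivative matrix with Dirichlet boundaries
          lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
                 + np.diag(np.full(n - 1, 1.0), 1)) / h**2
          kinetic = -0.5 * np.exp(-2j * theta) * lap     # kinetic term picks up exp(-2i*theta)
          v = np.diag(potential(x * np.exp(1j * theta))) # potential at scaled coordinates
          return np.linalg.eigvals(kinetic + v)

      # Invented barrier potential supporting a shape resonance
      potential = lambda z: 5.0 * z**2 * np.exp(-z**2)
      x = np.linspace(-12.0, 12.0, 600)
      eig = complex_scaled_spectrum(potential, x, theta=0.3)
      print(sorted(eig, key=lambda e: e.real)[:5])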

  18. GLIMPSE: Accurate 3D weak lensing reconstructions using sparsity

    CERN Document Server

    Leonard, Adrienne; Starck, Jean-Luc

    2013-01-01

    We present GLIMPSE - Gravitational Lensing Inversion and MaPping with Sparse Estimators - a new algorithm to generate density reconstructions in three dimensions from photometric weak lensing measurements. This is an extension of earlier work in one dimension aimed at applying compressive sensing theory to the inversion of gravitational lensing measurements to recover 3D density maps. Using the assumption that the density can be represented sparsely in our chosen basis - 2D transverse wavelets and 1D line-of-sight Dirac functions - we show that clusters of galaxies can be identified and accurately localised and characterised using this method. Throughout, we use simulated data consistent with the quality currently attainable in large surveys. We present a thorough statistical analysis of the errors and biases in both the redshifts of detected structures and their amplitudes. The GLIMPSE method is able to produce reconstructions at significantly higher resolution than the input data; in this paper we show reco...

  19. Methods for Accurate Free Flight Measurement of Drag Coefficients

    CERN Document Server

    Courtney, Elya; Courtney, Michael

    2015-01-01

    This paper describes experimental methods for free flight measurement of drag coefficients to an accuracy of approximately 1%. There are two main methods of determining free flight drag coefficients, or equivalent ballistic coefficients: 1) measuring near and far velocities over a known distance and 2) measuring a near velocity and time of flight over a known distance. Atmospheric conditions must also be known and nearly constant over the flight path. A number of tradeoffs are important when designing experiments to accurately determine drag coefficients. The flight distance must be large enough so that the projectile's loss of velocity is significant compared with its initial velocity and much larger than the uncertainty in the near and/or far velocity measurements. On the other hand, since drag coefficients and ballistic coefficients both depend on velocity, the change in velocity over the flight path should be small enough that the average drag coefficient over the path (which is what is really determined)...
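
    Under the stated assumptions of nearly constant atmospheric conditions and a constant drag coefficient over the path, the first measurement scheme (near and far velocities over a known distance) reduces to a closed-form expression: integrating m*dv/dx = -0.5*rho*Cd*A*v**2 gives an exponential decay of velocity with distance, so the average Cd follows directly from the two velocities. The sketch below is that textbook reduction with an invented projectile, not the authors' full experimental procedure.

      # Textbook reduction (not the authors' full method): average drag coefficient
      # from near/far velocities over a known distance, assuming constant Cd and
      # air density along the path.

      import math

      def average_drag_coefficient(v_near, v_far, distance_m, mass_kg,
                                   frontal_area_m2, rho_air=1.225):
          # Cd = 2 m ln(v_near / v_far) / (rho * A * d)
          return 2.0 * mass_kg * math.log(v_near / v_far) / (rho_air * frontal_area_m2 * distance_m)

      # Hypothetical projectile: 10 g, 7.6 mm diameter, 200 m flight path
      area = math.pi * (0.0076 / 2) ** 2
      print(average_drag_coefficient(v_near=850.0, v_far=780.0, distance_m=200.0,
                                     mass_kg=0.010, frontal_area_m2=area))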

  20. An Integrative Approach to Accurate Vehicle Logo Detection

    Directory of Open Access Journals (Sweden)

    Hao Pan

    2013-01-01

    Accurate vehicle logo detection is required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small size of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed, following the visual attention mechanism of human vision. Two pre-logo detection steps, that is, vehicle region detection and a small RoI segmentation, rapidly localize a small logo target. An enhanced Adaboost algorithm, together with two types of features, Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on prior knowledge about the logos’ position relative to license plates, which can be accurately localized in frontal vehicle images. A two-stage cascade classifier processes the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.

  1. Accurate Parallel Algorithm for Adini Nonconforming Finite Element

    Institute of Scientific and Technical Information of China (English)

    罗平; 周爱辉

    2003-01-01

    Multi-parameter asymptotic expansions are interesting since they justify the use of multi-parameter extrapolation, which can be implemented in parallel, and they are well studied in many papers for conforming finite element methods. For nonconforming finite element methods, however, work on multi-parameter asymptotic expansions and extrapolation has seldom appeared in the literature. This paper considers the solution of the biharmonic equation using Adini nonconforming finite elements and reports new results on multi-parameter asymptotic expansions and extrapolation. The Adini nonconforming finite element solution of the biharmonic equation is shown to have a multi-parameter asymptotic error expansion that admits extrapolation. This expansion and a multi-parameter extrapolation technique were used to develop an accurate parallel approximation algorithm for the biharmonic equation. Finally, numerical results verify the extrapolation theory.

  2. Accurate Performance Analysis of Opportunistic Decode-and-Forward Relaying

    CERN Document Server

    Tourki, Kamel; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and we take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive statistics based on the exact probability density function (PDF) of each hop. Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation. Furthermore, we carry out an asymptotic performance analysis and deduce the diversity order. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures.

  3. Spectropolarimetrically accurate magnetohydrostatic sunspot model for forward modelling in helioseismology

    CERN Document Server

    Przybylski, D; Cally, P S

    2015-01-01

    We present a technique to construct a spectropolarimetrically accurate magneto-hydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion and absorption in the solar interior and photosphere with the sunspot embedded into it. With the 6173 Å magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as the full Stokes vector for the simulation at various positions on the solar disk, and analyse the influence of the non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions on the solar disk were simulated and characterised. An increase in acoustic power in the simulated observ...

  4. Accurate derivative evaluation for any Grad–Shafranov solver

    Energy Technology Data Exchange (ETDEWEB)

    Ricketson, L.F. [Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 (United States); Cerfon, A.J., E-mail: cerfon@cims.nyu.edu [Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 (United States); Rachh, M. [Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 (United States); Freidberg, J.P. [Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2016-01-15

    We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad–Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.

  5. A fast and accurate FPGA based QRS detection system.

    Science.gov (United States)

    Shukla, Ashish; Macchiarulo, Luca

    2008-01-01

    An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software-based QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of the eight previously detected peaks. The hardware design has an accuracy in excess of 96% in detecting beats correctly when tested with a subset of five 30-minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has a higher detection accuracy than our previous design, and takes almost half the analysis time in comparison to the software-based approach.
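
    A software analogue of the threshold-update rule described above might look like the sketch below: the detection threshold for the next cycle is derived from the median of the eight most recently detected peak amplitudes. The class name, initial threshold and 0.5 scale factor are placeholders, not parameters taken from the paper.

      # Adaptive QRS threshold from the median of the last eight detected peaks
      # (illustrative placeholder values, not the paper's hardware parameters).

      from collections import deque
      from statistics import median

      class AdaptiveQrsThreshold:
          def __init__(self, initial_threshold, history=8, scale=0.5):
              self.peaks = deque(maxlen=history)
              self.threshold = initial_threshold
              self.scale = scale

          def register_peak(self, amplitude):
              self.peaks.append(amplitude)
              self.threshold = self.scale * median(self.peaks)
              return self.threshold

      detector = AdaptiveQrsThreshold(initial_threshold=0.3)
      for amp in [1.0, 1.1, 0.9, 1.2, 1.0, 1.05, 0.95, 1.1, 1.0]:
          detector.register_peak(amp)
      print(detector.threshold)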

  6. Accurate platelet counting in an insidious case of pseudothrombocytopenia.

    Science.gov (United States)

    Lombarts, A J; Zijlstra, J J; Peters, R H; Thomasson, C G; Franck, P F

    1999-01-01

    Anticoagulant-induced aggregation of platelets leads to pseudothrombocytopenia. Blood cell counters generally trigger alarms to alert the user. We describe an insidious case of pseudothrombocytopenia, in which the complete absence of Coulter counter alarms, both in ethylenediaminetetraacetic acid blood and in citrate or acid citrate dextrose blood samples, was compounded by the fact that the massive aggregates were found exclusively at the edges of the blood smear. Non-recognition of pseudothrombocytopenia can have serious diagnostic and therapeutic consequences. While the anti-aggregant mixture citrate-theophylline-adenosine-dipyridamole completely failed to prevent pseudothrombocytopenia, the addition of iloprost to anticoagulants only partially prevented the aggregation. Only the prior addition of gentamicin to any of the anticoagulants used resulted in complete prevention of pseudothrombocytopenia and made it possible to count the platelets accurately.

  7. Accurate Calculation of Fringe Fields in the LHC Main Dipoles

    CERN Document Server

    Kurz, S; Siegel, N

    2000-01-01

    The ROXIE program developed at CERN for the design and optimization of the superconducting LHC magnets has recently been extended, in a collaboration with the University of Stuttgart, Germany, with a field computation method based on the coupling between the boundary element method (BEM) and the finite element method (FEM). This avoids meshing the coils and the air regions, and avoids artificial far-field boundary conditions. The method is therefore especially suited for the accurate calculation of fields in superconducting magnets in which the field is dominated by the coil. We will present fringe field calculations in both 2d and 3d geometries to evaluate the effect of the connections and the cryostat on the field quality and on the flux density to which auxiliary bus-bars are exposed.

  8. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    Science.gov (United States)

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results.

  9. Accurate Enthalpies of Formation of Astromolecules: Energy, Stability and Abundance

    CERN Document Server

    Etim, Emmanuel E

    2016-01-01

    Accurate enthalpies of formation are reported for known and potential astromolecules using high-level ab initio quantum chemical calculations. A total of 130 molecules comprising 31 isomeric groups and 24 cyanide/isocyanide pairs, with 3 to 12 atoms, have been considered. The results show an interesting, and surprisingly little explored, relationship between energy, stability and abundance (ESA) among these molecules. Among the isomeric species, isomers with lower enthalpies of formation are more easily observed in the interstellar medium than their counterparts with higher enthalpies of formation. Available data in the literature confirm the high abundance of the most stable isomer over other isomers in the different groups considered. The potential for interstellar hydrogen bonding accounts for the few exceptions observed. Thus, in general, it suffices to say that the interstellar abundances of related species are directly proportional to their stabilities. The immediate consequences of ...

  10. Accurate performance analysis of opportunistic decode-and-forward relaying

    KAUST Repository

    Tourki, Kamel

    2011-07-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and the destination may use a selection combining technique. We first derive the exact statistics of each hop, in terms of probability density function (PDF). Then, the PDFs are used to determine accurate closed form expressions for end-to-end outage probability for a transmission rate R. Furthermore, we evaluate the asymptotical performance analysis and the diversity order is deduced. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.

  11. Second-Order Accurate Projective Integrators for Multiscale Problems

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S L; Gear, C W

    2005-05-27

    We introduce new projective versions of second-order accurate Runge-Kutta and Adams-Bashforth methods, and demonstrate their use as outer integrators in solving stiff differential systems. An important outcome is that the new outer integrators, when combined with an inner telescopic projective integrator, can result in fully explicit methods with adaptive outer step size selection and solution accuracy comparable to those obtained by implicit integrators. If the stiff differential equations are not directly available, our formulations and stability analysis are general enough to allow the combined outer-inner projective integrators to be applied to black-box legacy codes or perform a coarse-grained time integration of microscopic systems to evolve macroscopic behavior, for example.
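
    As background for the outer/inner pairing described above, the sketch below implements the simplest, first-order member of the family (projective forward Euler): k damping inner Euler steps followed by an extrapolation over a large outer step. The paper's second-order Runge-Kutta and Adams-Bashforth outer integrators are more elaborate; the test problem and all step sizes here are invented.

      # First-order projective integration sketch (projective forward Euler).

      import numpy as np

      def projective_forward_euler(f, y0, t_end, inner_dt, k, outer_dt):
          t, y = 0.0, np.asarray(y0, dtype=float)
          while t < t_end:
              y_prev = y
              for _ in range(k):                    # k small inner steps damp the fast modes
                  y_prev = y
                  y = y + inner_dt * f(t, y)
                  t += inner_dt
              slope = (y - y_prev) / inner_dt       # chord slope from the last two inner states
              y = y + outer_dt * slope              # projective (outer) step
              t += outer_dt
          return t, y

      # Stiff linear test problem (illustrative): y' = -100*(y - cos(t))
      f = lambda t, y: -100.0 * (y - np.cos(t))
      print(projective_forward_euler(f, y0=[0.0], t_end=1.0,
                                     inner_dt=0.002, k=5, outer_dt=0.02))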

  12. CLOMP: Accurately Characterizing OpenMP Application Overheads

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Gyllenhaal, J; de Supinski, B

    2008-02-11

    Despite its ease of use, OpenMP has failed to gain widespread use on large scale systems, largely due to its failure to deliver sufficient performance. Our experience indicates that the cost of initiating OpenMP regions is simply too high for the desired OpenMP usage scenario of many applications. In this paper, we introduce CLOMP, a new benchmark to characterize this aspect of OpenMP implementations accurately. CLOMP complements the existing EPCC benchmark suite to provide simple, easy to understand measurements of OpenMP overheads in the context of application usage scenarios. Our results for several OpenMP implementations demonstrate that CLOMP identifies the amount of work required to compensate for the overheads observed with EPCC. Further, we show that CLOMP also captures limitations for OpenMP parallelization on NUMA systems.

  13. Fast and spectrally accurate summation of 2-periodic Stokes potentials

    CERN Document Server

    Lindbo, Dag

    2011-01-01

    We derive an Ewald decomposition for the Stokeslet in planar periodicity and a novel PME-type O(N log N) method for the fast evaluation of the resulting sums. The decomposition is the natural 2P counterpart to the classical 3P decomposition by Hasimoto, and is given in an explicit form not found in the literature. Truncation error estimates are provided to aid in selecting parameters. The fast, PME-type method appears to be the first fast method for computing Stokeslet Ewald sums in planar periodicity, and has three attractive properties: it is spectrally accurate; it uses the minimal amount of memory that a gridded Ewald method can use; and it provides clarity regarding numerical errors and how to choose parameters. Analytical and numerical results are given to support this. We explore the practicalities of the proposed method, and survey the computational issues involved in applying it to 2-periodic boundary integral Stokes problems.

  14. Accurate mass measurements on neutron-deficient krypton isotopes

    CERN Document Server

    Rodríguez, D; Äystö, J; Beck, D

    2006-01-01

    The masses of 72-78,80,82,86Kr were measured directly with the ISOLTRAP Penning trap mass spectrometer at ISOLDE/CERN. For all these nuclides, the measurements yielded mass uncertainties below 10 keV. The ISOLTRAP mass values for 72-75Kr are more precise than the previous results obtained by means of other techniques, and thus completely determine the new values in the Atomic-Mass Evaluation. Besides the interest of these masses for nuclear astrophysics, nuclear structure studies, and Standard Model tests, these results constitute a valuable and accurate input to improve mass models. In this paper, we present the mass measurements and discuss the mass evaluation for these Kr isotopes.

  15. Accurate mass measurements on neutron-deficient krypton isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, D. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany)]. E-mail: rodriguez@lpccaen.in2p3.fr; Audi, G. [CSNSM-IN2P3-CNRS, 91405 Orsay-Campus(France); Aystoe, J. [University of Jyvaeskylae, Department of Physics, P.O. Box 35, 40351 Jyvaeskylae (Finland); Beck, D. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany); Blaum, K. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany); Institute of Physics, University of Mainz, Staudingerweg 7, 55128 Mainz (Germany); Bollen, G. [NSCL, Michigan State University, East Lansing, MI 48824-1321 (United States); Herfurth, F. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany); Jokinen, A. [University of Jyvaeskylae, Department of Physics, P.O. Box 35, 40351 Jyvaeskylae (Finland); Kellerbauer, A. [CERN, Division EP, 1211 Geneva 23 (Switzerland); Kluge, H.-J. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany); University of Heidelberg, 69120 Heidelberg (Germany); Kolhinen, V.S. [University of Jyvaeskylae, Department of Physics, P.O. Box 35, 40351 Jyvaeskylae (Finland); Oinonen, M. [Helsinki Institute of Physics, P.O. Box 64, 00014 University of Helsinki (Finland); Sauvan, E. [Institute of Physics, University of Mainz, Staudingerweg 7, 55128 Mainz (Germany); Schwarz, S. [NSCL, Michigan State University, East Lansing, MI 48824-1321 (United States)

    2006-04-17

    The masses of 72-78,80,82,86Kr were measured directly with the ISOLTRAP Penning trap mass spectrometer at ISOLDE/CERN. For all these nuclides, the measurements yielded mass uncertainties below 10 keV. The ISOLTRAP mass values for 72-75Kr outweighed previous results obtained by means of other techniques, and thus completely determine the new values in the Atomic-Mass Evaluation. Besides the interest of these masses for nuclear astrophysics, nuclear structure studies, and Standard Model tests, these results constitute a valuable and accurate input to improve mass models. In this paper, we present the mass measurements and discuss the mass evaluation for these Kr isotopes.

  16. Redundancy-Free, Accurate Analytical Center Machine for Classification

    Institute of Scientific and Technical Information of China (English)

    ZHENG Fanzi; QIU Zhengding; Leng Yonggang; Yue Jianhai

    2005-01-01

    Analytical center machine (ACM) has remarkable generalization performance based on the analytical center of version space and outperforms SVM. From an analysis of the geometry of machine learning and the principle of ACM, it is shown that some training patterns are redundant to the definition of the version space. Redundant patterns push the ACM classifier away from the analytical center of the prime version space, so the generalization performance degrades; at the same time, redundant patterns slow down the classifier and reduce the efficiency of storage. Thus, an incremental algorithm is proposed to remove redundant patterns and is embedded into the ACM framework, yielding a redundancy-free, accurate analytical center machine (RFA-ACM) for classification. Experiments with the Heart, Thyroid, and Banana datasets demonstrate the validity of RFA-ACM.

  17. Accurate Element of Compressive Bar considering the Effect of Displacement

    Directory of Open Access Journals (Sweden)

    Lifeng Tang

    2015-01-01

    Full Text Available By constructing the compressive bar element and developing its stiffness matrix, most issues concerning the compressive bar can be solved. In this paper, the displacement shape functions are obtained from the second derivative of the equilibrium differential governing equations. The finite element formulation of the compressive bar element is then developed using the potential energy principle and the analytical shape functions. Based on the total potential energy variational principle, the static and geometric stiffness matrices are proposed, in which the large deformation of the compressive bar is considered. To verify the accuracy and validity of the analytical trial function element proposed in this paper, a number of numerical examples are presented. Comparisons show that the proposed element has high computational efficiency and a rapid rate of convergence.

  18. Accurate finite difference methods for time-harmonic wave propagation

    Science.gov (United States)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.

  19. Accurate and efficient maximal ball algorithm for pore network extraction

    Science.gov (United States)

    Arand, Frederick; Hesser, Jürgen

    2017-04-01

    The maximal ball (MB) algorithm is a well established method for the morphological analysis of porous media. It extracts a network of pores and throats from volumetric data. This paper describes structural modifications to the algorithm, while the basic concepts are preserved. Substantial improvements to accuracy and efficiency are achieved as follows: First, all calculations are performed on a subvoxel accurate distance field, and no approximations to discretize balls are made. Second, data structures are simplified to keep memory usage low and improve algorithmic speed. Third, small and reasonable adjustments increase speed significantly. In volumes with high porosity, memory usage is improved compared to classic MB algorithms. Furthermore, processing is accelerated more than three times. Finally, the modified MB algorithm is verified by extracting several network properties from reference as well as real data sets. Runtimes are measured and compared to literature.
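
    As a much-simplified illustration of the distance-field processing the algorithm starts from, the sketch below computes the Euclidean distance transform of a binary pore space and keeps local maxima of that field as candidate maximal-ball centres. It is a generic sketch built on standard SciPy calls, not the paper's subvoxel-accurate network-extraction pipeline; the synthetic sample is invented.

      # Candidate maximal-ball centres from a distance field (generic illustration).

      import numpy as np
      from scipy import ndimage

      def pore_seed_candidates(pore_space):
          """pore_space: boolean array, True where the voxel is pore (void)."""
          dist = ndimage.distance_transform_edt(pore_space)   # distance to nearest solid voxel
          local_max = (dist == ndimage.maximum_filter(dist, size=3)) & pore_space
          centres = np.argwhere(local_max)
          radii = dist[local_max]
          return centres, radii

      # Tiny synthetic 3D sample: two spherical voids in a solid block
      shape = (40, 40, 40)
      zz, yy, xx = np.indices(shape)
      pores = ((zz - 12)**2 + (yy - 20)**2 + (xx - 20)**2 < 8**2) | \
              ((zz - 30)**2 + (yy - 20)**2 + (xx - 20)**2 < 5**2)
      centres, radii = pore_seed_candidates(pores)
      print(len(centres), radii.max())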

  20. Machine Learning of Accurate Energy-Conserving Molecular Force Fields

    CERN Document Server

    Chmiela, Stefan; Sauceda, Huziel E; Poltavsky, Igor; Schütt, Kristof; Müller, Klaus-Robert

    2016-01-01

    Using conservation of energy -- a fundamental property of closed classical and quantum mechanical systems -- we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential-energy surfaces of intermediate-size molecules with an accuracy of 0.3 kcal/mol for energies and 1 kcal/mol/Å for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations...

  1. Efficient and Accurate Indoor Localization Using Landmark Graphs

    Science.gov (United States)

    Gu, F.; Kealy, A.; Khoshelham, K.; Shang, J.

    2016-06-01

    Indoor localization is important for a variety of applications such as location-based services, mobile social networks, and emergency response. Fusing spatial information is an effective way to achieve accurate indoor localization with little or no need for extra hardware. However, existing indoor localization methods that make use of spatial information are either too computationally expensive or too sensitive to the completeness of landmark detection. In this paper, we solve this problem by using the proposed landmark graph. The landmark graph is a directed graph where nodes are landmarks (e.g., doors, staircases, and turns) and edges are accessible paths with heading information. We compared the proposed method with two common Dead Reckoning (DR)-based methods (namely, Compass + Accelerometer + Landmarks and Gyroscope + Accelerometer + Landmarks) in a series of experiments. Experimental results show that the proposed method can achieve 73% accuracy with a positioning error of less than 2.5 meters, which outperforms the other two DR-based methods.
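
    A minimal sketch of how such a landmark graph can bound dead-reckoning drift; the graph layout, the node and edge contents, and the simple "snap to landmark" rule are illustrative assumptions rather than the authors' algorithm:

        # Nodes are landmarks with known coordinates; edges are accessible paths
        # annotated with (length in metres, heading in degrees).
        landmark_graph = {
            "door_A":   {"pos": (0.0, 0.0),  "edges": {"stairs_1": (12.0, 90.0)}},
            "stairs_1": {"pos": (12.0, 0.0), "edges": {"turn_3": (8.0, 0.0)}},
            "turn_3":   {"pos": (12.0, 8.0), "edges": {}},
        }

        def correct_position(dr_position, detected_landmark):
            """Snap a drifting dead-reckoning estimate to a detected landmark."""
            node = landmark_graph.get(detected_landmark)
            return node["pos"] if node else dr_position

        # Example: the step counter has drifted to (11.3, 0.9) when "stairs_1" is detected.
        print(correct_position((11.3, 0.9), "stairs_1"))   # -> (12.0, 0.0)

    The heading information stored on the edges can additionally be used to reject landmark detections that are inconsistent with the current direction of travel.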

  2. Train and Test Tightness of LP Relaxations in Structured Prediction

    OpenAIRE

    Meshi, Ofer; Mahdavi, Mehrdad; Weller, Adrian; Sontag, David

    2015-01-01

    Structured prediction is used in areas such as computer vision and natural language processing to predict structured outputs such as segmentations or parse trees. In these settings, prediction is performed by MAP inference or, equivalently, by solving an integer linear program. Because of the complex scoring functions required to obtain accurate predictions, both learning and inference typically require the use of approximate solvers. We propose a theoretical explanation to the striking obser...
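
    For background (a standard formulation, not a result of the paper), MAP inference over the marginal indicator variables mu can be written as a linear program over the marginal polytope, and the tractable relaxation replaces that polytope by the local (pairwise-consistency) polytope:

        \text{MAP:}\;\max_{\boldsymbol{\mu}\in\mathcal{M}} \boldsymbol{\theta}^{T}\boldsymbol{\mu}
        \;\le\;
        \text{LP relaxation:}\;\max_{\boldsymbol{\mu}\in\mathcal{M}_{L}} \boldsymbol{\theta}^{T}\boldsymbol{\mu},
        \qquad \mathcal{M}\subseteq\mathcal{M}_{L},

    and the relaxation is called tight when the two optima coincide, i.e. when the LP optimum is attained at an integral vertex.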

  3. An accurate solver for forward and inverse transport

    Science.gov (United States)

    Monard, François; Bal, Guillaume

    2010-07-01

    This paper presents a robust and accurate way to solve steady-state linear transport (radiative transfer) equations numerically. Our main objective is to address the inverse transport problem, in which the optical parameters of a domain of interest are reconstructed from measurements performed at the domain's boundary. This inverse problem has important applications in medical and geophysical imaging, and more generally in any field involving high frequency waves or particles propagating in scattering environments. Stable solutions of the inverse transport problem require that the singularities of the measurement operator, which maps the optical parameters to the available measurements, be captured with sufficient accuracy. This in turn requires that the free propagation of particles be calculated with care, which is a difficult problem on a Cartesian grid. A standard discrete ordinates method is used for the direction of propagation of the particles. Our methodology to address spatial discretization is based on rotating the computational domain so that each direction of propagation is always aligned with one of the grid axes. Rotations are performed in the Fourier domain to achieve spectral accuracy. The numerical dispersion of the propagating particles is therefore minimal. As a result, the ballistic and single scattering components of the transport solution are calculated robustly and accurately. Physical blurring effects, such as small angular diffusion, are also incorporated into the numerical tool. Forward and inverse calculations performed in a two-dimensional setting exemplify the capabilities of the method. Although the methodology might not be the fastest way to solve transport equations, its physical accuracy provides us with a numerical tool to assess what can and cannot be reconstructed in inverse transport theory.
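
    For reference, the steady-state linear transport (radiative transfer) equation that such solvers address can be written, in generic notation, as

        \boldsymbol{\Omega}\cdot\nabla u(\mathbf{x},\boldsymbol{\Omega})
        + \sigma_{t}(\mathbf{x})\,u(\mathbf{x},\boldsymbol{\Omega})
        = \sigma_{s}(\mathbf{x})\int_{\mathbb{S}^{d-1}} k(\boldsymbol{\Omega}\cdot\boldsymbol{\Omega}')\,u(\mathbf{x},\boldsymbol{\Omega}')\,\mathrm{d}\boldsymbol{\Omega}'
        + q(\mathbf{x},\boldsymbol{\Omega}),

    where u is the angular particle density, sigma_t and sigma_s are the total and scattering coefficients (the optical parameters to be reconstructed), k is the scattering phase function, and q is an internal source; the discrete ordinates method fixes a finite set of directions Omega, and the rotation strategy described above aligns each of them with a grid axis.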

  4. Accurate skin dose measurements using radiochromic film in clinical applications.

    Science.gov (United States)

    Devic, S; Seuntjens, J; Abdel-Rahman, W; Evans, M; Olivares, M; Podgorsak, E B; Vuong, Té; Soares, Christopher G

    2006-04-01

    Megavoltage x-ray beams exhibit the well-known phenomena of dose buildup within the first few millimeters of the incident phantom surface, or the skin. Results of the surface dose measurements, however, depend vastly on the measurement technique employed. Our goal in this study was to determine a correction procedure in order to obtain an accurate skin dose estimate at the clinically relevant depth based on radiochromic film measurements. To illustrate this correction, we have used as a reference point a depth of 70 micron. We used the new GAFCHROMIC dosimetry films (HS, XR-T, and EBT) that have effective points of measurement at depths slightly larger than 70 micron. In addition to films, we also used an Attix parallel-plate chamber and a home-built extrapolation chamber to cover tissue-equivalent depths in the range from 4 micron to 1 mm of water-equivalent depth. Our measurements suggest that within the first millimeter of the skin region, the PDD for a 6 MV photon beam and field size of 10 x 10 cm2 increases from 14% to 43%. For the three GAFCHROMIC dosimetry film models, the 6 MV beam entrance skin dose measurement corrections due to their effective point of measurement are as follows: 15% for the EBT, 15% for the HS, and 16% for the XR-T model GAFCHROMIC films. The correction factors for the exit skin dose due to the build-down region are negligible. There is a small field size dependence for the entrance skin dose correction factor when using the EBT GAFCHROMIC film model. Finally, a procedure that uses EBT model GAFCHROMIC film for an accurate measurement of the skin dose in a parallel-opposed pair 6 MV photon beam arrangement is described.

  5. Influence of pansharpening techniques in obtaining accurate vegetation thematic maps

    Science.gov (United States)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier

    2016-10-01

    In recent decades, there has been a decline in natural resources, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors has offered a practical and cost-effective means of good environmental management. In this context, improvements are needed in the quality of the available information in order to obtain reliable classified images. Thus, pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of the study is to apply pixel-based and object-based classification techniques to imagery fused with different pansharpening algorithms, and to evaluate the generated thematic maps, which serve to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem from the Canary Islands (Spain), Teide National Park, was chosen, and Worldview-2 high resolution imagery was employed. The classes considered of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF based, Wavelet `à trous' and Weighted Wavelet `à trous' through Fractal Dimension Maps) were chosen in order to improve the data quality with the goal of analyzing the vegetation classes. Next, different classification algorithms were applied at the pixel-based and object-based levels; moreover, an accuracy assessment of the different thematic maps obtained was performed. The highest classification accuracy was obtained by applying the Support Vector Machine classifier at the object-based level to the Weighted Wavelet `à trous' through Fractal Dimension Maps fused image. Finally, we highlight the difficulty of classification in the Teide ecosystem due to the heterogeneity and small size of the species. Thus, it is important to obtain accurate thematic maps for further studies in the management and conservation of natural resources.
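
    A minimal sketch of the simplest of the listed techniques, an FIHS-style intensity-substitution fusion; the uniform intensity weights, the array shapes, and the assumption that the multispectral bands have already been co-registered and upsampled to the panchromatic grid are simplifications, and the remaining methods (GS, HCS, MTF-based, wavelet-based) are more involved:

        import numpy as np

        def fihs_pansharpen(ms, pan):
            """ms:  (bands, H, W) multispectral image, upsampled to the pan grid.
               pan: (H, W) co-registered panchromatic image."""
            intensity = ms.mean(axis=0)             # crude intensity component
            detail = pan - intensity                # spatial detail missing from ms
            return ms + detail[np.newaxis, :, :]    # inject the detail into every band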

  6. Accurate, fully-automated NMR spectral profiling for metabolomics.

    Directory of Open Access Journals (Sweden)

    Siamak Ravanbakhsh

    Full Text Available Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures, and realistic computer-generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~ 90% correct identification and ~ 10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of

  7. An accurate and portable solid state neutron rem meter

    Energy Technology Data Exchange (ETDEWEB)

    Oakes, T.M. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Bellinger, S.L. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Miller, W.H. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Missouri University Research Reactor, Columbia, MO (United States); Myers, E.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Fronk, R.G.; Cooper, B.W [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Sobering, T.J. [Electronics Design Laboratory, Kansas State University, KS (United States); Scott, P.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Ugorowski, P.; McGregor, D.S; Shultis, J.K. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Caruso, A.N., E-mail: carusoan@umkc.edu [Department of Physics, University of Missouri, Kansas City, MO (United States)

    2013-08-11

    Accurately resolving the ambient neutron dose equivalent spanning the thermal to 15 MeV energy range with a single-configuration, lightweight instrument is desirable. This paper presents the design of a portable, high intrinsic efficiency, and accurate neutron rem meter whose energy-dependent response is electronically adjusted to a chosen neutron dose equivalent standard. The instrument may be classified as a moderating-type neutron spectrometer, based on an adaptation of the classical Bonner sphere and position-sensitive long counter, which simultaneously counts thermalized neutrons with high thermal efficiency solid-state neutron detectors. The use of multiple detectors and moderator arranged along an axis of symmetry (e.g., the long axis of a cylinder) with known neutron-slowing properties allows for the construction of a linear combination of responses that approximates the ambient neutron dose equivalent. Variations on the detector configuration are investigated via Monte Carlo N-Particle simulations to minimize the total instrument mass while maintaining acceptable response accuracy: a dose error of less than 15% for bare 252Cf, bare AmBe, and epi-thermal and mixed monoenergetic sources is found at less than 4.5 kg moderator mass in all studied cases. A comparison of the energy-dependent dose equivalent response and resultant energy-dependent dose equivalent error of the present dosimeter to commercially available portable rem meters and the prior art is presented. Finally, the present design is assessed by comparing the simulated output resulting from applications of several known neutron sources and dose rates.
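
    A hedged sketch of the "linear combination of responses" idea described above: given simulated energy-dependent responses of the detectors positioned along the moderator axis, non-negative weights can be fitted so that the weighted sum tracks a chosen dose-equivalent conversion curve. The array layout and the use of SciPy's NNLS solver are illustrative assumptions, not the authors' fitting procedure:

        from scipy.optimize import nnls

        def fit_detector_weights(R, h):
            """R[i, j]: simulated response of detector j in energy bin i (e.g. from MCNP runs).
               h[i]:    ambient dose equivalent conversion coefficient for energy bin i.
               Returns non-negative weights w such that R @ w approximates h."""
            w, residual_norm = nnls(R, h)
            return w, residual_norm

    In operation, the instrument's dose-equivalent estimate for a measured count vector c is then simply the dot product c @ w.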

  8. Accurate atom-mapping computation for biochemical reactions.

    Science.gov (United States)

    Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D

    2012-11-26

    The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to account for the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.
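
    To make the MILP idea concrete in a deliberately simplified form (not the full MWED model), an atom mapping can be encoded with binary variables x_ij that assign each reactant atom i to a product atom j of the same element, with costs reflecting the bond changes implied by the assignment:

        \min_{x}\;\sum_{i,j} c_{ij}\,x_{ij}
        \quad\text{s.t.}\quad
        \sum_{j} x_{ij} = 1 \;\;\forall i,\qquad
        \sum_{i} x_{ij} = 1 \;\;\forall j,\qquad
        x_{ij}\in\{0,1\},

    which enforces a bijection between reactant and product atoms; the actual MWED metric scores broken and formed bonds by their propensity to react rather than using independent per-atom costs.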

  9. MHC Class II epitope predictive algorithms

    DEFF Research Database (Denmark)

    Nielsen, Morten; Lund, Ole; Buus, S

    2010-01-01

    reasonably accurate predictions for alleles that were not included in the training data. These methods can be used to define supertypes (clusters) of MHC-II alleles where alleles within each supertype have similar binding specificities. Furthermore, the pan-specific methods have been used to make a graphical...

  10. Modeling and prediction of surgical procedure times

    NARCIS (Netherlands)

    P.S. Stepaniak (Pieter); C. Heij (Christiaan); G. de Vries (Guus)

    2009-01-01

    textabstractAccurate prediction of medical operation times is of crucial importance for cost efficient operation room planning in hospitals. This paper investigates the possible dependence of procedure times on surgeon factors like age, experience, gender, and team composition. The effect of these f

  11. Prediction of failure strain and burst pressure in high yield-to-tensile strength ratio linepipe

    Energy Technology Data Exchange (ETDEWEB)

    Law, M. [Institute of Materials and Engineering Science, Australian Nuclear Science and Technology Organisation (ANSTO), Lucas Heights, NSW (Australia)]. E-mail: mlx@ansto.gov.au; Bowie, G. [BlueScope Steel Ltd., Level 11, 120 Collins St, Melbourne, Victoria 3000 (Australia)

    2007-08-15

    Failure pressures and strains were predicted for a number of burst tests as part of a project to explore failure strain in high yield-to-tensile strength ratio linepipe. Twenty-three methods for predicting the burst pressure and six methods of predicting the failure strain are compared with test results. Several methods were identified which gave accurate and reliable estimates of burst pressure. No method of accurately predicting the failure strain was found, though the best was noted.
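
    For orientation only (the paper compares twenty-three published methods, none of which is reproduced here), the simplest classical estimate of burst pressure for a thin-walled pipe is Barlow's formula,

        P = \frac{2\,\sigma\,t}{D},

    with sigma a representative strength (yield, flow, or tensile), t the wall thickness, and D the outside diameter; the more refined methods differ mainly in how sigma is expressed in terms of yield strength, tensile strength, strain hardening, and the yield-to-tensile ratio discussed above.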

  12. An accurate determination of the surface energy of solid selenium

    Science.gov (United States)

    Guisbiers, G.; Arscott, S.; Snyders, R.

    2012-12-01

    Selenium is currently a key element for developing nano and micro-technologies. Nevertheless, the surface energy of solid selenium (γSe) reported in the literature is still questionable. In this work, we have measured γSe = 0.291 ± 0.025 J/m2 at 293 K using the sessile drop technique with different probe liquids, namely ethylene glycol, de-ionized water, mercury, and gallium. This value is in excellent agreement with theoretical predictions.
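
    As background on the sessile-drop route to a solid surface energy (a generic statement of the method, not the authors' exact evaluation), the measured contact angle theta of each probe liquid of known surface tension gamma_l enters through Young's equation and, for example, the Owens-Wendt decomposition into dispersive (d) and polar (p) components:

        \gamma_{s} = \gamma_{sl} + \gamma_{l}\cos\theta,
        \qquad
        \gamma_{l}\,(1+\cos\theta) = 2\left(\sqrt{\gamma_{s}^{d}\gamma_{l}^{d}} + \sqrt{\gamma_{s}^{p}\gamma_{l}^{p}}\right),

    so that several probe liquids over-determine gamma_s^d and gamma_s^p and yield the total surface energy gamma_s = gamma_s^d + gamma_s^p.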

  13. Can we accurately report PTEN status in advanced colorectal cancer?

    OpenAIRE

    Hocking, Christopher; Hardingham, Jennifer E.; Broadbridge, Vy; Wrin, Joe; Townsend, Amanda R; Tebbutt, Niall; Cooper, John; Ruszkiewicz, Andrew; Lee, Chee; Price, Timothy J.

    2014-01-01

    Background Loss of phosphatase and tensin homologue (PTEN) function evaluated by loss of PTEN protein expression on immunohistochemistry (IHC) has been reported as both prognostic in metastatic colorectal cancer and predictive of response to anti-EGFR monoclonal antibodies although results remain uncertain. Difficulties in the methodological assessment of PTEN are likely to be a major contributor to recent conflicting results. Methods We assessed loss of PTEN function in 51 colorectal cancer ...

  14. Automatic classification and accurate size measurement of blank mask defects

    Science.gov (United States)

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    A blank mask and its preparation stages, such as cleaning or resist coating, play an important role in the eventual yield obtained by using it. The impact analysis of blank mask defects directly depends on the amount of available information, such as the number of defects observed and their accurate locations and sizes. Mask usability qualification at the start of the preparation process is crudely based on the number of defects. Similarly, defect information such as size is sought to estimate eventual defect printability on the wafer. Tracking defect characteristics, specifically size and shape, across multiple stages can further be indicative of process-related information such as cleaning or coating process efficiencies. At the first level, inspection machines address the requirement of defect characterization by detecting and reporting relevant defect information. The analysis of this information, though, is still largely a manual process. With advancing technology nodes and shrinking half-pitch sizes, a large number of defects are observed, and the detailed knowledge associated with them makes the manual defect review process an arduous task, in addition to increasing susceptibility to human errors. In cases where the defect information reported by the inspection machine is not sufficient, mask shops rely on other tools; use of CDSEM tools is one such option. However, these additional steps translate into increased costs. The Calibre NxDAT based MDPAutoClassify tool provides an automated software alternative to the manual defect review process. Working on defect images generated by inspection machines, the tool extracts and reports additional information such as defect location, useful for defect avoidance [4][5]; defect size, useful in estimating defect printability; and defect nature (e.g., particle, scratch, resist void), useful for process monitoring. The tool makes use of smart and elaborate post-processing algorithms to achieve this. Their elaborateness is a consequence of the variety and

  15. A spectroscopic transfer standard for accurate atmospheric CO measurements

    Science.gov (United States)

    Nwaboh, Javis A.; Li, Gang; Serdyukov, Anton; Werhahn, Olav; Ebert, Volker

    2016-04-01

    Atmospheric carbon monoxide (CO) is a precursor of essential climate variables and has an indirect enhancing effect on global warming. Accurate and reliable measurements of atmospheric CO concentration are becoming indispensable. WMO-GAW reports state a compatibility goal of ±2 ppb for atmospheric CO concentration measurements. Therefore, the EMRP-HIGHGAS (European metrology research program - high-impact greenhouse gases) project aims at developing spectroscopic transfer standards for CO concentration measurements to meet this goal. A spectroscopic transfer standard would provide results that are directly traceable to the SI, would be very useful for calibrating devices operating in the field, and could complement classical gas standards in situations where calibration gas mixtures in bottles are often not accurate, available or stable enough [1][2]. Here, we present our new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor capable of performing absolute ("calibration-free") CO concentration measurements and of being operated as a spectroscopic transfer standard. To achieve the compatibility goal stated by WMO for CO concentration measurements and to ensure the traceability of the final concentration results, traceable spectral line data, especially line intensities with appropriate uncertainties, are needed. Therefore, we utilize our new high-resolution Fourier-transform infrared (FTIR) spectroscopy CO line data for the 2-0 band, with significantly reduced uncertainties, for the dTDLAS data evaluation. Further, we demonstrate the capability of our sensor for atmospheric CO measurements, discuss the uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) principles, and show that CO concentrations derived using the sensor, based on the TILSAM (traceable infrared laser spectroscopic amount fraction measurement) method, are in excellent agreement with gravimetric values. Acknowledgement Parts of this work have been
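
    The "calibration-free" evaluation mentioned above rests on a Beer-Lambert relation in which every factor is either measured or taken from traceable line data; schematically (symbols generic, not necessarily the authors' notation), the CO amount fraction follows from the integrated absorbance A of an isolated line as

        x_{\mathrm{CO}} = \frac{A\,k_{B}\,T}{S(T)\,L\,p},

    where S(T) is the temperature-dependent line intensity (here taken from the FTIR 2-0 band data), L the absorption path length, p the total pressure, T the gas temperature, and k_B the Boltzmann constant; the uncertainty of S(T) therefore propagates directly into the concentration result, which is why reduced line-data uncertainties matter.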

  16. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    Directory of Open Access Journals (Sweden)

    Farmerie William G

    2006-08-01

    Full Text Available Abstract Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20 System (454 Life Sciences Corporation, to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae and Platanus occidentalis (Platanaceae. Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy

  17. Accurate in-line CD metrology for nanometer semiconductor manufacturing

    Science.gov (United States)

    Perng, Baw-Ching; Shieh, Jyu-Horng; Jang, S.-M.; Liang, M.-S.; Huang, Renee; Chen, Li-Chien; Hwang, Ruey-Lian; Hsu, Joe; Fong, David

    2006-03-01

    The need for absolute accuracy is increasing as semiconductor-manufacturing technologies advance to sub-65nm nodes, since device sizes are reducing to sub-50nm but offsets ranging from 5nm to 20nm are often encountered. While TEM is well-recognized as the most accurate CD metrology, direct comparison between the TEM data and in-line CD data might be misleading sometimes due to different statistical sampling and interferences from sidewall roughness. In this work we explore the capability of CD-AFM as an accurate in-line CD reference metrology. Being a member of scanning profiling metrology, CD-AFM has the advantages of avoiding e-beam damage and minimum sample damage induced CD changes, in addition to the capability of more statistical sampling than typical cross section metrologies. While AFM has already gained its reputation on the accuracy of depth measurement, not much data was reported on the accuracy of CD-AFM for CD measurement. Our main focus here is to prove the accuracy of CD-AFM and show its measuring capability for semiconductor related materials and patterns. In addition to the typical precision check, we spent an intensive effort on examining the bias performance of this CD metrology, which is defined as the difference between CD-AFM data and the best-known CD value of the prepared samples. We first examine line edge roughness (LER) behavior for line patterns of various materials, including polysilicon, photoresist, and a porous low k material. Based on the LER characteristics of each patterning, a method is proposed to reduce its influence on CD measurement. Application of our method to a VLSI nanoCD standard is then performed, and agreement of less than 1nm bias is achieved between the CD-AFM data and the standard's value. With very careful sample preparations and TEM tool calibration, we also obtained excellent correlation between CD-AFM and TEM for poly-CDs ranging from 70nm to 400nm. CD measurements of poly ADI and low k trenches are also

  18. Accurate, low-cost 3D-models of gullies

    Science.gov (United States)

    Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine

    2015-04-01

    Soil erosion is a widespread problem in arid and semi-arid areas. The most severe form is gully erosion. Gullies often cut into agricultural farmland and can make an area completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D models of gullies in the Souss Valley in southern Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we recorded several series of Full HD videos at 25 fps. Afterwards, we used Structure from Motion (SfM) to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, and the overlap of neighboring images should be at least 80%. In addition, it is very important to avoid selecting photos that are blurry or out of focus. Nearby pixels of a blurry image tend to have similar color values. That is why we used a MATLAB script to compare the derivatives of the images: for similar scenes, the higher the sum of the derivatives, the sharper the image. MATLAB subdivides the video into image intervals, and from each interval the image with the highest sum is selected. For example, a 20 min video at 25 fps yields 30,000 single images; the program inspects the first 20 images, saves the sharpest, and moves on to the next 20 images, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and matches between all images and produced a point cloud. Then, MeshLab was used to build a surface from it using the Poisson surface reconstruction approach. Afterwards, we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates if we compare the data with older recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
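
    A minimal Python sketch of the sharpness-based frame selection described above (the original used a MATLAB script; the derivative-based score, the function names, and the OpenCV dependency are our illustrative assumptions):

        import cv2
        import numpy as np

        def select_sharpest_frames(video_path, block_size=20):
            """Scan the video in blocks of block_size frames and keep the sharpest of each."""
            cap = cv2.VideoCapture(video_path)
            selected, block = [], []
            ok, frame = cap.read()
            while ok:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                # Sharpness score: sum of absolute image derivatives; blurry frames score low.
                score = np.abs(cv2.Laplacian(gray, cv2.CV_64F)).sum()
                block.append((score, frame))
                if len(block) == block_size:
                    selected.append(max(block, key=lambda sf: sf[0])[1])
                    block = []
                ok, frame = cap.read()
            cap.release()
            if block:   # keep the sharpest frame of a trailing partial block
                selected.append(max(block, key=lambda sf: sf[0])[1])
            return selected

    For a 20-minute clip at 25 fps this scans roughly 30,000 frames in blocks of about 20 and returns one frame per block, matching the selection rate described in the abstract.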

  19. Selective pressures for accurate altruism targeting: evidence from digital evolution for difficult-to-test aspects of inclusive fitness theory.

    Science.gov (United States)

    Clune, Jeff; Goldsby, Heather J; Ofria, Charles; Pennock, Robert T

    2011-03-01

    Inclusive fitness theory predicts that natural selection will favour altruist genes that are more accurate in targeting altruism only to copies of themselves. In this paper, we provide evidence from digital evolution in support of this prediction by competing multiple altruist-targeting mechanisms that vary in their accuracy in determining whether a potential target for altruism carries a copy of the altruist gene. We compete altruism-targeting mechanisms based on (i) kinship (kin targeting), (ii) genetic similarity at a level greater than that expected of kin (similarity targeting), and (iii) perfect knowledge of the presence of an altruist gene (green beard targeting). Natural selection always favoured the most accurate targeting mechanism available. Our investigations also revealed that evolution did not increase the altruism level when all green beard altruists used the same phenotypic marker. The green beard altruism levels stably increased only when mutations that changed the altruism level also changed the marker (e.g. beard colour), such that beard colour reliably indicated the altruism level. For kin- and similarity-targeting mechanisms, we found that evolution was able to stably adjust altruism levels. Our results confirm that natural selection favours altruist genes that are increasingly accurate in targeting altruism to only their copies. Our work also emphasizes that the concept of targeting accuracy must include both the presence of an altruist gene and the level of altruism it produces.

  20. Quantitatively accurate calculations of conductance and thermopower of molecular junctions

    DEFF Research Database (Denmark)

    Markussen, Troels; Jin, Chengjun; Thygesen, Kristian Sommer

    2013-01-01

    ) connected to gold electrodes using first‐principles calculations. We find excellent agreement with experiments for both molecules when exchange–correlation effects are described by the many‐body GW approximation. In contrast, results from standard density functional theory (DFT) deviate from experiments......‐interaction errors and image charge effects. Finally, we show that the conductance and thermopower of the considered junctions are relatively insensitive to the metal–molecule bonding geometry. Our results demonstrate that electronic and thermoelectric properties of molecular junctions can be predicted from first‐principles...... calculations when exchange–correlation effects are taken properly into account....