WorldWideScience

Sample records for making accurate predictions

  1. Making detailed predictions makes (some) predictions worse

    Science.gov (United States)

    Kelly, Theresa F.

    In this paper, we investigate whether making detailed predictions about an event makes other predictions worse. Across 19 experiments, 10,895 participants, and 415,960 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes information that is relatively useless for predicting the winning team more readily accessible in memory and therefore incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of games will and will not be susceptible to the negative effect of making detailed predictions.

  2. Highly Accurate Prediction of Jobs Runtime Classes

    OpenAIRE

    Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi

    2016-01-01

    Separating the short jobs from the long is a known technique to improve scheduling performance. In this paper we describe a method we developed for accurately predicting the runtime classes of jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions, in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...
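
    The record above describes the core pipeline: model the log-runtimes as two overlapping Gaussians, derive a short/long threshold, and train a CART classifier on job attributes. The minimal sketch below shows that pattern with scikit-learn; the synthetic job features, the lognormal runtimes, and the midpoint threshold rule are illustrative assumptions, not the authors' implementation (their threshold is tuned during evaluation).

```python
# Sketch: separate short vs. long jobs by fitting a 2-component Gaussian
# mixture to log-runtimes, derive a threshold, and train a CART-style
# decision tree to predict the class from job attributes.
# Synthetic data; feature names are illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
# hypothetical job features: requested CPUs, requested memory (GB), queue id
X = np.column_stack([rng.integers(1, 65, n),
                     rng.uniform(1, 256, n),
                     rng.integers(0, 4, n)])
# synthetic runtimes drawn from two overlapping log-normal populations
runtime = np.where(rng.random(n) < 0.6,
                   rng.lognormal(4.0, 0.8, n),    # "short" jobs
                   rng.lognormal(7.5, 0.9, n))    # "long" jobs

log_rt = np.log(runtime).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(log_rt)
# simple midpoint between the two Gaussian means; the paper instead tunes
# the threshold during evaluation
threshold = float(np.exp(gmm.means_.ravel().mean()))
y = (runtime > threshold).astype(int)            # 0 = short, 1 = long

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(f"threshold ~ {threshold:.0f} s, training accuracy = {clf.score(X, y):.2f}")
```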

  3. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
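
    The first step of the studies above, estimating actual transition rates between emotions from experience-sampling sequences and comparing them with judged likelihoods, can be sketched as follows. The emotion set, the short sequence, and the rating matrix are invented placeholders; only the counting-and-correlation pattern is the point.

```python
# Sketch: estimate emotion-transition probabilities from an experience-
# sampling sequence and compare them with judged transition likelihoods.
# The emotion labels, the sequence, and the ratings are made-up inputs.
import numpy as np
from scipy.stats import spearmanr

emotions = ["happy", "calm", "sad", "anxious"]
idx = {e: i for i, e in enumerate(emotions)}

# hypothetical experience-sampling sequence of reported emotions
sequence = ["happy", "calm", "calm", "anxious", "sad", "sad",
            "calm", "happy", "happy", "anxious", "anxious", "calm"]

counts = np.zeros((4, 4))
for a, b in zip(sequence[:-1], sequence[1:]):
    counts[idx[a], idx[b]] += 1
actual = counts / counts.sum(axis=1, keepdims=True)   # row-normalised rates

# hypothetical mean participant ratings of the same 16 transitions (0-1)
rated = np.array([[0.5, 0.3, 0.1, 0.1],
                  [0.3, 0.4, 0.1, 0.2],
                  [0.1, 0.3, 0.5, 0.1],
                  [0.1, 0.3, 0.2, 0.4]])

rho, p = spearmanr(actual.ravel(), rated.ravel())
print(f"rank correlation between actual and judged transitions: {rho:.2f}")
```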

  4. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  5. Accurate predictions for the LHC made easy

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    The data recorded by the LHC experiments are of very high quality. To get the most out of the data, precise theory predictions, including uncertainty estimates, are needed to reduce as much as possible the theoretical bias in the experimental analyses. Recently, significant progress has been made in performing Next-to-Leading Order (NLO) computations, including matching to the parton shower, that allow for these accurate, hadron-level predictions. I shall discuss one of these efforts, the MadGraph5_aMC@NLO program, that aims at the complete automation of predictions at NLO accuracy within the SM as well as New Physics theories. I’ll illustrate some of the theoretical ideas behind this program, show some selected applications to LHC physics, as well as describe the future plans.

  6. Hounsfield unit density accurately predicts ESWL success.

    Science.gov (United States)

    Magnuson, William J; Tomera, Kevin M; Lance, Raymond S

    2005-01-01

    Extracorporeal shockwave lithotripsy (ESWL) is a commonly used non-invasive treatment for urolithiasis. Helical CT scans provide much better and more detailed imaging of the patient with urolithiasis, including the ability to measure the density of urinary stones. In this study we tested the hypothesis that the density of urinary calculi as measured by CT can predict successful ESWL treatment. 198 patients were treated at Alaska Urological Associates with ESWL between January 2002 and April 2004. Of these, 101 met study inclusion with accessible CT scans and stones ranging from 5-15 mm. Follow-up imaging demonstrated stone freedom in 74.2%. The overall mean Hounsfield density values for the stone-free and residual stone groups were significantly different (93.61 vs 122.80, p < ...) ... ESWL for upper tract calculi between 5-15 mm.

  7. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.

  8. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard of uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.
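
    As a rough illustration of the calibration idea (not the Kennedy-O'Hagan framework itself, which also models systematic model discrepancy), the sketch below calibrates a single parameter of a toy part-load efficiency curve against noisy measurements using a grid posterior. The model form, parameter range, noise level, and data are all assumed values.

```python
# Sketch: grid-based Bayesian calibration of one parameter of a toy
# part-load efficiency model against noisy plant measurements.
# Everything here (model form, constants, data) is invented for illustration.
import numpy as np

def efficiency_model(load, theta):
    """Toy combined-cycle efficiency vs. relative load (0-1)."""
    return 0.58 - theta * (1.0 - load) ** 2

rng = np.random.default_rng(1)
load_meas = rng.uniform(0.4, 1.0, 25)
theta_true, sigma = 0.10, 0.005                 # "reality" and sensor noise
eta_meas = efficiency_model(load_meas, theta_true) + rng.normal(0, sigma, 25)

theta_grid = np.linspace(0.0, 0.3, 301)         # uniform prior support
log_post = np.array([
    -0.5 * np.sum((eta_meas - efficiency_model(load_meas, t)) ** 2) / sigma**2
    for t in theta_grid])
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, theta_grid)              # normalise the posterior

mean = np.trapz(theta_grid * post, theta_grid)
print(f"posterior mean of theta = {mean:.3f} (true value {theta_true})")
```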

  9. An Overview of Practical Applications of Protein Disorder Prediction and Drive for Faster, More Accurate Predictions.

    Science.gov (United States)

    Deng, Xin; Gumm, Jordan; Karki, Suman; Eickholt, Jesse; Cheng, Jianlin

    2015-07-07

    Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and have been widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically-disordered proteins and some human diseases has played a significant role in disorder prediction in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery with an emphasis on the disordered-to-ordered transition in the disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale.

  10. An Overview of Practical Applications of Protein Disorder Prediction and Drive for Faster, More Accurate Predictions

    Directory of Open Access Journals (Sweden)

    Xin Deng

    2015-07-01

    Full Text Available Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and have been widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically-disordered proteins and some human diseases has played a significant role in disorder prediction in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery with an emphasis on the disordered-to-ordered transition in the disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale.
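
    A minimal sketch of the wide-sequence-window representation mentioned above: each residue is described by the amino-acid composition of a window centred on it, which could then feed any per-residue disorder classifier. The window size and the toy sequence are illustrative assumptions, not the authors' actual feature set or model.

```python
# Sketch: per-residue features from a wide sequence window, the kind of
# representation a window-based disorder predictor could consume.
# Window size and sequence are illustrative; no trained model is included.
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def window_features(sequence, window=21):
    """Return, for each residue, the amino-acid composition (fractions over
    the 20 standard residues) of the window centred on that residue."""
    half = window // 2
    features = []
    for i in range(len(sequence)):
        chunk = sequence[max(0, i - half): i + half + 1]
        counts = Counter(chunk)
        features.append([counts.get(aa, 0) / len(chunk) for aa in AMINO_ACIDS])
    return features

seq = "MSEQDLLKAIEENPTSGGGSSGGGSPERRKQLAEQLINK"   # made-up sequence
feats = window_features(seq, window=15)
print(len(feats), "residues,", len(feats[0]), "features per residue")
```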

  11. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

    Full Text Available Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement for intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multi-step prediction can predict the traffic state trends over a certain period in the future. From the perspective of dynamic decision making, this is far more important than knowledge of the current traffic condition alone. Thus, in this paper, an accurate multi-step traffic flow prediction model based on SVM is proposed, in which the input vectors are composed of actual traffic volumes; four different types of input vectors were compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model had a good ability for traffic flow prediction and that the SVM-HPT model outperformed the other three models for prediction.
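
    The multi-step idea described above can be sketched as a direct strategy: train one support vector regressor per forecast horizon on lagged traffic volumes. The synthetic flow series, the lag count, and the horizon are illustrative assumptions; the paper's SVM-HPT input construction is not reproduced here.

```python
# Sketch: direct multi-step traffic-flow prediction with support vector
# regression - one SVR per forecast horizon, trained on lagged volumes.
# The flow series and lag/horizon choices are synthetic and illustrative.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
t = np.arange(1000)
flow = 300 + 150 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 20, t.size)

lags, horizons = 8, 3
X = np.array([flow[i - lags:i] for i in range(lags, flow.size - horizons)])
Y = np.array([flow[i:i + horizons] for i in range(lags, flow.size - horizons)])

# one regressor per step ahead (the "direct" multi-step strategy)
models = [SVR(C=100.0, epsilon=5.0).fit(X, Y[:, h]) for h in range(horizons)]

last_window = flow[-lags:].reshape(1, -1)
forecast = [m.predict(last_window)[0] for m in models]
print("next-3-step forecast:", np.round(forecast, 1))
```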

  12. Making predictions in the multiverse

    International Nuclear Information System (INIS)

    Freivogel, Ben

    2011-01-01

    I describe reasons to think we are living in an eternally inflating multiverse where the observable 'constants' of nature vary from place to place. The major obstacle to making predictions in this context is that we must regulate the infinities of eternal inflation. I review a number of proposed regulators, or measures. Recent work has ruled out a number of measures by showing that they conflict with observation, and focused attention on a few proposals. Further, several different measures have been shown to be equivalent. I describe some of the many nontrivial tests these measures will face as we learn more from theory, experiment and observation.

  13. Making predictions in the multiverse

    Energy Technology Data Exchange (ETDEWEB)

    Freivogel, Ben, E-mail: benfreivogel@gmail.com [Center for Theoretical Physics and Laboratory for Nuclear Science, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2011-10-21

    I describe reasons to think we are living in an eternally inflating multiverse where the observable 'constants' of nature vary from place to place. The major obstacle to making predictions in this context is that we must regulate the infinities of eternal inflation. I review a number of proposed regulators, or measures. Recent work has ruled out a number of measures by showing that they conflict with observation, and focused attention on a few proposals. Further, several different measures have been shown to be equivalent. I describe some of the many nontrivial tests these measures will face as we learn more from theory, experiment and observation.

  14. Towards more accurate and reliable predictions for nuclear applications

    International Nuclear Information System (INIS)

    Goriely, S.

    2015-01-01

    The need for nuclear data far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of present nuclear models. Most nuclear data evaluation and prediction is still performed on the basis of phenomenological nuclear models. Over the last decades, important progress has been achieved in fundamental nuclear physics, making it now feasible to use more reliable, but also more complex, microscopic or semi-microscopic models in the evaluation and prediction of nuclear data for practical applications. In the present contribution, the reliability and accuracy of recent nuclear theories are discussed for most of the relevant quantities needed to estimate reaction cross sections and beta-decay rates, namely nuclear masses, nuclear level densities, gamma-ray strength, fission properties and beta-strength functions. It is shown that nowadays, mean-field models can be tuned to the same level of accuracy as the phenomenological models, renormalized on experimental data if needed, and therefore can replace the phenomenological inputs in the prediction of nuclear data. While fundamental nuclear physicists keep on improving state-of-the-art models, e.g. within the shell model or ab initio models, nuclear applications could make use of their most recent results as quantitative constraints or guides to improve the predictions in energy or mass domains that will remain inaccessible experimentally. (orig.)

  15. A new, accurate predictive model for incident hypertension.

    Science.gov (United States)

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-11-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures. The primary study population consisted of 1605 normotensive individuals aged 20-79 years with 5-year follow-up from the population-based study, that is the Study of Health in Pomerania (SHIP). The initial set was randomly split into a training and a testing set. We used a probabilistic graphical model applying a Bayesian network to create a predictive model for incident hypertension and compared the predictive performance with the established Framingham risk score for hypertension. Finally, the model was validated in 2887 participants from INTER99, a Danish community-based intervention study. In the training set of SHIP data, the Bayesian network used a small subset of relevant baseline features including age, mean arterial pressure, rs16998073, serum glucose and urinary albumin concentrations. Furthermore, we detected relevant interactions between age and serum glucose as well as between rs16998073 and urinary albumin concentrations [area under the receiver operating characteristic (AUC 0.76)]. The model was confirmed in the SHIP validation set (AUC 0.78) and externally replicated in INTER99 (AUC 0.77). Compared to the established Framingham risk score for hypertension, the predictive performance of the new model was similar in the SHIP validation set and moderately better in INTER99. Data mining procedures identified a predictive model for incident hypertension, which included innovative and easy-to-measure variables. The findings promise great applicability in screening settings and clinical practice.
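
    The evaluation pattern used above, fitting a risk model on a training split and comparing its discrimination (AUC) against a reference score on held-out data, is sketched below. Logistic regression stands in for the study's Bayesian network, and the cohort data are synthetic; only the workflow is illustrated.

```python
# Sketch: train a simple risk model and compare its AUC with a baseline
# score on a validation split. Logistic regression is a stand-in for the
# Bayesian network used in the study; all data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1600
age = rng.uniform(20, 79, n)
map_ = rng.normal(90, 10, n)                 # mean arterial pressure
glucose = rng.normal(5.5, 1.0, n)
# synthetic 5-year incident hypertension outcome
logit = -14 + 0.06 * age + 0.08 * map_ + 0.3 * glucose
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, map_, glucose])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc_model = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
auc_reference = roc_auc_score(y_te, X_te[:, 0])   # e.g. age alone as baseline
print(f"model AUC = {auc_model:.2f}, baseline AUC = {auc_reference:.2f}")
```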

  16. Accurate prediction of the enthalpies of formation for xanthophylls.

    Science.gov (United States)

    Lii, Jenn-Huei; Liao, Fu-Xing; Hu, Ching-Han

    2011-11-30

    This study investigates the applications of computational approaches in the prediction of enthalpies of formation (ΔH(f)) for C-, H-, and O-containing compounds. The MM4 molecular mechanics method, density functional theory (DFT) combined with the atomic equivalent (AE) and group equivalent (GE) schemes, and DFT-based correlation corrected atomization (CCAZ) were used. We emphasized the application to xanthophylls, C-, H-, and O-containing carotenoids which consist of ∼100 atoms and extended π-delocalization systems. Within the training set, MM4 predictions are more accurate than those obtained using AE and GE; however, a systematic underestimation was observed in the extended systems. ΔH(f) for the training set molecules predicted by CCAZ combined with DFT are in very good agreement with the G3 results. The average absolute deviations (AADs) of CCAZ combined with B3LYP and MPWB1K are 0.38 and 0.53 kcal/mol compared with the G3 data, and are 0.74 and 0.69 kcal/mol compared with the available experimental data, respectively. Consistency of the CCAZ approach for the selected xanthophylls is revealed by the AAD of 2.68 kcal/mol between B3LYP-CCAZ and MPWB1K-CCAZ. Copyright © 2011 Wiley Periodicals, Inc.

  17. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
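
    The probabilistic reformulation described above can be illustrated with a deliberately tiny example: a grid of candidate holdup masses, a forward model mapping mass to an expected count rate, and Bayes' theorem weighting the candidates given a noisy measurement. The linear forward model, calibration constant, noise level, and measurement below are invented stand-ins for the project's transport-code-based forward model.

```python
# Sketch: Bayes' theorem applied to an under-determined holdup estimate.
# A toy linear forward model stands in for the radiation-transport code,
# and all numbers (calibration constant, noise, measurement) are made up.
import numpy as np

candidate_mass = np.linspace(0.0, 2.0, 201)     # kg of holdup, prior grid
prior = np.ones_like(candidate_mass)            # flat prior
prior /= prior.sum()

k_detector = 850.0        # counts/s per kg (toy calibration constant)
sigma = 60.0              # counts/s measurement noise (toy)
measured_rate = 1020.0    # counts/s observed at the detector (toy)

expected = k_detector * candidate_mass
likelihood = np.exp(-0.5 * ((measured_rate - expected) / sigma) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum()

mean = np.sum(candidate_mass * posterior)
std = np.sqrt(np.sum((candidate_mass - mean) ** 2 * posterior))
print(f"holdup estimate: {mean:.2f} +/- {std:.2f} kg")
```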

  18. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral

  19. Improving medical decisions for incapacitated persons: does focusing on "accurate predictions" lead to an inaccurate picture?

    Science.gov (United States)

    Kim, Scott Y H

    2014-04-01

    The Patient Preference Predictor (PPP) proposal places a high priority on the accuracy of predicting patients' preferences and finds the performance of surrogates inadequate. However, the quest to develop a highly accurate, individualized statistical model has significant obstacles. First, it will be impossible to validate the PPP beyond the limit imposed by the 60%-80% reliability of people's preferences for future medical decisions--a figure no better than the known average accuracy of surrogates. Second, evidence supports the view that a sizable minority of persons may not even have preferences to predict. Third, many, perhaps most, people express their autonomy just as much by entrusting their loved ones to exercise their judgment as by desiring to specifically control future decisions. Surrogate decision making faces none of these issues and, in fact, it may be more efficient, accurate, and authoritative than is commonly assumed.

  20. Feedforward signal prediction for accurate motion systems using digital filters

    NARCIS (Netherlands)

    Butler, H.

    2012-01-01

    A positioning system that needs to accurately track a reference can benefit greatly from using feedforward. When using a force actuator, the feedforward needs to generate a force proportional to the reference acceleration, which can be measured by means of an accelerometer or can be created by
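
    The feedforward principle stated above, a force proportional to the reference acceleration, can be sketched by differentiating the position setpoint twice with a simple digital filter. The mass, sample time, and trajectory are assumed values, and the central-difference filter is not the filter design from the cited work.

```python
# Sketch: acceleration feedforward for a force-actuated positioning stage.
# The reference acceleration is derived from the position setpoint with a
# central-difference filter; the mass and trajectory are made-up values.
import numpy as np

mass = 12.0          # kg, hypothetical moving mass
Ts = 1e-3            # s, sample time

t = np.arange(0.0, 0.5, Ts)
ref_pos = 0.05 * (1 - np.cos(2 * np.pi * 2 * t)) / 2   # smooth 2 Hz setpoint

# digital second derivative (central difference) of the reference position
ref_acc = np.gradient(np.gradient(ref_pos, Ts), Ts)

# feedforward force proportional to the reference acceleration: F = m * a_ref
f_ff = mass * ref_acc
print(f"peak feedforward force ~ {np.abs(f_ff).max():.1f} N")
```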

  1. Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle

    Science.gov (United States)

    Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.

    2017-04-01

    Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on the caster set points such as casting speed and cooling rates. Changes in the caster set points are typically carried out based on temperature measurements at the tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages. Therefore, they can be used in making accurate decisions about the caster set points in real time. However, this requires both fast and accurate thermal prediction models. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using a design of experiments technique. A regression method is used for training the predictor. The model predicts the stratified temperature profile instantaneously, for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 pct of the predicted values are within an error range of ±5 K (±5 °C) when compared against corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as the tundish and caster.
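
    A surrogate of the kind described above can be sketched as a polynomial regression trained on samples standing in for CFD results, then checked against the ±5 K band quoted in the record. The response function, parameter ranges, and sample counts are invented; only the surrogate-building workflow is shown.

```python
# Sketch: polynomial-regression surrogate trained on samples that play the
# role of CFD results, then checked against a +/-5 K acceptance band.
# The response function and parameter ranges are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)
n = 200
T0 = rng.uniform(1830, 1900, n)       # initial steel temperature, K
slag = rng.uniform(0.02, 0.10, n)     # slag thickness, m
hold = rng.uniform(10, 90, n)         # holding time, min

# stand-in for a CFD-computed temperature at a fixed depth after holding
T_cfd = T0 - (0.25 - 1.5 * slag) * hold - 0.002 * hold**2 + rng.normal(0, 1, n)

X = np.column_stack([T0, slag, hold])
X_tr, X_te, y_tr, y_te = train_test_split(X, T_cfd, test_size=0.3,
                                          random_state=0)
surrogate = make_pipeline(PolynomialFeatures(degree=2),
                          LinearRegression()).fit(X_tr, y_tr)

err = surrogate.predict(X_te) - y_te
within_5K = np.mean(np.abs(err) <= 5.0) * 100
print(f"{within_5K:.0f}% of surrogate predictions within +/-5 K of 'CFD'")
```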

  2. Accurate Prediction of Coronary Artery Disease Using Bioinformatics Algorithms

    Directory of Open Access Journals (Sweden)

    Hajar Shafiee

    2016-06-01

    Full Text Available Background and Objectives: Cardiovascular disease is one of the main causes of death in developed and Third World countries. According to the World Health Organization, it is predicted that deaths due to heart disease will rise to 23 million by 2030. According to the latest statistics reported by Iran’s Minister of Health, 3.39% of all deaths are attributed to cardiovascular diseases and 19.5% are related to myocardial infarction. The aim of this study was to predict coronary artery disease using data mining algorithms. Methods: In this study, various bioinformatics algorithms, such as decision trees, neural networks, support vector machines, clustering, etc., were used to predict coronary heart disease. The data used in this study were taken from several valid databases (including 14 data... Results: In this research, data mining techniques can be effectively used to diagnose different diseases, including coronary artery disease. Also, for the first time, a prediction system based on a support vector machine with the best possible accuracy was introduced. Conclusion: The results showed that among the features, the thallium scan variable is the most important feature in the diagnosis of heart disease. Machine prediction models such as the support vector machine learning algorithm can differentiate between sick and healthy individuals with 100% accuracy.

  3. Can phenological models predict tree phenology accurately under climate change conditions?

    Science.gov (United States)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    The onset of the growing season of trees has advanced globally by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is, however, not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy, and on the other hand higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. They fall into two main families: (1) one-phase models, which consider only the ecodormancy phase and assume that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to accurately predict tree bud break and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
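
    For concreteness, a one-phase (thermal-time) model of the kind discussed above can be sketched as a forcing-sum rule: bud break is predicted on the day the accumulated temperature above a base threshold reaches a critical value. The synthetic temperature series, base temperature, start date, and critical sum below are illustrative, not fitted parameters for any species.

```python
# Sketch: one-phase (thermal-time) phenology model - bud break occurs on the
# day the sum of forcing temperatures above a base temperature reaches a
# critical value. Temperature series and parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(5)
days = np.arange(1, 181)                                 # roughly Jan-June
daily_T = 5 + 12 * np.sin((days - 30) * np.pi / 180) + rng.normal(0, 3, days.size)

T_base = 5.0        # deg C, base temperature for forcing
F_crit = 150.0      # deg C * day, critical forcing sum
t_start = 32        # day of year when forcing accumulation starts (1 Feb)

forcing = np.where(daily_T > T_base, daily_T - T_base, 0.0)
forcing[days < t_start] = 0.0
cumulative = np.cumsum(forcing)

budburst_day = int(days[np.argmax(cumulative >= F_crit)])
print(f"predicted budburst: day of year {budburst_day}")
```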

  4. PredictSNP: robust and accurate consensus classifier for prediction of disease-related mutations.

    Directory of Open Access Journals (Sweden)

    Jaroslav Bendl

    2014-01-01

    Full Text Available Single nucleotide variants represent a prevalent form of genetic variation. Mutations in the coding regions are frequently associated with the development of various genetic diseases. Computational tools for the prediction of the effects of mutations on protein function are very important for the analysis of single nucleotide variants and their prioritization for experimental characterization. Many computational tools are already widely employed for this purpose. Unfortunately, their comparison and further improvement is hindered by large overlaps between the training datasets and benchmark datasets, which lead to biased and overly optimistic reported performances. In this study, we have constructed three independent datasets by removing all duplicities, inconsistencies and mutations previously used in the training of evaluated tools. The benchmark dataset containing over 43,000 mutations was employed for the unbiased evaluation of eight established prediction tools: MAPP, nsSNPAnalyzer, PANTHER, PhD-SNP, PolyPhen-1, PolyPhen-2, SIFT and SNAP. The six best performing tools were combined into a consensus classifier, PredictSNP, resulting in significantly improved prediction performance while at the same time returning results for all mutations, confirming that consensus prediction represents an accurate and robust alternative to the predictions delivered by individual tools. A user-friendly web interface enables easy access to all eight prediction tools, the consensus classifier PredictSNP and annotations from the Protein Mutant Database and the UniProt database. The web server and the datasets are freely available to the academic community at http://loschmidt.chemi.muni.cz/predictsnp.
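
    The consensus idea can be sketched as a simple vote over the categorical outputs of several tools. The per-tool predictions below are placeholders, and PredictSNP itself additionally weights tools by their confidence scores, which this sketch does not attempt.

```python
# Sketch: majority-vote consensus over several mutation-effect predictors.
# Tool outputs below are placeholders; the real PredictSNP also weights
# tools by confidence rather than counting plain votes.
from collections import Counter

def consensus(predictions):
    """predictions: dict tool_name -> 'deleterious' | 'neutral' (or None)."""
    votes = Counter(p for p in predictions.values() if p is not None)
    if not votes:
        return "unknown", 0.0
    label, n = votes.most_common(1)[0]
    return label, n / sum(votes.values())

example = {
    "MAPP": "deleterious",
    "PhD-SNP": "deleterious",
    "PolyPhen-2": "neutral",
    "SIFT": "deleterious",
    "SNAP": None,                 # tool returned no result for this variant
    "PANTHER": "deleterious",
}
label, agreement = consensus(example)
print(f"consensus: {label} (agreement {agreement:.0%})")
```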

  5. Can numerical simulations accurately predict hydrodynamic instabilities in liquid films?

    Science.gov (United States)

    Denner, Fabian; Charogiannis, Alexandros; Pradas, Marc; van Wachem, Berend G. M.; Markides, Christos N.; Kalliadasis, Serafim

    2014-11-01

    Understanding the dynamics of hydrodynamic instabilities in liquid film flows is an active field of research in fluid dynamics and non-linear science in general. Numerical simulations offer a powerful tool to study hydrodynamic instabilities in film flows and can provide deep insights into the underlying physical phenomena. However, the direct comparison of numerical and experimental results is often hampered for several reasons. For instance, in numerical simulations the interface representation is problematic and the governing equations and boundary conditions may be oversimplified, whereas in experiments it is often difficult to extract accurate information on the fluid and its behavior, e.g. determining the fluid properties when the liquid contains particles for PIV measurements. In this contribution we present the latest results of our on-going, extensive study on hydrodynamic instabilities in liquid film flows, which includes direct numerical simulations, low-dimensional modelling as well as experiments. The major focus is on wave regimes, wave height and wave celerity as a function of Reynolds number and forcing frequency of a falling liquid film. Specific attention is paid to the differences between numerical and experimental results and the reasons for these differences. The authors are grateful to the EPSRC for their financial support (Grant EP/K008595/1).

  6. Predicting accurate absolute binding energies in aqueous solution

    DEFF Research Database (Denmark)

    Jensen, Jan Halborg

    2015-01-01

    Recent predictions of absolute binding free energies of host-guest complexes in aqueous solution using electronic structure theory have been encouraging for some systems, while other systems remain problematic. In this paper I summarize some of the many factors that could easily contribute 1-3 kcal... ...-represented by continuum models. While I focus on binding free energies in aqueous solution the approach also applies (with minor adjustments) to any free energy difference such as conformational or reaction free energy differences or activation free energies in any solvent.

  7. CFD-FEM coupling for accurate prediction of thermal fatigue

    International Nuclear Information System (INIS)

    Hannink, M.H.C.; Kuczaj, A.K.; Blom, F.J.; Church, J.M.; Komen, E.M.J.

    2009-01-01

    Thermal fatigue is a safety related issue in primary pipework systems of nuclear power plants. Life extension of current reactors and the design of a next generation of new reactors lead to growing importance of research in this direction. The thermal fatigue degradation mechanism is induced by temperature fluctuations in a fluid, which arise from mixing of hot and cold flows. Accompanying physical phenomena include thermal stratification, thermal striping, and turbulence [1]. Current plant instrumentation systems allow monitoring of possible causes such as stratification and temperature gradients at fatigue susceptible locations [1]. However, high-cycle temperature fluctuations associated with turbulent mixing cannot be adequately detected by common thermocouple instrumentation. For a proper evaluation of thermal fatigue, therefore, numerical simulations are necessary that couple instantaneous fluid and solid interactions. In this work, a strategy for the numerical prediction of thermal fatigue is presented. The approach couples Computational Fluid Dynamics (CFD) and the Finite Element Method (FEM). For the development of the computational approach, a classical test case for the investigation of thermal fatigue problems is studied, i.e. mixing in a T-junction. Due to turbulent mixing of hot and cold fluids in two perpendicularly connected pipes, temperature fluctuations arise in the mixing zone downstream in the flow. Subsequently, these temperature fluctuations are also induced in the pipes. The stresses that arise due to the fluctuations may eventually lead to thermal fatigue. In the first step of the applied procedure, the temperature fluctuations in both fluid and structure are calculated using the CFD method. Subsequently, the temperature fluctuations in the structure are imposed as thermal loads in a FEM model of the pipes. A mechanical analysis is then performed to determine the thermal stresses, which are used to predict the fatigue lifetime of the structure.

  8. "When does making detailed predictions make predictions worse?": Correction to Kelly and Simmons (2016).

    Science.gov (United States)

    2016-10-01

    Reports an error in "When Does Making Detailed Predictions Make Predictions Worse" by Theresa F. Kelly and Joseph P. Simmons ( Journal of Experimental Psychology: General , Advanced Online Publication, Aug 8, 2016, np). In the article, the symbols in Figure 2 were inadvertently altered in production. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-37952-001.) In this article, we investigate whether making detailed predictions about an event worsens other predictions of the event. Across 19 experiments, 10,896 participants, and 407,045 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes useless or redundant information more accessible and thus more likely to be incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of events will and will not be susceptible to the negative effect of making detailed predictions. PsycINFO Database Record (c) 2016 APA, all rights reserved

  9. Change in BMI accurately predicted by social exposure to acquaintances.

    Science.gov (United States)

    Oloritun, Rahman O; Ouarda, Taha B M J; Moturu, Sai; Madan, Anmol; Pentland, Alex Sandy; Khayal, Inas

    2013-01-01

    Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes the change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, automatically captured via mobile phones and survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R². This study found a model that explains 68% (p < 0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, and personal health-related information to explain change in BMI. This is the first study taking into account both interactions with different levels of social interaction and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also the significance of social interactions with people we are exposed to, even people we may not consider as close friends.

  10. Change in BMI accurately predicted by social exposure to acquaintances.

    Directory of Open Access Journals (Sweden)

    Rahman O Oloritun

    Full Text Available Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes the change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, automatically captured via mobile phones and survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R². This study found a model that explains 68% (p < 0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, and personal health-related information to explain change in BMI. This is the first study taking into account both interactions with different levels of social interaction and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also the significance of social interactions with people we are exposed to, even people we may not consider as
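
    The modeling workflow described in the two records above, LASSO-based variable selection followed by multiple linear regression and model comparison via AIC, can be sketched as follows. All data are synthetic stand-ins for the social-exposure and health variables, and the AIC formula shown is the standard least-squares form.

```python
# Sketch: LASSO variable selection, then ordinary linear regression on the
# selected variables, compared via AIC. Data are synthetic stand-ins for the
# social-exposure and health-related predictors of change in BMI.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(6)
n, p = 42, 10
X = rng.normal(size=(n, p))                      # candidate predictors
y = 0.8 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, n)   # change in BMI

lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)           # indices of kept predictors

def aic(X_sub, y):
    """AIC for a least-squares fit: n*ln(RSS/n) + 2k."""
    ols = LinearRegression().fit(X_sub, y)
    rss = np.sum((y - ols.predict(X_sub)) ** 2)
    k = X_sub.shape[1] + 1                       # coefficients + intercept
    return len(y) * np.log(rss / len(y)) + 2 * k

print("selected predictors:", selected)
print(f"AIC (selected) = {aic(X[:, selected], y):.1f}, "
      f"AIC (all) = {aic(X, y):.1f}")
```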

  11. Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties

    Science.gov (United States)

    Xie, Tian; Grossman, Jeffrey C.

    2018-04-01

    The use of machine learning methods for accelerating the design of crystalline materials usually requires manually constructed feature vectors or complex transformation of atom coordinates to input the crystal structure, which either constrains the model to certain crystal types or makes it difficult to provide chemical insights. Here, we develop a crystal graph convolutional neural networks framework to directly learn material properties from the connection of atoms in the crystal, providing a universal and interpretable representation of crystalline materials. Our method provides a highly accurate prediction of density functional theory calculated values for eight different properties of crystals with various structure types and compositions after being trained with 10^4 data points. Further, our framework is interpretable because one can extract the contributions from local chemical environments to global properties. Using an example of perovskites, we show how this information can be utilized to discover empirical rules for materials design.
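
    The basic data flow behind a crystal-graph network, atom feature vectors updated by a convolution over neighbouring atoms, pooled into a crystal vector and mapped to a property, can be sketched with plain NumPy as below. This omits the gating, edge features, and training of the actual CGCNN; the toy three-atom graph and random weights are purely illustrative.

```python
# Sketch: one generic message-passing (graph-convolution) step, pooling, and
# a linear readout - the data flow behind crystal-graph networks, without
# CGCNN's gating/edge features. Random weights, toy graph, no training.
import numpy as np

rng = np.random.default_rng(7)
n_atoms, f_in, f_hidden = 3, 4, 8

atom_feat = rng.normal(size=(n_atoms, f_in))    # initial atom feature vectors
adjacency = np.array([[0, 1, 1],                # bonds/neighbours in the graph
                      [1, 0, 1],
                      [1, 1, 0]], dtype=float)

W_self = rng.normal(size=(f_in, f_hidden))
W_nbr = rng.normal(size=(f_in, f_hidden))

# convolution: combine each atom's features with the sum of its neighbours'
hidden = np.tanh(atom_feat @ W_self + adjacency @ atom_feat @ W_nbr)

crystal_vec = hidden.mean(axis=0)               # pooling over atoms
w_out = rng.normal(size=f_hidden)
predicted_property = float(crystal_vec @ w_out) # scalar property readout
print(f"predicted property (untrained toy network): {predicted_property:.3f}")
```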

  12. Towards accurate performance prediction of a vertical axis wind turbine operating at different tip speed ratios

    NARCIS (Netherlands)

    Rezaeiha, A.; Kalkman, I.; Blocken, B.J.E.

    2017-01-01

    Accurate prediction of the performance of a vertical-axis wind turbine (VAWT) using CFD simulation requires the employment of a sufficiently fine azimuthal increment (dθ) combined with a mesh size at which essential flow characteristics can be accurately resolved. Furthermore, the domain size needs

  13. Prediction of Accurate Mixed Mode Fatigue Crack Growth Curves using the Paris' Law

    Science.gov (United States)

    Sajith, S.; Krishna Murthy, K. S. R.; Robi, P. S.

    2017-12-01

    Accurate information regarding crack growth times and structural strength as a function of crack size is mandatory in damage tolerance analysis. Various equivalent stress intensity factor (SIF) models are available for the prediction of mixed-mode fatigue life using the Paris' law. In the present investigation these models have been compared to assess how closely each predicts the experimentally observed life, since no guidelines/suggestions are available on the selection of these models for accurate and/or conservative predictions of fatigue life. Within the limitations of the available experimental data and currently available numerical simulation techniques, the present study attempts to outline models that would provide accurate and conservative life predictions.
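
    A worked example of the life prediction itself may help: once an equivalent stress intensity factor range is formed, the Paris law da/dN = C·(ΔK_eq)^m is integrated from the initial to the critical crack length. The material constants, stress range, and geometry factor below are illustrative values, and using a single fixed geometry factor is a simplification; the models compared in the paper differ in how ΔK_eq is built from the mode-I and mode-II components.

```python
# Sketch: fatigue-life estimation by integrating the Paris law
#   da/dN = C * (dK)^m,  with dK = Y * dSigma * sqrt(pi * a),
# from an initial to a critical crack length. All values are illustrative.
import numpy as np

C, m = 1.0e-11, 3.0        # Paris constants (da/dN in m/cycle, K in MPa*sqrt(m))
Y = 1.12                   # geometry factor (held constant for simplicity)
d_sigma = 80.0             # stress range, MPa
a0, ac = 1e-3, 20e-3       # initial and critical crack lengths, m

a, N, da = a0, 0.0, 1e-5   # integrate with a small crack-length step
while a < ac:
    dK = Y * d_sigma * np.sqrt(np.pi * a)
    N += da / (C * dK ** m)    # cycles spent growing the crack by da
    a += da

print(f"predicted life ~ {N:,.0f} cycles from a0={a0*1e3:.1f} mm to ac={ac*1e3:.0f} mm")
```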

  14. Accurate Prediction of Motor Failures by Application of Multi CBM Tools: A Case Study

    Science.gov (United States)

    Dutta, Rana; Singh, Veerendra Pratap; Dwivedi, Jai Prakash

    2018-02-01

    Motor failures are very difficult to predict accurately with a single condition-monitoring tool because the electrical and mechanical systems are closely related. Electrical problems, such as phase unbalance and stator winding insulation failures, can at times lead to vibration problems, while mechanical failures, such as bearing failure, lead to rotor eccentricity. In this case study of a 550 kW blower motor it has been shown that a rotor bar crack was detected by current signature analysis and vibration monitoring confirmed the same. In later months, in a similar motor, vibration monitoring predicted a bearing failure and current signature analysis confirmed the same. In both cases, after dismantling the motor, the predictions were found to be accurate. In this paper we discuss the accurate prediction of motor failures through the use of multiple condition monitoring tools, with two case studies.

  15. Influential Factors for Accurate Load Prediction in a Demand Response Context

    DEFF Research Database (Denmark)

    Wollsen, Morten Gill; Kjærgaard, Mikkel Baun; Jørgensen, Bo Nørregaard

    2016-01-01

    Accurate prediction of a building's electricity load is crucial to respond to Demand Response events with an assessable load change. However, previous work on load prediction fails to consider a wider set of possible data sources. In this paper we study different data scenarios to map the influence... Next, the time of day that is being predicted greatly influences the prediction, which is related to the weather pattern. By presenting these results we hope to improve the modeling of building loads and algorithms for Demand Response planning.

  16. Does the emergency surgery score accurately predict outcomes in emergent laparotomies?

    Science.gov (United States)

    Peponis, Thomas; Bohnen, Jordan D; Sangji, Naveen F; Nandan, Anirudh R; Han, Kelsey; Lee, Jarone; Yeh, D Dante; de Moya, Marc A; Velmahos, George C; Chang, David C; Kaafarani, Haytham M A

    2017-08-01

    The emergency surgery score is a mortality-risk calculator for emergency general operation patients. We sought to examine whether the emergency surgery score predicts 30-day morbidity and mortality in a high-risk group of patients undergoing emergent laparotomy. Using the 2011-2012 American College of Surgeons National Surgical Quality Improvement Program database, we identified all patients who underwent emergent laparotomy using (1) the American College of Surgeons National Surgical Quality Improvement Program definition of "emergent," and (2) all Current Procedural Terminology codes denoting a laparotomy, excluding aortic aneurysm rupture. Multivariable logistic regression analyses were performed to measure the correlation (c-statistic) between the emergency surgery score and (1) 30-day mortality, and (2) 30-day morbidity after emergent laparotomy. As sensitivity analyses, the correlation between the emergency surgery score and 30-day mortality was also evaluated in prespecified subgroups based on Current Procedural Terminology codes. A total of 26,410 emergent laparotomy patients were included. Thirty-day mortality and morbidity were 10.2% and 43.8%, respectively. The emergency surgery score correlated well with mortality (c-statistic = 0.84); scores of 1, 11, and 22 correlated with mortalities of 0.4%, 39%, and 100%, respectively. Similarly, the emergency surgery score correlated well with morbidity (c-statistic = 0.74); scores of 0, 7, and 11 correlated with complication rates of 13%, 58%, and 79%, respectively. The morbidity rates plateaued for scores higher than 11. Sensitivity analyses demonstrated that the emergency surgery score effectively predicts mortality in patients undergoing emergent (1) splenic, (2) gastroduodenal, (3) intestinal, (4) hepatobiliary, or (5) incarcerated ventral hernia operation. The emergency surgery score accurately predicts outcomes in all types of emergent laparotomy patients and may prove valuable as a bedside decision-making

  17. Brain Stimulation Reward Supports More Consistent and Accurate Rodent Decision-Making than Food Reward.

    Science.gov (United States)

    McMurray, Matthew S; Conway, Sineadh M; Roitman, Jamie D

    2017-01-01

    Animal models of decision-making rely on an animal's motivation to decide and its ability to detect differences among various alternatives. Food reinforcement, although commonly used, is associated with problematic confounds, especially satiety. Here, we examined the use of brain stimulation reward (BSR) as an alternative reinforcer in rodent models of decision-making and compared it with the effectiveness of sugar pellets. The discriminability of various BSR frequencies was compared to differing numbers of sugar pellets in separate free-choice tasks. We found that BSR was more discriminable and motivated greater task engagement and more consistent preference for the larger reward. We then investigated whether rats prefer BSR of varying frequencies over sugar pellets. We found that animals showed either a clear preference for sugar reward or no preference between reward modalities, depending on the frequency of the BSR alternative and the size of the sugar reward. Overall, these results suggest that BSR is an effective reinforcer in rodent decision-making tasks, removing food-related confounds and resulting in more accurate, consistent, and reliable metrics of choice.

  18. Heart rate during basketball game play and volleyball drills accurately predicts oxygen uptake and energy expenditure.

    Science.gov (United States)

    Scribbans, T D; Berg, K; Narazaki, K; Janssen, I; Gurd, B J

    2015-09-01

    There is currently little information regarding the ability of metabolic prediction equations to accurately predict oxygen uptake and exercise intensity from heart rate (HR) during intermittent sport. The purpose of the present study was to develop and cross-validate equations appropriate for accurately predicting oxygen cost (VO2) and energy expenditure from HR during intermittent sport participation. Eleven healthy adult males (19.9±1.1 yrs) were recruited to establish the relationship between %VO2peak and %HRmax during low-intensity steady state endurance (END), moderate-intensity interval (MOD) and high-intensity interval exercise (HI), as performed on a cycle ergometer. Three equations (END, MOD, and HI) for predicting %VO2peak based on %HRmax were developed. HR and VO2 were directly measured during basketball games (6 male, 20.8±1.0 yrs; 6 female, 20.0±1.3 yrs) and volleyball drills (12 female; 20.8±1.0 yrs). Comparisons were made between measured and predicted VO2 and energy expenditure using the 3 equations developed and 2 previously published equations. The END and MOD equations accurately predicted VO2 and energy expenditure, while the HI equation underestimated, and the previously published equations systematically overestimated, VO2 and energy expenditure. Intermittent sport VO2 and energy expenditure can be accurately predicted from heart rate data using either the END (%VO2peak = %HRmax x 1.008 - 17.17) or MOD (%VO2peak = %HRmax x 1.2 - 32) equations. These 2 simple equations provide an accessible and cost-effective method for accurate estimation of exercise intensity and energy expenditure during intermittent sport.
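
    The two equations quoted above can be applied directly to heart-rate data, as in the sketch below. The player's HRmax, VO2peak, body mass, playing time, and the roughly 5 kcal per litre of O2 conversion are assumed example values, not data from the study.

```python
# Sketch: apply the END/MOD equations quoted above to convert heart rate
# into %VO2peak, absolute VO2 and energy expenditure.
# HRmax, VO2peak, body mass, playing time and the ~5 kcal/L O2 factor are
# assumed example values.
def percent_vo2peak(percent_hrmax, mode="END"):
    if mode == "END":                    # %VO2peak = %HRmax * 1.008 - 17.17
        return percent_hrmax * 1.008 - 17.17
    if mode == "MOD":                    # %VO2peak = %HRmax * 1.2 - 32
        return percent_hrmax * 1.2 - 32.0
    raise ValueError("mode must be 'END' or 'MOD'")

hr, hr_max = 168, 196                    # example in-game heart rate, bpm
vo2peak = 48.0                           # ml/kg/min, assumed
mass = 82.0                              # kg, assumed
minutes = 40.0                           # playing time, assumed

pct = percent_vo2peak(100 * hr / hr_max, mode="MOD")
vo2 = pct / 100 * vo2peak * mass / 1000          # litres O2 per minute
kcal = vo2 * 5.0 * minutes                       # ~5 kcal per litre O2
print(f"{pct:.0f}% VO2peak, {vo2:.2f} L/min, ~{kcal:.0f} kcal in {minutes:.0f} min")
```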

  19. Accurate First-Principles Spectra Predictions for Planetological and Astrophysical Applications at Various T-Conditions

    Science.gov (United States)

    Rey, M.; Nikitin, A. V.; Tyuterev, V.

    2014-06-01

    Knowledge of near infrared intensities of rovibrational transitions of polyatomic molecules is essential for the modeling of various planetary atmospheres, brown dwarfs and for other astrophysical applications [1,2,3]. For example, to analyze exoplanets, atmospheric models have been developed, creating the need for accurate spectroscopic data. Consequently, the spectral characterization of such planetary objects relies on having adequate and reliable molecular data under extreme conditions (temperature, optical path length, pressure). On the other hand, in the modeling of astrophysical opacities, millions of lines are generally involved and the line-by-line extraction is clearly not feasible in laboratory measurements. It is thus suggested that this large amount of data could be interpreted only by reliable theoretical predictions. There exist essentially two theoretical approaches for the computation and prediction of spectra. The first one is based on empirically-fitted effective spectroscopic models. Another way of computing energies, line positions and intensities is based on global variational calculations using ab initio surfaces. They do not yet reach the spectroscopic accuracy stricto sensu but implicitly account for all intramolecular interactions including resonance couplings in a wide spectral range. The final aim of this work is to provide reliable predictions which could be quantitatively accurate with respect to the precision of available observations and as complete as possible. All this thus requires extensive first-principles quantum mechanical calculations essentially based on three necessary ingredients which are (i) accurate intramolecular potential energy surface and dipole moment surface components well-defined in a large range of vibrational displacements and (ii) efficient computational methods combined with suitable choices of coordinates to account for molecular symmetry properties and to achieve a good numerical

  20. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have introduced new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to patients. Predictive models in health care are likewise influenced by new technologies for predicting different disease outcomes. However, existing predictive models still suffer from limitations in their predictive performance. To improve predictive performance, this paper proposes a predictive model that classifies disease predictions into different categories. The model is evaluated on traumatic brain injury (TBI) datasets. TBI is a serious condition worldwide and needs attention because of its severe impact on human life. The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed, and its features approved, by neurologists. The experimental results show that the proposed model achieves strong accuracy, sensitivity, and specificity.

  1. NNLOPS accurate predictions for $W^+W^-$ production arXiv

    CERN Document Server

    Re, Emanuele; Zanderighi, Giulia

    We present novel predictions for the production of $W^+W^-$ pairs in hadron collisions that are next-to-next-to-leading order accurate and consistently matched to a parton shower (NNLOPS). All diagrams that lead to the process $pp\\to e^- \\bar \

  2. Towards cycle-accurate performance predictions for real-time embedded systems

    NARCIS (Netherlands)

    Triantafyllidis, K.; Bondarev, E.; With, de P.H.N.; Arabnia, H.R.; Deligiannidis, L.; Jandieri, G.

    2013-01-01

    In this paper we present a model-based performance analysis method for component-based real-time systems, featuring cycle-accurate predictions of latencies and enhanced system robustness. The method incorporates the following phases: (a) instruction-level profiling of SW components, (b) modeling the

  3. Fuzzy Predictions for Strategic Decision Making

    DEFF Research Database (Denmark)

    Hallin, Carina Antonia; Andersen, Torben Juul; Tveterås, Sigbjørn

    This article theorizes a new way to predict firm performance based on aggregation of sensing among frontline employees about changes in operational capabilities to update strategic action plans. We frame the approach in the context of first- and second-generation prediction markets and outline it...

  4. ASTRAL, DRAGON and SEDAN scores predict stroke outcome more accurately than physicians.

    Science.gov (United States)

    Ntaios, G; Gioulekas, F; Papavasileiou, V; Strbian, D; Michel, P

    2016-11-01

    ASTRAL, SEDAN and DRAGON scores are three well-validated scores for stroke outcome prediction. Whether these scores predict stroke outcome more accurately compared with physicians interested in stroke was investigated. Physicians interested in stroke were invited to an online anonymous survey to provide outcome estimates in randomly allocated structured scenarios of recent real-life stroke patients. Their estimates were compared to the scores' predictions in the same scenarios. An estimate was considered accurate if it was within the 95% confidence interval of the actual outcome. In all, 244 participants from 32 different countries responded, assessing 720 real scenarios and 2636 outcomes. The majority of physicians' estimates were inaccurate (1422/2636, 53.9%). 400 (56.8%) of physicians' estimates about the percentage probability of 3-month modified Rankin score (mRS) > 2 were accurate compared with 609 (86.5%) of the ASTRAL score estimates. Physicians' estimates were likewise less frequently accurate than the DRAGON and SEDAN score estimates. ASTRAL, DRAGON and SEDAN scores predict outcome of acute ischaemic stroke patients with higher accuracy compared with physicians interested in stroke. © 2016 EAN.

  5. Accurate wavelength prediction of photonic crystal resonant reflection and applications in refractive index measurement

    DEFF Research Database (Denmark)

    Hermannsson, Pétur Gordon; Vannahme, Christoph; Smith, Cameron L. C.

    2014-01-01

    In the past decade, photonic crystal resonant reflectors have been increasingly used as the basis for label-free biochemical assays in lab-on-a-chip applications. In both designing and interpreting experimental results, an accurate model describing the optical behavior of such structures is essential. Here, an analytical method for precisely predicting the absolute positions of resonantly reflected wavelengths is presented. The model is experimentally verified to be highly accurate using nanoreplicated, polymer-based photonic crystal grating reflectors with varying grating periods and superstrate materials. The importance of accounting for material dispersion in order to obtain accurate simulation results is highlighted, and a method for doing so using an iterative approach is demonstrated. Furthermore, an application for the model is demonstrated, in which the material dispersion...

  6. A highly accurate predictive-adaptive method for lithium-ion battery remaining discharge energy prediction in electric vehicle applications

    International Nuclear Information System (INIS)

    Liu, Guangming; Ouyang, Minggao; Lu, Languang; Li, Jianqiu; Hua, Jianfeng

    2015-01-01

    Highlights: • An energy prediction (EP) method is introduced for battery E_RDE determination. • EP determines E_RDE through coupled prediction of future states, parameters, and output. • The PAEP combines parameter adaptation and prediction to update model parameters. • The PAEP provides improved E_RDE accuracy compared with DC and other EP methods. - Abstract: In order to estimate the remaining driving range (RDR) in electric vehicles, the remaining discharge energy (E_RDE) of the applied battery system needs to be precisely predicted. Strongly affected by the load profiles, the available E_RDE varies largely in real-world applications and requires specific determination. However, the commonly-used direct calculation (DC) method might result in certain energy prediction errors by relating the E_RDE directly to the current state of charge (SOC). To enhance the E_RDE accuracy, this paper presents a battery energy prediction (EP) method based on predictive control theory, in which a coupled prediction of future battery state variation, battery model parameter change, and voltage response is implemented on the E_RDE prediction horizon, and the E_RDE is subsequently accumulated and optimized in real time. Three EP approaches with different model parameter updating routes are introduced, and the predictive-adaptive energy prediction (PAEP) method combining the real-time parameter identification and the future parameter prediction offers the best potential. Based on a large-format lithium-ion battery, the performance of different E_RDE calculation methods is compared under various dynamic profiles. Results imply that the EP methods provide much better accuracy than the traditional DC method, and the PAEP could reduce the E_RDE error by more than 90% and guarantee the relative energy prediction error under 2%, making it a proper choice in online E_RDE prediction. The correlation of SOC estimation and E_RDE calculation is then discussed to illustrate the
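
    As a rough illustration of the energy-prediction idea (not the paper's identified battery model), the sketch below accumulates discharge energy over an assumed future load profile with a simple open-circuit-voltage/resistance model; all parameters are placeholders:

```python
# Sketch of the energy-prediction idea: simulate the battery forward over an
# assumed future current profile and accumulate discharge energy until a
# voltage or SOC cutoff. Model form and parameter values are assumptions.
import numpy as np

def predict_E_RDE(soc, i_profile, dt=1.0, q_ah=60.0, r_ohm=0.002, v_cut=3.0):
    ocv = lambda s: 3.2 + 0.9 * s            # assumed open-circuit-voltage curve
    energy_wh = 0.0
    for i in i_profile:                      # predicted future current (A, discharge > 0)
        v = ocv(soc) - i * r_ohm             # predicted terminal voltage
        if v <= v_cut or soc <= 0.0:
            break
        energy_wh += v * i * dt / 3600.0     # accumulate discharge energy (Wh)
        soc -= i * dt / (q_ah * 3600.0)      # coulomb counting for the future state
    return energy_wh

profile = np.full(3600 * 2, 30.0)            # hypothetical 2 h, 30 A discharge
print(f"predicted E_RDE ~ {predict_E_RDE(0.8, profile):.1f} Wh")
```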

  7. LocARNA-P: Accurate boundary prediction and improved detection of structural RNAs

    DEFF Research Database (Denmark)

    Will, Sebastian; Joshi, Tejal; Hofacker, Ivo L.

    2012-01-01

    Current genomic screens for noncoding RNAs (ncRNAs) predict a large number of genomic regions containing potential structural ncRNAs. The analysis of these data requires highly accurate prediction of ncRNA boundaries and discrimination of promising candidate ncRNAs from weak predictions. Existing methods struggle with these goals because they rely on sequence-based multiple sequence alignments, which regularly misalign RNA structure and therefore do not support identification of structural similarities. To overcome this limitation, we compute columnwise and global reliabilities of alignments based on sequence and structure similarity; we refer to these structure-based alignment reliabilities as STARs. The columnwise STARs of alignments, or STAR profiles, provide a versatile tool for the manual and automatic analysis of ncRNAs. In particular, we improve the boundary prediction of the widely used nc...

  8. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    International Nuclear Information System (INIS)

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-01-01

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. In using a set of 600 elpasolite compounds as an example dataset and using semi-local and hybrid exchange correlation functionals within density functional theory as two levels of fidelities, we demonstrate the excellent learning performance of the method against actual high fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.
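
    A simplified stand-in for the multi-fidelity idea is sketched below: a Gaussian process that maps descriptors plus the low-fidelity bandgap to the high-fidelity value, with predictive uncertainty. It is not the paper's co-kriging formulation, and all data are synthetic:

```python
# Simplified stand-in for multi-fidelity learning (delta/feature-augmented GP,
# not the co-kriging of the paper): use a cheap low-fidelity bandgap as an
# extra input when predicting the expensive high-fidelity bandgap.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.random((200, 5))                                          # synthetic compound descriptors
gap_lo = X @ np.array([1.0, 0.5, 0.2, 0.1, 0.05])                 # "semi-local DFT" gaps (synthetic)
gap_hi = 1.3 * gap_lo + 0.4 + 0.05 * rng.standard_normal(200)     # "hybrid functional" gaps (synthetic)

features = np.column_stack([X, gap_lo])                           # low-fidelity value as extra feature
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(features[:150], gap_hi[:150])

mean, std = gpr.predict(features[150:], return_std=True)          # prediction plus uncertainty
print(f"mean abs. error: {np.abs(mean - gap_hi[150:]).mean():.3f} eV (synthetic)")
```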

  9. Rapid and accurate prediction and scoring of water molecules in protein binding sites.

    Directory of Open Access Journals (Sweden)

    Gregory A Ross

    Full Text Available Water plays a critical role in ligand-protein interactions. However, it is still challenging to predict accurately not only where water molecules prefer to bind, but also which of those water molecules might be displaceable. The latter is often seen as a route to optimizing affinity of potential drug candidates. Using a protocol we call WaterDock, we show that the freely available AutoDock Vina tool can be used to predict accurately the binding sites of water molecules. WaterDock was validated using data from X-ray crystallography, neutron diffraction and molecular dynamics simulations and correctly predicted 97% of the water molecules in the test set. In addition, we combined data-mining, heuristic and machine learning techniques to develop probabilistic water molecule classifiers. When applied to WaterDock predictions in the Astex Diverse Set of protein ligand complexes, we could identify whether a water molecule was conserved or displaced to an accuracy of 75%. A second model predicted whether water molecules were displaced by polar groups or by non-polar groups to an accuracy of 80%. These results should prove useful for anyone wishing to undertake rational design of new compounds where the displacement of water molecules is being considered as a route to improved affinity.

  10. An accurate model for numerical prediction of piezoelectric energy harvesting from fluid structure interaction problems

    International Nuclear Information System (INIS)

    Amini, Y; Emdad, H; Farid, M

    2014-01-01

    Piezoelectric energy harvesting (PEH) from ambient energy sources, particularly vibrations, has attracted considerable interest throughout the last decade. Since fluid flow has a high energy density, it is one of the best candidates for PEH. Indeed, a piezoelectric energy harvesting process from the fluid flow takes the form of natural three-way coupling of the turbulent fluid flow, the electromechanical effect of the piezoelectric material and the electrical circuit. There are some experimental and numerical studies about piezoelectric energy harvesting from fluid flow in literatures. Nevertheless, accurate modeling for predicting characteristics of this three-way coupling has not yet been developed. In the present study, accurate modeling for this triple coupling is developed and validated by experimental results. A new code based on this modeling in an openFOAM platform is developed. (paper)

  11. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    International Nuclear Information System (INIS)

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; Lilienfeld, O. Anatole von; Müller, Klaus-Robert; Tkatchenko, Alexandre

    2015-01-01

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the 'holy grail' of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies

  12. Accurate approximation method for prediction of class I MHC affinities for peptides of length 8, 10 and 11 using prediction tools trained on 9mers

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2008-01-01

    Several accurate prediction systems have been developed for prediction of class I major histocompatibility complex (MHC):peptide binding. Most of these are trained on binding affinity data of primarily 9mer peptides. Here, we show how prediction methods trained on 9mer data can be used for accurate...

  13. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    Science.gov (United States)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variation of experimental data sets, such as UH60-A data, DNW test data and HART II test data.

  14. The MIDAS touch for Accurately Predicting the Stress-Strain Behavior of Tantalum

    Energy Technology Data Exchange (ETDEWEB)

    Jorgensen, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-03-02

    Testing the behavior of metals in extreme environments is not always feasible, so materials scientists use models to try to predict the behavior. To achieve accurate results it is necessary to use the appropriate model and material-specific parameters. This research evaluated the performance of six material models available in the MIDAS database [1] to determine at which temperatures and strain rates they perform best, and to determine to which experimental data their parameters were optimized. Additionally, parameters were optimized for the Johnson-Cook model using experimental data from Lassila et al [2].

  15. Accurate bearing remaining useful life prediction based on Weibull distribution and artificial neural network

    Science.gov (United States)

    Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat

    2015-05-01

    Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance to improve reliability and decrease machine breakdowns and maintenance costs. Bearings are among the most important components in industry that need to be monitored, and users should be able to predict their RUL. The aim of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL with Prognostics and Health Management (PHM) techniques. The proposed method follows the data-driven prognostic approach: the combination of a Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and the Weibull distribution (WD) is explored. WD is used only in the training phase to fit the measurements and to avoid areas of fluctuation in the time domain. The SFAM training process uses fitted measurements at the present and previous inspection time points as input, whereas the SFAM testing process uses real measurements at the present and previous inspections. Thanks to the fuzzy learning process, SFAM has a strong ability to learn nonlinear time series. As output, seven classes are defined: healthy bearing and six states of bearing degradation. To find the optimal RUL prediction, a smoothing phase is also proposed. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) from vibration signals, and the approach can be applied to the prognostics of various other mechanical assets.
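
    The Weibull-fitting step can be illustrated by smoothing a fluctuating degradation indicator before it is used for training. The curve form, times, and values below are assumptions, not the paper's bearing measurements:

```python
# Sketch of the Weibull-fitting step: fit a Weibull-type curve over time to a
# noisy health indicator so that a fluctuation-free version can be used as a
# training input. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(t, lam, k):
    return 1.0 - np.exp(-(t / lam) ** k)

t = np.linspace(1, 300, 300)                       # inspection times (hours, hypothetical)
rng = np.random.default_rng(2)
health = weibull_cdf(t, 200.0, 3.0) + 0.03 * rng.standard_normal(t.size)

(lam, k), _ = curve_fit(weibull_cdf, t, health, p0=[150.0, 2.0])
smoothed = weibull_cdf(t, lam, k)                  # fluctuation-free indicator for training
print(f"fitted scale = {lam:.1f}, shape = {k:.2f}")
```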

  16. Accurate prediction of severe allergic reactions by a small set of environmental parameters (NDVI, temperature).

    Science.gov (United States)

    Notas, George; Bariotakis, Michail; Kalogrias, Vaios; Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

    Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the lives of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation in a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could also probably be used for the prediction of other environment-related diseases and conditions.

  17. XenoSite: accurately predicting CYP-mediated sites of metabolism with neural networks.

    Science.gov (United States)

    Zaretzki, Jed; Matlock, Matthew; Swamidass, S Joshua

    2013-12-23

    Understanding how xenobiotic molecules are metabolized is important because it influences the safety, efficacy, and dose of medicines and how they can be modified to improve these properties. The cytochrome P450s (CYPs) are proteins responsible for metabolizing 90% of drugs on the market, and many computational methods can predict which atomic sites of a molecule--sites of metabolism (SOMs)--are modified during CYP-mediated metabolism. This study improves on prior methods of predicting CYP-mediated SOMs by using new descriptors and machine learning based on neural networks. The new method, XenoSite, is faster to train and more accurate by as much as 4% or 5% for some isozymes. Furthermore, some "incorrect" predictions made by XenoSite were subsequently validated as correct predictions by revaluation of the source literature. Moreover, XenoSite output is interpretable as a probability, which reflects both the confidence of the model that a particular atom is metabolized and the statistical likelihood that its prediction for that atom is correct.

  18. An Interpretable Machine Learning Model for Accurate Prediction of Sepsis in the ICU.

    Science.gov (United States)

    Nemati, Shamim; Holder, Andre; Razmi, Fereshteh; Stanley, Matthew D; Clifford, Gari D; Buchman, Timothy G

    2018-04-01

    Sepsis is among the leading causes of morbidity, mortality, and cost overruns in critically ill patients. Early intervention with antibiotics improves survival in septic patients. However, no clinically validated system exists for real-time prediction of sepsis onset. We aimed to develop and validate an Artificial Intelligence Sepsis Expert algorithm for early prediction of sepsis. Observational cohort study. Academic medical center from January 2013 to December 2015. Over 31,000 admissions to the ICUs at two Emory University hospitals (development cohort), in addition to over 52,000 ICU patients from the publicly available Medical Information Mart for Intensive Care-III ICU database (validation cohort). Patients who met the Third International Consensus Definitions for Sepsis (Sepsis-3) prior to or within 4 hours of their ICU admission were excluded, resulting in roughly 27,000 and 42,000 patients within our development and validation cohorts, respectively. None. High-resolution vital signs time series and electronic medical record data were extracted. A set of 65 features (variables) were calculated on an hourly basis and passed to the Artificial Intelligence Sepsis Expert algorithm to predict onset of sepsis in the following T hours (where T = 12, 8, 6, or 4). Artificial Intelligence Sepsis Expert was used to predict onset of sepsis in the following T hours and to produce a list of the most significant contributing factors. For the 12-, 8-, 6-, and 4-hour ahead prediction of sepsis, Artificial Intelligence Sepsis Expert achieved area under the receiver operating characteristic curve in the range of 0.83-0.85. Performance of the Artificial Intelligence Sepsis Expert on the development and validation cohorts was indistinguishable. Using data available in the ICU in real-time, Artificial Intelligence Sepsis Expert can accurately predict the onset of sepsis in an ICU patient 4-12 hours prior to clinical recognition. A prospective study is necessary to determine the
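
    A minimal sketch of this kind of evaluation setup, with synthetic hourly features and labels standing in for the EMR data: each observation is labeled positive if onset occurs within the next T hours, and discrimination is summarized by AUROC:

```python
# Sketch: lead-time labeling and AUROC evaluation for onset prediction.
# Features and labels are synthetic; the model here is a plain logistic
# regression, not the algorithm described in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
T = 6                                               # prediction horizon in hours
X = rng.standard_normal((5000, 65))                 # 65 hourly features per patient-hour
risk = X[:, :5].sum(axis=1)                         # synthetic latent risk
y = (risk + rng.standard_normal(5000) > 2.0).astype(int)   # "onset within the next T hours"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUROC at T = {T} h: {auroc:.2f} (synthetic data)")
```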

  19. An Extrapolation of a Radical Equation More Accurately Predicts Shelf Life of Frozen Biological Matrices.

    Science.gov (United States)

    De Vore, Karl W; Fatahi, Nadia M; Sass, John E

    2016-08-01

    Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or a radical equation y = B1·x^0.5 + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was best fit by a radical equation of the form y = B1·x^0.5 + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics, but to changes in the matrix during storage. Predicting shelf life of frozen products through extrapolation of early initial real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than a typical accelerated testing protocol, there are fewer potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
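
    The two extrapolation strategies can be compared on synthetic recovery data: fit the early portion of real-time storage data with ordinary least squares or with the radical equation y = B1·x^0.5 + B0, then extrapolate to the end of shelf life. The recovery curve below is an assumed example, not the study's analyte data:

```python
# Sketch: linear vs radical-equation extrapolation of early real-time
# recovery data to the full shelf life. Data are synthetic.
import numpy as np

months = np.arange(1, 37)                          # 36-month target shelf life
true = 100 - 2.5 * np.sqrt(months)                 # assumed slowing loss of recovery (%)
rng = np.random.default_rng(4)
obs = true + 0.3 * rng.standard_normal(months.size)

early = months <= 7                                # roughly 20% of shelf life observed so far
B1, B0 = np.polyfit(np.sqrt(months[early]), obs[early], 1)   # radical-equation fit
m1, m0 = np.polyfit(months[early], obs[early], 1)            # ordinary least squares

print(f"radical extrapolation at 36 mo: {B1 * np.sqrt(36) + B0:.1f}%")
print(f"linear  extrapolation at 36 mo: {m1 * 36 + m0:.1f}%")
```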

  20. Microbiome Data Accurately Predicts the Postmortem Interval Using Random Forest Regression Models

    Directory of Open Access Journals (Sweden)

    Aeriel Belk

    2018-02-01

    Full Text Available Death investigations often include an effort to establish the postmortem interval (PMI) in cases in which the time of death is uncertain. The postmortem interval can lead to the identification of the deceased and the validation of witness statements and suspect alibis. Recent research has demonstrated that microbes provide an accurate clock that starts at death and relies on ecological change in the microbial communities that normally inhabit a body and its surrounding environment. Here, we explore how to build the most robust Random Forest regression models for prediction of PMI by testing models built on different sample types (gravesoil, skin of the torso, skin of the head), gene markers (16S ribosomal RNA (rRNA), 18S rRNA, internal transcribed spacer regions (ITS)), and taxonomic levels (sequence variants, species, genus, etc.). We also tested whether particular suites of indicator microbes were informative across different datasets. Generally, results indicate that the most accurate models for predicting PMI were built using gravesoil and skin data using the 16S rRNA genetic marker at the taxonomic level of phyla. Additionally, several phyla consistently contributed highly to model accuracy and may be candidate indicators of PMI.
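
    A minimal sketch of the modeling approach, with a synthetic abundance table standing in for the 16S rRNA phylum-level data:

```python
# Sketch: Random Forest regression of postmortem interval (days) on microbial
# taxon abundances, scored by cross-validated mean absolute error. Data are
# synthetic placeholders for the study's abundance tables.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_samples, n_taxa = 120, 30
abundances = rng.random((n_samples, n_taxa))               # relative abundances (synthetic)
pmi_days = 25 * abundances[:, 0] + 10 * abundances[:, 1] + rng.normal(0, 2, n_samples)

model = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(model, abundances, pmi_days, cv=5,
                         scoring="neg_mean_absolute_error")
print(f"cross-validated MAE ~ {-scores.mean():.1f} days (synthetic)")
```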

  1. Predicting Falls in People with Multiple Sclerosis: Fall History Is as Accurate as More Complex Measures

    Directory of Open Access Journals (Sweden)

    Michelle H. Cameron

    2013-01-01

    Full Text Available Background. Many people with MS fall, but the best method for identifying those at increased fall risk is not known. Objective. To compare how accurately fall history, questionnaires, and physical tests predict future falls and injurious falls in people with MS. Methods. 52 people with MS were asked if they had fallen in the past 2 months and the past year. Subjects were also assessed with the Activities-specific Balance Confidence, Falls Efficacy Scale-International, and Multiple Sclerosis Walking Scale-12 questionnaires, the Expanded Disability Status Scale, Timed 25-Foot Walk, and computerized dynamic posturography and recorded their falls daily for the following 6 months with calendars. The ability of baseline assessments to predict future falls was compared using receiver operator curves and logistic regression. Results. All tests individually provided similar fall prediction (area under the curve (AUC) 0.60–0.75). A fall in the past year was the best predictor of falls (AUC 0.75, sensitivity 0.89, specificity 0.56) or injurious falls (AUC 0.69, sensitivity 0.96, specificity 0.41) in the following 6 months. Conclusion. Simply asking people with MS if they have fallen in the past year predicts future falls and injurious falls as well as more complex, expensive, or time-consuming approaches.
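
    The comparison amounts to computing an AUC for a single yes/no predictor (a fall in the past year) and for a continuous measure, both scored against falls during follow-up. A sketch on synthetic data (the correlation structure and measure below are assumptions):

```python
# Sketch: AUC of a binary predictor vs a continuous measure against falls
# during follow-up. All values are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
fell_followup = (rng.random(52) < 0.5).astype(int)             # falls during 6-month follow-up
# binary predictor: reported a fall in the past year (correlated with outcome)
fall_history = np.where(rng.random(52) < 0.8, fell_followup, 1 - fell_followup)
walk_time = 6 + 2 * fell_followup + rng.standard_normal(52)    # continuous measure, e.g. timed walk

print("AUC, fall history:", round(roc_auc_score(fell_followup, fall_history), 2))
print("AUC, timed walk  :", round(roc_auc_score(fell_followup, walk_time), 2))
```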

  2. ILT based defect simulation of inspection images accurately predicts mask defect printability on wafer

    Science.gov (United States)

    Deep, Prakash; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2016-05-01

    At advanced technology nodes, mask complexity has increased because of the large-scale use of resolution enhancement technologies (RET), which include Optical Proximity Correction (OPC), Inverse Lithography Technology (ILT) and Source Mask Optimization (SMO). The number of defects detected during inspection of such masks has increased drastically, and differentiation of critical and non-critical defects has become more challenging, complex and time consuming. Because of the significant defectivity of EUVL masks and the non-availability of actinic inspection, it is important and also challenging to predict the criticality of defects for printability on wafer. This is one of the significant barriers for the adoption of EUVL for semiconductor manufacturing. Techniques to assess the criticality of defects from non-actinic inspection images are desired until actinic inspection becomes available. High-resolution inspection of photomask images detects many defects, which are used for process and mask qualification. Repairing all defects is not practical and probably not required; however, it is imperative to know which defects are severe enough to impact the wafer before repair. Additionally, a wafer printability check is always desired after repairing a defect. AIMS(TM) review is the industry standard for this, but AIMS(TM) review for all defects is expensive and very time consuming. A fast, accurate and economical mechanism is desired that can predict defect printability on wafer accurately and quickly from images captured using a high-resolution inspection machine. Predicting defect printability from such images is challenging because the high-resolution images do not correlate with actual mask contours. The challenge is increased by the use of optical conditions during inspection that differ from the actual scanner conditions, so defects found in such images do not correlate directly with their actual impact on wafer. Our automated defect simulation tool predicts

  3. Decision-making in schizophrenia: A predictive-coding perspective.

    Science.gov (United States)

    Sterzer, Philipp; Voss, Martin; Schlagenhauf, Florian; Heinz, Andreas

    2018-05-31

    Dysfunctional decision-making has been implicated in the positive and negative symptoms of schizophrenia. Decision-making can be conceptualized within the framework of hierarchical predictive coding as the result of a Bayesian inference process that uses prior beliefs to infer states of the world. According to this idea, prior beliefs encoded at higher levels in the brain are fed back as predictive signals to lower levels. Whenever these predictions are violated by the incoming sensory data, a prediction error is generated and fed forward to update beliefs encoded at higher levels. Well-documented impairments in cognitive decision-making support the view that these neural inference mechanisms are altered in schizophrenia. There is also extensive evidence relating the symptoms of schizophrenia to aberrant signaling of prediction errors, especially in the domain of reward and value-based decision-making. Moreover, the idea of altered predictive coding is supported by evidence for impaired low-level sensory mechanisms and motor processes. We review behavioral and neural findings from these research areas and provide an integrated view suggesting that schizophrenia may be related to a pervasive alteration in predictive coding at multiple hierarchical levels, including cognitive and value-based decision-making processes as well as sensory and motor systems. We relate these findings to decision-making processes and propose that varying degrees of impairment in the implicated brain areas contribute to the variety of psychotic experiences. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Differential contribution of visual and auditory information to accurately predict the direction and rotational motion of a visual stimulus.

    Science.gov (United States)

    Park, Seoung Hoon; Kim, Seonjin; Kwon, MinHyuk; Christou, Evangelos A

    2016-03-01

    Vision and auditory information are critical for perception and to enhance the ability of an individual to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identify the direction and rotational motion of the stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of the stimulus based on visual and auditory information. In this study, we recruited 9 expert table-tennis players and used table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information. The goal was to anticipate the direction of the service (left or right) and the rotational motion of service (topspin, sidespin, or cut). We recorded their responses and quantified the following outcomes: (i) directional accuracy and (ii) rotational motion accuracy. The response accuracy was the accurate predictions relative to the total number of trials. The ability of the participants to predict the direction of the service accurately increased with additional visual information but not with auditory information. In contrast, the ability of the participants to predict the rotational motion of the service accurately increased with the addition of auditory information to visual information but not with additional visual information alone. In conclusion, this finding demonstrates that visual information enhances the ability of an individual to accurately predict the direction of the stimulus, whereas additional auditory information enhances the ability of an individual to accurately predict the rotational motion of stimulus.

  5. Improvement of a land surface model for accurate prediction of surface energy and water balances

    International Nuclear Information System (INIS)

    Katata, Genki

    2009-02-01

    In order to predict energy and water balances between the biosphere and atmosphere accurately, sophisticated schemes to calculate evaporation and adsorption processes in the soil and cloud (fog) water deposition on vegetation were implemented in the one-dimensional atmosphere-soil-vegetation model including the CO2 exchange process (SOLVEG2). Performance tests in arid areas showed that the above schemes have a significant effect on surface energy and water balances. The framework of the above schemes incorporated in SOLVEG2 and instructions for running the model are documented. With further modifications of the model to implement the carbon exchanges between the vegetation and soil, deposition processes of materials on the land surface, vegetation stress-growth-dynamics etc., the model is suited to evaluating the effects of environmental loads on ecosystems from atmospheric pollutants and radioactive substances under climate changes such as global warming and drought. (author)

  6. Watershed area ratio accurately predicts daily streamflow in nested catchments in the Catskills, New York

    Directory of Open Access Journals (Sweden)

    Chris C. Gianfagna

    2015-09-01

    New hydrological insights for the region: Watershed area ratio was the most important basin parameter for estimating flow at upstream sites based on downstream flow. The area ratio alone explained 93% of the variance in the slopes of relationships between upstream and downstream flows. Regression analysis indicated that flow at any upstream point can be estimated by multiplying the flow at a downstream reference gage by the watershed area ratio. This method accurately predicted upstream flows at area ratios as low as 0.005. We also observed a very strong relationship (R2 = 0.79) between area ratio and flow–flow slopes in non-nested catchments. Our results indicate that a simple flow estimation method based on watershed area ratios is justifiable, and indeed preferred, for the estimation of daily streamflow in ungaged watersheds in the Catskills region.
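
    The estimation rule itself is a one-line scaling. A sketch with illustrative numbers (not Catskills gage data):

```python
# Sketch: estimate upstream daily flow by scaling the downstream reference-gage
# flow by the watershed area ratio. Numbers are hypothetical.
def upstream_flow(downstream_flow_cfs: float,
                  upstream_area_km2: float,
                  downstream_area_km2: float) -> float:
    area_ratio = upstream_area_km2 / downstream_area_km2
    return downstream_flow_cfs * area_ratio

# Hypothetical nested pair: a 12 km2 headwater inside a 493 km2 gaged basin
print(upstream_flow(850.0, 12.0, 493.0))   # ~ 20.7 cfs
```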

  7. In vitro transcription accurately predicts lac repressor phenotype in vivo in Escherichia coli

    Directory of Open Access Journals (Sweden)

    Matthew Almond Sochor

    2014-07-01

    Full Text Available A multitude of studies have looked at the in vivo and in vitro behavior of the lac repressor binding to DNA and effector molecules in order to study transcriptional repression; however, these studies are not always reconcilable. Here we use in vitro transcription to directly mimic the in vivo system in order to build a self-consistent set of experiments to directly compare in vivo and in vitro genetic repression. A thermodynamic model of the lac repressor binding to operator DNA and effector is used to link DNA occupancy to either normalized in vitro mRNA product or normalized in vivo fluorescence of a regulated gene, YFP. Accurate measurements of repressor, DNA and effector concentrations were made both in vivo and in vitro, allowing for direct modeling of the entire thermodynamic equilibrium. In vivo repression profiles are accurately predicted from the given in vitro parameters when molecular crowding is considered. Interestingly, our measured repressor–operator DNA affinity differs significantly from previous in vitro measurements. The literature values are unable to replicate in vivo binding data. We therefore conclude that the repressor-DNA affinity is much weaker than previously thought. This finding would suggest that in vitro techniques that are specifically designed to mimic the in vivo process may be necessary to replicate the native system.
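
    A minimal two-state occupancy sketch conveys the thermodynamic idea; the constants below are illustrative, and the paper's full model additionally treats nonspecific binding and molecular crowding:

```python
# Minimal two-state sketch: operator occupancy by repressor sets the repressed
# fraction of expression, and effector (IPTG) binding weakens repressor-operator
# affinity. Constants are assumptions, not the paper's fitted parameters.
import numpy as np

def fold_repression(repressor_nM, iptg_uM, Kd_op_nM=10.0, Kd_iptg_uM=100.0, n_sites=2):
    # fraction of repressor still able to bind operator (no IPTG bound)
    active = repressor_nM / (1.0 + iptg_uM / Kd_iptg_uM) ** n_sites
    occupancy = active / (active + Kd_op_nM)       # operator bound by repressor
    return 1.0 / (1.0 - occupancy)                 # fold-repression of the reporter

for iptg in [0, 10, 100, 1000]:
    print(f"IPTG {iptg:>5} uM -> fold repression {fold_repression(50.0, iptg):.1f}")
```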

  8. Water Habitat Study: Prediction Makes It More Meaningful.

    Science.gov (United States)

    Glasgow, Dennis R.

    1982-01-01

    Suggests a teaching strategy for water habitat studies to help students make a meaningful connection between physiochemical data (dissolved oxygen content, pH, and water temperature) and biological specimens they collect. Involves constructing a poster and using it to make predictions. Provides sample poster. (DC)

  9. Measuring solar reflectance - Part I: Defining a metric that accurately predicts solar heat gain

    Energy Technology Data Exchange (ETDEWEB)

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul [Heat Island Group, Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States)

    2010-09-15

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective "cool colored" surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland US latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m⁻², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool roof net energy savings by as much as 23%. We define clear sky air mass one global horizontal ("AM1GH") solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m⁻², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer. (author)
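
    Any such metric is a solar-spectrum-weighted average of spectral reflectance; the metrics differ in the irradiance weighting used. A sketch with a crude placeholder spectrum rather than the ASTM E891 or AM1GH irradiance tables:

```python
# Sketch: solar reflectance as an irradiance-weighted average of spectral
# reflectance. The irradiance curve and reflectance values are placeholders.
import numpy as np

wavelength_nm = np.linspace(300, 2500, 221)
irradiance = np.exp(-((wavelength_nm - 800) / 600.0) ** 2)        # placeholder solar spectrum
reflectance = np.where(wavelength_nm < 700, 0.25, 0.85)           # "cool colored": dark visible, bright NIR

R_solar = (reflectance * irradiance).sum() / irradiance.sum()     # weighted average over the spectrum
print(f"solar-weighted reflectance = {R_solar:.2f}")
```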

  10. A Machine Learned Classifier That Uses Gene Expression Data to Accurately Predict Estrogen Receptor Status

    Science.gov (United States)

    Bastani, Meysam; Vos, Larissa; Asgarian, Nasimeh; Deschenes, Jean; Graham, Kathryn; Mackey, John; Greiner, Russell

    2013-01-01

    Background Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER) status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER-status based on RNA expression can provide more objective, quantitative and reproducible test results. Methods To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER-status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin fixed tumor. Results This produced a three-gene classifier that can predict the ER-status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER-status. Conclusions Our efficient and parsimonious classifier lends itself to high throughput, highly accurate and low-cost RNA-based assessments of ER-status, suitable for routine high-throughput clinical use. This analytic method provides a proof-of-principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions. PMID:24312637

  11. Measuring solar reflectance Part I: Defining a metric that accurately predicts solar heat gain

    Energy Technology Data Exchange (ETDEWEB)

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul

    2010-05-14

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland U.S. latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m⁻², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool-roof net energy savings by as much as 23%. We define clear-sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m⁻², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.

  12. A machine learned classifier that uses gene expression data to accurately predict estrogen receptor status.

    Directory of Open Access Journals (Sweden)

    Meysam Bastani

    Full Text Available BACKGROUND: Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER) status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER-status based on RNA expression can provide more objective, quantitative and reproducible test results. METHODS: To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER-status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin fixed tumor. RESULTS: This produced a three-gene classifier that can predict the ER-status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER-status. CONCLUSIONS: Our efficient and parsimonious classifier lends itself to high throughput, highly accurate and low-cost RNA-based assessments of ER-status, suitable for routine high-throughput clinical use. This analytic method provides a proof-of-principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions.
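
    A sketch of a cross-validated three-gene classifier on synthetic expression data; the paper's actual gene selection step and gene identities are not reproduced here:

```python
# Sketch: cross-validated classification of ER status from three gene
# expression values. Expression values and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_tumors = 176
er_status = (rng.random(n_tumors) < 0.7).astype(int)                   # assumed ER-positive fraction
expr = rng.standard_normal((n_tumors, 3)) + 2.0 * er_status[:, None]   # three-gene expression matrix

clf = LogisticRegression()
acc = cross_val_score(clf, expr, er_status, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {acc.mean():.2%} ± {acc.std():.2%} (synthetic)")
```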

  13. Does future-oriented thinking predict adolescent decision making?

    Science.gov (United States)

    Eskritt, Michelle; Doucette, Jesslyn; Robitaille, Lori

    2014-01-01

    A number of theorists, as well as plain common sense, suggest that future-oriented thinking (FOT) should be involved in decision making; therefore, the development of FOT should be related to better quality decision making. FOT and quality of the decision making were measured in adolescents as well as adults in 2 different experiments. Though the results of the first experiment revealed an increase in quality of decision making across adolescence into adulthood, there was no relationship between FOT and decision making. In the second experiment, FOT predicted performance on a more deliberative decision-making task independent of age, but not performance on the Iowa Gambling Task (IGT). Performance on the IGT was instead related to emotion regulation. The study's findings suggest that FOT can be related to reflective decision making but not necessarily decision making that is more intuitive.

  14. Highly accurate prediction of food challenge outcome using routinely available clinical data.

    Science.gov (United States)

    DunnGalvin, Audrey; Daly, Deirdre; Cullinane, Claire; Stenke, Emily; Keeton, Diane; Erlewyn-Lajeunesse, Mich; Roberts, Graham C; Lucas, Jane; Hourihane, Jonathan O'B

    2011-03-01

    Serum specific IgE or skin prick tests are less useful at levels below accepted decision points. We sought to develop and validate a model to predict food challenge outcome by using routinely collected data in a diverse sample of children considered suitable for food challenge. The proto-algorithm was generated by using a limited data set from 1 service (phase 1). We retrospectively applied, evaluated, and modified the initial model by using an extended data set in another center (phase 2). Finally, we prospectively validated the model in a blind study in a further group of children undergoing food challenge for peanut, milk, or egg in the second center (phase 3). Allergen-specific models were developed for peanut, egg, and milk. Phase 1 (N = 429) identified 5 clinical factors associated with diagnosis of food allergy by food challenge. In phase 2 (N = 289), we examined the predictive ability of 6 clinical factors: skin prick test, serum specific IgE, total IgE minus serum specific IgE, symptoms, sex, and age. In phase 3 (N = 70), 97% of cases were accurately predicted as positive and 94% as negative. Our model showed an advantage in clinical prediction compared with serum specific IgE only, skin prick test only, and serum specific IgE and skin prick test (92% accuracy vs 57%, and 81%, respectively). Our findings have implications for the improved delivery of food allergy-related health care, enhanced food allergy-related quality of life, and economized use of health service resources by decreasing the number of food challenges performed. Copyright © 2011 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.

  15. Method to make accurate concentration and isotopic measurements for small gas samples

    Science.gov (United States)

    Palmer, M. R.; Wahl, E.; Cunningham, K. L.

    2013-12-01

    Carbon isotopic ratio measurements of CO2 and CH4 provide valuable insight into carbon cycle processes. However, many of these studies, like soil gas, soil flux, and water head space experiments, provide very small gas sample volumes, too small for direct measurement by current constant-flow Cavity Ring-Down (CRDS) isotopic analyzers. Previously, we addressed this issue by developing a sample introduction module which enabled the isotopic ratio measurement of 40 ml samples or smaller. However, the system, called the Small Sample Isotope Module (SSIM), does dilute the sample during the delivery with inert carrier gas, which causes a ~5% reduction in concentration. The isotopic ratio measurements are not affected by this small dilution, but researchers are naturally interested in accurate concentration measurements. We present the accuracy and precision of a new method of using this delivery module which we call 'double injection.' Two portions of the 40 ml sample (20 ml each) are introduced to the analyzer; the first injection flushes out the diluting gas and the second injection is measured. The accuracy of this new method is demonstrated by comparing the concentration and isotopic ratio measurements for a gas sampled directly and that same gas measured through the SSIM. The data show that the CO2 concentration measurements were the same within instrument precision. The isotopic ratio precision (1σ) of repeated measurements was 0.16 permil for CO2 and 1.15 permil for CH4 at ambient concentrations. This new method provides a significant enhancement in the information provided by small samples.

  16. ChIP-seq Accurately Predicts Tissue-Specific Activity of Enhancers

    Energy Technology Data Exchange (ETDEWEB)

    Visel, Axel; Blow, Matthew J.; Li, Zirong; Zhang, Tao; Akiyama, Jennifer A.; Holt, Amy; Plajzer-Frick, Ingrid; Shoukry, Malak; Wright, Crystal; Chen, Feng; Afzal, Veena; Ren, Bing; Rubin, Edward M.; Pennacchio, Len A.

    2009-02-01

    A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover since they are scattered amongst the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here, we performed chromatin immunoprecipitation with the enhancer-associated protein p300, followed by massively-parallel sequencing, to map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain, and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases revealed reproducible enhancer activity in those tissues predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities and suggest that such datasets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale.

  17. Accurate and Reliable Prediction of the Binding Affinities of Macrocycles to Their Protein Targets.

    Science.gov (United States)

    Yu, Haoyu S; Deng, Yuqing; Wu, Yujie; Sindhikara, Dan; Rask, Amy R; Kimura, Takayuki; Abel, Robert; Wang, Lingle

    2017-12-12

    Macrocycles have been emerging as a very important drug class in the past few decades largely due to their expanded chemical diversity benefiting from advances in synthetic methods. Macrocyclization has been recognized as an effective way to restrict the conformational space of acyclic small molecule inhibitors with the hope of improving potency, selectivity, and metabolic stability. Because of their relatively larger size as compared to typical small molecule drugs and the complexity of the structures, efficient sampling of the accessible macrocycle conformational space and accurate prediction of their binding affinities to their target protein receptors poses a great challenge of central importance in computational macrocycle drug design. In this article, we present a novel method for relative binding free energy calculations between macrocycles with different ring sizes and between the macrocycles and their corresponding acyclic counterparts. We have applied the method to seven pharmaceutically interesting data sets taken from recent drug discovery projects including 33 macrocyclic ligands covering a diverse chemical space. The predicted binding free energies are in good agreement with experimental data with an overall root-mean-square error (RMSE) of 0.94 kcal/mol. This is to our knowledge the first time where the free energy of the macrocyclization of linear molecules has been directly calculated with rigorous physics-based free energy calculation methods, and we anticipate the outstanding accuracy demonstrated here across a broad range of target classes may have significant implications for macrocycle drug discovery.

  18. Do Dual-Route Models Accurately Predict Reading and Spelling Performance in Individuals with Acquired Alexia and Agraphia?

    OpenAIRE

    Rapcsak, Steven Z.; Henry, Maya L.; Teague, Sommer L.; Carnahan, Susan D.; Beeson, Pélagie M.

    2007-01-01

    Coltheart and colleagues (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; Castles, Bates, & Coltheart, 2006) have demonstrated that an equation derived from dual-route theory accurately predicts reading performance in young normal readers and in children with reading impairment due to developmental dyslexia or stroke. In this paper we present evidence that the dual-route equation and a related multiple regression model also accurately predict both reading and spelling performance in adult...

  19. Can radiation therapy treatment planning system accurately predict surface doses in postmastectomy radiation therapy patients?

    International Nuclear Information System (INIS)

    Wong, Sharon; Back, Michael; Tan, Poh Wee; Lee, Khai Mun; Baggarley, Shaun; Lu, Jaide Jay

    2012-01-01

    Skin doses have been an important factor in the dose prescription for breast radiotherapy. Recent advances in radiotherapy treatment techniques, such as intensity-modulated radiation therapy (IMRT), and new treatment schemes such as hypofractionated breast therapy have made the precise determination of the surface dose necessary. Detailed information on the dose at various depths of the skin is also critical in designing new treatment strategies. The purpose of this work was to assess the accuracy of surface dose calculation by a clinically used treatment planning system against doses measured by thermoluminescence dosimeters (TLDs) in a customized chest wall phantom. This study involved the construction of a chest wall phantom for skin dose assessment. Seven TLDs were distributed throughout each right chest wall phantom to give adequate representation of measured radiation doses. Point doses from the CMS Xio® treatment planning system (TPS) were calculated for each relevant TLD position and the results were correlated. There was no significant difference between the absorbed doses measured by TLD and those calculated by the TPS (p > 0.05, 1-tailed). Dose deviations of up to 2.21% were found. The deviations from the calculated absorbed doses were overall larger (3.4%) when wedges and bolus were used. A 3D radiotherapy TPS is a useful and accurate tool to assess surface dose. Our studies have shown that radiation treatment accuracy, expressed as a comparison between calculated doses (by TPS) and measured doses (by TLD dosimetry), can be accurately predicted for tangential treatment of the chest wall after mastectomy.
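    As a rough illustration of the comparison described above (not the authors' analysis), the sketch below computes per-point percent deviations between TPS-calculated and TLD-measured doses and applies a one-tailed paired t-test; the dose values are invented placeholders.

```python
# Minimal sketch: compare TLD-measured and TPS-calculated point doses with
# percent deviations and a one-tailed paired t-test. The dose values below are
# illustrative placeholders, not study data.
import numpy as np
from scipy import stats

tld_dose = np.array([1.82, 1.95, 2.01, 1.78, 1.88, 1.92, 1.85])   # Gy, measured
tps_dose = np.array([1.80, 1.93, 2.05, 1.80, 1.86, 1.95, 1.83])   # Gy, calculated

percent_dev = 100.0 * (tps_dose - tld_dose) / tld_dose
t_stat, p_two_sided = stats.ttest_rel(tps_dose, tld_dose)
p_one_sided = p_two_sided / 2.0          # one-tailed test, as reported in the abstract

print(f"max |deviation| = {np.max(np.abs(percent_dev)):.2f}%")
print(f"paired t-test: t = {t_stat:.3f}, one-tailed p = {p_one_sided:.3f}")
```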

  20. New and Accurate Predictive Model for the Efficacy of Extracorporeal Shock Wave Therapy in Managing Patients With Chronic Plantar Fasciitis.

    Science.gov (United States)

    Yin, Mengchen; Chen, Ni; Huang, Quan; Marla, Anastasia Sulindro; Ma, Junming; Ye, Jie; Mo, Wen

    2017-12-01

    Youden index was .4243, .3003, and .7189, respectively. The Hosmer-Lemeshow test showed a good fitting of the predictive model, with an overall accuracy of 89.6%. This study establishes a new and accurate predictive model for the efficacy of ESWT in managing patients with chronic plantar fasciitis. The use of these parameters, in the form of a predictive model for ESWT efficacy, has the potential to improve decision-making in the application of ESWT. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
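    The abstract reports Youden indices for its predictors; the sketch below shows, on synthetic scores and outcomes, how a Youden index (J = sensitivity + specificity − 1) and its optimal cutoff are typically read off an ROC curve. It is illustrative only and is not the study's predictive model.

```python
# Illustrative sketch: derive the Youden index (J = sensitivity + specificity - 1)
# and the corresponding optimal cutoff from an ROC curve. Labels and scores are
# synthetic; this is not the study's model for ESWT response.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                      # 1 = ESWT effective (hypothetical)
scores = y_true * 0.8 + rng.normal(0, 0.5, size=200)       # hypothetical predictor output

fpr, tpr, thresholds = roc_curve(y_true, scores)
youden = tpr - fpr                                          # sensitivity + specificity - 1
best = np.argmax(youden)
print(f"max Youden index = {youden[best]:.4f} at cutoff {thresholds[best]:.3f}")
```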

  1. How accurate is anatomic limb alignment in predicting mechanical limb alignment after total knee arthroplasty?

    Science.gov (United States)

    Lee, Seung Ah; Choi, Sang-Hee; Chang, Moon Jong

    2015-10-27

    Anatomic limb alignment often differs from mechanical limb alignment after total knee arthroplasty (TKA). We sought to assess the accuracy, specificity, and sensitivity for each of three commonly used ranges for anatomic limb alignment (3-9°, 5-10° and 2-10°) in predicting an acceptable range (neutral ± 3°) for mechanical limb alignment after TKA. We also assessed whether the accuracy of anatomic limb alignment was affected by anatomic variation. This retrospective study included 314 primary TKAs. The alignment of the limb was measured with both anatomic and mechanical methods of measurement. We also measured anatomic variation, including the femoral bowing angle, tibial bowing angle, and neck-shaft angle of the femur. All angles were measured on the same full-length standing anteroposterior radiographs. The accuracy, specificity, and sensitivity for each range of anatomic limb alignment were calculated and compared using mechanical limb alignment as the reference standard. The associations between the accuracy of anatomic limb alignment and anatomic variation were also determined. The range of 2-10° for anatomic limb alignment showed the highest accuracy, but it was only 73 % (3-9°, 65 %; 5-10°, 67 %). The specificity of the 2-10° range was 81 %, which was higher than that of the other ranges (3-9°, 69 %; 5-10°, 67 %). However, the sensitivity of the 2-10° range to predict varus malalignment was only 16 % (3-9°, 35 %; 5-10°, 68 %). In addition, the sensitivity of the 2-10° range to predict valgus malalignment was only 43 % (3-9°, 71 %; 5-10°, 43 %). The accuracy of anatomic limb alignment was lower for knees with greater femoral (odds ratio = 1.2) and tibial (odds ratio = 1.2) bowing. Anatomic limb alignment did not accurately predict mechanical limb alignment after TKA, and its accuracy was affected by anatomic variation. Thus, alignment after TKA should be assessed by measuring mechanical alignment rather than anatomic alignment.
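    A minimal sketch of the kind of bookkeeping involved: scoring an anatomic-alignment range (here 2-10°) against the mechanical reference standard (neutral ± 3°) to obtain accuracy, sensitivity, and specificity for detecting malalignment. The angle values are hypothetical, not the study's measurements.

```python
# Minimal sketch: score an anatomic-alignment "acceptable" range against the
# mechanical-alignment reference standard (neutral +/- 3 deg). The angle arrays
# are hypothetical stand-ins for the per-knee radiographic measurements.
import numpy as np

anatomic = np.array([4.0, 7.5, 1.0, 11.0, 6.0, 9.5, 2.5, 5.0])     # degrees, hypothetical
mechanical = np.array([0.5, 2.0, -4.0, 5.0, 1.0, 3.5, -2.0, 0.0])   # degrees from neutral, hypothetical

pred_malaligned = ~((anatomic >= 2.0) & (anatomic <= 10.0))   # outside the 2-10 deg anatomic range
ref_malaligned = np.abs(mechanical) > 3.0                     # outside neutral +/- 3 deg mechanical

tp = np.sum(pred_malaligned & ref_malaligned)
tn = np.sum(~pred_malaligned & ~ref_malaligned)
fp = np.sum(pred_malaligned & ~ref_malaligned)
fn = np.sum(~pred_malaligned & ref_malaligned)

accuracy = (tp + tn) / len(anatomic)
sensitivity = tp / (tp + fn)     # detection of mechanical malalignment
specificity = tn / (tn + fp)
print(f"accuracy={accuracy:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```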

  2. Predicting suitable optoelectronic properties of monoclinic VON semiconductor crystals for photovoltaics using accurate first-principles computations

    KAUST Repository

    Harb, Moussab

    2015-01-01

    Using accurate first-principles quantum calculations based on DFT (including the perturbation theory DFPT) with the range-separated hybrid HSE06 exchange-correlation functional, we predict essential fundamental properties (such as bandgap, optical absorption coefficient, dielectric constant, charge carrier effective masses and exciton binding energy) of two stable monoclinic vanadium oxynitride (VON) semiconductor crystals for solar energy conversion applications. In addition to the predicted band gaps in the optimal range for making single-junction solar cells, both polymorphs exhibit relatively high absorption efficiencies in the visible range, high dielectric constants, high charge carrier mobilities and much lower exciton binding energies than the thermal energy at room temperature. Moreover, their optical absorption, dielectric and exciton dissociation properties are found to be better than those obtained for semiconductors frequently utilized in photovoltaic devices like Si, CdTe and GaAs. These novel results offer a great opportunity for this stoichiometric VON material to be properly synthesized and considered as a new good candidate for photovoltaic applications.
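    The abstract links exciton binding energies to dielectric constants and carrier effective masses. A common back-of-the-envelope check is the hydrogenic (Wannier-Mott) estimate sketched below; the input values are placeholders, and this is not the HSE06/DFPT workflow used in the paper.

```python
# Illustrative hydrogenic (Wannier-Mott) estimate of the exciton binding energy,
# E_b = Ry * (mu / m_e) / eps_r**2, from a reduced effective mass and a dielectric
# constant. The inputs are placeholders, not the paper's computed values.
RYDBERG_EV = 13.6057          # Rydberg energy in eV
K_B_T_300K = 0.02585          # thermal energy at room temperature, eV

def exciton_binding_energy(m_e_eff: float, m_h_eff: float, eps_r: float) -> float:
    """Return E_b in eV for electron/hole effective masses (in units of m_e)
    and a relative dielectric constant."""
    mu = (m_e_eff * m_h_eff) / (m_e_eff + m_h_eff)   # reduced exciton mass
    return RYDBERG_EV * mu / eps_r**2

e_b = exciton_binding_energy(m_e_eff=0.4, m_h_eff=0.8, eps_r=10.0)   # hypothetical inputs
print(f"E_b = {e_b * 1000:.1f} meV (kT at 300 K = {K_B_T_300K * 1000:.1f} meV)")
```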

  3. Predicting suitable optoelectronic properties of monoclinic VON semiconductor crystals for photovoltaics using accurate first-principles computations

    KAUST Repository

    Harb, Moussab

    2015-08-26

    Using accurate first-principles quantum calculations based on DFT (including the perturbation theory DFPT) with the range-separated hybrid HSE06 exchange-correlation functional, we predict essential fundamental properties (such as bandgap, optical absorption coefficient, dielectric constant, charge carrier effective masses and exciton binding energy) of two stable monoclinic vanadium oxynitride (VON) semiconductor crystals for solar energy conversion applications. In addition to the predicted band gaps in the optimal range for making single-junction solar cells, both polymorphs exhibit relatively high absorption efficiencies in the visible range, high dielectric constants, high charge carrier mobilities and much lower exciton binding energies than the thermal energy at room temperature. Moreover, their optical absorption, dielectric and exciton dissociation properties are found to be better than those obtained for semiconductors frequently utilized in photovoltaic devices like Si, CdTe and GaAs. These novel results offer a great opportunity for this stoichiometric VON material to be properly synthesized and considered as a new good candidate for photovoltaic applications.

  4. A Novel Fibrosis Index Comprising a Non-Cholesterol Sterol Accurately Predicts HCV-Related Liver Cirrhosis

    DEFF Research Database (Denmark)

    Ydreborg, Magdalena; Lisovskaja, Vera; Lagging, Martin

    2014-01-01

    The aim of the present study was to create a model for accurate prediction of liver cirrhosis based on patient characteristics and biomarkers of liver fibrosis, including a panel of non-cholesterol sterols reflecting cholesterol synthesis and absorption and secretion. We evaluated variables with potential predictive...

  5. Cluster abundance in chameleon f(R) gravity I: toward an accurate halo mass function prediction

    Energy Technology Data Exchange (ETDEWEB)

    Cataneo, Matteo; Rapetti, David [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, 2100 Copenhagen (Denmark); Lombriser, Lucas [Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh, EH9 3HJ (United Kingdom); Li, Baojiu, E-mail: matteoc@dark-cosmology.dk, E-mail: drapetti@dark-cosmology.dk, E-mail: llo@roe.ac.uk, E-mail: baojiu.li@durham.ac.uk [Institute for Computational Cosmology, Department of Physics, Durham University, South Road, Durham DH1 3LE (United Kingdom)

    2016-12-01

    We refine the mass and environment dependent spherical collapse model of chameleon f(R) gravity by calibrating a phenomenological correction inspired by the parameterized post-Friedmann framework against high-resolution N-body simulations. We employ our method to predict the corresponding modified halo mass function, and provide fitting formulas to calculate the enhancement of the f(R) halo abundance with respect to that of General Relativity (GR) within a precision of ≲5% from the results obtained in the simulations. Similar accuracy can be achieved for the full f(R) mass function on the condition that the modeling of the reference GR abundance of halos is accurate at the percent level. We use our fits to forecast constraints on the additional scalar degree of freedom of the theory, finding that upper bounds competitive with current Solar System tests are within reach of cluster number count analyses from ongoing and upcoming surveys at much larger scales. Importantly, the flexibility of our method allows also for this to be applied to other scalar-tensor theories characterized by a mass and environment dependent spherical collapse.

  6. ROCK I Has More Accurate Prognostic Value than MET in Predicting Patient Survival in Colorectal Cancer.

    Science.gov (United States)

    Li, Jian; Bharadwaj, Shruthi S; Guzman, Grace; Vishnubhotla, Ramana; Glover, Sarah C

    2015-06-01

    Colorectal cancer remains the second leading cause of death in the United States despite improvements in incidence rates and advancements in screening. The present study evaluated the prognostic value of two tumor markers, MET and ROCK I, which have been noted in other cancers to provide more accurate prognoses of patient outcomes than tumor staging alone. We constructed a tissue microarray from surgical specimens of adenocarcinomas from 108 colorectal cancer patients. Using immunohistochemistry, we examined the expression levels of tumor markers MET and ROCK I, with a pathologist blinded to patient identities and clinical outcomes providing the scoring of MET and ROCK I expression. We then used retrospective analysis of patients' survival data to provide correlations with expression levels of MET and ROCK I. Both MET and ROCK I were significantly over-expressed in colorectal cancer tissues, relative to the unaffected adjacent mucosa. Kaplan-Meier survival analysis revealed that patients' 5-year survival was inversely correlated with levels of expression of ROCK I. In contrast, MET was less strongly correlated with five-year survival. ROCK I provides better efficacy in predicting patient outcomes, compared to either tumor staging or MET expression. As a result, ROCK I may provide a less invasive method of assessing patient prognoses and directing therapeutic interventions. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
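    A minimal sketch of the survival analysis referred to above: a Kaplan-Meier estimator stratified by dichotomized marker expression (e.g. "high" vs "low" ROCK I). The survival times and event flags are synthetic, not the study's patient data.

```python
# Minimal Kaplan-Meier sketch for 5-year (60-month) survival stratified by a
# dichotomized marker. Times (months) and event flags are synthetic examples.
import numpy as np

def kaplan_meier(times, events):
    """Return (event_times, survival_probabilities) for right-censored data."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=bool)
    surv, out_t, out_s = 1.0, [], []
    for t in np.unique(times[events]):                # distinct event (death) times
        at_risk = np.sum(times >= t)
        died = np.sum((times == t) & events)
        surv *= 1.0 - died / at_risk
        out_t.append(t)
        out_s.append(surv)
    return np.array(out_t), np.array(out_s)

high_t = [12, 20, 34, 40, 60, 60, 18, 25];  high_e = [1, 1, 1, 0, 0, 0, 1, 1]
low_t  = [30, 45, 60, 60, 60, 52, 60, 60];  low_e  = [1, 0, 0, 0, 0, 1, 0, 0]

for label, t, e in [("high marker expression", high_t, high_e),
                    ("low marker expression", low_t, low_e)]:
    et, s = kaplan_meier(t, e)
    five_year = s[et <= 60][-1] if np.any(et <= 60) else 1.0
    print(f"{label}: estimated 60-month survival = {five_year:.2f}")
```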

  7. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    Science.gov (United States)

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spin-weight −2 spherical-harmonic waveform modes resolved by the NR code up to ℓ = 8. We compare our surrogate model to effective-one-body waveforms from 50 to 300 solar masses for Advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  8. Absolute Hounsfield unit measurement on noncontrast computed tomography cannot accurately predict struvite stone composition.

    Science.gov (United States)

    Marchini, Giovanni Scala; Gebreselassie, Surafel; Liu, Xiaobo; Pynadath, Cindy; Snyder, Grace; Monga, Manoj

    2013-02-01

    The purpose of our study was to determine, in vivo, whether single-energy noncontrast computed tomography (NCCT) can accurately predict the presence/percentage of struvite stone composition. We retrospectively searched for all patients with struvite components on stone composition analysis between January 2008 and March 2012. Inclusion criteria were NCCT prior to stone analysis and stone size ≥4 mm. A single urologist, blinded to stone composition, reviewed all NCCT to acquire stone location, dimensions, and Hounsfield unit (HU). HU density (HUD) was calculated by dividing mean HU by the stone's largest transverse diameter. Stone analysis was performed via Fourier transform infrared spectrometry. Independent sample Student's t-test and analysis of variance (ANOVA) were used to compare HU/HUD among groups. Spearman's correlation test was used to determine the correlation between HU and stone size and also HU/HUD to % of each component within the stone. Significance was considered if p<0.05. The percentage of struvite within the stone showed a weak positive correlation with HU (R=0.017; p=0.912) and a negative correlation with HUD (R=-0.20; p=0.898). When pure struvite stones (n=5) were compared with other miscellaneous stones (n=39), no difference was found for HU (p=0.09), but HUD was significantly lower for pure stones (27.9±23.6 vs. 72.5±55.9, respectively; p=0.006). Again, significant overlaps were seen. Pure struvite stones have significantly lower HUD than mixed struvite stones, but overlap exists. A low HUD may increase the suspicion for a pure struvite calculus.
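    The HU-density metric is defined in the abstract as mean HU divided by the stone's largest transverse diameter; the sketch below computes HUD for two hypothetical groups and compares them with a Welch t-test. All numbers are invented.

```python
# Sketch of the HU-density (HUD) calculation described above
# (HUD = mean HU / largest transverse diameter) and a simple comparison
# between "pure" and "mixed" struvite groups. All numbers are hypothetical.
import numpy as np
from scipy import stats

pure_hu   = np.array([320, 410, 280, 500, 450]);  pure_diam  = np.array([12, 9, 15, 8, 10])   # HU, mm
mixed_hu  = np.array([700, 820, 650, 900, 760]);  mixed_diam = np.array([9, 11, 8, 10, 12])

pure_hud = pure_hu / pure_diam
mixed_hud = mixed_hu / mixed_diam

t_stat, p_val = stats.ttest_ind(pure_hud, mixed_hud, equal_var=False)
print(f"pure HUD  = {pure_hud.mean():.1f} +/- {pure_hud.std(ddof=1):.1f}")
print(f"mixed HUD = {mixed_hud.mean():.1f} +/- {mixed_hud.std(ddof=1):.1f}")
print(f"Welch t-test: p = {p_val:.4f}; note that group ranges can still overlap")
```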

  9. Unilateral Prostate Cancer Cannot be Accurately Predicted in Low-Risk Patients

    International Nuclear Information System (INIS)

    Isbarn, Hendrik; Karakiewicz, Pierre I.; Vogel, Susanne

    2010-01-01

    Purpose: Hemiablative therapy (HAT) is increasing in popularity for treatment of patients with low-risk prostate cancer (PCa). The validity of this therapeutic modality, which exclusively treats PCa within a single prostate lobe, rests on accurate staging. We tested the accuracy of unilaterally unremarkable biopsy findings in cases of low-risk PCa patients who are potential candidates for HAT. Methods and Materials: The study population consisted of 243 men with clinical stage ≤T2a, a prostate-specific antigen (PSA) concentration of <10 ng/ml, a biopsy-proven Gleason sum of ≤6, and a maximum of 2 ipsilateral positive biopsy results out of 10 or more cores. All men underwent a radical prostatectomy, and pathology stage was used as the gold standard. Univariable and multivariable logistic regression models were tested for significant predictors of unilateral, organ-confined PCa. These predictors consisted of PSA, %fPSA (defined as the quotient of free [uncomplexed] PSA divided by the total PSA), clinical stage (T2a vs. T1c), gland volume, and number of positive biopsy cores (2 vs. 1). Results: Despite unilateral stage at biopsy, bilateral or even non-organ-confined PCa was reported in 64% of all patients. In multivariable analyses, no variable could clearly and independently predict the presence of unilateral PCa. This was reflected in an overall accuracy of 58% (95% confidence interval, 50.6-65.8%). Conclusions: Two-thirds of patients with unilateral low-risk PCa, confirmed by clinical stage and biopsy findings, have bilateral or non-organ-confined PCa at radical prostatectomy. This alarming finding questions the safety and validity of HAT.

  10. Predicting preferences: a neglected aspect of shared decision‐making

    Science.gov (United States)

    Sevdalis, Nick; Harvey, Nigel

    2006-01-01

    Abstract In recent years, shared decision‐making between patients and doctors regarding choice of treatment has become an issue of priority. Although patients’ preferences lie at the core of the literature on shared decision‐making, there has not been any attempt so far to link the concept of shared decision‐making with the extensive behavioural literature on people's self‐predictions of their future preferences. The aim of the present review is to provide this link. First, we summarize behavioural research that suggests that people mispredict their future preferences and feelings. Secondly, we provide the main psychological accounts for people's mispredictions. Thirdly, we suggest three main empirical questions for inclusion in a programme aimed at enriching our understanding of shared decision‐making and improving the procedures used for putting it into practice. PMID:16911138

  11. The human skin/chick chorioallantoic membrane model accurately predicts the potency of cosmetic allergens.

    Science.gov (United States)

    Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S

    2009-04-01

    The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin.

  12. Large arterial occlusive strokes as a medical emergency: need to accurately predict clot location.

    Science.gov (United States)

    Vanacker, Peter; Faouzi, Mohamed; Eskandari, Ashraf; Maeder, Philippe; Meuli, Reto; Michel, Patrik

    2017-10-01

    Endovascular treatment for acute ischemic stroke with a large intracranial occlusion was recently shown to be effective. Timely knowledge of the presence, site, and extent of arterial occlusions in the ischemic territory has the potential to influence patient selection for endovascular treatment. We aimed to find predictors of large vessel occlusive strokes, on the basis of available demographic, clinical, radiological, and laboratory data in the emergency setting. Patients enrolled in the ASTRAL registry with acute ischemic stroke and computed tomography (CT)-angiography within 12 h of stroke onset were selected and categorized according to occlusion site. Easily accessible variables were used in a multivariate analysis. Of 1645 patients enrolled, a significant proportion (46.2%) had a large vessel occlusion in the ischemic territory. The main clinical predictors of any arterial occlusion were in-hospital stroke [odds ratio (OR) 2.1, 95% confidence interval 1.4-3.1], higher initial National Institute of Health Stroke Scale (OR 1.1, 1.1-1.2), presence of visual field defects (OR 1.9, 1.3-2.6), dysarthria (OR 1.4, 1.0-1.9), or hemineglect (OR 2.0, 1.4-2.8) at admission and atrial fibrillation (OR 1.7, 1.2-2.3). Further, the following radiological predictors were identified: time-to-imaging (OR 0.9, 0.9-1.0), early ischemic changes (OR 2.3, 1.7-3.2), and silent lesions on CT (OR 0.7, 0.5-1.0). The area under the curve for this analysis was 0.85. Looking at different occlusion sites, National Institute of Health Stroke Scale and early ischemic changes on CT were independent predictors in all subgroups. Neurological deficits, stroke risk factors, and CT findings accurately identify acute ischemic stroke patients at risk of symptomatic vessel occlusion. Predicting the presence of these occlusions may impact emergency stroke care in regions with limited access to noninvasive vascular imaging.
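    A schematic version of the analysis described above: a multivariable logistic regression on a few admission variables, reported as odds ratios with an ROC area under the curve. The data are simulated; this is not the ASTRAL registry or the authors' model specification.

```python
# Minimal sketch (synthetic data): multivariable logistic regression for large
# vessel occlusion, reporting odds ratios and the area under the ROC curve.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "nihss": rng.integers(0, 30, n),                  # admission NIHSS (hypothetical)
    "afib": rng.integers(0, 2, n),                    # atrial fibrillation
    "early_ischemic_changes": rng.integers(0, 2, n),  # CT finding
})
logit_true = -3.0 + 0.12 * df["nihss"] + 0.5 * df["afib"] + 0.8 * df["early_ischemic_changes"]
df["occlusion"] = rng.random(n) < 1.0 / (1.0 + np.exp(-logit_true))

X = sm.add_constant(df[["nihss", "afib", "early_ischemic_changes"]].astype(float))
fit = sm.Logit(df["occlusion"].astype(float), X).fit(disp=0)

odds_ratios = np.exp(fit.params.drop("const"))        # per-variable odds ratios
auc = roc_auc_score(df["occlusion"], fit.predict(X))  # discrimination of the model
print(odds_ratios.round(2))
print(f"AUC = {auc:.2f}")
```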

  13. Combining structural modeling with ensemble machine learning to accurately predict protein fold stability and binding affinity effects upon mutation.

    Directory of Open Access Journals (Sweden)

    Niklas Berliner

    Full Text Available Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing the protein instability, and help us better understand the molecular causes of diseases.
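    To make the "energy terms plus sequence features into boosted trees" idea concrete, the sketch below trains a gradient-boosted regressor on synthetic features and scores it by Pearson correlation. It is not the ELASPIC feature set, homology-modeling step, or pipeline.

```python
# Conceptual sketch only: combine "semi-empirical energy" features with sequence
# conservation features in a gradient-boosted tree regressor and score it by the
# Pearson correlation. The data are synthetic.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
energy_terms = rng.normal(size=(n, 4))        # e.g. van der Waals, electrostatics, ...
conservation = rng.random(size=(n, 2))        # e.g. per-position conservation scores
X = np.hstack([energy_terms, conservation])
ddg = X @ np.array([0.8, 0.5, 0.3, 0.2, 1.0, 0.7]) + rng.normal(scale=0.5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, ddg, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)
r, _ = pearsonr(y_te, model.predict(X_te))
print(f"Pearson r on held-out mutations: {r:.2f}")
```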

  14. Towards Accurate Prediction of Unbalance Response, Oil Whirl and Oil Whip of Flexible Rotors Supported by Hydrodynamic Bearings

    Directory of Open Access Journals (Sweden)

    Rob Eling

    2016-09-01

    Full Text Available Journal bearings are used to support rotors in a wide range of applications. In order to ensure reliable operation, accurate analyses of these rotor-bearing systems are crucial. Coupled analysis of the rotor and the journal bearing is essential in the case that the rotor is flexible. The accuracy of prediction of the model at hand depends on its comprehensiveness. In this study, we construct three bearing models of increasing modeling comprehensiveness and use these to predict the response of two different rotor-bearing systems. The main goal is to evaluate the correlation with measurement data as a function of modeling comprehensiveness: 1D versus 2D pressure prediction, distributed versus lumped thermal model, Newtonian versus non-Newtonian fluid description and non-mass-conservative versus mass-conservative cavitation description. We conclude that all three models predict the existence of critical speeds and whirl for both rotor-bearing systems. However, the two more comprehensive models in general show better correlation with measurement data in terms of frequency and amplitude. Furthermore, we conclude that a thermal network model comprising temperature predictions of the bearing surroundings is essential to obtain accurate predictions. The results of this study aid in developing accurate and computationally-efficient models of flexible rotors supported by plain journal bearings.

  15. Accurate diffraction data integration by the EVAL15 profile prediction method : Application in chemical and biological crystallography

    NARCIS (Netherlands)

    Xian, X.

    2009-01-01

    Accurate integration of reflection intensities plays an essential role in the structure determination of the crystallized compound. A new diffraction data integration method, EVAL15, is presented in this thesis. This method uses the principle of general impacts to predict ab initio three-dimensional reflection profiles.

  16. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding

    Science.gov (United States)

    Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill

    2017-01-01

    Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed a theoretical reasoning on the effectiveness of these markers in a trait-specific prediction, and verified the reasoning through computer simulation. To the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding. PMID:28729875
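    A toy illustration of the contrast discussed above between genome-wide and trait-associated markers: ridge-regression genomic prediction evaluated by cross-validated correlation, once with all simulated SNPs and once with only the causal ("FAST"-like) subset. The data are entirely synthetic.

```python
# Toy sketch (synthetic genotypes): ridge-regression genomic prediction with all
# SNPs versus a subset of trait-associated SNPs, compared by cross-validated
# predictive correlation. Purely illustrative of the idea in the abstract.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)
n_lines, n_snps, n_causal = 200, 2000, 50
genotypes = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)   # 0/1/2 allele counts
causal = rng.choice(n_snps, n_causal, replace=False)
effects = rng.normal(size=n_causal)
trait = genotypes[:, causal] @ effects + rng.normal(scale=2.0, size=n_lines)

def cv_correlation(X, y):
    pred = cross_val_predict(Ridge(alpha=100.0), X, y, cv=5)
    return np.corrcoef(pred, y)[0, 1]

print(f"all SNPs:          r = {cv_correlation(genotypes, trait):.2f}")
print(f"causal SNPs only:  r = {cv_correlation(genotypes[:, causal], trait):.2f}")
```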

  17. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding

    Directory of Open Access Journals (Sweden)

    Yong-Bi Fu

    2017-07-01

    Full Text Available Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed a theoretical reasoning on the effectiveness of these markers in a trait-specific prediction, and verified the reasoning through computer simulation. To the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding.

  18. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding.

    Science.gov (United States)

    Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill

    2017-01-01

    Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed a theoretical reasoning on the effectiveness of these markers in a trait-specific prediction, and verified the reasoning through computer simulation. To the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding.

  19. Towards more accurate wind and solar power prediction by improving NWP model physics

    Science.gov (United States)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two research projects in the field of renewable energy, ORKA and EWeLiNE. Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m height above ground are used for the estimation of the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  20. A machine learning approach to the accurate prediction of multi-leaf collimator positional errors

    Science.gov (United States)

    Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; In Park, Jong; Choi, Yunseok; Ye, Sung-Joon

    2016-03-01

    Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD  =  1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose volumetric histograms (DVH) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose volumetric parameters were in closer agreement to the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be
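    A simplified stand-in for the models described above: a random forest regressing leaf positional error on plan-derived motion features (position, velocity, travel direction). The error model and data are synthetic, not DynaLog records.

```python
# Illustrative sketch: regress planned-vs-delivered MLC leaf position error on
# plan-derived motion features with a random forest. Data are synthetic; the
# "error model" below is an invented placeholder.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5000
leaf_position = rng.uniform(-10, 10, n)            # cm from isocenter (hypothetical units)
leaf_velocity = rng.uniform(0, 3.0, n)             # cm/s
toward_isocenter = rng.integers(0, 2, n)           # 1 if moving toward the isocenter

# Hypothetical error behavior: faster leaves lag more; direction adds a small offset.
error_mm = 0.4 * leaf_velocity + 0.1 * toward_isocenter + rng.normal(scale=0.1, size=n)

X = np.column_stack([leaf_position, leaf_velocity, toward_isocenter])
X_tr, X_te, y_tr, y_te = train_test_split(X, error_mm, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"MAE of predicted leaf error: {mean_absolute_error(y_te, model.predict(X_te)):.3f} mm")
```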

  1. Accurate prediction of retention in hydrophilic interaction chromatography by back calculation of high pressure liquid chromatography gradient profiles.

    Science.gov (United States)

    Wang, Nu; Boswell, Paul G

    2017-10-20

    Gradient retention times are difficult to project from the underlying retention factor (k) vs. solvent composition (φ) relationships. A major reason for this difficulty is that gradients produced by HPLC pumps are imperfect - gradient delay, gradient dispersion, and solvent mis-proportioning are all difficult to account for in calculations. However, we recently showed that a gradient "back-calculation" methodology can measure these imperfections and take them into account. In RPLC, when the back-calculation methodology was used, error in projected gradient retention times was as low as could be expected based on repeatability in the k vs. φ relationships. HILIC, however, presents a new challenge: the selectivity of HILIC columns drifts strongly over time. Retention is repeatable over short timescales, but selectivity frequently drifts over the course of weeks. In this study, we set out to understand if the issue of selectivity drift can be avoided by doing our experiments quickly, and whether there are any other factors that make it difficult to predict gradient retention times from isocratic k vs. φ relationships when gradient imperfections are taken into account with the back-calculation methodology. While in past reports the error in retention projections was >5%, the back-calculation methodology brought our error down to ∼1%. This result was 6-43 times more accurate than projections made using ideal gradients and 3-5 times more accurate than the same retention projections made using offset gradients (i.e., gradients that only took gradient delay into account). Still, the error remained higher in our HILIC projections than in RPLC. Based on the shape of the back-calculated gradients, we suspect the higher error is a result of prominent gradient distortion caused by strong, preferential water uptake from the mobile phase into the stationary phase during the gradient - a factor our model did not properly take into account. It appears that, at least with the stationary phase
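    For readers unfamiliar with how gradient retention times are projected from isocratic k vs. φ data, the sketch below steps a solute through a delayed linear gradient until it has traversed one column volume. It ignores within-column gradient distortion (the very effect suspected above) and uses a generic LSS-type retention model as a stand-in for a measured k vs. φ relationship; all parameter values are hypothetical.

```python
# Simplified sketch of projecting a gradient retention time from an isocratic
# k(phi) relationship: step the solute through the (delayed) gradient until its
# cumulative fractional migration reaches 1. Uses a generic ln k = ln kw - S*phi
# model and hypothetical parameters; not the back-calculation methodology itself.
import numpy as np

t0 = 1.0          # column dead time, min (hypothetical)
t_delay = 0.8     # gradient delay (dwell), min (hypothetical)
ln_kw, S = 5.0, 8.0                       # hypothetical solute parameters

def phi_at_inlet(t):
    """Programmed linear gradient 5 -> 95% organic over 20 min, shifted by the delay."""
    t_eff = max(t - t_delay, 0.0)
    return min(0.05 + (0.95 - 0.05) * t_eff / 20.0, 0.95)

def k(phi):
    return np.exp(ln_kw - S * phi)

dt, migrated, t = 0.001, 0.0, 0.0
while migrated < 1.0:
    k_now = k(phi_at_inlet(t))
    migrated += dt / (t0 * (1.0 + k_now))    # fraction of the column traversed in dt
    t += dt

print(f"projected gradient retention time ≈ {t:.2f} min")
```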

  2. Dynamics of Flexible MLI-type Debris for Accurate Orbit Prediction

    Science.gov (United States)

    2014-09-01

    debris for accurate propagation under perturbations”, in Proceedings of the 65th International Astronautical Congress (IAC 2014), Toronto, Canada, 2014... Surveillance Network (SSN) was able to detect more than 900 pieces of debris that were at risk of damaging operational spacecraft. On February 10, 2009... created two large debris clouds, and the SSN reported that 382 pieces of debris from Iridium 33 and 893 pieces from Cosmos 2251 were created, and

  3. Models of Affective Decision Making: How Do Feelings Predict Choice?

    Science.gov (United States)

    Charpentier, Caroline J; De Neve, Jan-Emmanuel; Li, Xinyi; Roiser, Jonathan P; Sharot, Tali

    2016-06-01

    Intuitively, how you feel about potential outcomes will determine your decisions. Indeed, an implicit assumption in one of the most influential theories in psychology, prospect theory, is that feelings govern choice. Surprisingly, however, very little is known about the rules by which feelings are transformed into decisions. Here, we specified a computational model that used feelings to predict choices. We found that this model predicted choice better than existing value-based models, showing a unique contribution of feelings to decisions, over and above value. Similar to the value function in prospect theory, our feeling function showed diminished sensitivity to outcomes as value increased. However, loss aversion in choice was explained by an asymmetry in how feelings about losses and gains were weighted when making a decision, not by an asymmetry in the feelings themselves. The results provide new insights into how feelings are utilized to reach a decision. © The Author(s) 2016.

  4. Accurate nonadiabatic quantum dynamics on the cheap: making the most of mean field theory with master equations.

    Science.gov (United States)

    Kelly, Aaron; Brackbill, Nora; Markland, Thomas E

    2015-03-07

    In this article, we show how Ehrenfest mean field theory can be made both a more accurate and efficient method to treat nonadiabatic quantum dynamics by combining it with the generalized quantum master equation framework. The resulting mean field generalized quantum master equation (MF-GQME) approach is a non-perturbative and non-Markovian theory to treat open quantum systems without any restrictions on the form of the Hamiltonian that it can be applied to. By studying relaxation dynamics in a wide range of dynamical regimes, typical of charge and energy transfer, we show that MF-GQME provides a much higher accuracy than a direct application of mean field theory. In addition, these increases in accuracy are accompanied by computational speed-ups of between one and two orders of magnitude that become larger as the system becomes more nonadiabatic. This combination of quantum-classical theory and master equation techniques thus makes it possible to obtain the accuracy of much more computationally expensive approaches at a cost lower than even mean field dynamics, providing the ability to treat the quantum dynamics of atomistic condensed phase systems for long times.

  5. Accurate nonadiabatic quantum dynamics on the cheap: Making the most of mean field theory with master equations

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Aaron; Markland, Thomas E., E-mail: tmarkland@stanford.edu [Department of Chemistry, Stanford University, Stanford, California 94305 (United States); Brackbill, Nora [Department of Physics, Stanford University, Stanford, California 94305 (United States)

    2015-03-07

    In this article, we show how Ehrenfest mean field theory can be made both a more accurate and efficient method to treat nonadiabatic quantum dynamics by combining it with the generalized quantum master equation framework. The resulting mean field generalized quantum master equation (MF-GQME) approach is a non-perturbative and non-Markovian theory to treat open quantum systems without any restrictions on the form of the Hamiltonian that it can be applied to. By studying relaxation dynamics in a wide range of dynamical regimes, typical of charge and energy transfer, we show that MF-GQME provides a much higher accuracy than a direct application of mean field theory. In addition, these increases in accuracy are accompanied by computational speed-ups of between one and two orders of magnitude that become larger as the system becomes more nonadiabatic. This combination of quantum-classical theory and master equation techniques thus makes it possible to obtain the accuracy of much more computationally expensive approaches at a cost lower than even mean field dynamics, providing the ability to treat the quantum dynamics of atomistic condensed phase systems for long times.

  6. Accurate microRNA target prediction correlates with protein repression levels

    Directory of Open Access Journals (Sweden)

    Simossis Victor A

    2009-09-01

    Full Text Available Abstract Background MicroRNAs are small endogenously expressed non-coding RNA molecules that regulate target gene expression through translation repression or messenger RNA degradation. MicroRNA regulation is performed through pairing of the microRNA to sites in the messenger RNA of protein coding genes. Since experimental identification of miRNA target genes poses difficulties, computational microRNA target prediction is one of the key means in deciphering the role of microRNAs in development and disease. Results DIANA-microT 3.0 is an algorithm for microRNA target prediction which is based on several parameters calculated individually for each microRNA and combines conserved and non-conserved microRNA recognition elements into a final prediction score, which correlates with protein production fold change. Specifically, for each predicted interaction the program reports a signal to noise ratio and a precision score which can be used as an indication of the false positive rate of the prediction. Conclusion Recently, several computational target prediction programs were benchmarked based on a set of microRNA target genes identified by the pSILAC method. In this assessment DIANA-microT 3.0 was found to achieve the highest precision among the most widely used microRNA target prediction programs reaching approximately 66%. The DIANA-microT 3.0 prediction results are available online in a user friendly web server at http://www.microrna.gr/microT

  7. Sensor Data Fusion for Accurate Cloud Presence Prediction Using Dempster-Shafer Evidence Theory

    Directory of Open Access Journals (Sweden)

    Jesse S. Jin

    2010-10-01

    Full Text Available Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, signal and image processing. This paper introduces a novel data fusion approach in a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. The potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data that were recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent.
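    The core of Dempster-Shafer fusion is the combination rule, sketched below for two sensors assigning mass to {cloud}, {clear}, and the ignorance set {cloud, clear}. The mass values are illustrative and do not reproduce the paper's sensor models.

```python
# Minimal sketch of Dempster's rule of combination for two sensors over the frame
# {cloud, clear}, with mass also allowed on the ignorance set. Masses are invented.
from itertools import product

FRAME = frozenset({"cloud", "clear"})

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: conjunctive combination, renormalized by (1 - conflict)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

sensor_1 = {frozenset({"cloud"}): 0.6, frozenset({"clear"}): 0.1, FRAME: 0.3}
sensor_2 = {frozenset({"cloud"}): 0.5, frozenset({"clear"}): 0.2, FRAME: 0.3}

fused = combine(sensor_1, sensor_2)
for subset, mass in fused.items():
    print(sorted(subset), round(mass, 3))
```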

  8. Do dual-route models accurately predict reading and spelling performance in individuals with acquired alexia and agraphia?

    Science.gov (United States)

    Rapcsak, Steven Z; Henry, Maya L; Teague, Sommer L; Carnahan, Susan D; Beeson, Pélagie M

    2007-06-18

    Coltheart and co-workers [Castles, A., Bates, T. C., & Coltheart, M. (2006). John Marshall and the developmental dyslexias. Aphasiology, 20, 871-892; Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204-256] have demonstrated that an equation derived from dual-route theory accurately predicts reading performance in young normal readers and in children with reading impairment due to developmental dyslexia or stroke. In this paper, we present evidence that the dual-route equation and a related multiple regression model also accurately predict both reading and spelling performance in adult neurological patients with acquired alexia and agraphia. These findings provide empirical support for dual-route theories of written language processing.

  9. NESmapper: accurate prediction of leucine-rich nuclear export signals using activity-based profiles.

    Directory of Open Access Journals (Sweden)

    Shunichi Kosugi

    2014-09-01

    Full Text Available The nuclear export of proteins is regulated largely through the exportin/CRM1 pathway, which involves the specific recognition of leucine-rich nuclear export signals (NESs) in the cargo proteins, and modulates nuclear-cytoplasmic protein shuttling by antagonizing the nuclear import activity mediated by importins and the nuclear import signal (NLS). Although the prediction of NESs can help to define proteins that undergo regulated nuclear export, current methods of predicting NESs, including computational tools and consensus-sequence-based searches, have limited accuracy, especially in terms of their specificity. We found that each residue within an NES largely contributes independently and additively to the entire nuclear export activity. We created activity-based profiles of all classes of NESs with a comprehensive mutational analysis in mammalian cells. The profiles highlight a number of specific activity-affecting residues not only at the conserved hydrophobic positions but also in the linker and flanking regions. We then developed a computational tool, NESmapper, to predict NESs by using profiles that had been further optimized by training and combining the amino acid properties of the NES-flanking regions. This tool successfully reduced the considerable number of false positives, and the overall prediction accuracy was higher than that of other methods, including NESsential and Wregex. This profile-based prediction strategy is a reliable way to identify functional protein motifs. NESmapper is available at http://sourceforge.net/projects/nesmapper.
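    A generic sketch of profile-based motif scanning of the kind described above: each sequence window is scored by summing per-position residue weights. The tiny profile and threshold are invented for illustration and are not NESmapper's trained NES profiles.

```python
# Generic sketch of profile-based motif scanning: each window of the sequence is
# scored by summing per-position residue weights from an activity-based profile.
# The 4-position profile and threshold below are invented for illustration.
HYDROPHOBIC_BONUS = {"L": 2.0, "I": 1.5, "V": 1.2, "F": 1.0, "M": 1.0}

# Hypothetical profile: positions 0 and 3 favor hydrophobic residues; positions
# 1 and 2 (the "linker") mildly penalize proline.
PROFILE = [
    lambda aa: HYDROPHOBIC_BONUS.get(aa, -0.5),
    lambda aa: -1.0 if aa == "P" else 0.2,
    lambda aa: -1.0 if aa == "P" else 0.2,
    lambda aa: HYDROPHOBIC_BONUS.get(aa, -0.5),
]

def scan(sequence: str, threshold: float = 3.0):
    """Yield (start, window, score) for windows scoring at or above the threshold."""
    w = len(PROFILE)
    for i in range(len(sequence) - w + 1):
        window = sequence[i:i + w]
        score = sum(col(aa) for col, aa in zip(PROFILE, window))
        if score >= threshold:
            yield i, window, score

for hit in scan("MKSPLAELTLDEVLKIM"):
    print(hit)
```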

  10. How to make predictions about future infectious disease risks

    Science.gov (United States)

    Woolhouse, Mark

    2011-01-01

    Formal, quantitative approaches are now widely used to make predictions about the likelihood of an infectious disease outbreak, how the disease will spread, and how to control it. Several well-established methodologies are available, including risk factor analysis, risk modelling and dynamic modelling. Even so, predictive modelling is very much the ‘art of the possible’, which tends to drive research effort towards some areas and away from others which may be at least as important. Building on the undoubted success of quantitative modelling of the epidemiology and control of human and animal diseases such as AIDS, influenza, foot-and-mouth disease and BSE, attention needs to be paid to developing a more holistic framework that captures the role of the underlying drivers of disease risks, from demography and behaviour to land use and climate change. At the same time, there is still considerable room for improvement in how quantitative analyses and their outputs are communicated to policy makers and other stakeholders. A starting point would be generally accepted guidelines for ‘good practice’ for the development and the use of predictive models. PMID:21624924
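    As a concrete example of the "dynamic modelling" methodology mentioned above, the sketch below integrates a deterministic SIR model; the parameter values are illustrative only, not tied to any specific disease in the review.

```python
# Minimal dynamic-modelling sketch: a deterministic SIR model integrated with
# scipy. Population size and rate parameters are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

N = 1_000_000                  # population size
beta, gamma = 0.4, 0.2         # transmission and recovery rates (1/day); R0 = beta/gamma = 2

def sir(t, y):
    s, i, r = y
    return [-beta * s * i / N, beta * s * i / N - gamma * i, gamma * i]

sol = solve_ivp(sir, t_span=(0, 300), y0=[N - 10, 10, 0], dense_output=True)
t = np.linspace(0, 300, 301)
s, i, r = sol.sol(t)
print(f"peak prevalence: {i.max():,.0f} infecteds on day {t[i.argmax()]:.0f}")
print(f"final epidemic size: {r[-1] / N:.1%} of the population")
```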

  11. Accurate prediction of the ammonia probes of a variable proton-to-electron mass ratio

    Science.gov (United States)

    Owens, A.; Yurchenko, S. N.; Thiel, W.; Špirko, V.

    2015-07-01

    A comprehensive study of the mass sensitivity of the vibration-rotation-inversion transitions of 14NH3, 15NH3, 14ND3 and 15ND3 is carried out variationally using the TROVE approach. Variational calculations are robust and accurate, offering a new way to compute sensitivity coefficients. Particular attention is paid to the Δk = ±3 transitions between the accidentally coinciding rotation-inversion energy levels of the ν2 = 0+, 0-, 1+ and 1- states, and the inversion transitions in the ν4 = 1 state affected by the `giant' l-type doubling effect. These transitions exhibit highly anomalous sensitivities, thus appearing as promising probes of a possible cosmological variation of the proton-to-electron mass ratio μ. Moreover, a simultaneous comparison of the calculated sensitivities reveals a sizeable isotopic dependence which could aid an exclusive ammonia detection.
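    Sensitivity coefficients of the kind discussed above are commonly defined as T = (Δν/ν)/(Δμ/μ); the sketch below estimates one by a symmetric finite difference from transition frequencies recomputed at slightly scaled μ. The frequencies are invented placeholders, not TROVE results.

```python
# Sketch: estimate a sensitivity coefficient T = (dnu/nu) / (dmu/mu) by a symmetric
# finite difference, given transition frequencies computed at scaled proton-to-electron
# mass ratios. The three frequencies below are hypothetical, chosen only to show an
# "anomalously" large |T|; they are not results from the paper.
def sensitivity_coefficient(nu_minus: float, nu_0: float, nu_plus: float,
                            rel_step: float) -> float:
    """nu_minus/nu_plus: frequencies at mu*(1 -/+ rel_step); nu_0: at the nominal mu."""
    dnu_d_scaled_mu = (nu_plus - nu_minus) / (2.0 * rel_step)   # d nu / d(mu/mu0)
    return dnu_d_scaled_mu / nu_0

# Hypothetical transition near 25 GHz, recomputed at mu scaled by +/- 0.1%
T = sensitivity_coefficient(nu_minus=25.35, nu_0=25.10, nu_plus=24.85, rel_step=1e-3)
print(f"T ≈ {T:.1f}")
```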

  12. Are predictive equations for estimating resting energy expenditure accurate in Asian Indian male weightlifters?

    Directory of Open Access Journals (Sweden)

    Mini Joseph

    2017-01-01

    Full Text Available Background: The accuracy of existing predictive equations to determine the resting energy expenditure (REE) of professional weightlifters remains scarcely studied. Our study aimed to assess the REE of male Asian Indian weightlifters with indirect calorimetry and to compare the measured REE (mREE) with published equations. A new equation using potential anthropometric variables to predict REE was also evaluated. Materials and Methods: REE was measured on 30 male professional weightlifters aged between 17 and 28 years using indirect calorimetry and compared with eight predictive formulas (Harris–Benedict, Mifflin-St. Jeor, FAO/WHO/UNU, ICMR, Cunningham, Owen, Katch-McArdle, and Nelson). Pearson correlation coefficient, intraclass correlation coefficient, and multiple linear regression analysis were carried out to study the agreement between the different methods, the association with anthropometric variables, and to formulate a new prediction equation for this population. Results: Pearson correlation coefficients between mREE and the anthropometric variables showed significant positive correlations with suprailiac skinfold thickness, lean body mass (LBM), waist circumference, hip circumference, bone mineral mass, and body mass. All eight predictive equations underestimated the REE of the weightlifters when compared with the mREE. The highest mean difference was 636 kcal/day (Owen, 1986) and the lowest difference was 375 kcal/day (Cunningham, 1980). Multiple linear regression done stepwise showed that LBM was the only significant determinant of REE in this group of sportspersons. A new equation using LBM as the independent variable for calculating REE was computed. REE for weightlifters = −164.065 + 0.039 × LBM (confidence interval: −1122.984 to 794.854). This new equation reduced the mean difference with mREE to 2.36 ± 369.15 kcal/day (standard error = 67.40). Conclusion: The significant finding of this study was that all the prediction equations
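    The sketch below simply evaluates the new LBM-based equation as quoted in the abstract alongside the commonly cited Cunningham (1980) form, REE = 500 + 22 × LBM(kg), for comparison. The abstract does not state the units of LBM in the new equation, so the grams interpretation shown at the end is only an assumption for illustration.

```python
# Sketch of the prediction equations mentioned above. The new weightlifter equation
# is transcribed from the abstract (REE = -164.065 + 0.039 * LBM); its LBM units are
# not stated there, so it is evaluated as written. Cunningham (1980) is included in
# its commonly cited form for comparison.
def ree_weightlifter(lbm):
    """New equation from the abstract; LBM units as in the original publication."""
    return -164.065 + 0.039 * lbm

def ree_cunningham(lbm_kg):
    """Cunningham (1980): REE (kcal/day) from lean body mass in kg."""
    return 500.0 + 22.0 * lbm_kg

lbm_kg = 62.0                                   # hypothetical lean body mass
print(f"Cunningham:   {ree_cunningham(lbm_kg):.0f} kcal/day")
print(f"New equation: {ree_weightlifter(lbm_kg * 1000):.0f} kcal/day (if LBM is taken in grams)")
```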

  13. A machine learning approach to the accurate prediction of monitor units for a compact proton machine.

    Science.gov (United States)

    Sun, Baozhou; Lam, Dao; Yang, Deshan; Grantham, Kevin; Zhang, Tiezhi; Mutic, Sasa; Zhao, Tianyu

    2018-05-01

    Clinical treatment planning systems for proton therapy currently do not calculate monitor units (MUs) in passive scatter proton therapy due to the complexity of the beam delivery systems. Physical phantom measurements are commonly employed to determine the field-specific output factors (OFs) but are often subject to limited machine time, measurement uncertainties and intensive labor. In this study, a machine learning-based approach was developed to predict output (cGy/MU) and derive MUs, incorporating the dependencies on gantry angle and field size for a single-room proton therapy system. The goal of this study was to develop a secondary check tool for OF measurements and eventually eliminate patient-specific OF measurements. The OFs of 1754 fields previously measured in a water phantom with calibrated ionization chambers and electrometers for patient-specific fields with various range and modulation width combinations for 23 options were included in this study. The training data sets for machine learning models in three different methods (Random Forest, XGBoost and Cubist) included 1431 (~81%) OFs. Ten-fold cross-validation was used to prevent "overfitting" and to validate each model. The remaining 323 (~19%) OFs were used to test the trained models. The difference between the measured and predicted values from machine learning models was analyzed. Model prediction accuracy was also compared with that of the semi-empirical model developed by Kooy (Phys. Med. Biol. 50, 2005). Additionally, gantry angle dependence of OFs was measured for three groups of options categorized on the selection of the second scatters. Field size dependence of OFs was investigated for the measurements with and without patient-specific apertures. All three machine learning methods showed higher accuracy than the semi-empirical model, which showed a considerably larger discrepancy of up to 7.7% for treatment fields with full range and full modulation width. The Cubist-based solution

  14. Safe surgery: how accurate are we at predicting intra-operative blood loss?

    LENUS (Irish Health Repository)

    2012-02-01

    Introduction Preoperative estimation of intra-operative blood loss by both anaesthetist and operating surgeon is a criterion of the World Health Organization's surgical safety checklist. The checklist requires specific preoperative planning when anticipated blood loss is greater than 500 mL. The aim of this study was to assess the accuracy of surgeons and anaesthetists at predicting intra-operative blood loss. Methods A 6-week prospective study of intermediate and major operations in an academic medical centre was performed. An independent observer interviewed surgical and anaesthetic consultants and registrars, preoperatively asking each to predict expected blood loss in millilitre. Intra-operative blood loss was measured and compared with these predictions. Parameters including the use of anticoagulation and anti-platelet therapy as well as intra-operative hypothermia and hypotension were recorded. Results One hundred sixty-eight operations were included in the study, including 142 elective and 26 emergency operations. Blood loss was predicted to within 500 mL of measured blood loss in 89% of cases. Consultant surgeons tended to underestimate blood loss, doing so in 43% of all cases, while consultant anaesthetists were more likely to overestimate (60% of all operations). Twelve patients (7%) had underestimation of blood loss of more than 500 mL by both surgeon and anaesthetist. Thirty per cent (n = 6/20) of patients requiring transfusion of a blood product within 24 hours of surgery had blood loss underestimated by more than 500 mL by both surgeon and anaesthetist. There was no significant difference in prediction between patients on anti-platelet or anticoagulation therapy preoperatively and those not on the said therapies. Conclusion Predicted intra-operative blood loss was within 500 mL of measured blood loss in 89% of operations. In 30% of patients who ultimately receive a blood transfusion, both the surgeon and anaesthetist significantly underestimate

  15. Fast and accurate covalent bond predictions using perturbation theory in chemical space

    Science.gov (United States)

    Chang, Kuang-Yu; von Lilienfeld, Anatole

    I will discuss the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among systems of different chemical composition. We have investigated single, double, and triple bonds occurring in small sets of iso-valence-electronic molecular species with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order estimates of covalent bonding potentials can achieve chemical accuracy (within 1 kcal/mol) if the alchemical interpolation is vertical (fixed geometry) among chemical elements from third and fourth row of the periodic table. When applied to nonbonded systems of molecular dimers or solids such as III-V semiconductors, alanates, alkali halides, and transition metals, similar observations hold, enabling rapid predictions of van der Waals energies, defect energies, band-structures, crystal structures, and lattice constants.

  16. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Science.gov (United States)

    Xie, Weihong; Yu, Yang

    2017-01-01

    Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures that leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to realize the clinic requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at normal state, the nonlinearity of the heart motion with slow time-variant changing dominates the beating process. When an arrhythmia occurs, the irregularity mode, the fast uncertainties with random patterns become the leading factor of the heart motion. We deal with prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employed the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach with four distinguished datasets. The test results indicate that the new proposed approach reduces prediction errors significantly. PMID:29124062
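
    A schematic, one-dimensional sketch of the interacting-multiple-model idea used above is shown below: two Kalman filters with different process-noise levels are mixed through a Markov transition matrix, and the mode probabilities adaptively "switch" between a slow mode and an irregular mode. All parameters, the toy signal and the fixed transition matrix are illustrative assumptions; the paper additionally drives the transition probability from a signal quality index, which is not reproduced here.

```python
# Minimal 1-D Interacting Multiple Model (IMM) sketch with two constant-position
# Kalman filters: a "slow" model (small process noise) and an "irregular" model
# (large process noise). Parameters and data are illustrative assumptions only.
import numpy as np

def kalman_step(x, P, z, q, r):
    """One predict/update step of a scalar random-walk Kalman filter.
    Returns the updated state, covariance and the measurement likelihood."""
    x_pred, P_pred = x, P + q                  # predict (identity dynamics)
    S = P_pred + r                             # innovation covariance
    K = P_pred / S                             # Kalman gain
    innov = z - x_pred
    like = np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
    return x_pred + K * innov, (1 - K) * P_pred, like

def imm_filter(zs, q=(1e-4, 1e-1), r=1e-2,
               trans=np.array([[0.95, 0.05], [0.05, 0.95]])):
    x = np.array([zs[0], zs[0]])               # per-model state
    P = np.array([1.0, 1.0])                   # per-model covariance
    mu = np.array([0.5, 0.5])                  # mode probabilities
    out = []
    for z in zs:
        c = trans.T @ mu                       # predicted mode probabilities
        mix = (trans * mu[:, None]) / c        # mixing probabilities mu_{i|j}
        x0 = mix.T @ x                         # mixed state for each model
        P0 = np.array([np.sum(mix[:, j] * (P + (x - x0[j])**2))
                       for j in range(2)])     # mixed covariance for each model
        likes = np.empty(2)
        for j in range(2):
            x[j], P[j], likes[j] = kalman_step(x0[j], P0[j], z, q[j], r)
        mu = likes * c
        mu /= mu.sum()                         # updated mode probabilities
        out.append(mu @ x)                     # combined estimate
    return np.array(out)

# Toy heart-motion-like signal: slow drift with an "arrhythmic" jump at t = 60
t = np.arange(120)
truth = 0.02 * t + np.where(t > 60, 2.0, 0.0)
zs = truth + np.random.default_rng(1).normal(0, 0.1, t.size)
est = imm_filter(zs)
print("RMS error:", np.sqrt(np.mean((est - truth) ** 2)))
```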

  17. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Directory of Open Access Journals (Sweden)

    Fan Liang

    2017-01-01

    Full Text Available Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures that leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to realize the clinic requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at normal state, the nonlinearity of the heart motion with slow time-variant changing dominates the beating process. When an arrhythmia occurs, the irregularity mode, the fast uncertainties with random patterns become the leading factor of the heart motion. We deal with prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employed the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach with four distinguished datasets. The test results indicate that the new proposed approach reduces prediction errors significantly.

  18. Meta-analytic approach to the accurate prediction of secreted virulence effectors in gram-negative bacteria

    Directory of Open Access Journals (Sweden)

    Sato Yoshiharu

    2011-11-01

    Full Text Available Abstract Background Many pathogens use a type III secretion system to translocate virulence proteins (called effectors) in order to adapt to the host environment. To date, many prediction tools for effector identification have been developed. However, these tools are insufficiently accurate for producing a list of putative effectors that can be applied directly for labor-intensive experimental verification. This also suggests that important features of effectors have yet to be fully characterized. Results In this study, we have constructed an accurate approach to predicting secreted virulence effectors from Gram-negative bacteria. This consists of a support vector machine-based discriminant analysis followed by a simple criteria-based filtering. The accuracy was assessed by estimating the average number of true positives in the top-20 ranking in the genome-wide screening. In the validation, 10 sets of 20 training and 20 testing examples were randomly selected from 40 known effectors of Salmonella enterica serovar Typhimurium LT2. On average, the SVM portion of our system predicted 9.7 true positives from 20 testing examples in the top-20 of the prediction. Removal of the N-terminal instability, codon adaptation index and ProtParam indices decreased the score to 7.6, 8.9 and 7.9, respectively. These discrimination features suggested that the following characteristics of effectors had been uncovered: unstable N-terminus, non-optimal codon usage, hydrophilic, and less aliphatic. The secondary filtering process represented by coexpression analysis and domain distribution analysis further refined the average true positive counts to 12.3. We further confirmed that our system can correctly predict known effectors of P. syringae DC3000, strongly indicating its feasibility. Conclusions We have successfully developed an accurate prediction system for screening effectors on a genome-wide scale. We confirmed the accuracy of our system by external validation

  19. Risk approximation in decision making: approximative numeric abilities predict advantageous decisions under objective risk.

    Science.gov (United States)

    Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias

    2018-01-22

    Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, and tasks measuring executive functions and exact numeric abilities, e.g., mental calculation and ratio processing skills, were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. In particular, being able to make accurate risk estimations seemed to contribute to superior choices. We recommend that approximation skills and approximate number processing be the subject of future investigations on decision making under risk.

  20. Making oneself predictable: Reduced temporal variability facilitates joint action coordination

    DEFF Research Database (Denmark)

    Vesper, Cordula; van der Wel, Robrecht; Knoblich, Günther

    2011-01-01

    Performing joint actions often requires precise temporal coordination of individual actions. The present study investigated how people coordinate their actions at discrete points in time when continuous or rhythmic information about others’ actions is not available. In particular, we tested the hypothesis that making oneself predictable is used as a coordination strategy. Pairs of participants were instructed to coordinate key presses in a two-choice reaction time task, either responding in synchrony (Experiments 1 and 2) or in close temporal succession (Experiment 3). Across all experiments, we found that coactors reduced the variability of their actions in the joint context compared with the same task performed individually. Correlation analyses indicated that the less variable the actions were, the better was interpersonal coordination. The relation between reduced variability and improved...

  1. Predictive performance of universal termination of resuscitation rules in an Asian community: are they accurate enough?

    Science.gov (United States)

    Chiang, Wen-Chu; Ko, Patrick Chow-In; Chang, Anna Marie; Liu, Sot Shih-Hung; Wang, Hui-Chih; Yang, Chih-Wei; Hsieh, Ming-Ju; Chen, Shey-Ying; Lai, Mei-Shu; Ma, Matthew Huei-Ming

    2015-04-01

    Prehospital termination of resuscitation (TOR) rules have not been widely validated outside of Western countries. This study evaluated the performance of TOR rules in an Asian metropolis with a mixed-tier emergency medical service (EMS). We analysed the Utstein registry of adult, non-traumatic out-of-hospital cardiac arrests (OHCAs) in Taipei to test the performance of TOR rules for advanced life support (ALS) or basic life support (BLS) providers. ALS and BLS-TOR rules were tested in OHCAs among three subgroups: (1) resuscitated by ALS, (2) by BLS and (3) by mixed ALS and BLS. The outcome definition was in-hospital death. Sensitivity, specificity, positive predictive value (PPV), negative predictive value and decreased transport rate (DTR) among various provider combinations were calculated. Of the 3489 OHCAs included, 240 were resuscitated by ALS, 1727 by BLS and 1522 by ALS and BLS. Overall survival to hospital discharge was 197 patients (5.6%). Specificity and PPV of ALS-TOR and BLS-TOR for identifying death ranged from 70.7% to 81.8% and 95.1% to 98.1%, respectively. Applying the TOR rules would have a DTR of 34.2-63.9%. BLS rules had better predictive accuracy and DTR than ALS rules among all subgroups. Application of the ALS and BLS TOR rules would have decreased the number of OHCAs transported to hospital, and BLS rules are reasonable as the universal criteria in a mixed-tier EMS. However, 1.9-4.9% of those who survived would be misclassified as non-survivors, raising concern of compromising patient safety for the implementation of the rules. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
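
    The performance figures quoted above follow from a 2x2 table of the rule's recommendation against in-hospital death; a small sketch of how sensitivity, specificity, PPV, NPV and the decreased transport rate can be computed is shown below, with invented counts.

```python
# Sketch: diagnostic performance of a termination-of-resuscitation (TOR) rule.
# "Positive" = rule recommends termination; outcome = in-hospital death.
# The counts below are invented for illustration only.
def tor_performance(tp, fp, fn, tn):
    sens = tp / (tp + fn)                    # rule-positive among deaths
    spec = tn / (tn + fp)                    # rule-negative among survivors
    ppv = tp / (tp + fp)                     # deaths among rule-positive cases
    npv = tn / (tn + fn)
    dtr = (tp + fp) / (tp + fp + fn + tn)    # decreased transport rate
    return sens, spec, ppv, npv, dtr

sens, spec, ppv, npv, dtr = tor_performance(tp=1200, fp=40, fn=2100, tn=149)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, "
      f"PPV {ppv:.1%}, NPV {npv:.1%}, DTR {dtr:.1%}")
```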

  2. Can magnetic resonance imaging accurately predict concordant pain provocation during provocative disc injection?

    International Nuclear Information System (INIS)

    Kang, Chang Ho; Kim, Yun Hwan; Kim, Jung Hyuk; Chung, Kyoo Byung; Sung, Deuk Jae; Lee, Sang-Heon; Derby, Richard

    2009-01-01

    To correlate magnetic resonance (MR) image findings with pain response by provocation discography in patients with discogenic low back pain, with an emphasis on the combination analysis of a high intensity zone (HIZ) and disc contour abnormalities. Sixty-two patients (aged 17-68 years) with axial low back pain that was likely to be disc related underwent lumbar discography (178 discs tested). The MR images were evaluated for disc degeneration, disc contour abnormalities, HIZ, and endplate abnormalities. Based on the combination of an HIZ and disc contour abnormalities, four classes were determined: (1) normal or bulging disc without HIZ; (2) normal or bulging disc with HIZ; (3) disc protrusion without HIZ; (4) disc protrusion with HIZ. These MR image findings and a new combined MR classification were analyzed on the basis of concordant pain determined by discography. Disc protrusion with HIZ [sensitivity 45.5%; specificity 97.8%; positive predictive value (PPV), 87.0%] correlated significantly with concordant pain provocation (P < 0.01). A normal or bulging disc with HIZ was not associated with reproduction of pain. Disc degeneration (sensitivity 95.4%; specificity 38.8%; PPV 33.9%), disc protrusion (sensitivity 68.2%; specificity 80.6%; PPV 53.6%), and HIZ (sensitivity 56.8%; specificity 83.6%; PPV 53.2%) were not helpful in the identification of a disc with concordant pain. The proposed MR classification is useful to predict a disc with concordant pain. Disc protrusion with HIZ on MR imaging predicted positive discography in patients with discogenic low back pain. (orig.)

  3. Does resident ranking during recruitment accurately predict subsequent performance as a surgical resident?

    Science.gov (United States)

    Fryer, Jonathan P; Corcoran, Noreen; George, Brian; Wang, Ed; Darosa, Debra

    2012-01-01

    While the primary goal of ranking applicants for surgical residency training positions is to identify the candidates who will subsequently perform best as surgical residents, the effectiveness of the ranking process has not been adequately studied. We evaluated our general surgery resident recruitment process between 2001 and 2011 inclusive, to determine if our recruitment ranking parameters effectively predicted subsequent resident performance. We identified 3 candidate ranking parameters (United States Medical Licensing Examination [USMLE] Step 1 score, unadjusted ranking score [URS], and final adjusted ranking [FAR]), and 4 resident performance parameters (American Board of Surgery In-Training Examination [ABSITE] score, PGY1 resident evaluation grade [REG], overall REG, and independent faculty rating ranking [IFRR]), and assessed whether the former were predictive of the latter. Analyses utilized Spearman correlation coefficient. We found that the URS, which is based on objective and criterion-based parameters, was a better predictor of subsequent performance than the FAR, which is a modification of the URS based on subsequent determinations of the resident selection committee. USMLE score was a reliable predictor of ABSITE scores only. However, when we compared our worst resident performances with the performances of the other residents in this evaluation, the data did not produce convincing evidence that poor resident performances could be reliably predicted by any of the recruitment ranking parameters. Finally, stratifying candidates based on their rank range did not effectively define a ranking cut-off beyond which resident performance would drop off. Based on these findings, we suggest that surgery programs may be better served by utilizing a more structured resident ranking process and that subsequent adjustments to the rank list generated by this process should be undertaken with caution. Copyright © 2012 Association of Program Directors in Surgery

  4. Microvascular remodelling in preeclampsia: quantifying capillary rarefaction accurately and independently predicts preeclampsia.

    Science.gov (United States)

    Antonios, Tarek F T; Nama, Vivek; Wang, Duolao; Manyonda, Isaac T

    2013-09-01

    Preeclampsia is a major cause of maternal and neonatal mortality and morbidity. The incidence of preeclampsia seems to be rising because of increased prevalence of predisposing disorders, such as essential hypertension, diabetes, and obesity, and there is increasing evidence to suggest widespread microcirculatory abnormalities before the onset of preeclampsia. We hypothesized that quantifying capillary rarefaction could be helpful in the clinical prediction of preeclampsia. We measured skin capillary density according to a well-validated protocol at 5 consecutive predetermined visits in 322 consecutive white women, of whom 16 subjects developed preeclampsia. We found that structural capillary rarefaction at 20-24 weeks of gestation yielded a sensitivity of 0.87 with a specificity of 0.50 at the cutoff of 2 capillaries/field with the area under the curve of the receiver operating characteristic value of 0.70, whereas capillary rarefaction at 27-32 weeks of gestation yielded a sensitivity of 0.75 and a higher specificity of 0.77 at the cutoff of 8 capillaries/field with area under the curve of the receiver operating characteristic value of 0.82. Combining capillary rarefaction with uterine artery Doppler pulsatility index increased the sensitivity and specificity of the prediction. Multivariable analysis shows that the odds of preeclampsia are increased in women with previous history of preeclampsia or chronic hypertension and in those with increased uterine artery Doppler pulsatility index, but the most powerful and independent predictor of preeclampsia was capillary rarefaction at 27-32 weeks. Quantifying structural rarefaction of skin capillaries in pregnancy is a potentially useful clinical marker for the prediction of preeclampsia.

  5. Accurate prediction of the dew points of acidic combustion gases by using an artificial neural network model

    International Nuclear Information System (INIS)

    ZareNezhad, Bahman; Aminian, Ali

    2011-01-01

    This paper presents a new approach based on using an artificial neural network (ANN) model for predicting the acid dew points of the combustion gases in process and power plants. The most important acidic combustion gases, namely SO3, SO2, NO2, HCl and HBr, are considered in this investigation. The proposed network is trained using the Levenberg-Marquardt backpropagation algorithm, and the hyperbolic tangent sigmoid activation function is applied to calculate the output values of the neurons of the hidden layer. According to the network's training, validation and testing results, a three-layer neural network with nine neurons in the hidden layer is selected as the best architecture for accurate prediction of the acidic combustion gas dew points over wide ranges of acid and moisture concentrations. The proposed neural network model can have significant application in predicting the condensation temperatures of different acid gases to mitigate the corrosion problems in stacks, pollution control devices and energy recovery systems.
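
    A minimal sketch of a comparable network is shown below, using scikit-learn with one hidden layer of nine tanh neurons; the training data are synthetic placeholders and the 'lbfgs' solver is substituted for the Levenberg-Marquardt algorithm used in the paper, which scikit-learn does not provide.

```python
# Sketch: small feed-forward network (one hidden layer, nine tanh neurons) for
# acid dew-point prediction. Synthetic data; 'lbfgs' stands in for
# Levenberg-Marquardt training.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
acid_ppm = rng.uniform(1, 500, n)          # e.g. SO3 concentration (assumed range)
h2o_pct = rng.uniform(2, 20, n)            # moisture content (assumed range)
X = np.column_stack([np.log10(acid_ppm), h2o_pct])
# crude placeholder relation for a dew point in deg C, illustration only
y = 120 + 25 * np.log10(acid_ppm) + 1.5 * h2o_pct + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(9,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)
print("test R^2:", round(net.score(X_te, y_te), 3))
```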

  6. Can tritiated water-dilution space accurately predict total body water in chukar partridges

    International Nuclear Information System (INIS)

    Crum, B.G.; Williams, J.B.; Nagy, K.A.

    1985-01-01

    Total body water (TBW) volumes determined from the dilution space of injected tritiated water have consistently overestimated actual water volumes (determined by desiccation to constant mass) in reptiles and mammals, but results for birds are controversial. We investigated potential errors in both the dilution method and the desiccation method in an attempt to resolve this controversy. Tritiated water dilution yielded an accurate measurement of water mass in vitro. However, in vivo, this method yielded a 4.6% overestimate of the amount of water (3.1% of live body mass) in chukar partridges, apparently largely because of loss of tritium from body water to sites of dissociable hydrogens on body solids. An additional source of overestimation (approximately 2% of body mass) was loss of tritium to the solids in blood samples during distillation of blood to obtain pure water for tritium analysis. Measuring tritium activity in plasma samples avoided this problem but required measurement of, and correction for, the dry matter content in plasma. Desiccation to constant mass by lyophilization or oven-drying also overestimated the amount of water actually in the bodies of chukar partridges by 1.4% of body mass, because these values included water adsorbed onto the outside of feathers. When desiccating defeathered carcasses, oven-drying at 70 degrees C yielded TBW values identical to those obtained from lyophilization, but TBW was overestimated (0.5% of body mass) by drying at 100 degrees C due to loss of organic substances as well as water

  7. Surface temperatures in New York City: Geospatial data enables the accurate prediction of radiative heat transfer.

    Science.gov (United States)

    Ghandehari, Masoud; Emig, Thorsten; Aghamohamadnia, Milad

    2018-02-02

    Despite decades of research seeking to derive the urban energy budget, the dynamics of thermal exchange in the densely constructed environment is not yet well understood. Using New York City as a study site, we present a novel hybrid experimental-computational approach for a better understanding of the radiative heat transfer in complex urban environments. The aim of this work is to contribute to the calculation of the urban energy budget, particularly the stored energy. We will focus our attention on surface thermal radiation. Improved understanding of urban thermodynamics incorporating the interaction of various bodies, particularly in high rise cities, will have implications on energy conservation at the building scale, and for human health and comfort at the urban scale. The platform presented is based on longwave hyperspectral imaging of nearly 100 blocks of Manhattan, in addition to a geospatial radiosity model that describes the collective radiative heat exchange between multiple buildings. Despite assumptions in surface emissivity and thermal conductivity of buildings walls, the close comparison of temperatures derived from measurements and computations is promising. Results imply that the presented geospatial thermodynamic model of urban structures can enable accurate and high resolution analysis of instantaneous urban surface temperatures.

  8. Improving the description of sunglint for accurate prediction of remotely sensed radiances

    Energy Technology Data Exchange (ETDEWEB)

    Ottaviani, Matteo [Light and Life Laboratory, Department of Physics and Engineering Physics, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030 (United States)], E-mail: mottavia@stevens.edu; Spurr, Robert [RT Solutions Inc., 9 Channing Street, Cambridge, MA 02138 (United States); Stamnes, Knut; Li Wei [Light and Life Laboratory, Department of Physics and Engineering Physics, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030 (United States); Su Wenying [Science Systems and Applications Inc., 1 Enterprise Parkway, Hampton, VA 23666 (United States); Wiscombe, Warren [NASA GSFC, Greenbelt, MD 20771 (United States)

    2008-09-15

    The bidirectional reflection distribution function (BRDF) of the ocean is a critical boundary condition for radiative transfer calculations in the coupled atmosphere-ocean system. Existing models express the extent of the glint-contaminated region and its contribution to the radiance essentially as a function of the wind speed. An accurate treatment of the glint contribution and its propagation in the atmosphere would improve current correction schemes and hence rescue a significant portion of data presently discarded as 'glint contaminated'. In current satellite imagery, a correction to the sensor-measured radiances is limited to the region at the edge of the glint, where the contribution is below a certain threshold. This correction assumes the sunglint radiance to be directly transmitted through the atmosphere. To quantify the error introduced by this approximation we employ a radiative transfer code that allows for a user-specified BRDF at the atmosphere-ocean interface and rigorously accounts for multiple scattering. We show that the errors incurred by ignoring multiple scattering are very significant and typically lie in the range 10-90%. Multiple reflections and shadowing at the surface can also be accounted for, and we illustrate the importance of such processes at grazing geometries.

  9. High-order accurate numerical algorithm for three-dimensional transport prediction

    Energy Technology Data Exchange (ETDEWEB)

    Pepper, D W [Savannah River Lab., Aiken, SC; Baker, A J

    1980-01-01

    The numerical solution of the three-dimensional pollutant transport equation is obtained with the method of fractional steps; advection is solved by the method of moments and diffusion by cubic splines. Topography and variable mesh spacing are accounted for with coordinate transformations. First estimate wind fields are obtained by interpolation to grid points surrounding specific data locations. Numerical results agree with results obtained from analytical Gaussian plume relations for ideal conditions. The numerical model is used to simulate the transport of tritium released from the Savannah River Plant on 2 May 1974. Predicted ground level air concentration 56 km from the release point is within 38% of the experimentally measured value.
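
    The analytical Gaussian plume relation used as the benchmark above has a standard closed form; a hedged sketch is given below, with release and dispersion parameters that are illustrative assumptions rather than values from the Savannah River simulation.

```python
# Sketch: the standard analytical Gaussian plume relation often used as a
# benchmark for transport codes. sigma_y, sigma_z and the release parameters
# below are illustrative assumptions only.
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume concentration (mass per volume)."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: 1 g/s release, 3 m/s wind, centerline ground-level value far downwind,
# with dispersion parameters simply assumed for a neutral stability class.
c = gaussian_plume(Q=1.0, u=3.0, y=0.0, z=0.0, H=60.0,
                   sigma_y=2500.0, sigma_z=500.0)
print(f"ground-level concentration: {c:.3e} g/m^3")
```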

  10. Developing Metamodels for Fast and Accurate Prediction of the Draping of Physical Surfaces

    DEFF Research Database (Denmark)

    Christensen, Esben Toke; Forrester, AIJ.; Lund, Erik

    2018-01-01

    In this paper, the use of methods from the meta- or surrogate modeling literature, for building models predicting the draping of physical surfaces, is examined. An example application concerning modeling of the behavior of a variable shape mold is treated. Four different methods are considered [...] and local variants are compared in terms of accuracy and numerical efficiency on data sets of different sizes for the treated application. It is shown that the POD-based methods are vastly superior to models based on kriging alone, and that the use of a difference model structure is advantageous...

  11. Accurate cut-offs for predicting endoscopic activity and mucosal healing in Crohn's disease with fecal calprotectin

    Directory of Open Access Journals (Sweden)

    Juan María Vázquez-Morón

    Full Text Available Background: Fecal biomarkers, especially fecal calprotectin, are useful for predicting endoscopic activity in Crohn's disease; however, the cut-off point remains unclear. The aim of this paper was to analyze whether faecal calprotectin and M2 pyruvate kinase are good tools for generating highly accurate scores for the prediction of the state of endoscopic activity and mucosal healing. Methods: The simple endoscopic score for Crohn's disease and the Crohn's disease activity index was calculated for 71 patients diagnosed with Crohn's. Fecal calprotectin and M2-PK were measured by the enzyme-linked immunosorbent assay test. Results: A fecal calprotectin cut-off concentration of ≥ 170 µg/g (sensitivity 77.6%, specificity 95.5% and likelihood ratio +17.06) predicts a high probability of endoscopic activity, and a fecal calprotectin cut-off of ≤ 71 µg/g (sensitivity 95.9%, specificity 52.3% and likelihood ratio -0.08) predicts a high probability of mucosal healing. Three clinical groups were identified according to the data obtained: endoscopic activity (calprotectin ≥ 170), mucosal healing (calprotectin ≤ 71) and uncertainty (71 < calprotectin < 170), with significant differences in endoscopic values (F = 26.407, p < 0.01). Clinical activity or remission modified the probabilities of presenting endoscopic activity (100% vs 89%) or mucosal healing (75% vs 87%) in the diagnostic scores generated. M2-PK was insufficiently accurate to determine scores. Conclusions: The highly accurate scores for fecal calprotectin provide a useful tool for interpreting the probabilities of presenting endoscopic activity or mucosal healing, and are valuable in the specific clinical context.
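
    A small sketch of how cut-off performance figures of this kind (sensitivity, specificity and likelihood ratios) can be derived from paired biomarker and endoscopy data is shown below; the synthetic data and the invented distributions are placeholders, not the study's measurements.

```python
# Sketch: deriving sensitivity, specificity and likelihood ratios for a fecal
# calprotectin cut-off against an endoscopic reference. Data are synthetic.
import numpy as np

def cutoff_metrics(values, disease, cutoff):
    """Treat values >= cutoff as test-positive for endoscopic activity."""
    pos = values >= cutoff
    tp = np.sum(pos & disease)
    fn = np.sum(~pos & disease)
    tn = np.sum(~pos & ~disease)
    fp = np.sum(pos & ~disease)
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    lr_pos = sens / (1 - spec)            # likelihood ratio of a positive test
    lr_neg = (1 - sens) / spec            # likelihood ratio of a negative test
    return sens, spec, lr_pos, lr_neg

rng = np.random.default_rng(0)
disease = rng.random(71) < 0.55                          # endoscopically active?
values = np.where(disease, rng.normal(350, 150, 71),
                  rng.normal(90, 60, 71)).clip(5)        # fake calprotectin, ug/g
for cutoff in (71, 170):
    s, sp, lp, ln = cutoff_metrics(values, disease, cutoff)
    print(f"cutoff {cutoff} ug/g: sens {s:.2f}, spec {sp:.2f}, "
          f"LR+ {lp:.1f}, LR- {ln:.2f}")
```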

  12. DisoMCS: Accurately Predicting Protein Intrinsically Disordered Regions Using a Multi-Class Conservative Score Approach.

    Directory of Open Access Journals (Sweden)

    Zhiheng Wang

    Full Text Available The precise prediction of protein intrinsically disordered regions, which play a crucial role in biological procedures, is a necessary prerequisite to further the understanding of the principles and mechanisms of protein function. Here, we propose a novel predictor, DisoMCS, which is a more accurate predictor of protein intrinsically disordered regions. DisoMCS is based on an original multi-class conservative score (MCS) obtained by sequence-order/disorder alignment. Initially, near-disorder regions are defined on fragments located at both termini of an ordered region connecting a disordered region. Then the multi-class conservative score is generated by sequence alignment against a known structure database and represented as order, near-disorder and disorder conservative scores. The MCS of each amino acid has three elements: order, near-disorder and disorder profiles. Finally, the MCS is exploited as features to identify disordered regions in sequences. DisoMCS utilizes a non-redundant data set as the training set, MCS and predicted secondary structure as features, and a conditional random field as the classification algorithm. In predicted near-disorder regions a residue is determined to be ordered or disordered according to the optimized decision threshold. DisoMCS was evaluated by cross-validation, large-scale prediction, independent tests and CASP (Critical Assessment of Techniques for Protein Structure Prediction) tests. All results confirmed that DisoMCS was very competitive in terms of accuracy of prediction when compared with well-established publicly available disordered region predictors. It also indicated our approach was more accurate when a query has higher homology with the knowledge database. DisoMCS is available at http://cal.tongji.edu.cn/disorder/.

  13. Accurate prediction of hot spot residues through physicochemical characteristics of amino acid sequences

    KAUST Repository

    Chen, Peng

    2013-07-23

    Hot spot residues of proteins are fundamental interface residues that help proteins perform their functions. Detecting hot spots by experimental methods is costly and time-consuming. Sequential and structural information has been widely used in the computational prediction of hot spots. However, structural information is not always available. In this article, we investigated the problem of identifying hot spots using only physicochemical characteristics extracted from amino acid sequences. We first extracted 132 relatively independent physicochemical features from a set of the 544 properties in AAindex1, an amino acid index database. Each feature was utilized to train a classification model with a novel encoding schema for hot spot prediction by the IBk algorithm, an extension of the K-nearest neighbor algorithm. The combinations of the individual classifiers were explored and the classifiers that appeared frequently in the top performing combinations were selected. The hot spot predictor was built based on an ensemble of these classifiers and to work in a voting manner. Experimental results demonstrated that our method effectively exploited the feature space and allowed flexible weights of features for different queries. On the commonly used hot spot benchmark sets, our method significantly outperformed other machine learning algorithms and state-of-the-art hot spot predictors. The program is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013 Wiley Periodicals, Inc.
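
    A minimal sketch of the voting-ensemble idea is shown below, using scikit-learn's k-nearest-neighbour classifier as a stand-in for the WEKA IBk algorithm named above; the single-feature classifiers, synthetic features and labels are illustrative assumptions, not the authors' encoding schema.

```python
# Sketch: an ensemble of k-nearest-neighbour classifiers, each trained on a
# different physicochemical feature, combined by majority vote. Features and
# hot-spot labels are synthetic placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_res, n_feat = 300, 10                      # residues x selected AAindex features
X = rng.normal(size=(n_res, n_feat))
y = (X[:, :3].sum(axis=1) + rng.normal(0, 0.5, n_res)) > 0   # fake hot-spot labels

train = rng.random(n_res) < 0.8
classifiers = []
for f in range(n_feat):                      # one single-feature classifier each
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(X[train, f:f + 1], y[train])
    classifiers.append(clf)

votes = np.array([clf.predict(X[~train, f:f + 1])
                  for f, clf in enumerate(classifiers)])
pred = votes.mean(axis=0) >= 0.5             # simple majority vote
acc = (pred == y[~train]).mean()
print(f"ensemble voting accuracy on held-out residues: {acc:.2f}")
```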

  14. Neural network and SVM classifiers accurately predict lipid binding proteins, irrespective of sequence homology.

    Science.gov (United States)

    Bakhtiarizadeh, Mohammad Reza; Moradi-Shahrbabak, Mohammad; Ebrahimi, Mansour; Ebrahimie, Esmaeil

    2014-09-07

    Due to the central roles of lipid binding proteins (LBPs) in many biological processes, sequence based identification of LBPs is of great interest. The major challenge is that LBPs are diverse in sequence, structure, and function which results in low accuracy of sequence homology based methods. Therefore, there is a need for developing alternative functional prediction methods irrespective of sequence similarity. To identify LBPs from non-LBPs, the performances of support vector machine (SVM) and neural network were compared in this study. Comprehensive protein features and various techniques were employed to create datasets. Five-fold cross-validation (CV) and independent evaluation (IE) tests were used to assess the validity of the two methods. The results indicated that SVM outperforms neural network. SVM achieved 89.28% (CV) and 89.55% (IE) overall accuracy in identification of LBPs from non-LBPs and 92.06% (CV) and 92.90% (IE) (in average) for classification of different LBPs classes. Increasing the number and the range of extracted protein features as well as optimization of the SVM parameters significantly increased the efficiency of LBPs class prediction in comparison to the only previous report in this field. Altogether, the results showed that the SVM algorithm can be run on broad, computationally calculated protein features and offers a promising tool in detection of LBPs classes. The proposed approach has the potential to integrate and improve the common sequence alignment based methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Accurate prediction of hot spot residues through physicochemical characteristics of amino acid sequences

    KAUST Repository

    Chen, Peng; Li, Jinyan; Limsoon, Wong; Kuwahara, Hiroyuki; Huang, Jianhua Z.; Gao, Xin

    2013-01-01

    Hot spot residues of proteins are fundamental interface residues that help proteins perform their functions. Detecting hot spots by experimental methods is costly and time-consuming. Sequential and structural information has been widely used in the computational prediction of hot spots. However, structural information is not always available. In this article, we investigated the problem of identifying hot spots using only physicochemical characteristics extracted from amino acid sequences. We first extracted 132 relatively independent physicochemical features from a set of the 544 properties in AAindex1, an amino acid index database. Each feature was utilized to train a classification model with a novel encoding schema for hot spot prediction by the IBk algorithm, an extension of the K-nearest neighbor algorithm. The combinations of the individual classifiers were explored and the classifiers that appeared frequently in the top performing combinations were selected. The hot spot predictor was built based on an ensemble of these classifiers and to work in a voting manner. Experimental results demonstrated that our method effectively exploited the feature space and allowed flexible weights of features for different queries. On the commonly used hot spot benchmark sets, our method significantly outperformed other machine learning algorithms and state-of-the-art hot spot predictors. The program is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013 Wiley Periodicals, Inc.

  16. Deep Learning Accurately Predicts Estrogen Receptor Status in Breast Cancer Metabolomics Data.

    Science.gov (United States)

    Alakwaa, Fadhl M; Chaudhary, Kumardeep; Garmire, Lana X

    2018-01-05

    Metabolomics holds promise as a new technology to diagnose highly heterogeneous diseases. Conventionally, metabolomics data analysis for diagnosis is done using various statistical and machine learning based classification methods. However, it remains unknown if deep neural networks, a class of increasingly popular machine learning methods, are suitable to classify metabolomics data. Here we use a cohort of 271 breast cancer tissues, 204 positive estrogen receptor (ER+) and 67 negative estrogen receptor (ER-), to test the accuracies of feed-forward networks, a deep learning (DL) framework, as well as six widely used machine learning models, namely random forest (RF), support vector machines (SVM), recursive partitioning and regression trees (RPART), linear discriminant analysis (LDA), prediction analysis for microarrays (PAM), and generalized boosted models (GBM). The DL framework has the highest area under the curve (AUC) of 0.93 in classifying ER+/ER- patients, compared to the other six machine learning algorithms. Furthermore, the biological interpretation of the first hidden layer reveals eight commonly enriched significant metabolomics pathways (adjusted P-value < 0.05) [...] learning methods. Among them, protein digestion and absorption and ATP-binding cassette (ABC) transporters pathways are also confirmed in integrated analysis between metabolomics and gene expression data in these samples. In summary, the deep learning method shows advantages for metabolomics based breast cancer ER status classification, with both the highest prediction accuracy (AUC = 0.93) and better revelation of disease biology. We encourage the adoption of feed-forward network based deep learning methods in the metabolomics research community for classification.
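
    A minimal sketch of this kind of comparison (a feed-forward network versus a random forest, scored by AUC on held-out samples) is shown below; the synthetic data, network size and feature count are assumptions, not the study's cohort or architecture.

```python
# Sketch: comparing a feed-forward network with a random forest on a binary
# ER+/ER- classification task using AUC. The data are synthetic stand-ins for
# metabolite abundances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 271, 162                                  # samples x metabolites (assumed p)
X = rng.normal(size=(n, p))
y = (X[:, :10].sum(axis=1) + rng.normal(0, 2, n)) > 0    # fake ER status

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
models = {
    "feed-forward net": MLPClassifier(hidden_layer_sizes=(64, 32),
                                      max_iter=2000, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```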

  17. Accurate prediction of hot spot residues through physicochemical characteristics of amino acid sequences.

    Science.gov (United States)

    Chen, Peng; Li, Jinyan; Wong, Limsoon; Kuwahara, Hiroyuki; Huang, Jianhua Z; Gao, Xin

    2013-08-01

    Hot spot residues of proteins are fundamental interface residues that help proteins perform their functions. Detecting hot spots by experimental methods is costly and time-consuming. Sequential and structural information has been widely used in the computational prediction of hot spots. However, structural information is not always available. In this article, we investigated the problem of identifying hot spots using only physicochemical characteristics extracted from amino acid sequences. We first extracted 132 relatively independent physicochemical features from a set of the 544 properties in AAindex1, an amino acid index database. Each feature was utilized to train a classification model with a novel encoding schema for hot spot prediction by the IBk algorithm, an extension of the K-nearest neighbor algorithm. The combinations of the individual classifiers were explored and the classifiers that appeared frequently in the top performing combinations were selected. The hot spot predictor was built based on an ensemble of these classifiers and to work in a voting manner. Experimental results demonstrated that our method effectively exploited the feature space and allowed flexible weights of features for different queries. On the commonly used hot spot benchmark sets, our method significantly outperformed other machine learning algorithms and state-of-the-art hot spot predictors. The program is available at http://sfb.kaust.edu.sa/pages/software.aspx. Copyright © 2013 Wiley Periodicals, Inc.

  18. Size matters. The width and location of a ureteral stone accurately predict the chance of spontaneous passage

    Energy Technology Data Exchange (ETDEWEB)

    Jendeberg, Johan; Geijer, Haakan; Alshamari, Muhammed; Liden, Mats [Oerebro University Hospital, Department of Radiology, Faculty of Medicine and Health, Oerebro (Sweden); Cierzniak, Bartosz [Oerebro University, Department of Surgery, Faculty of Medicine and Health, Oerebro (Sweden)

    2017-11-15

    To determine how to most accurately predict the chance of spontaneous passage of a ureteral stone using information in the diagnostic non-enhanced computed tomography (NECT) and to create predictive models with smaller stone size intervals than previously possible. Retrospectively 392 consecutive patients with ureteric stone on NECT were included. Three radiologists independently measured the stone size. Stone location, side, hydronephrosis, CRP, medical expulsion therapy (MET) and all follow-up radiology until stone expulsion or 26 weeks were recorded. Logistic regressions were performed with spontaneous stone passage in 4 weeks and 20 weeks as the dependent variable. The spontaneous passage rate in 20 weeks was 312 out of 392 stones, 98% in 0-2 mm, 98% in 3 mm, 81% in 4 mm, 65% in 5 mm, 33% in 6 mm and 9% in ≥6.5 mm wide stones. The stone size and location predicted spontaneous ureteric stone passage. The side and the grade of hydronephrosis only predicted stone passage in specific subgroups. Spontaneous passage of a ureteral stone can be predicted with high accuracy with the information available in the NECT. We present a prediction method based on stone size and location. (orig.)
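
    A minimal sketch of a logistic regression of spontaneous passage on stone width and location, the type of model described above, is given below; the data-generating rule, location coding and coefficients are invented for illustration and are not the authors' fitted values.

```python
# Sketch: logistic regression of spontaneous ureteral stone passage on stone
# width and location. Data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 392
width_mm = rng.uniform(1, 9, n)
distal = rng.random(n) < 0.6                      # 1 = distal ureter (assumed coding)
# invented generative rule: passage less likely for wide, proximal stones
logit = 4.0 - 0.9 * width_mm + 1.0 * distal
passed = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([width_mm, distal.astype(float)])
model = LogisticRegression().fit(X, passed)
for w in (3, 5, 7):
    p = model.predict_proba([[w, 1.0]])[0, 1]
    print(f"{w} mm distal stone: predicted passage probability {p:.0%}")
```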

  19. ABC/2 Method Does not Accurately Predict Cerebral Arteriovenous Malformation Volume.

    Science.gov (United States)

    Roark, Christopher; Vadlamudi, Venu; Chaudhary, Neeraj; Gemmete, Joseph J; Seinfeld, Joshua; Thompson, B Gregory; Pandey, Aditya S

    2018-02-01

    Stereotactic radiosurgery (SRS) is a treatment option for cerebral arteriovenous malformations (AVMs) to prevent intracranial hemorrhage. The decision to proceed with SRS is usually based on calculated nidal volume. Physicians commonly use the ABC/2 formula, based on digital subtraction angiography (DSA), when counseling patients for SRS. To determine whether AVM volume calculated using the ABC/2 method on DSA is accurate when compared to the exact volume calculated from thin-cut axial sections used for SRS planning. Retrospective search of neurovascular database to identify AVMs treated with SRS from 1995 to 2015. Maximum nidal diameters in orthogonal planes on DSA images were recorded to determine volume using ABC/2 formula. Nidal target volume was extracted from operative reports of SRS. Volumes were then compared using descriptive statistics and paired t-tests. Ninety intracranial AVMs were identified. Median volume was 4.96 cm3 [interquartile range (IQR) 1.79-8.85] with SRS planning methods and 6.07 cm3 (IQR 1.3-13.6) with ABC/2 methodology. Moderate correlation was seen between SRS and ABC/2 (r = 0.662; P < .001), and SRS-derived volumes were significantly smaller than ABC/2-derived volumes (t = -3.2; P = .002). When AVMs were dichotomized based on ABC/2 volume, significant differences remained (t = 3.1, P = .003 for ABC/2 volume < 7 cm3; [...] for ABC/2 volume > 7 cm3). The ABC/2 method overestimates cerebral AVM volume when compared to volumetric analysis from SRS planning software. For AVMs > 7 cm3, the overestimation is even greater. SRS planning techniques were also significantly different from values derived from equations for cones and cylinders. Copyright © 2017 by the Congress of Neurological Surgeons
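
    For reference, the ABC/2 rule approximates the ellipsoid volume (pi/6)·A·B·C by replacing pi/6 (about 0.524) with 0.5; a short sketch comparing the two on an invented set of diameters is shown below.

```python
# Sketch: the ABC/2 approximation versus the exact ellipsoid volume it
# approximates. The diameters are an invented example.
import math

def abc_over_2(a_cm, b_cm, c_cm):
    """ABC/2 estimate (cm^3) from three orthogonal maximum diameters."""
    return a_cm * b_cm * c_cm / 2.0

def ellipsoid_volume(a_cm, b_cm, c_cm):
    """Exact volume of an ellipsoid with the same diameters: (pi/6)*A*B*C."""
    return math.pi / 6.0 * a_cm * b_cm * c_cm

a, b, c = 3.0, 2.5, 2.0          # example nidal diameters in cm (illustrative)
print("ABC/2    :", round(abc_over_2(a, b, c), 2), "cm^3")
print("ellipsoid:", round(ellipsoid_volume(a, b, c), 2), "cm^3")
```

    The overestimation reported above presumably arises because real nidi are irregular rather than ellipsoidal, so neither closed form matches contoured planning volumes.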

  20. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER-negative patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic
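
    A sketch of the style of analysis reported above (Kaplan-Meier survival by signature group and a Cox model for the hazard ratio) is given below; it assumes the third-party lifelines package, and the synthetic data are not the study's datasets.

```python
# Sketch: Kaplan-Meier curves by signature group and a Cox model hazard ratio.
# Requires the third-party 'lifelines' package; the data below are synthetic.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 295
poor_signature = rng.random(n) < 0.4
time = rng.exponential(np.where(poor_signature, 6.0, 14.0), n).clip(max=10)  # years
event = (time < 10) & (rng.random(n) < 0.8)          # death/relapse observed

df = pd.DataFrame({"time": time, "event": event.astype(int),
                   "poor_signature": poor_signature.astype(int)})

kmf = KaplanMeierFitter()
for grp, sub in df.groupby("poor_signature"):
    kmf.fit(sub["time"], event_observed=sub["event"], label=f"poor={grp}")
    print(f"poor signature = {grp}: 10-year survival ~ {kmf.predict(10):.0%}")

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print("hazard ratio for poor signature:",
      round(float(np.exp(cph.params_["poor_signature"])), 2))
```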

  1. Combining multiple regression and principal component analysis for accurate predictions for column ozone in Peninsular Malaysia

    Science.gov (United States)

    Rajab, Jasim M.; MatJafri, M. Z.; Lim, H. S.

    2013-06-01

    This study encompasses columnar ozone modelling in Peninsular Malaysia. A data set of eight atmospheric parameters [air surface temperature (AST), carbon monoxide (CO), methane (CH4), water vapour (H2Ovapour), skin surface temperature (SSKT), atmosphere temperature (AT), relative humidity (RH), and mean surface pressure (MSP)], retrieved from NASA's Atmospheric Infrared Sounder (AIRS) for the period 2003-2008, was employed to develop models to predict the value of columnar ozone (O3) in the study area. The combined method, which is based on using multiple regression combined with principal component analysis (PCA) modelling, was used to predict columnar ozone. This combined approach was utilized to improve the prediction accuracy of columnar ozone. Separate analyses were carried out for the north east monsoon (NEM) and south west monsoon (SWM) seasons. The O3 was negatively correlated with CH4, H2Ovapour, RH, and MSP, whereas it was positively correlated with CO, AST, SSKT, and AT during both the NEM and SWM season periods. Multiple regression analysis was used to fit the columnar ozone data using the atmospheric parameter variables as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to acquire subsets of the predictor variables to be included in the linear regression model of the atmospheric parameter variables. It was found that the increase in columnar O3 value is associated with an increase in the values of AST, SSKT, AT, and CO and with a drop in the levels of CH4, H2Ovapour, RH, and MSP. The result of fitting the best models for the columnar O3 value using eight of the independent variables gave about the same values of R (≈0.93) and R2 (≈0.86) for both the NEM and SWM seasons. The common variables that appeared in both regression equations were SSKT, CH4 and RH, and the principal precursor of the columnar O3 value in both the NEM and SWM seasons was SSKT.
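
    A minimal sketch of the combined approach (principal component analysis feeding a multiple linear regression) is shown below; the eight predictor columns reuse the abbreviations above, but the data and the number of retained components are illustrative assumptions.

```python
# Sketch: PCA combined with multiple linear regression for columnar ozone.
# The predictor columns and data are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
cols = ["AST", "CO", "CH4", "H2O_vapour", "SSKT", "AT", "RH", "MSP"]
X = rng.normal(size=(n, len(cols)))
ozone = 280 + 8 * X[:, 4] + 5 * X[:, 0] - 4 * X[:, 2] + rng.normal(0, 5, n)  # fake DU

model = make_pipeline(StandardScaler(), PCA(n_components=4), LinearRegression())
r2 = cross_val_score(model, X, ozone, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 with 4 principal components: {r2:.2f}")
```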

  2. Disturbance observer based model predictive control for accurate atmospheric entry of spacecraft

    Science.gov (United States)

    Wu, Chao; Yang, Jun; Li, Shihua; Li, Qi; Guo, Lei

    2018-05-01

    Facing the complex aerodynamic environment of Mars atmosphere, a composite atmospheric entry trajectory tracking strategy is investigated in this paper. External disturbances, initial states uncertainties and aerodynamic parameters uncertainties are the main problems. The composite strategy is designed to solve these problems and improve the accuracy of Mars atmospheric entry. This strategy includes a model predictive control for optimized trajectory tracking performance, as well as a disturbance observer based feedforward compensation for external disturbances and uncertainties attenuation. 500-run Monte Carlo simulations show that the proposed composite control scheme achieves more precise Mars atmospheric entry (3.8 km parachute deployment point distribution error) than the baseline control scheme (8.4 km) and integral control scheme (5.8 km).

  3. nuMap: a web platform for accurate prediction of nucleosome positioning.

    Science.gov (United States)

    Alharbi, Bader A; Alshammari, Thamir H; Felton, Nathan L; Zhurkin, Victor B; Cui, Feng

    2014-10-01

    Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options such as schemes and parameters for threading calculation and provides multiple layout formats. The nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site. Copyright © 2014 The Authors. Production and hosting by Elsevier Ltd.. All rights reserved.

  4. nuMap: A Web Platform for Accurate Prediction of Nucleosome Positioning

    Directory of Open Access Journals (Sweden)

    Bader A. Alharbi

    2014-10-01

    Full Text Available Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options such as schemes and parameters for threading calculation and provides multiple layout formats. The nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site.

  5. How Accurately Do Consecutive Cohort Audits Predict Phase III Multisite Clinical Trial Recruitment in Palliative Care?

    Science.gov (United States)

    McCaffrey, Nikki; Fazekas, Belinda; Cutri, Natalie; Currow, David C

    2016-04-01

    Audits have been proposed for estimating possible recruitment rates to randomized controlled trials (RCTs), but few studies have compared audit data with subsequent recruitment rates. To compare the accuracy of estimates of potential recruitment from a retrospective consecutive cohort audit of actual participating sites and recruitment to four Phase III multisite clinical RCTs. The proportion of potentially eligible study participants estimated from an inpatient chart review of people with life-limiting illnesses referred to six Australian specialist palliative care services was compared with recruitment data extracted from study prescreening information from three sites that participated fully in four Palliative Care Clinical Studies Collaborative RCTs. The predominant reasons for ineligibility in the audit and RCTs were analyzed. The audit overestimated the proportion of people referred to the palliative care services who could participate in the RCTs (pain 17.7% vs. 1.2%, delirium 5.8% vs. 0.6%, anorexia 5.1% vs. 0.8%, and bowel obstruction 2.8% vs. 0.5%). Approximately 2% of the referral base was potentially eligible for these effectiveness studies. Ineligibility for general criteria (language, cognition, and geographic proximity) varied between studies, whereas the reasons for exclusion were similar between the audit and pain and anorexia studies but not for delirium or bowel obstruction. The retrospective consecutive case note audit in participating sites did not predict realistic recruitment rates, mostly underestimating the impact of study-specific inclusion criteria. These findings have implications for the applicability of the results of RCTs. Prospective pilot studies are more likely to predict actual recruitment. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  6. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    Science.gov (United States)

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances, however at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for barefoot, shod, and insole conditions respectively. The simplified model design could be produced in 3 h [...] in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility, however further validity testing around a range of therapeutic footwear types is required. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. A New Approach for Accurate Prediction of Liquid Loading of Directional Gas Wells in Transition Flow or Turbulent Flow

    Directory of Open Access Journals (Sweden)

    Ruiqing Ming

    2017-01-01

    Full Text Available Current common models for calculating continuous liquid-carrying critical gas velocity are established based on vertical wells and laminar flow without considering the influence of deviation angle and Reynolds number on liquid-carrying. With the increase of the directional well in transition flow or turbulent flow, the current common models cannot accurately predict the critical gas velocity of these wells. So we built a new model to predict continuous liquid-carrying critical gas velocity for directional well in transition flow or turbulent flow. It is shown from sensitivity analysis that the correction coefficient is mainly influenced by Reynolds number and deviation angle. With the increase of Reynolds number, the critical liquid-carrying gas velocity increases first and then decreases. And with the increase of deviation angle, the critical liquid-carrying gas velocity gradually decreases. It is indicated from the case calculation analysis that the calculation error of this new model is less than 10%, where accuracy is much higher than those of current common models. It is demonstrated that the continuous liquid-carrying critical gas velocity of directional well in transition flow or turbulent flow can be predicted accurately by using this new model.

  8. Using Bronson Equation to Accurately Predict the Dog Brain Weight Based on Body Weight Parameter

    Directory of Open Access Journals (Sweden)

    L. Miguel Carreira

    2016-12-01

    Full Text Available The study used 69 brains (n = 69) from adult dog cadavers, divided by their skull type into three groups, brachy (B), dolicho (D) and mesaticephalic (M) (n = 23 each), and aimed: (1) to determine whether the Bronson equation may be applied, without reservation, to estimate brain weight (BW) in brachy (B), dolicho (D), and mesaticephalic (M) dog breeds; and (2) to evaluate which breeds are more closely related to each other in an evolutionary scenario. All subjects were identified by sex, age, breed, and body weight (bw). An oscillating saw was used for a circumferential craniotomy to open the skulls; the brains were removed and weighed using a digital scale. For statistical analysis, p-values < 0.05 were considered significant. The work demonstrated a strong relationship between the observed and predicted BW using the Bronson equation. It was possible to hypothesize that groups B and D present a greater encephalization level than M breeds, that B and D dog breeds are more closely related to each other than to M, and that, of the three groups, the D individuals presented the highest brain mass mean.
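
    For orientation, the Bronson-type prediction has the usual allometric form BW = k * bw^a; a hedged sketch is shown below in which the coefficient and the roughly 2/3 exponent are placeholders for the commonly cited mammalian scaling, not the exact Bronson constants or the study's fitted values.

```python
# Sketch: allometric brain-weight prediction from body weight, BW = k * bw**a.
# The coefficient and exponent are placeholders following the widely used
# ~2/3-power mammalian scaling, not the exact constants from the study.
def predicted_brain_weight_g(body_weight_g, k=0.12, a=2 / 3):
    return k * body_weight_g ** a

for bw_kg in (5, 15, 30):            # small, medium and large dogs (illustrative)
    bw_g = bw_kg * 1000
    print(f"{bw_kg:>2} kg dog -> predicted brain weight ~ "
          f"{predicted_brain_weight_g(bw_g):.0f} g")
```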

  9. Towards Relaxing the Spherical Solar Radiation Pressure Model for Accurate Orbit Predictions

    Science.gov (United States)

    Lachut, M.; Bennett, J.

    2016-09-01

    The well-known cannonball model has been used ubiquitously for decades to capture the effects of atmospheric drag and solar radiation pressure on satellites and/or space debris. While it lends itself naturally to spherical objects, its validity in the case of non-spherical objects has been debated heavily for years throughout the space situational awareness community. One of the leading motivations for improving orbit predictions by relaxing the spherical assumption is the ongoing demand for more robust and reliable conjunction assessments. In this study, we explore the orbit propagation of a flat plate in a near-GEO orbit under the influence of solar radiation pressure, using a Lambertian BRDF model. Consequently, this approach accounts for the spin rate and orientation of the object, which in practice is typically determined using a light curve analysis. Here, simulations are performed which systematically reduce the spin rate to demonstrate the point at which the spherical model no longer describes the orbital elements of the spinning plate. Further understanding of this threshold would provide insight into when a higher fidelity model should be used, thus resulting in improved orbit propagations. The work presented here is therefore of particular interest to organizations and researchers that maintain their own catalog and/or perform conjunction analyses.

  10. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    Science.gov (United States)

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis which belongs to the family of temporally weighted linear prediction (WLP) methods uses the conventional forward type of sample prediction. This may not be the best choice especially in computing WLP models with a hard-limiting weighting function. A sample selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
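
    A minimal sketch of the forward-backward, temporally weighted linear prediction idea described above: each sample is regressed on both its p past and p future neighbours, with a per-sample weight standing in for the quasi-closed-phase weighting function (a uniform placeholder here). The actual QCP weighting, its attenuation parameters, and the formant-tracking stages of the paper are not reproduced; formants are simply read from the LP polynomial roots.

```python
import numpy as np


def fb_weighted_lp(x, order, weights=None):
    """Estimate LP coefficients a (x[n] ~ sum_k a[k] * x[n-k]) by jointly minimising
    weighted forward and backward prediction errors."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if weights is None:
        weights = np.ones(n)  # placeholder for the QCP closed-phase weighting
    rows, targets, w = [], [], []
    for i in range(order, n):          # forward: predict x[i] from its p past samples
        rows.append(x[i - order:i][::-1]); targets.append(x[i]); w.append(weights[i])
    for i in range(0, n - order):      # backward: predict x[i] from its p future samples
        rows.append(x[i + 1:i + order + 1]); targets.append(x[i]); w.append(weights[i])
    sw = np.sqrt(np.asarray(w))
    A = np.asarray(rows) * sw[:, None]
    b = np.asarray(targets) * sw
    coefs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coefs


# Two noiseless sinusoids; formant-like frequencies are read from the angles of the
# roots of 1 - sum_k a[k] z^-k that lie near the unit circle.
fs = 8000.0
t = np.arange(0, 0.03, 1 / fs)
x = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
a = fb_weighted_lp(x, order=8)
roots = np.roots(np.concatenate(([1.0], -a)))
freqs = sorted(set(round(abs(np.angle(r)) * fs / (2 * np.pi)) for r in roots if r.imag > 0))
print(freqs)  # the list should include values near 700 and 1200 Hz
```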

  11. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    International Nuclear Information System (INIS)

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

    attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries

  12. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California 90024 (United States)

    2015-11-15

    attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries.

  13. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery.

    Science.gov (United States)

    Yu, Victoria Y; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A; Sheng, Ke

    2015-11-01

    errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries.

  14. Toward accurate prediction of potential energy surfaces and the spectral density of hydrogen bonded systems

    International Nuclear Information System (INIS)

    Rekik, Najeh

    2014-01-01

    Despite the considerable progress made in quantum theory and computational methods, detailed descriptions of the potential energy surfaces of hydrogen-bonded systems have not yet been achieved. In addition, the hydrogen bond (H-bond) itself is still so poorly understood at the fundamental level that it remains unclear exactly what geometry constitutes a “real” H-bond. Therefore, in order to investigate features essential for hydrogen bonded complexes, a simple, efficient, and general method for calculating matrix elements of vibrational operators capable of describing the stretching modes and the H-bond bridges of hydrogen-bonded systems is proposed. The derived matrix elements are simple and computationally easy to evaluate, which makes the method suitable for vibrational studies of multiple-well potentials. The method is illustrated by obtaining potential energy surfaces for a number of two-dimensional systems with repulsive potentials chosen to be in Gaussian form for the stretching mode and of the Morse-type for the H-bond bridge dynamics. The forms of potential energy surfaces of weak and strong hydrogen bonds are analyzed by varying the asymmetry of the Gaussian potential. Moreover, the choice and applicability of the selected potential for the stretching mode and comparison with other potentials used in the area of hydrogen bond research are discussed. The approach for the determination of spectral density has been constructed in the framework of the linear response theory for which spectral density is obtained by Fourier transform of the autocorrelation function of the dipole moment operator of the fast mode. The approach involves anharmonic coupling between the high frequency stretching vibration (double well potential) and low-frequency donor-acceptor stretching mode (Morse potential) as well as the electrical anharmonicity of the dipole moment operator of the fast mode. A direct relaxation mechanism is incorporated through a time decaying exponential
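
    To make the construction concrete, the sketch below assembles a two-dimensional model surface of the kind described above: a Morse potential for the slow H-bond bridge coordinate, a double well (harmonic term plus Gaussian barrier) for the fast stretching coordinate, and a simple anharmonic coupling term. All functional choices and parameter values are illustrative assumptions, not the matrix-element formalism or fitted parameters of the paper.

```python
import numpy as np

# Illustrative parameters in arbitrary units; not taken from the paper above.
D_E, A_MORSE, Q0 = 0.02, 1.0, 0.0       # Morse depth, width and equilibrium (H-bond bridge)
K_Q, BARRIER, SIGMA = 0.05, 0.01, 0.3   # harmonic constant, Gaussian barrier height/width
COUPLING = 0.005                        # hypothetical anharmonic coupling strength


def morse(Q):
    """Morse potential for the low-frequency donor-acceptor (H-bond bridge) coordinate."""
    return D_E * (1.0 - np.exp(-A_MORSE * (Q - Q0))) ** 2


def stretch_double_well(q):
    """Harmonic well plus a Gaussian barrier, giving a symmetric double well for the fast mode."""
    return 0.5 * K_Q * q ** 2 + BARRIER * np.exp(-q ** 2 / (2.0 * SIGMA ** 2))


def pes(q, Q):
    """Two-dimensional model surface with a simple anharmonic coupling term."""
    return stretch_double_well(q) + morse(Q) + COUPLING * q ** 2 * Q


q = np.linspace(-1.5, 1.5, 121)
Q = np.linspace(-0.5, 2.0, 121)
V = pes(q[:, None], Q[None, :])  # broadcast onto a 121 x 121 grid
print(V.shape, float(V.min()))
```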

  15. Industrial Compositional Streamline Simulation for Efficient and Accurate Prediction of Gas Injection and WAG Processes

    Energy Technology Data Exchange (ETDEWEB)

    Margot Gerritsen

    2008-10-31

    Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. The understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline based solver for gas injection processes that is computationally very attractive: as compared to traditional Eulerian solvers in use by industry it computes solutions with a computational speed orders of magnitude higher and a comparable accuracy provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework allowing improved streamline coverage and partial streamline tracing, amongst others; parallelization of the streamline code, which significantly improves wall clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We designed several novel ideas in the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational efforts where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids

  16. Mini-Mental Status Examination: a short form of MMSE was as accurate as the original MMSE in predicting dementia

    DEFF Research Database (Denmark)

    Schultz-Larsen, Kirsten; Lomholt, Rikke Kirstine; Kreiner, Svend

    2006-01-01

    OBJECTIVES: This study assesses the properties of the Mini-Mental State Examination (MMSE) with the purpose of improving the efficiency of methods for screening for cognitive impairment and dementia. A specific purpose was to determine whether an abbreviated version would be as accurate ... (.4%), and positive predictive value (71.0%), but equal area under the receiver operating characteristic curve. Cross-validation on follow-up data confirmed the results. CONCLUSION: A short, valid MMSE, which is as sensitive and specific as the original MMSE for the screening of cognitive impairments and dementia, is attractive for research and clinical practice, particularly if predictive power can be enhanced by combining the short MMSE with neuropsychological tests or informant reports.

  17. Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC

    Science.gov (United States)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2002-01-01

    NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref. 1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model, GMC, which allows the incorporation of complex local inelastic constitutive models, MAC/GMC finds its most important application in metal matrix composites such as the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can be, and have been, built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled accurate modeling of the deformation, failure, and life of titanium matrix composites.

  18. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    Science.gov (United States)

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme, or saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This interpretation was further supported by statistical analysis of experimental saccharification data and by analysis of glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
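
    A minimal sketch of the fitting step described above, assuming the Weibull-type saccharification curve Y(t) = Y_inf · (1 − exp(−(t/λ)^n)); the time-course data points are invented for illustration, and λ is read off as the characteristic time that summarises overall performance.

```python
import numpy as np
from scipy.optimize import curve_fit


def weibull_yield(t, y_inf, lam, n):
    """Weibull-type saccharification curve: Y(t) = Y_inf * (1 - exp(-(t/lam)**n))."""
    return y_inf * (1.0 - np.exp(-(t / lam) ** n))


# Illustrative hydrolysis time course (hours vs. fractional glucose yield), not real data
t_obs = np.array([2.0, 6.0, 12.0, 24.0, 48.0, 72.0, 96.0])
y_obs = np.array([0.08, 0.22, 0.38, 0.55, 0.70, 0.76, 0.78])

(y_inf, lam, n), _ = curve_fit(weibull_yield, t_obs, y_obs, p0=[0.8, 24.0, 1.0])
print(f"Y_inf = {y_inf:.2f}, characteristic time lambda = {lam:.1f} h, shape n = {n:.2f}")
```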

  19. Decision Making in Reference to Model of Marketing Predictive Analytics – Theory and Practice

    Directory of Open Access Journals (Sweden)

    Piotr Tarka

    2014-03-01

    ... understand the inner workings of the model. Originality: The authors describe the importance of analytics, which enhance the decisions that the company makes as it executes strategies and plans, so that the company can be more effective and achieve better results. The key factor that enables marketing strategies to be executed accurately and competitive advantage to be built in the future is predictive modeling. The ability to predict probable futures allows us to shape the future, rather than merely survive whatever it brings.

  20. Predicted osteotomy planes are accurate when using patient-specific instrumentation for total knee arthroplasty in cadavers: a descriptive analysis.

    Science.gov (United States)

    Kievit, A J; Dobbe, J G G; Streekstra, G J; Blankevoort, L; Schafroth, M U

    2018-06-01

    Malalignment of implants is a major source of failure during total knee arthroplasty. To achieve more accurate 3D planning and execution of the osteotomy cuts during surgery, the Signature (Biomet, Warsaw) patient-specific instrumentation (PSI) was used to produce pin guides for the positioning of the osteotomy blocks by means of computer-aided manufacture based on CT scan images. The research question of this study is: what is the transfer accuracy of osteotomy planes predicted by the Signature PSI system for preoperative 3D planning and intraoperative block-guided pin placement to perform total knee arthroplasty procedures? The transfer accuracy achieved by using the Signature PSI system was evaluated by comparing the osteotomy planes predicted preoperatively with the osteotomy planes seen intraoperatively in human cadaveric legs. Outcomes were measured in terms of translational and rotational errors (varus, valgus, flexion, extension and axial rotation) for both tibia and femur osteotomies. Average translational errors between the osteotomy planes predicted using the Signature system and the actual osteotomy planes achieved were 0.8 mm (± 0.5 mm) for the tibia and 0.7 mm (± 4.0 mm) for the femur. Average rotational errors in relation to predicted and achieved osteotomy planes were 0.1° (± 1.2°) of varus and 0.4° (± 1.7°) of anterior slope (extension) for the tibia, and 2.8° (± 2.0°) of varus, 0.9° (± 2.7°) of flexion and 1.4° (± 2.2°) of external rotation for the femur. The similarity between osteotomy planes predicted using the Signature system and osteotomy planes actually achieved was excellent for the tibia, although some discrepancies were seen for the femur. The use of 3D system techniques in TKA surgery can provide accurate intraoperative guidance tailored to individual patients, especially those with deformed bone, and ensure better placement of the implant.

  1. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    Science.gov (United States)

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve (AUC) for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and the corresponding AUC0-t (ie, 72 hours) from the fasting group comprised the training dataset used to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models correlated 1, 2, or 3 concentration-time points with the AUC0-t of saroglitazar. Only models with regression coefficients (R2) >0.90 were screened for further evaluation. The best R2 model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Correlations between predicted and observed AUC0-t of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time-point models achieved R2 > 0.90. Among the various 3-concentration-time-point models, only 4 equations passed the predefined criterion of R2 > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R2 = 0.9323) and 0.75, 2, and 8 hours (R2 = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error supported accurate prediction of saroglitazar exposure. The same models, when applied to AUC0-t prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error values consistent with prediction of the exposure of the metabolite.
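
    A sketch of the limited-sampling workflow outlined above: regress AUC0-t on concentrations at the three validated nominal time points (0.5, 2, and 8 h), screen on R², and report mean prediction error, mean absolute prediction error, and root mean square error on a held-out set. The concentration and AUC arrays below are synthetic placeholders, not the study data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic training/validation data: columns are concentrations at 0.5, 2 and 8 h; y is AUC0-t
true_weights = np.array([2.0, 6.0, 20.0])
C_train = rng.uniform(0.5, 5.0, size=(25, 3))
auc_train = C_train @ true_weights + rng.normal(0.0, 2.0, 25)
C_test = rng.uniform(0.5, 5.0, size=(25, 3))
auc_test = C_test @ true_weights + rng.normal(0.0, 2.0, 25)

model = LinearRegression().fit(C_train, auc_train)
r2 = r2_score(auc_train, model.predict(C_train))
pred = model.predict(C_test)

pe = (pred - auc_test) / auc_test * 100.0               # percent prediction errors
rmse = np.sqrt(np.mean((pred - auc_test) ** 2))
print(f"R^2 = {r2:.3f} (screening criterion: > 0.90)")
print(f"MPE = {pe.mean():.1f}%, MAPE = {np.abs(pe).mean():.1f}%, RMSE = {rmse:.2f}")
```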

  2. Accurate prediction of stability changes in protein mutants by combining machine learning with structure based computational mutagenesis.

    Science.gov (United States)

    Masso, Majid; Vaisman, Iosif I

    2008-09-15

    Accurate predictive models for the impact of single amino acid substitutions on protein stability provide insight into protein structure and function. Such models are also valuable for the design and engineering of new proteins. Previously described methods have utilized properties of protein sequence or structure to predict the free energy change of mutants due to thermal (ΔΔG) and denaturant (ΔΔG(H2O)) denaturations, as well as mutant thermal stability (ΔT(m)), through the application of either computational energy-based approaches or machine learning techniques. However, the accuracy achieved by applying these methods separately is frequently far from optimal. We detail a computational mutagenesis technique based on a four-body, knowledge-based, statistical contact potential. For any mutation due to a single amino acid replacement in a protein, the method provides an empirical normalized measure of the ensuing environmental perturbation occurring at every residue position. A feature vector is generated for the mutant by considering perturbations at the mutated position and its ordered six nearest neighbors in the 3-dimensional (3D) protein structure. These predictors of stability change are evaluated by applying machine learning tools to large training sets of mutants derived from diverse proteins that have been experimentally studied and described. Predictive models based on our combined approach are either comparable to, or in many cases significantly outperform, previously published results. A web server with supporting documentation is available at http://proteins.gmu.edu/automute.
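
    A schematic of the combined pipeline described above, assuming the four-body statistical contact potential has already produced residue-level perturbation scores: the feature vector is the perturbation at the mutated position plus its six nearest structural neighbours, fed to an off-the-shelf learner. The perturbation values and ΔΔG labels below are synthetic, and a random forest stands in for whichever machine learning tool is preferred; the contact potential itself is not implemented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for the computational mutagenesis output: for each mutant, seven
# environmental-perturbation scores (mutated residue + its 6 nearest neighbours in 3D)
n_mutants = 500
features = rng.normal(0.0, 1.0, size=(n_mutants, 7))
# Synthetic ddG labels loosely tied to the perturbation at the mutated site
ddg = 1.5 * features[:, 0] + 0.3 * features[:, 1:].sum(axis=1) + rng.normal(0.0, 0.5, n_mutants)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, features, ddg, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```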

  3. A NEW CLINICAL PREDICTION CRITERION ACCURATELY DETERMINES A SUBSET OF PATIENTS WITH BILATERAL PRIMARY ALDOSTERONISM BEFORE ADRENAL VENOUS SAMPLING.

    Science.gov (United States)

    Kocjan, Tomaz; Janez, Andrej; Stankovic, Milenko; Vidmar, Gaj; Jensterle, Mojca

    2016-05-01

    Adrenal venous sampling (AVS) is the only available method to distinguish bilateral from unilateral primary aldosteronism (PA). AVS has several drawbacks, so it is reasonable to avoid this procedure when the results would not affect clinical management. Our objective was to identify a clinical criterion that can reliably predict nonlateralized AVS as a surrogate for bilateral PA that is not treated surgically. A retrospective diagnostic cross-sectional study conducted at the Slovenian national endocrine referral center included 69 consecutive patients (mean age 56 ± 8 years, 21 females) with PA who underwent AVS. PA was confirmed with the saline infusion test (SIT). AVS was performed sequentially during continuous adrenocorticotrophic hormone (ACTH) infusion. The main outcome measures were variables associated with nonlateralized AVS, used to derive a clinical prediction rule. Sixty-seven (97%) patients had a successful AVS and were included in the statistical analysis. A total of 39 (58%) patients had nonlateralized AVS. A combined criterion of serum potassium ≥3.5 mmol/L and a post-SIT aldosterone level below a defined cut-off predicted nonlateralized AVS. The best overall classification accuracy (50/67 = 75%) was achieved using the post-SIT aldosterone level criterion. Our clinical prediction criterion appears to accurately determine a subset of patients with bilateral PA who could avoid unnecessary AVS and immediately commence medical treatment.

  4. Discharge destination following lower limb fracture: development of a prediction model to assist with decision making.

    Science.gov (United States)

    Kimmel, Lara A; Holland, Anne E; Edwards, Elton R; Cameron, Peter A; De Steiger, Richard; Page, Richard S; Gabbe, Belinda

    2012-06-01

    Accurate prediction, made on admission to hospital, of the likelihood of discharge to inpatient rehabilitation following lower limb fracture may assist patient discharge planning and decrease the burden on the hospital system caused by delays in decision making. The aim was to develop a prognostic model for discharge to inpatient rehabilitation. Isolated lower extremity fracture cases (excluding fractured neck of femur), captured by the Victorian Orthopaedic Trauma Outcomes Registry (VOTOR), were extracted for analysis. A training data set was created for model development and a validation data set for evaluation. A multivariable logistic regression model was developed based on patient and injury characteristics. Models were assessed using measures of discrimination (C-statistic) and calibration (Hosmer-Lemeshow (H-L) statistic). A total of 1429 patients met the inclusion criteria and were randomly split into training and test data sets. Increasing age, more proximal fracture type, compensation or private funding source for the admission, metropolitan location of residence, not working prior to injury, and having a self-reported pre-injury disability were included in the final prediction model. The C-statistic for the model was 0.92 (95% confidence interval (CI) 0.88, 0.95) with an H-L statistic of χ(2)=11.62, p=0.17. For the test data set, the C-statistic was 0.86 (95% CI 0.83, 0.90) with an H-L statistic of χ(2)=37.98. A prediction model for discharge destination following lower limb fracture was thus developed with excellent discrimination, although calibration was reduced in the test data set. This model requires prospective testing but could form an integral part of decision making regarding discharge disposition, facilitating timely and accurate referral to rehabilitation and optimising resource allocation. Copyright © 2011 Elsevier Ltd. All rights reserved.
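
    A sketch of the model-assessment steps named above: fit a logistic regression on admission characteristics, then report the C-statistic (area under the ROC curve) and a decile-based Hosmer-Lemeshow statistic. The predictors and outcomes are simulated, and the variable coding of the registry study is not reproduced.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Simulated admission data: age, proximal fracture (0/1), compensable funding (0/1)
n = 1000
X = np.column_stack([rng.normal(55, 18, n), rng.integers(0, 2, n), rng.integers(0, 2, n)])
logit = -6.0 + 0.08 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))  # 1 = discharged to inpatient rehabilitation

model = LogisticRegression(max_iter=1000).fit(X, y)
p = model.predict_proba(X)[:, 1]
print(f"C-statistic: {roc_auc_score(y, p):.2f}")

# Hosmer-Lemeshow goodness of fit over deciles of predicted risk
order = np.argsort(p)
groups = np.array_split(order, 10)
hl = sum((y[g].sum() - p[g].sum()) ** 2 / (p[g].sum() * (1.0 - p[g].mean())) for g in groups)
dof = len(groups) - 2
print(f"H-L chi2 = {hl:.2f}, p = {1.0 - chi2.cdf(hl, dof):.2f}")
```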

  5. Non-isothermal kinetics model to predict accurate phase transformation and hardness of 22MnB5 boron steel

    Energy Technology Data Exchange (ETDEWEB)

    Bok, H.-H.; Kim, S.N.; Suh, D.W. [Graduate Institute of Ferrous Technology, POSTECH, San 31, Hyoja-dong, Nam-gu, Pohang, Gyeongsangbuk-do (Korea, Republic of); Barlat, F., E-mail: f.barlat@postech.ac.kr [Graduate Institute of Ferrous Technology, POSTECH, San 31, Hyoja-dong, Nam-gu, Pohang, Gyeongsangbuk-do (Korea, Republic of); Lee, M.-G., E-mail: myounglee@korea.ac.kr [Department of Materials Science and Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul (Korea, Republic of)

    2015-02-25

    A non-isothermal phase transformation kinetics model obtained by modifying the well-known JMAK approach is proposed for application to a low carbon boron steel (22MnB5) sheet. In the modified kinetics model, the parameters are functions of both temperature and cooling rate, and can be identified by a numerical optimization method. Moreover, in this approach the transformation start and finish temperatures are variable instead of the constants that depend on chemical composition. These variable reference temperatures are determined from the measured CCT diagram using dilatation experiments. The kinetics model developed in this work captures the complex transformation behavior of the boron steel sheet sample accurately. In particular, the predicted hardness and phase fractions in the specimens subjected to a wide range of cooling rates were validated by experiments.
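
    For reference, the isothermal JMAK form being modified is X(t) = 1 − exp(−k·t^n). The sketch below shows one way a modified, non-isothermal evaluation can be structured: k and n are looked up as functions of temperature and cooling rate, and the transformed fraction is advanced over small time steps using the usual additivity (fictitious-time) rule while the temperature falls between start and finish values. The parameter functions and temperatures are placeholders, not the fitted values of the 22MnB5 study.

```python
import numpy as np


def k_of(T, cooling_rate):
    """Placeholder rate parameter k(T, dT/dt); the study identifies this numerically."""
    return 1e-3 * np.exp(-(T - 600.0) ** 2 / 2e4) * (1.0 + 0.01 * cooling_rate)


def n_of(T, cooling_rate):
    """Placeholder Avrami exponent n(T, dT/dt)."""
    return 2.0 + 0.005 * cooling_rate


def transformed_fraction(T_start, T_finish, cooling_rate, dt=0.1):
    """Advance X(t) = 1 - exp(-k * t**n) step by step while T falls from T_start to T_finish."""
    T, X = T_start, 0.0
    while T > T_finish and X < 0.999:
        k, n = k_of(T, cooling_rate), n_of(T, cooling_rate)
        # fictitious time consistent with the current X for the current (k, n), then one step
        t_fict = (-np.log(1.0 - X) / k) ** (1.0 / n) if X > 0.0 else 0.0
        X = 1.0 - np.exp(-k * (t_fict + dt) ** n)
        T -= cooling_rate * dt
    return X


for rate in (5.0, 30.0, 100.0):  # cooling rates in degC/s
    print(rate, round(transformed_fraction(700.0, 400.0, rate), 3))
```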

  6. Accurate density functional prediction of molecular electron affinity with the scaling corrected Kohn–Sham frontier orbital energies

    Science.gov (United States)

    Zhang, DaDi; Yang, Xiaolong; Zheng, Xiao; Yang, Weitao

    2018-04-01

    Electron affinity (EA) is the energy released when an additional electron is attached to an atom or a molecule. EA is a fundamental thermochemical property, and it is closely pertinent to other important properties such as electronegativity and hardness. However, accurate prediction of EA is difficult with density functional theory methods. The somewhat large error of the calculated EAs originates mainly from the intrinsic delocalisation error associated with the approximate exchange-correlation functional. In this work, we employ a previously developed non-empirical global scaling correction approach, which explicitly imposes the Perdew-Parr-Levy-Balduz condition to the approximate functional, and achieve a substantially improved accuracy for the calculated EAs. In our approach, the EA is given by the scaling corrected Kohn-Sham lowest unoccupied molecular orbital energy of the neutral molecule, without the need to carry out the self-consistent-field calculation for the anion.
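
    In symbols, the relation used above is the following: the ground-state energy is made piecewise linear in fractional electron number (the Perdew-Parr-Levy-Balduz condition), after which the electron affinity follows directly from the corrected LUMO energy of the neutral molecule. This is a schematic restatement of the abstract, not the paper's working equations.

```latex
E(N+\delta) = (1-\delta)\,E(N) + \delta\,E(N+1), \qquad 0 \le \delta \le 1
\quad\Longrightarrow\quad
\mathrm{EA} = E(N) - E(N+1) \approx -\,\varepsilon_{\mathrm{LUMO}}^{\mathrm{corrected}}(N).
```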

  7. Effect of computational grid on accurate prediction of a wind turbine rotor using delayed detached-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bangga, Galih; Weihing, Pascal; Lutz, Thorsten; Krämer, Ewald [University of Stuttgart, Stuttgart (Germany)

    2017-05-15

    The present study focuses on the impact of the computational grid on accurate prediction of the MEXICO rotor under stalled conditions. Two different blade mesh topologies, O and C-H meshes, and two different grid resolutions are tested for several time step sizes. The simulations are carried out using delayed detached-eddy simulation (DDES) with two eddy-viscosity RANS turbulence models, namely Spalart-Allmaras (SA) and Menter shear stress transport (SST) k-ω. A high-order spatial discretization, the WENO (weighted essentially non-oscillatory) scheme, is used in these computations. The results are validated against measurement data with regard to the sectional loads and the chordwise pressure distributions. The C-H mesh topology is observed to give the best results when employing the SST k-ω turbulence model, but its computational cost is higher because the grid contains a wake block that increases the number of cells.

  8. Development of a method to accurately calculate the Dpb and quickly predict the strength of a chemical bond

    International Nuclear Information System (INIS)

    Du, Xia; Zhao, Dong-Xia; Yang, Zhong-Zhi

    2013-01-01

    Highlights: A method to characterize and measure bond strength from a new perspective is proposed. The Dpb of a series of various bonds is calculated to justify the approach. A quite good linear relationship between Dpb and bond length is shown for a series of various bonds. The prediction of the strengths of C–H and N–H bonds for base pairs in DNA is taken as a practical application of the method. - Abstract: A new approach to characterize and measure bond strength has been developed. First, we propose a method to accurately calculate the potential acting on an electron in a molecule (PAEM) at the saddle point along a chemical bond in situ, denoted by Dpb. Then, a direct method to quickly evaluate bond strength is established. We choose some familiar molecules as models for benchmarking this method. As a practical application, the Dpb of base pairs in DNA along C–H and N–H bonds are obtained for the first time. All results show that C7–H of A–T and C8–H of G–C are the relatively weak bonds that are the injured positions in DNA damage. The significance of this work is twofold: (i) a method is developed to calculate Dpb of various sizable molecules in situ quickly and accurately; (ii) this work demonstrates the feasibility of quickly predicting bond strength in macromolecules.

  9. A novel fibrosis index comprising a non-cholesterol sterol accurately predicts HCV-related liver cirrhosis.

    Directory of Open Access Journals (Sweden)

    Magdalena Ydreborg

    Diagnosis of liver cirrhosis is essential in the management of chronic hepatitis C virus (HCV) infection. Liver biopsy is invasive and thus entails a risk of complications as well as a potential risk of sampling error. Therefore, non-invasive diagnostic tools are preferable. The aim of the present study was to create a model for accurate prediction of liver cirrhosis based on patient characteristics and biomarkers of liver fibrosis, including a panel of non-cholesterol sterols reflecting cholesterol synthesis, absorption, and secretion. We evaluated variables with potential predictive significance for liver fibrosis in 278 patients originally included in a multicenter phase III treatment trial for chronic HCV infection. A stepwise multivariate logistic model selection was performed with liver cirrhosis, defined as Ishak fibrosis stage 5-6, as the outcome variable. A new index, referred to as the Nordic Liver Index (NoLI), was based on the model: Log-odds (predicting cirrhosis) = -12.17 + (age × 0.11) + (BMI (kg/m²) × 0.23) + (D7-lathosterol (μg/100 mg cholesterol) × (-0.013)) + (platelet count (×10⁹/L) × (-0.018)) + (prothrombin-INR × 3.69). The area under the ROC curve (AUROC) for prediction of cirrhosis was 0.91 (95% CI 0.86-0.96). The index was validated in a separate cohort of 83 patients, and the AUROC for this cohort was similar (0.90; 95% CI 0.82-0.98). In conclusion, the new index may complement other methods in diagnosing cirrhosis in patients with chronic HCV infection.
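
    Given the coefficients reported above, turning the index into a predicted probability of cirrhosis is a single logistic transform. The sketch below implements the reported equation with the units stated in the abstract; the example input values are invented for illustration and carry no clinical meaning.

```python
import math


def noli_log_odds(age_years, bmi, d7_lathosterol, platelets, inr):
    """Nordic Liver Index log-odds of cirrhosis, using the coefficients reported above.

    bmi in kg/m^2, D7-lathosterol in ug/100 mg cholesterol, platelet count in 10^9/L.
    """
    return (-12.17 + 0.11 * age_years + 0.23 * bmi
            - 0.013 * d7_lathosterol - 0.018 * platelets + 3.69 * inr)


def noli_probability(**kwargs):
    """Convert the log-odds to a probability of Ishak stage 5-6 fibrosis (cirrhosis)."""
    return 1.0 / (1.0 + math.exp(-noli_log_odds(**kwargs)))


# Illustrative patient (made-up values, not from the study cohorts)
p = noli_probability(age_years=55, bmi=27.0, d7_lathosterol=60.0, platelets=140.0, inr=1.2)
print(f"predicted probability of cirrhosis: {p:.2f}")
```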

  10. A Deep Learning Framework for Robust and Accurate Prediction of ncRNA-Protein Interactions Using Evolutionary Information.

    Science.gov (United States)

    Yi, Hai-Cheng; You, Zhu-Hong; Huang, De-Shuang; Li, Xiao; Jiang, Tong-Hai; Li, Li-Ping

    2018-06-01

    The interactions between non-coding RNAs (ncRNAs) and proteins play an important role in many biological processes, and their biological functions are primarily achieved by binding with a variety of proteins. High-throughput biological techniques can identify protein molecules bound to a specific ncRNA, but they are usually expensive and time consuming. Deep learning provides a powerful solution for computationally predicting RNA-protein interactions. In this work, we propose the RPI-SAN model, which uses a deep-learning stacked auto-encoder network to mine hidden high-level features from RNA and protein sequences and feeds them into a random forest (RF) model to predict ncRNA-binding proteins. Stacked assembling is further used to improve the accuracy of the proposed method. Four benchmark datasets, including RPI2241, RPI488, RPI1807, and NPInter v2.0, were employed for the unbiased evaluation of five prediction tools: RPI-Pred, IPMiner, RPISeq-RF, lncPro, and RPI-SAN. The experimental results show that our RPI-SAN model achieves much better performance than the other methods, with accuracies of 90.77%, 89.7%, 96.1%, and 99.33%, respectively. It is anticipated that RPI-SAN can be used as an effective computational tool for future biomedical research and can accurately predict potential ncRNA-protein interaction pairs, providing reliable guidance for biological research. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  11. Is demography destiny? Application of machine learning techniques to accurately predict population health outcomes from a minimal demographic dataset.

    Directory of Open Access Journals (Sweden)

    Wei Luo

    For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have the up-to-date data on health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and timely updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, in both the states included in the derivation model (median correlation 0.88) and those excluded from the development for use as a completely separated validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development, and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease.

  12. Is demography destiny? Application of machine learning techniques to accurately predict population health outcomes from a minimal demographic dataset.

    Science.gov (United States)

    Luo, Wei; Nguyen, Thin; Nichols, Melanie; Tran, Truyen; Rana, Santu; Gupta, Sunil; Phung, Dinh; Venkatesh, Svetha; Allender, Steve

    2015-01-01

    For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have the up-to-date data on health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and timely updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, in both the states included in the derivation model (median correlation 0.88) and those excluded from the development for use as a completely separated validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development, and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease.
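
    A sketch of the validation logic shared by the two records above: train a regression model on state-level socio-demographic features, hold out a set of states entirely, and check how well predicted prevalence correlates with observed prevalence in the held-out states. The features, outcome, and learner below are placeholders, since the abstract does not specify the exact model.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

# Placeholder panel: 50 states x 6 years, 12 socio-demographic features, one NCD prevalence
n_states, n_years, n_feat = 50, 6, 12
X = rng.normal(size=(n_states * n_years, n_feat))
prevalence = 10.0 + X[:, :4].sum(axis=1) + rng.normal(0.0, 0.5, n_states * n_years)
state_id = np.repeat(np.arange(n_states), n_years)

holdout_states = rng.choice(n_states, size=10, replace=False)  # states never seen in training
test = np.isin(state_id, holdout_states)

model = GradientBoostingRegressor(random_state=0).fit(X[~test], prevalence[~test])
r, _ = pearsonr(model.predict(X[test]), prevalence[test])
print(f"correlation between predicted and observed prevalence in held-out states: {r:.2f}")
```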

  13. Combining first-principles and data modeling for the accurate prediction of the refractive index of organic polymers

    Science.gov (United States)

    Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes

    2018-06-01

    Organic materials with a high index of refraction (RI) are attracting considerable interest due to their potential application in optic and optoelectronic devices. However, most of these applications require an RI value of 1.7 or larger, while typical carbon-based polymers only exhibit values in the range of 1.3-1.5. This paper introduces an efficient computational protocol for the accurate prediction of RI values in polymers to facilitate in silico studies that can guide the discovery and design of next-generation high-RI materials. Our protocol is based on the Lorentz-Lorenz equation and is parametrized by the polarizability and number density values of a given candidate compound. In the proposed scheme, we compute the former using first-principles electronic structure theory and the latter using an approximation based on van der Waals volumes. The critical parameter in the number density approximation is the packing fraction of the bulk polymer, for which we have devised a machine learning model. We demonstrate the performance of the proposed RI protocol by testing its predictions against the experimentally known RI values of 112 optical polymers. Our approach to combine first-principles and data modeling emerges as both a successful and a highly economical path to determining the RI values for a wide range of organic polymers.
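
    The protocol above reduces to the Lorentz-Lorenz relation (n² − 1)/(n² + 2) = (4π/3)·N·α, with the number density N approximated from the van der Waals volume and a machine-learned packing fraction. The sketch solves that relation for n; the packing fraction and molecular properties in the example are rough stand-ins for the first-principles and data-model inputs, chosen only to show the arithmetic.

```python
import math


def refractive_index(polarizability_cm3, vdw_volume_cm3, packing_fraction):
    """Solve the Lorentz-Lorenz relation (n^2 - 1)/(n^2 + 2) = (4*pi/3) * N * alpha for n.

    polarizability_cm3: molecular polarizability volume [cm^3 per molecule]
    vdw_volume_cm3:     van der Waals volume of the repeat unit [cm^3 per molecule]
    packing_fraction:   fraction of space occupied by the molecules (ML-estimated in the paper)
    """
    number_density = packing_fraction / vdw_volume_cm3        # molecules per cm^3
    phi = (4.0 * math.pi / 3.0) * number_density * polarizability_cm3
    return math.sqrt((1.0 + 2.0 * phi) / (1.0 - phi))


# Very rough, polystyrene-like repeat unit; values are approximate and for demonstration only
alpha = 12e-24   # cm^3 (polarizability volume)
v_vdw = 110e-24  # cm^3 (van der Waals volume of one repeat unit)
print(f"n ~ {refractive_index(alpha, v_vdw, packing_fraction=0.6):.2f}")
```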

  14. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory.

    Science.gov (United States)

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S; Shirley, Eric L; Prendergast, David

    2017-03-03

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  15. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    Science.gov (United States)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  16. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    Science.gov (United States)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general purpose high Reynolds free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov—type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows to accurately predict the evolution of the free surface even in the presence of violent breaking waves phenomena, maintaining the interface sharp, without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. Flow field is characterized by the presence of thin water sheets, several energetic breaking waves and plungings. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for the purposes of validation, allowing to

  17. A Real-Time Accurate Model and Its Predictive Fuzzy PID Controller for Pumped Storage Unit via Error Compensation

    Directory of Open Access Journals (Sweden)

    Jianzhong Zhou

    2017-12-01

    Model simulation and control of pumped storage units (PSUs) are essential to improving the dynamic quality of a power station. Only when the PSU model reflects the actual transient process can novel control methods be properly applied in engineering. The contributions of this paper are that (1) a real-time accurate equivalent circuit model (RAECM) of a PSU via error compensation is proposed to reconcile the conflict between real-time online simulation and accuracy under various operating conditions, and (2) an adaptive predictive fuzzy PID controller (APFPID) based on RAECM is put forward to overcome the instability of conventional control under no-load conditions with low water head. All hydraulic factors in the pipeline system are fully considered based on the equivalent lumped-circuit theorem. The pretreatment, which consists of an improved Suter transformation and a BP neural network, and an online simulation method featuring two iterative loops are proposed to improve the solving accuracy for the pump-turbine. Moreover, modified formulas for compensating error are derived with variable-spatial discretization to further improve the accuracy of the real-time simulation. The implicit RadauIIA method is verified to be more suitable for the PSU governing system owing to its wider stability domain. An APFPID controller is then constructed by integrating fuzzy PID with model predictive control. Rolling prediction by RAECM is proposed to replace rolling optimization, with its computational speed guaranteed. Finally, simulations and on-site measurements are compared to demonstrate the trustworthiness of RAECM under various running conditions. Comparative experiments also indicate that the APFPID controller outperforms other controllers in most cases, especially under low-water-head conditions. Satisfying results with RAECM have been achieved in engineering, and it provides a novel model reference for PSU governing systems.

  18. Respiratory variation in peak aortic velocity accurately predicts fluid responsiveness in children undergoing neurosurgery under general anesthesia.

    Science.gov (United States)

    Morparia, Kavita G; Reddy, Srijaya K; Olivieri, Laura J; Spaeder, Michael C; Schuette, Jennifer J

    2018-04-01

    The determination of fluid responsiveness in the critically ill child is of vital importance, all the more so as fluid overload becomes increasingly associated with worse outcomes. Dynamic markers of volume responsiveness have shown some promise in the pediatric population, but more research is needed before they can be adopted for widespread use. Our aim was to investigate the effectiveness of respiratory variation in peak aortic velocity and pulse pressure variation for predicting fluid responsiveness, and to determine their optimal cutoff values. We performed a prospective, observational study at a single tertiary care pediatric center. Twenty-one children with normal cardiorespiratory status undergoing general anesthesia for neurosurgery were enrolled. Respiratory variation in peak aortic velocity (ΔVpeak ao) was measured both before and after volume expansion using a bedside ultrasound device. The pulse pressure variation (PPV) value was obtained from the bedside monitor. All patients received a 10 ml/kg fluid bolus as volume expansion and were qualified as responders if stroke volume increased >15% as a result. The utility of ΔVpeak ao and PPV to predict responsiveness to volume expansion was investigated. A baseline ΔVpeak ao value of greater than or equal to 12.3% best predicted a positive response to volume expansion, with a sensitivity of 77%, specificity of 89%, and area under the receiver operating characteristic curve of 0.90. PPV failed to demonstrate utility in this patient population. Respiratory variation in peak aortic velocity is a promising marker for optimization of perioperative fluid therapy in the pediatric population and can be accurately measured using bedside ultrasonography. More research is needed to evaluate the lack of effectiveness of pulse pressure variation for this purpose.
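
    The index itself is straightforward to compute from a Doppler trace: the maximal and minimal peak aortic velocities over one respiratory cycle are compared with their mean, and the result is checked against the reported 12.3% cut-off. The sketch assumes peak velocities have already been extracted beat by beat; the numbers are illustrative only.

```python
def delta_v_peak_ao(v_peak_max_cm_s, v_peak_min_cm_s):
    """Respiratory variation in peak aortic velocity, in percent:
    100 * (Vmax - Vmin) / ((Vmax + Vmin) / 2)."""
    mean_v = (v_peak_max_cm_s + v_peak_min_cm_s) / 2.0
    return 100.0 * (v_peak_max_cm_s - v_peak_min_cm_s) / mean_v


CUTOFF_PERCENT = 12.3  # threshold reported above (sensitivity 77%, specificity 89%)

# Illustrative maximal/minimal peak velocities over one respiratory cycle (cm/s)
delta = delta_v_peak_ao(v_peak_max_cm_s=118.0, v_peak_min_cm_s=102.0)
verdict = "likely fluid responsive" if delta >= CUTOFF_PERCENT else "unlikely fluid responsive"
print(f"dVpeak_ao = {delta:.1f}% -> {verdict}")
```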

  19. Event Prediction for Modeling Mental Simulation in Naturalistic Decision Making

    National Research Council Canada - National Science Library

    Kunde, Dietmar

    2005-01-01

    ... and increasingly important asymmetric warfare scenarios. Although improvements in computer technology support more and more detailed representations, human decision making is still far from being automated in a realistic way...

  20. Episodic memories predict adaptive value-based decision-making

    Science.gov (United States)

    Murty, Vishnu; FeldmanHall, Oriel; Hunter, Lindsay E.; Phelps, Elizabeth A; Davachi, Lila

    2016-01-01

    Prior research illustrates that memory can guide value-based decision-making. For example, previous work has implicated both working memory and procedural memory (i.e., reinforcement learning) in guiding choice. However, other types of memories, such as episodic memory, may also influence decision-making. Here we test the role of episodic memory, specifically item versus associative memory, in supporting value-based choice. Participants completed a task where they first learned the value associated with trial-unique lotteries. After a short delay, they completed a decision-making task in which they could choose to re-engage with previously encountered lotteries or with new, never-before-seen lotteries. Finally, participants completed a surprise memory test for the lotteries and their associated values. Results indicate that participants chose to re-engage more often with lotteries that had resulted in high versus low rewards. Critically, participants not only formed detailed, associative memories for the reward values coupled with individual lotteries, but also exhibited adaptive decision-making only when they had intact associative memory. We further found that the relationship between adaptive choice and associative memory generalized to more complex, ecologically valid choice behavior, such as social decision-making. However, individuals more strongly encode experiences of social violations, such as being treated unfairly, suggesting a bias in how individuals form associative memories within social contexts. Together, these findings provide an important integration of the episodic memory and decision-making literatures to better understand key mechanisms supporting adaptive behavior. PMID:26999046

  1. Making Predictions about Chemical Reactivity: Assumptions and Heuristics

    Science.gov (United States)

    Maeyer, Jenine; Talanquer, Vicente

    2013-01-01

    Diverse implicit cognitive elements seem to support but also constrain reasoning in different domains. Many of these cognitive constraints can be thought of as either implicit assumptions about the nature of things or reasoning heuristics for decision-making. In this study we applied this framework to investigate college students' understanding of…

  2. Prediction of bread-making quality using size exclusion high ...

    African Journals Online (AJOL)

    Variation in the distribution of protein molecular weight in wheat (Triticum aestivum), influences breadmaking quality of wheat cultivars, resulting in either poor or good bread. The objective of this study was to predict breadmaking quality of wheat cultivars using size exclusion high performance liquid chromatography.

  3. What to make of Mendeleev’s predictions?

    DEFF Research Database (Denmark)

    Wray, K. Brad

    2018-01-01

    I critically examine Stewart’s (Found Chem, 2018) suggestion that we should weigh the various predictions Mendeleev made differently. I argue that in his effort to justify discounting the weight of some of Mendeleev’s failures, Stewart invokes a principle that will, in turn, reduce the weight of ...

  4. Attitude and Behavior Factors Associated with Front-of-Package Label Use with Label Users Making Accurate Product Nutrition Assessments.

    Science.gov (United States)

    Roseman, Mary G; Joung, Hyun-Woo; Littlejohn, Emily I

    2018-05-01

    Front-of-package (FOP) labels are increasing in popularity on retail products. Reductive FOP labels provide nutrient-specific information, whereas evaluative FOP labels summarize nutrient information through icons. A better understanding of consumer behavior regarding FOP labels is beneficial to increasing consumer use of nutrition labeling when making grocery purchasing decisions. We aimed to determine the effectiveness of FOP label formats in aiding consumers in assessing the nutrient density of food products. In addition, we sought to determine relationships between FOP label use and attitude toward healthy eating, diet self-assessment, self-reported health and nutrition knowledge, and label and shopping behaviors. A between-subjects experimental design was employed. Participants were randomly assigned to one of four label conditions: Facts Up Front, Facts Up Front Extended, a binary symbol, and a no-label control. One hundred sixty-one US primary grocery shoppers, aged 18 to 69 years, took part and were randomly invited to the online study. Participants in each of the four label condition groups viewed three product categories (cereal, dairy, and snacks) with corresponding questions. Outcomes were adults' nutrition assessment of food products based on the different FOP label formats, along with label use and attitude toward healthy eating, diet self-assessment, self-reported health and nutrition knowledge, and label and shopping behaviors. Data analyses included descriptive statistics, χ² tests, and logistic regression. Significance was set at α=.05. Participants selected the more nutrient-dense product in the snack food category when it contained an FOP label. Subjective health and nutrition knowledge and frequency of selecting food for healthful reasons were associated with FOP label use (P<0.01 and P<0.05, respectively). Both the Facts Up Front (reductive) and binary (evaluative) FOP labels appear effective for nutrition assessment of snack products compared with no label. Specific...

  5. Economic decision making and the application of nonparametric prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2008-01-01

    Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.

  6. A rapid and accurate approach for prediction of interactomes from co-elution data (PrInCE).

    Science.gov (United States)

    Stacey, R Greg; Skinnider, Michael A; Scott, Nichollas E; Foster, Leonard J

    2017-10-23

    An organism's protein interactome, or complete network of protein-protein interactions, defines the protein complexes that drive cellular processes. Techniques for studying protein complexes have traditionally applied targeted strategies such as yeast two-hybrid or affinity purification-mass spectrometry to assess protein interactions. However, given the vast number of protein complexes, more scalable methods are necessary to accelerate interaction discovery and to construct whole interactomes. We recently developed a complementary technique based on the use of protein correlation profiling (PCP) and stable isotope labeling in amino acids in cell culture (SILAC) to assess chromatographic co-elution as evidence of interacting proteins. Importantly, PCP-SILAC is also capable of measuring protein interactions simultaneously under multiple biological conditions, allowing the detection of treatment-specific changes to an interactome. Given the uniqueness and high dimensionality of co-elution data, new tools are needed to compare protein elution profiles, control false discovery rates, and construct an accurate interactome. Here we describe a freely available bioinformatics pipeline, PrInCE, for the analysis of co-elution data. PrInCE is a modular, open-source library that is computationally inexpensive, able to use label and label-free data, and capable of detecting tens of thousands of protein-protein interactions. Using a machine learning approach, PrInCE offers greatly reduced run time, more predicted interactions at the same stringency, prediction of protein complexes, and greater ease of use over previous bioinformatics tools for co-elution data. PrInCE is implemented in Matlab (version R2017a). Source code and standalone executable programs for Windows and Mac OSX are available at https://github.com/fosterlab/PrInCE , where usage instructions can be found. An example dataset and output are also provided for testing purposes. PrInCE is the first fast and easy
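
    PrInCE itself is a Matlab pipeline; the hedged Python sketch below only illustrates the underlying idea of scoring co-elution from chromatographic profiles and feeding simple features to a classifier. The Gaussian toy profiles, the two features, and the naive Bayes classifier are illustrative assumptions, not the published implementation.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
fractions = np.arange(50)

def profile(center, width=2.0, noise=0.05):
    """Toy co-elution profile: a Gaussian peak plus noise over 50 fractions."""
    p = np.exp(-0.5 * ((fractions - center) / width) ** 2)
    return p + rng.normal(0, noise, size=fractions.size)

# Positive pairs co-elute (shared peak); negative pairs elute apart.
pairs, labels = [], []
for _ in range(300):
    c = rng.uniform(5, 45)
    pairs.append((profile(c), profile(c + rng.normal(0, 0.5)))); labels.append(1)
    pairs.append((profile(c), profile(rng.uniform(5, 45)))); labels.append(0)

def features(a, b):
    """Two simple features: profile correlation and peak-location distance."""
    return [np.corrcoef(a, b)[0, 1], abs(int(np.argmax(a)) - int(np.argmax(b)))]

X = np.array([features(a, b) for a, b in pairs])
y = np.array(labels)
clf = GaussianNB().fit(X[:400], y[:400])
print("held-out accuracy:", clf.score(X[400:], y[400:]))
```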

  7. Computational prediction of multidisciplinary team decision-making for adjuvant breast cancer drug therapies: a machine learning approach.

    Science.gov (United States)

    Lin, Frank P Y; Pokorny, Adrian; Teng, Christina; Dear, Rachel; Epstein, Richard J

    2016-12-01

    Multidisciplinary team (MDT) meetings are used to optimise expert decision-making about treatment options, but such expertise is not digitally transferable between centres. To help standardise medical decision-making, we developed a machine learning model designed to predict MDT decisions about adjuvant breast cancer treatments. We analysed MDT decisions regarding adjuvant systemic therapy for 1065 breast cancer cases over eight years. Machine learning classifiers with and without bootstrap aggregation were correlated with MDT decisions (recommended, not recommended, or discussable) regarding adjuvant cytotoxic, endocrine and biologic/targeted therapies, then tested for predictability using stratified ten-fold cross-validations. The predictions so derived were duly compared with those based on published (ESMO and NCCN) cancer guidelines. Machine learning more accurately predicted adjuvant chemotherapy MDT decisions than did simple application of guidelines. No differences were found between MDT- vs. ESMO/NCCN- based decisions to prescribe either adjuvant endocrine (97%, p = 0.44/0.74) or biologic/targeted therapies (98%, p = 0.82/0.59). In contrast, significant discrepancies were evident between MDT- and guideline-based decisions to prescribe chemotherapy (87%, p < 0.05), decisions that were better captured by the machine learning models. A machine learning approach based on clinicopathologic characteristics can predict MDT decisions about adjuvant breast cancer drug therapies. The discrepancy between MDT- and guideline-based decisions regarding adjuvant chemotherapy implies that certain non-clinicopathologic criteria, such as patient preference and resource availability, are factored into clinical decision-making by local experts but not captured by guidelines.
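
    A hedged sketch of the general workflow the abstract describes (classification with and without bootstrap aggregation, evaluated by stratified ten-fold cross-validation); the synthetic feature matrix and the scikit-learn estimators are assumptions standing in for the clinicopathologic data and the authors' models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stand-in for clinicopathologic features and a 3-class MDT decision
# (recommended / not recommended / discussable) for 1065 cases.
X, y = make_classification(n_samples=1065, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    # BaggingClassifier's default base learner is also a decision tree,
    # so this is the "with bootstrap aggregation" variant.
    "bagged trees": BaggingClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=cv)
    print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
```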

  8. Trustworthiness and Negative Affect Predict Economic Decision-Making

    OpenAIRE

    Nguyen, Christopher M.; Koenigs, Michael; Yamada, Torricia H.; Teo, Shu Hao; Cavanaugh, Joseph E.; Tranel, Daniel; Denburg, Natalie L.

    2011-01-01

    The Ultimatum Game (UG) is a widely used and well-studied laboratory model of economic decision-making. Here, we studied 129 healthy adults and compared demographic (i.e., age, gender, education), cognitive (i.e., intelligence, attention/working memory, speed, language, visuospatial, memory, executive functions), and personality (i.e., “Big Five”, positive affect, negative affect) variables between those with a “rational” versus an “irrational” response pattern on the UG. Our data indicated t...

  9. Decision-Making Competence Predicts Domain-Specific Risk Attitudes

    Directory of Open Access Journals (Sweden)

    Joshua eWeller

    2015-05-01

    Full Text Available Decision Making Competence (DMC) reflects individual differences in rational responding across several classic behavioral decision-making tasks. Although it has been associated with real-world risk behavior, less is known about the degree to which DMC contributes to specific components of risk attitudes. Utilizing a psychological risk-return framework, we examined the associations between risk attitudes and DMC. Italian community residents (n = 804) completed an online DMC measure, using a subset of the original Adult-DMC battery (A-DMC; Bruine de Bruin, Parker, & Fischhoff, 2007). Participants also completed a self-reported risk attitude measure for three components of risk attitudes (risk-taking, risk perceptions, and expected benefits) across six risk domains. Overall, greater performance on the DMC component scales was inversely, albeit modestly, associated with risk-taking tendencies. Structural equation modeling results revealed that DMC was associated with lower perceived expected benefits for all domains. In contrast, its association with perceived risks was more domain-specific. These analyses also revealed stronger indirect effects for the DMC → expected benefits → risk-taking path than for the DMC → perceived risk → risk-taking path, especially for risk behaviors that may be considered more antisocial in nature. These results suggest that DMC performance differentially impacts specific components of risk attitudes, and may be more strongly related to the evaluation of expected value of the given behavior.

  10. Trustworthiness and Negative Affect Predict Economic Decision-Making.

    Science.gov (United States)

    Nguyen, Christopher M; Koenigs, Michael; Yamada, Torricia H; Teo, Shu Hao; Cavanaugh, Joseph E; Tranel, Daniel; Denburg, Natalie L

    2011-09-01

    The Ultimatum Game (UG) is a widely used and well-studied laboratory model of economic decision-making. Here, we studied 129 healthy adults and compared demographic (i.e., age, gender, education), cognitive (i.e., intelligence, attention/working memory, speed, language, visuospatial, memory, executive functions), and personality (i.e., "Big Five", positive affect, negative affect) variables between those with a "rational" versus an "irrational" response pattern on the UG. Our data indicated that participants with "rational" UG performance (accepting any offer, no matter the fairness) endorsed higher levels of trust, or the belief in the sincerity and good intentions of others, while participants with "irrational" UG performance (rejecting unfair offers) endorsed higher levels of negative affect, such as anger and contempt. These personality variables were the only ones that differentiated the two response patterns-demographic and cognitive factors did not differ between rational and irrational players. The results indicate that the examination of personality and affect is crucial to our understanding of the individual differences that underlie decision-making.

  11. Clarification of Employer’s Continuing Obligation To Make and Maintain an Accurate Record of Each Recordable Injury and Illness. Final rule.

    Science.gov (United States)

    2017-05-03

    Under the Congressional Review Act, Congress has passed, and the President has signed, Public Law 115-21, a resolution of disapproval of OSHA's final rule titled, "Clarification of Employer's Continuing Obligation to Make and Maintain an Accurate Record of each Recordable Injury and Illness." OSHA published the rule, which contained various amendments to OSHA's recordkeeping regulations, on December 19, 2016. The amendments became effective on January 18, 2017. Because Public Law 115-21 invalidates the amendments to OSHA's recordkeeping regulations contained in the rule promulgated on December 19, 2016, OSHA is hereby removing those amendments from the Code of Federal Regulations.

  12. Discovery of a general method of solving the Schrödinger and dirac equations that opens a way to accurately predictive quantum chemistry.

    Science.gov (United States)

    Nakatsuji, Hiroshi

    2012-09-18

    Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve SE and DE for atoms and molecules that included many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter simpler idea led to immediate and surprisingly accurate solution for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. These basis functions are called complement

  13. Do Skilled Elementary Teachers Hold Scientific Conceptions and Can They Accurately Predict the Type and Source of Students' Preconceptions of Electric Circuits?

    Science.gov (United States)

    Lin, Jing-Wen

    2016-01-01

    Holding scientific conceptions and having the ability to accurately predict students' preconceptions are a prerequisite for science teachers to design appropriate constructivist-oriented learning experiences. This study explored the types and sources of students' preconceptions of electric circuits. First, 438 grade 3 (9 years old) students were…

  14. Prediction of collision cross section and retention time for broad scope screening in gradient reversed-phase liquid chromatography-ion mobility-high resolution accurate mass spectrometry

    DEFF Research Database (Denmark)

    Mollerup, Christian Brinch; Mardal, Marie; Dalsgaard, Petur Weihe

    2018-01-01

    Exact mass, retention time (RT), and collision cross section (CCS) are used as identification parameters in liquid chromatography coupled to ion mobility high resolution accurate mass spectrometry (LC-IM-HRMS). Targeted screening analyses are now more flexible and can be expanded for suspect … artificial neural networks (ANNs). Prediction was based on molecular descriptors, 827 RTs, and 357 CCS values from pharmaceuticals, drugs of abuse, and their metabolites. ANN models for the prediction of RT or CCS separately were examined, and the potential to predict both from a single model …
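
    A minimal sketch, assuming synthetic molecular descriptors and a small scikit-learn network, of the single-model idea mentioned above: one ANN trained to predict RT and CCS jointly.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for molecular descriptors (e.g. mass, logP, polar surface area, ...).
X = rng.normal(size=(800, 12))
rt = 2.0 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 800)    # retention time, min
ccs = 150 + 10 * X[:, 0] - 4 * X[:, 2] + rng.normal(0, 2, 800)  # collision cross section, A^2
Y = np.column_stack([rt, ccs])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
mu, sd = Y_tr.mean(axis=0), Y_tr.std(axis=0)

# One network predicting both targets at once, as opposed to separate RT and CCS models.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0))
model.fit(X_tr, (Y_tr - mu) / sd)        # train on standardized targets
pred = model.predict(X_te) * sd + mu     # back to original units
print(f"mean abs error: RT {np.mean(np.abs(pred[:, 0] - Y_te[:, 0])):.2f} min, "
      f"CCS {np.mean(np.abs(pred[:, 1] - Y_te[:, 1])):.1f} A^2")
```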

  15. Towards accurate prediction of unbalance response, oil whirl and oil whip of flexible rotors supported by hydrodynamic bearings

    NARCIS (Netherlands)

    Eling, R.P.T.; te Wierik, M.; van Ostayen, R.A.J.; Rixen, D.J.

    2016-01-01

    Journal bearings are used to support rotors in a wide range of applications. In order to ensure reliable operation, accurate analyses of these rotor-bearing systems are crucial. Coupled analysis of the rotor and the journal bearing is essential in the case that the rotor is flexible. The accuracy of

  16. Total reference air kerma can accurately predict isodose surface volumes in cervix cancer brachytherapy. A multicenter study

    DEFF Research Database (Denmark)

    Nkiwane, Karen S; Andersen, Else; Champoudry, Jerome

    2017-01-01

    PURPOSE: To demonstrate that V60 Gy, V75 Gy, and V85 Gy isodose surface volumes can be accurately estimated from total reference air kerma (TRAK) in cervix cancer MRI-guided brachytherapy (BT). METHODS AND MATERIALS: 60 Gy, 75 Gy, and 85 Gy isodose surface volumes levels were obtained from treatm...

  17. Accurate particle speed prediction by improved particle speed measurement and 3-dimensional particle size and shape characterization technique

    DEFF Research Database (Denmark)

    Cernuschi, Federico; Rothleitner, Christian; Clausen, Sønnik

    2017-01-01

    Accurate particle mass and velocity measurement is needed for interpreting test results in erosion tests of materials and coatings. The impact and damage of a surface is influenced by the kinetic energy of a particle, i.e. particle mass and velocity. Particle mass is usually determined with optic...

  18. Accurate pan-specific prediction of peptide-MHC class II binding affinity with improved binding core identification

    DEFF Research Database (Denmark)

    Andreatta, Massimo; Karosiene, Edita; Rasmussen, Michael

    2015-01-01

    with known binding registers, the new method NetMHCIIpan-3.1 significantly outperformed the earlier 3.0 version. We illustrate the impact of accurate binding core identification for the interpretation of T cell cross-reactivity using tetramer double staining with a CMV epitope and its variants mapped...

  19. The origin of anomalous transport in porous media - is it possible to make a priori predictions?

    Science.gov (United States)

    Bijeljic, Branko; Blunt, Martin

    2013-04-01

    Despite the range of significant applications of flow and solute transport in porous rock, including contaminant migration in subsurface hydrology, geological storage of carbon-dioxide and tracer studies and miscible displacement in oil recovery, even the qualitative behavior in the subsurface is uncertain. The non-Fickian nature of dispersive processes in heterogeneous porous media has been demonstrated experimentally from pore to field scales. However, the exact relationship between structure, velocity field and transport has not been fully understood. Advances in X-ray imaging techniques have made it possible to accurately describe the structure of the pore space, helping to predict flow and anomalous transport behaviour using direct simulation. This is demonstrated by simulating solute transport through 3D images of rock samples, with resolutions of a few microns, representing geological media of increasing pore-scale complexity: a sandpack, a sandstone, and a carbonate. A novel methodology is developed that predicts solute transport at the pore scale by using probability density functions of displacement (propagators) and the probability density function of transit times between image voxels, and relates these to the probability density function of normalized local velocity. A key advantage is that full information on velocity and solute concentration is retained in the models. The methodology includes solving for Stokes flow with OpenFOAM, solving for advective transport with a novel streamline simulation method, and superimposing diffusive transport by the random walk method. It is shown how computed propagators for beadpack, sandstone and carbonate depend on the spread in the velocity distribution. A narrow velocity distribution in the beadpack leads to the least anomalous behaviour where the propagators rapidly become Gaussian; the wider velocity distribution in the sandstone gives rise to a small immobile concentration peak, and a large secondary mobile peak moving
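
    A one-dimensional toy version of the advective-diffusive random-walk step described above; the log-normal velocity distribution and all parameters are illustrative assumptions, not taken from the imaged rock samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy transport step: each particle is advected with a velocity drawn from the
# (normalized) local velocity distribution and takes a random diffusive step, so the
# displacement PDF (propagator) can be accumulated over time.
n_particles, n_steps, dt, D = 20_000, 200, 1.0, 1e-3

# A wide velocity distribution (log-normal) mimics a heterogeneous carbonate;
# a narrow distribution around the mean would mimic the beadpack.
velocity = rng.lognormal(mean=-1.0, sigma=1.2, size=n_particles)

x = np.zeros(n_particles)
for _ in range(n_steps):
    x += velocity * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_particles)

# Propagator = PDF of displacement; a long tail signals anomalous (non-Fickian) transport.
hist, edges = np.histogram(x, bins=60, density=True)
print("mean displacement:", round(float(x.mean()), 2),
      " 95th percentile:", round(float(np.percentile(x, 95)), 2))
```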

  20. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    Science.gov (United States)

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

    In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules, 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The calculated heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using more expensive W1-F12 and W2-F12 methods on amino acids and G3 results on barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b) in concurrence with previous CBH studies, proves that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.

  1. Accurate prediction of the functional significance of single nucleotide polymorphisms and mutations in the ABCA1 gene.

    Directory of Open Access Journals (Sweden)

    Liam R Brunham

    2005-12-01

    Full Text Available The human genome contains an estimated 100,000 to 300,000 DNA variants that alter an amino acid in an encoded protein. However, our ability to predict which of these variants are functionally significant is limited. We used a bioinformatics approach to define the functional significance of genetic variation in the ABCA1 gene, a cholesterol transporter crucial for the metabolism of high density lipoprotein cholesterol. To predict the functional consequence of each coding single nucleotide polymorphism and mutation in this gene, we calculated a substitution position-specific evolutionary conservation score for each variant, which considers site-specific variation among evolutionarily related proteins. To test the bioinformatics predictions experimentally, we evaluated the biochemical consequence of these sequence variants by examining the ability of cell lines stably transfected with the ABCA1 alleles to elicit cholesterol efflux. Our bioinformatics approach correctly predicted the functional impact of greater than 94% of the naturally occurring variants we assessed. The bioinformatics predictions were significantly correlated with the degree of functional impairment of ABCA1 mutations (r2 = 0.62, p = 0.0008). These results have allowed us to define the impact of genetic variation on ABCA1 function and to suggest that the in silico evolutionary approach we used may be a useful tool in general for predicting the effects of DNA variation on gene function. In addition, our data suggest that considering patterns of positive selection, along with patterns of negative selection such as evolutionary conservation, may improve our ability to predict the functional effects of amino acid variation.

  2. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
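
    A hedged, one-dimensional sketch of the hybrid idea: fit a smooth global fingerprint model to sparse overlay measurements, then re-inject the measured residuals so that local overlay errors are not smoothed away. The wafer coordinate, the polynomial model, and all numbers are assumptions for illustration, not the authors' OCM implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D "wafer": true overlay fingerprint = smooth global term + a narrow local bump.
x_dense = np.linspace(-1, 1, 201)
true_fp = 2.0 * x_dense + 1.5 * x_dense**3
true_fp += 0.8 * np.exp(-((x_dense - 0.4) / 0.05) ** 2)   # local overlay error

# Sparse overlay sampling (what the metrology tool actually measures).
idx = np.linspace(0, 200, 21, dtype=int)
meas = true_fp[idx] + rng.normal(0, 0.02, idx.size)

# Computationally up-sampled fingerprint: low-order global polynomial fit.
coef = np.polyfit(x_dense[idx], meas, deg=3)
upsampled = np.polyval(coef, x_dense)

# Hybrid: keep the global model but re-inject measured residuals near the sample sites.
residual = meas - np.polyval(coef, x_dense[idx])
hybrid = upsampled + np.interp(x_dense, x_dense[idx], residual)

for name, fp in [("global model only", upsampled), ("hybrid", hybrid)]:
    print(name, "max abs error:", round(float(np.max(np.abs(fp - true_fp))), 3))
```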

  3. An evolutionary model-based algorithm for accurate phylogenetic breakpoint mapping and subtype prediction in HIV-1.

    Directory of Open Access Journals (Sweden)

    Sergei L Kosakovsky Pond

    2009-11-01

    Full Text Available Genetically diverse pathogens (such as Human Immunodeficiency virus type 1, HIV-1) are frequently stratified into phylogenetically or immunologically defined subtypes for classification purposes. Computational identification of such subtypes is helpful in surveillance, epidemiological analysis and detection of novel variants, e.g., circulating recombinant forms in HIV-1. A number of conceptually and technically different techniques have been proposed for determining the subtype of a query sequence, but there is not a universally optimal approach. We present a model-based phylogenetic method for automatically subtyping an HIV-1 (or other viral or bacterial) sequence, mapping the location of breakpoints and assigning parental sequences in recombinant strains as well as computing confidence levels for the inferred quantities. Our Subtype Classification Using Evolutionary ALgorithms (SCUEAL) procedure is shown to perform very well in a variety of simulation scenarios, runs in parallel when multiple sequences are being screened, and matches or exceeds the performance of existing approaches on typical empirical cases. We applied SCUEAL to all available polymerase (pol) sequences from two large databases, the Stanford Drug Resistance database and the UK HIV Drug Resistance Database. Comparing with subtypes which had previously been assigned revealed that a minor but substantial (approximately 5%) fraction of pure subtype sequences may in fact be within- or inter-subtype recombinants. A free implementation of SCUEAL is provided as a module for the HyPhy package and the Datamonkey web server. Our method is especially useful when an accurate automatic classification of an unknown strain is desired, and is positioned to complement and extend faster but less accurate methods. Given the increasingly frequent use of HIV subtype information in studies focusing on the effect of subtype on treatment, clinical outcome, pathogenicity and vaccine design, the importance

  4. MFPred: Rapid and accurate prediction of protein-peptide recognition multispecificity using self-consistent mean field theory.

    Directory of Open Access Journals (Sweden)

    Aliza B Rubenstein

    2017-06-01

    Full Text Available Multispecificity-the ability of a single receptor protein molecule to interact with multiple substrates-is a hallmark of molecular recognition at protein-protein and protein-peptide interfaces, including enzyme-substrate complexes. The ability to perform structure-based prediction of multispecificity would aid in the identification of novel enzyme substrates, protein interaction partners, and enable design of novel enzymes targeted towards alternative substrates. The relatively slow speed of current biophysical, structure-based methods limits their use for prediction and, especially, design of multispecificity. Here, we develop a rapid, flexible-backbone self-consistent mean field theory-based technique, MFPred, for multispecificity modeling at protein-peptide interfaces. We benchmark our method by predicting experimentally determined peptide specificity profiles for a range of receptors: protease and kinase enzymes, and protein recognition modules including SH2, SH3, MHC Class I and PDZ domains. We observe robust recapitulation of known specificities for all receptor-peptide complexes, and comparison with other methods shows that MFPred results in equivalent or better prediction accuracy with a ~10-1000-fold decrease in computational expense. We find that modeling bound peptide backbone flexibility is key to the observed accuracy of the method. We used MFPred for predicting with high accuracy the impact of receptor-side mutations on experimentally determined multispecificity of a protease enzyme. Our approach should enable the design of a wide range of altered receptor proteins with programmed multispecificities.

  5. Accurate Traffic Flow Prediction in Heterogeneous Vehicular Networks in an Intelligent Transport System Using a Supervised Non-Parametric Classifier

    Directory of Open Access Journals (Sweden)

    Hesham El-Sayed

    2018-05-01

    Full Text Available Heterogeneous vehicular networks (HETVNETs) evolve from vehicular ad hoc networks (VANETs), which allow vehicles to always be connected so as to obtain safety services within intelligent transportation systems (ITSs). The services and data provided by HETVNETs should be neither interrupted nor delayed. Therefore, Quality of Service (QoS) improvement of HETVNETs is one of the topics attracting the attention of researchers and the manufacturing community. Several methodologies and frameworks have been devised by researchers to address QoS-prediction service issues. In this paper, to improve QoS, we evaluate various traffic characteristics of HETVNETs and propose a new supervised learning model to capture knowledge on all possible traffic patterns. This model is a refinement of support vector machine (SVM) kernels with a radial basis function (RBF). The proposed model produces better results than SVMs, and outperforms other prediction methods used in a traffic context, as it has lower computational complexity and higher prediction accuracy.
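
    A minimal sketch of the modelling step described above, assuming synthetic traffic features and scikit-learn's RBF-kernel support vector classifier in place of the authors' refined model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Stand-in traffic features (e.g. density, mean speed, occupancy, time of day)
# and a binary congestion / traffic-pattern label.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=0)

# RBF-kernel SVM with feature standardisation, evaluated by 10-fold cross-validation.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
print("10-fold accuracy:", round(float(cross_val_score(model, X, y, cv=10).mean()), 3))
```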

  6. Microdosing of a Carbon-14 Labeled Protein in Healthy Volunteers Accurately Predicts Its Pharmacokinetics at Therapeutic Dosages

    NARCIS (Netherlands)

    Vlaming, M.L.; Duijn, E. van; Dillingh, M.R.; Brands, R.; Windhorst, A.D.; Hendrikse, N.H.; Bosgra, S.; Burggraaf, J.; Koning, M.C. de; Fidder, A.; Mocking, J.A.; Sandman, H.; Ligt, R.A. de; Fabriek, B.O.; Pasman, W.J.; Seinen, W.; Alves, T.; Carrondo, M.; Peixoto, C.; Peeters, P.A.; Vaes, W.H.

    2015-01-01

    Preclinical development of new biological entities (NBEs), such as human protein therapeutics, requires considerable expenditure of time and costs. Poor prediction of pharmacokinetics in humans further reduces net efficiency. In this study, we show for the first time that pharmacokinetic data of

  7. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes

    International Nuclear Information System (INIS)

    Keshavarz, Mohammad Hossein; Gharagheizi, Farhad; Shokrolahi, Arash; Zakinejad, Sajjad

    2012-01-01

    Highlights: ► A novel method is introduced for desk calculation of the toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results for 58 compounds are more reliable than those predicted by the QSTR method. ► The present method gives good predictions for a further 324 benzoic acid compounds. - Abstract: Most benzoic acid derivatives are toxic, which may cause serious public health and environmental problems. Two novel, simple, and reliable models are introduced for desk calculation of the oral LD50 toxicity of benzoic acid compounds in mice, with as much reliance on their answers as one could attach to more complex outputs. They require only elemental composition and molecular fragments, without using any computer codes. The first model is based on only the number of carbon and hydrogen atoms, and can be improved by several molecular fragments in the second model. For 57 benzoic compounds, where the computed results of quantitative structure–toxicity relationship (QSTR) were recently reported, the predicted results of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model.
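
    The published coefficients are not reproduced here; the sketch below only illustrates the structure of such a "desk calculation" (a base estimate from carbon and hydrogen counts, refined by fragment corrections), with invented placeholder coefficients.

```python
# Toy illustration of a desk-calculation model of the form the abstract describes:
# a toxicity estimate from elemental composition alone, refined by fragment counts.
# All coefficients below are invented placeholders, not the published model.
def log_ld50_simple(n_carbon, n_hydrogen, a=1.2, b=0.05, c=0.03):
    """Base estimate from carbon and hydrogen counts only."""
    return a + b * n_carbon + c * n_hydrogen

def log_ld50_refined(n_carbon, n_hydrogen, n_nitro=0, n_hydroxyl=0):
    """Fragment corrections (e.g. nitro or hydroxyl groups) enter as additive terms."""
    return log_ld50_simple(n_carbon, n_hydrogen) - 0.2 * n_nitro + 0.1 * n_hydroxyl

# Benzoic acid (C7H6O2) with the simple model; salicylic acid (one hydroxyl) refined.
print(round(log_ld50_simple(7, 6), 2), round(log_ld50_refined(7, 6, n_hydroxyl=1), 2))
```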

  8. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction II: Nonplanar Molecules.

    Science.gov (United States)

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-11-14

    The crystal structure prediction (CSP) of a given compound from its molecular diagram is a fundamental challenge in computational chemistry with implications in relevant technological fields. A key component of CSP is the method to calculate the lattice energy of a crystal, which allows the ranking of candidate structures. This work is the second part of our investigation to assess the potential of the exchange-hole dipole moment (XDM) dispersion model for crystal structure prediction. In this article, we study the relatively large, nonplanar, mostly flexible molecules in the first five blind tests held by the Cambridge Crystallographic Data Centre. Four of the seven experimental structures are predicted as the energy minimum, and thermal effects are demonstrated to have a large impact on the ranking of at least another compound. As in the first part of this series, delocalization error affects the results for a single crystal (compound X), in this case by detrimentally overstabilizing the π-conjugated conformation of the monomer. Overall, B86bPBE-XDM correctly predicts 16 of the 21 compounds in the five blind tests, a result similar to the one obtained using the best CSP method available to date (dispersion-corrected PW91 by Neumann et al.). Perhaps more importantly, the systems for which B86bPBE-XDM fails to predict the experimental structure as the energy minimum are mostly the same as with Neumann's method, which suggests that similar difficulties (absence of vibrational free energy corrections, delocalization error,...) are not limited to B86bPBE-XDM but affect GGA-based DFT-methods in general. Our work confirms B86bPBE-XDM as an excellent option for crystal energy ranking in CSP and offers a guide to identify crystals (organic salts, conjugated flexible systems) where difficulties may appear.

  9. East London Modified-Broset as Decision-Making Tool to Predict Seclusion in Psychiatric Intensive Care Units

    Directory of Open Access Journals (Sweden)

    Felice Loi

    2017-10-01

    Full Text Available Seclusion is a last resort intervention for management of aggressive behavior in psychiatric settings. There is no current objective and practical decision-making instrument for seclusion use on psychiatric wards. Our aim was to test the predictive and discriminatory characteristics of the East London Modified-Broset (ELMB), to delineate its decision-making profile for seclusion of adult psychiatric patients, and second to benchmark it against the psychometric properties of the Broset Violence Checklist (BVC). ELMB, an 8-item modified version of the 6-item BVC, was retrospectively employed to evaluate the seclusion decision-making process on two Psychiatric Intensive Care Units (patients n = 201; incidents n = 2,187). Data analyses were carried out using multivariate regression and Receiver Operating Characteristic (ROC) curves. Predictors of seclusion were: physical violence toward staff/patients OR = 24.2; non-compliance with PRN (pro re nata) medications OR = 9.8; and damage to hospital property OR = 2.9. ROC analyses indicated that ELMB was significantly more accurate than the BVC, with higher sensitivity, specificity, and positive likelihood ratio. Results were similar across gender. The ELMB is a sensitive and specific instrument that can be used to guide the decision-making process when implementing seclusion.

  10. East London Modified-Broset as Decision-Making Tool to Predict Seclusion in Psychiatric Intensive Care Units.

    Science.gov (United States)

    Loi, Felice; Marlowe, Karl

    2017-01-01

    Seclusion is a last resort intervention for management of aggressive behavior in psychiatric settings. There is no current objective and practical decision-making instrument for seclusion use on psychiatric wards. Our aim was to test the predictive and discriminatory characteristics of the East London Modified-Broset (ELMB), to delineate its decision-making profile for seclusion of adult psychiatric patients, and second to benchmark it against the psychometric properties of the Broset Violence Checklist (BVC). ELMB, an 8-item modified version of the 6-item BVC, was retrospectively employed to evaluate the seclusion decision-making process on two Psychiatric Intensive Care Units (patients n = 201; incidents n = 2,187). Data analyses were carried out using multivariate regression and Receiver Operating Characteristic (ROC) curves. Predictors of seclusion were: physical violence toward staff/patients OR = 24.2; non-compliance with PRN (pro re nata) medications OR = 9.8; and damage to hospital property OR = 2.9. ROC analyses indicated that ELMB was significantly more accurate than the BVC, with higher sensitivity, specificity, and positive likelihood ratio. Results were similar across gender. The ELMB is a sensitive and specific instrument that can be used to guide the decision-making process when implementing seclusion.
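
    A hedged sketch of the style of analysis reported (logistic regression odds ratios plus ROC analysis), using synthetic checklist ratings and scikit-learn; the simulated effect sizes are placeholders, not the published odds ratios.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2187  # incidents

# Binary checklist items standing in for ELMB ratings (violence, non-compliance
# with PRN medication, property damage, and five further items).
X = rng.integers(0, 2, size=(n, 8))
logit = -3 + 3.2 * X[:, 0] + 2.3 * X[:, 1] + 1.1 * X[:, 2]    # toy "true" effects
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)    # secluded or not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Exponentiated coefficients are the per-item odds ratios; AUC summarises discrimination.
print("odds ratios per item:", np.exp(model.coef_[0]).round(1))
print("ROC AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```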

  11. Prediction of Positions of Active Compounds Makes It Possible To Increase Activity in Fragment-Based Drug Development

    Directory of Open Access Journals (Sweden)

    Yoshifumi Fukunishi

    2011-05-01

    Full Text Available We have developed a computational method that predicts the positions of active compounds, making it possible to increase activity as a fragment evolution strategy. We refer to these positions as active positions. When an active fragment compound is found, the following lead generation process is performed, primarily to increase activity. In the current method, to predict the location of the active position, hydrogen atoms are replaced by small side chains, generating virtual compounds. These virtual compounds are docked to a target protein, and the docking scores (affinities) are examined. The hydrogen atom that gives the virtual compound with good affinity should correspond to the active position, and it should be replaced to generate a lead compound. This method was found to work well, with the prediction of the active position being 2 times more efficient than random synthesis. In the current study, 15 examples of lead generation were examined. The probability of finding active positions among all hydrogen atoms was 26%, and the current method accurately predicted 60% of the active positions.

  12. The predictive accuracy of PREDICT: a personalized decision-making tool for Southeast Asian women with breast cancer.

    Science.gov (United States)

    Wong, Hoong-Seam; Subramaniam, Shridevi; Alias, Zarifah; Taib, Nur Aishah; Ho, Gwo-Fuang; Ng, Char-Hong; Yip, Cheng-Har; Verkooijen, Helena M; Hartman, Mikael; Bhoo-Pathy, Nirmala

    2015-02-01

    Web-based prognostication tools may provide a simple and economically feasible option to aid prognostication and selection of chemotherapy in early breast cancers. We validated PREDICT, a free online breast cancer prognostication and treatment benefit tool, in a resource-limited setting. All 1480 patients who underwent complete surgical treatment for stages I to III breast cancer from 1998 to 2006 were identified from the prospective breast cancer registry of University Malaya Medical Centre, Kuala Lumpur, Malaysia. Calibration was evaluated by comparing the model-predicted overall survival (OS) with patients' actual OS. Model discrimination was tested using receiver-operating characteristic (ROC) analysis. Median age at diagnosis was 50 years. The median tumor size at presentation was 3 cm and 54% of patients had lymph node-negative disease. About 55% of women had estrogen receptor-positive breast cancer. Overall, the model-predicted 5 and 10-year OS was 86.3% and 77.5%, respectively, whereas the observed 5 and 10-year OS was 87.6% (difference: -1.3%) and 74.2% (difference: 3.3%), respectively; P values for goodness-of-fit test were 0.18 and 0.12, respectively. The program was accurate in most subgroups of patients, but significantly overestimated survival in certain age subgroups. The model showed good discrimination; areas under the ROC curve were 0.78 (95% confidence interval [CI]: 0.74-0.81) and 0.73 (95% CI: 0.68-0.78) for 5 and 10-year OS, respectively. Based on its accurate performance in this study, PREDICT may be clinically useful in prognosticating women with breast cancer and personalizing breast cancer treatment in resource-limited settings.

  13. Can Vrancea earthquakes be accurately predicted from unusual bio-system behavior and seismic-electromagnetic records?

    International Nuclear Information System (INIS)

    Enescu, D.; Chitaru, C.; Enescu, B.D.

    1999-01-01

    The relevance of bio-seismic research for the short-term prediction of strong Vrancea earthquakes is underscored. Unusual animal behavior before and during Vrancea earthquakes is described and illustrated in the individual case of the major earthquake of March 4, 1977. Several hypotheses to account for the uncommon behavior of bio-systems in relation to earthquakes in general and strong Vrancea earthquakes in particular are discussed in the second section. It is noted that promising preliminary results concerning the identification of seismic-electromagnetic precursor signals have been obtained in the Vrancea seismogenic area using special, highly sensitive equipment. The need to correlate bio-seismic and seismic-electromagnetic research is evident. Further investigations are suggested and urgent steps are proposed in order to achieve a successful short-term prediction of strong Vrancea earthquakes. (authors)

  14. Unsteady Reynolds averaged Navier-Stokes: toward accurate predictions in fuel-bundles and T-junctions

    International Nuclear Information System (INIS)

    Merzari, E.; Ninokata, H.; Baglietto, E.

    2008-01-01

    Traditional steady-state simulation and turbulence modelling are not always reliable. Even in simple flows, the results can be inaccurate when particular conditions occur; examples are buoyancy, flow oscillations, and turbulent mixing. Often, unsteady simulations are necessary, but they tend to be computationally unaffordable. The Unsteady Reynolds Averaged Navier-Stokes (URANS) approach holds promise to be less computationally expensive than Large Eddy Simulation (LES) or Direct Numerical Simulation (DNS), while reaching a considerable degree of accuracy. Moreover, URANS methodologies do not need complex boundary formulations for the inlet and the outlet like LES or DNS. The test cases for this methodology are fuel bundles and T-junctions. Tight fuel rod bundles present large-scale coherent structures that cannot be taken into account by a simple steady-state simulation. T-junctions where a hot fluid and a cold fluid mix present temperature fluctuations and therefore thermal fatigue. For both cases the capacity of the methodology to reproduce the flow field is assessed, and URANS is found to hold promise to become the industrial standard in nuclear engineering applications that do not involve buoyancy. The codes employed are STAR-CD 3.26 and 4.06. (author)

  15. Accuration of Time Series and Spatial Interpolation Method for Prediction of Precipitation Distribution on the Geographical Information System

    Science.gov (United States)

    Prasetyo, S. Y. J.; Hartomo, K. D.

    2018-01-01

    The Spatial Plan of the Province of Central Java 2009-2029 identifies most regencies and cities in Central Java Province as highly vulnerable to landslide disasters. This is supported by the 2013 Indonesian Disaster Risk Index (in Indonesian, Indeks Risiko Bencana Indonesia), which suggests that some areas in Central Java Province exhibit a high risk of natural disasters. This research aims to develop an application architecture and analysis methodology in GIS to predict and to map rainfall distribution. We propose our GIS architectural application of “Multiplatform Architectural Spatiotemporal” and the data analysis methods of “Triple Exponential Smoothing” and “Spatial Interpolation” as our significant scientific contribution. This research consists of two parts, namely attribute data prediction using the Triple Exponential Smoothing (TES) method and spatial data prediction using the Inverse Distance Weighting (IDW) method. We conduct our research in 19 subdistricts in the Boyolali Regency, Central Java Province, Indonesia. Our main research data are the biweekly rainfall records for 2000-2016 from the Climatology, Meteorology, and Geophysics Agency (in Indonesian, Badan Meteorologi, Klimatologi, dan Geofisika) of Central Java Province and the Laboratory of Plant Disease Observations Region V Surakarta, Central Java. The application architecture and analytical methodology of “Multiplatform Architectural Spatiotemporal” and the spatial data analysis methodology of “Triple Exponential Smoothing” and “Spatial Interpolation” can be developed as a GIS application framework of rainfall distribution for various applied fields. The comparison between the TES and IDW methods shows that, relative to the time series prediction, spatial interpolation yields values closer to the actual data. Spatial interpolation is closer to the actual data because the computed values are based on the rainfall data of the nearest locations, i.e. the neighbours of the sample values. However, the IDW’s main weakness is that some
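
    A minimal sketch of the two analysis steps, assuming a simplified additive Holt-Winters implementation of triple exponential smoothing and a plain inverse-distance-weighting interpolator; the rainfall series and station layout are synthetic, not the Boyolali data.

```python
import numpy as np

def triple_exponential_smoothing(series, season_len, alpha, beta, gamma, n_ahead):
    """Additive Holt-Winters: a simplified stand-in for the TES step described above."""
    level = series[:season_len].mean()
    trend = (series[season_len:2 * season_len].mean() - level) / season_len
    seasonal = list(series[:season_len] - level)
    for i, y in enumerate(series):
        s = seasonal[i % season_len]
        last_level = level
        level = alpha * (y - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonal[i % season_len] = gamma * (y - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + seasonal[(len(series) + h) % season_len]
            for h in range(n_ahead)]

def idw(sample_xy, sample_vals, query_xy, power=2.0):
    """Inverse-distance-weighted interpolation of station values onto a query point."""
    d = np.linalg.norm(sample_xy - query_xy, axis=1)
    if np.any(d == 0):
        return float(sample_vals[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * sample_vals) / np.sum(w))

# Biweekly rainfall (24 values/year) at one station, forecast one period ahead,
# then interpolate forecasts from several stations onto an unobserved sub-district.
rain = 50 + 30 * np.sin(np.arange(96) * 2 * np.pi / 24) + np.random.default_rng(0).normal(0, 5, 96)
print("next-period forecast:", round(triple_exponential_smoothing(rain, 24, 0.3, 0.05, 0.2, 1)[0], 1))

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
forecasts = np.array([62.0, 48.0, 55.0])
print("IDW estimate at (4, 3):", round(idw(stations, forecasts, np.array([4.0, 3.0])), 1))
```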

  16. A Simple PB/LIE Free Energy Function Accurately Predicts the Peptide Binding Specificity of the Tiam1 PDZ Domain.

    Science.gov (United States)

    Panel, Nicolas; Sun, Young Joo; Fuentes, Ernesto J; Simonson, Thomas

    2017-01-01

    PDZ domains generally bind short amino acid sequences at the C-terminus of target proteins, and short peptides can be used as inhibitors or model ligands. Here, we used experimental binding assays and molecular dynamics simulations to characterize 51 complexes involving the Tiam1 PDZ domain and to test the performance of a semi-empirical free energy function. The free energy function combined a Poisson-Boltzmann (PB) continuum electrostatic term, a van der Waals interaction energy, and a surface area term. Each term was empirically weighted, giving a Linear Interaction Energy or "PB/LIE" free energy. The model yielded a mean unsigned deviation of 0.43 kcal/mol and a Pearson correlation of 0.64 between experimental and computed free energies, which was superior to a Null model that assumes all complexes have the same affinity. Analyses of the models support several experimental observations that indicate the orientation of the α2 helix is a critical determinant for peptide specificity. The models were also used to predict binding free energies for nine new variants, corresponding to point mutants of the Syndecan1 and Caspr4 peptides. The predictions did not reveal improved binding; however, they suggest that an unnatural amino acid could be used to increase protease resistance and peptide lifetimes in vivo. The overall performance of the model should allow its use in the design of new PDZ ligands in the future.
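
    In equation form, such a PB/LIE score is a weighted sum of the three terms, ΔG ≈ α·E_PB + β·E_vdW + γ·ΔSASA + c, with the weights fitted empirically. The sketch below, using synthetic energy terms rather than the Tiam1 data, shows how weights of this kind could be fitted by least squares and scored by mean unsigned error and Pearson correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 51  # number of complexes

# Per-complex energy terms (kcal/mol and A^2) that would come from the MD simulations:
# Poisson-Boltzmann electrostatics, van der Waals energy, buried surface area term.
E_pb = rng.normal(-20, 5, n)
E_vdw = rng.normal(-35, 6, n)
dSASA = rng.normal(-900, 80, n)
dG_exp = 0.15 * E_pb + 0.10 * E_vdw + 0.005 * dSASA + rng.normal(0, 0.4, n)  # synthetic "experiment"

# Fit the empirical weights of the linear free energy function by ordinary least squares.
A = np.column_stack([E_pb, E_vdw, dSASA, np.ones(n)])
weights = np.linalg.lstsq(A, dG_exp, rcond=None)[0]
dG_calc = A @ weights

mue = float(np.mean(np.abs(dG_calc - dG_exp)))
r = float(np.corrcoef(dG_calc, dG_exp)[0, 1])
print(f"fitted weights: {np.round(weights, 4)}; MUE = {mue:.2f} kcal/mol; r = {r:.2f}")
```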

  17. An application of a relational database system for high-throughput prediction of elemental compositions from accurate mass values.

    Science.gov (United States)

    Sakurai, Nozomu; Ara, Takeshi; Kanaya, Shigehiko; Nakamura, Yukiko; Iijima, Yoko; Enomoto, Mitsuo; Motegi, Takeshi; Aoki, Koh; Suzuki, Hideyuki; Shibata, Daisuke

    2013-01-15

    High-accuracy mass values detected by high-resolution mass spectrometry analysis enable prediction of elemental compositions, and thus are used for metabolite annotations in metabolomic studies. Here, we report an application of a relational database to significantly improve the rate of elemental composition predictions. By searching a database of pre-calculated elemental compositions with fixed kinds and numbers of atoms, the approach eliminates redundant evaluations of the same formula that occur in repeated calculations with other tools. When our approach is compared with HR2, which is one of the fastest tools available, our database search times were at least 109 times shorter than those of HR2. When a solid-state drive (SSD) was applied, the search time was 488 times shorter at 5 ppm mass tolerance and 1833 times at 0.1 ppm. Even if the search by HR2 was performed with 8 threads in a high-spec Windows 7 PC, the database search times were at least 26 and 115 times shorter without and with the SSD. These improvements were enhanced in a low spec Windows XP PC. We constructed a web service 'MFSearcher' to query the database in a RESTful manner. Available for free at http://webs2.kazusa.or.jp/mfsearcher. The web service is implemented in Java, MySQL, Apache and Tomcat, with all major browsers supported. sakurai@kazusa.or.jp Supplementary data are available at Bioinformatics online.
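
    MFSearcher itself is built on MySQL and Java; the sketch below uses an in-memory SQLite table only to illustrate the core idea of pre-computing candidate formula masses once and answering each query with an indexed range scan over a ppm window.

```python
import itertools
import sqlite3

# Monoisotopic masses of the elements considered in this toy table.
MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def fmt(c, h, n, o):
    """Format a formula string, skipping elements with zero count."""
    return f"C{c}H{h}" + (f"N{n}" if n else "") + (f"O{o}" if o else "")

# Pre-calculate a (small) table of candidate formulas and their monoisotopic masses.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE formulas (formula TEXT, mass REAL)")
rows = []
for c, h, n, o in itertools.product(range(1, 31), range(1, 61), range(0, 6), range(0, 11)):
    mass = c * MASS["C"] + h * MASS["H"] + n * MASS["N"] + o * MASS["O"]
    rows.append((fmt(c, h, n, o), mass))
conn.executemany("INSERT INTO formulas VALUES (?, ?)", rows)
conn.execute("CREATE INDEX idx_mass ON formulas (mass)")   # enables fast range scans

def search(query_mass, ppm=5.0):
    """Return all pre-computed formulas within the given ppm tolerance."""
    tol = query_mass * ppm * 1e-6
    return conn.execute("SELECT formula, mass FROM formulas WHERE mass BETWEEN ? AND ?",
                        (query_mass - tol, query_mass + tol)).fetchall()

print(search(180.0634))   # e.g. a neutral mass matching C6H12O6
```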

  18. A Simple PB/LIE Free Energy Function Accurately Predicts the Peptide Binding Specificity of the Tiam1 PDZ Domain

    Directory of Open Access Journals (Sweden)

    Nicolas Panel

    2017-09-01

    Full Text Available PDZ domains generally bind short amino acid sequences at the C-terminus of target proteins, and short peptides can be used as inhibitors or model ligands. Here, we used experimental binding assays and molecular dynamics simulations to characterize 51 complexes involving the Tiam1 PDZ domain and to test the performance of a semi-empirical free energy function. The free energy function combined a Poisson-Boltzmann (PB) continuum electrostatic term, a van der Waals interaction energy, and a surface area term. Each term was empirically weighted, giving a Linear Interaction Energy or “PB/LIE” free energy. The model yielded a mean unsigned deviation of 0.43 kcal/mol and a Pearson correlation of 0.64 between experimental and computed free energies, which was superior to a Null model that assumes all complexes have the same affinity. Analyses of the models support several experimental observations that indicate the orientation of the α2 helix is a critical determinant for peptide specificity. The models were also used to predict binding free energies for nine new variants, corresponding to point mutants of the Syndecan1 and Caspr4 peptides. The predictions did not reveal improved binding; however, they suggest that an unnatural amino acid could be used to increase protease resistance and peptide lifetimes in vivo. The overall performance of the model should allow its use in the design of new PDZ ligands in the future.

  19. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    Science.gov (United States)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  20. Genomic inference accurately predicts the timing and severity of a recent bottleneck in a non-model insect population

    Science.gov (United States)

    McCoy, Rajiv C.; Garud, Nandita R.; Kelley, Joanna L.; Boggs, Carol L.; Petrov, Dmitri A.

    2015-01-01

    The analysis of molecular data from natural populations has allowed researchers to answer diverse ecological questions that were previously intractable. In particular, ecologists are often interested in the demographic history of populations, information that is rarely available from historical records. Methods have been developed to infer demographic parameters from genomic data, but it is not well understood how inferred parameters compare to true population history or depend on aspects of experimental design. Here we present and evaluate a method of SNP discovery using RNA-sequencing and demographic inference using the program δaδi, which uses a diffusion approximation to the allele frequency spectrum to fit demographic models. We test these methods in a population of the checkerspot butterfly Euphydryas gillettii. This population was intentionally introduced to Gothic, Colorado in 1977 and has since experienced extreme fluctuations including bottlenecks of fewer than 25 adults, as documented by nearly annual field surveys. Using RNA-sequencing of eight individuals from Colorado and eight individuals from a native population in Wyoming, we generate the first genomic resources for this system. While demographic inference is commonly used to examine ancient demography, our study demonstrates that our inexpensive, all-in-one approach to marker discovery and genotyping provides sufficient data to accurately infer the timing of a recent bottleneck. This demographic scenario is relevant for many species of conservation concern, few of which have sequenced genomes. Our results are remarkably insensitive to sample size or number of genomic markers, which has important implications for applying this method to other non-model systems. PMID:24237665

  1. Number of bodily symptoms predicts outcome more accurately than health anxiety in patients attending neurology, cardiology, and gastroenterology clinics.

    Science.gov (United States)

    Jackson, Judy; Fiddler, Maggie; Kapur, Navneet; Wells, Adrian; Tomenson, Barbara; Creed, Francis

    2006-04-01

    In consecutive new outpatients, we aimed to assess whether somatization and health anxiety predicted health care use and quality of life 6 months later in all patients or in those without demonstrable abnormalities. On the first clinic visit, participants completed the Illness Perception Questionnaire (IPQ), the Health Anxiety Questionnaire (HAQ), and the Hospital Anxiety and Depression Scale (HADS). Outcome was assessed as: (a) the number of medical consultations over the subsequent 6 months, extracted from medical records, and (b) Short-Form Health Survey 36 (SF36) physical component score 6 months after the index clinic visit. A total of 295 patients were recruited (77% response rate), and medical consultation data were available for 275. The number of bodily symptoms was associated with both outcomes in a linear fashion (P < 0.05) and predicted them more accurately than measures of somatization and hypochondriasis.

  2. Insights from triangulation of two purchase choice elicitation methods to predict social decision making in healthcare.

    Science.gov (United States)

    Whitty, Jennifer A; Rundle-Thiele, Sharyn R; Scuffham, Paul A

    2012-03-01

    Discrete choice experiments (DCEs) and the Juster scale are accepted methods for the prediction of individual purchase probabilities. Nevertheless, these methods have seldom been applied to a social decision-making context. To gain an overview of social decisions for a decision-making population through data triangulation, these two methods were used to understand purchase probability in a social decision-making context. We report an exploratory social decision-making study of pharmaceutical subsidy in Australia. A DCE and selected Juster scale profiles were presented to current and past members of the Australian Pharmaceutical Benefits Advisory Committee and its Economic Subcommittee. Across 66 observations derived from 11 respondents for 6 different pharmaceutical profiles, there was a small overall median difference of 0.024 in the predicted probability of public subsidy (p = 0.003), with the Juster scale predicting the higher likelihood. While consistency was observed at the extremes of the probability scale, the funding probability differed over the mid-range of profiles. There was larger variability in the DCE than Juster predictions within each individual respondent, suggesting the DCE is better able to discriminate between profiles. However, large variation was observed between individuals in the Juster scale but not DCE predictions. It is important to use multiple methods to obtain a complete picture of the probability of purchase or public subsidy in a social decision-making context until further research can elaborate on our findings. This exploratory analysis supports the suggestion that the mixed logit model, which was used for the DCE analysis, may fail to adequately account for preference heterogeneity in some contexts.

  3. Automatic Earthquake Shear Stress Measurement Method Developed for Accurate Time- Prediction Analysis of Forthcoming Major Earthquakes Along Shallow Active Faults

    Science.gov (United States)

    Serata, S.

    2006-12-01

    The Serata Stressmeter has been developed to measure and monitor earthquake shear stress build-up along shallow active faults. The development work of the past 25 years has established the Stressmeter as an automatic stress measurement system for studying the timing of forthcoming major earthquakes, in support of the current earthquake prediction studies based on statistical analysis of seismological observations. In early 1982, a series of major man-made earthquakes (magnitude 4.5-5.0) suddenly occurred in an area over a deep underground potash mine in Saskatchewan, Canada. By measuring the underground stress condition of the mine, the direct cause of the earthquakes was disclosed, and the cause was successfully eliminated by controlling the stress condition of the mine. The Japanese government was interested in this development and the Stressmeter was introduced to the Japanese government research program for earthquake stress studies. In Japan the Stressmeter was first utilized for direct measurement of the intrinsic lateral tectonic stress gradient G. The measurement, conducted at the Mt. Fuji Underground Research Center of the Japanese government, disclosed constant natural gradients of maximum and minimum lateral stresses in excellent agreement with the theoretical value, i.e., G = 0.25. All the conventional methods of overcoring, hydrofracturing and deformation, which were introduced to compete with the Serata method, failed, demonstrating the fundamental difficulties of the conventional methods. The intrinsic lateral stress gradient determined by the Stressmeter for the Japanese government was found to be consistent with all the other measurements made by the Stressmeter in Japan. The stress measurement results obtained by the major international stress measurement work in the Hot Dry Rock Projects conducted in the USA, England and Germany are found to be in good agreement with the Stressmeter results obtained in Japan. Based on this broad agreement, a solid geomechanical

  4. Does 99mTc MAA study accurately predict the Hepatopulmonary shunt fraction of 90Y theraspheres?

    International Nuclear Information System (INIS)

    Jha, Ashish; Zade, A.; Monteiro, P.; Shah, S.; Purandare, N.C.; Rangarajan, V.; Kulkarni, S.; Kulkarni, A.; Shetty, Nitin

    2010-01-01

    Full text: Transarterial radioembolisation (TARE) is an FDA-approved therapeutic option for primary and metastatic liver malignancy in inoperable patients; in addition to the embolic effect (as seen with transarterial chemoembolisation, TACE), it offers selective irradiation of the target lesions with minimal toxicity to adjacent normal hepatocytes. However, there is a risk of shunting of radioactive spheres to the pulmonary circulation, with subsequent pulmonary toxicity if the hepatopulmonary shunt fraction is high. The estimated lung dose becomes the limiting factor for the dose that can be delivered trans-arterially for radioembolisation of hepatic neoplasms. This is estimated by a pretreatment 99m Tc MAA study. Aim: The accuracy of 99m Tc MAA scintigraphy in predicting the hepatopulmonary shunt fraction of 90 Y Theraspheres was evaluated by comparing it with the fraction obtained by post-therapeutic bremsstrahlung imaging. Materials and Methods: Patients: 13 patients who underwent 90 Y Theraspheres radioembolisation of hepatic malignancies (both primary and secondary) underwent pre-therapeutic 99m Tc MAA scintigraphy and post-therapeutic 90 Y bremsstrahlung scintigraphy. 10-12 mCi of freshly prepared 99m Tc MAA was administered by selective hepatic artery catheterization. Planar and tomographic images were acquired within 1 hour of radiopharmaceutical administration. Image acquisition: 99m Tc MAA static images were acquired in a 256 x 256 matrix (1000 kcounts) and SPECT in a 128 x 128 matrix with 64 frames (20 s/frame). The scan parameters for CT were 140 kV, 2.5 mAs, and 1-cm slices. SPECT images were corrected for attenuation and scatter. Post-therapeutic 90 Y bremsstrahlung imaging was done with a HEGP collimator with the photopeak centered at 140 keV with -64.29% and +56% window width. SPECT/CT images were obtained using a dual-detector gamma camera with a mounted 1-row CT scanner (Infinia Hawkeye; GE Medical Systems) to evaluate hepatic and extrahepatic tracer
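    A minimal sketch of the quantity the pretreatment 99m Tc MAA study is used to estimate: the hepatopulmonary (lung) shunt fraction computed from geometric-mean planar counts. The count values are hypothetical, and the simple anterior/posterior geometric-mean formulation is an assumption for illustration.

```python
# Sketch: hepatopulmonary shunt fraction from anterior/posterior planar counts
# using the geometric mean. Count values are hypothetical.
import math

def geometric_mean(anterior_counts, posterior_counts):
    return math.sqrt(anterior_counts * posterior_counts)

lung  = geometric_mean(anterior_counts=42_000, posterior_counts=39_000)
liver = geometric_mean(anterior_counts=510_000, posterior_counts=560_000)

lsf = lung / (lung + liver)            # lung shunt fraction
print(f"lung shunt fraction = {lsf:.1%}")
# The estimated lung dose scales with the injected 90Y activity and this fraction,
# which is why a high LSF limits the activity that can be delivered.
```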

  5. Combining Mean and Standard Deviation of Hounsfield Unit Measurements from Preoperative CT Allows More Accurate Prediction of Urinary Stone Composition Than Mean Hounsfield Units Alone.

    Science.gov (United States)

    Tailly, Thomas; Larish, Yaniv; Nadeau, Brandon; Violette, Philippe; Glickman, Leonard; Olvera-Posada, Daniel; Alenezi, Husain; Amann, Justin; Denstedt, John; Razvi, Hassan

    2016-04-01

    The mineral composition of a urinary stone may influence its surgical and medical treatment. Previous attempts at identifying stone composition based on mean Hounsfield Units (HUm) have had varied success. We aimed to evaluate the additional use of standard deviation of HU (HUsd) to more accurately predict stone composition. We identified patients from two centers who had undergone urinary stone treatment between 2006 and 2013 and had mineral stone analysis and a computed tomography (CT) available. HUm and HUsd of the stones were compared with ANOVA. Receiver operating characteristic analysis with area under the curve (AUC), Youden index, and likelihood ratio calculations were performed. Data were available for 466 patients. The major components were calcium oxalate monohydrate (COM), uric acid, hydroxyapatite, struvite, brushite, cystine, and calcium oxalate dihydrate (COD) in 41.4%, 19.3%, 12.4%, 7.5%, 5.8%, 5.4%, and 4.7% of patients, respectively. The HUm of uric acid and brushite stones was significantly lower and higher, respectively, than the HUm of any other stone type. HUm and HUsd were most accurate in predicting uric acid with an AUC of 0.969 and 0.851, respectively. The combined use of HUm and HUsd resulted in increased positive predictive value and higher likelihood ratios for identifying a stone's mineral composition for all stone types but COM. To the best of our knowledge, this is the first report of CT data aiding in the prediction of brushite stone composition. Both HUm and HUsd can help predict stone composition, and their combined use results in higher likelihood ratios that more strongly influence post-test probability.
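    A minimal sketch of the kind of ROC analysis described above, applied to synthetic HUm/HUsd data with scikit-learn; the numbers and the logistic combination of the two features are illustrative assumptions, not the study's data or model.

```python
# Sketch: evaluating HUm and HUsd as predictors of uric acid stones with ROC
# analysis and the Youden index (synthetic data, scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
n = 200
is_uric_acid = rng.integers(0, 2, n)
hu_mean = np.where(is_uric_acid, rng.normal(450, 80, n), rng.normal(900, 150, n))
hu_sd   = np.where(is_uric_acid, rng.normal(120, 30, n), rng.normal(250, 60, n))

X = np.column_stack([hu_mean, hu_sd])
score = LogisticRegression(max_iter=1000).fit(X, is_uric_acid).predict_proba(X)[:, 1]

auc = roc_auc_score(is_uric_acid, score)
fpr, tpr, thresholds = roc_curve(is_uric_acid, score)
youden = tpr - fpr                       # Youden index J = sensitivity + specificity - 1
best = thresholds[np.argmax(youden)]
print(f"AUC = {auc:.3f}, best cut-off (max J) = {best:.3f}")
```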

  6. Integrating metabolic performance, thermal tolerance, and plasticity enables for more accurate predictions on species vulnerability to acute and chronic effects of global warming.

    Science.gov (United States)

    Magozzi, Sarah; Calosi, Piero

    2015-01-01

    Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach including the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions on species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to more accurately elucidate relative species vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days, their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of Reaction Norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as a high metabolic control. In general, integrating multiple proxies for species physiological acute and chronic responses to increasing
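    For reference, the Q10 temperature coefficient mentioned above can be computed directly from metabolic rates measured at two incubation temperatures; the values in this sketch are illustrative only, not the study's data.

```python
# Sketch: metabolic thermal sensitivity (Q10) from rates measured at two
# incubation temperatures; values are illustrative.
def q10(rate1, rate2, t1, t2):
    """Q10 = (R2/R1) ** (10 / (T2 - T1))."""
    return (rate2 / rate1) ** (10.0 / (t2 - t1))

# e.g. oxygen consumption rising from 0.8 to 1.9 (arbitrary units) between 15 and 25 degC
print(f"Q10 = {q10(0.8, 1.9, 15.0, 25.0):.2f}")
```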

  7. Accurate Predictions of Mean Geomagnetic Dipole Excursion and Reversal Frequencies, Mean Paleomagnetic Field Intensity, and the Radius of Earth's Core Using McLeod's Rule

    Science.gov (United States)

    Voorhies, Coerte V.; Conrad, Joy

    1996-01-01

    The geomagnetic spatial power spectrum R(sub n)(r) is the mean square magnetic induction represented by degree n spherical harmonic coefficients of the internal scalar potential averaged over the geocentric sphere of radius r. McLeod's Rule for the magnetic field generated by Earth's core geodynamo says that the expected core surface power spectrum (R(sub nc)(c)) is inversely proportional to (2n + 1) for 1 less than n less than or equal to N(sub E). McLeod's Rule is verified by locating Earth's core with main field models of Magsat data; the estimated core radius of 3485 km is close to the seismologic value for c of 3480 km. McLeod's Rule and similar forms are then calibrated with the model values of R(sub n) for 3 less than or = n less than or = 12. Extrapolation to the degree 1 dipole predicts the expectation value of Earth's dipole moment to be about 5.89 x 10(exp 22) A m(exp 2) rms (74.5% of the 1980 value) and the expected geomagnetic intensity to be about 35.6 (mu)T rms at Earth's surface. Archeo- and paleomagnetic field intensity data show these and related predictions to be reasonably accurate. The probability distribution chi(exp 2) with 2n + 1 degrees of freedom is assigned to (2n + 1)R(sub nc)/(R(sub nc)). Extending this to the dipole implies that an exceptionally weak absolute dipole moment (less than or = 20% of the 1980 value) will exist during 2.5% of geologic time. The mean duration for such major geomagnetic dipole power excursions, one quarter of which feature durable axial dipole reversal, is estimated from the modern dipole power time-scale and the statistical model of excursions. The resulting mean excursion duration of 2767 years forces us to predict an average of 9.04 excursions per million years, 2.26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. Paleomagnetic data show these predictions to be quite accurate. McLeod's Rule led to accurate predictions of Earth's core radius, mean paleomagnetic field
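    A minimal sketch of the calibration-and-extrapolation step described above: fit the proportionality constant in R_n = K/(2n + 1) over degrees 3-12 by least squares and extrapolate to the degree 1 dipole. The spectrum values are hypothetical placeholders, and no unit conversion to a dipole moment is attempted.

```python
# Sketch: calibrate K in McLeod's Rule R_n = K / (2n + 1) from degrees 3-12 and
# extrapolate the expected dipole (n = 1) power. Spectrum values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = np.arange(3, 13)                       # spherical harmonic degrees 3..12
x = 1.0 / (2 * n + 1)
R_n = 7.5 * x * rng.lognormal(0.0, 0.15, n.size)   # hypothetical core-surface powers

K = np.sum(x * R_n) / np.sum(x * x)        # least-squares fit of R_n = K / (2n + 1)
R1_expected = K / (2 * 1 + 1)              # extrapolated expectation for the dipole
print(f"K = {K:.3f}, expected dipole power R_1 = {R1_expected:.3f} (arbitrary units)")
```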

  8. Prediction of collision cross section and retention time for broad scope screening in gradient reversed-phase liquid chromatography-ion mobility-high resolution accurate mass spectrometry.

    Science.gov (United States)

    Mollerup, Christian Brinch; Mardal, Marie; Dalsgaard, Petur Weihe; Linnet, Kristian; Barron, Leon Patrick

    2018-03-23

    Exact mass, retention time (RT), and collision cross section (CCS) are used as identification parameters in liquid chromatography coupled to ion mobility high resolution accurate mass spectrometry (LC-IM-HRMS). Targeted screening analyses are now more flexible and can be expanded for suspect and non-targeted screening. These allow for tentative identification of new compounds, and in-silico predicted reference values are used for improving confidence and filtering false-positive identifications. In this work, predictions of both RT and CCS values are performed with machine learning using artificial neural networks (ANNs). Prediction was based on molecular descriptors, 827 RTs, and 357 CCS values from pharmaceuticals, drugs of abuse, and their metabolites. ANN models for the prediction of RT or CCS separately were examined, and the potential to predict both from a single model was investigated for the first time. The optimized combined RT-CCS model was a four-layered multi-layer perceptron ANN, and the 95th prediction error percentiles were within 2 min RT error and 5% relative CCS error for the external validation set (n = 36) and the full RT-CCS dataset (n = 357). 88.6% (n = 733) of predicted RTs were within 2 min error for the full dataset. Overall, when using 2 min RT error and 5% relative CCS error, 91.9% (n = 328) of compounds were retained, while 99.4% (n = 355) were retained when using at least one of these thresholds. This combined prediction approach can therefore be useful for rapid suspect/non-targeted screening involving HRMS, and will support current workflows. Copyright © 2018 Elsevier B.V. All rights reserved.
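    A minimal sketch of how the reported tolerances can be applied when filtering suspect-screening candidates; the tolerance logic mirrors the thresholds above, while the candidate values are hypothetical.

```python
# Sketch: retaining suspect-screening candidates whose measured retention time and
# CCS fall within the prediction tolerances reported above (2 min RT, 5% CCS).
RT_TOL_MIN = 2.0      # absolute retention-time tolerance, minutes
CCS_TOL_REL = 0.05    # relative CCS tolerance

def passes(rt_pred, rt_meas, ccs_pred, ccs_meas, require_both=True):
    rt_ok = abs(rt_pred - rt_meas) <= RT_TOL_MIN
    ccs_ok = abs(ccs_pred - ccs_meas) / ccs_pred <= CCS_TOL_REL
    return (rt_ok and ccs_ok) if require_both else (rt_ok or ccs_ok)

# Hypothetical candidate: predicted 7.4 min / 182 A^2, measured 9.9 min / 189 A^2
print(passes(7.4, 9.9, 182.0, 189.0))           # strict: both thresholds
print(passes(7.4, 9.9, 182.0, 189.0, False))    # relaxed: at least one threshold
```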

  9. Reliable and accurate point-based prediction of cumulative infiltration using soil readily available characteristics: A comparison between GMDH, ANN, and MLR

    Science.gov (United States)

    Rahmati, Mehdi

    2017-08-01

    Developing accurate and reliable pedo-transfer functions (PTFs) to predict soil non-readily available characteristics is one of the topics of greatest concern in soil science, and selecting appropriate predictors is a crucial factor in PTF development. The group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure to select the most essential PTF input variables, but also results in more accurate and reliable estimates than other commonly applied methodologies. Therefore, the current research aimed to apply GMDH, in comparison with multivariate linear regression (MLR) and artificial neural networks (ANN), to develop several PTFs that predict soil cumulative infiltration on a point basis at specific time intervals (0.5-45 min) using soil readily available characteristics (RACs). In this regard, soil infiltration curves as well as several soil RACs including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field saturated (θfs) water contents were measured at 134 different points in the Lighvan watershed, northwest of Iran. Then, applying the GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, PTFs developed by the GMDH and MLR procedures using all soil RACs including Ks gave more accurate (E values of 0.673-0.963) and reliable (CV values below 11 percent) predictions of cumulative infiltration at the specific time steps. In contrast, the ANN procedure had lower accuracy (E values of 0.356-0.890) and reliability (CV values up to 50 percent) compared with GMDH and MLR. The results also revealed
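    A small sketch of the two evaluation criteria quoted above, computed on synthetic data: the Nash-Sutcliffe efficiency E and a coefficient of variation expressed here as RMSE relative to the observed mean (the exact CV definition is not given in the abstract, so this form is an assumption).

```python
# Sketch: accuracy (Nash-Sutcliffe efficiency E) and a reliability proxy (CV,
# here RMSE relative to the observed mean) for a PTF's cumulative-infiltration
# predictions. Data are synthetic and the CV definition is an assumption.
import numpy as np

observed  = np.array([2.1, 5.4, 9.8, 14.2, 18.9, 23.5])   # mm, synthetic
predicted = np.array([2.4, 5.1, 10.3, 13.6, 19.5, 22.8])

E = 1.0 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)
rmse = np.sqrt(np.mean((observed - predicted) ** 2))
cv_percent = 100.0 * rmse / observed.mean()

print(f"E = {E:.3f}, CV = {cv_percent:.1f}%")
```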

  10. Predicting College Students' First Year Success: Should Soft Skills Be Taken into Consideration to More Accurately Predict the Academic Achievement of College Freshmen?

    Science.gov (United States)

    Powell, Erica Dion

    2013-01-01

    This study presents a survey developed to measure the skills of entering college freshmen in the areas of responsibility, motivation, study habits, literacy, and stress management, and explores the predictive power of this survey as a measure of academic performance during the first semester of college. The survey was completed by 334 incoming…

  11. Prediction Approach of Critical Node Based on Multiple Attribute Decision Making for Opportunistic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qifan Chen

    2016-01-01

    Full Text Available Predicting critical nodes of an Opportunistic Sensor Network (OSN) can help us not only to improve network performance but also to decrease the cost of network maintenance. However, existing ways of predicting critical nodes in static networks are not suitable for OSN. In this paper, the concepts of critical node, region contribution, and cut-vertex in a multiregion OSN are defined. We propose an approach to predicting critical nodes for OSN based on multiple attribute decision making (MADM). It uses region contribution (RC) to represent the dependence of regions on Ferry nodes. The TOPSIS algorithm is employed to find the Ferry node with the maximum comprehensive contribution, which is a critical node. The experimental results show that, in different scenarios, this approach can better predict the critical nodes of OSN.
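    A minimal sketch of the TOPSIS ranking step used to select the Ferry node with the maximum comprehensive contribution; the decision matrix, weights, and the treatment of all attributes as benefit criteria are illustrative assumptions.

```python
# Sketch of the TOPSIS ranking step used to pick the Ferry node with the largest
# comprehensive contribution. The decision matrix (rows = candidate nodes,
# columns = attributes such as region contribution) and weights are hypothetical.
import numpy as np

X = np.array([[0.8, 120.0, 0.6],
              [0.5, 200.0, 0.9],
              [0.7,  90.0, 0.7]])          # 3 candidate nodes x 3 benefit attributes
w = np.array([0.5, 0.3, 0.2])              # attribute weights

V = w * X / np.linalg.norm(X, axis=0)      # vector-normalized, weighted matrix
ideal, anti_ideal = V.max(axis=0), V.min(axis=0)   # all attributes treated as benefits

d_plus  = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)   # relative closeness to the ideal solution

print("closeness:", np.round(closeness, 3), "-> critical node:", int(np.argmax(closeness)))
```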

  12. Noncontrast computed tomography can predict the outcome of shockwave lithotripsy via accurate stone measurement and abdominal fat distribution determination

    Directory of Open Access Journals (Sweden)

    Jiun-Hung Geng

    2015-01-01

    Full Text Available Urolithiasis is a common disease of the urinary system. Extracorporeal shockwave lithotripsy (SWL) has become one of the standard treatments for renal and ureteral stones; however, the success rates range widely and failure of stone disintegration may cause additional outlay, alternative procedures, and even complications. We used the data available from noncontrast abdominal computed tomography (NCCT) to evaluate the impact of stone parameters and abdominal fat distribution on calculus-free rates following SWL. We retrospectively reviewed 328 patients who had urinary stones and had undergone SWL from August 2012 to August 2013. All of them received pre-SWL NCCT; 1 month after SWL, radiography was arranged to evaluate the condition of the fragments. These patients were classified into stone-free group and residual stone group. Unenhanced computed tomography variables, including stone attenuation, abdominal fat area, and skin-to-stone distance (SSD) were analyzed. In all, 197 (60%) were classified as stone-free and 132 (40%) as having residual stone. The mean ages were 49.35 ± 13.22 years and 55.32 ± 13.52 years, respectively. On univariate analysis, age, stone size, stone surface area, stone attenuation, SSD, total fat area (TFA), abdominal circumference, serum creatinine, and the severity of hydronephrosis revealed statistical significance between these two groups. From multivariate logistic regression analysis, the independent parameters impacting SWL outcomes were stone size, stone attenuation, TFA, and serum creatinine. [Adjusted odds ratios (95% confidence intervals): 9.49 (3.72–24.20), 2.25 (1.22–4.14), 2.20 (1.10–4.40), and 2.89 (1.35–6.21), respectively; all p < 0.05]. In the present study, stone size, stone attenuation, TFA and serum creatinine were four independent predictors for stone-free rates after SWL. These findings suggest that pretreatment NCCT may predict the outcomes after SWL. Consequently, we can use these

  13. Noncontrast computed tomography can predict the outcome of shockwave lithotripsy via accurate stone measurement and abdominal fat distribution determination.

    Science.gov (United States)

    Geng, Jiun-Hung; Tu, Hung-Pin; Shih, Paul Ming-Chen; Shen, Jung-Tsung; Jang, Mei-Yu; Wu, Wen-Jen; Li, Ching-Chia; Chou, Yii-Her; Juan, Yung-Shun

    2015-01-01

    Urolithiasis is a common disease of the urinary system. Extracorporeal shockwave lithotripsy (SWL) has become one of the standard treatments for renal and ureteral stones; however, the success rates range widely and failure of stone disintegration may cause additional outlay, alternative procedures, and even complications. We used the data available from noncontrast abdominal computed tomography (NCCT) to evaluate the impact of stone parameters and abdominal fat distribution on calculus-free rates following SWL. We retrospectively reviewed 328 patients who had urinary stones and had undergone SWL from August 2012 to August 2013. All of them received pre-SWL NCCT; 1 month after SWL, radiography was arranged to evaluate the condition of the fragments. These patients were classified into stone-free group and residual stone group. Unenhanced computed tomography variables, including stone attenuation, abdominal fat area, and skin-to-stone distance (SSD) were analyzed. In all, 197 (60%) were classified as stone-free and 132 (40%) as having residual stone. The mean ages were 49.35 ± 13.22 years and 55.32 ± 13.52 years, respectively. On univariate analysis, age, stone size, stone surface area, stone attenuation, SSD, total fat area (TFA), abdominal circumference, serum creatinine, and the severity of hydronephrosis revealed statistical significance between these two groups. From multivariate logistic regression analysis, the independent parameters impacting SWL outcomes were stone size, stone attenuation, TFA, and serum creatinine. [Adjusted odds ratios and (95% confidence intervals): 9.49 (3.72-24.20), 2.25 (1.22-4.14), 2.20 (1.10-4.40), and 2.89 (1.35-6.21) respectively, all p < 0.05]. In the present study, stone size, stone attenuation, TFA and serum creatinine were four independent predictors for stone-free rates after SWL. These findings suggest that pretreatment NCCT may predict the outcomes after SWL. Consequently, we can use these predictors for selecting
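    A minimal sketch of how adjusted odds ratios with 95% confidence intervals are obtained from a multivariate logistic regression, here with statsmodels on synthetic dichotomized predictors rather than the study data.

```python
# Sketch: adjusted odds ratios with 95% CIs from a multivariate logistic
# regression of stone-free status, using statsmodels on synthetic predictors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "stone_size_large": rng.integers(0, 2, n),     # dichotomized predictors (synthetic)
    "high_attenuation": rng.integers(0, 2, n),
    "high_tfa":         rng.integers(0, 2, n),
    "high_creatinine":  rng.integers(0, 2, n),
})
logit = -1.0 + 1.5 * df.stone_size_large + 0.8 * df.high_attenuation
residual_stone = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = sm.Logit(residual_stone, sm.add_constant(df)).fit(disp=0)
odds_ratios = np.exp(model.params)                 # adjusted odds ratios
ci = np.exp(model.conf_int())                      # 95% confidence intervals
print(pd.concat([odds_ratios.rename("OR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```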

  14. In 'big bang' major incidents do triage tools accurately predict clinical priority?: a systematic review of the literature.

    Science.gov (United States)

    Kilner, T M; Brace, S J; Cooke, M W; Stallard, N; Bleetman, A; Perkins, G D

    2011-05-01

    The term "big bang" major incidents is used to describe sudden, usually traumatic,catastrophic events, involving relatively large numbers of injured individuals, where demands on clinical services rapidly outstrip the available resources. Triage tools support the pre-hospital provider to prioritise which patients to treat and/or transport first based upon clinical need. The aim of this review is to identify existing triage tools and to determine the extent to which their reliability and validity have been assessed. A systematic review of the literature was conducted to identify and evaluate published data validating the efficacy of the triage tools. Studies using data from trauma patients that report on the derivation, validation and/or reliability of the specific pre-hospital triage tools were eligible for inclusion.Purely descriptive studies, reviews, exercises or reports (without supporting data) were excluded. The search yielded 1982 papers. After initial scrutiny of title and abstract, 181 papers were deemed potentially applicable and from these 11 were identified as relevant to this review (in first figure). There were two level of evidence one studies, three level of evidence two studies and six level of evidence three studies. The two level of evidence one studies were prospective validations of Clinical Decision Rules (CDR's) in children in South Africa, all the other studies were retrospective CDR derivation, validation or cohort studies. The quality of the papers was rated as good (n=3), fair (n=7), poor (n=1). There is limited evidence for the validity of existing triage tools in big bang major incidents.Where evidence does exist it focuses on sensitivity and specificity in relation to prediction of trauma death or severity of injury based on data from single or small number patient incidents. The Sacco system is unique in combining survivability modelling with the degree by which the system is overwhelmed in the triage decision system. The

  15. Perceived Physician-informed Weight Status Predicts Accurate Weight Self-Perception and Weight Self-Regulation in Low-income, African American Women.

    Science.gov (United States)

    Harris, Charlie L; Strayhorn, Gregory; Moore, Sandra; Goldman, Brian; Martin, Michelle Y

    2016-01-01

    Obese African American women under-appraise their body mass index (BMI) classification and report fewer weight loss attempts than women who accurately appraise their weight status. This cross-sectional study examined whether physician-informed weight status could predict weight self-perception and weight self-regulation strategies in obese women. A convenience sample of 118 low-income women completed a survey assessing demographic characteristics, comorbidities, weight self-perception, and weight self-regulation strategies. BMI was calculated during nurse triage. Binary logistic regression models were performed to test hypotheses. The odds of obese accurate appraisers having been informed about their weight status were six times greater than those of under-appraisers. The odds of those using an "approach" self-regulation strategy having been physician-informed were four times greater compared with those using an "avoidance" strategy. Physicians are uniquely positioned to influence accurate weight self-perception and adaptive weight self-regulation strategies in underserved women, reducing their risk for obesity-related morbidity.

  16. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    Science.gov (United States)

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are used to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method has been applied in practice in our driverless car.
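    A deliberately simplified sketch of the prediction-plus-gating idea (not the paper's full ARMA model bank): a single AR(2) predictor per axis, a grid-sized consistency gate on the GPS fix, and a simple weighted fusion with dead reckoning. All numbers are hypothetical.

```python
# Simplified sketch (not the paper's full ARMA model bank): predict the next
# position from recent navigation history with an AR(2) model per axis, reject a
# GPS fix that disagrees with the prediction by more than the grid size, and
# otherwise fuse GPS and dead reckoning by simple weighting.
import numpy as np

GRID_SIZE_M = 1.0   # occupancy-grid cell size used as the consistency gate

def ar2_predict(history):
    """One-step AR(2) forecast fitted by least squares on a 1-D position series."""
    y = np.asarray(history, dtype=float)
    A = np.column_stack([y[1:-1], y[:-2]])          # [x_{t-1}, x_{t-2}]
    coef, *_ = np.linalg.lstsq(A, y[2:], rcond=None)
    return coef[0] * y[-1] + coef[1] * y[-2]

history_x = [0.0, 0.9, 2.1, 2.9, 4.0, 5.1]          # hypothetical east positions (m)
predicted = ar2_predict(history_x)

gps_fix, dead_reckoning = 9.3, 6.1                  # hypothetical next measurements (m)
if abs(gps_fix - predicted) > GRID_SIZE_M:          # multipath-like jump: fail the gate
    fused = dead_reckoning
else:
    fused = 0.7 * gps_fix + 0.3 * dead_reckoning
print(f"predicted {predicted:.2f} m, fused {fused:.2f} m")
```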

  17. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    Directory of Open Access Journals (Sweden)

    Shiyao Wang

    2016-02-01

    Full Text Available A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are used to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method has been applied in practice in our driverless car.

  18. Enhancement of a Turbulence Sub-Model for More Accurate Predictions of Vertical Stratifications in 3D Coastal and Estuarine Modeling

    Directory of Open Access Journals (Sweden)

    Wenrui Huang

    2010-03-01

    Full Text Available This paper presents an improvement of Mellor and Yamada's second-order turbulence model in the Princeton Ocean Model (POM) for better predictions of vertical stratifications of salinity in estuaries. The model was evaluated in a strongly stratified estuary, the Apalachicola River, Florida, USA. The three-dimensional hydrodynamic model was applied to study the stratified flow and salinity intrusion in the estuary in response to tide, wind, and buoyancy forces. Model tests indicate that model predictions overestimate the stratification when using the default turbulent parameters. Analytic studies of density-induced and wind-induced flows indicate that accurate estimation of vertical eddy viscosity plays an important role in describing vertical profiles. Initial model revision experiments show that the traditional approach of modifying empirical constants in the turbulence model leads to numerical instability. In order to improve the performance of the turbulence model while maintaining numerical stability, a stratification factor was introduced to allow adjustment of the vertical turbulent eddy viscosity and diffusivity. Sensitivity studies indicate that the stratification factor, ranging from 1.0 to 1.2, does not cause numerical instability in the Apalachicola River. Model simulations show that increasing the turbulent eddy viscosity by a stratification factor of 1.12 results in optimal agreement between model predictions and observations in the case study presented here. Using the proposed stratification factor provides a useful way for coastal modelers to improve turbulence model performance in predicting vertical turbulent mixing in stratified estuaries and coastal waters.

  19. Decision Styles and Rationality: An Analysis of the Predictive Validity of the General Decision-Making Style Inventory

    Science.gov (United States)

    Curseu, Petru Lucian; Schruijer, Sandra G. L.

    2012-01-01

    This study investigates the relationship between the five decision-making styles evaluated by the General Decision-Making Style Inventory, indecisiveness, and rationality in decision making. Using a sample of 102 middle-level managers, the results show that the rational style positively predicts rationality in decision making and negatively…

  20. Learning a weighted sequence model of the nucleosome core and linker yields more accurate predictions in Saccharomyces cerevisiae and Homo sapiens.

    Directory of Open Access Journals (Sweden)

    Sheila M Reynolds

    2010-07-01

    Full Text Available DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono-, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence (301 base pairs, centered at the position to be scored) with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the
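    A minimal sketch of the feature construction implied above: mono-, di- and tri-nucleotide frequencies of a 301-bp window combined by a weighted linear score. The weights here are random placeholders rather than the trained model.

```python
# Sketch: mono-, di- and tri-nucleotide content of a 301-bp window combined by a
# weighted linear score. Weights are random placeholders, not the trained model.
from itertools import product
import numpy as np

BASES = "ACGT"
KMERS = ["".join(p) for k in (1, 2, 3) for p in product(BASES, repeat=k)]  # 4 + 16 + 64 features

def kmer_features(window):
    """Mono-, di- and tri-nucleotide frequencies of a DNA window."""
    counts = {kmer: 0 for kmer in KMERS}
    for k in (1, 2, 3):
        for i in range(len(window) - k + 1):
            counts[window[i:i + k]] += 1
    feats = []
    for kmer in KMERS:                      # normalise each k-mer length separately
        total = len(window) - len(kmer) + 1
        feats.append(counts[kmer] / total)
    return np.array(feats)

rng = np.random.default_rng(3)
weights = rng.normal(size=len(KMERS))       # placeholder for learned feature weights
window = "".join(rng.choice(list(BASES), size=301))   # hypothetical 301-bp window
score = float(weights @ kmer_features(window))        # higher score -> putative dyad position
print(f"{len(KMERS)} features, dyad score = {score:.3f}")
```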

  1. Learning a weighted sequence model of the nucleosome core and linker yields more accurate predictions in Saccharomyces cerevisiae and Homo sapiens.

    Science.gov (United States)

    Reynolds, Sheila M; Bilmes, Jeff A; Noble, William Stafford

    2010-07-08

    DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono-, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence (301 base pairs, centered at the position to be scored) with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the bulk of the

  2. Learning a Weighted Sequence Model of the Nucleosome Core and Linker Yields More Accurate Predictions in Saccharomyces cerevisiae and Homo sapiens

    Science.gov (United States)

    Reynolds, Sheila M.; Bilmes, Jeff A.; Noble, William Stafford

    2010-01-01

    DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence—301 base pairs, centered at the position to be scored—with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the bulk of the

  3. Application of structural reliability and risk assessment to life prediction and life extension decision making

    International Nuclear Information System (INIS)

    Meyer, T.A.; Balkey, K.R.; Bishop, B.A.

    1987-01-01

    Numerous uncertainties can be involved in performing component life assessments, and sufficient data may be unavailable to make a useful life prediction. Structural Reliability and Risk Assessment (SRRA) is primarily an analytical methodology, or tool, that quantifies the impact of uncertainties on the structural life of plant components and can address the lack of data in component life prediction. Before discussing the technical aspects of SRRA, general component life prediction methods are briefly reviewed to clarify the role of SRRA in such evaluations. SRRA is then presented as it is applied in component life evaluations, with example applications discussed for both nuclear and non-nuclear components

  4. Make

    CERN Document Server

    Frauenfelder, Mark

    2012-01-01

    The first magazine devoted entirely to do-it-yourself technology projects presents its 29th quarterly edition for people who like to tweak, disassemble, recreate, and invent cool new uses for technology. MAKE Volume 29 takes bio-hacking to a new level. Get introduced to DIY tracking devices before they hit the consumer electronics marketplace. Learn how to build an EKG machine to study your heartbeat, and put together a DIY bio lab to study athletic motion using consumer grade hardware.

  5. Did Ptolemy make novel predictions? Launching Ptolemaic astronomy into the scientific realism debate.

    Science.gov (United States)

    Carman, Christián; Díez, José

    2015-08-01

    The goal of this paper, both historical and philosophical, is to launch a new case into the scientific realism debate: geocentric astronomy. Scientific realism about unobservables claims that the non-observational content of our successful/justified empirical theories is true, or approximately true. The argument that is currently considered the best in favor of scientific realism is the No Miracles Argument: the predictive success of a theory that makes (novel) observational predictions while making use of non-observational content would be inexplicable unless such non-observational content approximately corresponds to the world "out there". Laudan's pessimistic meta-induction challenged this argument, and realists reacted by moving to a "selective" version of realism: the approximately true part of the theory is not its full non-observational content but only the part of it that is responsible for the novel, successful observational predictions. Selective scientific realism has been tested against some of the theories in Laudan's list, but the first member of this list, geocentric astronomy, has been traditionally ignored. Our goal here is to defend that Ptolemy's Geocentrism deserves attention and poses a prima facie strong case against selective realism, since it made several successful, novel predictions based on theoretical hypotheses that do not seem to be retained, not even approximately, by posterior theories. Here, though, we confine our work just to the detailed reconstruction of what we take to be the main novel, successful Ptolemaic predictions, leaving the full analysis and assessment of their significance for the realist thesis to future works. Copyright © 2015. Published by Elsevier Ltd.

  6. Metabolite signal identification in accurate mass metabolomics data with MZedDB, an interactive m/z annotation tool utilising predicted ionisation behaviour 'rules'

    Directory of Open Access Journals (Sweden)

    Snowdon Stuart

    2009-07-01

    Full Text Available Abstract. Background: Metabolomics experiments using Mass Spectrometry (MS) technology measure the mass to charge ratio (m/z) and intensity of ionised molecules in crude extracts of complex biological samples to generate high dimensional metabolite 'fingerprint' or metabolite 'profile' data. High resolution MS instruments routinely perform with high mass accuracy. Results: Metabolite 'structures' harvested from publicly accessible databases were converted into a common format to generate a comprehensive archive in MZedDB. 'Rules' were derived from chemical information that allowed MZedDB to generate a list of adducts and neutral loss fragments putatively able to form for each structure and calculate, on the fly, the exact molecular weight of every potential ionisation product to provide targets for annotation searches based on accurate mass. We demonstrate that data matrices representing populations of ionisation products generated from different biological matrices contain a large proportion (sometimes > 50%) of molecular isotopes, salt adducts and neutral loss fragments. Correlation analysis of ESI-MS data features confirmed the predicted relationships of m/z signals. An integrated isotope enumerator in MZedDB allowed verification of exact isotopic pattern distributions to corroborate experimental data. Conclusion: We conclude that although ultra-high accurate mass instruments provide major insight into the chemical diversity of biological extracts, the facile annotation of a large proportion of signals is not possible by simple, automated query of current databases using computed molecular formulae. Parameterising MZedDB to take into account predicted ionisation behaviour and the biological source of any sample improves greatly both the frequency and accuracy of potential annotation 'hits' in ESI-MS data.
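    The sketch below illustrates the kind of ionisation 'rules' such a resource applies: candidate adduct m/z values generated from a neutral monoisotopic mass. The mass shifts are standard approximate values, and the short rule list is illustrative rather than MZedDB's full set.

```python
# Sketch: generating candidate adduct m/z values from a neutral monoisotopic mass.
# Mass shifts (in Da) are standard approximate values; the rule list is illustrative.
PROTON = 1.007276

ADDUCT_RULES = {            # name: (mass shift, charge)
    "[M+H]+":   (+PROTON, 1),
    "[M+Na]+":  (+22.989218, 1),
    "[M+NH4]+": (+18.033823, 1),
    "[M-H]-":   (-PROTON, 1),
    "[M+2H]2+": (+2 * PROTON, 2),
}

def adduct_mz(neutral_mass, shift, charge):
    return (neutral_mass + shift) / charge

glucose = 180.063388        # monoisotopic mass of C6H12O6
for name, (shift, charge) in ADDUCT_RULES.items():
    print(f"{name:10s} m/z = {adduct_mz(glucose, shift, charge):.4f}")
```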

  7. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions.

    Science.gov (United States)

    Bendl, Jaroslav; Musil, Miloš; Štourač, Jan; Zendulka, Jaroslav; Damborský, Jiří; Brezovský, Jan

    2016-05-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations. To
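    A minimal sketch of a category-aware consensus: each tool's numeric score is binarized with a threshold specific to the variant category, and the calls are combined by majority vote. The thresholds and the simple vote are placeholders, not PredictSNP2's trained decision thresholds or weighting.

```python
# Sketch of a category-aware consensus: per-category thresholds turn numeric tool
# scores into binary calls, which are combined by majority vote. All thresholds
# are hypothetical placeholders.
CATEGORY_THRESHOLDS = {
    "regulatory": {"CADD": 10.0, "GWAVA": 0.4, "FunSeq2": 1.5},
    "missense":   {"CADD": 20.0, "GWAVA": 0.5, "FunSeq2": 2.0},
}

def consensus(category, scores):
    thresholds = CATEGORY_THRESHOLDS[category]
    calls = [scores[tool] >= thr for tool, thr in thresholds.items() if tool in scores]
    deleterious_fraction = sum(calls) / len(calls)
    label = "deleterious" if deleterious_fraction >= 0.5 else "neutral"
    return label, deleterious_fraction     # fraction doubles as a crude confidence

print(consensus("missense", {"CADD": 23.1, "GWAVA": 0.35, "FunSeq2": 2.4}))
```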

  8. Accounting for Interference, Scattering, and Electrode Absorption to Make Accurate Internal Quantum Efficiency Measurements in Organic and Other Thin Solar Cells

    KAUST Repository

    Burkhard, George F.; Hoke, Eric T.; McGehee, Michael D.

    2010-01-01

    Accurately measuring internal quantum efficiency requires knowledge of absorption in the active layer of a solar cell. The experimentally accessible total absorption includes significant contributions from the electrodes and other nonactive layers. We suggest a straightforward method for calculating the active layer contribution that minimizes error by subtracting optically-modeled electrode absorption from experimentally measured total absorption. (Figure Presented) © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
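    A minimal sketch of the correction described above: the active-layer absorption used for IQE is the measured total absorption minus the optically modelled electrode (non-active) absorption. The spectra below are hypothetical.

```python
# Sketch: IQE = EQE / active-layer absorption, where the active-layer absorption
# is measured total absorption minus modelled electrode absorption. Spectra are
# hypothetical.
import numpy as np

wavelength_nm  = np.array([400.0, 500.0, 600.0, 700.0])
eqe            = np.array([0.55, 0.62, 0.58, 0.30])   # measured external quantum efficiency
absorption_tot = np.array([0.82, 0.85, 0.80, 0.55])   # measured total absorption (1 - R - T)
absorption_el  = np.array([0.10, 0.08, 0.09, 0.12])   # modelled electrode/non-active absorption

absorption_active = absorption_tot - absorption_el
iqe = eqe / absorption_active
for wl, q in zip(wavelength_nm, iqe):
    print(f"{wl:.0f} nm: IQE = {q:.2f}")
```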

  9. Accounting for Interference, Scattering, and Electrode Absorption to Make Accurate Internal Quantum Efficiency Measurements in Organic and Other Thin Solar Cells

    KAUST Repository

    Burkhard, George F.

    2010-05-31

    Accurately measuring internal quantum efficiency requires knowledge of absorption in the active layer of a solar cell. The experimentally accessible total absorption includes significant contributions from the electrodes and other nonactive layers. We suggest a straightforward method for calculating the active layer contribution that minimizes error by subtracting optically-modeled electrode absorption from experimentally measured total absorption. (Figure Presented) © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. How we learn to make decisions: rapid propagation of reinforcement learning prediction errors in humans.

    Science.gov (United States)

    Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C

    2014-03-01

    Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors: discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the event-related brain potential (ERP) technique to demonstrate not only that rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a timing and topography similar to those of the feedback error-related negativity and that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward
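    A minimal sketch of the reinforcement-learning account referred to above: a prediction error (reward minus predicted value) drives a value update, and the error shrinks as learning proceeds, mirroring the diminishing feedback-locked response. Parameters are illustrative.

```python
# Sketch: a simple reinforcement-learning value update in the gambling-task style
# described above. The reward prediction error (delta) is large for novel rewards
# and shrinks as the predicted value converges with learning.
import random

alpha = 0.2            # learning rate
value = 0.0            # predicted value of the rewarded choice
reward_prob = 0.8      # hypothetical reward contingency

random.seed(0)
for trial in range(1, 21):
    reward = 1.0 if random.random() < reward_prob else 0.0
    delta = reward - value          # prediction error
    value += alpha * delta          # value update
    if trial in (1, 5, 10, 20):
        print(f"trial {trial:2d}: prediction error = {delta:+.2f}, value = {value:.2f}")
```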

  11. Predicting of Physiological Changes through Personality Traits and Decision Making Styles

    Directory of Open Access Journals (Sweden)

    Saeed Imani

    2016-12-01

    Full Text Available Background and Objective: One of the important concepts in social psychology is cognitive dissonance. When our behaviour conflicts with our prior attitudes, we often change our attitudes so that they are consistent with our behaviour; this is cognitive dissonance. The aim of this study was to evaluate the relation between decision-making styles, personality traits, and the physiological components of cognitive dissonance, and to offer a statistical model relating them. Materials and Methods: In this correlational study, 130 students of the Elmi-Karbordi University of Safadasht were invited and asked to complete the Scott & Bruce Decision-Making Styles Questionnaire and the Gray-Wilson Personality Questionnaire. Their physiological conditions were recorded before and after the questionnaires were administered. Cognitive dissonance was induced by having participants write about reducing the budget allocated to orphans and rate their reduced interest in a well-liked figure who ignores his or her fans. Data were analysed using regression and multivariate analysis of covariance. Results: Decision-making styles (avoidant, dependent, logical, and intuitive) and personality variables (flight, approach, active avoidance, fight, and extinction) were correlated with cognitive dissonance. The effect of decision-making styles and personality variables on the physiological components was mediated indirectly through cognitive dissonance and was significant at the P = 0.01 and P = 0.05 levels. Conclusion: Decision-making styles and personality traits are related to cognitive dissonance and its physiological components, and also predict the physiological components of cognitive dissonance.

  12. Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 Accurately Predicts Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions.

    Science.gov (United States)

    Zuñiga, Cristal; Li, Chien-Ting; Huelsman, Tyler; Levering, Jennifer; Zielinski, Daniel C; McConnell, Brian O; Long, Christopher P; Knoshaug, Eric P; Guarnieri, Michael T; Antoniewicz, Maciek R; Betenbaugh, Michael J; Zengler, Karsten

    2016-09-01

    The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. © 2016 American Society of Plant Biologists. All rights reserved.
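    A toy sketch of the flux balance analysis that underlies such genome-scale predictions, on a three-reaction network rather than the iCZ843 reconstruction: maximize the biomass flux subject to steady state and flux bounds.

```python
# Sketch of flux balance analysis on a toy 3-reaction network (not iCZ843):
# maximize the biomass flux subject to steady state (S v = 0) and flux bounds,
# using scipy's linear-programming solver.
import numpy as np
from scipy.optimize import linprog

# Columns: uptake, conversion, biomass.  Rows: metabolites A and B.
S = np.array([[ 1.0, -1.0,  0.0],
              [ 0.0,  1.0, -1.0]])
bounds = [(0.0, 10.0), (0.0, 1000.0), (0.0, 1000.0)]   # uptake limited to 10 units
c = np.array([0.0, 0.0, -1.0])                         # maximize biomass = minimize -v_biomass

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x, "growth (biomass flux):", -res.fun)
```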

  13. Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 Accurately Predicts Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions1

    Science.gov (United States)

    Zuñiga, Cristal; Li, Chien-Ting; Zielinski, Daniel C.; Guarnieri, Michael T.; Antoniewicz, Maciek R.; Zengler, Karsten

    2016-01-01

    The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. PMID:27372244

  14. Albumin-Bilirubin and Platelet-Albumin-Bilirubin Grades Accurately Predict Overall Survival in High-Risk Patients Undergoing Conventional Transarterial Chemoembolization for Hepatocellular Carcinoma.

    Science.gov (United States)

    Hansmann, Jan; Evers, Maximilian J; Bui, James T; Lokken, R Peter; Lipnik, Andrew J; Gaba, Ron C; Ray, Charles E

    2017-09-01

    To evaluate albumin-bilirubin (ALBI) and platelet-albumin-bilirubin (PALBI) grades in predicting overall survival in high-risk patients undergoing conventional transarterial chemoembolization for hepatocellular carcinoma (HCC). This single-center retrospective study included 180 high-risk patients (142 men, 59 y ± 9) between April 2007 and January 2015. Patients were considered high-risk based on laboratory abnormalities before the procedure (bilirubin > 2.0 mg/dL, albumin 1.2 mg/dL); presence of ascites, encephalopathy, portal vein thrombus, or transjugular intrahepatic portosystemic shunt; or Model for End-Stage Liver Disease score > 15. Serum albumin, bilirubin, and platelet values were used to determine ALBI and PALBI grades. Overall survival was stratified by ALBI and PALBI grades with substratification by Child-Pugh class (CPC) and Barcelona Liver Clinic Cancer (BCLC) stage using Kaplan-Meier analysis. C-index was used to determine discriminatory ability and survival prediction accuracy. Median survival for 79 ALBI grade 2 patients and 101 ALBI grade 3 patients was 20.3 and 10.7 months, respectively (P  .05). ALBI and PALBI grades are accurate survival metrics in high-risk patients undergoing conventional transarterial chemoembolization for HCC. Use of these scores allows for more refined survival stratification within CPC and BCLC stage. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.
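    For orientation, the sketch below computes the ALBI score and grade from the formula and cut-offs commonly reported in the wider ALBI literature; those constants are an assumption taken from that literature, not stated in this abstract, and the patient values are hypothetical.

```python
# Sketch: ALBI score and grade as commonly defined in the literature
# (bilirubin in micromol/L, albumin in g/L); constants and cut-offs are
# assumptions from the wider ALBI literature, and the inputs are hypothetical.
import math

def albi_grade(bilirubin_umol_l, albumin_g_l):
    score = 0.66 * math.log10(bilirubin_umol_l) - 0.0852 * albumin_g_l
    if score <= -2.60:
        grade = 1
    elif score <= -1.39:
        grade = 2
    else:
        grade = 3
    return score, grade

print(albi_grade(bilirubin_umol_l=50.0, albumin_g_l=30.0))   # hypothetical patient
```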

  15. Improved predictive modeling of white LEDs with accurate luminescence simulation and practical inputs with TracePro opto-mechanical design software

    Science.gov (United States)

    Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda

    2009-02-01

    The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomenon, such as luminescence, promise to yield designs that are more predictive - giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented where first, a phosphor formulation and excitation source are optimized for a white light. The phosphor formulation, the excitation source and other LED components are optically and mechanically modeled and ray traced. Finally, its performance is analyzed. A blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength dependent scatter coefficients, anisotropy and bulk absorption coefficient.

  16. Accurate electrostatic and van der Waals pull-in prediction for fully clamped nano/micro-beams using linear universal graphs of pull-in instability

    Science.gov (United States)

    Tahani, Masoud; Askari, Amir R.

    2014-09-01

    Although pull-in instability of electrically actuated nano/micro-beams has been investigated by many researchers to date, no explicit formula has yet been presented that can predict pull-in voltage based on a geometrically non-linear and distributed parameter model. The objective of the present paper is to introduce a simple and accurate formula to predict this value for a fully clamped electrostatically actuated nano/micro-beam. To this end, a non-linear Euler-Bernoulli beam model is employed, which accounts for the axial residual stress, geometric non-linearity of mid-plane stretching, distributed electrostatic force and the van der Waals (vdW) attraction. The non-linear boundary value governing equation of equilibrium is non-dimensionalized and solved iteratively through a single-term Galerkin-based reduced order model (ROM). The solutions are validated through direct comparison with experimental and other existing results reported in previous studies. Pull-in instability under electrical and vdW loads is also investigated using universal graphs. Based on the results of these graphs, non-dimensional pull-in and vdW parameters, which are defined in the text, vary linearly versus the other dimensionless parameters of the problem. Using this fact, some linear equations are presented to predict pull-in voltage, the maximum allowable length, the so-called detachment length, and the minimum allowable gap for a nano/micro-system. These linear equations are also reduced to a couple of universal pull-in formulas for systems with small initial gap. The accuracy of the universal pull-in formulas is also validated by comparing their results with available experimental and some previous geometric linear and closed-form findings published in the literature.

  17. A simple, fast, and accurate thermodynamic-based approach for transfer and prediction of gas chromatography retention times between columns and instruments Part III: Retention time prediction on target column.

    Science.gov (United States)

    Hou, Siyuan; Stevenson, Keisean A J M; Harynuk, James J

    2018-03-27

    This is the third part of a three-part series of papers. In Part I, we presented a method for determining the actual effective geometry of a reference column as well as the thermodynamic-based parameters of a set of probe compounds in an in-house mixture. Part II introduced an approach for estimating the actual effective geometry of a target column by collecting retention data of the same mixture of probe compounds on the target column and using their thermodynamic parameters, acquired on the reference column, as a bridge between both systems. Part III, presented here, demonstrates the retention time transfer and prediction from the reference column to the target column using experimental data for a separate mixture of compounds. To predict the retention time of a new compound, we first estimate its thermodynamic-based parameters on the reference column (using geometric parameters determined previously). The compound's retention time on a second column (of previously determined geometry) is then predicted. The models and the associated optimization algorithms were tested using simulated and experimental data. The accuracy of predicted retention times shows that the proposed approach is simple, fast, and accurate for retention time transfer and prediction between gas chromatography columns. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
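
    As a rough illustration of the thermodynamic retention modeling that underlies this kind of transfer, the sketch below numerically integrates an analyte's migration through a linear temperature program using a two-parameter (ΔH, ΔS) retention-factor model and a constant hold-up time. It is not the authors' three-part workflow; the compound parameters, phase ratio, and temperature program are hypothetical.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def retention_factor(T, dH, dS, beta):
    """k(T) from a two-parameter thermodynamic model: ln k = -dH/(RT) + dS/R - ln(beta)."""
    return np.exp(-dH / (R * T) + dS / R - np.log(beta))

def predict_retention_time(dH, dS, beta, t_hold, ramp, T0, dt=0.1, t_max=3600.0):
    """Integrate analyte migration through a linear temperature program.
    The compound elutes when the accumulated fractional migration reaches 1.
    Assumes a temperature-independent hold-up time t_hold (a simplification)."""
    t, migrated = 0.0, 0.0
    while migrated < 1.0 and t < t_max:
        T = T0 + ramp * t            # oven temperature at time t (K)
        k = retention_factor(T, dH, dS, beta)
        migrated += dt / (t_hold * (1.0 + k))
        t += dt
    return t

# Hypothetical inputs: dH = -45 kJ/mol, dS = -90 J/(mol*K), phase ratio 250,
# hold-up time 60 s, 5 K/min ramp starting at 40 degC (313.15 K).
print(predict_retention_time(-45e3, -90.0, 250.0, 60.0, 5.0 / 60.0, 313.15))
```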

  18. The four principles: can they be measured and do they predict ethical decision making?

    Science.gov (United States)

    Page, Katie

    2012-05-20

    The four principles of Beauchamp and Childress--autonomy, non-maleficence, beneficence and justice--have been extremely influential in the field of medical ethics, and are fundamental for understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured on an individual level, and then subsequently if they are used in the decision making process when individuals are faced with ethical dilemmas. The Analytic Hierarchy Process was used as a tool for the measurement of the principles. Four scenarios, which involved conflicts between the medical ethical principles, were presented to participants who then made judgments about the ethicality of the action in the scenario, and their intentions to act in the same manner if they were in the situation. Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process. This technique provides a useful tool in which to highlight individual medical ethical values. On average, individuals have a significant preference for non-maleficence over the other principles, however, and perhaps counter-intuitively, this preference does not seem to relate to applied ethical judgements in specific ethical dilemmas. People state they value these medical ethical principles but they do not actually seem to use them directly in the decision making process. The reasons for this are explained through the lack of a behavioural model to account for the relevant situational factors not captured by the principles. The limitations of the principles in predicting ethical decision making are discussed.
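
    To make the measurement step concrete, the sketch below shows the standard Analytic Hierarchy Process calculation: priority weights for the four principles are taken from the principal eigenvector of a reciprocal pairwise-comparison matrix, with a consistency ratio as a sanity check. The comparison judgments are hypothetical, not data from the study.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray):
    """Priority weights from a reciprocal pairwise-comparison matrix
    (principal right eigenvector, normalised to sum to 1)."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum(), eigvals[k].real

# Hypothetical judgments for autonomy, non-maleficence, beneficence, justice
# (Saaty's 1-9 scale; a_ij = how much more important i is than j).
A = np.array([
    [1.0, 1/3, 2.0, 2.0],
    [3.0, 1.0, 4.0, 4.0],
    [0.5, 1/4, 1.0, 1.0],
    [0.5, 1/4, 1.0, 1.0],
])
weights, lam = ahp_weights(A)
n = A.shape[0]
ci = (lam - n) / (n - 1)   # consistency index
cr = ci / 0.90             # consistency ratio (random index RI = 0.90 for n = 4)
print(np.round(weights, 3), round(cr, 3))
```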

  19. Exploring Best Practice Skills to Predict Uncertainties in Venture Capital Investment Decision-Making

    Science.gov (United States)

    Blum, David Arthur

    Algae biodiesel is the sole sustainable and abundant transportation fuel source that can replace petrol diesel use; however, high competition and economic uncertainties exist, influencing independent venture capital decision making. Technology, market, management, and government action uncertainties influence competition and economic uncertainties in the venture capital industry. The purpose of this qualitative case study was to identify the best practice skills at independent venture capital (IVC) firms to predict uncertainty between early and late funding stages. The basis of the study was real options theory, a framework used to evaluate and understand the economic and competition uncertainties inherent in natural resource investment and energy derived from plant-based oils. Data were collected from interviews of 24 venture capital partners based in the United States who invest in algae and other renewable energy solutions. Data were analyzed by coding and theme development interwoven with the conceptual framework. Eight themes emerged: (a) expected returns model, (b) due diligence, (c) invest in specific sectors, (d) reduced uncertainty-late stage, (e) coopetition, (f) portfolio firm relationships, (g) differentiation strategy, and (h) modeling uncertainty and best practice. The most noteworthy finding was that predicting uncertainty at the early stage was impractical; at the expansion and late funding stages, however, predicting uncertainty was possible. The implications of these findings will affect social change by providing independent venture capitalists with best practice skills to increase successful exits, lessen uncertainty, and encourage increased funding of renewable energy firms, contributing to cleaner and healthier communities throughout the United States.

  20. Planning versus action: Different decision-making processes predict plans to change one's diet versus actual dietary behavior.

    Science.gov (United States)

    Kiviniemi, Marc T; Brown-Kramer, Carolyn R

    2015-05-01

    Most health decision-making models posit that deciding to engage in a health behavior involves forming a behavioral intention which then leads to actual behavior. However, behavioral intentions and actual behavior may not be functionally equivalent. Two studies examined whether decision-making factors predicting dietary behaviors were the same as or distinct from those predicting intentions. Actual dietary behavior was proximally predicted by affective associations with the behavior. By contrast, behavioral intentions were predicted by cognitive beliefs about behaviors, with no contribution of affective associations. This dissociation has implications for understanding individual regulation of health behaviors and for behavior change interventions. © The Author(s) 2015.

  1. Differential extraction of endogenous and exogenous 25-OH-vitamin D from serum makes the accurate quantification in liquid chromatography-tandem mass spectrometry assays challenging.

    Science.gov (United States)

    Lankes, Ulrich; Elder, Peter A; Lewis, John G; George, Peter

    2015-01-01

    Extraction followed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis is the method of choice when it comes to the accurate quantification of 25-OH-vitamin D in blood samples. It is generally assumed that the addition of exogenous internal standard allows for the determination of the endogenous analyte concentration. In this study we investigated the extraction properties of endogenous and exogenous 25-OH-vitamin D. Eight samples were used for the evaluation of the extraction procedure and 59 patients' samples for a method comparison. The methanol-to-sample ratio (v/v) and the sample-to-hexane ratio (v/v) were varied and the LC-MS/MS signals of endogenous 25-OH-vitamin D3, spiked 25-OH-vitamin D2 and internal standard of the extracts recorded. The optimized 'in-house' LC-MS/MS assay was compared to two automated chemiluminescence immunoassays from DiaSorin and Abbott. Mathematical analysis of the data revealed a differential extraction of endogenous 25-OH-vitamin D3, spiked 25-OH-vitamin D2 and non-equilibrated internal standard. Exogenous 25-OH-vitamin D can be measured accurately if a definite methanol-to-sample ratio is used. Endogenous 25-OH-vitamin D is affected by critical quantification issues due to a differential slope in the extraction profile. The actual 25-OH-vitamin D concentration can be one-third above the measured extractable concentration. Results confirm that the 'in-house' LC-MS/MS assay provides reproducible 25-OH-vitamin D results. Discordant concentrations of 25-OH-vitamin D from LC-MS/MS assays can be caused by selection of suboptimal extraction conditions. Furthermore, a different sample pretreatment or solvent extraction system may result in a different dissociation and extraction yield of endogenous 25-OH-vitamin D and therefore contribute to variations of LC-MS/MS results. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  2. Effects of subordinate feedback to the supervisor and participation in decision-making in the prediction of organizational support.

    Science.gov (United States)

    1992-03-01

    The present study tested the hypothesis that participation in decision-making (PDM) and perceived effectiveness of subordinate feedback to the supervisor would contribute unique variance in the prediction of perceptions of organizational support. In ...

  3. Predictable chaos: a review of the effects of emotions on attention, memory and decision making.

    Science.gov (United States)

    LeBlanc, Vicki R; McConnell, Meghan M; Monteiro, Sandra D

    2015-03-01

    Healthcare practice and education are highly emotional endeavors. While this is recognized by educators and researchers seeking to develop interventions aimed at improving wellness in health professionals and at providing them with skills to deal with emotional interpersonal situations, the field of health professions education has largely ignored the role that emotions play on cognitive processes. The purpose of this review is to provide an introduction to the broader field of emotions, with the goal of better understanding the integral relationship between emotions and cognitive processes. Individuals, at any given time, are in an emotional state. This emotional state influences how they perceive the world around them, what they recall from it, as well as the decisions they make. Rather than treating emotions as undesirable forces that wreak havoc on the rational being, the field of health professions education could be enriched by a greater understanding of how these emotions can shape cognitive processes in increasingly predictable ways.

  4. Emotion regulation and risk taking: predicting risky choice in deliberative decision making.

    Science.gov (United States)

    Panno, Angelo; Lauriola, Marco; Figner, Bernd

    2013-01-01

    Only very recently has research demonstrated that experimentally induced emotion regulation strategies (cognitive reappraisal and expressive suppression) affect risky choice (e.g., Heilman et al., 2010). However, it is unknown whether this effect also operates via habitual use of emotion regulation strategies in risky choice involving deliberative decision making. We investigated the role of habitual use of emotion regulation strategies in risky choice using the "cold" deliberative version of the Columbia Card Task (CCT; Figner et al., 2009). Fifty-three participants completed the Emotion Regulation Questionnaire (ERQ; Gross & John, 2003) and--one month later--the CCT and the PANAS. Greater habitual cognitive reappraisal use was related to increased risk taking, accompanied by decreased sensitivity to changes in probability and loss amount. Greater habitual expressive suppression use was related to decreased risk taking. The results show that habitual use of reappraisal and suppression strategies predict risk taking when decisions involve predominantly cognitive-deliberative processes.

  5. The Cognitive Processes underlying Affective Decision-making Predicting Adolescent Smoking Behaviors in a Longitudinal Study

    Directory of Open Access Journals (Sweden)

    Lin eXiao

    2013-10-01

    Full Text Available This study investigates the relationship between three different cognitive processes underlying the Iowa Gambling Task (IGT) and adolescent smoking behaviors in a longitudinal study. We conducted a longitudinal study of 181 Chinese adolescents in Chengdu City, China. The participants were followed from 10th grade to 11th grade. When they were in the 10th grade (Time 1), we tested these adolescents’ decision-making using the Iowa Gambling Task and working memory capacity using the Self-ordered Pointing Test (SOPT). Self-report questionnaires were used to assess school academic performance and smoking behaviors. The same questionnaires were completed again at the one-year follow-up (Time 2). The Expectancy-Valence (EV) Model was applied to distill the IGT performance into three different underlying psychological components: (i) a motivational component which indicates the subjective weight the adolescents assign to gains versus losses; (ii) a learning-rate component which indicates the sensitivity to recent outcomes versus past experiences; and (iii) a response component which indicates how consistent the adolescents are between learning and responding. The subjective weight to gains vs. losses at Time 1 significantly predicted current smokers and current smoking levels at Time 2, controlling for demographic variables and baseline smoking behaviors. Therefore, by decomposing the IGT into three different psychological components, we found that the motivational process of weighting gains vs. losses may serve as a neuropsychological marker to predict adolescent smoking behaviors in a general youth population.
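
    A minimal sketch of an Expectancy-Valence-style decomposition (after Busemeyer and Stout) is given below: a loss-attention weight w, a recency/learning rate a, and a consistency parameter c govern trial-by-trial choice probabilities on the IGT. Sign conventions and parameter names vary across papers, and the trial data here are hypothetical.

```python
import numpy as np

def ev_model_choice_probs(wins, losses, decks, w=0.4, a=0.2, c=1.0):
    """Expectancy-Valence model (Busemeyer & Stout style) for the IGT.
    wins, losses: experienced gains/losses per trial; decks: chosen deck (0-3).
    Returns the model's predicted choice probabilities at each trial."""
    n_trials = len(decks)
    E = np.zeros(4)                      # expectancies per deck
    probs = np.zeros((n_trials, 4))
    for t in range(n_trials):
        theta = ((t + 1) / 10.0) ** c    # choice consistency grows with experience
        expt = np.exp(np.clip(theta * E, -50, 50))
        probs[t] = expt / expt.sum()
        v = (1 - w) * wins[t] - w * losses[t]   # valence: weighted gains vs. losses
        d = decks[t]
        E[d] += a * (v - E[d])           # delta-rule update for the chosen deck
    return probs

# Hypothetical mini-sequence of 5 trials
wins = np.array([100, 100, 50, 50, 100.0])
losses = np.array([0, 250, 0, 50, 0.0])
decks = np.array([0, 0, 2, 3, 1])
print(np.round(ev_model_choice_probs(wins, losses, decks), 3))
```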

  6. The four principles: Can they be measured and do they predict ethical decision making?

    Directory of Open Access Journals (Sweden)

    Page Katie

    2012-05-01

    Full Text Available Abstract Background The four principles of Beauchamp and Childress - autonomy, non-maleficence, beneficence and justice - have been extremely influential in the field of medical ethics, and are fundamental for understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured on an individual level, and then subsequently if they are used in the decision making process when individuals are faced with ethical dilemmas. Methods The Analytic Hierarchy Process was used as a tool for the measurement of the principles. Four scenarios, which involved conflicts between the medical ethical principles, were presented to participants who then made judgments about the ethicality of the action in the scenario, and their intentions to act in the same manner if they were in the situation. Results Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process. This technique provides a useful tool in which to highlight individual medical ethical values. On average, individuals have a significant preference for non-maleficence over the other principles, however, and perhaps counter-intuitively, this preference does not seem to relate to applied ethical judgements in specific ethical dilemmas. Conclusions People state they value these medical ethical principles but they do not actually seem to use them directly in the decision making process. The reasons for this are explained through the lack of a behavioural model to account for the relevant situational factors not captured by the principles. The limitations of the principles in predicting ethical decision making are discussed.

  7. The four principles: Can they be measured and do they predict ethical decision making?

    Science.gov (United States)

    2012-01-01

    Background The four principles of Beauchamp and Childress - autonomy, non-maleficence, beneficence and justice - have been extremely influential in the field of medical ethics, and are fundamental for understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured on an individual level, and then subsequently if they are used in the decision making process when individuals are faced with ethical dilemmas. Methods The Analytic Hierarchy Process was used as a tool for the measurement of the principles. Four scenarios, which involved conflicts between the medical ethical principles, were presented to participants who then made judgments about the ethicality of the action in the scenario, and their intentions to act in the same manner if they were in the situation. Results Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process. This technique provides a useful tool in which to highlight individual medical ethical values. On average, individuals have a significant preference for non-maleficence over the other principles, however, and perhaps counter-intuitively, this preference does not seem to relate to applied ethical judgements in specific ethical dilemmas. Conclusions People state they value these medical ethical principles but they do not actually seem to use them directly in the decision making process. The reasons for this are explained through the lack of a behavioural model to account for the relevant situational factors not captured by the principles. The limitations of the principles in predicting ethical decision making are discussed. PMID:22606995

  8. A random forest based risk model for reliable and accurate prediction of receipt of transfusion in patients undergoing percutaneous coronary intervention.

    Directory of Open Access Journals (Sweden)

    Hitinder S Gurm

    Full Text Available BACKGROUND: Transfusion is a common complication of Percutaneous Coronary Intervention (PCI) and is associated with adverse short- and long-term outcomes. There is no risk model for identifying patients most likely to receive transfusion after PCI. The objective of our study was to develop and validate a tool for predicting receipt of blood transfusion in patients undergoing contemporary PCI. METHODS: Random forest models were developed utilizing 45 pre-procedural clinical and laboratory variables to estimate the receipt of transfusion in patients undergoing PCI. The most influential variables were selected for inclusion in an abbreviated model. Model performance estimating transfusion was evaluated in an independent validation dataset using area under the ROC curve (AUC), with net reclassification improvement (NRI) used to compare full and reduced model prediction after grouping in low, intermediate, and high risk categories. The impact of procedural anticoagulation on observed versus predicted transfusion rates was assessed for the different risk categories. RESULTS: Our study cohort was comprised of 103,294 PCI procedures performed at 46 hospitals between July 2009 and December 2012 in Michigan, of which 72,328 (70%) were randomly selected for training the models, and 30,966 (30%) for validation. The models demonstrated excellent calibration and discrimination (AUC: full model = 0.888 (95% CI 0.877-0.899), reduced model AUC = 0.880 (95% CI, 0.868-0.892), p for difference 0.003; NRI = 2.77%, p = 0.007). Procedural anticoagulation and radial access significantly influenced transfusion rates in the intermediate and high risk patients but no clinically relevant impact was noted in low risk patients, who made up 70% of the total cohort. CONCLUSIONS: The risk of transfusion among patients undergoing PCI can be reliably calculated using a novel easy to use computational tool (https://bmc2.org/calculators/transfusion). This risk prediction
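
    The general shape of such a model can be sketched with off-the-shelf tools. The example below trains a random forest on synthetic stand-in data (not the registry variables used in the study), reports a hold-out AUC, and groups predicted risk into low/intermediate/high categories in the spirit of the abstract; all thresholds and settings are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for pre-procedural clinical/laboratory variables.
X, y = make_classification(n_samples=20000, n_features=45, n_informative=12,
                           weights=[0.96, 0.04], random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=300, min_samples_leaf=5,
                            random_state=0, n_jobs=-1)
rf.fit(X_train, y_train)

p = rf.predict_proba(X_valid)[:, 1]
print("validation AUC:", round(roc_auc_score(y_valid, p), 3))

# Group predicted risk into low / intermediate / high categories,
# mirroring the kind of stratification described in the abstract.
cuts = np.quantile(p, [0.70, 0.95])
categories = np.digitize(p, cuts)      # 0 = low, 1 = intermediate, 2 = high
for g, name in enumerate(["low", "intermediate", "high"]):
    mask = categories == g
    print(name, "observed rate:", round(y_valid[mask].mean(), 3))
```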

  9. Cosmological constraints from the CFHTLenS shear measurements using a new, accurate, and flexible way of predicting non-linear mass clustering

    Science.gov (United States)

    Angulo, Raul E.; Hilbert, Stefan

    2015-03-01

    We explore the cosmological constraints from cosmic shear using a new way of modelling the non-linear matter correlation functions. The new formalism extends the method of Angulo & White, which manipulates outputs of N-body simulations to represent the 3D non-linear mass distribution in different cosmological scenarios. We show that predictions from our approach for shear two-point correlations at 1-300 arcmin separations are accurate at the ˜10 per cent level, even for extreme changes in cosmology. For moderate changes, with target cosmologies similar to that preferred by analyses of recent Planck data, the accuracy is close to ˜5 per cent. We combine this approach with a Monte Carlo Markov chain sampler to explore constraints on a Λ cold dark matter model from the shear correlation functions measured in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We obtain constraints on the parameter combination σ8(Ωm/0.27)^0.6 = 0.801 ± 0.028. Combined with results from cosmic microwave background data, we obtain marginalized constraints on σ8 = 0.81 ± 0.01 and Ωm = 0.29 ± 0.01. These results are statistically compatible with previous analyses, which supports the validity of our approach. We discuss the advantages of our method and the potential it offers, including a path to model in detail (i) the effects of baryons, (ii) high-order shear correlation functions, and (iii) galaxy-galaxy lensing, among others, in future high-precision cosmological analyses.

  10. Alcohol levels do not accurately predict physical or mental impairment in ethanol-tolerant subjects: relevance to emergency medicine and dram shop laws.

    Science.gov (United States)

    Roberts, James R; Dollard, Denis

    2010-12-01

    The human body and the central nervous system can develop tremendous tolerance to ethanol. Mental and physical dysfunctions from ethanol, in an alcohol-tolerant individual, do not consistently correlate with ethanol levels traditionally used to define intoxication, or even lethality, in a nontolerant subject. Attempting to relate observed signs of alcohol intoxication or impairment, or to evaluate sobriety, by quantifying blood alcohol levels can be misleading, if not impossible. We report a case demonstrating the disconnect between alcohol levels and generally assigned parameters of intoxication and impairment. In this case, an alcohol-tolerant man, with a serum ethanol level of 515 mg/dl, appeared neurologically intact and cognitively normal. This individual was without objective signs of impairment or intoxication by repeated evaluations by experienced emergency physicians. In alcohol-tolerant individuals, blood alcohol levels cannot always be predicted by and do not necessarily correlate with outward appearance, overt signs of intoxication, or physical examination. This phenomenon must be acknowledged when analyzing medical decision making in the emergency department or when evaluating the ability of bartenders and party hosts to identify intoxication in dram shop cases.

  11. Decision support system in Predicting the Best Teacher with Multi Attribute Decision Making Weighted Product (MADMWP) Method

    Directory of Open Access Journals (Sweden)

    Solikhun Solikhun

    2017-06-01

    Full Text Available Prediction of the best teacher in Indonesia aims to spur growth and improve the quality of education. In this paper, prediction of the best teacher is implemented based on predefined criteria. To support the prediction process, a decision support system is needed. This paper employs the Multi Attribute Decision Making Weighted Product (MADMWP) method. The method was tested on teachers of the Al-Barokah Islamic boarding junior high school, Simalungun, North Sumatera, Indonesia. This system can be used to help solve the problem of predicting the best teacher.
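
    The weighted product calculation itself is short. The sketch below normalises the criterion weights, raises each criterion value to its (signed) weight, multiplies across criteria, and ranks the alternatives by the normalised product; the teacher criteria, scores, and weights are hypothetical.

```python
import numpy as np

def weighted_product_rank(scores, weights, is_cost=None):
    """Weighted Product MADM: S_i = prod_j x_ij ** w_j, with negative
    exponents for cost criteria; V_i = S_i / sum(S). Higher V is better."""
    scores = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                             # normalise weights
    if is_cost is None:
        is_cost = np.zeros(len(w), dtype=bool)
    signed_w = np.where(is_cost, -w, w)
    S = np.prod(scores ** signed_w, axis=1)
    V = S / S.sum()
    return V, np.argsort(-V)

# Hypothetical teacher criteria: pedagogy, discipline, attendance, peer rating
scores = [[80, 70, 90, 75],
          [85, 80, 70, 80],
          [78, 90, 85, 70]]
weights = [0.35, 0.25, 0.20, 0.20]
V, ranking = weighted_product_rank(scores, weights)
print(np.round(V, 3), "best candidate index:", ranking[0])
```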

  12. Making predictions in a changing world-inference, uncertainty, and learning.

    Science.gov (United States)

    O'Reilly, Jill X

    2013-01-01

    To function effectively, brains need to make predictions about their environment based on past experience, i.e., they need to learn about their environment. The algorithms by which learning occurs are of interest to neuroscientists, both in their own right (because they exist in the brain) and as a tool to model participants' incomplete knowledge of task parameters and hence, to better understand their behavior. This review focusses on a particular challenge for learning algorithms-how to match the rate at which they learn to the rate of change in the environment, so that they use as much observed data as possible whilst disregarding irrelevant, old observations. To do this algorithms must evaluate whether the environment is changing. We discuss the concepts of likelihood, priors and transition functions, and how these relate to change detection. We review expected and estimation uncertainty, and how these relate to change detection and learning rate. Finally, we consider the neural correlates of uncertainty and learning. We argue that the neural correlates of uncertainty bear a resemblance to neural systems that are active when agents actively explore their environments, suggesting that the mechanisms by which the rate of learning is set may be subject to top down control (in circumstances when agents actively seek new information) as well as bottom up control (by observations that imply change in the environment).

  13. To help, or not to help, that is not the only question: An investigation of the interplay of different factors to predict helping behavior in an accurate and effective way.

    OpenAIRE

    Urschler, David F.

    2016-01-01

    Previous research has shown that people’s willingness to help those in need is influenced by a multitude of factors (e.g., perceived dangerousness of a situation, cost-benefit analysis, attributions of responsibility, kinship, status, and culture). However, past research has often focused on single factors to predict helping intentions. Therefore, the present thesis examines the interplay of different factors in order to predict helping intentions in the most accurate and effective way. Th...

  14. Automatic evidence quality prediction to support evidence-based decision making.

    Science.gov (United States)

    Sarker, Abeed; Mollá, Diego; Paris, Cécile

    2015-06-01

    Evidence-based medicine practice requires practitioners to obtain the best available medical evidence, and appraise the quality of the evidence when making clinical decisions. Primarily due to the plethora of electronically available data from the medical literature, the manual appraisal of the quality of evidence is a time-consuming process. We present a fully automatic approach for predicting the quality of medical evidence in order to aid practitioners at point-of-care. Our approach extracts relevant information from medical article abstracts and utilises data from a specialised corpus to apply supervised machine learning for the prediction of the quality grades. Following an in-depth analysis of the usefulness of features (e.g., publication types of articles), they are extracted from the text via rule-based approaches and from the meta-data associated with the articles, and then applied in the supervised classification model. We propose the use of a highly scalable and portable approach using a sequence of high precision classifiers, and introduce a simple evaluation metric called average error distance (AED) that simplifies the comparison of systems. We also perform elaborate human evaluations to compare the performance of our system against human judgments. We test and evaluate our approaches on a publicly available, specialised, annotated corpus containing 1132 evidence-based recommendations. Our rule-based approach performs exceptionally well at the automatic extraction of publication types of articles, with F-scores of up to 0.99 for high-quality publication types. For evidence quality classification, our approach obtains an accuracy of 63.84% and an AED of 0.271. The human evaluations show that the performance of our system, in terms of AED and accuracy, is comparable to the performance of humans on the same data. The experiments suggest that our structured text classification framework achieves evaluation results comparable to those of human performance
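
    The exact definition of the average error distance is given in the paper; assuming it is the mean absolute distance between predicted and true grades on an ordinal scale, a sketch might look like the following (the grade labels and predictions are purely illustrative).

```python
def average_error_distance(y_true, y_pred, grade_order=("A", "B", "C", "D")):
    """Average error distance (AED) over ordinal evidence grades, assuming
    the metric is the mean absolute distance between predicted and true
    grades on the ordinal scale (see the paper for the exact definition)."""
    index = {g: i for i, g in enumerate(grade_order)}
    distances = [abs(index[t] - index[p]) for t, p in zip(y_true, y_pred)]
    return sum(distances) / len(distances)

# Hypothetical predictions over five recommendations
print(average_error_distance(["A", "B", "B", "C", "A"],
                             ["A", "C", "B", "A", "B"]))
```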

  15. Individual differences in bodily freezing predict emotional biases in decision making

    NARCIS (Netherlands)

    Ly, V.; Huys, Q.; Stins, J.F.; Roelofs, K.; Cools, R.

    2014-01-01

    Instrumental decision making has long been argued to be vulnerable to emotional responses. Literature on multiple decision making systems suggests that this emotional biasing might reflect effects of a system that regulates innately specified, evolutionarily preprogrammed responses. To test this

  16. East London Modified-Broset as Decision-Making Tool to Predict Seclusion in Psychiatric Intensive Care Units

    OpenAIRE

    Loi, Felice; Marlowe, Karl

    2017-01-01

    Seclusion is a last resort intervention for management of aggressive behavior in psychiatric settings. There is no current objective and practical decision-making instrument for seclusion use on psychiatric wards. Our aim was to test the predictive and discriminatory characteristics of the East London Modified-Broset (ELMB), to delineate its decision-making profile for seclusion of adult psychiatric patients, and second to benchmark it against the psychometric properties of the Broset Violenc...

  17. Predicting individual differences in decision-making process from signature movement styles: an illustrative study of leaders

    OpenAIRE

    Connors, Brenda L.; Rende, Richard; Colton, Timothy J.

    2013-01-01

    There has been a surge of interest in examining the utility of methods for capturing individual differences in decision-making style. We illustrate the potential offered by Movement Pattern Analysis (MPA), an observational methodology that has been used in business and by the US Department of Defense to record body movements that provide predictive insight into individual differences in decision-making motivations and actions. Twelve military officers participated in an intensive 2-h intervie...

  18. Predicting IT Governance Performance : A Method for Model-Based Decision Making

    OpenAIRE

    Simonsson, Mårten

    2008-01-01

    Contemporary enterprises are largely dependent on Information Technology (IT), which makes decision making on IT matters important. There are numerous issues that confuse IT decision making, including contradictive business needs, financial constraints, lack of communication between business and IT stakeholders and difficulty in understanding the often heterogeneous and integrated IT systems. The discipline of IT governance aims at providing the decision making structures, processes, and rela...

  19. The Predictive Accuracy of PREDICT : A Personalized Decision-Making Tool for Southeast Asian Women With Breast Cancer

    NARCIS (Netherlands)

    Wong, Hoong-Seam; Subramaniam, Shridevi; Alias, Zarifah; Taib, Nur Aishah; Ho, Gwo-Fuang; Ng, Char-Hong; Yip, Cheng-Har; Verkooijen, Helena M.; Hartman, Mikael; Bhoo Pathy, N

    Web-based prognostication tools may provide a simple and economically feasible option to aid prognostication and selection of chemotherapy in early breast cancers. We validated PREDICT, a free online breast cancer prognostication and treatment benefit tool, in a resource-limited setting. All 1480

  20. Interactions of age and cognitive functions in predicting decision making under risky conditions over the life span.

    Science.gov (United States)

    Brand, Matthias; Schiebener, Johannes

    2013-01-01

    Little is known about how normal healthy aging affects decision-making competence. In this study 538 participants (age 18-80 years) performed the Game of Dice Task (GDT). Subsamples also performed the Iowa Gambling Task as well as tasks measuring logical thinking and executive functions. In a moderated regression analysis, the significant interaction between age and executive components indicates that older participants with good executive functioning perform well on the GDT, while older participants with reduced executive functions make more risky choices. The same pattern emerges for the interaction of age and logical thinking. Results demonstrate that age and cognitive functions act in concert in predicting the decision-making performance.
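
    A moderated regression of this kind is straightforward to reproduce in outline. The sketch below fits main effects of age and executive functioning plus their interaction on a synthetic risk score; the variable names and effect sizes are hypothetical, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 538
df = pd.DataFrame({
    "age": rng.uniform(18, 80, n),
    "executive": rng.normal(0, 1, n),   # standardised executive-function score
})
# Synthetic GDT risk score in which the age effect depends on executive functioning.
df["gdt_risky"] = (10 + 0.08 * df["age"] - 1.5 * df["executive"]
                   - 0.05 * df["age"] * df["executive"] + rng.normal(0, 2, n))

# Moderated regression: main effects plus the age x executive interaction.
model = smf.ols("gdt_risky ~ age * executive", data=df).fit()
print(model.params.round(3))
print("interaction p-value:", round(model.pvalues["age:executive"], 4))
```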

  1. On Extrapolating Past the Range of Observed Data When Making Statistical Predictions in Ecology.

    Directory of Open Access Journals (Sweden)

    Paul B Conn

    Full Text Available Ecologists are increasingly using statistical models to predict animal abundance and occurrence in unsampled locations. The reliability of such predictions depends on a number of factors, including sample size, how far prediction locations are from the observed data, and similarity of predictive covariates in locations where data are gathered to locations where predictions are desired. In this paper, we propose extending Cook's notion of an independent variable hull (IVH), developed originally for application with linear regression models, to generalized regression models as a way to help assess the potential reliability of predictions in unsampled areas. Predictions occurring inside the generalized independent variable hull (gIVH) can be regarded as interpolations, while predictions occurring outside the gIVH can be regarded as extrapolations worthy of additional investigation or skepticism. We conduct a simulation study to demonstrate the usefulness of this metric for limiting the scope of spatial inference when conducting model-based abundance estimation from survey counts. In this case, limiting inference to the gIVH substantially reduces bias, especially when survey designs are spatially imbalanced. We also demonstrate the utility of the gIVH in diagnosing problematic extrapolations when estimating the relative abundance of ribbon seals in the Bering Sea as a function of predictive covariates. We suggest that ecologists routinely use diagnostics such as the gIVH to help gauge the reliability of predictions from statistical models (such as generalized linear, generalized additive, and spatio-temporal regression models).
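
    For the linear-model special case, Cook's hull reduces to a leverage comparison: a prediction location counts as an interpolation if its leverage does not exceed the largest leverage among the observed design points. The sketch below illustrates that rule on synthetic data; the gIVH of the paper generalises the idea to generalized regression models via the prediction variance, which this sketch does not implement.

```python
import numpy as np

def inside_ivh(X_obs, X_pred):
    """Cook's independent variable hull for a linear model: a prediction
    location is an 'interpolation' if its leverage h0 = x0' (X'X)^-1 x0
    does not exceed the largest leverage among the observed design points."""
    XtX_inv = np.linalg.inv(X_obs.T @ X_obs)
    h_obs = np.einsum("ij,jk,ik->i", X_obs, XtX_inv, X_obs)
    h_pred = np.einsum("ij,jk,ik->i", X_pred, XtX_inv, X_pred)
    return h_pred <= h_obs.max()

rng = np.random.default_rng(1)
X_obs = np.column_stack([np.ones(50), rng.uniform(0, 1, 50)])    # intercept + covariate
X_pred = np.column_stack([np.ones(3), np.array([0.5, 1.2, 3.0])])
# Points far outside the sampled covariate range are flagged as extrapolations.
print(inside_ivh(X_obs, X_pred))
```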

  2. Prediction of psychological functioning one year after the predictive test for Huntington's disease and impact of the test result on reproductive decision making.

    Science.gov (United States)

    Decruyenaere, M; Evers-Kiebooms, G; Boogaerts, A; Cassiman, J J; Cloostermans, T; Demyttenaere, K; Dom, R; Fryns, J P; Van den Berghe, H

    1996-01-01

    For people at risk for Huntington's disease, the anxiety and uncertainty about the future may be very burdensome and may be an obstacle to personal decision making about important life issues, for example, procreation. For some at risk persons, this situation is the reason for requesting predictive DNA testing. The aim of this paper is two-fold. First, we want to evaluate whether knowing one's carrier status reduces anxiety and uncertainty and whether it facilitates decision making about procreation. Second, we endeavour to identify pretest predictors of psychological adaptation one year after the predictive test (psychometric evaluation of general anxiety, depression level, and ego strength). The impact of the predictive test result was assessed in 53 subjects tested, using pre- and post-test psychometric measurement and self-report data of follow up interviews. Mean anxiety and depression levels were significantly decreased one year after a good test result; there was no significant change in the case of a bad test result. The mean personality profile, including ego strength, remained unchanged one year after the test. The study further shows that the test result had a definite impact on reproductive decision making. Stepwise multiple regression analyses were used to select the best predictors of the subject's post-test reactions. The results indicate that a careful evaluation of pretest ego strength, depression level, and coping strategies may be helpful in predicting post-test reactions, independently of the carrier status. Test result (carrier/ non-carrier), gender, and age did not significantly contribute to the prediction. About one third of the variance of post-test anxiety and depression level and more than half of the variance of ego strength was explained, implying that other psychological or social aspects should also be taken into account when predicting individual post-test reactions. PMID:8880572

  3. Preschool Teaching Students' Prediction of Decision Making Strategies and Academic Achievement on Learning Motivations

    Science.gov (United States)

    Acat, M. Bahaddin; Dereli, Esra

    2012-01-01

    The purpose of this study was to identify problems and motivation sources and strategies of decision-making of the students' attending preschool education teacher department, was to determine the relationship between learning motivation and strategies of decision-making, academic achievement of students, was to determine whether strategies of…

  4. Making smart social judgments takes time: infants' recruitment of goal information when generating action predictions.

    Science.gov (United States)

    Krogh-Jespersen, Sheila; Woodward, Amanda L

    2014-01-01

    Previous research has shown that young infants perceive others' actions as structured by goals. One open question is whether the recruitment of this understanding when predicting others' actions imposes a cognitive challenge for young infants. The current study explored infants' ability to utilize their knowledge of others' goals to rapidly predict future behavior in complex social environments and distinguish goal-directed actions from other kinds of movements. Fifteen-month-olds (N = 40) viewed videos of an actor engaged in either a goal-directed (grasping) or an ambiguous (brushing the back of her hand) action on a Tobii eye-tracker. At test, critical elements of the scene were changed and infants' predictive fixations were examined to determine whether they relied on goal information to anticipate the actor's future behavior. Results revealed that infants reliably generated goal-based visual predictions for the grasping action, but not for the back-of-hand behavior. Moreover, response latencies were longer for goal-based predictions than for location-based predictions, suggesting that goal-based predictions are cognitively taxing. Analyses of areas of interest indicated that heightened attention to the overall scene, as opposed to specific patterns of attention, was the critical indicator of successful judgments regarding an actor's future goal-directed behavior. These findings shed light on the processes that support "smart" social behavior in infants, as it may be a challenge for young infants to use information about others' intentions to inform rapid predictions.

  5. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11.

    Science.gov (United States)

    Lundegaard, Claus; Lamberth, Kasper; Harndahl, Mikkel; Buus, Søren; Lund, Ole; Nielsen, Morten

    2008-07-01

    NetMHC-3.0 is trained on a large number of quantitative peptide data using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC): peptide binding. The predictions are based on artificial neural networks trained on data from 55 MHC alleles (43 human and 12 non-human), and position-specific scoring matrices (PSSMs) for an additional 67 HLA alleles. As only the MHC class I prediction server is available, predictions are possible for peptides of length 8-11 for all 122 alleles. Artificial neural network predictions are given as actual IC50 values, whereas PSSM predictions are given as log-odds likelihood scores. The output is optionally available as a download for easy post-processing. The training method underlying the server is the best available, and has been used to predict possible MHC-binding peptides in a series of pathogen viral proteomes including SARS, Influenza and HIV, resulting in an average of 75-80% confirmed MHC binders. Here, the performance is further validated and benchmarked using a large set of newly published affinity data, non-redundant to the training set. The server is free to use and available at: http://www.cbs.dtu.dk/services/NetMHC.
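
    Networks of this family are commonly described as being trained on 1 − log50k-transformed affinities, so a raw network output in [0, 1] can be mapped back to an IC50 in nM. The sketch below applies that commonly cited transform together with the conventional ~500 nM binder threshold; treat both as assumptions rather than the server's exact internals.

```python
def ic50_from_network_output(y: float, max_ic50: float = 50000.0) -> float:
    """Invert the 1 - log(IC50)/log(50000) transform commonly used when
    training MHC:peptide affinity networks; y in [0, 1], result in nM."""
    return max_ic50 ** (1.0 - y)

for y in (0.2, 0.5, 0.8):
    ic50 = ic50_from_network_output(y)
    # ~500 nM is the conventional cut-off for calling a peptide a binder.
    print(y, "->", round(ic50, 1), "nM", "(binder)" if ic50 < 500 else "(non-binder)")
```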

  6. Accurate prediction of subcellular location of apoptosis proteins combining Chou’s PseAAC and PsePSSM based on wavelet denoising

    Science.gov (United States)

    Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Wang, Ming-Hui; Zhang, Yan

    2017-01-01

    Information on the subcellular localization of apoptosis proteins is very important for understanding the mechanism of programmed cell death and for drug development. Predicting the subcellular localization of an apoptosis protein remains a challenging task, and such predictions help to clarify protein function and the role of metabolic processes. In this paper, we propose a novel method for protein subcellular localization prediction. First, the features of the protein sequence are extracted by combining Chou's pseudo amino acid composition (PseAAC) and the pseudo-position specific scoring matrix (PsePSSM); the extracted feature information is then denoised by two-dimensional (2-D) wavelet denoising. Finally, the optimal feature vectors are input to an SVM classifier to predict the subcellular location of apoptosis proteins. Quite promising predictions are obtained using the jackknife test on three widely used datasets and compared with other state-of-the-art methods. The results indicate that the proposed method can remarkably improve the prediction accuracy of apoptosis protein subcellular localization and will be a supplementary tool for future proteomics research. PMID:29296195
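
    The overall pipeline shape (feature matrix, 2-D wavelet denoising, then an SVM evaluated by the jackknife test) can be sketched on synthetic features. The example below uses PyWavelets and scikit-learn with made-up data in place of real PseAAC/PsePSSM vectors; the wavelet, threshold, and SVM settings are illustrative, not the paper's.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def wavelet_denoise_2d(features, wavelet="db2", level=2, thresh=0.1):
    """Soft-threshold the detail coefficients of a 2-D wavelet decomposition
    of the (samples x features) matrix and reconstruct it."""
    coeffs = pywt.wavedec2(features, wavelet, level=level)
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    rec = pywt.waverec2(denoised, wavelet)
    return rec[: features.shape[0], : features.shape[1]]

# Synthetic stand-in for PseAAC + PsePSSM feature vectors (4 locations)
rng = np.random.default_rng(0)
n, d = 120, 60
y = rng.integers(0, 4, n)
X = rng.normal(size=(n, d)) + y[:, None] * 0.5       # class-dependent signal + noise

X_dn = wavelet_denoise_2d(X)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
acc = cross_val_score(clf, X_dn, y, cv=LeaveOneOut()).mean()   # jackknife test
print("jackknife accuracy:", round(acc, 3))
```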

  7. The accurate definition of metabolic volumes on 18F-FDG-PET before treatment allows the response to chemoradiotherapy to be predicted in the case of oesophagus cancers

    International Nuclear Information System (INIS)

    Hatt, M.; Cheze-Le Rest, C.; Visvikis, D.; Pradier, O.

    2011-01-01

    This study assesses whether the response of locally advanced oesophagus cancers can be predicted before treatment begins, using metabolic volume measurements performed on pre-treatment 18F-FDG PET images. Medical files of 50 patients were analyzed. Based on the observed responses and on the metabolic volume and Total Lesion Glycolysis (TLG) values, the images allow the extraction of parameters, such as the TLG, that serve as criteria for predicting the therapeutic response. Short communication
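
    Total lesion glycolysis is conventionally the product of the metabolic tumour volume and the mean SUV inside the segmented volume. The sketch below segments a toy PET sub-volume at 40% of SUVmax, a common convention (not necessarily the one used in this study), and returns MTV and TLG; the data are synthetic.

```python
import numpy as np

def metabolic_volume_and_tlg(suv, voxel_volume_ml, threshold_fraction=0.40):
    """Segment the lesion at a fixed fraction of SUVmax (a common convention),
    then return metabolic tumour volume (mL) and total lesion glycolysis
    (TLG = MTV x mean SUV inside the segmented volume)."""
    suv = np.asarray(suv, dtype=float)
    mask = suv >= threshold_fraction * suv.max()
    mtv = mask.sum() * voxel_volume_ml
    tlg = mtv * suv[mask].mean()
    return mtv, tlg

# Tiny synthetic PET sub-volume (SUV values), 4 x 4 x 4 voxels of 0.064 mL each
rng = np.random.default_rng(3)
suv = rng.uniform(1.0, 3.0, size=(4, 4, 4))
suv[1:3, 1:3, 1:3] += 8.0          # hypothetical hot lesion
print(metabolic_volume_and_tlg(suv, voxel_volume_ml=0.064))
```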

  8. Predicting and understanding law-making with word vectors and an ensemble model.

    Science.gov (United States)

    Nay, John J

    2017-01-01

    Out of nearly 70,000 bills introduced in the U.S. Congress from 2001 to 2015, only 2,513 were enacted. We developed a machine learning approach to forecasting the probability that any bill will become law. Starting in 2001 with the 107th Congress, we trained models on data from previous Congresses, predicted all bills in the current Congress, and repeated until the 113th Congress served as the test. For prediction we scored each sentence of a bill with a language model that embeds legislative vocabulary into a high-dimensional, semantic-laden vector space. This language representation enables our investigation into which words increase the probability of enactment for any topic. To test the relative importance of text and context, we compared the text model to a context-only model that uses variables such as whether the bill's sponsor is in the majority party. To test the effect of changes to bills after their introduction on our ability to predict their final outcome, we compared using the bill text and meta-data available at the time of introduction with using the most recent data. At the time of introduction context-only predictions outperform text-only, and with the newest data text-only outperforms context-only. Combining text and context always performs best. We conducted a global sensitivity analysis on the combined model to determine important variables predicting enactment.
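
    The general shape of a text-plus-context forecast can be sketched as follows: average word vectors over each sentence to score a bill's text, fit separate text and context classifiers, and combine their probabilities. The toy word-vector table, context features, and bills below are hypothetical stand-ins, not the paper's language model or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy word-vector table standing in for a trained language model.
vocab = {w: rng.normal(size=16) for w in
         "appropriation tax veterans study energy pilot repeal commemorate".split()}

def bill_text_vector(sentences):
    """Average the word vectors of each sentence, then average over sentences."""
    sent_vecs = [np.mean([vocab[w] for w in s.split() if w in vocab] or
                         [np.zeros(16)], axis=0) for s in sentences]
    return np.mean(sent_vecs, axis=0)

# Hypothetical training data: (sentences, context features, enacted label).
bills = [(["veterans appropriation study"], [1, 0], 1),
         (["repeal tax"], [0, 1], 0),
         (["energy pilot study"], [1, 1], 1),
         (["commemorate study"], [0, 0], 0)] * 10

X_text = np.array([bill_text_vector(s) for s, _, _ in bills])
X_ctx = np.array([c for _, c, _ in bills], dtype=float)   # e.g. sponsor-in-majority flags
y = np.array([lab for _, _, lab in bills])

text_model = LogisticRegression(max_iter=1000).fit(X_text, y)
ctx_model = LogisticRegression(max_iter=1000).fit(X_ctx, y)

# Simple ensemble: average the two predicted probabilities.
p = 0.5 * (text_model.predict_proba(X_text)[:, 1] + ctx_model.predict_proba(X_ctx)[:, 1])
print(np.round(p[:4], 2))
```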

  9. Cortical Brain Activity Reflecting Attentional Biasing Toward Reward-Predicting Cues Covaries with Economic Decision-Making Performance.

    Science.gov (United States)

    San Martín, René; Appelbaum, Lawrence G; Huettel, Scott A; Woldorff, Marty G

    2016-01-01

    Adaptive choice behavior depends critically on identifying and learning from outcome-predicting cues. We hypothesized that attention may be preferentially directed toward certain outcome-predicting cues. We studied this possibility by analyzing event-related potential (ERP) responses in humans during a probabilistic decision-making task. Participants viewed pairs of outcome-predicting visual cues and then chose to wager either a small (i.e., loss-minimizing) or large (i.e., gain-maximizing) amount of money. The cues were bilaterally presented, which allowed us to extract the relative neural responses to each cue by using a contralateral-versus-ipsilateral ERP contrast. We found an early lateralized ERP response, whose features matched the attention-shift-related N2pc component and whose amplitude scaled with the learned reward-predicting value of the cues as predicted by an attention-for-reward model. Consistently, we found a double dissociation involving the N2pc. Across participants, gain-maximization positively correlated with the N2pc amplitude to the most reliable gain-predicting cue, suggesting an attentional bias toward such cues. Conversely, loss-minimization was negatively correlated with the N2pc amplitude to the most reliable loss-predicting cue, suggesting an attentional avoidance toward such stimuli. These results indicate that learned stimulus-reward associations can influence rapid attention allocation, and that differences in this process are associated with individual differences in economic decision-making performance. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. Losing a dime with a satisfied mind: positive affect predicts less search in sequential decision making.

    Science.gov (United States)

    von Helversen, Bettina; Mata, Rui

    2012-12-01

    We investigated the contribution of cognitive ability and affect to age differences in sequential decision making by asking younger and older adults to shop for items in a computerized sequential decision-making task. Older adults performed poorly compared to younger adults partly due to searching too few options. An analysis of the decision process with a formal model suggested that older adults set lower thresholds for accepting an option than younger participants. Further analyses suggested that positive affect, but not fluid abilities, was related to search in the sequential decision task. A second study that manipulated affect in younger adults supported the causal role of affect: Increased positive affect lowered the initial threshold for accepting an attractive option. In sum, our results suggest that positive affect is a key factor determining search in sequential decision making. Consequently, increased positive affect in older age may contribute to poorer sequential decisions by leading to insufficient search. 2013 APA, all rights reserved

  11. Application of numerical analysis technique to make up for pipe wall thinning prediction program

    International Nuclear Information System (INIS)

    Hwang, Kyeong Mo; Jin, Tae Eun; Park, Won; Oh, Dong Hoon

    2009-01-01

    Flow Accelerated Corrosion (FAC) leads to wall thinning of steel piping exposed to flowing water or wet steam. Experience has shown that FAC damage to piping at fossil and nuclear plants can lead to costly outages and repairs and can affect plant reliability and safety. CHECWORKS has been utilized in domestic nuclear plants as a predictive tool to assist FAC engineers in planning inspections and evaluating the inspection data to prevent piping failures caused by FAC. However, CHECWORKS may occasionally miss locally susceptible portions because it predicts FAC damage by pipeline group after constructing a database for all secondary-side piping in a nuclear plant. This paper describes methodologies based on numerical analysis that can complement CHECWORKS and verify its prediction results. FAC-susceptible locations predicted by CHECWORKS for two pipeline groups of a nuclear plant were compared with those obtained from numerical analysis based on FLUENT.

  12. The Trail Making test: a study of its ability to predict falls in the acute neurological in-patient population.

    Science.gov (United States)

    Mateen, Bilal Akhter; Bussas, Matthias; Doogan, Catherine; Waller, Denise; Saverino, Alessia; Király, Franz J; Playford, E Diane

    2018-05-01

    To determine whether tests of cognitive function and patient-reported outcome measures of motor function can be used to create a machine learning-based predictive tool for falls. Prospective cohort study. Tertiary neurological and neurosurgical center. In all, 337 in-patients receiving neurosurgical, neurological, or neurorehabilitation-based care. Binary (Y/N) for falling during the in-patient episode, the Trail Making Test (a measure of attention and executive function) and the Walk-12 (a patient-reported measure of physical function). The principal outcome was a fall during the in-patient stay (n = 54). The Trail test was identified as the best predictor of falls. Moreover, the addition of other variables did not improve the prediction (Wilcoxon signed-rank P Test data (Wilcoxon signed-rank P test of cognitive function, the Trail Making test.

  13. Human glycemic response curves after intake of carbohydrate foods are accurately predicted by combining in vitro gastrointestinal digestion with in silico kinetic modeling

    Directory of Open Access Journals (Sweden)

    Susann Bellmann

    2018-02-01

    Conclusion: Based on the demonstrated accuracy and predictive quality, this in vitro–in silico technology can be used to test food products for their glycemic response under standardized conditions and may stimulate the production of (slow) carbs for the prevention of metabolic diseases.

  14. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lamberth, K; Harndahl, M

    2008-01-01

    NetMHC-3.0 is trained on a large number of quantitative peptide data using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC): peptide binding...

  15. Fecal Calprotectin is an Accurate Tool and Correlated to Seo Index in Prediction of Relapse in Iranian Patients With Ulcerative Colitis.

    Science.gov (United States)

    Hosseini, Seyed Vahid; Jafari, Peyman; Taghavi, Seyed Alireza; Safarpour, Ali Reza; Rezaianzadeh, Abbas; Moini, Maryam; Mehrabi, Manoosh

    2015-02-01

    The natural clinical course of Ulcerative Colitis (UC) is characterized by episodes of relapse and remission. Fecal Calprotectin (FC) is a relatively new marker of intestinal inflammation and is an available, non-expensive tool for predicting relapse of quiescent UC. The Seo colitis activity index is a clinical index for assessment of the severity of UC. The present study aimed to evaluate the accuracy of FC and the Seo colitis activity index and their correlation in prediction of UC exacerbation. In this prospective cohort study, 157 patients with clinical and endoscopic diagnosis of UC selected randomly from 1273 registered patients in Fars province's IBD registry center in Shiraz, Iran, were followed from October 2012 to October 2013 for 12 months or shorter, if they had a relapse. Two patients left the study before completion and one patient had relapse because of discontinuation of drugs. The participants' clinical and serum factors were evaluated every three months. Furthermore, stool samples were collected at the beginning of study and every three months and FC concentration (commercially available enzyme linked immunoassay) and the Seo Index were assessed. Then univariate analysis, multiple variable logistic regression, Receiver Operating Characteristics (ROC) curve analysis, and Pearson's correlation test (r) were used for statistical analysis of data. According to the results, 74 patients (48.1%) relapsed during the follow-up (33 men and 41 women). Mean ± SD of FC was 862.82 ± 655.97 μg/g and 163.19 ± 215.85 μg/g in relapsing and non-relapsing patients, respectively (P Seo index were significant predictors of relapse. ROC curve analysis of FC level and Seo activity index for prediction of relapse demonstrated area under the curve of 0.882 (P Seo index was significant in prediction of relapse (r = 0.63, P Seo activity index in prediction of relapse in the course of quiescent UC in Iranian patients.

  16. Optimization of decision making to avoid stochastically predicted air traffic conflicts

    Directory of Open Access Journals (Sweden)

    В.М. Васильєв

    2005-01-01

    Full Text Available A method is proposed for optimizing decision making when planning an aircraft trajectory to avoid a potential conflict while respecting the prescribed minimum separation standard. The conflict probability is evaluated and monitored using a probabilistic composite method.

  17. Making predictions of mangrove deforestation: a comparison of two methods in Kenya.

    Science.gov (United States)

    Rideout, Alasdair J R; Joshi, Neha P; Viergever, Karin M; Huxham, Mark; Briers, Robert A

    2013-11-01

    Deforestation of mangroves is of global concern given their importance for carbon storage, biogeochemical cycling and the provision of other ecosystem services, but the links between rates of loss and potential drivers or risk factors are rarely evaluated. Here, we identified key drivers of mangrove loss in Kenya and compared two different approaches to predicting risk. Risk factors tested included various possible predictors of anthropogenic deforestation, related to population, suitability for land use change and accessibility. Two approaches were taken to modelling risk; a quantitative statistical approach and a qualitative categorical ranking approach. A quantitative model linking rates of loss to risk factors was constructed based on generalized least squares regression and using mangrove loss data from 1992 to 2000. Population density, soil type and proximity to roads were the most important predictors. In order to validate this model it was used to generate a map of losses of Kenyan mangroves predicted to have occurred between 2000 and 2010. The qualitative categorical model was constructed using data from the same selection of variables, with the coincidence of different risk factors in particular mangrove areas used in an additive manner to create a relative risk index which was then mapped. Quantitative predictions of loss were significantly correlated with the actual loss of mangroves between 2000 and 2010 and the categorical risk index values were also highly correlated with the quantitative predictions. Hence, in this case the relatively simple categorical modelling approach was of similar predictive value to the more complex quantitative model of mangrove deforestation. The advantages and disadvantages of each approach are discussed, and the implications for mangroves are outlined. © 2013 Blackwell Publishing Ltd.
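
    The two approaches contrasted in the abstract can be mocked up side by side: a generalized least squares fit of loss rates on risk factors versus a simple additive index of categorised factors. The sketch below uses synthetic data and hypothetical variable names; with no error covariance supplied, the GLS fit reduces to ordinary least squares.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 200
pop_density = rng.lognormal(3, 1, n)
dist_to_road = rng.uniform(0, 20, n)          # km
soil_suitability = rng.integers(0, 2, n)      # 1 = suitable for conversion

# Synthetic annual loss rate driven by the three risk factors.
loss_rate = (0.002 * pop_density - 0.05 * dist_to_road +
             1.5 * soil_suitability + rng.normal(0, 1, n))

# Quantitative approach: generalized least squares regression.
X = sm.add_constant(np.column_stack([pop_density, dist_to_road, soil_suitability]))
gls_fit = sm.GLS(loss_rate, X).fit()
predicted_loss = gls_fit.predict(X)

# Qualitative approach: additive risk index from categorised factors.
risk_index = ((pop_density > np.median(pop_density)).astype(int) +
              (dist_to_road < np.median(dist_to_road)).astype(int) +
              soil_suitability)

print("correlation between the two approaches:",
      round(np.corrcoef(predicted_loss, risk_index)[0, 1], 2))
```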

  18. Efficient and accurate two-scale FE-FFT-based prediction of the effective material behavior of elasto-viscoplastic polycrystals

    Science.gov (United States)

    Kochmann, Julian; Wulfinghoff, Stephan; Ehle, Lisa; Mayer, Joachim; Svendsen, Bob; Reese, Stefanie

    2017-09-01

    Recently, two-scale FE-FFT-based methods (e.g., Spahn et al. in Comput Methods Appl Mech Eng 268:871-883, 2014; Kochmann et al. in Comput Methods Appl Mech Eng 305:89-110, 2016) have been proposed to predict the microscopic and overall mechanical behavior of heterogeneous materials. The purpose of this work is the extension of these methods to elasto-viscoplastic polycrystals, the development of efficient and robust Fourier solvers, and the prediction of micromechanical fields during macroscopic deformation processes. Assuming scale separation, the macroscopic problem is solved using the finite element method. The solution of the microscopic problem, which is embedded as a periodic unit cell (UC) in each macroscopic integration point, is found by employing fast Fourier transforms, fixed-point and Newton-Krylov methods. The overall material behavior is defined by the mean UC response. In order to ensure spatially converged micromechanical fields as well as feasible overall CPU times, an efficient but simple solution strategy for two-scale simulations is proposed. As an example, the constitutive behavior of 42CrMo4 steel is predicted during macroscopic three-point bending tests.
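
    The fixed-point Fourier solver mentioned above can be illustrated on a much simpler surrogate problem. The sketch below (not the authors' code) applies a Moulinec-Suquet-type basic scheme to a periodic scalar conductivity cell problem; grid size, contrast and tolerance are illustrative assumptions.

    ```python
    import numpy as np

    # Minimal sketch: basic fixed-point FFT scheme for a periodic scalar
    # conductivity cell problem, as a stand-in for the elasto-viscoplastic
    # micro-problem described in the abstract. All parameters are illustrative.

    N = 64                                   # grid points per direction
    k = np.ones((N, N))                      # matrix conductivity
    k[N//4:3*N//4, N//4:3*N//4] = 10.0       # square inclusion with contrast 10
    k0 = 0.5 * (k.min() + k.max())           # reference medium
    E = np.array([1.0, 0.0])                 # prescribed mean gradient

    freq = 2.0 * np.pi * np.fft.fftfreq(N)   # discrete frequencies
    xi = np.stack(np.meshgrid(freq, freq, indexing="ij"))   # shape (2, N, N)
    xi2 = (xi ** 2).sum(axis=0)
    xi2[0, 0] = 1.0                          # avoid division by zero at zero frequency

    e = np.broadcast_to(E[:, None, None], (2, N, N)).copy()  # initial gradient field
    for it in range(200):
        tau = (k - k0) * e                   # polarization field
        tau_hat = np.fft.fft2(tau, axes=(1, 2))
        # Green operator of the reference medium applied in Fourier space
        proj = (xi * tau_hat).sum(axis=0) / (k0 * xi2)
        e_hat = -xi * proj
        e_hat[:, 0, 0] = N * N * E           # enforce the prescribed mean gradient
        e_new = np.real(np.fft.ifft2(e_hat, axes=(1, 2)))
        err = np.linalg.norm(e_new - e) / np.linalg.norm(e_new)
        e = e_new
        if err < 1e-8:
            break

    flux = k * e
    print("effective conductivity (xx):", flux[0].mean() / E[0])
    ```

    In the elasto-viscoplastic setting the scalar constitutive relation is replaced by a nonlinear stress update per grid point, and the plain fixed-point iteration is typically accelerated (e.g., by Newton-Krylov methods), but the structure of the Fourier-space Green-operator step stays the same.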

  19. Individual differences in decision making and reward processing predict changes in cannabis use: a prospective functional magnetic resonance imaging study.

    Science.gov (United States)

    Cousijn, Janna; Wiers, Reinout W; Ridderinkhof, K Richard; van den Brink, Wim; Veltman, Dick J; Porrino, Linda J; Goudriaan, Anna E

    2013-11-01

    Decision-making deficits are thought to play an important role in the development and persistence of substance use disorders. Individual differences in decision-making abilities and their underlying neurocircuitry may, therefore, constitute an important predictor for the course of substance use and the development of substance use disorders. Here, we investigate the predictive value of decision making and neural mechanisms underlying decision making for future cannabis use and problem severity in a sample of heavy cannabis users. Brain activity during a monetary decision-making task (Iowa gambling task) was compared between 32 heavy cannabis users and 41 matched non-using controls using functional magnetic resonance imaging. In addition, within the group of heavy cannabis users, associations were examined between task-related brain activations, cannabis use and cannabis use-related problems at baseline, and change in cannabis use and problem severity after a 6-month follow-up. Despite normal task performance, heavy cannabis users compared with controls showed higher activation during wins in core areas associated with decision making. Moreover, within the group of heavy cannabis users, win-related activity and activity anticipating loss outcomes in areas generally involved in executive functions predicted change in cannabis use after 6 months. These findings are consistent with previous studies and point to abnormal processing of motivational information in heavy cannabis users. A new finding is that individuals who are biased toward immediate rewards have a higher probability of increasing drug use, highlighting the importance of the relative balance between motivational processes and regulatory executive processes in the development of substance use disorders. © 2012 The Authors, Addiction Biology © 2012 Society for the Study of Addiction.

  20. Issues and Importance of "Good" Starting Points for Nonlinear Regression for Mathematical Modeling with Maple: Basic Model Fitting to Make Predictions with Oscillating Data

    Science.gov (United States)

    Fox, William

    2012-01-01

    The purpose of our modeling effort is to predict future outcomes. We assume the data collected are both accurate and relatively precise. For our oscillating data, we examined several mathematical modeling forms for predictions. We also examined both ignoring the oscillations as an important feature and including the oscillations as an important…

  1. PredPPCrys: accurate prediction of sequence cloning, protein production, purification and crystallization propensity from protein sequences using multi-step heterogeneous feature fusion and selection.

    Directory of Open Access Journals (Sweden)

    Huilin Wang

    Full Text Available X-ray crystallography is the primary approach to solve the three-dimensional structure of a protein. However, a major bottleneck of this method is the failure of multi-step experimental procedures to yield diffraction-quality crystals, including sequence cloning, protein material production, purification, crystallization and ultimately, structural determination. Accordingly, prediction of the propensity of a protein to successfully undergo these experimental procedures based on the protein sequence may help narrow down laborious experimental efforts and facilitate target selection. A number of bioinformatics methods based on protein sequence information have been developed for this purpose. However, our knowledge on the important determinants of propensity for a protein sequence to produce high diffraction-quality crystals remains largely incomplete. In practice, most of the existing methods display poorer performance when evaluated on larger and updated datasets. To address this problem, we constructed an up-to-date dataset as the benchmark, and subsequently developed a new approach termed 'PredPPCrys' using the support vector machine (SVM. Using a comprehensive set of multifaceted sequence-derived features in combination with a novel multi-step feature selection strategy, we identified and characterized the relative importance and contribution of each feature type to the prediction performance of five individual experimental steps required for successful crystallization. The resulting optimal candidate features were used as inputs to build the first-level SVM predictor (PredPPCrys I. Next, prediction outputs of PredPPCrys I were used as the input to build second-level SVM classifiers (PredPPCrys II, which led to significantly enhanced prediction performance. Benchmarking experiments indicated that our PredPPCrys method outperforms most existing procedures on both up-to-date and previous datasets. In addition, the predicted crystallization
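
    The two-level architecture described above can be sketched generically. The following is a minimal stacking example (not the published PredPPCrys pipeline): out-of-fold probability outputs of first-level SVMs become the input features of a second-level SVM; all features and labels are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_predict, train_test_split
    from sklearn.metrics import roc_auc_score

    # Minimal two-level SVM sketch with synthetic data (not PredPPCrys itself).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 40))                  # stand-in sequence-derived features
    y = (X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=500) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # First level: several SVMs trained on different feature subsets
    # (mimicking per-step predictors); out-of-fold predictions avoid leakage.
    subsets = [slice(0, 10), slice(10, 20), slice(20, 30), slice(30, 40)]
    level1 = [SVC(probability=True, random_state=0) for _ in subsets]

    meta_tr = np.column_stack([
        cross_val_predict(clf, X_tr[:, s], y_tr, cv=5, method="predict_proba")[:, 1]
        for clf, s in zip(level1, subsets)
    ])
    for clf, s in zip(level1, subsets):
        clf.fit(X_tr[:, s], y_tr)
    meta_te = np.column_stack([
        clf.predict_proba(X_te[:, s])[:, 1] for clf, s in zip(level1, subsets)
    ])

    # Second level: an SVM trained on the first-level probability outputs.
    level2 = SVC(probability=True, random_state=0).fit(meta_tr, y_tr)
    print("level-2 AUC:", roc_auc_score(y_te, level2.predict_proba(meta_te)[:, 1]))
    ```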

  2. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    International Nuclear Information System (INIS)

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-01-01

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear–quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18–30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8–30.9 Gy) and 22.0 Gy (range, 20.2–26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD 50 , and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models traditionally used to estimate spinal cord NTCP
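
    For readers unfamiliar with the LKB model, the sketch below spells out the standard calculation (a generalized equivalent uniform dose followed by a probit link). The DVH and the parameter values are illustrative stand-ins, not the study's data.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Minimal LKB NTCP sketch. Parameter values below are common Emami-type
    # stand-ins for spinal cord, used only for illustration.

    def lkb_ntcp(dose_bins_gy, volume_fractions, td50=66.5, m=0.175, n=0.05):
        """NTCP from a differential DVH via the LKB model.

        gEUD = (sum_i v_i * D_i**(1/n))**n, t = (gEUD - TD50) / (m * TD50),
        NTCP = Phi(t), with Phi the standard normal CDF.
        """
        v = np.asarray(volume_fractions, dtype=float)
        v = v / v.sum()                                  # normalize the DVH
        geud = (v * np.asarray(dose_bins_gy) ** (1.0 / n)).sum() ** n
        t = (geud - td50) / (m * td50)
        return norm.cdf(t), geud

    # Hypothetical spinal cord DVH: most of the cord at low dose, a small hot spot.
    doses = np.array([2.0, 8.0, 14.0, 20.0])
    vols = np.array([0.70, 0.20, 0.08, 0.02])
    ntcp, geud = lkb_ntcp(doses, vols)
    print(f"gEUD = {geud:.1f} Gy, NTCP = {ntcp:.3%}")
    ```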

  3. Measuring Personality in Context: Improving Predictive Accuracy in Selection Decision Making

    OpenAIRE

    Hoffner, Rebecca Ann

    2009-01-01

    This study examines the accuracy of a context-sensitive (i.e., goal dimensions) measure of personality compared to a traditional measure of personality (NEO-PI-R) and generalized self-efficacy (GSE) to predict variance in task performance. The goal dimensions measure takes a unique perspective in the conceptualization of personality. While traditional measures differentiate within person and collapse across context (e.g., Big Five), the goal dimensions measure employs a hierarchical structure...

  4. Building a Predictive Capability for Decision-Making that Supports MultiPEM

    Energy Technology Data Exchange (ETDEWEB)

    Carmichael, Joshua Daniel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-20

    Multi-phenomenological explosion monitoring (multiPEM) is a developing science that uses multiple geophysical signatures of explosions to better identify and characterize their sources. MultiPEM researchers seek to integrate explosion signatures together to provide stronger detection, parameter estimation, or screening capabilities between different sources or processes. This talk will address forming a predictive capability for screening waveform explosion signatures to support multiPEM.

  5. Factors Influencing the Predictive Power of Models for Predicting Mortality and/or Heart Failure Hospitalization in Patients With Heart Failure

    NARCIS (Netherlands)

    Ouwerkerk, Wouter; Voors, Adriaan A.; Zwinderman, Aeilko H.

    2014-01-01

    The present paper systematically reviews and compares existing prediction models in order to establish the strongest variables, models, and model characteristics for predicting outcome in patients with heart failure, with the aim of improving decision making by accurately predicting mortality and heart-failure hospitalization.

  6. Benchmarking of density functionals for a soft but accurate prediction and assignment of (1) H and (13)C NMR chemical shifts in organic and biological molecules.

    Science.gov (United States)

    Benassi, Enrico

    2017-01-15

    A number of programs and tools that simulate 1H and 13C nuclear magnetic resonance (NMR) chemical shifts using empirical approaches are available. These tools are user-friendly, but they provide a very rough (and sometimes misleading) estimation of the NMR properties, especially for complex systems. Rigorous and reliable ways to predict and interpret NMR properties of simple and complex systems are available in many popular computational program packages. Nevertheless, experimentalists keep relying on these "unreliable" tools in their daily work because, to achieve sufficiently high accuracy, the rigorous quantum mechanical methods need high levels of theory. An alternative, efficient, semi-empirical approach has been proposed by Bally, Rablen, Tantillo, and coworkers. This idea consists of creating linear calibration models on the basis of the application of different combinations of functionals and basis sets. Following this approach, the predictive capability of a wider range of popular functionals was systematically investigated and tested. The NMR chemical shifts were computed in solvated phase at the density functional theory level, using 30 different functionals coupled with three different triple-ζ basis sets. © 2016 Wiley Periodicals, Inc.
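
    The linear-calibration idea can be summarized in a few lines. The sketch below regresses experimental shifts against computed isotropic shieldings for a small training set and applies the fit to a new shielding; the numbers are made-up placeholders, not benchmark data.

    ```python
    import numpy as np

    # Minimal sketch of Bally/Rablen-style linear scaling of computed shieldings.
    sigma_calc = np.array([31.8, 29.5, 27.1, 24.9, 22.3])   # computed 1H shieldings (ppm)
    delta_exp = np.array([0.9, 2.1, 4.3, 6.8, 8.9])         # experimental shifts (ppm)

    slope, intercept = np.polyfit(sigma_calc, delta_exp, 1)  # delta = slope*sigma + intercept

    def predict_shift(sigma):
        """Scaled chemical shift from a computed shielding, via the linear model."""
        return slope * sigma + intercept

    print(f"delta = {slope:.3f} * sigma + {intercept:.2f}")
    print("predicted shift for sigma = 26.0 ppm:", round(predict_shift(26.0), 2), "ppm")
    ```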

  7. Prefrontal spatial working memory network predicts animal's decision making in a free choice saccade task.

    Science.gov (United States)

    Mochizuki, Kei; Funahashi, Shintaro

    2016-01-01

    While neurons in the lateral prefrontal cortex (PFC) encode spatial information during the performance of working memory tasks, they are also known to participate in subjective behavior such as spatial attention and action selection. In the present study, we analyzed the activity of primate PFC neurons during the performance of a free choice memory-guided saccade task in which the monkeys needed to choose a saccade direction by themselves. In trials when the receptive field location was subsequently chosen by the animal, PFC neurons with spatially selective visual response started to show greater activation before cue onset. This result suggests that the fluctuation of firing before cue presentation prematurely biased the representation of a certain spatial location and eventually encouraged the subsequent choice of that location. In addition, modulation of the activity by the animal's choice was observed only in neurons with high sustainability of activation and was also dependent on the spatial configuration of the visual cues. These findings were consistent with known characteristics of PFC neurons in information maintenance in spatial working memory function. These results suggest that precue fluctuation of spatial representation was shared and enhanced through the working memory network in the PFC and could finally influence the animal's free choice of saccade direction. The present study revealed that the PFC plays an important role in decision making in a free choice condition and that the dynamics of decision making are constrained by the network architecture embedded in this cortical area. Copyright © 2016 the American Physiological Society.

  8. Prefrontal spatial working memory network predicts animal's decision making in a free choice saccade task

    Science.gov (United States)

    Mochizuki, Kei

    2015-01-01

    While neurons in the lateral prefrontal cortex (PFC) encode spatial information during the performance of working memory tasks, they are also known to participate in subjective behavior such as spatial attention and action selection. In the present study, we analyzed the activity of primate PFC neurons during the performance of a free choice memory-guided saccade task in which the monkeys needed to choose a saccade direction by themselves. In trials when the receptive field location was subsequently chosen by the animal, PFC neurons with spatially selective visual response started to show greater activation before cue onset. This result suggests that the fluctuation of firing before cue presentation prematurely biased the representation of a certain spatial location and eventually encouraged the subsequent choice of that location. In addition, modulation of the activity by the animal's choice was observed only in neurons with high sustainability of activation and was also dependent on the spatial configuration of the visual cues. These findings were consistent with known characteristics of PFC neurons in information maintenance in spatial working memory function. These results suggest that precue fluctuation of spatial representation was shared and enhanced through the working memory network in the PFC and could finally influence the animal's free choice of saccade direction. The present study revealed that the PFC plays an important role in decision making in a free choice condition and that the dynamics of decision making are constrained by the network architecture embedded in this cortical area. PMID:26490287

  9. Personality Makes a Difference: Attachment Orientation Moderates Theory of Planned Behavior Prediction of Cardiac Medication Adherence.

    Science.gov (United States)

    Peleg, Shira; Vilchinsky, Noa; Fisher, William A; Khaskia, Abed; Mosseri, Morris

    2017-12-01

    To achieve a comprehensive understanding of patients' adherence to medication following acute coronary syndrome (ACS), we assessed the possible moderating role played by attachment orientation on the effects of attitudes, subjective norms, and perceived behavioral control (PBC), as derived from the Theory of Planned Behavior (TPB; Ajzen, 1991), on intention and reported adherence. A prospective longitudinal design was employed. During hospitalization, ACS male patients (N = 106) completed a set of self-report questionnaires including sociodemographic variables, attachment orientation, and measures of TPB constructs. Six months post-discharge, 90 participants completed a questionnaire measuring adherence to medication. Attachment orientations moderated some of the predictions of the TPB model. PBC predicted intention and reported adherence, but these associations were found to be significant only among individuals with lower, as opposed to higher, attachment anxiety. The association between attitudes and intention was stronger among individuals with higher, as opposed to lower, attachment anxiety. Only among individuals with higher attachment avoidance, subjective norms were negatively associated with intention to take medication. Cognitive variables appear to explain both adherence intention and behavior, but differently, depending on individuals' attachment orientations. Integrating personality and cognitive models may prove effective in understanding patients' health behaviors. © 2016 Wiley Periodicals, Inc.

  10. Reward rate optimization in two-alternative decision making: empirical tests of theoretical predictions.

    Science.gov (United States)

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D

    2009-12-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
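
    The reward-rate-maximizing threshold referred to above follows from the standard closed-form DDM expressions. The sketch below evaluates the reward rate over a range of thresholds and locates the maximum; all parameter values are illustrative assumptions, not fitted values from the study.

    ```python
    import numpy as np

    # Minimal sketch using the closed-form DDM expressions for an unbiased
    # starting point (after Bogacz et al., 2006). Parameters are illustrative.

    def ddm_er_dt(a, z, c):
        """Error rate and mean decision time for drift a, thresholds +/-z, noise c."""
        k = 2.0 * a * z / c**2
        er = 1.0 / (1.0 + np.exp(k))
        dt = (z / a) * np.tanh(a * z / c**2)
        return er, dt

    def reward_rate(z, a=0.1, c=0.33, t0=0.3, rsi=1.0):
        """Correct responses per second: RR = (1 - ER) / (DT + T0 + RSI)."""
        er, dt = ddm_er_dt(a, z, c)
        return (1.0 - er) / (dt + t0 + rsi)

    thresholds = np.linspace(0.01, 1.0, 500)
    rr = np.array([reward_rate(z) for z in thresholds])
    z_opt = thresholds[np.argmax(rr)]
    print(f"reward-rate-maximizing threshold: z* = {z_opt:.3f}, RR = {rr.max():.3f} /s")
    ```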

  11. Accurate quantum chemical calculations

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  12. What Money Can't Buy: Different Patterns in Decision Making About Sex and Money Predict Past Sexual Coercion Perpetration.

    Science.gov (United States)

    Carrier Emond, Fannie; Gagnon, Jean; Nolet, Kevin; Cyr, Gaëlle; Rouleau, Joanne-Lucine

    2018-02-01

    Self-reported impulsivity has been found to predict the perpetration of sexual coercion in both sexual offenders and male college students. Impulsivity can be conceptualized as a generalized lack of self-control (i.e., general perspective) or as a multifaceted construct that can vary from one context to the other (i.e., domain-specific perspective). Delay discounting, the tendency to prefer sooner smaller rewards over larger delayed rewards, is a measure of impulsive decision making. Recent sexual adaptations of delay discounting tasks can be used to test domain-specific assumptions. The present study used the UPPS-P impulsivity questionnaire, a standard money discounting task, and a sexual discounting task to predict past use of sexual coercion in a sample of 98 male college students. Results indicated that higher negative urgency scores, less impulsive money discounting, and more impulsive sexual discounting all predicted sexual coercion. Consistent with previous studies, sexuality was discounted more steeply than money by both perpetrators and non-perpetrators of sexual coercion, but this difference was twice as large in perpetrators compared to non-perpetrators. Our study identified three different predictors of sexual coercion in male college students: a broad tendency to act rashly under negative emotions, a specific difficulty to postpone sexual gratification, and a pattern of optimal non-sexual decision making. Results highlight the importance of using multiple measures, including sexuality-specific measures, to get a clear portrait of the links between impulsivity and sexual coercion.
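
    The discounting measure underlying these tasks is conventionally summarized by a single rate parameter. The sketch below fits the standard hyperbolic model V = A / (1 + kD) to invented indifference points; it illustrates the measure, not the study's analysis.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Minimal hyperbolic discounting sketch; indifference points are invented.
    delays_days = np.array([1, 7, 30, 90, 180, 365], dtype=float)
    indifference = np.array([0.95, 0.85, 0.60, 0.40, 0.28, 0.18])  # fraction of the full reward

    def hyperbolic(d, k):
        return 1.0 / (1.0 + k * d)

    (k_hat,), _ = curve_fit(hyperbolic, delays_days, indifference, p0=[0.01])
    print(f"fitted discounting rate k = {k_hat:.4f} per day")
    print("larger k = steeper discounting, i.e. a stronger preference for immediate rewards")
    ```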

  13. The information value of early career productivity in mathematics: a ROC analysis of prediction errors in bibliometricly informed decision making.

    Science.gov (United States)

    Lindahl, Jonas; Danell, Rickard

    The aim of this study was to provide a framework to evaluate bibliometric indicators as decision support tools from a decision making perspective and to examine the information value of early career publication rate as a predictor of future productivity. We used ROC analysis to evaluate a bibliometric indicator as a tool for binary decision making. The dataset consisted of 451 early career researchers in the mathematical sub-field of number theory. We investigated the effect of three different definitions of top performance groups (top 10, top 25, and top 50%); the consequences of using different thresholds in the prediction models; and the added prediction value of information on early career research collaboration and publications in prestige journals. We conclude that early career publication rate has information value in all tested decision scenarios, but future performance is more predictable if the definition of a high performance group is more exclusive. Estimated optimal decision thresholds using the Youden index indicated that the top 10% decision scenario should use 7 articles, the top 25% scenario should use 7 articles, and the top 50% scenario should use 5 articles to minimize prediction errors. A comparative analysis between the decision thresholds provided by the Youden index, which takes consequences into consideration, and a method commonly used in evaluative bibliometrics, which does not take consequences into consideration when determining decision thresholds, indicated that the differences are trivial for the top 25 and top 50% groups. However, a statistically significant difference between the methods was found for the top 10% group. Information on early career collaboration and publication strategies did not add any prediction value to the bibliometric indicator publication rate in any of the models. The key contribution of this research is the focus on consequences in terms of prediction errors and the notion of transforming uncertainty
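
    The ROC and Youden-index machinery used above is straightforward to reproduce. The sketch below runs it on simulated publication counts and group labels (not the study's data): the Youden-optimal threshold is the score that maximizes sensitivity + specificity - 1.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Minimal ROC / Youden-index sketch on simulated early-career data.
    rng = np.random.default_rng(1)
    n = 451
    top = rng.random(n) < 0.10                           # e.g., a top-10% scenario
    pubs = rng.poisson(lam=np.where(top, 8.0, 3.0))      # early-career publication counts

    fpr, tpr, thresholds = roc_curve(top, pubs)
    youden = tpr - fpr                                   # J = sensitivity + specificity - 1
    best = np.argmax(youden)
    print("AUC:", round(roc_auc_score(top, pubs), 3))
    print("Youden-optimal threshold:", thresholds[best], "articles")
    print("sensitivity/specificity at threshold:", round(tpr[best], 2), "/", round(1 - fpr[best], 2))
    ```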

  14. Making coarse grained polymer simulations quantitatively predictive for statics and dynamics

    Science.gov (United States)

    Kremer, Kurt

    2010-03-01

    By combining input from short simulation runs of rather small systems with all atomistic details with properly adapted coarse-grained models, we are able to quantitatively predict static and especially dynamical properties both of pure melts of long, fully entangled polymers and of systems with low-molecular-weight additives. Comparisons with rather different experiments, such as diffusion constant measurements or NMR relaxation experiments, show remarkable quantitative agreement without any adjustable parameter. Reintroduction of chemical details into the coarse-grained trajectories allows the study of long-time trajectories in full atomistic detail, providing the opportunity for rather different means of data analysis. References: V. Harmandaris, K. Kremer, Macromolecules, in press (2009) V. Harmandaris et al, Macromolecules, 40, 7026 (2007) B. Hess, S. Leon, N. van der Vegt, K. Kremer, Soft Matter 2, 409 (2006) D. Fritz et al, Soft Matter 5, 4556 (2009)

  15. The predictive value of microbiological findings on teeth, internal and external implant portions in clinical decision making.

    Science.gov (United States)

    Canullo, Luigi; Radovanović, Sandro; Delibasic, Boris; Blaya, Juan Antonio; Penarrocha, David; Rakic, Mia

    2017-05-01

    The primary aim of this study was to evaluate 23 pathogens associated with peri-implantitis at the inner part of implant connections and in peri-implant and periodontal pockets, comparing patients suffering from peri-implantitis with participants with healthy peri-implant tissues; the secondary aim was to estimate the predictive value of the microbiological profile in patients wearing dental implants using data mining methods. Fifty participants included in the present case-control study were scheduled for collection of plaque samples from the peri-implant pockets, internal connection, and periodontal pocket. Real-time polymerase chain reaction was performed to quantify the 23 pathogens. Three predictive models were developed using C4.5 decision trees to estimate the predictive value of the microbiological profile at the three experimental sites. The final sample included 47 patients (22 healthy controls and 25 diseased cases) and 90 implants (43 with healthy peri-implant tissues and 47 affected by peri-implantitis). Total and mean pathogen counts at the inner portion of the implant connection and in peri-implant and periodontal pockets were generally increased in peri-implantitis patients compared with healthy controls. The microbiological profiles at the inner portion of the implant connection, the periodontal pocket, and the peri-implant pocket yielded predictive accuracies of 82.78%, 94.31%, and 97.5%, respectively. This study showed that the microbiological profile at all three experimental sites differs between patients suffering from peri-implantitis and healthy controls. Data mining analysis identified Parvimonas micra as a highly accurate predictor of peri-implantitis when present in the peri-implant pocket, while the method generally seems promising for the diagnosis of such complex infections. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
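
    As a rough analogue of the tree-based analysis described above, the sketch below fits a shallow decision tree to simulated pathogen counts. Note that the study used C4.5, whereas scikit-learn's DecisionTreeClassifier implements CART, a related but different algorithm; the data and the "key pathogen" role are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.model_selection import cross_val_score

    # Minimal decision-tree sketch on simulated pathogen counts (CART, not C4.5).
    rng = np.random.default_rng(2)
    n_sites = 90
    counts = rng.lognormal(mean=4.0, sigma=1.5, size=(n_sites, 23))   # 23 pathogens
    # Assume (illustratively) that pathogen 0 plays the role of a key predictor.
    peri_implantitis = (np.log(counts[:, 0]) + rng.normal(scale=1.0, size=n_sites) > 4.5).astype(int)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    acc = cross_val_score(tree, counts, peri_implantitis, cv=5).mean()
    tree.fit(counts, peri_implantitis)
    print(f"cross-validated accuracy: {acc:.2%}")
    print(export_text(tree, feature_names=[f"pathogen_{i}" for i in range(23)]))
    ```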

  16. Ordering decision-making methods on spare parts for a new aircraft fleet based on a two-sample prediction

    International Nuclear Information System (INIS)

    Yongquan, Sun; Xi, Chen; He, Ren; Yingchao, Jin; Quanwu, Liu

    2016-01-01

    Ordering decision-making on spare parts is crucial in maximizing aircraft utilization and minimizing total operating cost. Extensive research on spare parts inventory management and optimal allocation can be found that relies on large amounts of historical operation data or condition-monitoring data. However, it is challenging to make an ordering decision on spare parts when a fleet is established by introducing new aircraft with little historical data. In this paper, the spare parts supporting policy and ordering decision-making policy for a new aircraft fleet are analyzed first. Then two-sample predictions for a Weibull distribution and a Weibull process are incorporated into forecasts of the first failure time and of the number of failures during a given time period, using Bayesian and classical methods respectively, from which the ordering time and ordering quantity for spare parts are identified. Finally, a case study is presented to illustrate the methods of identifying the ordering time and ordering quantity of engine-driven pumps by forecasting the failure time and failure number, followed by a discussion of the impact of various fleet sizes on the prediction results. This method has the potential to decide the ordering time and quantity of spare parts when a new aircraft fleet is established. - Highlights: • A modeling framework of ordering spare parts for a new fleet is proposed. • Models for ordering time and number are established based on two-sample prediction. • The computation of future failure time is simplified using Newton's binomial law. • Comparison of the first failure time PDFs is used to identify process parameters. • Identification methods for spare parts are validated by an Engine Driven Pump case study.
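
    A drastically simplified, plug-in version of the spare-parts calculation can convey the idea. The sketch below (not the paper's Bayesian two-sample method) fits a Weibull life distribution to a handful of invented failure times and converts it into an expected number of failures, and hence an order quantity, for an assumed fleet size and planning horizon.

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    # Minimal plug-in sketch: Weibull fit from a small sample, then expected failures.
    observed_hours = np.array([820.0, 1150.0, 1400.0, 1675.0, 2100.0])  # invented pump failure times
    shape, loc, scale = weibull_min.fit(observed_hours, floc=0.0)        # fix location at zero

    fleet_size = 12            # pumps in the new fleet (assumption)
    horizon = 1000.0           # planning horizon in flight hours (assumption)

    p_fail = weibull_min.cdf(horizon, shape, loc=0.0, scale=scale)       # per-unit failure probability
    expected_failures = fleet_size * p_fail
    print(f"shape = {shape:.2f}, scale = {scale:.0f} h")
    print(f"expected failures within {horizon:.0f} h: {expected_failures:.1f}"
          f" -> order about {int(np.ceil(expected_failures))} spares")
    ```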

  17. Predicting Individual Differences in Decision-Making Process From Signature Movement Styles: An Illustrative Study of Leaders

    Directory of Open Access Journals (Sweden)

    Brenda L. Connors

    2013-09-01

    Full Text Available There has been a surge of interest in examining the utility of methods for capturing individual differences in decision-making style. We illustrate the potential offered by Movement Pattern Analysis (MPA), an observational methodology that has been used in business and by the U.S. Department of Defense to record body movements that provide predictive insight into individual differences in decision-making motivations and actions. Twelve military officers participated in an intensive two-hour interview that permitted detailed and fine-grained observation and coding of signature movements by trained practitioners using MPA. Three months later, these subjects completed four hypothetical decision-making tasks in which the amount of information sought out before coming to a decision, as well as the time spent on the tasks, were under the partial control of the subject. A composite MPA indicator of how a person allocates decision-making actions and motivations to balance both Assertion (exertion of tangible movement effort on the environment to make something occur) and Perspective (through movements that support shaping in the body to perceive and create a suitable environment for action) was highly correlated with the total number of information draws and total response time – individuals high on Assertion reached for less information and had faster response times than those high on Perspective. Discussion focuses on the utility of using movement-based observational measures to capture individual differences in decision-making style and the implications for application in applied settings geared toward investigations of experienced leaders and world statesmen where individuality rules the day.

  18. Predicting individual differences in decision-making process from signature movement styles: an illustrative study of leaders.

    Science.gov (United States)

    Connors, Brenda L; Rende, Richard; Colton, Timothy J

    2013-01-01

    There has been a surge of interest in examining the utility of methods for capturing individual differences in decision-making style. We illustrate the potential offered by Movement Pattern Analysis (MPA), an observational methodology that has been used in business and by the US Department of Defense to record body movements that provide predictive insight into individual differences in decision-making motivations and actions. Twelve military officers participated in an intensive 2-h interview that permitted detailed and fine-grained observation and coding of signature movements by trained practitioners using MPA. Three months later, these subjects completed four hypothetical decision-making tasks in which the amount of information sought out before coming to a decision, as well as the time spent on the tasks, were under the partial control of the subject. A composite MPA indicator of how a person allocates decision-making actions and motivations to balance both Assertion (exertion of tangible movement effort on the environment to make something occur) and Perspective (through movements that support shaping in the body to perceive and create a suitable viewpoint for action) was highly correlated with the total number of information draws and total response time-individuals high on Assertion reached for less information and had faster response times than those high on Perspective. Discussion focuses on the utility of using movement-based observational measures to capture individual differences in decision-making style and the implications for application in applied settings geared toward investigations of experienced leaders and world statesmen where individuality rules the day.

  19. N0/N1, PNL, or LNR? The effect of lymph node number on accurate survival prediction in pancreatic ductal adenocarcinoma.

    Science.gov (United States)

    Valsangkar, Nakul P; Bush, Devon M; Michaelson, James S; Ferrone, Cristina R; Wargo, Jennifer A; Lillemoe, Keith D; Fernández-del Castillo, Carlos; Warshaw, Andrew L; Thayer, Sarah P

    2013-02-01

    We evaluated the prognostic accuracy of LN variables (N0/N1), numbers of positive lymph nodes (PLN), and lymph node ratio (LNR) in the context of the total number of examined lymph nodes (ELN). Patients from SEER and a single institution (MGH) were reviewed and survival analyses performed in subgroups based on numbers of ELN to calculate excess risk of death (hazard ratio, HR). In SEER and MGH, higher numbers of ELN improved the overall survival for N0 patients. The prognostic significance (N0/N1) and PLN were too variable as the importance of a single PLN depended on the total number of LN dissected. LNR consistently correlated with survival once a certain number of lymph nodes were dissected (≥13 in SEER and ≥17 in the MGH dataset). Better survival for N0 patients with increasing ELN likely represents improved staging. PLN have some predictive value but the ELN strongly influence their impact on survival, suggesting the need for a ratio-based classification. LNR strongly correlates with outcome provided that a certain number of lymph nodes is evaluated, suggesting that the prognostic accuracy of any LN variable depends on the total number of ELN.

  20. Accurate predictions of spectroscopic and molecular properties of 27 Λ-S and 73 Ω states of AsS radical

    Science.gov (United States)

    Shi, Deheng; Song, Ziyue; Niu, Xianghong; Sun, Jinfeng; Zhu, Zunlue

    2016-01-01

    The PECs are calculated for the 27 Λ-S states and their corresponding 73 Ω states of the AsS radical. Of these Λ-S states, only the 2(2)Δ and 5(4)Π states are repulsive. The 1(2)Σ(+), 2(2)Σ(+), 4(2)Π, 3(4)Δ, 3(4)Σ(+), and 4(4)Π states possess double wells. The 3(2)Σ(+) state possesses three wells. The A(2)Π, 3(2)Π, 1(2)Φ, 2(4)Π, 3(4)Π, 2(4)Δ, 3(4)Δ, 1(6)Σ(+), and 1(6)Π states are inverted when the SO coupling effect is included. The 1(4)Σ(+), 2(4)Σ(+), 2(4)Σ(-), 2(4)Δ, 1(4)Φ, 1(6)Σ(+), and 1(6)Π states, the second wells of the 1(2)Σ(+), 3(4)Σ(+), 4(2)Π, 4(4)Π, and 3(4)Δ states, and the third well of the 3(2)Σ(+) state are very weakly-bound states. The PECs are extrapolated to the CBS limit. The effect of SO coupling on the PECs is discussed. The spectroscopic parameters are evaluated and compared with available measurements and other theoretical results. The vibrational properties of several weakly-bound states are determined. The spectroscopic properties reported here are expected to be reliable predictions.

  1. Eradicating BVD, reviewing Irish programme data and model predictions to support prospective decision making.

    Science.gov (United States)

    Thulke, H-H; Lange, M; Tratalos, J A; Clegg, T A; McGrath, G; O'Grady, L; O'Sullivan, P; Doherty, M L; Graham, D A; More, S J

    2018-02-01

    Bovine Viral Diarrhoea is an infectious production disease of major importance in many cattle sectors of the world. The infection is predominantly transmitted by animal contact. Postnatal infections are transient, leading to immunologically protected cattle. However, for a certain window of pregnancy, in utero infection of the foetus results in persistently infected (PI) calves being the major risk of BVD spread, but also an efficient target for controlling the infection. There are two acknowledged strategies to identify PI animals for removal: tissue tag testing (direct; also known as the Swiss model) and serological screening (indirect by interpreting the serological status of the herd; the Scandinavian model). Both strategies are effective in reducing PI prevalence and herd incidence. During the first four years of the Irish national BVD eradication programme (2013-16), it has been mandatory for all newborn calves to be tested using tissue tag testing. During this period, PI incidence has substantially declined. In recent times, there has been interest among stakeholders in a change to an indirect testing strategy, with potential benefit to the overall programme, particularly with respect to cost to farmers. Advice was sought on the usefulness of implementing the necessary changes. Here we review available data from the national eradication programme and strategy performance predictions from an expert system model to quantify expected benefits of the strategy change from strategic, budgetary and implementation points of view. Key findings from our work include (i) drawbacks associated with changes to programme implementation, in particular the loss of epidemiological information to allow real-time monitoring of eradication progress or to reliably predict time to eradication, (ii) the fact that only 25% of the herds in the Irish cattle sector (14% beef, 78% dairy herds) would benefit financially from a change to serosurveillance, with half of these participants

  2. Accurate predictions of spectroscopic and molecular properties of 27 Λ-S and 73 Ω states of AsS radical.

    Science.gov (United States)

    Shi, Deheng; Song, Ziyue; Niu, Xianghong; Sun, Jinfeng; Zhu, Zunlue

    2016-01-15

    The PECs are calculated for the 27 Λ-S states and their corresponding 73 Ω states of the AsS radical. Of these Λ-S states, only the 2(2)Δ and 5(4)Π states are repulsive. The 1(2)Σ(+), 2(2)Σ(+), 4(2)Π, 3(4)Δ, 3(4)Σ(+), and 4(4)Π states possess double wells. The 3(2)Σ(+) state possesses three wells. The A(2)Π, 3(2)Π, 1(2)Φ, 2(4)Π, 3(4)Π, 2(4)Δ, 3(4)Δ, 1(6)Σ(+), and 1(6)Π states are inverted with the SO coupling effect included. The 1(4)Σ(+), 2(4)Σ(+), 2(4)Σ(-), 2(4)Δ, 1(4)Φ, 1(6)Σ(+), and 1(6)Π states, the second wells of the 1(2)Σ(+), 3(4)Σ(+), 4(2)Π, 4(4)Π, and 3(4)Δ states, and the third well of the 3(2)Σ(+) state are very weakly-bound states. The PECs are extrapolated to the CBS limit. The effect of SO coupling on the PECs is discussed. The spectroscopic parameters are evaluated, and compared with available measurements and other theoretical ones. The vibrational properties of several weakly-bound states are determined. The spectroscopic properties reported here are expected to be reliable predictions. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. The M. D. Anderson Symptom Inventory-Head and Neck Module, a Patient-Reported Outcome Instrument, Accurately Predicts the Severity of Radiation-Induced Mucositis

    International Nuclear Information System (INIS)

    Rosenthal, David I.; Mendoza, Tito R.; Chambers, Mark; Burkett, V. Shannon; Garden, Adam S.; Hessell, Amy C.; Lewin, Jan S.; Ang, K. Kian; Kies, Merrill S.; Gning, Ibrahima; Wang, Xin S.; Cleeland, Charles S.

    2008-01-01

    Purpose: To compare the M. D. Anderson Symptom Inventory-Head and Neck (MDASI-HN) module, a symptom burden instrument, with the Functional Assessment of Cancer Therapy-Head and Neck (FACT-HN) module, a quality-of-life instrument, for the assessment of mucositis in patients with head-and-neck cancer treated with radiotherapy and to identify the most distressing symptoms from the patient's perspective. Methods and Materials: Consecutive patients with head-and-neck cancer (n = 134) completed the MDASI-HN and FACT-HN before radiotherapy (time 1) and after 6 weeks of radiotherapy or chemoradiotherapy (time 2). The mean global and subscale scores for each instrument were compared with the objective mucositis scores determined from the National Cancer Institute Common Terminology Criteria for Adverse Events, version 3.0. Results: The global and subscale scores for each instrument showed highly significant changes from time 1 to time 2 and a significant correlation with the objective mucositis scores at time 2. Only the MDASI scores, however, were significant predictors of objective Common Terminology Criteria for Adverse Events mucositis scores on multivariate regression analysis (standardized regression coefficient, 0.355 for the global score and 0.310 for the head-and-neck cancer-specific score). Most of the moderate and severe symptoms associated with mucositis as identified on the MDASI-HN are not present on the FACT-HN. Conclusion: Both the MDASI-HN and FACT-HN modules can predict the mucositis scores. However, the MDASI-HN, a symptom burden instrument, was more closely associated with the severity of radiation-induced mucositis than the FACT-HN on multivariate regression analysis. This greater association was most likely related to the inclusion of a greater number of face-valid mucositis-related items in the MDASI-HN compared with the FACT-HN

  4. Dead certain: confidence and conservatism predict aggression in simulated international crisis decision-making.

    Science.gov (United States)

    Johnson, Dominic D P; McDermott, Rose; Cowden, Jon; Tingley, Dustin

    2012-03-01

    Evolutionary psychologists have suggested that confidence and conservatism promoted aggression in our ancestral past, and that this may have been an adaptive strategy given the prevailing costs and benefits of conflict. However, in modern environments, where the costs and benefits of conflict can be very different owing to the involvement of mass armies, sophisticated technology, and remote leadership, evolved tendencies toward high levels of confidence and conservatism may continue to be a contributory cause of aggression despite leading to greater costs and fewer benefits. The purpose of this paper is to test whether confidence and conservatism are indeed associated with greater levels of aggression-in an explicitly political domain. We present the results of an experiment examining people's levels of aggression in response to hypothetical international crises (a hostage crisis, a counter-insurgency campaign, and a coup). Levels of aggression (which range from concession to negotiation to military attack) were significantly predicted by subjects' (1) confidence that their chosen policy would succeed, (2) score on a liberal-conservative scale, (3) political party affiliation, and (4) preference for the use of military force in real-world U.S. policy toward Iraq and Iran. We discuss the possible adaptive and maladaptive implications of confidence and conservatism for the prospects of war and peace in the modern world.

  5. Do you use your head or follow your heart? Self-location predicts personality, emotion, decision making, and performance.

    Science.gov (United States)

    Fetterman, Adam K; Robinson, Michael D

    2013-08-01

    The head is thought to be rational and cold, whereas the heart is thought to be emotional and warm. In 8 studies (total N = 725), we pursued the idea that such body metaphors are widely consequential. Study 1 introduced a novel individual difference variable, one asking people to locate the self in the head or the heart. Irrespective of sex differences, head-locators characterized themselves as rational, logical, and interpersonally cold, whereas heart-locators characterized themselves as emotional, feminine, and interpersonally warm (Studies 1-3). Study 4 showed that head-locators were more accurate in answering general knowledge questions and had higher grade point averages, and Study 5 showed that heart-locators were more likely to favor emotional over rational considerations in moral decision making. Study 6 linked self-locations to reactivity phenomena in daily life--for example, heart-locators experienced greater negative emotion on high stressor days. In Study 7, we manipulated attention to the head versus the heart and found that head-pointing facilitated intellectual performance, whereas heart-pointing led to emotional decision making. Study 8 replicated Study 3's findings with a nearly year-long delay between the self-location and outcome measures. The findings converge on the importance of head-heart metaphors for understanding individual differences in cognition, emotion, and performance.

  6. Happy classes make happy students: Classmates' well-being predicts individual student well-being.

    Science.gov (United States)

    King, Ronnel B; Datu, Jesus Alfonso

    2017-12-01

    Student well-being has mostly been studied as an individual phenomenon with little research investigating how the well-being of one's classmates could influence a student's well-being. The aim of the current study was to examine how the aggregate well-being of students who comprise a class could predict students' subsequent well-being (Time 2 well-being) after controlling for the effects of prior well-being (Time 1 well-being) as well as key demographic variables such as gender and age. Two studies among Filipino secondary school students were conducted. In Study 1, 788 students from 21 classes participated; in Study 2, 404 students from 10 classes participated. For Study 1, questionnaires assessing students' life satisfaction, positive affect and negative affect were administered twice seven months apart. For Study 2, the well-being questionnaires were administered twice, three months apart. Hierarchical linear modeling was used with level 1 (Time 1 individual well-being, gender, and age) and level 2 (class well-being) predictors. Results across the two studies provided converging lines of evidence: students who were in classes with higher levels of life satisfaction and positive affect were also more likely to have higher life satisfaction and positive affect at Time 2. The study indicated that the well-being of a student partly depends on the well-being of their classmates providing evidence for the social contagion of well-being in the classroom context. Copyright © 2017 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  7. Informed decision making about predictive DNA tests: arguments for more public visibility of personal deliberations about the good life.

    Science.gov (United States)

    Boenink, Marianne; van der Burg, Simone

    2010-05-01

    Since its advent, predictive DNA testing has been perceived as a technology that may have considerable impact on the quality of people's life. The decision whether or not to use this technology is up to the individual client. However, to enable well considered decision making both the negative as well as the positive freedom of the individual should be supported. In this paper, we argue that current professional and public discourse on predictive DNA-testing is lacking when it comes to supporting positive freedom, because it is usually framed in terms of risk and risk management. We show how this 'risk discourse' steers thinking on the good life in a particular way. We go on to argue that empirical research into the actual deliberation and decision making processes of individuals and families may be used to enrich the environment of personal deliberation in three ways: (1) it points at a richer set of values that deliberators can take into account, (2) it acknowledges the shared nature of genes, and (3) it shows how one might frame decisions in a non-binary way. We argue that the public sharing and discussing of stories about personal deliberations offers valuable input for others who face similar choices: it fosters their positive freedom to shape their view of the good life in relation to DNA-diagnostics. We conclude by offering some suggestions as to how to realize such public sharing of personal stories.

  8. Fate and Prediction of Phenolic Secoiridoid Compounds throughout the Different Stages of the Virgin Olive Oil Making Process.

    Science.gov (United States)

    Fregapane, Giuseppe; Salvador, M Desamparados

    2017-08-03

    The evolution of the main phenolic secoiridoid compounds throughout the different stages of the virgin olive oil making process (crushing, malaxation, and liquid-solid separation) is studied here, with the goal of making it possible to predict the partitioning and transformations that take place in the different steps of the process. The concentrations of hydroxytyrosol secoiridoids produced under the different crushing conditions studied are reasonably proportional to the intensity of the milling stage and strongly depend on the olive variety processed. During malaxation, the content of the main phenolic secoiridoids is reduced, especially in the case of the hydroxytyrosol derivatives, for which a variety-dependent behaviour is observed. The prediction of the concentration of phenolic secoiridoids finally transferred from the kneaded paste to the virgin olive oil is also feasible, and depends on the phenolic content and amount of water in the olive paste. The determination of the phenolic compounds in the olive fruit, olive paste and olive oil has been carried out by LC-MS (Liquid Chromatography-Mass Spectrometry). This improved knowledge could help in the use of more adequate processing conditions for the production of virgin olive oil with desired properties, for example higher or lower phenolic content, as the amount of these minor components is directly related to the oil's sensory, antioxidant and health properties.

  9. On the use and potential use of seasonal to decadal climate predictions for decision-making in Europe

    Science.gov (United States)

    Soares, Marta Bruno; Dessai, Suraje

    2014-05-01

    The need for climate information to help inform decision-making in sectors susceptible to climate events and impacts is widely recognised. In Europe, developments in the science and models underpinning the study of climate variability and change have led to an increased interest in seasonal to decadal climate predictions (S2DCP). While seasonal climate forecasts are now routinely produced operationally by a number of centres around the world, decadal climate predictions are still in its infancy restricted to the realm of research. Contrary to other regions of the world, where the use of these types of forecasts, particularly at seasonal timescales, has been pursued in recent years due to higher levels of predictability, little is known about the uptake and climate information needs of end-users regarding S2DCP in Europe. To fill this gap we conducted in-depth interviews with experts and decision-makers across a range of European sectors, a workshop with European climate services providers, and a systematic literature review on the use of S2DCP in Europe. This study is part of the EUropean Provision Of Regional Impact Assessment on a Seasonal-to-decadal timescale (EUPORIAS) project which aims to develop semi-operational prototypes of impact prediction systems in Europe on seasonal to decadal timescales. We found that the emerging landscape of users and potential users of S2DCP in Europe is complex and heterogeneous. Differences in S2DCP information needs across and within organisations and sectors are largely underpinned by factors such as the institutional and regulatory context of the organisations, the plethora of activities and decision-making processes involved, the level of expertise and capacity of the users, and the availability of resources within the organisations. In addition, although the use of S2DCP across Europe is still fairly limited, particular sectors such as agriculture, health, energy, water, (re)insurance, and transport are taking the lead on

  10. Multicriteria decision-making analysis based methodology for predicting carbonate rocks' uniaxial compressive strength

    Directory of Open Access Journals (Sweden)

    Ersoy Hakan

    2012-10-01

    Full Text Available Uniaxial compressive strength (UCS) deals with a material's ability to withstand axially directed pushing forces and is especially considered to be one of the most important mechanical properties of rock materials. However, the UCS test is an expensive, very time-consuming test to perform in the laboratory and requires high-quality core samples having regular geometry. Empirical equations have therefore been proposed for predicting UCS as a function of rocks' index properties. An analytic hierarchy process and multiple regression analysis based methodology was used (as opposed to traditional linear regression methods) on data sets obtained from carbonate rocks in NE Turkey. Limestone samples ranging from Devonian to late Cretaceous ages were chosen; travertine-onyx samples were selected from morphological environments considering their surface environmental conditions. Test results from experiments carried out on about 250 carbonate rock samples were used in deriving the model. While the hierarchy model focused on determining the most important index properties affecting UCS, regression analysis established meaningful relationships between UCS and the index properties; positive correlation coefficients of 0.85 and 0.83 between the variables were determined. The methodology provided an appropriate alternative for the quantitative estimation of UCS and avoided the need for tedious and time-consuming laboratory testing.
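
    The AHP weighting step mentioned above reduces to a principal-eigenvector calculation on a pairwise-comparison matrix. The sketch below shows that step with an illustrative comparison matrix and made-up criterion names; it is not the study's data.

    ```python
    import numpy as np

    # Minimal AHP sketch: weights from the principal eigenvector of a
    # pairwise-comparison matrix, plus a consistency check. Judgments are invented.
    criteria = ["porosity", "P-wave velocity", "Schmidt hardness", "point load index"]
    # A[i, j] = how much more important criterion i is than criterion j (Saaty scale)
    A = np.array([
        [1.0, 1/3, 1/2, 1/4],
        [3.0, 1.0, 2.0, 1/2],
        [2.0, 1/2, 1.0, 1/3],
        [4.0, 2.0, 3.0, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()

    # Consistency ratio: CI = (lambda_max - n) / (n - 1), RI(n=4) ~= 0.90
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)
    cr = ci / 0.90
    for name, w in zip(criteria, weights):
        print(f"{name:18s} weight = {w:.3f}")
    print(f"consistency ratio = {cr:.3f} (values below ~0.1 are usually acceptable)")
    ```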



  11. Can Selforganizing Maps Accurately Predict Photometric Redshifts?

    Science.gov (United States)

    Way, Michael J.; Klose, Christian

    2012-01-01

    We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization called the self-organizing-map (SOM) approach. A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = z_phot - z_spec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
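
    A toy, hand-rolled SOM regression in the spirit of this record is sketched below: photometric inputs are mapped to a small 2-D grid, each node is assigned the mean spectroscopic redshift of its training members, and predictions are read off the best-matching node. The data, grid size and training schedule are all invented, and a production implementation would differ in many details.

    ```python
    import numpy as np

    # Minimal hand-rolled SOM regression sketch on synthetic "photometry".
    rng = np.random.default_rng(3)
    n, d, grid = 2000, 5, (10, 10)                      # galaxies, colours, SOM size
    X = rng.normal(size=(n, d))
    z_spec = 0.4 + 0.1 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(scale=0.02, size=n)

    # Node positions on the grid and initial weights
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    nodes = np.column_stack([gy.ravel(), gx.ravel()]).astype(float)
    W = rng.normal(size=(nodes.shape[0], d))

    for t in range(5000):                               # online SOM training
        x = X[rng.integers(n)]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best-matching unit
        lr = 0.5 * np.exp(-t / 2000)                    # decaying learning rate
        sigma = 3.0 * np.exp(-t / 2000)                 # decaying neighbourhood width
        h = np.exp(-((nodes - nodes[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)

    # Attach the mean training redshift to each node, then predict
    bmus = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
    node_z = np.array([z_spec[bmus == k].mean() if np.any(bmus == k) else z_spec.mean()
                       for k in range(W.shape[0])])
    z_phot = node_z[bmus]
    rmse = np.sqrt(np.mean((z_phot - z_spec) ** 2))
    print(f"training-set RMSE on z_phot - z_spec: {rmse:.3f}")
    ```

    Dedicated SOM libraries exist and would replace the hand-written training loop, but the loop makes the competitive-learning update explicit.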

  12. Statistical analysis of accurate prediction of local atmospheric optical attenuation with a new model according to weather together with beam wandering compensation system: a season-wise experimental investigation

    Science.gov (United States)

    Arockia Bazil Raj, A.; Padmavathi, S.

    2016-07-01

    Atmospheric parameters strongly affect the performance of a Free Space Optical Communication (FSOC) system when the optical wave is propagating through the inhomogeneous turbulent medium. Developing a model that gives an accurate prediction of optical attenuation according to meteorological parameters is therefore significant for understanding the behaviour of the FSOC channel during different seasons. A dedicated free space optical link experimental set-up is developed for a range of 0.5 km at an altitude of 15.25 m. The diurnal profile of received power and the corresponding meteorological parameters are continuously measured using the developed optoelectronic assembly and weather station, respectively, and stored in a data logging computer. Measured meteorological parameters (as input factors) and optical attenuation (as response factor) of size [177147 × 4] are used for linear regression analysis and to design the mathematical model that is most suitable for predicting the atmospheric optical attenuation at our test field. A model that exhibits an R2 value of 98.76% and an average percentage deviation of 1.59% is considered for practical implementation. The prediction accuracy of the proposed model is investigated, along with comparative results from some existing models, in terms of Root Mean Square Error (RMSE) during different local seasons over a one-year period. An average RMSE value of 0.043 dB/km is obtained over the long-range dynamics of meteorological parameter variations.
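
    The model selection described above rests on ordinary linear regression diagnostics (R2, average percentage deviation, RMSE). A minimal sketch of that evaluation step is given below, assuming a matrix X of measured meteorological inputs and a vector y of measured attenuation in dB/km; the function name and data layout are assumptions for illustration, not the authors' code.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score, mean_squared_error

      def evaluate_attenuation_model(X, y):
          # Fit attenuation (dB/km) against meteorological inputs and report
          # the goodness-of-fit measures referred to in the abstract.
          model = LinearRegression().fit(X, y)
          y_hat = model.predict(X)
          r2 = r2_score(y, y_hat)
          rmse = float(np.sqrt(mean_squared_error(y, y_hat)))
          avg_pct_dev = float(np.mean(np.abs((y - y_hat) / y)) * 100.0)
          return model, r2, rmse, avg_pct_dev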

  13. Decision-Making Involvement and Prediction of Adherence in Youth With Type 1 Diabetes: A Cohort Sequential Study.

    Science.gov (United States)

    Miller, Victoria A; Jawad, Abbas F

    2018-05-17

    To assess developmental trajectories of decision-making involvement (DMI), defined as the ways in which parents and children engage each other in decision-making about illness management, in youth with type 1 diabetes (T1D) and examine the effects of DMI on levels of and changes in adherence with age. Participants included 117 youth with T1D, enrolled at ages 8-16 years and assessed five times over 2 years. The cohort sequential design allowed for the approximation of the longitudinal curve from age 8 to 19 from overlapping cohort segments. Children and parents completed the Decision-Making Involvement Scale, which yields subscales for different aspects of DMI, and a self-report adherence questionnaire. Mixed-effects growth curve modeling was used for analysis, with longitudinal measures nested within participant and participants nested within cohort. Most aspects of DMI (Parent Express, Parent Seek, Child Express, and Joint) increased with child age; scores on some child report subscales (Parent Express, Child Seek, and Joint) decreased after age 12-14 years. After accounting for age, Child Seek, Child Express, and Joint were associated with overall higher levels of adherence in both child (estimates = 0.08-0.13, p < .001) and parent (estimates = 0.07-0.13, p < .01) report models, but they did not predict changes in adherence with age. These data suggest that helping children to be more proactive in T1D discussions, by encouraging them to express their opinions, share information, and solicit guidance from parents, is a potential target for interventions to enhance effective self-management.
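
    For readers unfamiliar with the analysis, a mixed-effects growth curve of this kind can be sketched in a few lines. The example below is a simplified two-level version (observations nested within participants) of the nested design described above; the data frame and column names (adherence, age, child_seek, id) are assumptions for illustration, not the authors' specification.

      import statsmodels.formula.api as smf

      def fit_growth_curve(df):
          # df: long-format data with one row per assessment, columns
          # 'adherence', 'age', 'child_seek' and a participant identifier 'id'.
          model = smf.mixedlm("adherence ~ age + child_seek", data=df,
                              groups=df["id"], re_formula="~age")
          return model.fit()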

  14. A predictive model of nuclear power plant crew decision-making and performance in a dynamic simulation environment

    Science.gov (United States)

    Coyne, Kevin Anthony

    The safe operation of complex systems such as nuclear power plants requires close coordination between the human operators and plant systems. In order to maintain an adequate level of safety following an accident or other off-normal event, the operators often are called upon to perform complex tasks during dynamic situations with incomplete information. The safety of such complex systems can be greatly improved if the conditions that could lead operators to make poor decisions and commit erroneous actions during these situations can be predicted and mitigated. The primary goal of this research project was the development and validation of a cognitive model capable of simulating nuclear plant operator decision-making during accident conditions. Dynamic probabilistic risk assessment methods can improve the prediction of human error events by providing rich contextual information and an explicit consideration of feedback arising from man-machine interactions. The Accident Dynamics Simulator paired with the Information, Decision, and Action in a Crew context cognitive model (ADS-IDAC) shows promise for predicting situational contexts that might lead to human error events, particularly knowledge driven errors of commission. ADS-IDAC generates a discrete dynamic event tree (DDET) by applying simple branching rules that reflect variations in crew responses to plant events and system status changes. Branches can be generated to simulate slow or fast procedure execution speed, skipping of procedure steps, reliance on memorized information, activation of mental beliefs, variations in control inputs, and equipment failures. Complex operator mental models of plant behavior that guide crew actions can be represented within the ADS-IDAC mental belief framework and used to identify situational contexts that may lead to human error events. This research increased the capabilities of ADS-IDAC in several key areas. The ADS-IDAC computer code was improved to support additional

  15. Spectrally accurate contour dynamics

    International Nuclear Information System (INIS)

    Van Buskirk, R.D.; Marcus, P.S.

    1994-01-01

    We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise-constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use.

  16. Stonehenge: A Simple and Accurate Predictor of Lunar Eclipses

    Science.gov (United States)

    Challener, S.

    1999-12-01

    Over the last century, much has been written about the astronomical significance of Stonehenge. The rage peaked in the mid to late 1960s when new computer technology enabled astronomers to make the first complete search for celestial alignments. Because there are hundreds of rocks or holes at Stonehenge and dozens of bright objects in the sky, the quest was fraught with obvious statistical problems. A storm of controversy followed and the subject nearly vanished from print. Only a handful of these alignments remain compelling. Today, few astronomers and still fewer archaeologists would argue that Stonehenge served primarily as an observatory. Instead, Stonehenge probably served as a sacred meeting place, which was consecrated by certain celestial events. These would include the sun's risings and settings at the solstices and possibly some lunar risings as well. I suggest that Stonehenge was also used to predict lunar eclipses. While Hawkins and Hoyle also suggested that Stonehenge was used in this way, their methods are complex and they make use of only early, minor, or outlying areas of Stonehenge. In contrast, I suggest a way that makes use of the imposing, central region of Stonehenge; the area built during the final phase of activity. To predict every lunar eclipse without predicting eclipses that do not occur, I use the less familiar lunar cycle of 47 lunar months. By moving markers about the Sarsen Circle, the Bluestone Circle, and the Bluestone Horseshoe, all umbral lunar eclipses can be predicted accurately.

  17. Risky decision-making predicts short-term outcome of community but not residential treatment for opiate addiction. Implications for case management.

    Science.gov (United States)

    Passetti, F; Clark, L; Davis, P; Mehta, M A; White, S; Checinski, K; King, M; Abou-Saleh, M

    2011-10-01

    Opiate addiction is associated with decision-making deficits and we previously showed that the extent of these impairments predicts aspects of treatment outcome. Here we aimed to establish whether measures of decision-making performance might be used to inform placement matching. Two groups of opiate dependent individuals, one receiving treatment in a community setting (n=48) and one in a residential setting (n=32) were administered computerised tests of decision-making, impulsivity and planning shortly after the beginning of treatment, to be followed up three months into each programme. In the community sample, performance on the decision-making tasks at initial assessment predicted abstinence from illicit drugs at follow-up. In contrast, in the residential sample there was no relationship between decision-making and clinical outcome. Intact decision-making processes appear to be necessary for upholding a resolve to avoid taking drugs in a community setting, but the importance of these mechanisms may be attenuated in a residential treatment setting. The results support the placement matching hypothesis, suggesting that individuals with more prominent decision-making deficits may particularly benefit from treatment in a residential setting and from the inclusion of aspects of cognitive rehabilitation in their treatment programme. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  18. What kinds of fish stock predictions do we need and what kinds of information will help us to make better predictions?

    Directory of Open Access Journals (Sweden)

    Keith Brander

    2003-04-01

    Full Text Available Fish stock predictions are used to guide fisheries management, but stocks continue to be over-exploited. Traditional single-species age-structured stock assessment models, which became an operational component of fisheries management in the 1950s, ignore biological and environmental effects. As our knowledge of the marine environment improves and our concern about the state of the marine ecosystem and about global change increases, the scope of our models needs to be widened. We need different kinds of predictions as well as better predictions. Population characteristics (rates of mortality, growth, recruitment) of 61 stocks of 17 species of NE Atlantic fish are reviewed in order to consider the implications for the time-scale and quality of stock predictions. Short life expectancy limits the time horizon for predictability based on the current fishable stock and predictions are therefore more dependent on estimates or assumptions about future rates. Evidence is presented that rates of growth and recruitment are influenced by environmental factors and possibilities for including new information are explored in order to improve predictions.

  19. Accurate lithography simulation model based on convolutional neural networks

    Science.gov (United States)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to simulate an entire chip in realistic time, a compact resist model is commonly used; the model is constructed for fast calculation. To obtain an accurate compact resist model, it is necessary to fit a complicated non-linear model function. However, it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNNs (Convolutional Neural Networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  20. Accuracy of 'My Gut Feeling:' Comparing System 1 to System 2 Decision-Making for Acuity Prediction, Disposition and Diagnosis in an Academic Emergency Department.

    Science.gov (United States)

    Cabrera, Daniel; Thomas, Jonathan F; Wiswell, Jeffrey L; Walston, James M; Anderson, Joel R; Hess, Erik P; Bellolio, M Fernanda

    2015-09-01

    Current cognitive science describes decision-making using dual-process theory, in which System 1 is intuitive and System 2 is hypothetico-deductive. We aim to compare the performance of these systems in determining patient acuity, disposition and diagnosis. Prospective observational study of emergency physicians assessing patients in the emergency department of an academic center. Physicians were provided with the patient's chief complaint and vital signs and allowed to observe the patient briefly. They were then asked to predict acuity, final disposition (home, intensive care unit (ICU), non-ICU bed) and diagnosis. A patient was classified as sick by the investigators using previously published objective criteria. We obtained 662 observations from 289 patients. For acuity, the observers had a sensitivity of 73.9% (95% CI [67.7-79.5%]), specificity 83.3% (95% CI [79.5-86.7%]), positive predictive value 70.3% (95% CI [64.1-75.9%]) and negative predictive value 85.7% (95% CI [82.0-88.9%]). For final disposition, the observers made a correct prediction in 80.8% (95% CI [76.1-85.0%]) of the cases. For ICU admission, emergency physicians had a sensitivity of 33.9% (95% CI [22.1-47.4%]) and a specificity of 96.9% (95% CI [94.0-98.7%]). The correct diagnosis was made 54% of the time with the limited data available. System 1 decision-making based on limited information had a sensitivity close to 80% for acuity and disposition prediction, but performance was lower for predicting ICU admission and diagnosis. System 1 decision-making appears insufficient for final decisions in these domains but likely provides a cognitive framework for System 2 decision-making.
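
    The sensitivity, specificity and predictive values reported above follow directly from a 2x2 table of predictions against the objective "sick" classification; the short sketch below shows the arithmetic, with argument names chosen purely for illustration.

      def diagnostic_metrics(tp, fp, tn, fn):
          # tp: predicted sick and actually sick; fp: predicted sick but not sick;
          # tn: predicted not sick and not sick; fn: predicted not sick but sick.
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          ppv = tp / (tp + fp)  # positive predictive value
          npv = tn / (tn + fn)  # negative predictive value
          return sensitivity, specificity, ppv, npv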

  1. Individual differences in decision making and reward processing predict changes in cannabis use: a prospective functional magnetic resonance imaging study

    NARCIS (Netherlands)

    Cousijn, J.; Wiers, R.W.; Ridderinkhof, K.R.; van den Brink, W.; Veltman, D.J.; Porrino, L.J.; Goudriaan, A.E.

    2013-01-01

    Decision-making deficits are thought to play an important role in the development and persistence of substance use disorders. Individual differences in decision-making abilities and their underlying neurocircuitry may, therefore, constitute an important predictor for the course of substance use and

  2. When does meaning making predict subjective well-being? Examining young and older adults in two cultures.

    Science.gov (United States)

    Alea, Nicole; Bluck, Susan

    2013-01-01

    Two studies in different cultures (Study 1: USA, N=174; Study 2: Trinidad, N=167) examined whether meaning making (i.e., both searching for meaning and directing behaviour) is positively related to subjective well-being (SWB) by age (younger vs. older adults). In both studies, participants self-reported engagement in meaning making and SWB (e.g., affect, future time perspective, psychological well-being). In Study 1, young Americans (compared to older) more frequently used their past to direct behaviour, but doing so was unrelated to SWB. In older Americans, both types of meaning making were positively associated with SWB. In Study 2, Trinidadian younger adults were again more likely than older adults to engage in meaning making. Unlike in the American sample, however, directing behaviour was positively related to SWB for both young and older adults. The studies demonstrate that whether meaning making shows benefits for SWB may depend on the type of meaning, age and culture. Note that although meaning making was sometimes unrelated to SWB, no detrimental relations of meaning making were found. The discussion focuses on the role of moderators in understanding when meaning making should lead to benefits versus costs to SWB.

  3. Principalship in an Indonesian School Context: Can Principal Decision-Making Styles Significantly Predict Teacher Job Satisfaction?

    Science.gov (United States)

    Hariri, Hasan; Monypenny, Richard; Prideaux, Murray

    2012-01-01

    This paper examines relationships between teacher-perceived principal decision-making styles and teacher job satisfaction in schools in Lampung Province, Indonesia. We use the General Decision-making Style instrument, the Job Satisfaction Survey and a demographic questionnaire developed for this study. Our findings show that: 12 out of the 15…

  4. Decision-Making Under Risk, but Not Under Ambiguity, Predicts Pathological Gambling in Discrete Types of Abstinent Substance Users.

    Science.gov (United States)

    Wilson, Michael J; Vassileva, Jasmin

    2018-01-01

    This study explored how different forms of reward-based decision-making are associated with pathological gambling (PG) among abstinent individuals with prior dependence on different classes of drugs. Participants had lifetime histories of either "pure" heroin dependence (n = 64), "pure" amphetamine dependence (n = 51), or polysubstance dependence (n = 89), or had no history of substance dependence (n = 133). Decision-making was assessed via two neurocognitive tasks: (1) the Iowa Gambling Task (IGT), a measure of decision-making under ambiguity (i.e., uncertain risk contingencies); and (2) the Cambridge Gambling Task (CGT), a measure of decision-making under risk (i.e., explicit risk contingencies). The main effects of neurocognitive performance and drug class on PG (defined as ≥3 DSM-IV PG symptoms) as well as their interactional effects were assessed via multiple linear regression. Two CGT indices of decision-making under risk demonstrated positive main effects on PG. Interaction effects indicated that the effects of decision-making under risk on PG were largely consistent across participant groups. Notably, a linear relationship between greater CGT Risk-Taking and PG symptoms was not observed among amphetamine users, whereas IGT performance was selectively and positively associated with PG in polysubstance users. Overall, results indicate that reward-based decision-making under risk may represent a risk factor for PG across substance users, with some variations in these relationships influenced by specific class of substance of abuse.

  5. The role of self-reported impulsivity and reward sensitivity versus neurocognitive measures of disinhibition and decision-making in the prediction of relapse in pathological gamblers.

    Science.gov (United States)

    Goudriaan, A E; Oosterlaan, J; De Beurs, E; Van Den Brink, W

    2008-01-01

    Disinhibition and decision-making skills play an important role in theories on the cause and outcome of addictive behaviors such as substance use disorders and pathological gambling. In recent studies, both disinhibition and disadvantageous decision-making strategies, as measured by neurocognitive tests, have been found to influence the course of substance use disorders. Research on factors affecting relapse in pathological gambling is scarce. This study investigated the effect of both self-reported impulsivity and reward sensitivity, and neurocognitively assessed disinhibition and decision-making under conflicting contingencies, on relapse in a group of 46 pathological gamblers. Logistic regression analysis indicated that longer duration of the disorder and neurocognitive indicators of disinhibition (Stop Signal Reaction Time) and decision-making (Card Playing Task) were significant predictors of relapse (explaining 53% of the variance in relapse), whereas self-reported impulsivity and reward sensitivity did not significantly predict relapse. Overall classification accuracy was 76%, with a positive classification accuracy of 76% and a negative classification accuracy of 75%. Duration of the disorder and neurocognitive measures of disinhibition and decision-making are powerful predictors of relapse in pathological gambling. The results suggest that endophenotypical neurocognitive characteristics are more promising in the prediction of relapse in pathological gambling than phenotypical personality characteristics. Neurocognitive predictors may be useful to guide treatment planning of follow-up contacts and booster sessions.
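
    A hedged sketch of the kind of analysis summarised above: a logistic regression of relapse on illness duration and the two neurocognitive measures, followed by the overall, positive and negative classification accuracies. The feature layout and names are assumptions for illustration; this is not the authors' code.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def relapse_model(X, relapsed):
          # X columns (assumed): duration of the disorder, stop-signal reaction
          # time, Card Playing Task score; relapsed: 0/1 outcome at follow-up.
          relapsed = np.asarray(relapsed)
          model = LogisticRegression().fit(X, relapsed)
          pred = model.predict(X)
          overall = float(np.mean(pred == relapsed))
          pos = float(np.mean(relapsed[pred == 1] == 1)) if np.any(pred == 1) else float("nan")
          neg = float(np.mean(relapsed[pred == 0] == 0)) if np.any(pred == 0) else float("nan")
          return model, overall, pos, neg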

  6. Parental Rearing Behavior Prospectively Predicts Adolescents' Risky Decision-Making and Feedback-Related Electrical Brain Activity

    Science.gov (United States)

    Euser, Anja S.; Evans, Brittany E.; Greaves-Lord, Kirstin; Huizink, Anja C.; Franken, Ingmar H. A.

    2013-01-01

    The present study examined the role of parental rearing behavior in adolescents' risky decision-making and the brain's feedback processing mechanisms. Healthy adolescent participants ("n" = 110) completed the EMBU-C, a self-report questionnaire on perceived parental rearing behaviors between 2006 and 2008 (T1). Subsequently, after an…

  7. The Assessment of Burden of COPD (ABC) tool : a shared decision-making instrument that is predictive of healthcare costs

    NARCIS (Netherlands)

    Rutten-vanMolken, Maureen P. H. M.; Goossens, Lucas M A; Boland, Melinde R. S.; Donkers, Bas; Jonker, Marcel F.; Slok, Annerika H. M.; Salome, Philippe L.; van Schayck, Constant; In 't Veen, Johannes C C M; Stolk, Elly A.

    2017-01-01

    Background: The Assessment of Burden of COPD (ABC) tool is an instrument that supports shared decision making between patients and physicians. It includes a coloured balloon diagram to visualize a patient’s scores on a questionnaire about the experienced burden of COPD and several objective severity

  8. Comparison of Nomothetic versus Idiographic-Oriented Methods for Making Predictions about Distal Outcomes from Time Series Data

    Science.gov (United States)

    Castro-Schilo, Laura; Ferrer, Emilio

    2013-01-01

    We illustrate the idiographic/nomothetic debate by comparing 3 approaches to using daily self-report data on affect for predicting relationship quality and breakup. The 3 approaches included (a) the first day in the series of daily data; (b) the mean and variability of the daily series; and (c) parameters from dynamic factor analysis, a…

  9. AN EXTENDED REINFORCEMENT LEARNING MODEL OF BASAL GANGLIA TO UNDERSTAND THE CONTRIBUTIONS OF SEROTONIN AND DOPAMINE IN RISK-BASED DECISION MAKING, REWARD PREDICTION, AND PUNISHMENT LEARNING

    Directory of Open Access Journals (Sweden)

    Pragathi Priyadharsini Balasubramani

    2014-04-01

    Full Text Available Although empirical and neural studies show that serotonin (5HT) plays many functional roles in the brain, prior computational models mostly focus on its role in behavioral inhibition. In this study, we present a model of risk-based decision making in a modified Reinforcement Learning (RL) framework. The model depicts the roles of dopamine (DA) and serotonin (5HT) in the Basal Ganglia (BG). In this model, the DA signal is represented by the temporal difference error (δ), while the 5HT signal is represented by a parameter (α) that controls risk prediction error. This formulation that accommodates both 5HT and DA reconciles some of the diverse roles of 5HT, particularly in connection with the BG system. We apply the model to different experimental paradigms used to study the role of 5HT: (1) risk-sensitive decision making, where 5HT controls risk assessment, (2) temporal reward prediction, where 5HT controls the time-scale of reward prediction, and (3) reward/punishment sensitivity, in which the punishment prediction error depends on 5HT levels. Thus the proposed integrated RL model reconciles several existing theories of 5HT and DA in the BG.
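
    To make the division of labour between the two signals concrete, here is a generic risk-sensitive temporal-difference update in the spirit of the abstract: delta stands in for the dopamine signal and alpha weights a tracked risk term standing in for serotonin. The update rules, variable names and learning rate are illustrative assumptions, not the equations of the published model.

      def risk_sensitive_td_step(value, risk, reward, alpha, lr=0.1):
          # delta: reward prediction error (the DA-like signal)
          delta = reward - value
          # xi: risk prediction error (squared deviation relative to tracked risk)
          xi = delta ** 2 - risk
          value = value + lr * delta
          risk = risk + lr * xi
          # alpha (the 5HT-like parameter) trades expected value against risk
          utility = value - alpha * risk ** 0.5
          return value, risk, utility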

  10. Dorsal Anterior Cingulate Cortices Differentially Lateralize Prediction Errors and Outcome Valence in a Decision-Making Task

    Directory of Open Access Journals (Sweden)

    Alexander R. Weiss

    2018-05-01

    Full Text Available The dorsal anterior cingulate cortex (dACC) is proposed to facilitate learning by signaling mismatches between the expected outcome of decisions and the actual outcomes in the form of prediction errors. The dACC is also proposed to discriminate outcome valence—whether a result has positive (either expected or desirable) or negative (either unexpected or undesirable) value. However, direct electrophysiological recordings from human dACC to validate these separate, but integrated, dimensions have not been previously performed. We hypothesized that local field potentials (LFPs) would reveal changes in the dACC related to prediction error and valence and used the unique opportunity offered by deep brain stimulation (DBS) surgery in the dACC of three human subjects to test this hypothesis. We used a cognitive task that involved the presentation of object pairs, a motor response, and audiovisual feedback to guide future object selection choices. The dACC displayed distinctly lateralized theta frequency (3–8 Hz) event-related potential responses—the left hemisphere dACC signaled outcome valence and prediction errors while the right hemisphere dACC was involved in prediction formation. Multivariate analyses provided evidence that the human dACC response to decision outcomes reflects two spatiotemporally distinct early and late systems that are consistent with both our lateralized electrophysiological results and the involvement of theta frequency oscillatory activity in dACC cognitive processing. Further findings suggested that the dACC does not respond to other phases of action-outcome-feedback tasks, such as the motor response, which supports the notion that the dACC primarily signals information that is crucial for behavioral monitoring and not for motor control.

  11. Predicting Motivation: Computational Models of PFC Can Explain Neural Coding of Motivation and Effort-based Decision-making in Health and Disease.

    Science.gov (United States)

    Vassena, Eliana; Deraeve, James; Alexander, William H

    2017-10-01

    Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO

  12. Prediction of week 4 virological response in hepatitis C for making decision on triple therapy: the Optim study.

    Directory of Open Access Journals (Sweden)

    Manuel Romero-Gómez

    Full Text Available Virological response to peginterferon + ribavirin (P+R) at week 4 can predict sustained virological response (SVR). While patients with a rapid virological response (RVR) do not require triple therapy, patients with a decline <1 log10 IU/ml in HCV RNA (D1L) should have treatment discontinued due to the low SVR rate. To develop a tool to predict viral response in the first 4 weeks in patients with hepatitis C genotypes 1 and 4 treated with P+R. In this prospective and multicenter study, HCV mono-infected (n=538) and HCV/HIV co-infected (n=186) patients were included. To develop and validate a prognostic tool to detect RVR and D1L, we segregated the patients into an estimation cohort (to construct the model) and a validation cohort (to validate the model). D1L was reached in 509 (80.2%) and RVR in 148 (22.5%) patients. Multivariate analyses demonstrated that HIV co-infection, Forns' index, LVL, IL28B-CC and genotype 1 were independently related to RVR as well as D1L. Diagnostic accuracy (AUROC) for D1L was 0.81 (95% CI: 0.76–0.86) in the estimation cohort and 0.71 (95% CI: 0.62–0.79) in the validation cohort; for RVR prediction, AUROC was 0.83 (95% CI: 0.78–0.88) in the estimation cohort and 0.82 (95% CI: 0.76–0.88) in the validation cohort. Cost analysis of the standard 48-week treatment indicated a saving of 30.3% if the prognostic tool is implemented. The combination of genetic factors (IL28B polymorphism) and viral genotype, together with viral load, HIV co-infection and fibrosis stage, defined a tool able to predict RVR and D1L at week 4. Using this tool would be a cost-saving strategy compared to universal triple therapy for hepatitis C.
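
    The prognostic tool described above amounts to a multivariable classifier scored by AUROC on separate estimation and validation cohorts. A minimal sketch of that evaluation is shown below; the predictor list follows the abstract, but the model form, function name and data layout are illustrative assumptions rather than the authors' implementation.

      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      def fit_and_validate(X_est, y_est, X_val, y_val):
          # X columns (assumed order): HIV co-infection, Forns' index, low viral
          # load, IL28B-CC genotype, genotype 1 indicator.
          # y: 1 if the week-4 endpoint (e.g. RVR or D1L) was reached, else 0.
          model = LogisticRegression().fit(X_est, y_est)
          auc_est = roc_auc_score(y_est, model.predict_proba(X_est)[:, 1])
          auc_val = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
          return model, auc_est, auc_val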

  13. Predictive Modeling of Physician-Patient Dynamics That Influence Sleep Medication Prescriptions and Clinical Decision-Making

    Science.gov (United States)

    Beam, Andrew L.; Kartoun, Uri; Pai, Jennifer K.; Chatterjee, Arnaub K.; Fitzgerald, Timothy P.; Shaw, Stanley Y.; Kohane, Isaac S.

    2017-02-01

    Insomnia remains under-diagnosed and poorly treated despite its high economic and social costs. Though previous work has examined how patient characteristics affect sleep medication prescriptions, the role of physician characteristics in this clinical decision remains unclear. We sought to understand patient and physician factors that influence sleep medication prescribing patterns by analyzing Electronic Medical Records (EMRs), including the narrative clinical notes as well as codified data. Zolpidem and trazodone were the most widely prescribed initial sleep medications in a cohort of 1,105 patients. Some providers showed a historical preference for one medication, which was highly predictive of their future prescribing behavior. Using a predictive model (AUC = 0.77), physician preference largely determined which medication a patient received (OR = 3.13, p = 3 × 10^-37). In addition to the dominant effect of empirically determined physician preference, discussion of depression in a patient's note was found to have a statistically significant association with receiving a prescription for trazodone (OR = 1.38, p = 0.04). EMR data can yield insights into physician prescribing behavior based on real-world physician-patient interactions.

  14. Toward a Psychology of Surrogate Decision Making.

    Science.gov (United States)

    Tunney, Richard J; Ziegler, Fenja V

    2015-11-01

    In everyday life, many of the decisions that we make are made on behalf of other people. A growing body of research suggests that we often, but not always, make different decisions on behalf of other people than the other person would choose. This is problematic in the practical case of legally designated surrogate decision makers, who may not meet the substituted judgment standard. Here, we review evidence from studies of surrogate decision making and examine the extent to which surrogate decision making accurately predicts the recipient's wishes, or if it is an incomplete or distorted application of the surrogate's own decision-making processes. We find no existing domain-general model of surrogate decision making. We propose a framework by which surrogate decision making can be assessed and a novel domain-general theory as a unifying explanatory concept for surrogate decisions. © The Author(s) 2015.

  15. Prediction of Risk Behaviors in HIV-infected Patients Based on Family Functioning: The Mediating Roles of Lifestyle and Risky Decision Making

    Directory of Open Access Journals (Sweden)

    Fariba Ebrahim Babaei

    2017-09-01

    Full Text Available Background and Objective: Risk behaviors are more common in HIV-positive patients than in the general population. These behaviors are affected by various factors, such as biological, familial, and social determinants, peer group, media, and lifestyle. Low family functioning is one of the important factors predicting risk behaviors. Regarding this, the present study aimed to investigate the role of family functioning in predicting risk behaviors in HIV-infected patients based on the mediating roles of risky decision making and lifestyle. Materials and Methods: This descriptive correlational study was conducted on 147 HIV-positive patients selected through convenience sampling. The data were collected using the Health Promoting Lifestyle Profile-2 (HPLP-2), the Family Adaptability and Cohesion Scale IV (FACES-IV), the Balloon Analogue Risk Task (BART), and a risk behavior assessment in social situations. The data were analyzed using the structural equation modeling method in LISREL 8.8 software. Results: According to the results, there was an indirect relationship between family functioning and risk behaviors. Furthermore, family functioning both directly and indirectly affected the risk behaviors through the two mediators of lifestyle and risky decision making. Conclusion: As the findings indicated, family functioning directly contributed to risk behaviors. Moreover, this variable indirectly affected risk behaviors through the mediating roles of risky decision making and lifestyle. Consequently, future studies should focus more deeply on the role of family functioning in the risk behaviors of HIV-infected patients.

  16. The mechanisms of feature inheritance as predicted by a systems-level model of visual attention and decision making.

    Science.gov (United States)

    Hamker, Fred H

    2008-07-15

    Feature inheritance provides evidence that properties of an invisible target stimulus can be attached to a following mask. We apply a systems-level model of attention and decision making to explore the influence of memory and feedback connections in feature inheritance. We find that the presence of feedback loops alone is sufficient to account for feature inheritance. Although our simulations do not cover all experimental variations and focus only on the general principle, our result appears of specific interest since the model was designed for a completely different purpose than to explain feature inheritance. We suggest that feedback is an important property in visual perception and provide a description of its mechanism and its role in perception.

  17. Making oxidation potentials predictable: Coordination of additives applied to the electronic fine tuning of an iron(II) complex

    KAUST Repository

    Haslinger, Stefan

    2014-11-03

    This work examines the impact of axially coordinating additives on the electronic structure of a bioinspired octahedral low-spin iron(II) N-heterocyclic carbene (Fe-NHC) complex. Bearing two labile trans-acetonitrile ligands, the Fe-NHC complex, which is also an excellent oxidation catalyst, is prone to axial ligand exchange. Phosphine- and pyridine-based additives are used for substitution of the acetonitrile ligands. On the basis of the resulting defined complexes, predictability of the oxidation potentials is demonstrated, based on a correlation between cyclic voltammetry experiments and density functional theory calculated molecular orbital energies. Fundamental insights into changes of the electronic properties upon axial ligand exchange and the impact on related attributes will finally lead to target-oriented manipulation of the electronic properties and consequently to the effective tuning of the reactivity of bioinspired systems.

  19. Usefulness of the rivermead postconcussion symptoms questionnaire and the trail-making test for outcome prediction in patients with mild traumatic brain injury.

    Science.gov (United States)

    de Guise, Elaine; Bélanger, Sara; Tinawi, Simon; Anderson, Kirsten; LeBlanc, Joanne; Lamoureux, Julie; Audrit, Hélène; Feyz, Mitra

    2016-01-01

    The aim of the study was to determine whether the Rivermead Postconcussion Symptoms Questionnaire (RPQ) is a better tool for outcome prediction than an objective neuropsychological assessment following mild traumatic brain injury (mTBI). The study included 47 patients with mTBI referred to an outpatient rehabilitation clinic. The RPQ and a brief neuropsychological battery were administered in the first few days following the trauma. The outcome measure used was the Mayo-Portland Adaptability Inventory-4 (MPAI-4), which was completed within the first 3 months. The only variable associated with results on the MPAI-4 was the RPQ score (p < .001). The predictive outcome model including age, education, and the results of the Trail-Making Test-Parts A and B (TMT) had a pseudo-R^2 of .02. When the RPQ score was added, the pseudo-R^2 climbed to .19. This model indicates that the usefulness of the RPQ score and the TMT in predicting moderate-to-severe limitations, while controlling for confounders, is substantial, as suggested by a significant increase in the model chi-square value (delta chi-square (1 df) = 6.517, p < .001). The RPQ and the TMT provide clinicians with a brief and reliable tool for predicting outcome functioning and can help target the need for further intervention and rehabilitation following mTBI.

  20. Social Anxiety, Acute Social Stress, and Reward Parameters Interact to Predict Risky Decision-Making among Adolescents

    Science.gov (United States)

    Richards, Jessica M.; Patel, Nilam; Daniele, Teresa; MacPherson, Laura; Lejuez, C.W.; Ernst, Monique

    2014-01-01

    Risk-taking behavior increases during adolescence, leading to potentially disastrous consequences. Social anxiety emerges in adolescence and may compound risk-taking propensity, particularly during stress and when reward potential is high. However, the manner in which social anxiety, stress, and reward parameters interact to impact adolescent risk-taking is unclear. To clarify this question, a community sample of 35 adolescents (15 to 18 yo), characterized as having high or low social anxiety, participated in a 2-day study, during each of which they were exposed to either a social stress or a control condition, while performing a risky decision-making task. The task manipulated, orthogonally, reward magnitude and probability across trials. Three findings emerged. First, reward magnitude had a greater impact on the rate of risky decisions in high social anxiety (HSA) than low social anxiety (LSA) adolescents. Second, reaction times (RTs) were similar during the social stress and the control conditions for the HSA group, whereas the LSA group’s RTs differed between conditions. Third, HSA adolescents showed the longest RTs on the most negative trials. These findings suggest that risk-taking in adolescents is modulated by context and reward parameters differentially as a function of social anxiety. PMID:25465884

  1. Social anxiety, acute social stress, and reward parameters interact to predict risky decision-making among adolescents.

    Science.gov (United States)

    Richards, Jessica M; Patel, Nilam; Daniele-Zegarelli, Teresa; MacPherson, Laura; Lejuez, C W; Ernst, Monique

    2015-01-01

    Risk-taking behavior increases during adolescence, leading to potentially disastrous consequences. Social anxiety emerges in adolescence and may compound risk-taking propensity, particularly during stress and when reward potential is high. However, the manner in which social anxiety, stress, and reward parameters interact to impact adolescent risk-taking is unclear. To clarify this question, a community sample of 35 adolescents (15-18yo), characterized as having high or low social anxiety, participated in a study over two separate days, during each of which they were exposed to either a social stress or a control condition, while performing a risky decision-making task. The task manipulated, orthogonally, reward magnitude and probability across trials. Three findings emerged. First, reward magnitude had a greater impact on the rate of risky decisions in high social anxiety (HSA) than low social anxiety (LSA) adolescents. Second, reaction times (RTs) were similar during the social stress and the control conditions for the HSA group, whereas the LSA group's RTs differed between conditions. Third, HSA adolescents showed the longest RTs on the most negative trials. These findings suggest that risk-taking in adolescents is modulated by context and reward parameters differentially as a function of social anxiety. Published by Elsevier Ltd.

  2. Emotion and decision-making under uncertainty: Physiological arousal predicts increased gambling during ambiguity but not risk.

    Science.gov (United States)

    FeldmanHall, Oriel; Glimcher, Paul; Baker, Augustus L; Phelps, Elizabeth A

    2016-10-01

    Uncertainty, which is ubiquitous in decision-making, can be fractionated into known probabilities (risk) and unknown probabilities (ambiguity). Although research has illustrated that individuals more often avoid decisions associated with ambiguity compared to risk, it remains unclear why ambiguity is perceived as more aversive. Here we examine the role of arousal in shaping the representation of value and subsequent choice under risky and ambiguous decisions. To investigate the relationship between arousal and decisions of uncertainty, we measure skin conductance response-a quantifiable measure reflecting sympathetic nervous system arousal-during choices to gamble under risk and ambiguity. To quantify the discrete influences of risk and ambiguity sensitivity and the subjective value of each option under consideration, we model fluctuating uncertainty, as well as the amount of money that can be gained by taking the gamble. Results reveal that although arousal tracks the subjective value of a lottery regardless of uncertainty type, arousal differentially contributes to the computation of value-that is, choice-depending on whether the uncertainty is risky or ambiguous: Enhanced arousal adaptively decreases risk-taking only when the lottery is highly risky but increases risk-taking when the probability of winning is ambiguous (even after controlling for subjective value). Together, this suggests that the role of arousal during decisions of uncertainty is modulatory and highly dependent on the context in which the decision is framed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Assessing uncertainties in flood forecasts for decision making: prototype of an operational flood management system integrating ensemble predictions

    Directory of Open Access Journals (Sweden)

    J. Dietrich

    2009-08-01

    Full Text Available Ensemble forecasts aim at framing the uncertainties of the potential future development of the hydro-meteorological situation. A probabilistic evaluation can be used to communicate forecast uncertainty to decision makers. Here an operational system for ensemble-based flood forecasting is presented, which combines forecasts from the European COSMO-LEPS, SRNWP-PEPS and COSMO-DE prediction systems. A multi-model lagged average super-ensemble is generated by recombining members from different runs of these meteorological forecast systems. A subset of the super-ensemble is selected based on a priori model weights, which are obtained from ensemble calibration. Flood forecasts are simulated by the conceptual rainfall-runoff model ArcEGMO. Parameter uncertainty of the model is represented by a parameter ensemble, which is a priori generated from a comprehensive uncertainty analysis during model calibration. The use of a computationally efficient hydrological model within a flood management system allows us to compute the hydro-meteorological model chain for all members of the sub-ensemble. The model chain is not re-computed before new ensemble forecasts are available, but the probabilistic assessment of the output is updated when new information from deterministic short-range forecasts or from assimilation of measured data becomes available. For hydraulic modelling, with the desired result of a probabilistic inundation map with high spatial resolution, a replacement model can help to overcome computational limitations. A prototype of the developed framework has been applied to a case study in the Mulde river basin. However, these techniques, in particular the probabilistic assessment and the derivation of decision rules, are still in their infancy. Further research is necessary and promising.

  4. Deterministic prediction of surface wind speed variations

    Directory of Open Access Journals (Sweden)

    G. V. Drisya

    2014-11-01

    Full Text Available Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management, such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distributions of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h, with a normalised RMSE (root mean square error) of less than 0.02, and reasonably accurate up to 3 h, with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
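
    One standard deterministic, data-driven scheme of the kind alluded to above is delay embedding followed by nearest-neighbour (analogue) prediction; the sketch below illustrates that idea together with a range-normalised RMSE. The embedding dimension, delay, neighbour count and normalisation convention are all assumptions for illustration, not the authors' settings.

      import numpy as np

      def analog_forecast(series, m=4, tau=1, k=3):
          # Predict the next value from the k nearest delay-vector analogues.
          x = np.asarray(series, dtype=float)
          idx = np.arange((m - 1) * tau, len(x) - 1)  # indices with a known successor
          library = np.stack([x[i - (m - 1) * tau:i + 1:tau] for i in idx])
          targets = x[idx + 1]
          query = x[len(x) - 1 - (m - 1) * tau::tau]   # the most recent delay vector
          dists = np.linalg.norm(library - query, axis=1)
          nearest = np.argsort(dists)[:k]
          return float(targets[nearest].mean())

      def nrmse(pred, obs):
          # RMSE normalised by the range of the observed values.
          pred, obs = np.asarray(pred, dtype=float), np.asarray(obs, dtype=float)
          return float(np.sqrt(np.mean((pred - obs) ** 2)) / (obs.max() - obs.min()))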

  5. How to Make Correct Predictions in False Belief Tasks without Attributing False Beliefs: An Analysis of Alternative Inferences and How to Avoid Them

    Directory of Open Access Journals (Sweden)

    Ricardo Augusto Perera

    2018-04-01

    Full Text Available The use of new paradigms of false belief tasks (FBT) has allowed the age at which children pass the test to be reduced from the previous 4 years in the standard version to only 15 months, or even a striking 6 months in the nonverbal modification. These results are often taken as evidence that infants already possess an at least implicit theory of mind (ToM). We criticize this inferential leap on the grounds that inferring a ToM from predictive success on a false belief task requires assuming as a premise that belief reasoning is a necessary condition for correct action prediction. It is argued that the FBT does not satisfactorily constrain the predictive means, leaving room for the use of belief-independent inferences (which can rely on the attribution of non-representational mental states or on the consideration of behavioral patterns that dispense with any reference to other minds). These heuristics, when applied to the FBT, can achieve the same predictive success as a belief-based inference because information provided by the test stimulus allows the recognition of particular situations that can be subsumed under their 'laws'. Instead of solving this issue by designing a single experimentum crucis that would render unfeasible the use of non-representational inferences, we suggest the application of a set of tests in which, although individually they can support inferences dissociated from a ToM, only an inference that makes use of false beliefs is able to correctly predict all the outcomes.

  6. Personal resilience resources predict post-stem cell transplant cancer survivors' psychological outcomes through reductions in depressive symptoms and meaning-making.

    Science.gov (United States)

    Campo, Rebecca A; Wu, Lisa M; Austin, Jane; Valdimarsdottir, Heiddis; Rini, Christine

    2017-01-01

    This longitudinal study examined whether post-transplant cancer survivors (N = 254, 9 months to 3 years after stem cell transplant treatment) with greater personal resilience resources demonstrated better psychological outcomes, and whether this could be attributed to reductions in depressive symptoms and/or four meaning-making processes (searching for and finding reasons for one's illness; searching for and finding benefit from illness). Hierarchical linear regression analyses examined associations between survivors' baseline personal resilience resources (a composite variable of self-esteem, mastery, and optimism), assessed an average of 1.7 years after transplant, and 4-month changes in psychological outcomes highly relevant to recovering from this difficult and potentially traumatic treatment: post-traumatic stress disorder (PTSD) symptoms and purpose in life. Boot-strapped analyses tested mediation. Greater personal resilience resources predicted decreases in PTSD symptoms (b = -0.07, p = 0.005), mediated by reductions in depressive symptoms (b = -0.01, 95% CI: -0.027, -0.003) and in searching for a reason for one's illness (b = -0.01, 95% CI: -0.034, -0.0003). In addition, greater resilience resources predicted increases in purpose in life (b = 0.10); meaning-making (searching for a reason for one's illness) was also important for reducing PTSD symptoms.

  7. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models that occupy large computation resources, are time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  8. Genetic Counselling for Predictive Testing in Huntington's Disease in One Centre since 1993. Gender-Specific Aspects of Decision-Making.

    Science.gov (United States)

    Arning, Larissa; Witt, Constantin N; Epplen, Jörg T; Stemmler, Susanne

    2015-01-01

    are discussed longitudinally and in the context of the experience in other centres. We present new gender-specific aspects of decision-making for predictive HD tests.

  9. Accurate Evaluation of Quantum Integrals

    Science.gov (United States)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
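
    As a worked illustration of the idea, a single step of Richardson's extrapolation combines two finite-difference estimates computed with step sizes h and h/2; for a method of order p, the leading error term cancels as shown below. The function and the sample numbers are generic, not taken from the paper.

      def richardson(f_h, f_h2, p=2):
          # f_h, f_h2: estimates obtained with steps h and h/2 from a method
          # whose leading error is O(h^p); the combination removes that term.
          return f_h2 + (f_h2 - f_h) / (2 ** p - 1)

      # Illustrative: two second-order estimates of an eigenvalue
      # richardson(2.4674, 2.4699, p=2)  # -> 2.47073... (improved estimate)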

  10. Systemic inflammatory response syndrome and model for end-stage liver disease score accurately predict the in-hospital mortality of black African patients with decompensated cirrhosis at initial hospitalization: a retrospective cohort study

    Directory of Open Access Journals (Sweden)

    Mahassadi AK

    2018-04-01

    Full Text Available Background: Systemic inflammatory response syndrome (SIRS) and the model for end-stage liver disease (MELD) predict short-term mortality in patients with cirrhosis. Prediction of mortality at initial hospitalization is unknown in black African patients with decompensated cirrhosis. Aim: This study aimed to assess the role of the MELD score and SIRS as predictors of morbidity and mortality at initial hospitalization. Patients and methods: In this retrospective cohort study, we enrolled 159 patients with cirrhosis (median age: 49 years, 70.4% males). The roles of the Child–Pugh–Turcotte (CPT) score, MELD score, and SIRS in mortality were determined by the Kaplan–Meier method, and prognostic factors were assessed with a Cox regression model. Results: At initial hospitalization, 74.2%, 20.1%, and 37.7% of the patients with cirrhosis showed the presence of ascites, hepatorenal syndrome, and esophageal varices, respectively. During the in-hospital follow-up, 40 (25.2%) patients died. The overall incidence of mortality was 3.1 [95% confidence interval (CI): 2.2–4.1] per 100 person-days. Survival probabilities were higher in patients who were SIRS negative (log-rank test = 4.51, p = 0.03) and in patients with MELD score ≤16 (log-rank test = 7.26, p = 0.01) compared to patients who were SIRS positive and those with MELD score >16. Only SIRS (hazard ratio (HR) = 3.02, [95% CI: 1.4–7.4], p = 0.01) and MELD score >16 (HR = 2.2, [95% CI: 1.1–4.3], p = 0.02) were independent predictors of mortality in multivariate analysis; CPT was not relevant in our study.

  11. How could (should) we make contact between string/M-theory and our four-dimensional world, and associated LHC predictions?

    Science.gov (United States)

    Kane, Gordon

    2015-12-01

    String/M-theory is an exciting framework within which we try to understand our universe and its properties. Compactified string/M-theories address and offer solutions to almost every important question and issue in particle physics and particle cosmology. But earlier goals of finding a top-down “vacuum selection” principle and deriving the 4D theory have not yet been realized. Does that mean we should stop trying, as nearly all string theorists have? Or can we proceed in the historical way to make a few generic, robust assumptions not closely related to observables, and follow where they lead to testable predictions and explanations? Making only very generic assumptions is a significant issue. I discuss how to try to proceed with this approach, particularly in M-theory compactified on a 7D manifold of G2 holonomy. One goal is to understand our universe as a string/M-theory vacuum for its own sake, in the long tradition of trying to understand our world, and what that implies. In addition, understanding our vacuum may be a prelude to understanding its connection to the multiverse.

  12. Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance

    OpenAIRE

    Ribeiro, Marco Tulio; Singh, Sameer; Guestrin, Carlos

    2016-01-01

    At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model's behavior, precision to how accurate humans are in those predictions, and effort is either the up-front effort required in interpreting the model, or the effort required to ma...

  13. Does the Spectrum model accurately predict trends in adult mortality? Evaluation of model estimates using empirical data from a rural HIV community cohort study in north-western Tanzania

    Directory of Open Access Journals (Sweden)

    Denna Michael

    2014-01-01

    Full Text Available Introduction: Spectrum epidemiological models are used by UNAIDS to provide global, regional and national HIV estimates and projections, which are then used for evidence-based health planning for HIV services. However, there are no validations of the Spectrum model against empirical serological and mortality data from populations in sub-Saharan Africa. Methods: Serologic, demographic and verbal autopsy data have been regularly collected among over 30,000 residents in north-western Tanzania since 1994. Five-year age-specific mortality rates (ASMRs) per 1,000 person-years and the probability of dying between 15 and 60 years of age (45Q15) were calculated and compared with the Spectrum model outputs. Mortality trends by HIV status are shown for the periods before the introduction of antiretroviral therapy (1994–1999, 2000–2005) and the first 5 years afterwards (2005–2009). Results: Among 30–34 year olds of both sexes, observed ASMRs per 1,000 person-years were 13.33 (95% CI: 10.75–16.52) in the period 1994–1999, 11.03 (95% CI: 8.84–13.77) in 2000–2004, and 6.22 (95% CI: 4.75–8.15) in 2005–2009. Among the same age group, the ASMRs estimated by the Spectrum model were 10.55, 11.13 and 8.15 for the periods 1994–1999, 2000–2004 and 2005–2009, respectively. The cohort data, for both sexes combined, showed that the 45Q15 declined from 39% (95% CI: 27–55%) in 1994 to 22% (95% CI: 17–29%) in 2009, whereas the Spectrum model predicted a decline from 43% in 1994 to 37% in 2009. Conclusion: From 1994 to 2009, the observed decrease in ASMRs was steeper in younger age groups than that predicted by the Spectrum model, perhaps because the Spectrum model under-estimated the ASMRs in 30–34 year olds in 1994–99. However, the Spectrum model predicted a greater decrease in 45Q15 mortality than observed in the cohort, although the reasons for this over-estimate are unclear.
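
    The 45Q15 statistic quoted above can be reproduced from five-year age-specific mortality rates using the standard demographic approximation 5qx ≈ 5·mx / (1 + 2.5·mx). The sketch below shows the arithmetic with invented rates, not the cohort's actual data.

```python
def prob_dying_15_to_60(asmr_per_1000):
    """45Q15 from five-year age-specific mortality rates (ages 15-19 ... 55-59)."""
    surv = 1.0
    for rate in asmr_per_1000:
        m = rate / 1000.0                        # deaths per person-year
        q5 = 5.0 * m / (1.0 + 2.5 * m)           # probability of dying within the 5-year group
        surv *= 1.0 - q5
    return 1.0 - surv

# Hypothetical ASMRs per 1,000 person-years for the nine age groups 15-19 ... 55-59.
example_rates = [2.0, 4.0, 8.0, 13.0, 11.0, 9.0, 8.0, 9.0, 11.0]
print(round(prob_dying_15_to_60(example_rates), 3))
```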

  14. Nonsurgical giant cell tumour of the tendon sheath or of the diffuse type: Are MRI or 18F-FDG PET/CT able to provide an accurate prediction of long-term outcome?

    International Nuclear Information System (INIS)

    Dercle, Laurent; Chisin, Roland; Ammari, Samy; Gillebert, Quentin; Ouali, Monia; Jaudet, Cyril; Dierickx, Lawrence; Zerdoud, Slimane; Courbon, Frederic; Delord, Jean-Pierre; Schlumberger, Martin

    2015-01-01

    To investigate whether MRI (RECIST 1.1, WHO criteria and the volumetric approach) or 18F-FDG PET/CT (PERCIST 1.0) are able to predict long-term outcome in nonsurgical patients with giant cell tumour of the tendon sheath or of the diffuse type (GCT-TS/DT). Fifteen “nonsurgical” patients with a histological diagnosis of GCT-TS/DT were divided into two groups: symptomatic patients receiving targeted therapy and asymptomatic untreated patients. All 15 patients were evaluated by MRI, of whom 10 were treated, and a subgroup of 7 patients were evaluated by PET/CT, of whom 4 were treated. Early evolution was assessed according to MRI and PET/CT scans at baseline and during follow-up. Cohen's kappa coefficient was used to evaluate the degree of agreement between PERCIST 1.0, RECIST 1.1, WHO criteria, volumetric approaches and the reference standard (long-term outcome, delay 505 ± 457 days). The response rate in symptomatic patients with GCT-TS/DT receiving targeted therapy was also assessed in a larger population that included additional patients obtained from a review of the literature. The kappa coefficients for agreement between RECIST/WHO/volumetric criteria and outcome (15 patients) were respectively: 0.35 (p = 0.06), 0.26 (p = 0.17) and 0.26 (p = 0.17). In the PET/CT subgroup (7 patients), PERCIST was in perfect agreement with the late symptomatic evolution (kappa = 1). 18F-FDG PET/CT with PERCIST is a promising approach to the prediction of the long-term outcome in GCT-TS/DT and may avoid unnecessary treatments, toxicity and costs. On MRI, WHO and volumetric approaches are not more effective than RECIST using the current thresholds. (orig.)
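
    Cohen's kappa, used above to compare each response criterion against long-term outcome, can be computed from two sets of categorical calls as sketched below; the example labels are invented.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two categorical raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - expected) / (1.0 - expected)

# Hypothetical per-patient calls: criterion-based response vs. long-term outcome.
criterion = ["response", "progression", "response", "response", "progression"]
outcome   = ["response", "progression", "progression", "response", "progression"]
print(round(cohens_kappa(criterion, outcome), 2))
```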

  15. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Sargent, T.O.

    1981-01-01

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail

  16. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when the feedback is delayed. Travelers prefer the route reported to be in the best condition when given accurate information, yet delayed information reflects past rather than current traffic conditions; travelers therefore make wrong routing decisions, causing a decrease in capacity, an increase in oscillations and a deviation of the system from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality is helpful for improving efficiency in terms of capacity, oscillation and the gap from system equilibrium.
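
    A toy day-to-day simulation of the bounded-rationality rule described above might look like the following. The travel-time functions, the feedback delay and the threshold BR are all illustrative assumptions rather than the parameters of the cited study.

```python
import random

def simulate(days=100, n_travelers=1000, br=2.0, delay=3):
    """Two-route choice with delayed feedback and a boundedly rational threshold BR."""
    history = [(0.0, 0.0)]                      # reported travel times (route A, route B) per day
    flows = []
    for day in range(days):
        reported = history[max(0, len(history) - 1 - delay)]
        diff = reported[0] - reported[1]        # route A minus route B
        if abs(diff) < br:
            p_a = 0.5                           # indifferent: choose at random
        else:
            p_a = 0.1 if diff > 0 else 0.9      # prefer the route reported as faster
        flow_a = sum(random.random() < p_a for _ in range(n_travelers))
        flow_b = n_travelers - flow_a
        # Simple linear congestion: travel time grows with flow.
        t_a = 20.0 + 0.02 * flow_a
        t_b = 20.0 + 0.02 * flow_b
        history.append((t_a, t_b))
        flows.append(flow_a)
    return flows

print(simulate()[-5:])   # flow on route A over the last few simulated days
```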

  17. Can brain responses to movie trailers predict success?

    OpenAIRE

    Boksem, Maarten

    2015-01-01

    textabstractDecades of research have shown that much of our mental processing occurs at the subconscious level, including the decisions we make as consumers. These subconscious processes explain why we so often fail to accurately predict our own future choices. Often what we think we want has little or no bearing on the choices we actually make. Now a new study provides the first evidence that brain measures can provide significant added value to models for predicting consumer choice.

  18. The influence of land-use and land-management on Soil Organic Carbon concentrations: Limitations of making predictions using only soil order data

    Science.gov (United States)

    Bell, M. J.; Worrall, F.

    2009-04-01

    In light of recent concern over the extent of global warming and the role of soil carbon as a potential store of atmospheric carbon, there is increasing demand for regions to estimate their current soil organic carbon (SOC) stocks with the greatest possible accuracy. Several previous attempts at calculating SOC baselines at global, national or regional scale have used mean values for soil orders and multiplied these values by the mapped areas of the soils they represent. Other methods have approached the task from a land cover point of view, making estimates using only land-use, or soil order/land-use combinations, and others have included variables such as altitude, climate and soil texture. This study aimed to assess the major controls on SOC concentrations (%SOC) at the National Trust Wallington estate in Northumberland, NE England (area = 55 km2), where an extensive soil sampling campaign was used to test what level of accuracy could be achieved in modelling the %SOC values on the Estate. Mapped %SOC values were compared to the values predicted from The National Soils Resources Institute (NSRI) representative soil profile data for major soil group, soil series and land-use corrected soil series values, as well as land-use/major soil group combinations from the Countryside Survey database. The results of this study can be summarised as follows: when only soil series or land-use were used as predictors, only 48% and 44% of the variation in the dataset were explained; when soil series/land-use combinations were used, explanatory power increased to 57%; both altitude and soil pH are major controls on %SOC, and including these variables gave an improvement to 59%; a further improvement from 59% to 66% in the ability to predict %SOC levels at point locations when farm tenancy was included indicates that differences in land-management practices between farm tenancies explained more of the variation in %SOC than either soil series or land-use. Further work will involve a

  19. When Is Network Lasso Accurate?

    Directory of Open Access Journals (Sweden)

    Alexander Jung

    2018-01-01

    Full Text Available The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method allows one to learn graph signals from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, only little is known about the conditions on the underlying network structure which ensure that the network Lasso is accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee that the network Lasso, for a particular loss function, delivers an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
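
    As a rough illustration of the network Lasso objective, a squared loss on the sampled nodes plus total-variation regularisation over the edges, the sketch below minimises it with plain subgradient descent on a tiny chain graph. The graph, samples, regularisation weight and step size are arbitrary, and the recovery conditions derived in the paper are not checked.

```python
import numpy as np

def network_lasso(n_nodes, edges, samples, lam=0.5, steps=2000, lr=0.01):
    """Minimise 0.5*sum_i (x_i - y_i)^2 over sampled nodes + lam*sum_edges |x_i - x_j|."""
    x = np.zeros(n_nodes)
    for _ in range(steps):
        grad = np.zeros(n_nodes)
        for i, y in samples.items():            # data-fit term on the sampled nodes only
            grad[i] += x[i] - y
        for i, j in edges:                      # subgradient of the total-variation term
            s = np.sign(x[i] - x[j])
            grad[i] += lam * s
            grad[j] -= lam * s
        x -= lr * grad
    return x

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]        # a 5-node chain graph
samples = {0: 1.0, 4: 5.0}                      # noisy signal values at two sampled nodes
print(np.round(network_lasso(5, edges, samples), 2))
```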

  20. An ecologically based model of alcohol-consumption decision making: evidence for the discriminative and predictive role of contextual reward and punishment information.

    Science.gov (United States)

    Bogg, Tim; Finn, Peter R

    2009-05-01

    Using insights from Ecological Systems Theory and Reinforcement Sensitivity Theory, the current study assessed the utility of a series of hypothetical role-based alcohol-consumption scenarios that varied in their presentation of rewarding and punishing information. The scenarios, along with measures of impulsive sensation seeking and a self-report of weekly alcohol consumption, were administered to a sample of alcohol-dependent and non-alcohol-dependent college-age individuals (N = 170). The results showed scenario attendance decisions were largely unaffected by alcohol-dependence status and variations in contextual reward and punishment information. In contrast to the attendance findings, the results for the alcohol-consumption decisions showed alcohol-dependent individuals reported a greater frequency of deciding to drink, as well as indicating greater alcohol consumption in the contexts of complementary rewarding or nonpunishing information. Regression results provided evidence for the criterion-related validity of scenario outcomes in an account of diagnostic alcohol problems. The results are discussed in terms of the conceptual and predictive gains associated with an assessment approach to alcohol-consumption decision making that combines situational information organized and balanced through the frameworks of Ecological Systems Theory and Reinforcement Sensitivity Theory.

  1. The Accurate Particle Tracer Code

    OpenAIRE

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...

  2. Accurate x-ray spectroscopy

    International Nuclear Information System (INIS)

    Deslattes, R.D.

    1987-01-01

    Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. Ions having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy which can be realized in measurement of these spectra. This report aims to provide background about spectroscopic limitations and to discuss how accelerator operations may be selected to permit attaining intrinsically limited data.

  3. Achieving target voriconazole concentrations more accurately in children and adolescents.

    Science.gov (United States)

    Neely, Michael; Margol, Ashley; Fu, Xiaowei; van Guilder, Michael; Bayard, David; Schumitzky, Alan; Orbach, Regina; Liu, Siyu; Louie, Stan; Hope, William

    2015-01-01

    Despite the documented benefit of voriconazole therapeutic drug monitoring, nonlinear pharmacokinetics make the timing of steady-state trough sampling and appropriate dose adjustments unpredictable by conventional methods. We developed a nonparametric population model with data from 141 previously richly sampled children and adults. We then used it in our multiple-model Bayesian adaptive control algorithm to predict measured concentrations and doses in a separate cohort of 33 pediatric patients aged 8 months to 17 years who were receiving voriconazole and enrolled in a pharmacokinetic study. Using all available samples to estimate the individual Bayesian posterior parameter values, the median percent prediction bias relative to a measured target trough concentration in the patients was 1.1% (interquartile range, -17.1 to 10%). Compared to the actual dose that resulted in the target concentration, the percent bias of the predicted dose was -0.7% (interquartile range, -7 to 20%). Using only trough concentrations to generate the Bayesian posterior parameter values, the target bias was 6.4% (interquartile range, -1.4 to 14.7%; P = 0.16 versus the full posterior parameter value) and the dose bias was -6.7% (interquartile range, -18.7 to 2.4%; P = 0.15). Use of a sample collected at an optimal time of 4 h after a dose, in addition to the trough concentration, resulted in a nonsignificantly improved target bias of 3.8% (interquartile range, -13.1 to 18%; P = 0.32) and a dose bias of -3.5% (interquartile range, -18 to 14%; P = 0.33). With the nonparametric population model and trough concentrations, our control algorithm can accurately manage voriconazole therapy in children independently of steady-state conditions, and it is generalizable to any drug with a nonparametric pharmacokinetic model. (This study has been registered at ClinicalTrials.gov under registration no. NCT01976078.). Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  4. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
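
    The quantity this formula refines can be illustrated numerically: sample the radiation intensity on a (theta, phi) grid, integrate with the sin(theta) weight to obtain the total radiated power, and take D = 4·pi·U_max / P_rad. The trapezoidal grid below is only a sketch and is not the spherical-wave-expansion formula of the record above.

```python
import numpy as np

def directivity(intensity, n_theta=181, n_phi=360):
    """Directivity D = 4*pi*U_max / P_rad from samples of the radiation intensity U(theta, phi)."""
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    U = intensity(T, P)
    # Integrate U(theta, phi) * sin(theta) over the sphere with the trapezoidal rule.
    p_rad = np.trapz(np.trapz(U * np.sin(T), phi, axis=1), theta)
    return 4.0 * np.pi * U.max() / p_rad

# Example: ideal (Hertzian) dipole pattern U ~ sin^2(theta); the exact directivity is 1.5.
print(directivity(lambda t, p: np.sin(t) ** 2))
```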

  5. Helplessness/hopelessness, minimization and optimism predict survival in women with invasive ovarian cancer: a role for targeted support during initial treatment decision-making?

    Science.gov (United States)

    Price, Melanie A; Butow, Phyllis N; Bell, Melanie L; deFazio, Anna; Friedlander, Michael; Fardell, Joanna E; Protani, Melinda M; Webb, Penelope M

    2016-06-01

    Women with advanced ovarian cancer generally have a poor prognosis but there is significant variability in survival despite similar disease characteristics and treatment regimens. The aim of this study was to determine whether psychosocial factors predict survival in women with ovarian cancer, controlling for potential confounders. The sample comprised 798 women with invasive ovarian cancer recruited into the Australian Ovarian Cancer Study and a subsequent quality of life study. Validated measures of depression, optimism, minimization, helplessness/hopelessness, and social support were completed 3-6 monthly for up to 2 years. Four hundred nineteen women (52.5 %) died over the follow-up period. Associations between time-varying psychosocial variables and survival were tested using adjusted Cox proportional hazard models. There was a significant interaction of psychosocial variables measured prior to first progression and overall survival, with higher optimism (adjusted hazard ratio per 1 standard deviation (HR) = 0.80, 95 % confidence interval (CI) 0.65-0.97), higher minimization (HR = 0.79, CI 0.66-0.94), and lower helplessness/hopelessness (HR = 1.40, CI 1.15-1.71) associated with longer survival. After disease progression, these variables were not associated with survival (optimism HR = 1.10, CI 0.95-1.27; minimization HR = 1.12, CI 0.95-1.31; and helplessness/hopelessness HR = 0.86, CI 0.74-1.00). Depression and social support were not associated with survival. In women with invasive ovarian cancer, psychosocial variables prior to disease progression appear to impact on overall survival, suggesting a preventive rather than modifying role. Addressing psychosocial responses to cancer and their potential impact on treatment decision-making early in the disease trajectory may benefit survival and quality of life.

  6. WGS accurately predicts antimicrobial resistance in Escherichia coli

    Science.gov (United States)

    Objectives: To determine the effectiveness of whole-genome sequencing (WGS) in identifying resistance genotypes of multidrug-resistant Escherichia coli (E. coli) and whether these correlate with observed phenotypes. Methods: Seventy-six E. coli strains were isolated from farm cattle and measured f...

  7. Standardized EEG interpretation accurately predicts prognosis after cardiac arrest

    NARCIS (Netherlands)

    Westhall, Erik; Rossetti, Andrea O.; van Rootselaar, Anne-Fleur; Wesenberg Kjaer, Troels; Horn, Janneke; Ullén, Susann; Friberg, Hans; Nielsen, Niklas; Rosén, Ingmar; Åneman, Anders; Erlinge, David; Gasche, Yvan; Hassager, Christian; Hovdenes, Jan; Kjaergaard, Jesper; Kuiper, Michael; Pellis, Tommaso; Stammet, Pascal; Wanscher, Michael; Wetterslev, Jørn; Wise, Matt P.; Cronberg, Tobias; Saxena, Manoj; Miller, Jennene; Inskip, Deborah; Macken, Lewis; Finfer, Simon; Eatough, Noel; Hammond, Naomi; Bass, Frances; Yarad, Elizabeth; O'Connor, Anne; Bird, Simon; Jewell, Timothy; Davies, Gareth; Ng, Karl; Coward, Sharon; Stewart, Antony; Micallef, Sharon; Parker, Sharyn; Cortado, Dennis; Gould, Ann; Harward, Meg; Thompson, Kelly; Glass, Parisa; Myburgh, John; Smid, Ondrej; Belholavek, Jan; Juffermans, Nicole P.; Boerma, EC

    2016-01-01

    To identify reliable predictors of outcome in comatose patients after cardiac arrest using a single routine EEG and standardized interpretation according to the terminology proposed by the American Clinical Neurophysiology Society. In this cohort study, 4 EEG specialists, blinded to outcome,

  8. Accurate prediction of secondary metabolite gene clusters in filamentous fungi

    DEFF Research Database (Denmark)

    Andersen, Mikael Rørdam; Nielsen, Jakob Blæsbjerg; Klitgaard, Andreas

    2013-01-01

    Biosynthetic pathways of secondary metabolites from fungi are currently subject to an intense effort to elucidate the genetic basis for these compounds due to their large potential within pharmaceutics and synthetic biochemistry. The preferred method is methodical gene deletions to identify...... used A. nidulans for our method development and validation due to the wealth of available biochemical data, but the method can be applied to any fungus with a sequenced and assembled genome, thus supporting further secondary metabolite pathway elucidation in the fungal kingdom....

  9. Ethics and epistemology of accurate prediction in clinical research.

    Science.gov (United States)

    Hey, Spencer Phillips

    2015-07-01

    All major research ethics policies assert that the ethical review of clinical trial protocols should include a systematic assessment of risks and benefits. But despite this policy, protocols do not typically contain explicit probability statements about the likely risks or benefits involved in the proposed research. In this essay, I articulate a range of ethical and epistemic advantages that explicit forecasting would offer to the health research enterprise. I then consider how some particular confidence levels may come into conflict with the principles of ethical research. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  10. Accurate thermodynamic characterization of a synthetic coal mine methane mixture

    International Nuclear Information System (INIS)

    Hernández-Gómez, R.; Tuma, D.; Villamañán, M.A.; Mondéjar, M.E.; Chamorro, C.R.

    2014-01-01

    Highlights: • Accurate density data of a 10 components synthetic coal mine methane mixture are presented. • Experimental data are compared with the densities calculated from the GERG-2008 equation of state. • Relative deviations in density were within a 0.2% band at temperatures above 275 K. • Densities at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations. -- Abstract: In the last few years, coal mine methane (CMM) has gained significance as a potential non-conventional gas fuel. The progressive depletion of common fossil fuels reserves and, on the other hand, the positive estimates of CMM resources as a by-product of mining promote this fuel gas as a promising alternative fuel. The increasing importance of its exploitation makes it necessary to check the capability of the present-day models and equations of state for natural gas to predict the thermophysical properties of gases with a considerably different composition, like CMM. In this work, accurate density measurements of a synthetic CMM mixture are reported in the temperature range from (250 to 400) K and pressures up to 15 MPa, as part of the research project EMRP ENG01 of the European Metrology Research Program for the characterization of non-conventional energy gases. Experimental data were compared with the densities calculated with the GERG-2008 equation of state. Relative deviations between experimental and estimated densities were within a 0.2% band at temperatures above 275 K, while data at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations

  11. Adult age differences in predicting memory performance: the effects of normative information and task experience.

    Science.gov (United States)

    McDonald-Miszczak, L; Hunter, M A; Hultsch, D F

    1994-03-01

    Two experiments addressed the effects of task information and experience on younger and older adults' ability to predict their memory for words. The first study examined the effects of normative task information on subjects' predictions for 30-word lists across three trials. The second study looked at the effects of making predictions and recalling either an easy (15-word) or a difficult (45-word) list prior to making predictions and recalling a moderately difficult (30-word) list. The results from both studies showed that task information and experience affected subjects' predictions and that elderly adults predicted their performance more accurately than younger adults.

  12. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical NearField Antenna Test Facility, it was concluded that the three latter factors are particularly important...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  13. Accurate thickness measurement of graphene

    International Nuclear Information System (INIS)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-01-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. (paper)

  14. Accurate shear measurement with faint sources

    International Nuclear Information System (INIS)

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys

  15. Accurate Cross Sections for Microanalysis

    OpenAIRE

    Rez, Peter

    2002-01-01

    To calculate the intensity of x-ray emission in electron beam microanalysis requires a knowledge of the energy distribution of the electrons in the solid, the energy variation of the ionization cross section of the relevant subshell, the fraction of ionization events producing x rays of interest and the absorption coefficient of the x rays on the path to the detector. The theoretical predictions and experimental data available for ionization cross sections are limited mainly to K shells of a...

  16. When friends make you blue: the role of friendship contingent self-esteem in predicting self-esteem and depressive symptoms.

    Science.gov (United States)

    Cambron, M Janelle; Acitelli, Linda K; Steinberg, Lynne

    2010-03-01

    This research examines the role of friendship contingent self-esteem (FCSE), or self-esteem that is dependent on the quality of one's friendships, in predicting depressive symptoms. In Study 1, the authors developed a measure of FCSE. Both FCSE and others' approval correlated with self-esteem and depressive symptoms, but when entered simultaneously in a regression equation, only FCSE significantly predicted self-esteem and depressive symptoms. Study 2 showed that dependency and close friendship competence predicted depressive symptoms only for those high in FCSE. In Study 3, a diary study, FCSE predicted self-esteem instability. Self-esteem instability, in turn, predicted depressive symptoms. Furthermore, a three-way interaction of rumination, FCSE, and the valence of the event predicted momentary self-esteem. Findings are discussed with regard to the importance of considering FCSE when investigating interpersonal risk for depression.

  17. Heuristic decision making.

    Science.gov (United States)

    Gigerenzer, Gerd; Gaissmaier, Wolfgang

    2011-01-01

    As reflected in the amount of controversy, few areas in psychology have undergone such dramatic conceptual changes in the past decade as the emerging science of heuristics. Heuristics are efficient cognitive processes, conscious or unconscious, that ignore part of the information. Because using heuristics saves effort, the classical view has been that heuristic decisions imply greater errors than do "rational" decisions as defined by logic or statistical models. However, for many decisions, the assumptions of rational models are not met, and it is an empirical rather than an a priori issue how well cognitive heuristics function in an uncertain world. To answer both the descriptive question ("Which heuristics do people use in which situations?") and the prescriptive question ("When should people rely on a given heuristic rather than a complex strategy to make better judgments?"), formal models are indispensable. We review research that tests formal models of heuristic inference, including in business organizations, health care, and legal institutions. This research indicates that (a) individuals and organizations often rely on simple heuristics in an adaptive way, and (b) ignoring part of the information can lead to more accurate judgments than weighting and adding all information, for instance for low predictability and small samples. The big future challenge is to develop a systematic theory of the building blocks of heuristics as well as the core capacities and environmental structures these exploit.

  18. Accuracy of ‘My Gut Feeling:’ Comparing System 1 to System 2 Decision-Making for Acuity Prediction, Disposition and Diagnosis in an Academic Emergency Department

    Directory of Open Access Journals (Sweden)

    Daniel Cabrera

    2015-10-01

    Full Text Available Introduction: Current cognitive sciences describe decision-making using the dual-process theory, where a System 1 decision is intuitive and a System 2 decision is hypothetico-deductive. We aim to compare the performance of these systems in determining patient acuity, disposition and diagnosis. Methods: Prospective observational study of emergency physicians assessing patients in the emergency department of an academic center. Physicians were provided the patient’s chief complaint and vital signs and allowed to observe the patient briefly. They were then asked to predict acuity, final disposition (home, intensive care unit (ICU), non-ICU bed) and diagnosis. A patient was classified as sick by the investigators using previously published objective criteria. Results: We obtained 662 observations from 289 patients. For acuity, the observers had a sensitivity of 73.9% (95% CI [67.7-79.5%]), specificity 83.3% (95% CI [79.5-86.7%]), positive predictive value 70.3% (95% CI [64.1-75.9%]) and negative predictive value 85.7% (95% CI [82.0-88.9%]). For final disposition, the observers made a correct prediction in 80.8% (95% CI [76.1-85.0%]) of the cases. For ICU admission, emergency physicians had a sensitivity of 33.9% (95% CI [22.1-47.4%]) and a specificity of 96.9% (95% CI [94.0-98.7%]). The correct diagnosis was made 54% of the time with the limited data available. Conclusion: System 1 decision-making based on limited information had a sensitivity close to 80% for acuity and disposition prediction, but the performance was lower for predicting ICU admission and diagnosis. System 1 decision-making appears insufficient for final decisions in these domains but likely provides a cognitive framework for System 2 decision-making.
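
    The test characteristics reported above follow directly from a 2×2 table of predictions against the reference classification. The sketch below computes them from raw counts; the numbers are invented, not the study's data.

```python
def test_characteristics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts: physician's "sick" call vs. the objective reference standard.
print(test_characteristics(tp=170, fp=72, fn=60, tn=360))
```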

  19. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of energetic runaway beam on the same time.
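
    APT itself is built on advanced geometric algorithms; as a much simpler, generic stand-in, the sketch below shows the classic Boris push for a charged particle in electric and magnetic fields, a widely used volume-preserving integrator. The fields, charge-to-mass ratio and time step are arbitrary example values, and the code is not taken from APT.

```python
import numpy as np

def boris_push(x, v, E, B, qm, dt):
    """One Boris step: half electric kick, magnetic rotation, half electric kick, drift."""
    v_minus = v + 0.5 * qm * dt * E
    t = 0.5 * qm * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * qm * dt * E
    return x + dt * v_new, v_new

# Gyration in a uniform magnetic field along z (no electric field).
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
E, B, qm, dt = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0, 0.05
for _ in range(1000):
    x, v = boris_push(x, v, E, B, qm, dt)
print(np.linalg.norm(v))   # the speed stays at 1.0: the rotation step is norm-preserving
```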

  20. Keep it Accurate and Diverse

    DEFF Research Database (Denmark)

    Ali Bagheri, Mohammad; Gao, Qigang; Guerrero, Sergio Escalera

    2015-01-01

    the performance of an ensemble of action learning techniques, each performing the recognition task from a different perspective. The underlying idea is that instead of aiming at a very sophisticated and powerful representation/learning technique, we can learn action categories using a set of relatively simple...... to improve the recognition performance, a powerful combination strategy is utilized based on the Dempster-Shafer theory, which can effectively make use of the diversity of base learners trained on different sources of information. The recognition results of the individual classifiers are compared with those...... obtained from fusing the classifiers’ output, showing enhanced performance of the proposed methodology....

  1. The role of self-reported impulsivity and reward sensitivity versus neurocognitive measures of disinhibition and decision making in the prediction of relapse in pathological gamblers

    NARCIS (Netherlands)

    Goudriaan, A.E.; Oosterlaan, J.; de Beurs, E.; van den Brink, W.

    2008-01-01

    Background: Disinhibition and decision-making skills play an important role in theories on the cause and outcome of addictive behaviors such as substance use disorders and pathological gambling. In recent studies, both disinhibition and disadvantageous decision-making strategies, as measured by

  2. The role of self-reported impulsivity and reward sensitivity versus neurocognitive measures of disinhibition and decision-making in the prediction of relapse in pathological gamblers

    NARCIS (Netherlands)

    Goudriaan, A. E.; Oosterlaan, J.; de Beurs, E.; van den Brink, W.

    2008-01-01

    BACKGROUND: Disinhibition and decision-making skills play an important role in theories on the cause and outcome of addictive behaviors such as substance use disorders and pathological gambling. In recent studies, both disinhibition and disadvantageous decision-making strategies, as measured by

  3. Early Prediction of Student Dropout and Performance in MOOCSs Using Higher Granularity Temporal Information

    Science.gov (United States)

    Ye, Cheng; Biswas, Gautam

    2014-01-01

    Our project is motivated by the early dropout and low completion rate problem in MOOCs. We have extended traditional features for MOOC analysis with richer and higher granularity information to make more accurate predictions of dropout and performance. The results show that finer-grained temporal information increases the predictive power in the…

  4. Unsupervised energy prediction in a smart grid context using reinforcement cross-buildings transfer learning

    NARCIS (Netherlands)

    Mocanu, E.; Nguyen, P.H.; Kling, W.L.; Gibescu, M.

    2016-01-01

    In a future Smart Grid context, increasing challenges in managing the stochastic local energy supply and demand are expected. This increased the need of more accurate energy prediction methods in order to support further complex decision-making processes. Although many methods aiming to predict the

  5. Prediction degradation trend of nuclear equipment based on GM (1, 1)-Markov chain

    International Nuclear Information System (INIS)

    Zhang Liming; Zhao Xinwen; Cai Qi; Wu Guangjiang

    2010-01-01

    The degradation trend prediction results are important references for nuclear equipment in-service inspection and maintenance planning. However, it is difficult to predict the degradation trend of nuclear equipment accurately with traditional statistical probability methods because of small samples, a lack of degradation data and a fluctuating degradation locus. Therefore, a method of equipment degradation trend prediction based on the GM (1, 1)-Markov chain is proposed in this paper. The method, which makes use of the advantages of both the GM (1, 1) model and the Markov chain, can improve the prediction precision of the nuclear equipment degradation trend. The paper collected degradation data as samples and accurately predicted the degradation trend of a canned motor pump. Compared with the prediction results of the GM (1, 1) method alone, the prediction by the GM (1, 1)-Markov chain is more accurate. (authors)
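
    A minimal GM (1, 1) grey model, the first stage of the combined method above, takes only a few lines: accumulate the series, fit the development coefficient by least squares, and difference the fitted accumulated series back. The sample data are invented, and the Markov-chain correction of the residuals is omitted.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    """Grey GM(1,1) forecast of a short, roughly monotone degradation series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # mean sequence of x1
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # development coefficient, grey input
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # fitted accumulated series
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]                              # forecasts beyond the observed sample

# Hypothetical degradation indicator of a canned motor pump over five inspections.
print(np.round(gm11_forecast([2.1, 2.4, 2.9, 3.5, 4.2]), 2))
```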

  6. Fishing site mapping using local knowledge provides accurate and ...

    African Journals Online (AJOL)

    Accurate fishing ground maps are necessary for fisheries monitoring. In the Velondriake locally managed marine area (LMMA) we observed that the nomenclature of shared fishing sites (FS) is village-dependent. Additionally, the level of illiteracy makes data collection more complicated, leading to data collectors improvising ...

  7. General approach for accurate resonance analysis in transformer windings

    NARCIS (Netherlands)

    Popov, M.

    2018-01-01

    In this paper, resonance effects in transformer windings are thoroughly investigated and analyzed. The resonance is determined by making use of an accurate approach based on the application of the impedance matrix of a transformer winding. The method is validated by a test coil and the numerical

  8. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Science.gov (United States)

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  9. The accurate definition of metabolic volumes on 18F-FDG-PET before treatment allows the response to chemoradiotherapy to be predicted in the case of oesophagus cancers

    Energy Technology Data Exchange (ETDEWEB)

    Hatt, M.; Cheze-Le Rest, C.; Visvikis, D. [Inserm U650, Brest (France)]; Pradier, O. [Radiotherapie, CHRU Morvan, Brest (France)]

    2011-10-15

    This study aims at assessing the possibility of predicting the response of locally advanced oesophagus cancers, even before the beginning of treatment, by using metabolic volume measurements performed on 18F-FDG PET images made before the treatment. Medical files of 50 patients were analyzed. According to the observed responses, and to the metabolic volume and Total Lesion Glycolysis (TLG) values, it appears that the images allow the extraction of parameters, such as the TLG, which are criteria for the prediction of the therapeutic response. Short communication
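
    Total lesion glycolysis is conventionally the product of the metabolic tumour volume and the mean SUV within it. A minimal voxel-based sketch is given below, with an invented image and a 40%-of-maximum threshold assumed as the segmentation rule (the record above does not state which delineation method was used).

```python
import numpy as np

def total_lesion_glycolysis(suv_image, voxel_volume_ml, threshold_fraction=0.4):
    """TLG = metabolic volume (ml) * mean SUV inside the thresholded lesion."""
    mask = suv_image >= threshold_fraction * suv_image.max()
    metabolic_volume = mask.sum() * voxel_volume_ml
    return metabolic_volume * suv_image[mask].mean()

# Tiny synthetic "PET" volume with a hot lesion in the centre.
img = np.ones((10, 10, 10))
img[4:7, 4:7, 4:7] = 8.0
print(total_lesion_glycolysis(img, voxel_volume_ml=0.064))
```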

  10. The role of self-reported impulsivity and reward sensitivity versus neurocognitive measures of disinhibition and decision making in the prediction of relapse in pathological gamblers

    OpenAIRE

    Goudriaan, A.E.; Oosterlaan, J.; de Beurs, E.; van den Brink, W.

    2008-01-01

    Background: Disinhibition and decision-making skills play an important role in theories on the cause and outcome of addictive behaviors such as substance use disorders and pathological gambling. In recent studies, both disinhibition and disadvantageous decision-making strategies, as measured by neurocognitive tests, have been found to influence the course of substance use disorders. Research on factors affecting relapse in pathological gambling is scarce. Method: This study investigated the e...

  11. Can brain responses to movie trailers predict success?

    NARCIS (Netherlands)

    M.A.S. Boksem (Maarten)

    2015-01-01

    textabstractDecades of research have shown that much of our mental processing occurs at the subconscious level, including the decisions we make as consumers. These subconscious processes explain why we so often fail to accurately predict our own future choices. Often what we think we want has

  12. Moderation of Stimulus Material on the Prediction of IQ with Infants' Performance in the Visual Expectation Paradigm: Do Greebles Make the Task More Challenging?

    Science.gov (United States)

    Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vöhringer, Isabel A.; Suhrke, Janina; Poloczek, Sonja; Freitag, Claudia; Lamm, Bettina; Teiser, Johanna; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun

    2015-01-01

    The objective of this study was to examine the role of the stimulus material for the prediction of later IQ by early learning measures in the Visual Expectation Paradigm (VExP). The VExP was assessed at 9 months using two types of stimuli, Greebles and human faces. Greebles were assumed to be associated with a higher load on working memory in…

  13. Requirements for accurately diagnosing chronic partial upper urinary tract obstruction in children with hydronephrosis

    International Nuclear Information System (INIS)

    Koff, Stephen A.

    2008-01-01

    Successful management of hydronephrosis in the newborn requires early accurate diagnosis to identify or exclude ureteropelvic junction obstruction. However, the presence of hydronephrosis does not define obstruction and displays unique behavior in the newborn. The hydronephrotic kidney usually has nearly normal differential renal function at birth, has not been subjected to progressive dilation and except for pelvocaliectasis does not often show signs of high-grade obstruction. Furthermore, severe hydronephrosis resolves spontaneously in more than 65% of newborns with differential renal function stable or improving. The diagnosis of obstruction in newborn hydronephrosis is challenging because the currently available diagnostic tests, ultrasonography and diuretic renography have demonstrated inaccuracy in diagnosing obstruction and predicting which hydronephrotic kidney will undergo deterioration if untreated. Accurate diagnosis of obstruction is possible but it requires an understanding of the uniqueness of both the pathophysiology of obstruction and the biology of the kidney and renal collecting system in this age group. We examine here the requirements for making an accurate diagnosis of obstruction in the young child with hydronephrosis. (orig.)

  14. Requirements for accurately diagnosing chronic partial upper urinary tract obstruction in children with hydronephrosis

    Energy Technology Data Exchange (ETDEWEB)

    Koff, Stephen A. [Ohio State University College of Medicine, Section of Pediatric Urology, Columbus Children's Hospital, Columbus, OH (United States)]

    2008-01-15

    Successful management of hydronephrosis in the newborn requires early accurate diagnosis to identify or exclude ureteropelvic junction obstruction. However, the presence of hydronephrosis does not define obstruction and displays unique behavior in the newborn. The hydronephrotic kidney usually has nearly normal differential renal function at birth, has not been subjected to progressive dilation and except for pelvocaliectasis does not often show signs of high-grade obstruction. Furthermore, severe hydronephrosis resolves spontaneously in more than 65% of newborns with differential renal function stable or improving. The diagnosis of obstruction in newborn hydronephrosis is challenging because the currently available diagnostic tests, ultrasonography and diuretic renography have demonstrated inaccuracy in diagnosing obstruction and predicting which hydronephrotic kidney will undergo deterioration if untreated. Accurate diagnosis of obstruction is possible but it requires an understanding of the uniqueness of both the pathophysiology of obstruction and the biology of the kidney and renal collecting system in this age group. We examine here the requirements for making an accurate diagnosis of obstruction in the young child with hydronephrosis. (orig.)

  15. Spectrally accurate initial data in numerical relativity

    Science.gov (United States)

    Battista, Nicholas A.

    Einstein's theory of general relativity has radically altered the way in which we perceive the universe. His breakthrough was to realize that the fabric of space is deformable in the presence of mass, and that space and time are linked into a continuum. Much evidence has been gathered in support of general relativity over the decades. Some of the indirect evidence for GR includes the phenomenon of gravitational lensing, the anomalous perihelion of mercury, and the gravitational redshift. One of the most striking predictions of GR, that has not yet been confirmed, is the existence of gravitational waves. The primary source of gravitational waves in the universe is thought to be produced during the merger of binary black hole systems, or by binary neutron stars. The starting point for computer simulations of black hole mergers requires highly accurate initial data for the space-time metric and for the curvature. The equations describing the initial space-time around the black hole(s) are non-linear, elliptic partial differential equations (PDE). We will discuss how to use a pseudo-spectral (collocation) method to calculate the initial puncture data corresponding to single black hole and binary black hole systems.

  16. Development of a Clinical Forecasting Model to Predict Comorbid Depression Among Diabetes Patients and an Application in Depression Screening Policy Making.

    Science.gov (United States)

    Jin, Haomiao; Wu, Shinyi; Di Capua, Paul

    2015-09-03

    Depression is a common but often undiagnosed comorbid condition of people with diabetes. Mass screening can detect undiagnosed depression but may require significant resources and time. The objectives of this study were 1) to develop a clinical forecasting model that predicts comorbid depression among patients with diabetes and 2) to evaluate a model-based screening policy that saves resources and time by screening only patients considered as depressed by the clinical forecasting model. We trained and validated 4 machine learning models by using data from 2 safety-net clinical trials; we chose the one with the best overall predictive ability as the ultimate model. We compared model-based policy with alternative policies, including mass screening and partial screening, on the basis of depression history or diabetes severity. Logistic regression had the best overall predictive ability of the 4 models evaluated and was chosen as the ultimate forecasting model. Compared with mass screening, the model-based policy can save approximately 50% to 60% of provider resources and time but will miss identifying about 30% of patients with depression. Partial-screening policy based on depression history alone found only a low rate of depression. Two other heuristic-based partial screening policies identified depression at rates similar to those of the model-based policy but cost more in resources and time. The depression prediction model developed in this study has compelling predictive ability. By adopting the model-based depression screening policy, health care providers can use their resources and time better and increase their efficiency in managing their patients with depression.
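
    A stripped-down version of such a model-based screening policy is sketched below with synthetic data: fit a logistic regression, then refer only patients whose predicted probability exceeds a chosen threshold. The features and the threshold are assumptions for illustration, not the study's variables or cut-off.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic patients: [HbA1c, number of comorbidities, prior depression (0/1)].
X = np.column_stack([
    rng.normal(8.0, 1.5, 500),
    rng.poisson(2.0, 500),
    rng.integers(0, 2, 500),
])
# Synthetic comorbid-depression labels loosely tied to the features.
logits = 0.3 * (X[:, 0] - 8.0) + 0.4 * X[:, 1] + 1.2 * X[:, 2] - 2.0
y = rng.random(500) < 1.0 / (1.0 + np.exp(-logits))

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]

threshold = 0.3                                    # screen only the higher-risk patients
print(f"patients referred for screening: {(risk >= threshold).mean():.0%}")
```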

  17. Time and activity sequence prediction of business process instances

    DEFF Research Database (Denmark)

    Polato, Mirko; Sperduti, Alessandro; Burattin, Andrea

    2018-01-01

    The ability to know in advance the trend of running process instances, with respect to different features such as the expected completion time, would allow business managers to counteract undesired situations in a timely manner, in order to prevent losses. Therefore, the ability to accurately predict future features of running business process instances would be a very helpful aid when managing processes, especially under service level agreement constraints. However, making such accurate forecasts is not easy: many factors may influence the predicted features. Many approaches have been proposed...

  18. Computer-based personality judgments are more accurate than those made by humans

    Science.gov (United States)

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-01

    Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507

  19. Computer-based personality judgments are more accurate than those made by humans.

    Science.gov (United States)

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.

  20. Molecular acidity: An accurate description with information-theoretic approach in density functional reactivity theory.

    Science.gov (United States)

    Cao, Xiaofang; Rong, Chunying; Zhong, Aiguo; Lu, Tian; Liu, Shubin

    2018-01-15

    Molecular acidity is one of the important physicochemical properties of a molecular system, yet its accurate calculation and prediction are still an unresolved problem in the literature. In this work, we propose to make use of the quantities from the information-theoretic (IT) approach in density functional reactivity theory and provide an accurate description of molecular acidity from a completely new perspective. To illustrate our point, five different categories of acidic series, singly and doubly substituted benzoic acids, singly substituted benzenesulfinic acids, benzeneseleninic acids, phenols, and alkyl carboxylic acids, have been thoroughly examined. We show that using IT quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy, information gain, Onicescu information energy, and relative Rényi entropy, one is able to simultaneously predict experimental pKa values of these different categories of compounds. Because of the universality of the quantities employed in this work, which are all density dependent, our approach should be general and be applicable to other systems as well. © 2017 Wiley Periodicals, Inc.

  1. What makes a life event traumatic for a child? The predictive values of DSM-Criteria A1 and A2

    Directory of Open Access Journals (Sweden)

    Eva Verlinden

    2013-08-01

    Background: The Diagnostic and Statistical Manual of Mental Disorders (DSM) Criteria A1 and A2 for posttraumatic stress disorder (PTSD) have been discussed extensively, with several studies in adults or adolescents supporting the removal of Criterion A2. However, solid research in children is missing. Objective: This study evaluated the DSM Criteria A1 and A2 in predicting posttraumatic stress in children. Method: A sample of 588 Dutch school children, aged 8–18 years, completed a self-report questionnaire to determine if they met Criteria A1 and/or A2. Their posttraumatic stress response was assessed using the Children's Revised Impact of Event Scale. Results: The contribution of Criterion A2 to the prediction of posttraumatic stress in children is of greater importance than the contribution of Criterion A1. Children who met Criterion A2 reported significantly higher levels of posttraumatic stress and were nine times more likely to develop probable PTSD than children who did not meet Criterion A2. When Criterion A1 was met, a child was only two times more likely to develop probable PTSD as compared with those where Criterion A1 was not met. Furthermore, the low sensitivity of Criterion A1 suggests that children may regularly develop severe posttraumatic stress in the absence of Criterion A1. The remarkably high negative predictive value of Criterion A2 indicates that if a child does not have a subjective reaction during an event, it is unlikely that he or she will develop PTSD. Conclusions: In contrast to most adult studies, the findings of this study emphasize the significant contribution of Criterion A2 to the prediction of posttraumatic stress in children and raise fundamental questions about the value of the current Criterion A1. For the abstract or full text in other languages, please see Supplementary files under Article Tools online
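
    The headline numbers in this record (a roughly nine-fold increase in the odds of probable PTSD for Criterion A2, the low sensitivity of Criterion A1, the high negative predictive value of A2) are all quantities computable from a 2 × 2 table of criterion status against probable PTSD. The sketch below, in Python with made-up counts rather than the study's data, shows how such metrics are derived.

    ```python
    # Illustrative only: hypothetical counts, not the study's data.
    def two_by_two_metrics(a, b, c, d):
        """a: criterion met & probable PTSD,   b: criterion met & no PTSD,
           c: criterion not met & PTSD,        d: criterion not met & no PTSD."""
        odds_ratio = (a * d) / (b * c)
        sensitivity = a / (a + c)      # P(criterion met | probable PTSD)
        specificity = d / (b + d)
        npv = d / (c + d)              # P(no PTSD | criterion not met)
        return odds_ratio, sensitivity, specificity, npv

    # Hypothetical example: 40 of 300 criterion-positive children develop probable PTSD,
    # versus 5 of 288 criterion-negative children.
    print(two_by_two_metrics(a=40, b=260, c=5, d=283))
    ```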

  2. Development of a Clinical Forecasting Model to Predict Comorbid Depression Among Diabetes Patients and an Application in Depression Screening Policy Making

    OpenAIRE

    Jin, Haomiao; Wu, Shinyi; Di Capua, Paul

    2015-01-01

    Introduction Depression is a common but often undiagnosed comorbid condition of people with diabetes. Mass screening can detect undiagnosed depression but may require significant resources and time. The objectives of this study were 1) to develop a clinical forecasting model that predicts comorbid depression among patients with diabetes and 2) to evaluate a model-based screening policy that saves resources and time by screening only patients considered as depressed by the clinical forecasting...

  3. On the Predictability of Hub Height Winds

    DEFF Research Database (Denmark)

    Draxl, Caroline

    Wind energy is a major source of power in over 70 countries across the world, and the worldwide share of wind energy in electricity consumption is growing. The introduction of significant amounts of wind energy into power systems makes accurate wind forecasting a crucial element of modern electrical...... grids. These systems require forecasts with temporal scales of tens of minutes to a few days in advance at wind farm locations. Traditionally these forecasts predict the wind at turbine hub heights; this information is then converted by transmission system operators and energy companies into predictions...... of power output at wind farms. Since the power available in the wind is proportional to the wind speed cubed, even small wind forecast errors result in large power prediction errors. Accurate wind forecasts are worth billions of dollars annually; forecast improvements will result in reduced costs...

  4. Policy-Led Comparative Environmental Risk Assessment of Genetically Modified Crops: Testing for Increased Risk Rather Than Profiling Phenotypes Leads to Predictable and Transparent Decision-Making

    Directory of Open Access Journals (Sweden)

    Alan Raybould

    2018-04-01

    We describe two contrasting methods of comparative environmental risk assessment for genetically modified (GM) crops. Both are science-based, in the sense that they use science to help make decisions, but they differ in the relationship between science and policy. Policy-led comparative risk assessment begins by defining what would be regarded as unacceptable changes when the use of a particular GM crop replaces an accepted use of another crop. Hypotheses that these changes will not occur are tested using existing or new data, and corroboration or falsification of the hypotheses is used to inform decision-making. Science-led comparative risk assessment, on the other hand, tends to test null hypotheses of no difference between a GM crop and a comparator. The variables that are compared may have little or no relevance to any previously stated policy objective and hence decision-making tends to be ad hoc in response to possibly spurious statistical significance. We argue that policy-led comparative risk assessment is the far more effective method. With this in mind, we caution that phenotypic profiling of GM crops, particularly with omics methods, is potentially detrimental to risk assessment.

  5. Can crop-climate models be accurate and precise? A case study for wheat production in Denmark

    DEFF Research Database (Denmark)

    Montesino San Martin, Manuel; Olesen, Jørgen E.; Porter, John Roy

    2015-01-01

    Crop models, used to make projections of climate change impacts, differ greatly in structural detail. Complexity of model structure has generic effects on uncertainty and error propagation in climate change impact assessments. We applied Bayesian calibration to three distinctly different empirical....... Yields predicted by the mechanistic model were generally more accurate than the empirical models for extrapolated conditions. This trend does not hold for all extrapolations; mechanistic and empirical models responded differently due to their sensitivities to distinct weather features. However, higher...... suitable for generic model ensembles for near-term agricultural impact assessments of climate change....
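
    Bayesian calibration here means conditioning a crop model's parameters on observed yields and carrying the resulting parameter uncertainty into projections. As a minimal, hypothetical sketch (a toy linear yield-versus-temperature model and invented observations, not the authors' empirical or mechanistic crop models), the following calibrates two parameters with a random-walk Metropolis sampler:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical observations: growing-season temperature (deg C) vs wheat yield (t/ha).
    temp = np.array([14.0, 15.5, 16.0, 17.2, 18.1, 19.0])
    yield_obs = np.array([7.9, 7.6, 7.4, 7.0, 6.8, 6.3])

    def toy_model(theta, t):
        a, b = theta                     # intercept and temperature sensitivity
        return a + b * t

    def log_posterior(theta, sigma=0.2):
        # Flat priors inside broad bounds, Gaussian likelihood.
        a, b = theta
        if not (0.0 < a < 20.0 and -2.0 < b < 0.0):
            return -np.inf
        resid = yield_obs - toy_model(theta, temp)
        return -0.5 * np.sum((resid / sigma) ** 2)

    # Random-walk Metropolis sampling of the posterior.
    theta = np.array([10.0, -0.2])
    logp = log_posterior(theta)
    samples = []
    for _ in range(20000):
        proposal = theta + rng.normal(scale=[0.2, 0.02])
        logp_new = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_new - logp:
            theta, logp = proposal, logp_new
        samples.append(theta)

    samples = np.array(samples[5000:])   # discard burn-in
    print("posterior means:", samples.mean(axis=0))
    print("posterior stds: ", samples.std(axis=0))
    ```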

  6. Using fire-weather forecasts and local weather observations in predicting burning index for individual fire-danger stations.

    Science.gov (United States)

    Owen P. Cramer

    1958-01-01

    Any agency engaged in forest-fire control needs accurate weather forecasts and systematic procedures for making the best use of predicted and reported weather information. This study explores the practicability of using several tabular and graphical aids for converting area forecasts and local observations of relative humidity and wind speed into predicted values for...

  7. Quantitative predictions from competition theory with incomplete information on model parameters tested against experiments across diverse taxa

    OpenAIRE

    Fort, Hugo

    2017-01-01

    We derive an analytical approximation for making quantitative predictions for ecological communities as a function of the mean intensity of the inter-specific competition and the species richness. This method, with only a fraction of the model parameters (carrying capacities and competition coefficients), is able to predict accurately empirical measurements covering a wide variety of taxa (algae, plants, protozoa).
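
    The flavour of such an approximation can be illustrated with the textbook Lotka-Volterra competition model. The sketch below is an illustrative mean-field derivation under stated assumptions, not necessarily the paper's exact formula: it predicts equilibrium abundances using only the carrying capacities and the mean competition coefficient, and compares them with the full solution that needs every pairwise coefficient.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    S = 6                                         # species richness
    K = rng.uniform(80, 150, size=S)              # carrying capacities
    A = rng.uniform(0.05, 0.3, size=(S, S))       # pairwise competition coefficients
    np.fill_diagonal(A, 1.0)

    # Full Lotka-Volterra competition equilibrium: solve A @ N = K.
    N_full = np.linalg.solve(A, K)

    # Mean-field approximation: replace every a_ij by the mean coefficient a_bar.
    # Then K_i = N_i * (1 - a_bar) + a_bar * T, with T the community total, giving:
    a_bar = A[~np.eye(S, dtype=bool)].mean()
    T = K.sum() / (1.0 + (S - 1) * a_bar)
    N_approx = (K - a_bar * T) / (1.0 - a_bar)

    print("full solution:    ", np.round(N_full, 1))
    print("mean-field approx:", np.round(N_approx, 1))
    ```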

  8. Making and Changing Wills

    Directory of Open Access Journals (Sweden)

    Cheryl Tilse

    2016-02-01

    Wills are important social, economic, and legal documents. Yet little is known about current will making practices and intentions. A comprehensive national database on the prevalence of will making in Australia was developed to identify who is or is not most likely to draw up a will and the triggers for making and changing wills. A national survey of 2,405 adults aged above 18 years was administered by telephone in August and September 2012. Fifty-nine percent of the Australian adult population has a valid will, and the likelihood of will making increases with age and estate value. Efforts to get organized, especially in combination with life stage and asset changes, trigger will making; procrastination, rather than a strong resistance, appears to explain not making a will. Understanding will making is timely in the context of predicted significant intergenerational transfers of wealth, changing demographics, and a renewed emphasis on retirement planning.

  9. Decision Making

    Directory of Open Access Journals (Sweden)

    Pier Luigi Baldi

    2006-06-01

    This article points out some conditions that significantly influence decisions and compares decision making and problem solving as interconnected processes. Some strategies of decision making are also examined.

  10. Α4β2 and α7 nicotinic acetylcholine receptor binding predicts choice preference in two cost benefit decision-making tasks.

    Science.gov (United States)

    Mendez, I A; Damborsky, J C; Winzer-Serhan, U H; Bizon, J L; Setlow, B

    2013-01-29

    Nicotinic receptors have been linked to a wide range of cognitive and behavioral functions, but surprisingly little is known about their involvement in cost benefit decision making. The goal of these experiments was to determine how nicotinic acetylcholine receptor (nAChR) expression is related to two forms of cost benefit decision making. Male Long Evans rats were tested in probability- and delay-discounting tasks, which required discrete trial choices between a small reward and a large reward associated with varying probabilities of omission and varying delays to reward delivery, respectively. Following testing, radioligand binding to α4β2 and α7 nAChR subtypes in brain regions implicated in cost benefit decision making was examined. Significant linear relationships were observed between choice of the large delayed reward in the delay discounting task and α4β2 receptor binding in both the dorsal and ventral hippocampus. Additionally, trends were found suggesting that choice of the large costly reward in both discounting tasks was inversely related to α4β2 receptor binding in the medial prefrontal cortex and nucleus accumbens shell. Similar trends suggested that choice of the large delayed reward in the delay discounting task was inversely related to α4β2 receptor binding in the orbitofrontal cortex, nucleus accumbens core, and basolateral amygdala, as well as to α7 receptor binding in the basolateral amygdala. These data suggest that nAChRs (particularly α4β2) play both unique and common roles in decisions that require consideration of different types of reward costs. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  11. CAT-PUMA: CME Arrival Time Prediction Using Machine learning Algorithms

    Science.gov (United States)

    Liu, Jiajia; Ye, Yudong; Shen, Chenglong; Wang, Yuming; Erdélyi, Robert

    2018-04-01

    CAT-PUMA (CME Arrival Time Prediction Using Machine learning Algorithms) quickly and accurately predicts the arrival time of coronal mass ejections (CMEs). The software was trained via detailed analysis of CME features and solar wind parameters using 182 previously observed geo-effective partial-/full-halo CMEs and uses Support Vector Machine (SVM) algorithms to make its predictions, which can be made within minutes of providing the necessary input parameters of a CME.
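
    The recipe the record describes (engineer features of the CME and the ambient solar wind, train a support vector machine on past geo-effective events, then evaluate the fitted model for a new event in seconds) can be sketched as follows. The three features and all values are hypothetical stand-ins, not the CAT-PUMA feature set.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(7)

    # Hypothetical training set: one row per past geo-effective CME.
    # Columns: CME speed (km/s), angular width (deg), ambient solar-wind speed (km/s).
    n = 180
    X = np.column_stack([
        rng.uniform(400, 2500, n),
        rng.uniform(60, 360, n),
        rng.uniform(300, 700, n),
    ])
    # Synthetic transit times (hours): faster CMEs and faster wind arrive sooner.
    y = 90 - 0.02 * X[:, 0] - 0.03 * X[:, 2] + rng.normal(0, 4, n)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
    model.fit(X, y)

    # Predicting a new event only takes evaluating the fitted model.
    new_cme = [[1200.0, 240.0, 450.0]]           # hypothetical input parameters
    print("predicted transit time: %.1f hours" % model.predict(new_cme)[0])
    ```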

  12. Accurate hydrocarbon estimates attained with radioactive isotope

    International Nuclear Information System (INIS)

    Hubbard, G.

    1983-01-01

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample

  13. Fast, Accurate Memory Architecture Simulation Technique Using Memory Access Characteristics

    OpenAIRE

    小野, 貴継; 井上, 弘士; 村上, 和彰

    2007-01-01

    This paper proposes a fast and accurate memory architecture simulation technique. To design memory architecture, the first steps commonly involve using trace-driven simulation. However, expanding the design space makes the evaluation time increase. A fast simulation is achieved by a trace size reduction, but it reduces the simulation accuracy. Our approach can reduce the simulation time while maintaining the accuracy of the simulation results. In order to evaluate validity of proposed techniq...

  14. Predictive modelling: parents’ decision making to use online child health information to increase their understanding and/or diagnose or treat their child’s health

    Directory of Open Access Journals (Sweden)

    Walsh Anne M

    2012-12-01

    Background: The quantum increases in home Internet access and available online health information with limited control over information quality highlight the necessity of exploring decision making processes in accessing and using online information, specifically in relation to children who do not make their own health decisions. The aim of this study was to understand the processes explaining parents' decisions to use online health information for child health care. Methods: Parents (N = 391) completed an initial questionnaire assessing the theory of planned behaviour constructs of attitude, subjective norm, and perceived behavioural control, as well as perceived risk, group norm, and additional demographic factors. Two months later, 187 parents completed a follow-up questionnaire assessing their decisions to use online information for their child's health care, specifically to 1) diagnose and/or treat their child's suspected medical condition/illness and 2) increase understanding about a diagnosis or treatment recommended by a health professional. Results: Hierarchical multiple regression showed that, for both behaviours, attitude, subjective norm, perceived behavioural control, (less) perceived risk, group norm, and (non-medical) background were the significant predictors of intention. For parents' use of online child health information, for both behaviours, intention was the sole significant predictor of behaviour. The findings explain 77% of the variance in parents' intention to treat/diagnose a child health problem and 74% of the variance in their intentions to increase their understanding about child health concerns. Conclusions: Understanding parents' socio-cognitive processes that guide their use of online information for child health care is important given the increase in Internet usage and the sometimes-questionable quality of health information provided online. Findings highlight parents' thirst for information; there is an ...
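
    Hierarchical multiple regression of this kind enters predictor blocks in sequence (for instance demographics, then the theory of planned behaviour constructs, then perceived risk and group norm) and reports the increment in explained variance at each step. A hedged sketch with synthetic data follows; the variable names and weights are illustrative, not the study's actual items.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(42)
    n = 391

    # Synthetic predictor blocks standing in for the study's measures.
    demographics = rng.normal(size=(n, 2))       # e.g., age, medical background
    tpb = rng.normal(size=(n, 3))                # attitude, subjective norm, PBC
    extra = rng.normal(size=(n, 2))              # perceived risk, group norm

    # Synthetic intention scores driven mostly by the TPB block.
    intention = (0.2 * demographics[:, 0] + tpb @ np.array([0.6, 0.5, 0.4])
                 + 0.3 * extra[:, 1] + rng.normal(scale=0.8, size=n))

    blocks = [("demographics", demographics),
              ("TPB constructs", tpb),
              ("risk / group norm", extra)]
    X, prev_r2 = np.empty((n, 0)), 0.0
    for name, block in blocks:
        X = np.hstack([X, block])                # enter the next block of predictors
        r2 = r2_score(intention, LinearRegression().fit(X, intention).predict(X))
        print(f"after adding {name:17s}: R^2 = {r2:.3f} (delta = {r2 - prev_r2:.3f})")
        prev_r2 = r2
    ```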

  15. Predicting the variation in Echinogammarus marinus at its southernmost limits under global warming scenarios: can the sex-ratio make a difference?

    Science.gov (United States)

    Guerra, Alexandra; Leite, Nuno; Marques, João Carlos; Ford, Alex T; Martins, Irene

    2014-01-01

    Understanding the environmental parameters that constrain the distribution of a species at its latitudinal extremes is critical for predicting how ecosystems react to climate change. Our first aim was to predict the variation in the amphipod populations of Echinogammarus marinus from the southernmost limit of its distribution under global warming scenarios. Our second aim was to test whether sex-ratio fluctuations - a mechanism frequently displayed by amphipods - respond to the variations in populations under altered climate conditions. To achieve these aims, scenarios were run with a validated model of E. marinus populations. Simulations were divided into: phase I - simulation of the effect of climate change on amphipod populations, and phase II - simulation of the effect of climate change on populations with male and female proportions. In both phases, temperature (T), salinity (S) and temperature and salinity (T-S) were tested. Results showed that E. marinus populations are highly sensitive to increases in temperature (>2 °C), which has adverse effects on amphipod recruitment and growth. Results from the climate change scenarios coupled with the sex-ratio fluctuations depended largely on the degree of female bias within population. Temperature increase of 2 °C had less impact on female-biased populations, particularly when conjugated with increases in salinity. Male-biased populations were highly sensitive to any variation in temperature and/or salinity; these populations exhibited a long-term decline in density. Simulations in which temperature increased more than 4 °C led to a continuous decline in the E. marinus population. According to this work, E. marinus populations at their southernmost limit are vulnerable to global warming. We anticipate that in Europe, temperature increases of 2 °C will incite a withdrawal of the population of 5°N from the amphipod species located at southernmost geographical borders. This effect is discussed in relation to the

  16. Can Measured Synergy Excitations Accurately Construct Unmeasured Muscle Excitations?

    Science.gov (United States)

    Bianco, Nicholas A; Patten, Carolynn; Fregly, Benjamin J

    2018-01-01

    Accurate prediction of muscle and joint contact forces during human movement could improve treatment planning for disorders such as osteoarthritis, stroke, Parkinson's disease, and cerebral palsy. Recent studies suggest that muscle synergies, a low-dimensional representation of a large set of muscle electromyographic (EMG) signals (henceforth called "muscle excitations"), may reduce the redundancy of muscle excitation solutions predicted by optimization methods. This study explores the feasibility of using muscle synergy information extracted from eight muscle EMG signals (henceforth called "included" muscle excitations) to accurately construct muscle excitations from up to 16 additional EMG signals (henceforth called "excluded" muscle excitations). Using treadmill walking data collected at multiple speeds from two subjects (one healthy, one poststroke), we performed muscle synergy analysis on all possible subsets of eight included muscle excitations and evaluated how well the calculated time-varying synergy excitations could construct the remaining excluded muscle excitations (henceforth called "synergy extrapolation"). We found that some, but not all, eight-muscle subsets yielded synergy excitations that achieved >90% extrapolation variance accounted for (VAF). Using the top 10% of subsets, we developed muscle selection heuristics to identify included muscle combinations whose synergy excitations achieved high extrapolation accuracy. For 3, 4, and 5 synergies, these heuristics yielded extrapolation VAF values approximately 5% lower than corresponding reconstruction VAF values for each associated eight-muscle subset. These results suggest that synergy excitations obtained from experimentally measured muscle excitations can accurately construct unmeasured muscle excitations, which could help limit muscle excitations predicted by muscle force optimizations.
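
    Synergy extrapolation as described here has two steps: factor the included muscles' excitations into a small set of time-varying synergy excitations, then regress each excluded muscle's excitation onto those synergy excitations and score the reconstruction with VAF. A minimal sketch on simulated (not gait EMG) data, using non-negative matrix factorization from scikit-learn and non-negative least squares from SciPy:

    ```python
    import numpy as np
    from scipy.optimize import nnls
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(3)
    n_muscles, n_time, n_synergies = 24, 500, 4

    # Simulate ground-truth synergy excitations and non-negative muscle weights.
    H_true = np.abs(rng.normal(size=(n_synergies, n_time)))
    W_true = np.abs(rng.normal(size=(n_muscles, n_synergies)))
    E = W_true @ H_true + 0.05 * np.abs(rng.normal(size=(n_muscles, n_time)))

    included, excluded = E[:8], E[8:]    # 8 "measured" vs 16 "unmeasured" muscles

    # Step 1: extract time-varying synergy excitations from the included muscles only.
    nmf = NMF(n_components=n_synergies, init="nndsvda", max_iter=2000, random_state=0)
    nmf.fit(included)
    H = nmf.components_                  # synergy excitations, (n_synergies x n_time)

    # Step 2: reconstruct each excluded muscle from H via non-negative least squares.
    def vaf(actual, fitted):
        return 1.0 - np.sum((actual - fitted) ** 2) / np.sum(actual ** 2)

    vafs = [vaf(e, H.T @ nnls(H.T, e)[0]) for e in excluded]
    print("extrapolation VAF per excluded muscle:", np.round(vafs, 3))
    ```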

  17. An analysis from the Quality Outcomes Database, Part 1. Disability, quality of life, and pain outcomes following lumbar spine surgery: predicting likely individual patient outcomes for shared decision-making.

    Science.gov (United States)

    McGirt, Matthew J; Bydon, Mohamad; Archer, Kristin R; Devin, Clinton J; Chotai, Silky; Parker, Scott L; Nian, Hui; Harrell, Frank E; Speroff, Theodore; Dittus, Robert S; Philips, Sharon E; Shaffrey, Christopher I; Foley, Kevin T; Asher, Anthony L

    2017-10-01

    OBJECTIVE Quality and outcomes registry platforms lie at the center of many emerging evidence-driven reform models. Specifically, clinical registry data are progressively informing health care decision-making. In this analysis, the authors used data from a national prospective outcomes registry (the Quality Outcomes Database) to develop a predictive model for 12-month postoperative pain, disability, and quality of life (QOL) in patients undergoing elective lumbar spine surgery. METHODS Included in this analysis were 7618 patients who had completed 12 months of follow-up. The authors prospectively assessed baseline and 12-month patient-reported outcomes (PROs) via telephone interviews. The PROs assessed were those ascertained using the Oswestry Disability Index (ODI), EQ-5D, and numeric rating scale (NRS) for back pain (BP) and leg pain (LP). Variables analyzed for the predictive model included age, gender, body mass index, race, education level, history of prior surgery, smoking status, comorbid conditions, American Society of Anesthesiologists (ASA) score, symptom duration, indication for surgery, number of levels surgically treated, history of fusion surgery, surgical approach, receipt of workers' compensation, liability insurance, insurance status, and ambulatory ability. To create a predictive model, each 12-month PRO was treated as an ordinal dependent variable and a separate proportional-odds ordinal logistic regression model was fitted for each PRO. RESULTS There was a significant improvement in all PROs at 12 months. The most important predictors of disability, QOL, and pain outcomes following lumbar spine surgery were employment status, baseline NRS-BP scores, psychological distress, baseline ODI scores, level of education, workers' compensation status, symptom duration, race, baseline NRS-LP scores, ASA score, age, predominant symptom, smoking status, and insurance status. The prediction discrimination of the 4 separate novel predictive models was good, with a c-index of 0.69 for ODI, 0.69 for EQ-5D ...
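
    The discrimination statistic quoted for these models, the c-index, is the proportion of usable patient pairs in which the patient with the better observed outcome also received the better predicted score. A small self-contained illustration with synthetic scores (not registry data):

    ```python
    from itertools import combinations

    def c_index(observed, predicted):
        """Concordance between observed ordinal outcomes and predicted scores.
        Pairs tied on the observed outcome are skipped; prediction ties count 0.5."""
        concordant, usable = 0.0, 0
        for i, j in combinations(range(len(observed)), 2):
            if observed[i] == observed[j]:
                continue
            usable += 1
            better_obs = i if observed[i] > observed[j] else j
            if predicted[i] == predicted[j]:
                concordant += 0.5
            elif (i if predicted[i] > predicted[j] else j) == better_obs:
                concordant += 1.0
        return concordant / usable

    # Hypothetical 12-month improvement categories (higher = better) and model scores.
    observed = [2, 0, 1, 3, 1, 2, 0, 3]
    predicted = [0.7, 0.2, 0.4, 0.9, 0.5, 0.45, 0.3, 0.8]
    print("c-index: %.2f" % c_index(observed, predicted))
    ```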

  18. Adaptive vehicle motion estimation and prediction

    Science.gov (United States)

    Zhao, Liang; Thorpe, Chuck E.

    1999-01-01

    Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.
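
    The AIMM tracker runs several motion models in parallel and adapts their mixing probabilities on the fly; a full interacting-multiple-model filter is beyond a short sketch, but its basic building block, a constant-velocity Kalman filter over noisy position measurements, can be outlined as below (synthetic data and illustrative noise levels, not the paper's tracker).

    ```python
    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity state transition
    H = np.array([[1.0, 0.0]])               # only position is observed
    Q = np.diag([1e-3, 1e-2])                # process noise (model uncertainty)
    R = np.array([[0.25]])                   # measurement noise variance

    rng = np.random.default_rng(0)
    true_pos = np.cumsum(np.full(100, 2.0 * dt))         # car moving at 2 m/s
    z = true_pos + rng.normal(0, 0.5, size=100)          # noisy position measurements

    x = np.array([[0.0], [0.0]])             # state estimate: [position, velocity]
    P = np.eye(2)
    for zk in z:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        innovation = zk - (H @ x)[0, 0]
        S = (H @ P @ H.T + R)[0, 0]
        K = (P @ H.T) / S                    # Kalman gain, shape (2, 1)
        x = x + K * innovation
        P = (np.eye(2) - K @ H) @ P

    print("final estimated velocity: %.2f m/s" % x[1, 0])
    ```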

  19. Earthquake prediction by Kina Method

    International Nuclear Information System (INIS)

    Kianoosh, H.; Keypour, H.; Naderzadeh, A.; Motlagh, H.F.

    2005-01-01

    Earthquake prediction has been one of the earliest desires of mankind. Scientists have worked hard to predict earthquakes for a long time. The results of these efforts can generally be divided into two methods of prediction: 1) the statistical method, and 2) the empirical method. In the first method, earthquakes are predicted using statistics and probabilities, while the second method utilizes a variety of precursors for earthquake prediction. The latter method is time consuming and more costly. However, the results of neither method have fully satisfied mankind so far. In this paper a new method entitled the 'Kiana Method' is introduced for earthquake prediction. This method offers more accurate results at lower cost compared to other conventional methods. In the Kiana method the electrical and magnetic precursors are measured in an area. Then, the time and the magnitude of a future earthquake are calculated using electrical, and in particular, electrical capacitor formulas. In this method, daily measurements of electrical resistance in an area establish whether or not the area is susceptible to a future earthquake. If the result shows a positive sign, then the occurrence time and the magnitude can be estimated from the measured quantities. This paper explains the procedure and details of this prediction method. (authors)

  20. What makes a successful volunteer Expert Patients Programme tutor? Factors predicting satisfaction, productivity and intention to continue tutoring of a new public health workforce in the United Kingdom.

    Science.gov (United States)

    Macdonald, Wendy; Kontopantelis, Evangelos; Bower, Peter; Kennedy, Anne; Rogers, Anne; Reeves, David

    2009-04-01

    Better management of chronic conditions is a challenge for public health policy. The Expert Patients Programme was introduced into the United Kingdom to improve self-care in people with long-term conditions. To deliver self-care courses, the programme relies on the recruitment and continued commitment to delivering the courses of volunteer lay tutors who have long-term conditions. Ensuring the tutor workforce is productive, satisfied in their role and retained long-term is central to the viability of the programme. This exploratory study aimed to determine what factors predict productivity, intention to continue tutoring, and satisfaction in a sample of volunteer tutors from the Expert Patients Programme. A cross-sectional survey of 895 tutors was carried out and 518 (58%) responded. The questionnaire was designed to describe the characteristics, productivity, intention to continue tutoring, and satisfaction of tutors. Multiple linear regression analyses were used to examine the determinants of productivity, intention to continue tutoring, and satisfaction, such as patient demographics, attitudes, physical and mental health, mastery and self-esteem. Attitudes relating to personal goals, and better health were significant predictors of satisfaction with the tutor role. Only a small proportion of the variance in productivity was accounted for, and tutors were more likely to be productive when they were single, homeowners, car owners, and had lower scores on the depression scale. Overall satisfaction and personal goals were predictors of intention to continue tutoring. Demographic factors, health measures and attitudes each predicted different aspects of the experience of work conducted by the volunteer tutors. The results should prove useful for planning interventions to enhance the success of this new workforce initiative. Attempts to increase participation in courses by people from deprived backgrounds are likely to be enhanced if tutors come from similar

  1. Model : making

    OpenAIRE

    Bottle, Neil

    2013-01-01

    The Model : making exhibition was curated by Brian Kennedy in collaboration with Allies & Morrison in September 2013. For the London Design Festival, the Model : making exhibition looked at the increased use of new technologies by both craft-makers and architectural model makers. In both practices traditional ways of making by hand are increasingly being combined with the latest technologies of digital imaging, laser cutting, CNC machining and 3D printing. This exhibition focussed on ...

  2. Making the most of sparse clinical data by using a predictive-model-based analysis, illustrated with a stavudine pharmacokinetic study.

    Science.gov (United States)

    Zhang, L; Price, R; Aweeka, F; Bellibas, S E; Sheiner, L B

    2001-02-01

    A small-scale clinical investigation was done to quantify the penetration of stavudine (D4T) into cerebrospinal fluid (CSF). A model-based analysis estimates the steady-state ratio of AUCs of CSF and plasma concentrations (R(AUC)) to be 0.270, and the mean residence time of drug in the CSF to be 7.04 h. The analysis illustrates the advantages of a causal (scientific, predictive) model-based approach to analysis over a noncausal (empirical, descriptive) approach when the data, as here, demonstrate certain problematic features commonly encountered in clinical data, namely (i) few subjects, (ii) sparse sampling, (iii) repeated measures, (iv) imbalance, and (v) individual design variation. These features generally require special attention in data analysis. The causal-model-based analysis deals with features (i) and (ii), both of which reduce efficiency, by combining data from different studies and adding subject-matter prior information. It deals with features (iii)--(v), all of which prevent 'averaging' individual data points directly, first, by adjusting in the model for interindividual data differences due to design differences, secondly, by explicitly differentiating between interpatient, interoccasion, and measurement error variation, and lastly, by defining a scientifically meaningful estimand (R(AUC)) that is independent of design.

  3. Exploring the knowledge behind predictions in everyday cognition: an iterated learning study.

    Science.gov (United States)

    Stephens, Rachel G; Dunn, John C; Rao, Li-Lin; Li, Shu

    2015-10-01

    Making accurate predictions about events is an important but difficult task. Recent work suggests that people are adept at this task, making predictions that reflect surprisingly accurate knowledge of the distributions of real quantities. Across three experiments, we used an iterated learning procedure to explore the basis of this knowledge: to what extent is domain experience critical to accurate predictions and how accurate are people when faced with unfamiliar domains? In Experiment 1, two groups of participants, one resident in Australia, the other in China, predicted the values of quantities familiar to both (movie run-times), unfamiliar to both (the lengths of Pharaoh reigns), and familiar to one but unfamiliar to the other (cake baking durations and the lengths of Beijing bus routes). While predictions from both groups were reasonably accurate overall, predictions were inaccurate in the selectively unfamiliar domains and, surprisingly, predictions by the China-resident group were also inaccurate for a highly familiar domain: local bus route lengths. Focusing on bus routes, two follow-up experiments with Australia-resident groups clarified the knowledge and strategies that people draw upon, plus important determinants of accurate predictions. For unfamiliar domains, people appear to rely on extrapolating from (not simply directly applying) related knowledge. However, we show that people's predictions are subject to two sources of error: in the estimation of quantities in a familiar domain and extension to plausible values in an unfamiliar domain. We propose that the key to successful predictions is not simply domain experience itself, but explicit experience of relevant quantities.

  4. Depression, realism, and the overconfidence effect: are the sadder wiser when predicting future actions and events?

    Science.gov (United States)

    Dunning, D; Story, A L

    1991-10-01

    Do depressed individuals make more realistic judgments than their nondepressed peers in real world settings? Depressed and nondepressed Ss in 2 studies were asked to make predictions about future actions and outcomes that might occur in their personal academic and social worlds. Both groups of Ss displayed overconfidence, that is, they overestimated the likelihood that their predictions would prove to be accurate. Of key importance, depressed Ss were less accurate in their predictions, and thus more overconfident, than their nondepressed counterparts. These differences arose because depressed Ss (a) were more likely to predict the occurrence of low base-rate events and (b) were less likely to be correct when they made optimistic predictions (i.e., stated that positive events would occur or that aversive outcomes would not). Discussion focuses on implications of these findings for the depressive realism hypothesis.

  5. Steel making

    CERN Document Server

    Chakrabarti, A K

    2014-01-01

    "Steel Making" is designed to give students a strong grounding in the theory and state-of-the-art practice of production of steels. This book is primarily focused to meet the needs of undergraduate metallurgical students and candidates for associate membership examinations of professional bodies (AMIIM, AMIE). Besides, for all engineering professionals working in steel plants who need to understand the basic principles of steel making, the text provides a sound introduction to the subject.Beginning with a brief introduction to the historical perspective and current status of steel making together with the reasons for obsolescence of Bessemer converter and open hearth processes, the book moves on to: elaborate the physiochemical principles involved in steel making; explain the operational principles and practices of the modern processes of primary steel making (LD converter, Q-BOP process, and electric furnace process); provide a summary of the developments in secondary refining of steels; discuss principles a...

  6. Equipment upgrade - Accurate positioning of ion chambers

    International Nuclear Information System (INIS)

    Doane, Harry J.; Nelson, George W.

    1990-01-01

    Five adjustable clamps were made to firmly support and accurately position the ion chambers that provide signals to the power channels for the University of Arizona TRIGA reactor. The design requirements, fabrication procedure, and installation are described.

  7. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  8. Rapid identification of sequences for orphan enzymes to power accurate protein annotation.

    Directory of Open Access Journals (Sweden)

    Kevin R Ramkissoon

    The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the "back catalog" of enzymology--"orphan enzymes," those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme "back catalog" is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology's "back catalog" another powerful tool to drive accurate genome annotation.

  9. Rapid Identification of Sequences for Orphan Enzymes to Power Accurate Protein Annotation

    Science.gov (United States)

    Ojha, Sunil; Watson, Douglas S.; Bomar, Martha G.; Galande, Amit K.; Shearer, Alexander G.

    2013-01-01

    The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the “back catalog” of enzymology – “orphan enzymes,” those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme “back catalog” is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology’s “back catalog” another powerful tool to drive accurate genome annotation. PMID:24386392

  10. Make Sense?

    DEFF Research Database (Denmark)

    Gyrd-Jones, Richard; Törmälä, Minna

    Purpose: An important part of how we sense a brand is how we make sense of a brand. Sense-making is naturally strongly connected to how we cognize about the brand. But sense-making is concerned with multiple forms of knowledge that arise from our interpretation of the brand-related stimuli......: Declarative, episodic, procedural and sensory. Knowledge is given meaning through mental association (Keller, 1993) and / or symbolic interaction (Blumer, 1969). These meanings are centrally related to individuals’ sense of identity or “identity needs” (Wallpach & Woodside, 2009). The way individuals make...... sense of brands is related to who people think they are in their context and this shapes what they enact and how they interpret the brand (Currie & Brown, 2003; Weick, Sutcliffe, & Obstfeld, 2005; Weick, 1993). Our subject of interest in this paper is how stakeholders interpret and ascribe meaning...

  11. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.
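
    The comparison the review draws, a conventional statistical model versus an artificial neural network on the same risk-prediction task, is straightforward to reproduce in outline. A hedged sketch on a synthetic dataset (scikit-learn; not any of the reviewed studies' data):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic risk-prediction task: 12 predictors, binary outcome.
    X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                               random_state=0)

    models = {
        "logistic regression (statistical)": make_pipeline(
            StandardScaler(), LogisticRegression(max_iter=1000)),
        "multilayer perceptron (ANN)": make_pipeline(
            StandardScaler(), MLPClassifier(hidden_layer_sizes=(32, 16),
                                            max_iter=2000, random_state=0)),
    }

    # Validate both with the same cross-validation, scored by discrimination (ROC AUC).
    for name, model in models.items():
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
        print("%s: AUC = %.3f +/- %.3f" % (name, auc.mean(), auc.std()))
    ```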

  12. Decision Making in Action

    Science.gov (United States)

    Orasanu, Judith; Statler, Irving C. (Technical Monitor)

    1994-01-01

    The importance of decision-making to safety in complex, dynamic environments like mission control centers and offshore installations has been well established. NASA-ARC has a program of research dedicated to fostering safe and effective decision-making in the manned spaceflight environment. Because access to spaceflight is limited, environments with similar characteristics, including aviation and nuclear power plants, serve as analogs from which space-relevant data can be gathered and theories developed. Analyses of aviation accidents cite crew judgement and decision making as causes or contributing factors in over half of all accidents. A similar observation has been made in nuclear power plants. Yet laboratory research on decision making has not proven especially helpful in improving the quality of decisions in these kinds of environments. One reason is that the traditional, analytic decision models are inappropriate to multidimensional, high-risk environments, and do not accurately describe what expert human decision makers do when they make decisions that have consequences. A new model of dynamic, naturalistic decision making is offered that may prove useful for improving decision making in complex, isolated, confined and high-risk environments. Based on analyses of crew performance in full-mission simulators and accident reports, features that define effective decision strategies in abnormal or emergency situations have been identified. These include accurate situation assessment (including time and risk assessment), appreciation of the complexity of the problem, sensitivity to constraints on the decision, timeliness of the response, and use of adequate information. More effective crews also manage their workload to provide themselves with time and resources to make good decisions. In brief, good decisions are appropriate to the demands of the situation. Effective crew decision making and overall performance are mediated by crew communication. Communication

  13. Predicting RNA Structure Using Mutual Information

    DEFF Research Database (Denmark)

    Freyhult, E.; Moulton, V.; Gardner, P. P.

    2005-01-01

    , to display and predict conserved RNA secondary structure (including pseudoknots) from an alignment. Results: We show that MIfold can be used to predict simple pseudoknots, and that the performance can be adjusted to make it either more sensitive or more selective. We also demonstrate that the overall...... package. Conclusion: MIfold provides a useful supplementary tool to programs such as RNA Structure Logo, RNAalifold and COVE, and should be useful for automatically generating structural predictions for databases such as Rfam. Availability: MIfold is freely available from http......Background: With the ever-increasing number of sequenced RNAs and the establishment of new RNA databases, such as the Comparative RNA Web Site and Rfam, there is a growing need for accurately and automatically predicting RNA structures from multiple alignments. Since RNA secondary structure...
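
    The core quantity behind MIfold, the mutual information between two columns of an RNA alignment, rewards column pairs that co-vary the way base-paired positions do (G-C in some sequences, A-U in others). A self-contained toy calculation (illustrative alignment, not Rfam data):

    ```python
    from collections import Counter
    from itertools import combinations
    from math import log2

    def mutual_information(col_i, col_j):
        """Mutual information between two alignment columns, in bits."""
        n = len(col_i)
        p_i, p_j = Counter(col_i), Counter(col_j)
        p_ij = Counter(zip(col_i, col_j))
        mi = 0.0
        for (a, b), count in p_ij.items():
            p_ab = count / n
            mi += p_ab * log2(p_ab / ((p_i[a] / n) * (p_j[b] / n)))
        return mi

    # Toy alignment: columns 0 and 3 co-vary like a base pair, columns 1-2 are conserved.
    alignment = ["GACC", "GACC", "AACU", "AACU", "CACG", "UACA"]
    columns = list(zip(*alignment))
    for i, j in combinations(range(len(columns)), 2):
        print(f"MI(col {i}, col {j}) = {mutual_information(columns[i], columns[j]):.2f} bits")
    ```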

  14. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  15. Prediction of Quadcopter State through Multi-Microphone Side-Channel Fusion

    NARCIS (Netherlands)

    Koops, Hendrik Vincent; Garg, Kashish; Kim, Munsung; Li, Jonathan; Volk, Anja; Franchetti, Franz

    Improving trust in the state of Cyber-Physical Systems becomes increasingly important as more tasks become autonomous. We present a multi-microphone machine learning fusion approach to accurately predict complex states of a quadcopter drone in flight from the sound it makes using audio content

  16. Prediction of sand production onset in petroleum reservoirs using a reliable classification approach

    Directory of Open Access Journals (Sweden)

    Farhad Gharagheizi

    2017-06-01

    It is shown that the developed model can accurately predict sand production in a real field. The results of this study indicate that LSSVM modeling can effectively help completion designers make an on-time sand control plan with the least deterioration of production.
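
    A least-squares support vector machine (LSSVM) replaces the standard SVM's inequality constraints with equalities, so training reduces to solving a single linear system. A compact NumPy sketch of the usual dual formulation on synthetic two-feature "sanding / no sanding" data (illustrative features only, not the paper's field data):

    ```python
    import numpy as np

    def rbf_kernel(A, B, gamma=0.5):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def lssvm_train(X, y, reg=10.0, gamma=0.5):
        """Solve the LS-SVM dual linear system for labels y in {-1, +1}."""
        n = len(y)
        Omega = np.outer(y, y) * rbf_kernel(X, X, gamma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:], A[1:, 0] = y, y
        A[1:, 1:] = Omega + np.eye(n) / reg
        rhs = np.concatenate([[0.0], np.ones(n)])
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]                    # bias b and dual weights alpha

    def lssvm_predict(X_train, y, b, alpha, X_new, gamma=0.5):
        K = rbf_kernel(X_new, X_train, gamma)
        return np.sign(K @ (alpha * y) + b)

    # Synthetic two-feature data standing in for well and rock parameters.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1, 0.6, (40, 2)), rng.normal(1, 0.6, (40, 2))])
    y = np.concatenate([-np.ones(40), np.ones(40)])   # -1: no sanding, +1: sanding

    b, alpha = lssvm_train(X, y)
    pred = lssvm_predict(X, y, b, alpha, X)
    print("training accuracy:", (pred == y).mean())
    ```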

  17. Predictive error dependencies when using pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    super parameters), and that the structural errors caused by using pilot points and super parameters to parameterize the highly heterogeneous log-transmissivity field can be significant. For the test case much effort is put into studying how the calibrated model's ability to make accurate predictions...

  18. Neuroeconomics: cross-currents in research on decision-making.

    Science.gov (United States)

    Sanfey, Alan G; Loewenstein, George; McClure, Samuel M; Cohen, Jonathan D

    2006-03-01

    Despite substantial advances, the question of how we make decisions and judgments continues to pose important challenges for scientific research. Historically, different disciplines have approached this problem using different techniques and assumptions, with few unifying efforts made. However, the field of neuroeconomics has recently emerged as an inter-disciplinary effort to bridge this gap. Research in neuroscience and psychology has begun to investigate neural bases of decision predictability and value, central parameters in the economic theory of expected utility. Economics, in turn, is being increasingly influenced by a multiple-systems approach to decision-making, a perspective strongly rooted in psychology and neuroscience. The integration of these disparate theoretical approaches and methodologies offers exciting potential for the construction of more accurate models of decision-making.

  19. Decision making.

    Science.gov (United States)

    Chambers, David W

    2011-01-01

    A decision is a commitment of resources under conditions of risk in expectation of the best future outcome. The smart decision is always the strategy with the best overall expected value-the best combination of facts and values. Some of the special circumstances involved in decision making are discussed, including decisions where there are multiple goals, those where more than one person is involved in making the decision, using trigger points, framing decisions correctly, commitments to lost causes, and expert decision makers. A complex example of deciding about removal of asymptomatic third molars, with and without an EBD search, is discussed.

  20. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    Science.gov (United States)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study, the age was predicted with a mean error of ~ 1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology has resulted in a prediction of the sample age far more accurate than any report in the literature.
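
    The classification half of this approach, linear discriminant analysis over preprocessed Raman spectra with cross-validation into 2-day age classes, can be outlined as follows (synthetic spectra with an invented "spoilage" band, purely illustrative):

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    ages = np.arange(0, 16, 2)               # storage-age classes in days
    n_per_class, n_points = 18, 300
    grid = np.arange(n_points)

    spectra, labels = [], []
    base = np.exp(-((grid - 120) ** 2) / 400.0)                      # fixed band
    for age in ages:
        spoil = (age / 14.0) * np.exp(-((grid - 200) ** 2) / 300.0)  # grows with age
        for _ in range(n_per_class):
            spectra.append(base + spoil + rng.normal(0, 0.05, n_points))
            labels.append(age)

    X, y = np.array(spectra), np.array(labels)
    lda = LinearDiscriminantAnalysis()
    scores = cross_val_score(lda, X, y, cv=6)
    print("cross-validated classification accuracy: %.2f" % scores.mean())
    ```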

  1. Responsive Decision-Making

    DEFF Research Database (Denmark)

    Pedersen, Carsten Lund; Andersen, Torben Juul

    , the aim of this study is to gain deeper insights into the complex and multifaceted decision processes that take place in large complex organizations operating in dynamic high-velocity markets. It is proposed that the ability to obtain faster, more accurate and updated insights about ongoing environmental......Strategic decision making remains a focal point in the strategy field, but despite decades of rich conceptual and empirical research we still seem distant from a level of understanding that can guide corporate practices effectively under turbulent and unpredictable environmental conditions. Hence...

  2. Making Connections

    Science.gov (United States)

    Pien, Cheng Lu; Dongsheng, Zhao

    2011-01-01

    Effective teaching includes enabling learners to make connections within mathematics. It is easy to accord with this statement, but how often is it a reality in the mathematics classroom? This article describes an approach in "connecting equivalent" fractions and whole number operations. The authors illustrate how a teacher can combine a common…

  3. More accurate picture of human body organs

    International Nuclear Information System (INIS)

    Kolar, J.

    1985-01-01

    Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they allow a more accurate image of human body organs to be obtained. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)

  4. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Background: Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results: We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions: Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.

  5. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

    Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency

  6. Accurate activity recognition in a home setting

    NARCIS (Netherlands)

    van Kasteren, T.; Noulas, A.; Englebienne, G.; Kröse, B.

    2008-01-01

    A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its

  7. Highly accurate surface maps from profilometer measurements

    Science.gov (United States)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

    Many aspheres and free-form optical surfaces are measured using a single line trace profilometer, which is limiting because accurate 3D corrections are not possible with the single trace. We show a method to produce an accurate, fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low form error only, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and to a small effect, choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.
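
    To make the fitting step concrete, here is a minimal sketch (not the authors' algorithm) of recovering low-order Zernike coefficients by least squares from height samples taken along a few rotated radial traces. The trace geometry, the six Noll-indexed terms, and the synthetic data are illustrative assumptions; a realistic reconstruction would use the first 36 terms plus the centering and angular-positioning corrections discussed above.

    ```python
    import numpy as np

    def zernike_basis(rho, theta):
        """First six Noll-indexed Zernike terms: piston, tilts, defocus, astigmatisms."""
        return np.column_stack([
            np.ones_like(rho),                          # Z1: piston
            2.0 * rho * np.cos(theta),                  # Z2: x tilt
            2.0 * rho * np.sin(theta),                  # Z3: y tilt
            np.sqrt(3.0) * (2.0 * rho**2 - 1.0),        # Z4: defocus
            np.sqrt(6.0) * rho**2 * np.sin(2 * theta),  # Z5: oblique astigmatism
            np.sqrt(6.0) * rho**2 * np.cos(2 * theta),  # Z6: vertical astigmatism
        ])

    def fit_traces(angles_deg, radii, heights_per_trace):
        """Stack samples from all traces and solve for coefficients by least squares."""
        rows, obs = [], []
        for angle_deg, heights in zip(angles_deg, heights_per_trace):
            theta = np.full_like(radii, np.deg2rad(angle_deg))
            rows.append(zernike_basis(radii, theta))
            obs.append(heights)
        coeffs, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(obs), rcond=None)
        return coeffs

    # Hypothetical measurement: 6 traces, part rotated 30 degrees between traces.
    angles = [0, 30, 60, 90, 120, 150]
    radii = np.linspace(0.0, 1.0, 50)          # normalized radius along each trace
    true_defocus = 0.02
    heights = [true_defocus * np.sqrt(3.0) * (2 * radii**2 - 1)
               + 1e-4 * np.random.randn(radii.size) for _ in angles]
    print(fit_traces(angles, radii, heights))  # Z4 coefficient recovered near 0.02
    ```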

  8. Exploring Cognitive Relations Between Prediction in Language and Music.

    Science.gov (United States)

    Patel, Aniruddh D; Morgan, Emily

    2017-03-01

    The online processing of both music and language involves making predictions about upcoming material, but the relationship between prediction in these two domains is not well understood. Electrophysiological methods for studying individual differences in prediction in language processing have opened the door to new questions. Specifically, we ask whether individuals with musical training predict upcoming linguistic material more strongly and/or more accurately than non-musicians. We propose two reasons why prediction in these two domains might be linked: (a) Musicians may have greater verbal short-term/working memory; (b) music may specifically reward predictions based on hierarchical structure. We provide suggestions as to how to expand upon recent work on individual differences in language processing to test these hypotheses. Copyright © 2016 Cognitive Science Society, Inc.

  9. Rapid and accurate evaluation of the quality of commercial organic fertilizers using near infrared spectroscopy.

    Science.gov (United States)

    Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing commercial organic fertilizer quality. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions of total organic matter, water-soluble organic nitrogen, pH, and germination index; less accurate results for moisture, total nitrogen, and electrical conductivity; and the least accurate results for water-soluble organic carbon. Our results suggested that the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers.
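
    As a rough illustration of the NIR-PLS idea (not the authors' calibration), the sketch below fits a partial least squares regression to synthetic stand-in "spectra" and reports a cross-validated R². The matrix sizes, number of components, and data are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    n_samples, n_bands = 104, 700                     # e.g. 104 composts, 700 NIR wavelengths
    spectra = rng.normal(size=(n_samples, n_bands))   # stand-in for absorbance spectra
    organic_matter = spectra[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n_samples)  # stand-in property

    pls = PLSRegression(n_components=8)               # component count normally chosen by cross-validation
    predicted = cross_val_predict(pls, spectra, organic_matter, cv=10)
    print(f"cross-validated R^2 = {r2_score(organic_matter, predicted):.2f}")
    ```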

  10. Rapid and accurate evaluation of the quality of commercial organic fertilizers using near infrared spectroscopy.

    Directory of Open Access Journals (Sweden)

    Chang Wang

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing commercial organic fertilizer quality. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions of total organic matter, water-soluble organic nitrogen, pH, and germination index; less accurate results for moisture, total nitrogen, and electrical conductivity; and the least accurate results for water-soluble organic carbon. Our results suggested that the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers.

  11. Anatomically accurate, finite model eye for optical modeling.

    Science.gov (United States)

    Liou, H L; Brennan, N A

    1997-08-01

    There is a need for a schematic eye that models vision accurately under various conditions such as refractive surgical procedures, contact lens and spectacle wear, and near vision. Here we propose a new model eye close to anatomical, biometric, and optical realities. This is a finite model with four aspheric refracting surfaces and a gradient-index lens. It has an equivalent power of 60.35 D and an axial length of 23.95 mm. The new model eye provides spherical aberration values within the limits of empirical results and predicts chromatic aberration for wavelengths between 380 and 750 nm. It provides a model for calculating optical transfer functions and predicting optical performance of the eye.

  12. Can blind persons accurately assess body size from the voice?

    Science.gov (United States)

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. © 2016 The Author(s).

  13. Emotion and decision making.

    Science.gov (United States)

    Lerner, Jennifer S; Li, Ye; Valdesolo, Piercarlo; Kassam, Karim S

    2015-01-03

    A revolution in the science of emotion has emerged in recent decades, with the potential to create a paradigm shift in decision theories. The research reveals that emotions constitute potent, pervasive, predictable, sometimes harmful and sometimes beneficial drivers of decision making. Across different domains, important regularities appear in the mechanisms through which emotions influence judgments and choices. We organize and analyze what has been learned from the past 35 years of work on emotion and decision making. In so doing, we propose the emotion-imbued choice model, which accounts for inputs from traditional rational choice theory and from newer emotion research, synthesizing scientific models.

  14. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  15. Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques

    Science.gov (United States)

    Lee, Hanbong; Malik, Waqar; Jung, Yoon C.

    2016-01-01

    Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we attempt to apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forests, and neural networks are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques can provide the most accurate prediction in terms of root-mean-square errors. We also discuss the operational complexity and uncertainties that make it difficult to predict the taxi times accurately.
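
    A hedged sketch of the kind of comparison described here: linear regression versus a random forest on one-hot-encoded categorical surface features, scored by RMSE. The feature names, synthetic data, and hyperparameters are assumptions, not the study's actual pipeline.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.pipeline import make_pipeline
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(1)
    n = 5000
    flights = pd.DataFrame({
        "concourse": rng.choice(list("ABCDE"), n),              # hypothetical feature names
        "runway": rng.choice(["18C", "18L", "36R"], n),
        "weight_class": rng.choice(["Small", "Large", "Heavy"], n),
        "queue_length": rng.integers(0, 20, n),                  # simple traffic-flow proxy
    })
    taxi_out_min = 8 + 0.9 * flights["queue_length"] + rng.normal(0, 2, n)  # synthetic target

    encode = ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["concourse", "runway", "weight_class"])],
        remainder="passthrough",
    )
    X_train, X_test, y_train, y_test = train_test_split(flights, taxi_out_min, random_state=0)

    for model in (LinearRegression(), RandomForestRegressor(n_estimators=200, random_state=0)):
        pipeline = make_pipeline(encode, model).fit(X_train, y_train)
        rmse = np.sqrt(mean_squared_error(y_test, pipeline.predict(X_test)))
        print(f"{type(model).__name__}: RMSE = {rmse:.2f} min")
    ```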

  16. KFM: a homemade yet accurate and dependable fallout meter

    International Nuclear Information System (INIS)

    Kearny, C.H.; Barnes, P.R.; Chester, C.V.; Cortner, M.W.

    1978-01-01

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The step-by-step illustrated instructions for making and using a KFM are presented. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM.

  17. The economic value of accurate wind power forecasting to utilities

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S J [Rutherford Appleton Lab., Oxfordshire (United Kingdom); Giebel, G; Joensen, A [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

    With increasing penetrations of wind power, the need for accurate forecasting is becoming ever more important. Wind power is by its very nature intermittent. For utility schedulers this presents its own problems, particularly when the penetration of wind power capacity in a grid reaches a significant level (>20%). However, using accurate forecasts of wind power at wind farm sites, schedulers are able to plan the operation of conventional power capacity to accommodate the fluctuating demands of consumers and wind farm output. The results of a study to assess the value of forecasting at several potential wind farm sites in the UK and in the US state of Iowa using the Reading University/Rutherford Appleton Laboratory National Grid Model (NGM) are presented. The results are assessed for different types of wind power forecasting, namely: persistence, optimised numerical weather prediction, or perfect forecasting. In particular, it will be shown how the NGM has been used to assess the value of numerical weather prediction forecasts from the Danish Meteorological Institute model, HIRLAM, and the US Nested Grid Model, which have been 'site tailored' by the use of the linearized flow model WAsP and by various Model Output Statistics (MOS) and autoregressive techniques. (au)
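
    For reference, the persistence baseline mentioned above is trivial to implement: the forecast k hours ahead is simply the output observed now. A minimal sketch on a synthetic series (the series, horizon, and units are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    hours = 24 * 30
    # Stand-in wind farm output as a fraction of rated capacity, clipped to [0, 1]
    wind_power = np.clip(0.5 + np.cumsum(rng.normal(0, 0.05, hours)), 0, 1)

    def persistence_forecast(series, horizon):
        """Forecast for time t + horizon is simply the value observed at time t."""
        return series[:-horizon]

    horizon = 6                                   # hours ahead
    forecast = persistence_forecast(wind_power, horizon)
    actual = wind_power[horizon:]
    rmse = np.sqrt(np.mean((forecast - actual) ** 2))
    print(f"{horizon}-hour-ahead persistence RMSE: {rmse:.3f} of rated capacity")
    ```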

  18. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
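
    The beat cue described here has a simple signal-level explanation: the sum of two nearly equal tones is a carrier at the mean frequency whose loudness waxes and wanes |f1 - f2| times per second, so a 0.5 Hz mistuning is heard as one slow beat every two seconds even when the pitch difference itself is far below the listener's discrimination threshold. A small numerical check of that identity (illustrative only):

    ```python
    import numpy as np

    fs = 8000.0
    t = np.arange(0.0, 2.0, 1.0 / fs)
    f1, f2 = 220.0, 220.5                        # two tones mistuned by 0.5 Hz

    two_tones = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    # Product form: slow cosine envelope times a carrier at the mean frequency
    carrier_times_envelope = 2 * np.cos(np.pi * (f1 - f2) * t) * np.sin(np.pi * (f1 + f2) * t)

    print(np.allclose(two_tones, carrier_times_envelope))   # True: identical signals
    print(f"audible beat rate = |f1 - f2| = {abs(f1 - f2)} Hz (one beat every 2 s)")
    ```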

  19. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood, both for routes traveled as well as for sub-routes thereof. InTraTime allows the specification of temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include…
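
    As a toy illustration of mining travel times from position traces (this is not the InTraTime algorithm), one can bucket observed traversals by origin, destination and time-of-day and answer queries with a robust summary such as the median. The trace format and bucketing below are assumptions.

    ```python
    from collections import defaultdict
    from statistics import median

    # Each observation: (person, origin, destination, hour_of_day, travel_seconds)
    observations = [
        ("w1", "dock", "ward-3", 9, 95),
        ("w2", "dock", "ward-3", 9, 110),
        ("w1", "dock", "ward-3", 14, 180),   # slower during afternoon rounds
        ("w3", "ward-3", "lab", 9, 60),
    ]

    history = defaultdict(list)
    for _, origin, dest, hour, secs in observations:
        history[(origin, dest, hour)].append(secs)

    def estimate(origin, dest, hour):
        """Median of matching historical traversals; None if the route is unseen."""
        times = history.get((origin, dest, hour))
        return median(times) if times else None

    print(estimate("dock", "ward-3", 9))   # -> 102.5 seconds
    ```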

  20. On accurate determination of contact angle

    Science.gov (United States)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  1. Software Estimation: Developing an Accurate, Reliable Method

    Science.gov (United States)

    2011-08-01

    …based and size-based estimates is able to accurately plan, launch, and execute on schedule. Bob Sinclair, NAWCWD; Chris Rickets, NAWCWD; Brad Hodgins, NAWCWD.

  2. Accurate multiplicity scaling in isotopically conjugate reactions

    International Nuclear Information System (INIS)

    Golokhvastov, A.I.

    1989-01-01

    The generation of accurate scaling of multiplicity distributions is presented. The distributions of π⁻ mesons (negative particles) and π⁺ mesons in different nucleon-nucleon interactions (PP, NP and NN) are described by the same universal function Ψ(z) and the same energy dependence of the scale parameter, which determines the stretching factor applied to the unit function Ψ(z) to obtain the desired multiplicity distribution. 29 refs.; 6 figs

  3. Accurate SHAPE-directed RNA secondary structure modeling, including pseudoknots.

    Science.gov (United States)

    Hajdin, Christine E; Bellaousov, Stanislav; Huggins, Wayne; Leonard, Christopher W; Mathews, David H; Weeks, Kevin M

    2013-04-02

    A pseudoknot forms in an RNA when nucleotides in a loop pair with a region outside the helices that close the loop. Pseudoknots occur relatively rarely in RNA but are highly overrepresented in functionally critical motifs in large catalytic RNAs, in riboswitches, and in regulatory elements of viruses. Pseudoknots are usually excluded from RNA structure prediction algorithms. When included, these pairings are difficult to model accurately, especially in large RNAs, because allowing this structure dramatically increases the number of possible incorrect folds and because it is difficult to search the fold space for an optimal structure. We have developed a concise secondary structure modeling approach that combines SHAPE (selective 2'-hydroxyl acylation analyzed by primer extension) experimental chemical probing information and a simple, but robust, energy model for the entropic cost of single pseudoknot formation. Structures are predicted with iterative refinement, using a dynamic programming algorithm. This melded experimental and thermodynamic energy function predicted the secondary structures and the pseudoknots for a set of 21 challenging RNAs of known structure ranging in size from 34 to 530 nt. On average, 93% of known base pairs were predicted, and all pseudoknots in well-folded RNAs were identified.
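
    For readers unfamiliar with SHAPE-directed folding, the experimental probing data typically enter the energy model as a per-nucleotide pseudo-free-energy term of the form dG(i) = m * ln(reactivity_i + 1) + b, which penalizes pairing of highly reactive (flexible) nucleotides. The snippet below computes that term; the slope and intercept are placeholder values and may differ from those used in the paper.

    ```python
    import math

    M_SLOPE = 2.6        # kcal/mol, placeholder value
    B_INTERCEPT = -0.8   # kcal/mol, placeholder value

    def shape_pseudo_energy(reactivity):
        """Pseudo-free-energy added when nucleotide i pairs; high reactivity -> larger penalty."""
        return M_SLOPE * math.log(reactivity + 1.0) + B_INTERCEPT

    for r in (0.0, 0.2, 1.0, 3.0):
        print(f"reactivity {r:.1f} -> {shape_pseudo_energy(r):+.2f} kcal/mol")
    ```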

  4. Achieving Target Voriconazole Concentrations More Accurately in Children and Adolescents

    OpenAIRE

    Neely, Michael; Margol, Ashley; Fu, Xiaowei; van Guilder, Michael; Bayard, David; Schumitzky, Alan; Orbach, Regina; Liu, Siyu; Louie, Stan; Hope, William

    2015-01-01

    Despite the documented benefit of voriconazole therapeutic drug monitoring, nonlinear pharmacokinetics make the timing of steady-state trough sampling and appropriate dose adjustments unpredictable by conventional methods. We developed a nonparametric population model with data from 141 previously richly sampled children and adults. We then used it in our multiple-model Bayesian adaptive control algorithm to predict measured concentrations and doses in a separate cohort of 33 pediatric patien...
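
    A highly simplified sketch of the multiple-model idea: keep a discrete set of candidate pharmacokinetic parameter vectors with Bayesian weights, and choose the dose whose weighted predicted trough lies closest to the target. The linear one-compartment model, parameter values, dosing interval and target below are illustrative assumptions (voriconazole kinetics are in fact nonlinear), not the published model.

    ```python
    import numpy as np

    # Candidate (clearance L/h, volume L) support points with current posterior weights
    support = np.array([[4.0, 40.0], [6.0, 50.0], [9.0, 60.0]])
    weights = np.array([0.2, 0.5, 0.3])

    def predicted_trough(dose_mg, cl, v, tau_h=12.0):
        """Steady-state trough for repeated IV boluses in a one-compartment model."""
        ke = cl / v
        return (dose_mg / v) * np.exp(-ke * tau_h) / (1.0 - np.exp(-ke * tau_h))

    def best_dose(target_mg_per_l, candidate_doses):
        """Dose minimizing the weighted squared distance of predicted troughs to the target."""
        costs = []
        for dose in candidate_doses:
            troughs = np.array([predicted_trough(dose, cl, v) for cl, v in support])
            costs.append(np.sum(weights * (troughs - target_mg_per_l) ** 2))
        return candidate_doses[int(np.argmin(costs))]

    print(best_dose(target_mg_per_l=2.0, candidate_doses=np.arange(100, 601, 50)))
    ```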

  5. Making Yugoslavs

    DEFF Research Database (Denmark)

    Nielsen, Christian Axboe

    …By the time Aleksandar was killed by an assassin's bullet five years later, he not only had failed to create a unified Yugoslav nation but his dictatorship had also contributed to an increase in interethnic tensions. In Making Yugoslavs, Christian Axboe Nielsen uses extensive archival research to explain the failure of the dictatorship's program of forced nationalization. Focusing on how ordinary Yugoslavs responded to Aleksandar's nationalization project, the book illuminates an often-ignored era of Yugoslav history whose lessons remain relevant not just for the study of Balkan history but for many…

  6. THE MAKING OF DECISION MAKING

    Directory of Open Access Journals (Sweden)

    Leonardo Yuji Tamura

    2016-04-01

    Quantum Electronics was a Brazilian startup in the 1990s that was acquired by an American equity fund in 2012. They are currently the largest manufacturer of vehicle tracking and infotainment systems. The company was founded by three college friends, who are currently executives at the company: Camilo Santos, Pedro Barbosa and Luana Correa. Edward Hutter was sent by the equity fund to take over the company's finances, but is having trouble making organizational decisions with his colleagues. As a consultant, I was called to help them improve their decision making process and project prioritization. I adapted and deployed our firm's methodology, but, in the end, its adequacy is shown to be very much in question. The author of this case study intends to explore how actual organizational decisions rely on different decision models and their assumptions, as well as to demonstrate that a decision model is neither absolutely good nor bad, as its quality is context dependent.

  7. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    Science.gov (United States)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and polar motion (PM). In this study, a detailed comparison was made of real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method with simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The results show that the proposed method provides better accuracy at different prediction lengths.

  8. Predicting Baseline for Analysis of Electricity Pricing

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T. [Ulsan National Inst. of Science and Technology (Korea, Republic of); Lee, D. [Ulsan National Inst. of Science and Technology (Korea, Republic of); Choi, J. [Ulsan National Inst. of Science and Technology (Korea, Republic of); Spurlock, A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Todd, A. [Lawrence Berkeley National Lab. (LBNL), Berk